\section{Introduction}
\label{uno}
The dynamics of standard and compact astrophysical bodies
can be studied with the aim of unravelling the information contained in the
gravitational wave (GW) signals. Several situations are possible,
such as GWs emitted by coalescing compact binaries, i.e. systems of
neutron stars (NSs) or black holes (BHs) driven into coalescence by the
emission of gravitational radiation (considered in the
inspiral, merger and ring-down phases, respectively), hard
stellar encounters, and other high-energy phenomena where GW
emission is expected. Furthermore, the signature of GWs is
always determined by the relative motion of the sources. In this
review paper, we discuss how the waveform
and the emission of GWs depend on the relative motion in the
Newtonian, relativistic and post-relativistic regimes, including, for
example, situations where gravitomagnetic corrections have to be
considered in the orbital motion.
As a first consideration, we have to say that the problem of
motion, {\it i.e.} the problem of describing the dynamics of
gravitationally interacting bodies, is the cardinal problem of any
theory of gravity. From the publication of Newton's {\it
Principia} to the beginning of the twentieth century, this problem
has been thoroughly investigated within the framework of Newton's
dynamics. This approach led to the formulation of many concepts
and theoretical tools which have been applied to other fields of
physics. As a consequence, the relationship between Einstein's and
Newton's theories of gravity has been, and still is, very
peculiar. On the one hand, from a theoretical point of view, the
existence of Newton's theory facilitated the early development of
Einstein's theory by suggesting an approximation method (called
post-Newtonian (PN)) which soon allowed some observational
consequences of General Relativity (GR) to be drawn. Indeed, the
PN approximation method, developed by Einstein himself
\cite{einstein}, Droste and de Sitter \cite{droste,desitter}
within one year after the publication of GR, led to the
predictions of $i)$ the relativistic advance of the perihelion of
planets, $ii)$ the gravitational redshift, $iii)$ the deflection
of light, and $iv)$ the relativistic precession of the Moon's orbit,
which are the so-called ``classical'' tests of GR.
On the other hand, as emphasized by Eisenstaedt
\cite{eisenstaedt}, the use of the
PN approximation method has had, from a conceptual point of view,
the adverse side-effect of implicitly introducing a
``neo-Newtonian'' interpretation of GR. Indeed, technically this
approximation method follows the Newtonian way of tackling
gravitational problems as closely as possible. But this technical
reduction of Einstein's theory into the Procrustean bed of
Newton's theory surreptitiously entails a corresponding conceptual
reduction: the Einsteinian problem of motion is conceived within
the Newtonian framework of an ``absolute'' coordinate space and an
``absolute'' coordinate time. However, some recent developments
oblige us to reconsider the problem of motion within Einstein's
theory. In particular, the discovery of the binary pulsar PSR
1913+16 by Hulse and Taylor in 1974 \cite{hulse}, and its
continuous observation by Taylor and coworkers (see references
\cite{taylor,weisberg}), led to an impressively accurate tracking
of the orbital motion of a NS in a binary system. This means that
it is worth reconsidering in detail, i.e. at its foundation, the
problem of motion also in relation to the problem of generation
and detection of GWs. In other words, the motion of sources could
give further signatures to GWs and then it has to be carefully
reconsidered.
The first part of this review paper is devoted to the theory of
orbits. The most natural way to undertake this task is to start
with the discussion of the Newtonian problem of motion; then we
consider the relativistic problem of motion, in particular the PN
approximation and the further gravitomagnetic corrections.
The theory of orbits can be connected to GWs since studies of
binary systems prove, beyond reasonable doubt, that such a form
of radiation must exist. Detecting the waves directly and
exploiting them could prove a very powerful way to study
astrophysical objects. In other words, the detection of GWs could
give rise to the so-called {\it Gravitational Astronomy}.
In view of this achievement, it is relevant to stress that GW
science has entered a new era. Experimentally~\footnote{GW
experiments started with the pioneering work of Joseph Weber at
Maryland in the 1960s.}, several ground-based laser-interferometer GW
detectors ($10$ Hz\mbox{--}$1$ kHz) have been built in the United
States (LIGO)~\cite{LIGO}, Europe (VIRGO and GEO)~\cite{VIRGO,GEO}
and Japan (TAMA)~\cite{TAMA}, and are now taking
data at design sensitivity.
A laser
interferometer space antenna (LISA)~\cite{LISA}
($10^{-4} \mbox{--} 10^{-2}$ Hz) might fly within
the next decade.
From a theoretical point of view, recent years have witnessed
numerous major advances. Concerning the most promising GW sources
for ground-based and space-based detectors, i.e. binary systems
composed of NSs and BHs, our understanding of the relativistic
two-body problem, and of the consequent GW-generation problem, has
improved significantly.
Knowledge has also progressed on the problem of motion of a point
particle in curved spacetime when the emission of GWs is taken
into account (non-geodesic motion)~\cite{RR,others}. Solving this
problem is of considerable importance for predicting very accurate
waveforms emitted by extreme mass-ratio binaries, which are among
the most promising sources for LISA~\cite{emri}.
The GW community working at the interface between
the theory and the experiment has provided
{\it templates}~\cite{templates,DIS98,EOB} for binaries
and developed robust algorithms~\cite{DA,algorithms} for
pulsars and other GW-sources observable
with ground-based and space-based interferometers. The joint work of
data analysts and experimentalists has established
astrophysically significant upper limits for several GW
sources~\cite{lsc,lscpulsar,lscstoch} and
is now eagerly awaiting the first detection.
In this research framework, searching for criteria to classify
how sources collide and interact is of fundamental importance. A
first rough criterion can be the classification of stellar
encounters in {\it collisional}, as in the globular clusters, and
in {\it collisionless} as in the galaxies \cite{binney}.
Fundamental parameters are the richness and the density of the
stellar system; obviously, we expect a large production
of GWs in rich and dense systems.
Systems with these features are globular clusters and
galactic centers. In particular, one can take into account the
stars (early-type and late-type) around our Galactic
Center, e.g. Sagittarius A$^{*}$ (Sgr A$^{*}$), which could be very
interesting targets for the above-mentioned ground-based and
space-based detectors.
In recent years, detailed information has been obtained on the
kinematics and dynamics of stars moving in the gravitational field
of such a central object. The statistical properties of the spatial
and kinematical distributions are of particular interest (see e.g.
\cite{Genzel,Sellgreen,CapozIov}). Using them, it is possible to
give a quite accurate estimate of the mass and the size of the
central object: we have $(2.61\pm0.76)\times10^6M_{\odot}$
concentrated within a radius of $0.016$ pc (about $30$
light-days) \cite{Ghez,Thatte}. More precisely, in \cite{Ghez},
a campaign of observations is described in which velocity
measurements in the central arcsec$^{2}$ are extremely accurate.
From this bulk of data, considering a field of resolved
stars whose proper motions are accurately known, one can classify
orbital motions and deduce, in principle, the rate of production
of GWs according to the different types of orbits.
These issues motivate this review paper in which, by a
classification of orbits in accordance with the conditions of
motion, we want to calculate the GW luminosity for different
types of stellar encounters and orbits (see also \cite{CDDIN,SF}).
Following the method outlined in \cite{pm1,pm2}, we investigate
the GW emission by binary systems in the quadrupole
approximation considering bounded (circular or elliptical) and
unbounded (parabolic or hyperbolic) orbits. Obviously, the main
parameter is the approaching energy of the stars in the system
(see also \cite{schutz} and references therein). We expect that
gravitational waves are emitted with a ``peculiar'' signature
related to the encounter type: such a signature has to be a
``burst'' waveform with a maximum in correspondence of the
periastron passage. The problem of {\it bremsstrahlung}-like
gravitational wave emission has been studied in detail by Kovacs
and Thorne \cite{kt} by considering stars interacting on unbounded
and bounded orbits. In this review paper, we face this problem by discussing
in detail the dynamics of such a phenomenon which could greatly
improve the statistics of possible GW sources.
For further
details see also
\cite{landau,gravitation,weinberg,BW,BS,SC,maggiore, KT87,BA,M00,CT02,AB03,FH05,KTcaltech}.
The review is organized as follows. In Part I, as we said, we
discuss the theory of orbits. In Sec.2, we start with the
Newtonian theory of orbits and discuss the main features of
stellar encounters by classifying trajectories. Sec.3 is devoted
to orbits with relativistic corrections. A method for solving the
equations of motion of binary systems at the first
PN-approximation is reviewed. The aim is to express the solution
in a quasi-Newtonian form. In Sec.4, we study higher-order
relativistic corrections to the orbital motion considering
gravitomagnetic effects. We discuss in detail how such
corrections come out by taking into account ``magnetic'' components
in the weak field limit of the gravitational field. Finally, the
orbital structure and the stability conditions are discussed,
giving numerical examples. Besides the standard periastron
corrections of GR, a new nutation effect has to be considered
due to ${\displaystyle c^{-3}}$ corrections. The transition to
a chaotic behavior strictly depends on the initial conditions. The
orbital phase space portrait is discussed.
Part II is devoted to the production and signature of
gravitational waves. We start, in Sec.5, by deriving the wave
equation in linearized gravity and discuss the gauge properties of
GWs. Sec.6 is devoted to the problems of generation, emission and
interaction of GWs with a detector. In Sec.7, we discuss the
problem of GW luminosity and emission from binary systems moving
along Newtonian orbits. The quadrupole approximation is assumed.
Sec.8 is devoted to the same problem taking into account
relativistic motion. In Sec.9, gravitomagnetic effects on the
orbits and the emission are also considered. In Sec.10, as an
outstanding application of the above considerations, we derive the
expected rate of events from the Galactic Center. Due to the
peculiar structure of this stellar system, it can be considered a
privileged target from which GWs could be detected and
classified. Discussion, concluding remarks and perspectives are
given in Sec.11.
\part{\large Theory of orbits}
\section{Newtonian orbits}
\label{due}
We want to describe, as accurately as possible, the dynamics of a
system of two gravitationally interacting bodies, each one having
finite dimensions. Each body exerts a conservative, central force
on the other, and no other external forces are considered, the
system being assumed isolated from the rest of the universe. We
first take into account the non-relativistic theory of orbits,
since stellar systems, even at high densities and even when
constituted by compact objects, can usually be assumed to be in the
Newtonian regime. In most cases, the real situation is more
complicated. Nevertheless, in all cases, it is an excellent
starting approximation to treat the two bodies of interest as
isolated from outside interactions. We give here a self-contained
summary of the well-known orbital types \cite{binney,landau} which
will be extremely useful for the further discussion.
\subsection{Equations of motion and conservation laws}
\label{uno1}
Newton's equations of motion for two particles of masses $m_1$
and $m_2$, located at ${\bf r_1}$ and ${\bf r_2}$, respectively,
and interacting by gravitational attraction are, in the absence of
external forces,
\begin{eqnarray}
\frac{d{\bf p_1}}{dt} &=& -G\frac{m_1 m_2}{|{\bf r_1}-{\bf r_2}|^3}({\bf r_1}-{\bf r_2})\, ,\nonumber\\
\frac{d{\bf p_2}}{dt} &=& +G\frac{m_1 m_2}{|{\bf r_1}-{\bf
r_2}|^3}({\bf r_1}-{\bf r_2})\, , \label{1}\end{eqnarray}
where $\displaystyle{{\bf p_i}=m_{i}\frac{d{\bf r_i}}{dt}}$
is the momentum of particle $i$ $(i = 1,2)$, and $G$ is the
Newtonian gravitational constant. Adding the two equations of motion, we obtain
\begin{displaymath}
\frac{d}{dt}({\bf p_1}+{\bf p_2})=0\, ,\nonumber\\
\end{displaymath}
or, with ${\bf P}={\bf p_1}+{\bf p_2}$ denoting the total momentum of the two body system,
\begin{displaymath}
{\bf P}=const\,.
\end{displaymath}
Thus we have found a first {\it conservation law}, namely the conservation of the total momentum
of a two-body system in the absence of external forces. We can make use of this by carrying
out a Galilei transformation to another inertial frame in which the total momentum is equal to
zero. Indeed, let us apply the transformation
\begin{displaymath}
{\bf r_i}\rightarrow{\bf r'_i}={\bf r_i}-{\bf v}t,\qquad i=1,2
\end{displaymath}
hence $\displaystyle{{\bf p_i}\rightarrow {\bf p'_i}={\bf p_i}-m_i{\bf v}}$ and hence, with $M = m_1 + m_2$,
\begin{displaymath}
{\bf P}\rightarrow{\bf P'}={\bf P}-M{\bf v}\, ,\nonumber\\
\end{displaymath}
and if we choose $\displaystyle{{\bf v} = \frac{\bf P}{M}}$, then
the total momentum is equal to zero in the primed frame. We also
note that the gravitational force is invariant under the Galilei
transformation, since it depends only on the difference
$\displaystyle{{\bf r_1}-{\bf r_2}}$. Thus let us from now on work
in the primed frame, but drop the primes for convenience of
notation. We can now replace the original equations of motion with
the equivalent ones,
\begin{displaymath}
{\bf P}=0,\qquad\frac{d\bf p}{dt}=-G\frac{m_1 m_2}{r^3}{\bf r}\, ,\nonumber\\
\end{displaymath}
where $\displaystyle{{\bf r}= {\bf r_1}-{\bf r_2}}$,
$\displaystyle{r = |{\bf r}|}$, and $\displaystyle{{\bf p} = {\bf
p_1}=-{\bf p_2}}$ in this frame. Next we introduce the position vector ${\bf R}$
of the center of mass of the system:
\begin{displaymath}
{\bf R}=\frac{m_1{\bf r_1}+m_2{\bf r_2}}{m_1+m_2}\, ,\nonumber\\
\end{displaymath}
hence
\begin{displaymath}
{\bf P}=M\frac{{d\bf R}}{dt}\, ,
\end{displaymath}
and hence from ${\bf P}=0$ we have
\begin{displaymath}
{\bf R}=const\, ,
\end{displaymath}
and we can carry out a translation of the origin of our coordinate frame such that ${\bf R}= 0$. The
coordinate frame we have arrived at is called {\it center-of-mass frame} (CMS). We can also see now
that
\begin{displaymath}
{\bf p}={\bf p_1}=m_1\frac{d{\bf r_1}}{dt}=\mu\frac{d{\bf r}}{dt}\, ,
\end{displaymath}
where $\displaystyle{\mu=\frac{m_1 m_2}{m_1+m_2}}$ is the {\it reduced mass} of the system, and hence the equation of
motion can be cast in the form
\begin{equation}
\mu\frac{d^2{\bf r}}{dt^2}=-G\frac{\mu M}{r^2}{\bf \hat {r}}\, ,
\label{10}\end{equation} where we have defined the radial unit
vector $\displaystyle{{\bf \hat {r}}=\frac{{\bf r}}{r}}$. We can
get two more conservation laws if we take the scalar product of
Eq. (\ref{10}) with $\displaystyle{\frac{d{\bf r}}{dt}}$ and its
vector product with ${\bf r}$. The scalar product with
$\displaystyle{\frac{d{\bf r}}{dt}}$ gives on the left-hand side
\begin{displaymath}
\frac{d{\bf r}}{dt}\cdot\frac{d^2{\bf r}}{dt^2}=\frac{1}{2}\frac{d}{dt}\left(\frac{d{\bf r}}{dt}\right)^2\, ,
\end{displaymath}
and on the right-hand side we have
\begin{displaymath}
\frac{{\bf \hat{r}}}{r^2}\cdot\frac{d{\bf r}}{dt}=-\frac{d}{dt}\left(\frac{1}{r}\right)\, ,
\end{displaymath}
hence
\begin{displaymath}
\frac{d}{dt}\left(\frac{\bf p^2}{2\mu}-\frac{\gamma}{r}\right)=0\, ,
\end{displaymath}
where $\displaystyle{\gamma=G\mu M}$. This implies that the expression in brackets is conserved, {\it i.e.}
\begin{equation}
\frac{\bf p^2}{2\mu}-\frac{\gamma}{r}=E=const\, .
\label{12}\end{equation}
Here the first term is the kinetic energy, the second term is the potential energy, and the sum of
kinetic energy and potential energy is the total energy $E$, which is a constant of motion. Now
take the cross product of Eq. (\ref{10}) with ${\bf r}$: on the right-hand side, we get the cross product of
collinear vectors, which is equal to zero, hence
\begin{displaymath}
{\bf r}\times\mu\frac{d^2{\bf r}}{dt^2}=\mu\frac{d}{dt}\left({\bf r}\times\frac{d{\bf r}}{dt}\right)=\frac{d}{dt}({\bf r }\times {\bf p})=0\, ,
\end{displaymath}
and hence, if we define the {\it angular momentum} ${\bf L}$ by
\begin{displaymath}
{\bf L}={\bf r}\times{\bf p}\, ,
\end{displaymath}
we get the result
\begin{displaymath}
\frac{d{\bf L}}{dt}=0\, ,
\end{displaymath}
or
\begin{displaymath}
{\bf L}=const\, ,
\end{displaymath}
{\it i.e.} conservation of angular momentum. An immediate consequence of this conservation law is
that the radius vector ${\bf r}$ always stays in one plane, namely the plane perpendicular to ${\bf L}$. This
implies that we can without loss of generality choose this plane as the $(xy)$ coordinate plane.
The vector ${\bf r}$ is then a two-dimensional vector,
\begin{displaymath}
{\bf r}=(x,y)=r {\bf \hat{r}}\, , \qquad {\bf \hat{r}}=(\cos\phi,\sin\phi)\, ,
\end{displaymath}
where we have defined the polar angle $\phi$. With this notation we can express the magnitude of
angular momentum as
\begin{equation}
L=\mu r^2\frac{d\phi}{dt}\, . \label{eq:momang1}
\end{equation}
The conservation of angular momentum can be used to simplify the equation of motion
(\ref{12}). To do this we note that
\begin{displaymath}
{\bf L}^2=({\bf r}\times{\bf p})^2={\bf r^2} {\bf p^2}-({\bf r}\cdot {\bf p})^2\, ,
\end{displaymath}
hence
\begin{displaymath}
{\bf p^2}=\frac{{\bf L^2}}{r^2}+p_{r}^{2}\, ,
\end{displaymath}
where $\displaystyle{p_r={\bf \hat{r}}\cdot {\bf p}}$ is the radial component of momentum. Substituting into Eq. (\ref{12}) then gives
\begin{displaymath}
\frac{p_{r}^{2}}{2\mu}+\frac{{\bf L^2}}{2\mu r^2}-\frac{\gamma}{r}=E\, ,
\end{displaymath}
or, with $\displaystyle{p_r=\mu\frac{dr}{dt}}$,
\begin{equation}
\frac{1}{2}{\mu\left(\frac{dr}{dt}\right)}^{2}+\frac{L^{2}}{2\mu
r^{2}}-\frac{\gamma}{r}=E\, . \label{eq:energia}\end{equation}
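As a simple numerical illustration (not part of the original treatment: $G=1$ units and arbitrary masses and initial data are assumed here), one can integrate the reduced equation of motion (\ref{10}) with a leapfrog scheme and check that the energy of Eq. (\ref{eq:energia}) and the angular momentum are indeed constants of motion:

```python
import math

# Illustrative sketch (G = 1, arbitrary masses and initial data): integrate
# the reduced equation of motion d^2r/dt^2 = -G M r/r^3 with a leapfrog
# (kick-drift-kick) scheme and monitor the conserved quantities
#   E = p^2/(2 mu) - gamma/r   and   L = mu (x vy - y vx).
G, m1, m2 = 1.0, 1.0, 2.0
M = m1 + m2
mu = m1 * m2 / M
gamma = G * mu * M

def acc(x, y):
    r3 = (x * x + y * y) ** 1.5
    return -G * M * x / r3, -G * M * y / r3

def energy_and_L(x, y, vx, vy):
    r = math.hypot(x, y)
    E = 0.5 * mu * (vx * vx + vy * vy) - gamma / r
    L = mu * (x * vy - y * vx)
    return E, L

x, y, vx, vy = 1.0, 0.0, 0.0, 1.2      # a bound (E < 0) orbit
E0, L0 = energy_and_L(x, y, vx, vy)
dt = 1e-4
ax, ay = acc(x, y)
for _ in range(100000):
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # kick
    x += dt * vx; y += dt * vy                 # drift
    ax, ay = acc(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # kick
E1, L1 = energy_and_L(x, y, vx, vy)
print(E0, E1, L0, L1)
```

The angular momentum is conserved to machine precision by construction of the scheme, while the energy error remains bounded and small over many orbits.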
Looking back at our starting point, Eq. (\ref{1}), we reduce the
dimensionality of our problem: from the simultaneous differential
equations of six functions of time, namely the six components of
the position vectors ${\bf r_1}$ and ${\bf r_2}$, we reduce to a
pair of simultaneous differential equations for the polar
coordinates $r(t)$ and $\phi(t)$ these equations contain two
constants of motion, the total energy $E$ and angular momentum
$L$. Then a mass $m_1$ is moving in the gravitational potential
$\Phi$ generated by a second mass $m_2$. The vector radius and
the polar angle depend on the time as a consequence of the star
motion, i.e. $\textbf{r}=\textbf{r}(t)$ and $\phi=\phi(t)$. With
this choice, the velocity $\textbf{v}$ of the mass $m_1$ can be
parameterized as
\begin{displaymath}
\textbf{v}=v_r\widehat{r}+v_{\phi}\widehat{\phi}~,
\end{displaymath}
where the radial and tangent components of the velocity are,
respectively,
\begin{displaymath}
v_r=\frac{dr}{dt}\, , ~~~~~~~~v_{\phi}=r \frac{d\phi}{dt}~.
\end{displaymath}
We can split the kinetic energy into two terms where, due to the
conservation of angular momentum, the second one is a function of
$r$ only. An effective potential energy $V_{eff}$,
\begin{displaymath}
V_{eff}=\frac{L^{2}}{2\mu r^{2}}-\frac{\gamma}{r}\, ,\label{eq:energpot}
\end{displaymath}
is immediately defined. The first term corresponds to a repulsive
force, called the angular momentum barrier. The second term is the
gravitational attraction. The interplay between attraction and
repulsion is such that the effective potential energy has a
minimum. Indeed, differentiating with respect to $r$ one finds
that the minimum lies at $\displaystyle{r_0=\frac{L^{2}}{\gamma\mu}}$ and that
\begin{displaymath}
V_{eff}^{min}=-\frac{\mu\gamma^{2}}{2L^{2}}\label{eq:enrgpotmin}\,.\end{displaymath}
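A quick numerical check (with illustrative values of $\mu$, $\gamma$ and $L$, assumed here for demonstration only) confirms the position and depth of this minimum:

```python
# Quick numerical check with illustrative (assumed) values of mu, gamma, L:
# the minimum of V_eff(r) = L^2/(2 mu r^2) - gamma/r lies at
# r0 = L^2/(gamma*mu), where V_eff(r0) = -mu*gamma^2/(2 L^2).
mu, gamma, L = 0.5, 2.0, 0.8

def V_eff(r):
    return L**2 / (2.0 * mu * r**2) - gamma / r

r0 = L**2 / (gamma * mu)
V_min = -mu * gamma**2 / (2.0 * L**2)
h = 1e-6
dV = (V_eff(r0 + h) - V_eff(r0 - h)) / (2.0 * h)   # slope at r0, ~ 0
print(r0, V_eff(r0), V_min, dV)
```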
Therefore, since the radial part of the kinetic energy,
\begin{displaymath}
K_{r}=\frac{1}{2}\mu\left(\frac{dr}{dt}\right)^{2}\, ,
\end{displaymath}
is non-negative, the total energy must be not less than
$V_{eff}^{min}$, i.e.
\begin{displaymath}
E\geq
E_{min}=-\frac{\mu\gamma^{2}}{2L^{2}}\label{eq:emin}\,.\end{displaymath}
The equality corresponds to circular motion. For
$E_{min}<E<0$, the trajectory lies between a smallest value
$r_{min}$ and a greatest value $r_{max}$ which can be found from the
condition $E=V_{eff}$, i.e.
\begin{displaymath}
r_{\{min,max\}}=-\frac{\gamma}{2E}\pm\sqrt{\left(\frac{\gamma}{2E}\right)^{2}+\frac{L^{2}}{2\mu
E}}\, ,\label{eq:rminmax}\end{displaymath} where the upper (lower) sign
corresponds to $r_{max}$ ($r_{min}$). For $E>0$, only the upper
sign gives an acceptable (positive) value; the other root is negative and
must be rejected.
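The turning points can be verified numerically; the following sketch (again with illustrative, assumed parameter values) checks that $r_{min}$ and $r_{max}$ satisfy $E=V_{eff}(r)$ for a bound, non-circular orbit:

```python
import math

# Sketch with illustrative (assumed) values: for E_min < E < 0 the
# turning points r_{min,max} of the formula above must satisfy
# E = V_eff(r) with V_eff(r) = L^2/(2 mu r^2) - gamma/r.
mu, gamma, L = 0.5, 2.0, 0.8

def V_eff(r):
    return L**2 / (2.0 * mu * r**2) - gamma / r

E_min = -mu * gamma**2 / (2.0 * L**2)
E = 0.5 * E_min                         # a bound, non-circular orbit
disc = math.sqrt((gamma / (2.0 * E))**2 + L**2 / (2.0 * mu * E))
r_max = -gamma / (2.0 * E) + disc
r_min = -gamma / (2.0 * E) - disc
print(r_min, r_max, V_eff(r_min) - E, V_eff(r_max) - E)
```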
Let us now proceed in solving the differential equations (\ref{eq:momang1})
and (\ref{eq:energia}). We have
\begin{equation}
\frac{dr}{dt}=\frac{dr}{d\phi}\frac{d\phi}{dt}=\frac{L}{\mu r^{2}}\frac{dr}{d\phi}=
-\frac{L}{\mu}\frac{d}{d\phi}\left(\frac{1}{r}\right)\label{eq:diff}\, ,\end{equation}
and defining, as standard, the auxiliary variable $u=1/r$, Eq.
(\ref{eq:energia}) takes the form
\begin{equation}
u'^{2}+u^{2}-\frac{2\gamma\mu}{L^{2}}u=\frac{2\mu E}{L^{2}}\, ,\label{eq:diffenerg}\end{equation}
where $\displaystyle{u'=du/d\phi}$ and we have divided by $\displaystyle{L^{2}/2\mu}$. Differentiating
with respect to $\phi$, we get
\begin{displaymath}
u'\left(u''+u-\frac{\gamma\mu}{L^{2}}\right)=0\, ,\end{displaymath}
hence either $u'=0$, corresponding to the circular motion, or
\begin{equation}
u''+u=\frac{\gamma\mu}{L^{2}}\, ,\label{eq:moto}\end{equation}
which has the solution
\begin{displaymath}
u=\frac{\gamma\mu}{L^{2}}+C\cos\left(\phi+\alpha\right)\, ,
\end{displaymath}
or, reverting to the variable $r$,
\begin{equation}
r=\left[\frac{\gamma\mu}{L^{2}}+C\cos\left(\phi+\alpha\right)\right]^{-1}\, ,\label{eq:solution}\end{equation}
which is the canonical form of conic sections in polar coordinates
\cite{smart}. The constants $C$ and $\alpha$ are the two integration
constants of the second-order differential equation
(\ref{eq:moto}). The solution (\ref{eq:solution}) must satisfy the
first order differential equation (\ref{eq:diffenerg}).
Substituting (\ref{eq:solution}) into (\ref{eq:diffenerg}) we
find, after a little algebra,
\begin{equation}
C^{2}=\frac{2\mu E}{L^{2}}+\left(\frac{\gamma\mu}{L^{2}}\right)^{2}\, ,\label{eq:C}
\end{equation}
and therefore, taking account of Eq. (\ref{eq:emin}), we get
$C^{2}\geq 0$. This implies the four kinds of orbits given in
Table I and in Fig. \ref{fig:orbits}.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline $C=0$ & $E=E_{min}$ & circular orbits\tabularnewline \hline
$0<\left|C\right|<\frac{\gamma\mu}{L^{2}}$ & $E_{min}<E<0$ &
elliptic orbits\tabularnewline \hline
$\left|C\right|=\frac{\gamma\mu}{L^{2}}$ & $E=0$ & parabolic
orbits\tabularnewline \hline
$\left|C\right|>\frac{\gamma\mu}{L^{2}}$ & $E>0$ & hyperbolic
orbits\tabularnewline \hline
\end{tabular}
\end{center}
\caption{Orbits in Newtonian regime classified by the approaching
energy.}
\end{table}
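The content of Table I can be condensed into a small helper function (an illustrative sketch, not drawn from the original text):

```python
# Illustrative helper (not from the original text): classify a Newtonian
# orbit by its energy E, following Table I, with E_min = -mu*gamma^2/(2 L^2).
def classify_orbit(E, mu, gamma, L):
    """Classify a Newtonian orbit by its total energy E (Table I)."""
    E_min = -mu * gamma**2 / (2 * L**2)
    if E < E_min:
        raise ValueError("E < E_min is not allowed")
    if E == E_min:
        return "circular"
    if E < 0:
        return "elliptic"
    if E == 0:
        return "parabolic"
    return "hyperbolic"

print(classify_orbit(-0.1, 1.0, 1.0, 1.0))
```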
\begin{figure}[ht]
\includegraphics[scale=0.4]{orbits.eps}
\caption{Newtonian paths: the black line is the hyperbolic path,
the blue line the parabolic path, the red line the elliptical
path, and the cyan line the circular path.} \label{fig:orbits}
\end{figure}
\subsection{Circular Orbits}
\label{uno2}
{\bf Circular motion} corresponds to the condition $u'=0$, from which one
finds $r_{0}=L^{2}/(\mu\gamma)$, where $V_{eff}$ has its minimum. We
also note that the expression for $r_{0}$ together with
Eq.(\ref{eq:emin}) gives
\begin{equation}
r_{0}=-\frac{\gamma}{2E_{min}}\, .\label{eq:rzero}\end{equation}
Thus the two bodies move in concentric circles with radii inversely
proportional to their masses and are always in opposition.
\subsection{Elliptical Orbits}
\label{uno3}
For $0<\left|C\right|<\mu\gamma/L^{2}$, $r$ remains finite for all
values of $\phi$. Since $r(\phi+2\pi)=r(\phi)$, the trajectory is
closed and it is an {\bf ellipse}. If one chooses $\alpha=0$, the major
axis of the ellipse corresponds to $\phi=0$. We get
\begin{displaymath}
r_{\left|\phi=0\right.}=r_{min}=\left[\frac{\gamma\mu}{L^{2}}+C\right]^{-1}\, ,\label{eq:rphi}\end{displaymath}
and
\begin{displaymath}
r_{\left|\phi=\pi\right.}=r_{max}=\left[\frac{\gamma\mu}{L^{2}}-C\right]^{-1}\, ,\label{eq:rpi}\end{displaymath}
and since $r_{max}+r_{min}=2a$, where $a$ is the semi-major axis
of the ellipse, one obtains
\begin{displaymath}
a=\frac{\gamma\mu}{L^{2}}\left[\left(\frac{\gamma\mu}{L^{2}}\right)^{2}-C^{2}\right]^{-1}\, .\end{displaymath}
$C$ can be eliminated from the latter equation and Eq. (\ref{eq:C}),
and then
\begin{equation}
a=-\frac{\gamma}{2E}\, .\label{eq:a}\end{equation}
Furthermore, if we denote the distance
$r_{\left|\phi=\pi/2\right.}$ by $l$, the so-called {\it
semi-latus rectum} or the parameter of the ellipse, we get
\begin{equation}
l=\frac{L^{2}}{\gamma\mu}\, ,\label{eq:latusrectum}\end{equation}
and hence the equation of the trajectory
\begin{equation}
r=\frac{l}{1+\epsilon\cos\phi}\, ,\label{eq:traiettoria}\end{equation}
where $\displaystyle{\epsilon=\sqrt{1-\frac{l}{a}}}$ is the
eccentricity of the ellipse. If we consider the semi-major axis $a$ of
the orbit, Eq. (\ref{eq:a}), and the eccentric anomaly ${\cal E}$,
the orbit can also be written as (see \cite{roy})
\begin{displaymath}\label{Ellisse}
r = a (1 - \epsilon \cos{\cal E})\, ,
\end{displaymath}
The relation between ${\cal E}$ and the time, ${\cal M}={\cal
E}-\epsilon\sin{\cal E}$ (Kepler's equation, where ${\cal M}$ is the
mean anomaly), is transcendental in ${\cal E}$, and the solution
cannot be expressed in a finite number of terms. Moreover, there is
the following relation between the eccentric anomaly and the angle $\phi$:
\begin{displaymath}\label{camb_E}
\cos \phi\,=\,\frac{\cos {\cal E} - \epsilon}{1-\epsilon \cos {\cal E}}\, .
\end{displaymath}
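Kepler's equation ${\cal M}={\cal E}-\epsilon\sin{\cal E}$, which links the eccentric anomaly to the time through the mean anomaly ${\cal M}$, has no closed-form solution and is usually solved iteratively. A minimal Newton--Raphson sketch, with illustrative values of ${\cal M}$ and $\epsilon$:

```python
import math

# Sketch of a standard Newton-Raphson solver for Kepler's equation
# M = E - eps*sin(E) (eccentric anomaly E, eccentricity eps < 1).
# The values of M and eps below are purely illustrative.
def solve_kepler(M, eps, tol=1e-12, max_iter=50):
    E = M if eps < 0.8 else math.pi        # common starting guess
    for _ in range(max_iter):
        dE = (E - eps * math.sin(E) - M) / (1.0 - eps * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            return E
    raise RuntimeError("Kepler solver did not converge")

ecc_anom = solve_kepler(1.0, 0.3)
r_over_a = 1.0 - 0.3 * math.cos(ecc_anom)  # r = a (1 - eps*cos(E))
print(ecc_anom, r_over_a)
```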
\subsection{Parabolic and Hyperbolic Orbits}
\label{uno4}
These solutions can be dealt with together. They correspond to $E\geq
0$, which is the condition to obtain unbounded orbits.
Equivalently, one has $\left|C\right|\geq\gamma\mu/L^{2}$.
The trajectory is
\begin{equation}
r=l\left(1+\epsilon\cos\phi\right)^{-1}\, ,\label{eq:traie}\end{equation}
where $\epsilon\geq1$; the equality corresponds to $E=0$.
Therefore, in order to ensure positivity of $r$, the polar angle
$\phi$ has to be restricted to the range given by
\begin{displaymath}
1+\epsilon\cos\phi>0 \label{eq:cosphi}\, .\end{displaymath}
This means $\cos\phi>-1$, i.e. $\phi\in(-\pi,\pi)$ and the
trajectory is not closed any more. For $\phi\rightarrow\pm\pi$,
we have $r\rightarrow\infty$. The curve (\ref{eq:traie}), with
$\epsilon=1$, is a {\bf parabola}. For $\epsilon>1$, the allowed
interval of polar angles is smaller than $\phi\in(-\pi,\pi)$, and
the trajectory is a {\bf hyperbola}. Such trajectories correspond to
non-returning objects.
Let us consider a semi-axis $a$ and a variable ${\cal F}$
analogous to the elliptic eccentric anomaly ${\cal E}$. The
hyperbolic orbit is defined also by
\begin{displaymath}\label{Iperbole}
r = a(\epsilon \cosh {\cal F} - 1)\, ,
\end{displaymath}
hence, there is the following relation between ${\cal F}$ and the angle
$\phi$:
\begin{displaymath} \label{camb_F}
\cos \phi\,=\,\frac{l-a(\epsilon \cosh {\cal F} - 1)}{\epsilon
a (\epsilon \cosh {\cal F} - 1)}\, .
\end{displaymath}
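As a consistency check of this parametrization (illustrative values; the hyperbola's semi-latus rectum $l=a(\epsilon^{2}-1)$ is assumed), one can verify numerically that ${\cal F}=0$ corresponds to the periastron $\phi=0$, while $\cos\phi\rightarrow-1/\epsilon$ as ${\cal F}\rightarrow\infty$, recovering the restricted range of polar angles discussed above:

```python
import math

# Consistency sketch (illustrative values): for the hyperbolic orbit
# r = a(eps*cosh(F) - 1) with semi-latus rectum l = a(eps^2 - 1),
# cos(phi) = (l - r)/(eps*r) equals 1 at periastron (F = 0) and
# tends to -1/eps as F -> infinity (the asymptotic direction).
a, eps = 1.0, 1.5
l = a * (eps**2 - 1.0)

def r_of(F):
    return a * (eps * math.cosh(F) - 1.0)

def cos_phi(F):
    r = r_of(F)
    return (l - r) / (eps * r)

print(cos_phi(0.0), cos_phi(20.0), -1.0 / eps)
```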
Finally, the parabolic orbit can be defined by another
relation (see \cite{roy})
\begin{displaymath}\label{Parabola}
r \,=\, \frac{P^2}{2}\, ,
\end{displaymath}
where $P$ is a parameter. In this case
\begin{displaymath} \label{camb_P}
\cos \phi \,=\, \frac{2 l - P^2}{P^2}\, .
\end{displaymath}
As we will discuss below, this classification of orbital motions
can prove extremely useful for characterizing the waveform signature of
gravitational radiation. Let us now take into account the
relativistic theory of orbits.
\section{Relativistic Orbits}
\label{tre}
As we have seen in the above Section, the non-relativistic two-body problem consists of two sub-problems:
\begin{enumerate}
\item deriving the equations of orbital motion for two gravitationally interacting extended bodies,
\item solving these equations of motion.
\end{enumerate}
In the case of widely separated objects, one can simplify the
first sub-problem by neglecting the contribution of the quadrupole and
higher multipole moments of the bodies to their external
gravitational field, thereby approximating the equations of
orbital motion of two extended bodies by the equations of motion
of two point masses located at the Newtonian centers of mass of the
extended objects. The second sub-problem can then be solved exactly, as
shown in the above Section. The two-body problem in GR is more
complicated: because of the non-linear hyperbolic structure of
Einstein's field equations, one is not sure of the correct
formulation of the boundary conditions at infinity, so that the
problem is not even well posed \cite{ehlers}. Moreover, since in
Einstein's theory the local equations of motion are contained in
the gravitational field equations, it is {\it a priori} difficult
to separate the problem in two sub-problems, as in the
non-relativistic case, where one can compute the gravitational
field as a linear functional of the matter distribution
independently of its motion. Furthermore, even when one can
achieve such separation and derive some equations of orbital
motion for the two bodies, these equations will {\it a priori} not
be ordinary differential equations but, because of the finite
velocity of propagation of gravity, will consist of some kind of
retarded integro-differential system \cite{deruelle}. However, all
these difficulties can be somehow dealt with if one resorts to
approximation procedures and breaks the general covariance by
selecting special classes of coordinates systems \cite{dixon}.
Two physically different situations, amenable to perturbation
treatments, have been considered in the literature:
\begin{enumerate}
\item the problem of two {\it weakly self-gravitating, slowly moving, widely separated}
fluid bodies which has been treated by the so-called PN approximation schemes
(for references see \cite{breuer,caporali,spyrou,maggiore,gravitation}),
\item the problem of two {\it strongly self-gravitating, widely separated}
bodies, which has been treated by matching a strong-field ``internal'' approximation
scheme in and near the objects to a weak-field ``external'' approximation scheme outside the objects.
\end{enumerate}
The approach has been pursued both for slowly moving objects,
either BHs \cite{death} or in general strongly self-gravitating
objects \cite{kates}, and for strongly self-gravitating objects
moving with arbitrary velocities \cite{damour,bel}. In the latter
case, the equations of orbital motion were considered in the form of
a retarded integro-differential system which, however, could be
transformed into ordinary differential equations and which, when
attention was restricted to slowly moving bodies, were expanded in
powers of $\displaystyle{\frac{v}{c}}$ \cite{DD,damour}.
When keeping only the first relativistic corrections to Newton's
law (first post-Newtonian approximation), it turns out that the
equations of orbital motion of widely separated, slowly moving,
{\it strongly} self-gravitating objects depend only on two
parameters (the Schwarzschild masses) and are identical to the
equations of motion of weakly self-gravitating objects (when
using, in both cases, a coordinate system which is harmonic at
lowest order). This is, in fact, a non-trivial consequence of the
structure of Einstein's theory \cite{deruelle}. Then, in the next
subsections, we consider the PN motion including secular and
periodic effects at first order approximation and we shall show
that the equations of motion can be written in a quasi-Newtonian
form.
\subsection{Relativistic Motion and conservation laws}
The relativistic case can be seen as a correction to the Newtonian
theory of orbits \cite{deruelle,DD}. In GR, time is incorporated as a mathematical
dimension, so that the four-dimensional rectangular perifocal
coordinates are $(x,y,z,t)$ and the four-dimensional polar
coordinates are $(r,\theta,\phi,t) $. The first post-Newtonian
equations of orbital motion of a binary system constrain the
evolution in coordinate time $t$ of the positions ${\bf r_1}$ and
${\bf r_2}$ of the two objects. These positions represent the
center of mass in the case of weakly self-gravitating objects (see
e.g.\cite{spyrou,deruelle}) and the center of field in the case of
strongly self gravitating objects (see \cite{damour}). They can be
derived from a Lagrangian which is a function of the {\it
simultaneous} positions ${\bf r_1}(t),{\bf r_2}(t)$, and velocities
$\displaystyle{{\bf v_1}(t)=\frac{d{\bf r_1}}{dt}}$ and
$\displaystyle{{\bf v_2}(t)=\frac{d{\bf r_2}}{dt}}$ in a given
harmonic coordinate system, and of two constant parameters, the
Schwarzschild masses of the objects $m_1$ and $m_2$:
\begin{equation}
{\cal L}_{PN}\left({\bf r_1}(t),{\bf r_2}(t),{\bf v_1}(t),{\bf v_2}(t)\right)={\cal L}_N+\frac{1}{c^2}{\cal L}_2\, ,
\label{2.1a} \end{equation}
with
\begin{equation}
{\cal L}_N=\frac{1}{2}m_1v_1^2+\frac{1}{2}m_2v_2^2+\frac{Gm_1 m_2}{R}\, ,
\label{2.1b} \end{equation}
and
\begin{eqnarray}
{\cal L}_2 &=& \frac{1}{8}m_1v_1^4+\frac{1}{8}m_2v_2^4+\nonumber\\ && +\frac{Gm_1 m_2}{2R}\left[3(v_1^2+v_2^2)-7(v_1v_2)-(Nv_1)(Nv_2)-\frac{GM}{R}\right]\,,\nonumber\\
\label{2.1c} \end{eqnarray}
where we have introduced the instantaneous relative position
vector ${\bf R}={\bf r_1}-{\bf r_2}$ and $R=|{\bf R}|$ while
$\displaystyle{{\bf N}=\frac{{\bf R}}{R}}$. In (\ref{2.1b}) and
(\ref{2.1c}) we used the short notations $\displaystyle{{\bf
v_1}\cdot {\bf v_1}= |{\bf v_1}|^2=v_1^2}$, $\displaystyle{{\bf
v_1}\cdot {\bf v_2}= v_1v_2}$ and $\displaystyle{{\bf N}\cdot {\bf v_1}= Nv_1}$ for the ordinary Euclidean scalar
products, and $c$ is the velocity of light. The invariance, at the
PN approximation, and modulo an exact time derivative, of $ {\cal
L}_{PN}$ under spatial translations and Lorentz boosts implies,
via Noether's theorem, the conservation of the total linear
momentum of the system:
\begin{displaymath}
{\bf P}_{PN}=\frac{\partial {\cal L}_{PN}}{\partial {\bf v_1}}+\frac{\partial {\cal L}_{PN}}{\partial {\bf v_2}}\,,
\label{2.2} \end{displaymath}
and of the relativistic center of mass integral
\begin{eqnarray*}
{\bf K}_{PN} & = & {\bf G}_{PN}-t{\bf P}_{PN}\, ,\\
{\bf G}_{PN} & = &\sum_{a=1,2} \left(m_a+\frac{1}{2}\frac{m_av_a^2}{c^2}-\frac{1}{2}\frac{Gm_1m_2}{R c^2}\right){\bf r_a}\, ,\label{2.3ab}
\end{eqnarray*}
where the sum runs over the two objects \cite{spyrou,DD}.
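As a quick numerical illustration, the following sketch evaluates the Lagrangian (\ref{2.1a})-(\ref{2.1c}) for a two-body state (units $G=1$; the masses, positions and velocities below are illustrative choices of ours, not values from the text) and checks that the 1PN correction scales as $1/c^2$.

```python
import numpy as np

def lagrangian_pn(r1, r2, v1, v2, m1, m2, G=1.0, c=10.0):
    """1PN two-body Lagrangian of Eqs. (2.1a)-(2.1c); returns (L_PN, L_N)."""
    R = r1 - r2
    Rn = np.linalg.norm(R)
    N = R / Rn
    M = m1 + m2
    v1sq, v2sq, v1v2 = v1 @ v1, v2 @ v2, v1 @ v2
    L_N = 0.5*m1*v1sq + 0.5*m2*v2sq + G*m1*m2/Rn
    L_2 = (m1*v1sq**2/8 + m2*v2sq**2/8
           + G*m1*m2/(2*Rn)*(3*(v1sq + v2sq) - 7*v1v2
                             - (N @ v1)*(N @ v2) - G*M/Rn))
    return L_N + L_2/c**2, L_N

# illustrative state (our choice): equal masses in a symmetric configuration
r1, r2 = np.array([0.5, 0.0]), np.array([-0.5, 0.0])
v1, v2 = np.array([0.0, 0.4]), np.array([0.0, -0.4])
L_pn, L_n = lagrangian_pn(r1, r2, v1, v2, 1.0, 1.0, c=10.0)
```

Since ${\cal L}_2$ does not depend on $c$, the difference ${\cal L}_{PN}-{\cal L}_N$ must fall off exactly as $c^{-2}$.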
By a Poincar\'e transformation it is possible to get a PN center of mass frame where $\displaystyle{{\bf P}_{PN}= {\bf K}_{PN}= 0}$. In this frame one has:
\begin{eqnarray}
{\bf r_1} & = & \frac{\mu}{m_1}{\bf R}+\frac{\mu(m_1-m_2)}{2M^2c^2}\left(V^2-\frac{GM}{R}\right){\bf R}\, ,\nonumber\\
{\bf r_2} & = & -\frac{\mu}{m_2}{\bf R}+\frac{\mu(m_1-m_2)}{2M^2c^2}\left(V^2-\frac{GM}{R}\right){\bf R}\, ,\nonumber\\ \label{2.4ab}
\end{eqnarray}
where $\displaystyle{{\bf V}=\frac{d{\bf R}}{dt}={\bf v_1}-{\bf v_2}}$ is the instantaneous relative velocity. The problem of solving the motion of the binary system is then reduced to the simpler problem of solving the relative motion in the PN center of mass frame. For the sake of completeness, let us write down these equations of motion derived from (\ref{2.1a})-(\ref{2.1c}), where, after variation, the positions and velocities are replaced by their center of mass expressions (\ref{2.4ab}):
\begin{eqnarray}
&& \frac{d{\bf V}}{dt}= \frac{G M}{2 c^2 R^3}\left[4 G M {\bf N} (\nu +2)-
R \left(2 {\bf N} c^2+\right.\right.\nonumber\\ &&+4 (N V) {\bf V} (\nu -2)-\left.\left.3 (NV)^2 {\bf N}
\nu +2 {\bf N} V^2 (3 \nu +1)\right)\right], \nonumber\\
\label{2.5}
\end{eqnarray}
where we have introduced a mass parameter
$\displaystyle{\nu=\frac{\mu}{M}=\frac{m_1m_2}{(m_1+m_2)^2}}$ with
$\displaystyle{\left(0\leq\nu\leq\frac{1}{4}\right)}$. At this
point, it is worth noticing that, although it is in general
incorrect to use in a Lagrangian, {\it before variation},
a consequence of the equations of motion, such as Eq. (\ref{2.4ab}),
which holds only {\it after variation}, it turns out that
the relative motion in the PN center of mass frame, Eq. (\ref{2.5}),
can be correctly derived from a Lagrangian obtained by replacing
in the total Lagrangian (divided by $\mu$)
$\displaystyle{\frac{1}{\mu} {\cal L}_{PN}\left({\bf r_1},{\bf
r_2},{\bf v_1}, {\bf v_2}\right)}$ the positions and velocities by
their PN center of mass expressions obtained from (\ref{2.4ab});
moreover, it is even sufficient to use the {\it
non-relativistic} center of mass expressions:
\begin{eqnarray}
{\bf r_1}_N & = &\frac{\mu}{m_1}{\bf R}\, ,\nonumber\\
{\bf r_2}_N & = &-\frac{\mu}{m_2}{\bf R}\, ,\nonumber\\
{\bf v_1}_N & = &\frac{\mu}{m_1}{\bf V}\, ,\nonumber\\
{\bf v_2}_N & = &-\frac{\mu}{m_2}{\bf V}\, .\nonumber\\
\label{2.6abcd}
\end{eqnarray}
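The Newtonian center of mass relations satisfy $m_1{\bf r_1}_N+m_2{\bf r_2}_N=0$ and ${\bf r_1}_N-{\bf r_2}_N={\bf R}$ identically; a minimal numerical check (arbitrary illustrative values of ours):

```python
import numpy as np

m1, m2 = 3.0, 1.0                     # illustrative masses (our choice)
M, mu = m1 + m2, m1*m2/(m1 + m2)
nu = mu/M                             # mass parameter, 0 <= nu <= 1/4
R = np.array([1.2, -0.7, 0.3])        # arbitrary relative separation

r1N = (mu/m1)*R                       # Newtonian center-of-mass positions
r2N = -(mu/m2)*R

com = m1*r1N + m2*r2N                 # must vanish identically
```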
The proof goes as follows \cite{deruelle}.
Let us introduce the following linear change of spatial variables in the PN Lagrangian $\displaystyle{{\cal L}_{PN}\left( {\bf r_1}- {\bf r_2},\frac{d{\bf r_1}}{dt},\frac{d{\bf r_2}}{dt}\right)}$ :
$\displaystyle{( {\bf r_1}, {\bf r_2})\rightarrow({\bf R},{\bf X})}$ with $\displaystyle{{\bf R}={\bf r_1}-{\bf r_2}}$ and $\displaystyle{{\bf X}=\frac{(m_1{\bf r_1}+m_2{\bf r_2})}{M}}$, that is:
\begin{eqnarray*}
{\bf r_1} & =& {\bf r_1}_N+{\bf X}\, ,\\
{\bf r_2} & =& {\bf r_2}_N+{\bf X}\, ,\\
\label{2.7ab} \end{eqnarray*}
which implies (denoting $\displaystyle{\frac{d{\bf X}}{dt}={\bf W}}$):
\begin{eqnarray*}
{\bf v_1} & =& {\bf v_1}_N+{\bf W}\, ,\\
{\bf v_2} & =& {\bf v_2}_N+{\bf W}\, .\\
\label{2.7cd} \end{eqnarray*}
Expressing
\begin{eqnarray*}
{\cal L}_{PN}={\cal L}_{N}\left( {\bf r_1}- {\bf r_2}, {\bf v_1}, {\bf v_2}\right)+\left(\frac{1}{c^2}\right) {\cal L}_{2}\left( {\bf r_1}- {\bf r_2}, {\bf v_1}, {\bf v_2}\right),
\end{eqnarray*}
given by Eq. (\ref{2.1a})-(\ref{2.1c}) in terms of the new variables one finds:
\begin{eqnarray}
{\cal L}_{PN}&=&\frac{1}{2}MW^2+\frac{1}{2}\mu V^2+\frac{G\mu M}{R}+\nonumber\\ &&+\frac{1}{c^2}{\cal L}_2 \left( {\bf R},\frac{\mu}{m_1}{\bf V}+{\bf W}, -\frac{\mu}{m_2}{\bf V}+{\bf W}\right)\, .
\label{2.8}\end{eqnarray}
Hence one obtains as a consequence of the equations of the PN motion:
\begin{eqnarray}
0 & = & \frac{1}{\mu}\frac{\delta{\cal L}_{PN}}{\delta{\bf R} }
= \left(\frac{\partial}{\partial {\bf R}} - \frac{d}{dt}\frac{\partial}{\partial {\bf V}}\right)
\left[\frac{1}{2}V^2+\frac{G M}{R} +\right.\nonumber\\ &&+\left.\frac{1}{\mu c^2}{\cal L}_2 \left( {\bf R},\frac{\mu}{m_1}{\bf V}+{\bf W}, -\frac{\mu}{m_2} {\bf V}+{\bf W}\right)\right]\, ,\nonumber\\
\label{2.9} \end{eqnarray}
where in the last bracket we have discarded $\displaystyle{\frac{1}{2}MW^2}$ which gives no contribution. The first two terms on the right-hand side ({\it rhs}) of Eq. (\ref{2.9}) yield the Newtonian relative motion. We wish to evaluate the relativistic corrections to the relative motion, $\displaystyle{\left(\frac{\delta}{\delta{\bf R}}\right)\left(\frac{{\cal L}_2}{\mu c^2}\right)}$, in the PN center of mass frame. Now ${\cal L}_2$ is a polynomial in the velocities and therefore a polynomial in ${\bf W}$, and from Eq. (\ref{2.4ab}) one sees that in the PN center of mass frame $\displaystyle{{\bf W}={\cal O}\left(\frac{1}{c^2}\right)}$. Therefore, as $\displaystyle{\frac{\delta}{\delta{\bf R}}}$ does not act on ${\bf W}$, the contributions coming from ${\bf W}$ to the {\it rhs} of Eq. (\ref{2.9}) are of the second PN order $\displaystyle{{\cal O}\left(\frac{1}{c^4}\right)}$, which we shall consistently neglect throughout this work. In other words, one obtains as a consequence of the equations of the PN motion in the PN center of mass frame:
\begin{eqnarray*}
&& \frac{\delta}{\delta{\bf R}}\left[\frac{1}{2}V^2+ \frac{G M}{R}+ \frac{1}{\mu c^2}{\cal L}_2 \left( {\bf R},\frac{\mu}{m_1}{\bf V}, -\frac{\mu}{m_2} {\bf V}\right) \right]\nonumber\\ &&
={\cal O} \left(\frac{1}{c^4}\right)\, .\nonumber\\
\label{2.10}\end{eqnarray*}
This shows that the equations of the relative motion in the PN center of mass frame derive from the following Lagrangian:
\begin{eqnarray*}
{\cal L}_{PN}^R({\bf R},{\bf V})&=& \frac{1}{2}V^2+\frac{G M}{R}+\frac{1}{\mu c^2}{\cal L}_2 \left( {\bf R},\frac{\mu}{m_1}{\bf V}, -\frac{\mu}{m_2} {\bf V}\right)\, ,
\label{2.11}\end{eqnarray*}
which happens to be obtained by replacing in the full PN Lagrangian, see Eq. (\ref{2.8}) above, $\bf X$ and $\bf W$ by zero, i.e. the original variables by Eq. (\ref{2.6abcd}), and by dividing by $\mu$ \cite{deruelle}. The explicit expression of $ {\cal L}_{PN}^R$ reads:
\begin{eqnarray}
{\cal L}_{PN}^R({\bf R},{\bf V})&=&\frac{1}{2}V^2+\frac{G M}{R}+\frac{1}{8}(1-3\nu)\frac{V^4}{c^2}+\nonumber\\ &&\frac{G M}{2Rc^2}\left[(3+\nu)V^2+\nu(NV)^2-\frac{G M}{R}\right]\,.\nonumber\\ \label{2.12}\end{eqnarray}
The Lagrangian (\ref{2.12}) was obtained in \cite{infeld}. The integration of the equations (\ref{2.5}) can be done in several different ways:
\begin{itemize}
\item A standard approach: Lagrange's method of variation of the osculating elements.
\item The Hamilton--Jacobi approach which, taking advantage of the existence of the PN Lagrangian, is the route followed by Landau and Lifshitz \cite{landau}, who worked out only the secular precession of the periastron.
\item Another approach, based on the Maupertuis principle \footnote {In classical mechanics, Maupertuis' principle is an integral equation that determines the path followed by a physical system without specifying the time parameterization of that path. It is a special case of the more generally stated principle of least action. More precisely, it is a formulation of the equations of motion for a physical system not as differential equations, but as an integral equation, using the calculus of variations.}, which reduces the PN problem to a simple auxiliary Newtonian problem.
\end{itemize}
To describe the motion, it is convenient to use the standard method for solving the non-relativistic two-body problem, which consists in exploiting the symmetries of the relative Lagrangian ${\cal L}_{PN}^R$. The invariance of $\displaystyle{ {\cal L}_{PN}^R}$ under time translations and space rotations implies the existence of four first integrals:\\ $\displaystyle{ E={\bf V} \cdot \frac{\partial {\cal L}_{PN}^R}{\partial {\bf V}}-{\cal L}_{PN}^R} $ and $\displaystyle{ {\bf J} = {\bf R}\times \frac{\partial {\cal L}_{PN}^R}{\partial {\bf V}}}$:
\begin{eqnarray}
E&=&\frac{1}{2}V^2-\frac{G M}{R}+\frac{3}{8}(1-3\nu)\frac{V^4}{c^2}+\nonumber\\ &&\frac{G M}{2Rc^2}\left[(3+\nu)V^2+\nu(NV)^2+\frac{G M}{R}\right] \, ,
\label{2.13}\\
{\bf J}&=& {\bf R}\times {\bf V}\left[ 1+\frac{1}{2}(1-3\nu)\frac{V^2}{c^2}+(3+\nu)\frac{G M}{Rc^2}\right]\, .
\label{2.14}\end{eqnarray}
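As a concrete check of these first integrals, the following sketch (an illustrative setup of our own: units $G=M=1$, equal masses $\nu=1/4$, and an artificially small $c=10$ so that the 1PN terms are not negligible) integrates the relative acceleration (\ref{2.5}) with a fourth-order Runge--Kutta step and monitors the energy (\ref{2.13}), which must stay constant up to ${\cal O}(c^{-4})$ and integration error.

```python
import numpy as np

G = M = 1.0
nu, c = 0.25, 10.0      # equal masses; small c exaggerates the 1PN terms

def accel(R, V):
    """1PN relative acceleration, Eq. (2.5)."""
    r = np.linalg.norm(R)
    N = R/r
    NV = N @ V
    return (-G*M/r**2*N
            + G*M/(c**2*r**2)*((2*(2 + nu)*G*M/r - (1 + 3*nu)*(V @ V)
                                + 1.5*nu*NV**2)*N + 2*(2 - nu)*NV*V))

def energy(R, V):
    """First integral E, Eq. (2.13)."""
    r = np.linalg.norm(R)
    v2, NV = V @ V, (R/r) @ V
    return (0.5*v2 - G*M/r + 3*(1 - 3*nu)*v2**2/(8*c**2)
            + G*M/(2*r*c**2)*((3 + nu)*v2 + nu*NV**2 + G*M/r))

def rk4(R, V, dt):
    """One classical Runge-Kutta step for (R, V)."""
    k1r, k1v = V, accel(R, V)
    k2r, k2v = V + 0.5*dt*k1v, accel(R + 0.5*dt*k1r, V + 0.5*dt*k1v)
    k3r, k3v = V + 0.5*dt*k2v, accel(R + 0.5*dt*k2r, V + 0.5*dt*k2v)
    k4r, k4v = V + dt*k3v, accel(R + dt*k3r, V + dt*k3v)
    return (R + dt*(k1r + 2*k2r + 2*k3r + k4r)/6,
            V + dt*(k1v + 2*k2v + 2*k3v + k4v)/6)

R, V = np.array([1.0, 0.0]), np.array([0.0, 0.9])   # bound initial state
E0 = energy(R, V)
for _ in range(4000):                               # a few orbital periods
    R, V = rk4(R, V, 0.005)
drift = abs(energy(R, V) - E0)
```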
It can be checked that these quantities coincide respectively with $\mu^{-1}$ times the total Noether energy and the total Noether angular momentum of the binary system when computed in the PN center of mass frame \cite{wagoner}. Eq. (\ref{2.14}) implies that the motion takes place in a coordinate plane, therefore one can introduce polar coordinates $R=r$ and $\phi$ in the plane (i.e. $r_x=r\cos\phi$,\quad$r_y=r\sin\phi$, $r_z=0$). Then, starting from the first integrals (\ref{2.13})-(\ref{2.14}) and using the identities $\displaystyle{ V^2=\left(\frac{dr}{dt}\right)^2+r^2\left(\frac{d\phi}{dt}\right)^2}$, $\displaystyle{ |{\bf R}\times{\bf V}|= r^2\frac{d\phi}{dt}}$, $\displaystyle{NV=\frac{dr}{dt}}$, we obtain by iteration \footnote{In these and the following equations we neglect terms of the second PN order $\displaystyle{ {\cal O}\left(\frac{1}{c^4}\right).}$}
\begin{equation}
\left(\frac{dr}{dt}\right)^2 = A+\frac{2B}{r}+\frac{C}{r^2}+\frac{D}{r^3}\, ,
\label{2.15}\end{equation}
\begin{equation}
\frac{d\phi}{dt}=\frac{H}{r^2}+\frac{I}{r^3}\, ,
\label{2.16}\end{equation}
where the coefficients $A,B,C,D,H,I$ are the following polynomials in $E$ and $J=|{\bf J}|$:
\begin{eqnarray}
A & = & 2 E\left(1+\frac{3}{2}(3\nu-1)\frac{E}{c^2}\right)\, ,\nonumber\\
B & = & GM\left(1+(7\nu-6)\frac{ E}{c^2}\right)\, ,\nonumber\\
C & = & -J^2\left(1+2(3\nu-1)\frac{ E}{c^2}\right)+(5\nu-10)\frac{G^2M^2}{c^2}\, ,\nonumber\\
D &=& (-3\nu+8)\frac{GMJ^2}{c^2}\, ,\nonumber\\
H &=& J\left(1+(3\nu-1)\frac{ E}{c^2}\right)\, ,\nonumber\\
I &=& (2\nu-4)\frac{GMJ}{c^2}\, .
\label{2.17}\end{eqnarray}
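The coefficients (\ref{2.17}) are low-order polynomials in $E$ and $J$; the following sketch (illustrative values of ours) evaluates them and checks the Newtonian limit $c\rightarrow\infty$, where $A\rightarrow 2E$, $B\rightarrow GM$, $C\rightarrow -J^2$, $H\rightarrow J$ and $D,I\rightarrow 0$:

```python
def radial_coeffs(E, J, nu, GM=1.0, c=1.0e6):
    """Coefficients of Eqs. (2.15)-(2.16) as polynomials in E and J, Eq. (2.17)."""
    A = 2*E*(1 + 1.5*(3*nu - 1)*E/c**2)
    B = GM*(1 + (7*nu - 6)*E/c**2)
    C = -J**2*(1 + 2*(3*nu - 1)*E/c**2) + (5*nu - 10)*GM**2/c**2
    D = (8 - 3*nu)*GM*J**2/c**2
    H = J*(1 + (3*nu - 1)*E/c**2)
    I = (2*nu - 4)*GM*J/c**2
    return A, B, C, D, H, I

# with c effectively infinite the Newtonian values are recovered
A_, B_, C_, D_, H_, I_ = radial_coeffs(E=-0.5, J=0.9, nu=0.25)
```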
The relativistic "relative motion", {\it i.e.} the solution of Eq. (\ref{2.15}) can be simply reduced to the integration of auxiliary {\it non-relativistic} radial motion. Indeed let us consider the following change of the radial variable:
\begin{equation}
r={\bar r}+\frac{D}{2C_0}\, , \label{3.1}
\end{equation}
where $C_0$ is the limit of $C$ when $c^{-1}\rightarrow0$, i.e. $C_0=-J^2$. Geometrically, the transformation which is expressed in polar coordinates by the equations $\displaystyle{r'=r+const}$, $\displaystyle{\phi'=\phi}$ is called a {\bf conchoidal transformation} \cite{deruelle}. Taking into account the fact that $D$ is $\displaystyle{{\cal O}\left(\frac{1}{c^2}\right)}$ and that we can neglect all terms of order $\displaystyle{{\cal O}\left(\frac{1}{c^4}\right)}$, we find that replacing Eq. (\ref{3.1}) in Eq. (\ref{2.15}) leads to:
\begin{equation}
\left(\frac{d{\bar r}}{dt}\right) ^2=A+\frac{2B}{{\bar r}}+\frac{{\bar C}}{{\bar r}^2}\, ,
\label{3.2} \end{equation}
with
\begin{displaymath}
{\bar C}=C-\frac{BD}{C_0}\, .
\end{displaymath}
Then, in the case of {\bf quasi-elliptical} motion $( E<0; A<0)$, ${\bar r}$ is a linear function of $\displaystyle{\cos {\cal E}}$, ${\cal E}$ being an eccentric anomaly and the same is true of $\displaystyle{r={\bar r}+\frac{D}{2C_0}}$. We then obtain the PN radial motion in quasi-Newtonian parametric form ($t_0$ being a constant of integration):
\begin{eqnarray}
n(t-t_0) & = & {\cal E}-\epsilon_t\sin {\cal E}\, ,
\label{3.3}\end{eqnarray}
\begin{eqnarray}
r & = & a_r(1-\epsilon_r\cos {\cal E})\, ,
\label{3.4}\end{eqnarray}
with
\begin{eqnarray*}
n & = &\frac{(-A)^{3/2}}{B}\, , \nonumber\\
\epsilon_t & = &\left[ 1-\frac{A}{B^2}\left(C-\frac{BD}{C_{0}}\right)\right]^{1/2}\, , \nonumber\\
a_r & = &-\frac{B}{A}+\frac{D}{2C_{0}}\, , \nonumber\\
\epsilon_r & = & \left(1+\frac{AD}{2BC_0}\right)\epsilon_t \, .\nonumber\\
\label{3.5}\end{eqnarray*}
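Kepler's equation (\ref{3.3}) is transcendental in ${\cal E}$ and is usually inverted numerically; a minimal Newton-iteration sketch (illustrative values of ours):

```python
import math

def solve_kepler(mean_anom, eps_t, tol=1e-14):
    """Solve the PN Kepler equation n(t - t0) = E - eps_t sin(E), Eq. (3.3),
    for the eccentric anomaly E by Newton iteration."""
    E = mean_anom if eps_t < 0.8 else math.pi   # standard starting guess
    for _ in range(50):
        dE = (E - eps_t*math.sin(E) - mean_anom)/(1 - eps_t*math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

ecc_anom = solve_kepler(1.3, 0.3)   # illustrative mean anomaly and eccentricity
```

The only PN imprint here is that the {\it time} eccentricity $\epsilon_t$, not $\epsilon_r$, enters the equation.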
The main difference between the relativistic radial motion and the non-relativistic one is the appearance of two eccentricities: the {\it time eccentricity} $\epsilon_t$ appearing in the Kepler equation (\ref{3.3}) and the {\it relative radial eccentricity} $\epsilon_r$. Using (\ref{2.17}) we can express $a_r,\epsilon_r,\epsilon_t$ and $n$ in terms of $E$ and $J$:
\begin{eqnarray}
a_r & = & -\frac{GM}{2 E}\left[1-\frac{1}{2}(\nu-7)\frac{E}{c^2}\right]\, , \nonumber\\
\epsilon_r & = & \left\{1+\frac{2E}{G^2M^2}\left[1+\left(\frac{5}{2}\nu-\frac{15}{2}\right )\frac{E}{c^2}\right]\right. \nonumber\\ &&
\left.\left[J^2+(\nu-6)\frac{G^2M^2}{c^2}\right] \right\}^{1/2}\, , \nonumber\\
\epsilon_t & = &\left \{1+\frac{2E}{G^2M^2}\left[1+\left(\frac{17}{2}-\frac{7}{2}\nu\right )\frac{E}{c^2}\right] \right. \nonumber\\ &&
\left. \left[J^2+(2-2\nu)\frac{G^2M^2}{c^2}\right] \right\}^{1/2} \, ,\nonumber\\
n & = & \frac{(-2 E)^{3/2}}{GM}\left[1-\frac{1}{4}(\nu-15)\frac{E}{c^2}\right] \, .
\label{3.6}\end{eqnarray}
It is remarkable that a well known result of the Newtonian motion is still valid at the PN level: both the relative semi-major axis $a_r$ and the mean motion $n$ depend only on the center of mass energy $E$. The same is true for the period of return to the periastron, $\displaystyle{P=\frac{2\pi}{n}}$.
As a consequence we can also express $n$ in term of $a_r$:
\begin{displaymath}
n=\left(\frac{GM}{a_r^3}\right)^{1/2}\left[1+\frac{GM}{2a_r c^2}(-9+\nu)\right]\, .
\label{3.7}\end{displaymath}
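As a numerical cross-check of this relation (units $G=M=1$, with an illustrative choice of $E$ and $\nu$ of ours), computing $a_r$ and $n$ from Eq. (\ref{3.6}) and then $n$ again from $a_r$ must agree up to ${\cal O}(c^{-4})$:

```python
import math

GM, nu, c = 1.0, 0.25, 100.0
E = -0.1                                # illustrative bound-orbit energy (E < 0)

a_r = -GM/(2*E)*(1 - 0.5*(nu - 7)*E/c**2)          # from Eq. (3.6)
n   = (-2*E)**1.5/GM*(1 - 0.25*(nu - 15)*E/c**2)   # from Eq. (3.6)

# 1PN version of Kepler's third law, Eq. (3.7)
n_from_a = math.sqrt(GM/a_r**3)*(1 + GM/(2*a_r*c**2)*(nu - 9))
rel_diff = abs(n/n_from_a - 1)          # should be of second PN order
```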
Let us note also that the relationships between $\epsilon_r$ and $\epsilon_t$ are:
\begin{eqnarray*}
\frac{\epsilon_r}{\epsilon_t} &=& 1+(3\nu-8)\frac{ E}{c^2}\, ,\nonumber\\
\frac{\epsilon_r}{\epsilon_t}&=&1+\frac{GM}{a_rc^2}\left(4-\frac{3}{2}\nu\right)\, .
\label{3.8}\end{eqnarray*}
The relativistic angular motion, i.e. the solution of Eq. (\ref{2.16}) can also be simply reduced to the integration of an auxiliary non relativistic angular motion. Let us first make, at
$\displaystyle{{\cal O}\left(\frac{1}{c^2}\right)}$ order, the
following conchoidal transformation:
\begin{equation}
r= {\tilde r}+\frac{I}{2H}\, ,
\label{4.1}\end{equation}
which transforms Eq.(\ref{2.16}) into
\begin{displaymath}
\frac{d\phi}{dt}=\frac{H}{{\tilde r}^2}\, ,
\label{4.2}\end{displaymath}
where $ {\tilde r}$ can be expressed as
\begin{equation}
{\tilde r}={\tilde a}(1- {\tilde \epsilon}\cos {\cal E})\,.
\label{4.3}\end{equation}
with
\begin{displaymath}
{\tilde a} =a_r- \frac{I}{2H}\, ,
\label{4.4}\end{displaymath}
\begin{equation}
{\tilde \epsilon} = \epsilon_r\left(1- \frac{AI}{2BH}\right)\, .
\label{4.5}\end{equation}
The differential time is given, from Eq. (\ref{3.3}) by:
\begin{displaymath}
dt= n^{-1}(1-\epsilon_t\cos {\cal E})d{\cal E}\, .
\label{4.6}\end{displaymath}
Hence we get
\begin{displaymath}
d\phi= \frac{H}{n {\tilde a}^2}\frac{(1-\epsilon_t\cos {\cal E})}{(1- {\tilde \epsilon}\cos {\cal E})^2}d{\cal E}\, .
\label{4.7}\end{displaymath}
As can be seen from Eq. (\ref{3.3}) and Eq. (\ref{4.5}) $\epsilon_t$ and ${\tilde \epsilon}$ differ by only small terms of order $\displaystyle{\frac{1}{c^2}}$. Now if we introduce any new eccentricity say $\displaystyle{\epsilon_\phi}$ also very near $\displaystyle{\epsilon_t}$ so that we can write:
$\displaystyle{\epsilon_t=\frac{\epsilon_t+\epsilon_\phi}{2}+\varepsilon}$, $\displaystyle{\epsilon_\phi=\frac{\epsilon_t+\epsilon_\phi}{2}-\varepsilon}$, with $\displaystyle{\varepsilon={\cal O}\left(\frac{1}{c^2}\right)}$ then
\begin{displaymath}
(1-\epsilon_t\cos {\cal E}) (1-\epsilon_\phi\cos {\cal E})=\left(1-\frac{(\epsilon_t+\epsilon_\phi)}{2}\cos {\cal E}\right)^2-\varepsilon^2\cos^2 {\cal E}\, .
\label{4.8}\end{displaymath}
Therefore if we choose $\epsilon_\phi$ such that the average of $\epsilon_t$ and $\epsilon_\phi$ is equal to $ {\tilde \epsilon}$ i.e. $\epsilon_\phi=2{\tilde \epsilon}-\epsilon_t$ we have
\begin{displaymath}
\frac{1-\epsilon_t\cos {\cal E}}{(1- {\tilde \epsilon}\cos {\cal E})^{2}}=\frac{1}{1- \epsilon_\phi \cos {\cal E}}+ {\cal O}\left(\frac{1}{c^4}\right)\, ,
\label{4.9}\end{displaymath}
which transforms Eq. (\ref{4.7}) into a Newtonian like angular motion equation
\begin{displaymath}
d\phi=\frac{H}{n{\tilde a}^2}\frac{d{\cal E}}{1- \epsilon_\phi \cos {\cal E}}\, ,
\label{4.10}\end{displaymath}
which is easily integrated to give
\begin{equation}
\phi-\phi_0=KA_{\epsilon_{\phi}}( {\cal E})\, ,
\label{4.11a}\end{equation}
$\phi_0$ being a constant of integration and where for the sake of simplicity we have introduced the notations:
\begin{equation}
A_{\epsilon_{\phi}}({\cal E})=2\arctan \left[\left(\frac{1+\epsilon_\phi}{1-\epsilon_\phi}\right)^{\frac{1}{2}} \tan\frac{ {\cal E}}{2} \right] \, ,
\label{4.11b}\end{equation}
and
\begin{equation}
K=\frac{H}{n{\tilde a^2}(1-\epsilon_{\phi}^{2})^\frac{1}{2}}\, .
\label{4.11c}\end{equation}
From Eq. (\ref{4.5}) and (\ref{3.5}) and the definition of $\epsilon_\phi=2{\tilde \epsilon}-\epsilon_t$ we have:
\begin{displaymath}
\epsilon_{\phi}= \epsilon_t\left(1+\frac{AD}{BC_0}-\frac{AI}{BH}\right)=\epsilon_r\left(1+\frac{AD}{2BC_0}-\frac{AI}{BH}\right)
\label{4.12}\end{displaymath}
then, as shown by straightforward calculations:
\begin{eqnarray}
&&\epsilon_{\phi}= \epsilon_r\left(1+\frac{G\mu}{2a_r c^2}\right)=\nonumber\\ && \left \{1+ \frac{2E}{G^2M^2}\left[1+\left(\frac{1}{2}\nu-\frac{15}{2}\right)\frac{E}{c^2}\right] \left[J^2-6\frac{G^2 M^2}{c^2}\right] \right \}^\frac{1}{2}\, ,\nonumber\\
\label{4.13}\end{eqnarray}
and
\begin{equation}
K=\frac{J}{\left(J^2-\frac{6G^2M^2}{c^2}\right)^\frac{1}{2}}\, .
\label{4.14}\end{equation}
As it is clear from Eqs. (\ref{4.1}) and (\ref{4.3}), the radial variable $r$ reaches its successive minima (``periastron passages'') for ${\cal E}=0,2\pi,4\pi,\ldots$ The periastron therefore precesses at each turn by the angle $\displaystyle{\Delta\phi=2\pi(K-1)}$, which, if $\displaystyle{J\gg\frac{GM}{c}}$, reduces to the well-known result \cite{robertson}:
\begin{displaymath}
\Delta\phi=6\pi\frac{G^2M^2}{J^2c^2}+{\cal O} \left(\frac{1}{c^4}\right)=\frac{6\pi GM}{a_r(1-\epsilon_r^2)c^2}+{\cal O} \left(\frac{1}{c^4}\right)\, .
\label{4.15} \end{displaymath}
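As an immediate application of this formula, a numerical sketch with standard values for Mercury's orbit (not quoted in the text) recovers the classical $\sim43''$ per century:

```python
import math

# standard Mercury/Sun parameters (our input, not from the text)
GM_sun = 1.32712e20        # m^3 s^-2
c      = 2.99792458e8      # m s^-1
a      = 5.7909e10         # semi-major axis, m
e      = 0.2056            # eccentricity
P_days = 87.969            # orbital period, days

dphi = 6*math.pi*GM_sun/(a*(1 - e**2)*c**2)        # rad per orbit, Eq. (4.15)
orbits_per_century = 100*365.25/P_days
arcsec_per_century = dphi*orbits_per_century*(180/math.pi)*3600
```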
Contrary to the usual approach, which derives first the orbit by eliminating the time between Eqs. (\ref{2.15}) and (\ref{2.16}) before working out the motion on the orbit, we find the orbit by eliminating $ {\cal E}$ between Eq. (\ref{3.4}) and Eqs. (\ref{4.11a})-(\ref{4.11c}). In order to simplify the formulae, we introduce the notation $f$ for the polar angle counted from a periastron and corrected for the periastron precession, i.e.:
\begin{displaymath}
f=\frac{\phi-\phi_0}{K}\, ,
\label{5.1} \end{displaymath}
We must eliminate $\cal E$ between:
\begin{displaymath}
r=a_r(1-\epsilon_r\cos {\cal E})\, ,
\label{5.2} \end{displaymath}
and
\begin{displaymath}
f= A_{\epsilon_{\phi}}( {\cal E})\, .
\label{5.3} \end{displaymath}
In order to do so, it is convenient to perform a new conchoidal
transformation on $r$, writing:
\begin{equation}
r=\frac{\epsilon_r}{\epsilon_\phi}a_r(1-\epsilon_\phi\cos {\cal E})+a_r\left(1-\frac{\epsilon_r}{\epsilon_\phi}\right)\, .
\label{5.4} \end{equation}
From the definition of $\displaystyle{A_{\epsilon_{\phi}}({\cal E})}$ we have:
\begin{displaymath}
1-\epsilon_\phi\cos {\cal E}=\frac{1-\epsilon_{\phi}^2}{1+\epsilon_{\phi}\cos A_{\epsilon_{\phi}}( {\cal E})} =\frac{1-\epsilon_{\phi}^2}{1+\epsilon_{\phi} \cos f}\, .
\label{5.5} \end{displaymath}
Moreover we find from Eq. (\ref{4.13}) that the radial displacement appearing in Eq. (\ref{5.4}) is simply
\begin{displaymath}
a_r \left(1-\frac{\epsilon_r}{\epsilon_\phi}\right)=\frac{G\mu}{2c^2}\, ,
\label{5.6} \end{displaymath}
so that we find the polar equation of the relative orbit as:
\begin{displaymath}
r= \left(a_r-\frac{G\mu}{2c^2}\right)\frac{1-\epsilon_{\phi}^2}{1+\epsilon_{\phi} \cos f}+\frac{G\mu}{2c^2}\, .
\label{5.7} \end{displaymath}
This equation means that the relative orbit is the {\it conchoid} of a {\it precessing ellipse}: it is obtained from an ellipse $r'=l(1+e\cos\phi')^{-1}$ by a radial displacement $r=r'+const$ together with an angular homothetic transformation $\phi=const\cdot\phi'$.
Let us finally note that the relative orbit can also be written as:
\begin{displaymath}
r=\frac{a_r(1-\epsilon_{r}^2)}{1+\epsilon_r \cos f'}\, ,
\label{5.10a} \end{displaymath}
with
\begin{displaymath}
f'= f+2\left(\frac{\epsilon_{2 f}}{\epsilon_r}\right)\sin f\, .
\label{5.10b} \end{displaymath}
The conservation laws and the coordinate transformations which we
have obtained here will reveal particularly useful to characterize
the relativistic orbits, as we will see below.
\subsection{Relativistic quasi-Elliptical orbits}
The relativistic motion of each body is obtained by replacing the solutions for the relative motion, $t({\cal E}),r({\cal E}),\phi({\cal E})$, in the PN center of mass formulae, Eq. (\ref{2.4ab}) (see \cite{deruelle}). We see first that the polar angle of the first object (of mass $m_1$) is the same as the relative polar angle, and that the polar angle of the second object (of mass $m_2$) is simply $\phi+\pi$. Therefore it is sufficient to work out the radial motions. From Eq. (\ref{2.4ab}) we have, by replacing $V^2$ in the relativistic corrections with $\displaystyle{\frac{2GM}{R}+2E\simeq \frac{2GM}{R}-\frac{GM}{a_r}}$:
\begin{displaymath}
r=\frac{m_2R}{M}+\frac{G\mu(m_1-m_2)}{2Mc^2}\left(1-\frac{R}{a_r}\right)
\label{6.1} \end{displaymath}
(and similar results for the second object by exchanging $m_1$ and $m_2$) which shows remarkably enough,
that $r$ can also be written in a quasi-Newtonian form:
\begin{displaymath}
r=a_{r'}(1-\epsilon_{r'}\cos {\cal E}) \, ,
\label{6.2} \end{displaymath}
with
\begin{eqnarray*}
a_{r'} &=& \frac{m_2}{M}a_r\, ,\\
\epsilon_{r'} &=& \epsilon_r\left[1-\frac{Gm_1(m_1-m_2)}{2Ma_r c^2}\right] \, ,
\label{6.3} \end{eqnarray*}
and where as before:
\begin{eqnarray*}
n(t-t_0) &=& {\cal E}-\epsilon_t \sin {\cal E}\, ,\\
\phi-\phi_0 &=& K A_{\epsilon_{\phi}}({\cal E})\, .
\label{6.4}\end{eqnarray*}
The orbit in space of the first object can be written by using
the same method as in the preceding Section for the relative
orbit, that is:
\begin{displaymath}
r=\frac{\epsilon_{r'}}{\epsilon_\phi}a_{r'}(1-\epsilon_\phi \cos {\cal E})+a_{r'}\left(1-\frac{\epsilon_{r'}}{\epsilon_\phi}\right)\, .
\label{6.5} \end{displaymath}
One finds:
\begin{displaymath}
a_{r'}\left(1-\frac{\epsilon_{r'}}{\epsilon_\phi}\right) =\frac{Gm_{1}^2 m_2}{2M^2c^2}\, ,
\label{6.6} \end{displaymath}
hence we find that the orbit is the conchoid of a precessing ellipse:
\begin{displaymath}
r'=\left(a_{r'}-\frac{Gm_{1}^2m_2}{2M^2c^2}\right) \frac{1-\epsilon_{\phi}^2}{1+\epsilon_\phi\cos\left(\frac{\phi-\phi_0}{K}\right)}+ \frac{Gm_{1}^2m_2}{2M^2c^2}\, .
\label{6.7} \end{displaymath}
Summarizing and gathering our results for the {\bf
elliptic-like} ($E<0$) PN motion in the PN center of mass frame,
we have:
\begin{eqnarray}
&&n(t-t_0) = {\cal E}-\epsilon_t\sin {\cal E}\, ,\nonumber\\
&& \phi-\phi_0 = K2\arctan \left[\left(\frac{1+\epsilon_\phi}{1-\epsilon_\phi}\right)^\frac{1}{2}\tan \frac{{\cal E}}{2} \right]\, , \nonumber\\
&&r = a_r(1-\epsilon_r\cos {\cal E})\, ,\nonumber\\
&& r ' = a_{r'}(1-\epsilon_{r'}\cos {\cal E})\, ,\nonumber\\
\label{7.1}\end{eqnarray}
with
\begin{displaymath}
a_r=-\frac{GM}{2E}\left[1-\frac{1}{2}(\nu-7)\frac{E}{c^2}\right]\, ,
\end{displaymath}
\begin{equation}
n=\frac{(-2E)^\frac{3}{2}}{GM} \left[1-\frac{1}{4}(\nu-15)\frac{E}{c^2}\right]\, .
\label{7.2}\end{equation}
and $\displaystyle{K,\epsilon_t,\epsilon_\phi,\epsilon_r,a_r,\epsilon_{r'},a_{r'}}$ given, in terms of the total energy and angular momentum per unit reduced mass in the center of mass frame, $E$ and $J$, by Eqs. (\ref{3.6}), (\ref{4.14}) and (\ref{4.13}),
with the interchange of $m_1$ and $m_2$ for $\epsilon_{r'},a_{r'}$.
The above equations are very similar to the standard Newtonian
solutions of the non-relativistic two-body problem.
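For numerical evaluation of the parametric solution (\ref{7.1}), the function $2\arctan[((1+\epsilon_\phi)/(1-\epsilon_\phi))^{1/2}\tan({\cal E}/2)]$ is conveniently replaced by the exactly equivalent, branch-free form ${\cal E}+2\arctan\left[\frac{\beta\sin{\cal E}}{1-\beta\cos{\cal E}}\right]$ with $\beta=\epsilon_\phi/(1+\sqrt{1-\epsilon_\phi^2})$ (a standard trigonometric identity, not taken from the text); a sketch with illustrative orbital elements of ours:

```python
import math

def angle_A(E, eps):
    """Branch-free evaluation of A_eps(E) = 2 arctan( sqrt((1+eps)/(1-eps)) tan(E/2) ),
    continued smoothly through E = pi, 3 pi, ..."""
    beta = eps/(1 + math.sqrt(1 - eps**2))
    return E + 2*math.atan2(beta*math.sin(E), 1 - beta*math.cos(E))

def orbit_point(E_anom, a_r, eps_r, eps_phi, K, phi0=0.0):
    """Quasi-elliptic PN solution, Eq. (7.1): (r, phi) at eccentric anomaly E."""
    r = a_r*(1 - eps_r*math.cos(E_anom))
    phi = phi0 + K*angle_A(E_anom, eps_phi)
    return r, phi

r_peri, phi_peri = orbit_point(0.0, 1.0, 0.2, 0.2, 1.1)   # periastron passage
```

Successive periastron passages (${\cal E}=0,2\pi$) are then separated in angle by $2\pi K$, i.e. an advance of $2\pi(K-1)$ per turn.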
\subsection{Relativistic quasi-Hyperbolic orbits}
The simplest method to obtain the Post-Newtonian motion in the
{\bf hyperbolic-like} case ($E>0$) consists simply in making, in
Eqs.(\ref{7.1})-(\ref{7.2}), the analytic continuation
from $E<0$ to $E>0$, passing below $E=0$ in the complex $E$ plane
and replacing the parameter $\cal E$ by $i\cal F$ ($i^2=-1$). The
proof that this yields a correct parametric solution
consists in noticing, on the one hand, that $K$ and the various
eccentricities are analytic in $E$, near $E=0$, and, on the other hand, that if one
replaces the parametric solution (\ref{7.1})-(\ref{7.2}) and the
corresponding expressions of
$\epsilon_t,\epsilon_\phi,\epsilon_r,a_r,\epsilon_{r'},a_{r'}$ in terms of
$E$ and $J$ in $\displaystyle{\left(\frac{dr}{dt}\right)^2}$ and
in $\displaystyle{\left(\frac{d\phi}{dt}\right)^2}$, one finds
that Eq. (\ref{2.15}) and the square of Eq. (\ref{2.16}) are
satisfied identically, modulo $\displaystyle{{\cal
O}\left(\frac{1}{c^4}\right)}$, and that the resulting identities
are {\it analytic} in $E$ and ${\cal E}$, for real as well as purely imaginary values of ${\cal E}$.
One finds:
\begin{eqnarray}
\bar{n}(t-t_0) &=&\epsilon_t\sinh {\cal F}- {\cal F}\, ,\nonumber\\
\phi-\phi_0 &=& K 2\arctan \left[\left(\frac{\epsilon_\phi+1}{\epsilon_\phi-1}\right)^\frac{1}{2}\tanh \frac{{\cal F}}{2} \right]\, , \nonumber\\
r &=& \bar {a}_r(\epsilon_r\cosh {\cal F}-1)\, ,\nonumber\\
r ' &=& \bar{a}_{r'}(\epsilon_{r'}\cosh {\cal F}-1)\, ,\nonumber\\
\label{7.3}\end{eqnarray}
where $K,\epsilon_t,\epsilon_\phi,\epsilon_r,\epsilon_{r'}$ are functions of $E$ and $J$, as before, but where it has been convenient to introduce the opposites of the analytic continuations of the semi-major axes:
\begin{displaymath}
\bar {a}_{r}=\frac{GM}{2E}\left[1-\frac{1}{2}(\nu-7)\frac{E}{c^2}\right]\, ,
\label{7.4}\end{displaymath}
and $\displaystyle{\bar{a}_{r'}=\frac{m_2\bar {a}_{r}}{M}}$, and the modulus of the analytic continuation of the mean motion:
\begin{displaymath}
\bar n=\frac{(2E)^\frac{3}{2}}{GM}\left[1-\frac{1}{4}(\nu-15)\frac{E}{c^2}\right]\,.
\label{7.5}\end{displaymath}
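On the hyperbolic branch, the first of Eqs. (\ref{7.3}) again requires a numerical inversion for ${\cal F}$; a Newton-iteration sketch (illustrative values of ours, with $\epsilon_t>1$ as appropriate for $E>0$):

```python
import math

def solve_hyperbolic(mean_anom, eps_t, tol=1e-13):
    """Solve eps_t sinh(F) - F = nbar (t - t0), the first of Eqs. (7.3), by Newton."""
    F = math.asinh(mean_anom/eps_t)     # reasonable starting guess
    for _ in range(60):
        g = eps_t*math.sinh(F) - F - mean_anom
        F -= g/(eps_t*math.cosh(F) - 1)
        if abs(g) < tol:
            break
    return F

F_hyp = solve_hyperbolic(2.0, 1.1)      # eps_t > 1 on hyperbolic-like orbits
```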
\subsection{Relativistic quasi-Parabolic orbits}
The {\bf quasi-parabolic} post-Newtonian motion ($E=0$) can be obtained as a limit when $E\rightarrow0$. For instance, let us start from the quasi-elliptic solution in Eq. (\ref{7.1}) and set
\begin{displaymath}
{\cal E}=\left(\frac{-2E}{G^2M^2}\right)^\frac{1}{2}x\, .
\label{7.6}\end{displaymath}
Taking now the limit $E\rightarrow0^-$, holding $x$ fixed, yields the following
parametric representation of the quasi-parabolic motion:
\begin{equation}
t-t_0 = \frac{1}{2G^2M^2}\left[ \left(J^2+(2-2\nu)\frac{G^2M^2}{c^2}\right)x+\frac{1}{3}x^3\right]\, ,
\label{7.7a}\end{equation}
\begin{equation}
\phi-\phi_0=\frac{J}{\left(J^2-\frac{6G^2M^2}{c^2}\right)^\frac{1}{2}}\,2\arctan\frac{x}{\left(J^2-\frac{6G^2M^2}{c^2}\right)^\frac{1}{2}}\, ,
\label{7.7b}\end{equation}
\begin{equation}
r = \frac{1}{2GM}\left[J^2+(\nu-6)\frac{G^2M^2}{c^2}+x^2\right]\, .
\label{7.7c}\end{equation}
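Eq. (\ref{7.7a}) is a monotonically increasing cubic in $x$ and is easily inverted numerically; a sketch (units $G=M=1$, illustrative $J$, $\nu$, $c$ of ours):

```python
import math

def solve_barker_pn(dt, J, nu, GM=1.0, c=10.0, tol=1e-13):
    """Invert Eq. (7.7a) for the parameter x at given t - t0 by Newton iteration
    (the cubic is monotonic, so the root is unique)."""
    lin = J**2 + (2 - 2*nu)*GM**2/c**2      # coefficient of the linear term
    x = 2*GM**2*dt/lin                      # starting guess from the linear part
    for _ in range(60):
        g = (lin*x + x**3/3)/(2*GM**2) - dt
        x -= g/((lin + x**2)/(2*GM**2))
        if abs(g) < tol:
            break
    return x

x_par = solve_barker_pn(5.0, 1.0, 0.25)
```

This is the PN analogue of inverting Barker's equation for Newtonian parabolic orbits.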
Moreover let us point out that our
solutions (for the three cases $E<0$, $E>0$ and $E=0$) have been
written in a suitable form when
$\displaystyle{J^2>6\left(\frac{GM}{c}\right)^2}$. However the
validity of our solutions can be extended to the range
$\displaystyle{J^2\leq 6\left(\frac{GM}{c}\right)^2}$ by first
replacing, in the solutions for the angular motion (the second equation of
(\ref{7.1}) and of (\ref{7.3}), as well as Eq. (\ref{7.7b})), the function
$\arctan$ by ${\rm arccot}$ (at the price of a simple modification of
$\phi_0$) and then by making an analytic continuation in
$J$. One ends up with an angular motion expressed by an ${\rm arccoth}$,
which can also be approximately replaced by its asymptotic
behaviour for large arguments: ${\rm arccoth}(X)\sim\frac{1}{X}$. The case of purely radial
motion ($J=0$) is also obtained by taking the limit
$J\rightarrow0$ (at fixed ${\cal E}$, ${\cal F}$ or $x$, respectively).
Finally a parametric representation of the general post-Newtonian
motion in an arbitrary (post-Newtonian harmonic) coordinate system
is obtained from our preceding center of mass solution by applying
a general transformation of the Poincar\'e group
$\displaystyle{x'^a=L_b^ax^b+T^a}$ \cite{deruelle}.
\section{Relativistic orbits with gravitomagnetic corrections}
In the theory of orbits developed up to now, we have neglected
terms of order $\displaystyle{\left(\frac{v}{c}\right)^3}$ and
higher; nevertheless, this approximation succeeds in explaining,
for instance, the perihelion precession of Mercury. In cases
where $\displaystyle{10^{-2}\le\frac{v}{c}\le10^{-1}}$, however, higher
order terms like $\displaystyle{\left(\frac{v}{c}\right)^3}$ cannot be
discarded if one wants to discuss consistently the problem of motion
(see for example \cite{SMFL}). In these situations, we are dealing
with gravitomagnetic corrections. Before discussing the theory
of orbits under gravitomagnetic effects, let us give some
insight into gravitomagnetism and derive the corrected metric.
Theoretical and experimental aspects of gravitomagnetism are
discussed in \cite{iorio9,ruffini}.
A remark is in order at this point: any theory combining, in a
consistent way, Newtonian gravity together with Lorentz
invariance has to include a gravitomagnetic field generated by
the mass-energy currents. This is the case, of course, of GR: it
was shown by Lense and Thirring
\cite{Thirring,barker,ashby,iorio10,Tartaglia}, that a rotating
mass generates a gravitomagnetic field, which, in turn, causes a
precession of planetary orbits. In the framework of the linearized
weak-field and slow-motion approximation of GR, the
gravitomagnetic effects are induced by the off-diagonal components
of the space-time metric tensor which are proportional to the
components of the matter-energy current density of the source. It
is possible to take into account two types of mass-energy
currents. The former is induced by the matter source rotation
around its center of mass: it generates the intrinsic
gravitomagnetic field which is closely related to the angular
momentum (spin) of the rotating body. The latter is due to the
translational motion of the source: it is responsible for the
extrinsic gravitomagnetic field \cite{pascual,kopeikin2}. Let us
now discuss the gravitomagnetic effects in order to see how they
affect the orbits.
\subsection{Gravitomagnetic effects}
Starting from the Einstein field equations in the weak field
approximation, one obtains the gravitoelectromagnetic equations and
then the corrections on the metric\footnote{Notation: Latin
indices run from 1 to 3, while Greek indices run from 0 to 3; the
flat space-time metric tensor is
$\eta_{\mu\nu}=diag(1,-1,-1,-1)$.} \cite{SMFL}. The weak field approximation
can be set as
\begin{equation}
g_{\mu\nu}(x)=\eta_{\mu\nu}+h_{\mu\nu}(x),\qquad\left|h_{\mu\nu}(x)\right|\ll1,\label{eq:g_muni}\end{equation}
where $\eta_{\mu\nu}$ is the Minkowski metric tensor and
$h_{\mu\nu}(x)$, with $\left|h_{\mu\nu}(x)\right|\ll 1$, is a small deviation from it
\cite{weinberg}.
The stress-energy tensor for perfect-fluid matter is given by
\begin{displaymath}
T^{\mu\nu}=\left(p+\rho
c^{2}\right)u^{\mu}u^{\nu}-pg^{\mu\nu}\, ,\end{displaymath}
which, in the
approximation of non-relativistic matter, $p\ll\rho c^{2}$, is
\begin{displaymath}
T^{00}\simeq\rho c^{2},\qquad T^{0j}\simeq\rho cv^{j},\qquad
T^{ij}\simeq\rho v^{i}v^{j}\,.\end{displaymath} From the Einstein
field equations $G_{\mu\nu}=(8\pi G/c^4)T_{\mu\nu}$, one finds
\begin{equation}
\bigtriangledown^{2}h_{00}=\frac{8\pi
G}{c^{2}}\rho\,,\label{eq:nabla_00}\end{equation}
\begin{equation}
\bigtriangledown^{2}h_{ij}=\frac{8\pi
G}{c^{2}}\delta_{ij}\rho\,,\label{eq:nabla_ij}\end{equation}
\begin{equation}
\bigtriangledown^{2}h_{0j}=-\frac{16\pi G}{c^{2}}\delta_{jl}\rho
v^{l}\,,\label{eq:nabla_0j}\end{equation}
where $\bigtriangledown^{2}$ is the standard Laplacian operator
defined on the flat spacetime. To achieve Eqs.
(\ref{eq:nabla_00})-(\ref{eq:nabla_0j}), the harmonic condition
\begin{displaymath}
g^{\mu\nu}\Gamma_{\mu\nu}^{\alpha}=0\;,\end{displaymath} has been
used.
By integrating Eqs. (\ref{eq:nabla_00})-(\ref{eq:nabla_0j}), one
obtains
\begin{equation}
h_{00}=-\frac{2\Phi}{c^{2}}\;,\label{eq:h_00}\end{equation}
\begin{equation}
h_{ij}=-\frac{2\Phi}{c^{2}}\delta_{ij}\;,\label{eq:h_ij}\end{equation}
\begin{equation}
h_{0j}=\frac{4}{c^{3}}\delta_{jl}V^l\;.\label{eq:h_0j}\end{equation}
The metric is determined by the gravitational Newtonian potential
\begin{equation}
\Phi(x)=-G\int\frac{\rho}{\left|\mathbf{x}-\mathbf{x}'\right|}d^{3}x'\;,\label{eq:fi_x}\end{equation}
and by the vector potential $V^{l}$,
\begin{equation}
V^{l}=-G\int\frac{\rho
v^{l}}{\left|\mathbf{x}-\mathbf{x}'\right|}d^{3}x'\;,\label{eq:Vl}\end{equation}
given by the matter current density $\rho v^{l}$ of the moving
bodies. This last potential gives rise to the gravitomagnetic
corrections.
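As a rough numerical illustration (ours, not from the original text), the two integrals (\ref{eq:fi_x}) and (\ref{eq:Vl}) can be approximated by a direct sum over discrete mass elements; for a single point mass the sum reduces to $\Phi=-GM/r$ and $V^{l}=\Phi v^{l}$, the form used later for point-like bodies. A minimal Python sketch:

```python
import numpy as np

G = 6.674e-11  # Newton's constant, SI units

def potentials(x, positions, masses, velocities):
    """Crude quadrature of the integrals for Phi and V^l:
    direct sum over discrete point-mass elements."""
    phi, V = 0.0, np.zeros(3)
    for xp, m, v in zip(positions, masses, velocities):
        r = np.linalg.norm(x - xp)
        phi += -G * m / r                  # Newtonian potential
        V += -G * m * np.asarray(v) / r   # matter-current (rho v^l) potential
    return phi, V
```

For a single moving source this reproduces $V^{l}=\Phi v^{l}$ identically, since both potentials share the same $1/r$ kernel.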
From Eqs. (\ref{eq:g_muni}) and (\ref{eq:h_00})-(\ref{eq:Vl}), the
metric tensor in terms of Newton and gravitomagnetic potentials is
\begin{eqnarray}
ds^{2}&=&\left(1+\frac{2\Phi}{c^{2}}\right)c^{2}dt^{2}-
\frac{8\delta_{lj}V^{l}}{c^{3}}cdtdx^{j}+\nonumber\\ &&-\left(1-\frac{2\Phi}{c^{2}}\right)\delta_{ij}dx^{i}dx^{j}\;.
\label{eq:ds_DUE}\end{eqnarray}
From Eq. (\ref{eq:ds_DUE}) it is possible to construct a
variational principle from which the geodesic equation follows.
Then we can derive the orbital equations. As standard, we have
\begin{displaymath}
\ddot{x}^{\alpha}+\Gamma_{\mu\nu}^{\alpha}\dot{x}^{\mu}\dot{x}^{\nu}=0\;,\label{eq:geodedica_uno}\end{displaymath}
where the dot indicates the differentiation with respect to the
affine parameter. In order to put in evidence the gravitomagnetic
contributions, let us explicitly calculate the Christoffel symbols
at lower orders. By some straightforward calculations, one gets
\begin{eqnarray}
\Gamma^0_{00} &=&0\\
\Gamma^0_{0j} &=&\frac{1}{c^2}\frac{\partial\Phi}{\partial x^j} \\
\Gamma^0_{ij} &=&-\frac{2}{c^3}\left(\frac{\partial V^i}{\partial x^j}+\frac{\partial V^j}{\partial x^i}\right) \\
\Gamma^k_{00} &=& \frac{1}{c^2}\frac{\partial\Phi}{\partial x^k}\\
\Gamma^k_{0j} &=&\frac{2}{c^3}\left(\frac{\partial V^k}{\partial x^j}-\frac{\partial V^j}{\partial x^k}\right) \\
\Gamma^k_{ij} &= &-\frac{1}{c^2}\left(\frac{\partial \Phi}{\partial
x^j}\delta^k_i+\frac{\partial \Phi}{\partial
x^i}\delta^k_j-\frac{\partial \Phi}{\partial
x^k}\delta_{ij}\right)\end{eqnarray}
In the
approximation which we are going to consider, we are retaining
terms up to the orders $\displaystyle{\Phi/c^2}$ and $\displaystyle{V^j/c^3}$. It is important
to point out that we are discarding terms like
$\displaystyle{(\Phi/c^4)\partial\Phi/\partial x^k}$,
$\displaystyle{(V^j/c^5)\partial\Phi/\partial x^k}$, $\displaystyle{(\Phi/c^5)\partial
V^k/\partial x^j}$, $\displaystyle{(V^k/c^6)\partial V^j/\partial x^i}$ and of
higher orders. Our aim is to show that, in several cases like in
tight binary stars, it is not correct to discard higher order
terms in $\displaystyle{v/c}$ since physically interesting effects could come
out.
The geodesic equations up to $c^{-3}$ corrections are then
\begin{eqnarray}
&& c^{2}\frac{d^{2}t}{d\sigma^{2}}+\frac{2}{c^{2}}\frac{\partial\Phi}{\partial
x^{j}}c\frac{dt}{d\sigma}\frac{dx^{j}}{d\sigma}\nonumber\\ &&-\frac{2}{c^{3}}\left(\delta_{im}\frac{\partial
V^{m}}{\partial x^{j}}+\delta_{jm}\frac{\partial V^{m}}{\partial
x^{i}}\right)\frac{dx^{i}}{d\sigma}\frac{dx^{j}}{d\sigma}=0\;,\nonumber\\ \label{time}
\end{eqnarray}
for the time component, and
\begin{eqnarray}
&& \frac{d^{2}x^{k}}{d\sigma^{2}}+\frac{1}{c^{2}}\frac{\partial\Phi}{\partial
x^{k}}\left(c\frac{dt}{d\sigma}\right)^{2}+
\frac{1}{c^{2}}\frac{\partial\Phi}{\partial x^{k}}\delta_{ij}\frac{dx^{i}}{d\sigma}\frac{dx^{j}}{d\sigma}\nonumber\\
& &-\frac{2}{c^{2}}\frac{\partial\Phi}{\partial
x^{l}}\frac{dx^{l}}{d\sigma}\frac{dx^{k}}{d\sigma}+\frac{4}{c^{3}}\left(\frac{\partial
V^{k}}{\partial x^{j}}-\delta_{jm}\frac{\partial V^{m}}{\partial
x^{k}}\right)c\frac{dt}{d\sigma}\frac{dx^{j}}{d\sigma}=0\;,\label{eq:dduexk}\nonumber\\
\end{eqnarray}
for the spatial components.
In the case of a null geodesic, it results $ds^{2}=0$. Eq.
(\ref{eq:ds_DUE}) gives, up to the order $c^{-3}$,
\begin{equation}
cdt=\frac{4V^{l}}{c^{3}}dx^{l}+\left(1-\frac{2\Phi}{c^{2}}\right)dl_{euclid}\;,\label{eq:c_dt}\end{equation}
where $dl_{euclid}^{2}=\delta_{ij}dx^{i}dx^{j}$ is the Euclidean
length interval. Squaring Eq. (\ref{eq:c_dt}) and keeping terms up
to order $c^{-3}$, one finds
\begin{eqnarray}
c^{2}dt^{2}=\left(1-\frac{4\Phi}{c^{2}}
\right)dl_{euclid}^{2}+\frac{8V^{l}}{c^{3}}dx^{l}dl_{euclid}\;.\label{eq:cdue_dtdue}\end{eqnarray}
Inserting Eq.(\ref{eq:cdue_dtdue}) into Eq. (\ref{eq:dduexk}), one
gets, for the spatial components,
\begin{eqnarray}
&&\frac{d^{2}x^{k}}{d\sigma^{2}}+\frac{2}{c^{2}}\frac{\partial\Phi}{\partial
x^{k}}\left(\frac{dl_{euclid}}{d\sigma}\right)^{2}-\frac{2}{c^{2}}\frac{\partial\Phi}{\partial
x^{l}}\frac{dx^{l}}{d\sigma}\frac{dx^{k}}{d\sigma}+\nonumber\\ && \frac{4}{c^{3}}\left(\frac{\partial
V^{k}}{\partial x^{j}}-\delta_{jm}\frac{\partial V^{m}}{\partial
x^{k}}\right)\frac{dl_{euclid}}{d\sigma}\frac{dx^{j}}{d\sigma}=0\;.\label{eq:ddue_dsigma}\end{eqnarray}
Such an equation can be seen as a differential equation for
$dx^k/d\sigma$ which is the tangent 3-vector to the trajectory. On
the other hand, Eq. (\ref{eq:ddue_dsigma}) can be expressed in
terms of $l_{euclid}$ considered as a parameter. In fact, for null
geodesics and taking into account the lowest order in $v/c$,
$d\sigma$ is proportional to $dl_{euclid}$. From Eq. (\ref{time})
multiplied by ${\displaystyle \left(1+\frac{2}{c^2}\Phi\right)}$,
we have
\begin{displaymath}
\frac{d}{d\sigma}\left(\frac{dt}{d\sigma}+\frac{2}{c^2}\Phi\frac{dt}{d\sigma}-
\frac{4}{c^4}\delta_{im}V^m\frac{dx^i}{d\sigma}\right)=0\,,
\end{displaymath}
and then
\begin{equation}
\frac{dt}{d\sigma}\left(1+\frac{2}{c^2}\Phi\right)-
\frac{4}{c^4}\delta_{im}V^m\frac{dx^i}{d\sigma}=1\,,\label{constant}
\end{equation}
where, as standard, we have defined the affine parameter so that
the integration constant is equal to 1 \cite{weinberg}.
Substituting Eq. (\ref{eq:c_dt}) into Eq. (\ref{constant}), at
lowest order in $v/c$, we find
\begin{equation} \frac{dl_{euclid}}{c d\sigma}=1\,.\end{equation}
In the weak field regime, the spatial 3-vector, tangent to a given
trajectory, can be expressed as
\begin{equation} \frac{dx^k}{ d\sigma}=\frac{cdx^k}{dl_{euclid}}\,.\end{equation}
Through the definition
\begin{displaymath} e^k=\frac{dx^k}{dl_{euclid}}\,,\end{displaymath}
Eq. (\ref{eq:ddue_dsigma}) becomes
\begin{eqnarray*}
&& \frac{de^{k}}{dl_{euclid}}+\frac{2}{c^{2}}\frac{\partial\Phi}{\partial
x^{k}}-\frac{2}{c^{2}}\frac{\partial\Phi}{\partial
x^{l}}e^{l}e^{k}+\nonumber\\ &&+\frac{4}{c^{3}}\left(\frac{\partial
V^{k}}{\partial x^{j}}-\delta_{jm}\frac{\partial V^{m}}{\partial
x^{k}}\right)e^{j}=0\;,\label{eq:e_dsigma}\end{eqnarray*}
which can
be expressed in a vector form as
\begin{equation}
\frac{d\mathbf{e}}{dl_{euclid}}=-\frac{2}{c^2}\left[\nabla\Phi-\mathbf{e}(\mathbf{e}\cdot\nabla\Phi)\right]+\frac{4}{c^3}
\left[\mathbf{e}\wedge(\nabla\wedge\mathbf{V})\right]\label{vector}\,.
\end{equation}\\
The gravitomagnetic term is the second one in Eq. (\ref{vector})
and it is usually discarded since it is considered negligible. This is
not true if $v/c$ is sufficiently large, as in the cases of tight binary
systems or point masses approaching black holes.
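As an order-of-magnitude check (an illustration of ours, not a computation from the original text), one can integrate the first term of Eq. (\ref{vector}) along the unperturbed straight ray (Born approximation) for a static point mass, with $\Phi=-GM/r$ and $\mathbf{V}=0$, recovering the standard light-deflection angle $4GM/(c^{2}b)$:

```python
import numpy as np

# geometrized units G = c = 1; impact parameter b in units of GM/c^2
M, b = 1.0, 1.0e4

# path length l along the unperturbed straight ray x = (l, b, 0)
l = np.linspace(-1.0e7, 1.0e7, 2_000_001)
dl = l[1] - l[0]

# transverse gradient of Phi = -M/r evaluated on the straight ray
grad_perp = M * b / (b**2 + l**2) ** 1.5

# Eq. (vector) with e ~ const and V = 0: bending angle
# alpha = (2/c^2) * integral of |grad_perp Phi| dl = 4 M / b
alpha = 2.0 * np.sum(grad_perp) * dl
```

With the gravitomagnetic term switched on for a source moving with velocity $v$, the same integral acquires a correction of relative order $v/c$, which is precisely the kind of effect discussed above.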
Our task is now to achieve explicitly the trajectories, in
particular the orbits, corrected by such effects.
\vspace{0.5cm}
\subsection{Gravitomagnetically corrected orbits}
Orbits with gravitomagnetic effects can be obtained starting from
the classical Newtonian theory and then correcting it by
successive relativistic terms. Starting from the above
considerations (see Sec. \ref{due}, and \ref{tre}) we can see how
gravitomagnetic corrections affect the problem of orbits.
Essentially, they act as a further $v/c$ correction leading to
take into account terms up to $c^{-3}$, as shown above. Let us
start from the line element (\ref{eq:ds_DUE}) which can be written
in spherical coordinates. Here we assume the motion of point-like
bodies and then we can work in the simplified hypothesis
${\displaystyle \Phi=-\frac{GM}{r}}$ and $V^{l}=\Phi v^{l}$. It is
\begin{eqnarray}
ds^{2} & = &
\left(1+\frac{2\Phi}{c^{2}}\right)c^{2}dt^{2}-\left(1-\frac{2\Phi}{c^{2}}\right)\nonumber\\ &&\left[dr^{2}+r^{2}d\theta^{2}+
r^{2}\sin^{2}\theta d\phi^{2}\right]\nonumber\\ &&
-\frac{8\Phi}{c^{3}}cdt \left[\cos\theta+\sin\theta\left(\cos\phi+\sin\phi\right)\right]dr
\nonumber\\ &&+\frac{8\Phi}{c^{3}}cdt\left[\cos\theta\left(\cos\phi+\sin\phi\right)-\sin\theta\right]rd\theta
\nonumber\\ && +\frac{8\Phi}{c^{3}}cdt\left[\sin\theta\left(\cos\phi-\sin\phi\right)\right]rd\phi\,.
\label{totalmetricnostrpola}\end{eqnarray}
As in the Newtonian and relativistic cases, from the line element
(\ref{totalmetricnostrpola}), we can construct the Lagrangian
\begin{eqnarray}
\mathcal{L} & = &
\left(1+\frac{2\Phi}{c^{2}}\right)\dot{t}^{2}-\left(1-\frac{2\Phi}{c^{2}}\right)\left[{\dot r}^{2}+r^{2}{\dot\theta^{2}}+
r^{2}\sin^{2}\theta {\dot\phi^{2}}\right]\nonumber\\ &&
-\frac{8\Phi}{c^{3}}{\dot t} \left[\cos\theta+\sin\theta\left(\cos\phi+\sin\phi\right)\right]{\dot r}
\nonumber\\ &&+\frac{8\Phi}{c^{3}}{\dot t}\left[\cos\theta\left(\cos\phi+\sin\phi\right)-\sin\theta\right]r{\dot\theta}
\nonumber\\ && +\frac{8\Phi}{c^{3}}{\dot t}\left[\sin\theta\left(\cos\phi-\sin\phi\right)\right]r{\dot\phi}\,. \nonumber\\ \label{Lagrangiannostra}\end{eqnarray}
Since
$\mathcal{L}=1$, one can multiply both members by
${\displaystyle \left(1+\frac{2\Phi}{c^{2}}\right)}$. In the
planar motion condition $\theta=\pi/2$, we obtain
\begin{eqnarray*}
&&E^{2}-\left(1+\frac{2\Phi}{c^{2}}\right)\left(1-\frac{2\Phi}{c^{2}}\right)\left(\dot{r}^2+\frac{L^{2}}{r^{2}}\right)\\ &&
-\frac{8\Phi
E}{c^{3}}\left[\left(\cos\phi+\sin\phi\right)\dot{r}-\left(\cos\phi-\sin\phi\right)\dot{\phi}\right]
\nonumber\\ &&= \left(1+\frac{2\Phi}{c^{2}}\right)\,,\end{eqnarray*} and then,
being ${\displaystyle \frac{2\Phi}{c^2}=-\frac{R_{s}}{r}}$ (where $R_{s}$ is
the Schwarzschild radius) and
${\displaystyle u=\frac{1}{r}}$, it is
\begin{eqnarray*}
&&E^{2}-L^{2}\left(1-R_{s}^{2}u^{2}\right)\left(u'^{2}+u^{2}\right)+\\ &&
\frac{4R_{s}uE}{c}\left[\left(\cos\phi+\sin\phi\right)u'+\left(\cos\phi-\sin\phi\right)u^{2}\right]
\nonumber\\ &&= \left(1-R_{s}u\right)\label{eq:L4}\,.\end{eqnarray*}
By differentiating such an equation, it is easy to show that, if the
relativistic and gravitomagnetic terms are discarded, the
Newtonian theory is recovered, being
\begin{displaymath}
u''+u=\frac{R_s}{2L^2}\,.\end{displaymath} This result proves the
self-consistency of the problem. However, it is nothing else but a
particular case since we have assumed the planar motion. This
planarity condition does not hold in general if gravitomagnetic
corrections are taken into account.
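For completeness, the general solution of the Newtonian limit equation $u''+u=R_s/(2L^2)$ above is the familiar conic section
\begin{displaymath}
u(\phi)=\frac{R_s}{2L^{2}}\left[1+e\cos\left(\phi-\phi_{0}\right)\right]\,,
\end{displaymath}
with eccentricity $e$ and periastron angle $\phi_{0}$ fixed by the initial conditions, i.e. a closed Keplerian orbit with neither precession nor nutation.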
From the above Lagrangian (\ref{Lagrangiannostra}), it is
straightforward to derive the equations of motion
\begin{eqnarray}
\ddot{r}&=&\frac{1}{c r \left(r c^2+2 G M\right)}
\Big[c \left(r c^2+G M\right) \left(\dot{\theta}^2+\sin ^2\theta \dot{\phi}^2\right) r^2+
\nonumber\\
&& -4 G M \dot{t} \left((\cos\theta(\cos\phi+\sin\phi)-\sin \theta)\dot{\theta}+\right. \nonumber\\ &&\left.
\sin\theta (\cos\phi-\sin \phi)\dot{\phi} \right) r+c G M \dot{r}^2-c G M
\dot{t}^2\Big]\,,\nonumber\\
\label{ddr}
\end{eqnarray}
\begin{eqnarray}
\ddot{\phi} &=&-\frac{2}{r^2 \left(r c^3+2 G M
c\right)}\Big[ c \cot \theta \left(r c^2+2 G M\right) \dot{\theta}\dot{ \phi} r^2\nonumber\\ &&+\dot{r}
\left(2 G M \csc \theta (\sin\phi-\cos\phi) \dot{t}+c r \left(r c^2+G
M\right) \dot{\phi}\right)\Big]\,,\nonumber\\ &&
\label{ddphi}
\end{eqnarray}
\begin{eqnarray}
\ddot{\theta}&=&\frac{1}{r^2 \left(r c^3+2 G M
c\right)}\Big[c \cos\theta r^2 \left(r c^2+2 G M\right) \sin\theta \dot{\phi}^2\nonumber\\ &&+\dot{r}
\left(4 G M (\cos\theta (\cos \phi+\sin\phi)-\sin\theta)\dot{t}+\right. \nonumber\\ &&\left.-
2 c r \left(r c^2+G M\right) \dot{\theta}\right)\Big]\,,\nonumber\\
\label{ddtheta}
\end{eqnarray}
corresponding to the spatial components of the geodesic Eq.
(\ref{eq:ddue_dsigma}). The time component $\ddot{t}$ is not
necessary for the discussion of orbital motion. Since the
Lagrangian (\ref{Lagrangiannostra}) satisfies $\mathcal{L}=1$, it is easy to
obtain a first integral for $\dot{r}$, which is a natural
constraint equation related to the energy.
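A minimal numerical sketch (our own, not from the original analysis) integrating Eqs. (\ref{ddr})-(\ref{ddtheta}) can already exhibit the loss of planarity. We work in units $G=c=1$, with $r$ in units of $GM/c^{2}$, and freeze $\dot{t}\simeq1$ at lowest order (an assumption: the full problem couples these equations to the time component):

```python
import numpy as np
from scipy.integrate import solve_ivp

G = c = 1.0    # geometrized units
M = 1.0        # central mass in these units
tdot = 1.0     # lowest-order approximation dt/dsigma ~ const (assumption)

def rhs(sigma, y):
    """Spatial equations of motion with the gravitomagnetic terms retained."""
    r, th, ph, rd, thd, phd = y
    s, co = np.sin(th), np.cos(th)
    A = r * c**2 + 2 * G * M
    B = r * c**2 + G * M
    rdd = (c * B * (thd**2 + s**2 * phd**2) * r**2
           - 4 * G * M * tdot * ((co * (np.cos(ph) + np.sin(ph)) - s) * thd
                                 + s * (np.cos(ph) - np.sin(ph)) * phd) * r
           + c * G * M * rd**2 - c * G * M * tdot**2) / (c * r * A)
    phdd = -2.0 / (r**2 * c * A) * (c * (co / s) * A * thd * phd * r**2
           + rd * (2 * G * M / s * (np.sin(ph) - np.cos(ph)) * tdot
                   + c * r * B * phd))
    thdd = 1.0 / (r**2 * c * A) * (c * co * s * r**2 * A * phd**2
           + rd * (4 * G * M * (co * (np.cos(ph) + np.sin(ph)) - s) * tdot
                   - 2 * c * r * B * thd))
    return [rd, thd, phd, rdd, thdd, phdd]

# planar initial data: the gravitomagnetic term drives theta off pi/2
y0 = [20.0, np.pi / 2, 0.0, -0.01, 0.0, 0.001]
sol = solve_ivp(rhs, (0.0, 50.0), y0, rtol=1e-9, atol=1e-12)
```

Even with $\theta_0=\pi/2$ and $\dot\theta_0=0$, the $\ddot\theta$ term proportional to $\dot{r}\dot{t}$ is nonzero, so the integrated trajectory leaves the initial plane, in agreement with the discussion below.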
Our aim is to show how gravitomagnetic effects modify the orbital
motion and what the parameters determining the stability of the
problem are. As we will see, the energy and the mass, essentially,
determine the stability. Beside the standard periastron precession
of GR, a nutation effect is genuinely induced by gravitomagnetism
and stability greatly depends on it. A fundamental issue for this
study is to achieve the orbital phase space portrait.
In Fig.\ref{Fig:stiffness}, the results for a given value of nutation
angular velocity, with a time span of $10000$ steps, are shown. It is
interesting to see that, by increasing the initial nutation
angular velocity, with all the other initial conditions fixed, we
get curves with decreasing frequencies for $\dot{r}(t)$ and
$\ddot{r}(t)$. This fact is relevant to gain insight into the
orbital motion stability (see Fig.\ref{fig:1}). The effects of the gravitomagnetic terms are taken into
account in Figs.
\ref{fig:orbit}-\ref{fig:orbit1}, showing the basic orbits (left) and the orbit
with the associated velocity field in false colors (right). From a
rapid inspection of the right panel, the sudden
changes of velocity direction induced by the gravitomagnetic
effects are clear.
\begin{figure}[ht]
\begin{tabular}{|c|c|}
\hline
\tabularnewline
\includegraphics[scale=0.35]{glob_time_M_1_095_r20_k10.eps}
\includegraphics[scale=0.35]{Graf_der_M_1_095_r_phi_th_k_10.eps}
\tabularnewline
\hline
\end{tabular}
\caption {Plots along the panel lines of: $r(t)$ (upper
left), phase portrait of $r(t)$ versus $\dot{r}(t)$ (bottom left),
$\dot{r}(t)$ (upper right) and $\ddot{r}(t)$ (bottom right) for a
star of $1 M_{\odot}$. The examples we are showing were obtained by
solving the system for the following parameters and initial
conditions: $\mu\approx 1 M_{\odot}$, $E=0.95$, $\phi_{0}=0$,
$\theta_{0}=\frac{\pi}{2}$, $\dot{\theta_{0}}=\frac{1}{10}\dot{\phi_{0}}$, $\dot{\phi_{0}}=-\frac{1}{10}\dot{r}_{0}$,
$\dot{r}_{0}=-\frac{1}{100}$ and $r_{0}=20 \mu$. The stiffness
is evident from the trend of $\dot{r}(t)$ and
$\ddot{r}(t)$.}\label{Fig:stiffness}
\end{figure}
To show the orbital velocity field, a rotation
and a projection of the orbits along the axes of maximal energy can be performed.
In other words, by a {\it Singular Value Decomposition} of the
de-trended positions and velocities, the
eigenvectors corresponding to the largest eigenvalues can be
selected; these, of course, represent the highest energy components (see Figs.
\ref{fig:orbit}-\ref{fig:orbit1}).
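The projection just described can be sketched in a few lines (our own illustration, assuming numpy); the rows of $V^{T}$ returned by the SVD play the role of the eigenvectors mentioned above:

```python
import numpy as np

def principal_projection(X):
    """Project de-trended data (rows = samples of positions or
    velocities) onto the two directions of largest variance."""
    Xc = X - X.mean(axis=0)                  # de-trend: remove the mean
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T, S                  # leading-axes coordinates

# example: a trajectory that is nearly planar in 3D
tau = np.linspace(0.0, 4.0 * np.pi, 500)
orbit = np.c_[np.cos(tau), np.sin(tau), 1e-3 * tau]
proj, S = principal_projection(orbit)
```

The squared singular values measure the energy captured along each axis, so keeping the two leading ones retains almost all of the orbital velocity field in a nearly planar case.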
\begin{figure}[!ht]
\includegraphics[height=0.30\textheight]{basic_orbit.eps}
\caption {Plots of basic orbits (left).
The initial values are:
$M=1M_{\odot}$; $E_n=0.95$ in mass units;
$r_0=20$ in Schwarzschild radii; $\dot\phi=-\frac{\dot
r}{10}$; $\dot\theta=\frac{\dot\phi}{10}$. } \label{fig:orbit}
\end{figure}
\begin{figure}[!ht]
\includegraphics[height=0.30\textheight]{prec_nut_ex_95_M_1.eps}\tabularnewline
\caption {Plots of basic orbits with the
associated velocity field. The arrows indicate the
instantaneous velocities. The initial values are:
$M=1M_{\odot}$; $E_n=0.95$ in mass units;
$r_0=20$ in Schwarzschild radii; $\dot\phi=-\frac{\dot
r}{10}$; $\dot\theta=\frac{\dot\phi}{10}$. } \label{fig:orbit1}
\end{figure}
\begin{figure}[!ht]
\includegraphics[height=0.30\textheight]{Ex_PHASE}
\caption {Breaking points examples: on the left panel, the first
four orbits in the phase plane are shown: the red one is labelled
I, the green is II, the black is III and the fourth is
IV. As it is possible to see, the orbits in the phase plane are
not closed and they do not overlap at the orbital closure points;
we have called this feature {\it breaking points}. In this dynamical
situation, a small perturbation can lead the system to a
transition to the chaos as prescribed by the
Kolmogorov-Arnold-Moser (KAM) theorem \cite{binney}. On the right
panel, the initial orbit is shown, with the initial (square) and
final (circles) points marked in black.}
\label{fig:Breakstab}
\end{figure}
\begin{figure}[!ht]
\includegraphics[height=0.30\textheight]{Ex_PHASE_Break_First.eps}
\caption{The initial orbit is shown, with
the initial (squares) and final (circles) points marked
in black.}
\label{fig:Breakstab1}
\end{figure}
The above differential equations for the parametric orbital
motion are non-linear and have time-varying coefficients. In order
to have a well-posed Cauchy problem, we have to define:
\begin{itemize}
\item the initial and final boundary condition problems;
\item the stability and the dynamical equilibrium of solutions.
\end{itemize}
We can start by solving the Cauchy problem, as in the classical
case, for the initial conditions $\dot{r}=0$,
$\displaystyle{\dot{\phi}=0}$, $\displaystyle{\dot{\theta}=0}$ and $\displaystyle{\theta=\frac{\pi}{2}}$;
the result we get is that the orbit is not planar, being
$\displaystyle{\ddot{\theta}\neq 0}$. In this case, we are compelled to solve
numerically the system of second order differential equations and
to treat carefully the initial conditions, taking into account the
high non-linearity of the system. A similar discussion, but for
different problems, can be found in \cite{cutler,cutler1}.
A series of numerical trials on the orbital parameters can be done
in order to get an empirical insight on the orbit stability. The
parameters involved in this analysis are the mass, the energy, the
orbital radius, the initial values of $r,\phi,\theta$ and the
angular precession and nutation velocities $\dot{\phi}$ and
$\dot{\theta}$ respectively. We have empirically assumed initial
conditions on $\dot r$, $\dot\phi$ and $\dot\theta$.
The trials can be organized in two series, i.e.
constant mass and energy variation and constant energy and mass
variation.
\begin{itemize}
\item In the first class of trials, we assume the mass equal
to $M=1M_{\odot}$ and the energy $E_n$ (in mass units) varying
step by step. The initial orbital radius $r_0$ can be changed,
according to the step in energy: this allows one to find out
numerically the dynamical equilibrium of the orbit. We have also
chosen, as varying parameters, the ratios of the precession
angular velocity $\dot\phi$ to the radial angular velocity $\dot
r$ and the ratio of the nutation angular velocity $\dot\theta$ and
the precession angular velocity $\dot\phi$. The initial condition
on $\phi$ has been assumed to be $\phi_0=0$ and the initial
condition on $\theta$ has been $\displaystyle{\theta_0=\frac{\pi}{2}}$. For $M=1$
(in Solar masses), $\displaystyle{\frac{\dot\theta}{\dot\phi}=\frac{1}{2}}$ and
$\displaystyle{\dot{\phi}=-\frac{\dot{r}}{10}}$, two different
empirical linear equations can be found, according to the different values of
$\displaystyle{\dot{\theta},\dot{\phi}}$. One obtains a rough guess of the initial
distance $r_0=r_0(E_n)$ around which it is possible to find a guess
on the equilibrium of the initial radius, followed by a trial and
error procedure.
\item In the second class of trials, we have assumed the
variation of the initial orbital radius for different values of
mass at a constant energy value equal to $E_n=0.95$ in mass units.
With these conditions, we assume ${\displaystyle
\dot\phi=\frac{\dot{r}}{10}}$ and that $\dot{\theta}$ takes
the two values $1/2$ and $1/10$. One can approach the problem also
considering the mass parameterization, at a given fixed energy, to
have an insight of the effect of mass variation on the initial
conditions. The masses have been varied between 0.5 and 20 Solar
masses and the distances have been found to vary according to the
two 3rd-order polynomial functions, according to the different
values of $\dot{\theta}$ with respect to the mass (for details see [\cite{SMFL}])
\end{itemize}
In summary, the numerical calculations, if optimized, allow one to put
in evidence the specific contributions of gravitomagnetic
corrections on orbital motion. In particular, specific
contributions due to nutation and precession emerge when higher
order terms in $v/c$ are considered.
\begin{figure}[!ht]
\begin{tabular}{|c|c|}
\hline
\includegraphics[height=0.15\textheight]{30_V_c_T10.eps}&
\includegraphics[height=0.15\textheight]{30r_rp_10.eps}\tabularnewline
\hline
\includegraphics[height=0.15\textheight]{30_V_c_T025.eps}&
\includegraphics[height=0.15\textheight]{30r_rp_025.eps}\tabularnewline
\hline
\includegraphics[height=0.15\textheight]{40_V_c_T10.eps}&
\includegraphics[height=0.15\textheight]{40r_rp_10.eps}\tabularnewline
\hline
\includegraphics[height=0.15\textheight]{40_V_c_T025.eps}&
\includegraphics[height=0.15\textheight]{40r_rp_025.eps}\tabularnewline
\hline
\end{tabular}
\caption {Plots of orbits with various energy values. For each
value of energy, four plots are shown: the first on the left
column is the orbit, with the orbital velocity field in false
colors. The color scale goes from blue to red in increasing
velocity. The second on the left column is the orbit with a
different nutation angular velocity. On the right column, the
phase portraits $\displaystyle{\dot r=\dot r(r(t))}$ are shown. Energy varies
from $0.3$ to $0.4$, in mass units. The stability of the system is
highly sensitive either to very small variations of $r_{0}$ or to
variations of the initial conditions on both precession and
nutation angular velocities: a variation of a few
percent in $r_{0}$ is sufficient to induce system instability.} \label{fig:1}
\end{figure}
The conclusion of this part of the review is that orbits are
highly characterized by the velocity regime of the moving bodies.
The order of the parameter $v/c$ determines the global shape of
the trajectories. Our task is now to show how the motion of
sources is related to the features of emitted GWs, in particular
to their production and to the profile of waves.
\part{\large Production and signature of gravitational waves }
The first derivation of gravitational radiation in GR
is due to Einstein. His initial calculation \cite{ae1916} was
``marred by an error in calculation'' (Einstein's words), and was
corrected in 1918 \cite{ae1918} (albeit with an overall factor of
two error). Modulo a somewhat convoluted history (discussed in great
detail by Kennefick \cite{dk1997}) owing (largely) to the
difficulties of analyzing radiation in a nonlinear theory, Einstein's
final result stands today as the leading-order ``quadrupole formula'' for
gravitational wave emission. This formula plays a role in gravity
theory analogous to the dipole formula for electromagnetic radiation,
showing that GWs arise
from accelerated masses exactly as electromagnetic waves arise from
accelerated charges.
The quadrupole formula tells us that GWs are difficult to produce ---
very large masses moving at relativistic speeds are needed. This
follows from the weakness of the gravitational interaction. A
consequence of this is that it is {\it extremely} unlikely there will
ever be an interesting laboratory source of GWs. The only objects
massive and relativistic enough to generate detectable GWs are
astrophysical. Indeed, experimental confirmation of the existence of
GWs has come from the study of binary neutron star systems --- the
variation of the mass quadrupole in such systems is large enough that
GW emission changes the system's characteristics on a timescale short
enough to be observed.
Intuitively, it is clear that measuring these waves must be difficult
--- the weakness of the gravitational interaction ensures that the
response of any detector to gravitational waves is very small.
Nonetheless, technology has brought us to the point where detectors
are now beginning to set interesting upper limits on GWs from some
sources \cite{ligo1,ligo2,ligo3,ligo4}. The first direct detection could be, hopefully, not too far in the future.\par
\section{Gravitational waves in linearized gravity}
The most natural starting point for any discussion of GWs is {\it
linearized gravity} \cite{gravitation,shapiro,maggiore}. Linearized gravity is an adequate approximation
to GR when the spacetime metric may be
treated as deviating only slightly from a flat metric, $\eta_{\mu\nu}$:
\begin{displaymath}
g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu},\qquad ||h_{\mu\nu}|| \ll 1\;.
\end{displaymath}
Here
$||h_{\mu\nu}||$ means ``the magnitude of a typical non-zero component of
$h_{\mu\nu}$''. Note that the condition $||h_{\mu\nu}|| \ll 1$ requires both
the gravitational field to be weak \footnote{We will work in geometrized units, setting $G=c=1$.}, and in addition constrains the
coordinate system to be approximately Cartesian. We will refer to
$h_{\mu\nu}$ as the metric perturbation; as we will see, it encapsulates
GWs, but contains additional, non-radiative degrees of freedom as
well. In linearized gravity, the smallness of the perturbation means
that we only keep terms which are linear in $h_{\mu\nu}$ --- higher order
terms are discarded. As a consequence, indices are raised and lowered
using the flat metric $\eta_{\mu\nu}$. The metric perturbation $h_{\mu\nu}$
transforms as a tensor under Lorentz transformations, but not under
general coordinate transformations.
We now compute all the quantities which are needed to describe
linearized gravity. The components of the affine connection
(Christoffel coefficients) are given by
\begin{eqnarray*}
{\Gamma^\mu}_{\nu\rho} &=& \frac{1}{2}\eta^{\mu\sigma}\left(\partial_\rho h_{\sigma\nu} +
\partial_\nu h_{\sigma\rho} -\partial_\sigma h_{\nu\rho}\right)
\nonumber\\
&=& \frac{1}{2}\left(\partial_\rho {h^\mu}_\nu + \partial_\nu {h^\mu}_{\rho}
-\partial^\mu h_{\nu\rho}\right)\;.
\label{eq:connection}
\end{eqnarray*}
Here $\partial_\mu$ means the partial derivative $\partial / \partial x^\mu$.
Since we use $\eta_{\mu\nu}$ to raise and lower indices, spatial indices
can be written either in the ``up'' position or the ``down'' position
without changing the value of a quantity: $f^x = f_x$. Raising or
lowering a time index, by contrast, switches sign: $f^t = -f_t$. The
Riemann tensor we construct in linearized theory is then given by
\begin{eqnarray}
{R^\mu}_{\nu\rho\sigma} &=& \partial_\rho{\Gamma^\mu}_{\nu\sigma} - \partial_\sigma{\Gamma^\mu}_{\nu\rho}
\nonumber\\
&=& \frac{1}{2}\left(\partial_\rho\partial_\nu {h^\mu}_\sigma +
\partial_\sigma\partial^\mu h_{\nu\rho} - \partial_\rho\partial^\mu h_{\nu\sigma} -
\partial_\sigma\partial_\nu {h^\mu}_\rho\right)\,.\nonumber\\
\label{eq:riemann}
\end{eqnarray}
From this, we construct the Ricci tensor
\begin{displaymath}
R_{\mu\nu} = {R^\rho}_{\mu\rho\nu}
= \frac{1}{2}\left(\partial_\rho\partial_\nu {h^\rho}_\mu + \partial^\rho
\partial_\mu h_{\nu\rho} - \Box h_{\mu\nu} - \partial_\mu\partial_\nu h\right)\;,
\label{eq:ricci}
\end{displaymath}
where $h = {h^\mu}_\mu$ is the trace of the metric perturbation, and $\displaystyle{\Box
= \partial_\rho\partial^\rho = \nabla^2 - \partial_t^2}$ is the wave
operator. Contracting once more, we find the curvature scalar:
\begin{displaymath}
R = {R^\mu}_\mu = \left(\partial_\rho\partial^\mu {h^\rho}_\mu - \Box h\right)
\label{eq:scalar}
\end{displaymath}
and finally build the Einstein tensor:
\begin{eqnarray*}
G_{\mu\nu} &=& R_{\mu\nu} - \frac{1}{2}\eta_{\mu\nu} R
\nonumber\\
&=& \frac{1}{2}\left(\partial_\rho\partial_\nu {h^\rho}_\mu + \partial^\rho
\partial_\mu h_{\nu\rho} - \Box h_{\mu\nu} - \partial_\mu\partial_\nu h
\right.\nonumber\\
& &\qquad\left.-\eta_{\mu\nu}\partial_\rho\partial^\sigma {h^\rho}_\sigma + \eta_{\mu\nu} \Box
h\right)\;.
\label{eq:einstein_h}
\end{eqnarray*}
This expression is a bit unwieldy. Somewhat remarkably, it can be
cleaned up significantly by changing notation: rather than working
with the metric perturbation $h_{\mu\nu}$, we use the {\it trace-reversed}
perturbation $\displaystyle{\bar h_{\mu\nu} = h_{\mu\nu} - \frac{1}{2}\eta_{\mu\nu} h}$. (Notice
that $\displaystyle{\bar {h^\mu}_\mu = -h}$, hence the name ``trace reversed''.)
Replacing $h_{\mu\nu}$ with\\
$\displaystyle{\bar h_{\mu\nu} + \frac{1}{2}\eta_{\mu\nu} h}$ in Eq.\
(\ref{eq:einstein_h}) and expanding, we find that all terms with the
trace $h$ are canceled. What remains is
\begin{eqnarray}
G_{\mu\nu} = \frac{1}{2}\left(\partial_\rho\partial_\nu {{\bar h}^\rho}_{\ \mu} +
\partial^\rho \partial_\mu \bar h_{\nu\rho} - \Box \bar h_{\mu\nu} -\eta_{\mu\nu}
\partial_\rho\partial^\sigma {{\bar h}^\rho}_{\ \sigma}\right)\;.\nonumber\\
\label{eq:einstein_hbar}
\end{eqnarray}
This expression can be simplified further by choosing an appropriate
coordinate system, or {\it gauge}. Gauge transformations in general
relativity are just coordinate transformations. A general
infinitesimal coordinate transformation can be written as ${x^\mu}' =
x^\mu + \xi^\mu$, where $\xi^\mu(x^\nu)$ is an arbitrary infinitesimal vector
field. This transformation changes the metric via
\begin{equation}
h_{\mu\nu}' = h_{\mu\nu} - 2\partial_{(\mu} \xi_{\nu)}\;,
\label{eq:metric_transform}
\end{equation}
so that the trace-reversed metric becomes
\begin{eqnarray*}
\bar h_{\mu\nu}' &=& h_{\mu\nu}' - \frac{1}{2}\eta_{\mu\nu} h'
\nonumber\\
&=& \bar h_{\mu\nu} - 2\partial_{(\nu}\xi_{\mu)} + \eta_{\mu\nu}\partial^\rho\xi_\rho\;.
\label{eq:barmetric_transform}
\end{eqnarray*}
A class of gauges that are commonly used in studies of radiation are
those satisfying the {\it Lorenz gauge} condition
\begin{equation}
\partial^\mu \bar h_{\mu\nu} = 0.
\label{eq:lorentzgauge}
\end{equation}
(Note the close analogy to Lorentz gauge
\footnote{Fairly recently, it
has become widely recognized that this gauge was in fact invented by
Ludwig Lorenz, rather than by Hendrik Lorentz. The inclusion of the
"t" seems most likely due to confusion between the similar names;
see Ref.\cite{vanbladel} for detailed discussion. Following the
practice of Griffiths (\cite{griffiths}, p. 421), we bow to the
weight of historical usage in order to avoid any possible confusion.}
in electromagnetic theory, $\partial^\mu A_\mu = 0$, where $A_\mu$ is the
vector potential.)
Suppose that our metric perturbation is not in Lorentz gauge. What
properties must $\xi_\mu$ satisfy in order to {\it impose} Lorentz
gauge? Our goal is to find a new metric $h'_{\mu\nu}$ such that
$\partial^\mu \bar h'_{\mu\nu} = 0$:
\begin{eqnarray*}
\partial^\mu \bar h_{\mu\nu}' &=& \partial^\mu \bar h_{\mu\nu} -
\partial^\mu\partial_\nu\xi_\mu - \Box \xi_\nu + \partial_\nu \partial^\rho\xi_\rho
\nonumber\\
&=& \partial^\mu \bar h_{\mu\nu} - \Box\xi_\nu\;.
\label{eq:ll1}
\end{eqnarray*}
Any metric perturbation $h_{\mu\nu}$ can therefore be put into a Lorentz
gauge by making an infinitesimal coordinate transformation that satisfies
\begin{equation}
\Box \xi_\nu = \partial^\mu \bar h_{\mu\nu}\;.
\label{eq:wave}
\end{equation}
One can always find solutions to the wave equation (\ref{eq:wave}),
thus achieving Lorentz gauge.
The amount of gauge freedom has now been reduced from 4 freely
specifiable functions of 4 variables to 4 functions of 4 variables
that satisfy the homogeneous wave equation $\Box \xi^\nu =0$, or,
equivalently, to 8 freely specifiable functions of 3 variables on an
initial data hypersurface.
Applying the Lorentz gauge condition (\ref{eq:lorentzgauge}) to the
expression (\ref{eq:einstein_hbar}) for the Einstein tensor, we find
that all but one term vanishes:
\begin{displaymath}
G_{\mu\nu} = -\frac{1}{2}\Box \bar h_{\mu\nu}\;.
\label{eq:einstein_lg}
\end{displaymath}
Thus, in Lorentz gauges, the Einstein tensor simply reduces to the
wave operator acting on the trace reversed metric perturbation (up to
a factor $-1/2$). The linearized Einstein equation is therefore
\begin{equation}
\Box \bar h_{\mu\nu} = - 16 \pi T_{\mu\nu}\;;
\label{eq:elin}
\end{equation}
in vacuum, this reduces to
\begin{equation}
\Box \bar h_{\mu\nu} = 0\;.
\label{eq:elin1}
\end{equation}
Just as in electromagnetism, the equation (\ref{eq:elin}) admits a
class of homogeneous solutions which are superpositions of plane
waves:
\begin{displaymath}
{\bar h}_{\mu\nu}({\bf x},t) = {\rm Re} \int d^3 k \ A_{\mu\nu}({\bf k}) e^{i
({\bf k} \cdot {\bf x} - \omega t)}\;.
\label{eq:planewaves}
\end{displaymath}
Here, $\omega = |{\bf k}|$. The complex coefficients $A_{\mu\nu}({\bf
k})$ depend on the wavevector ${\bf k}$ but are independent of ${\bf
x}$ and $t$. They are subject to the constraint $k^\mu A_{\mu\nu} = 0$
(which follows from the Lorentz gauge condition), with $k^\mu =
(\omega,{\bf k})$, but are otherwise arbitrary. These solutions are the
gravitational waves.
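As a purely illustrative numerical check (the wavevector and amplitude below are arbitrary choices, not taken from any source), one can verify by finite differences that a single plane-wave mode with $\omega = |{\bf k}|$ is annihilated by the wave operator:

```python
import math

# One TT plane-wave component, Re[A e^{i(k.x - omega t)}], with the
# dispersion relation omega = |k| required by Box hbar_{mu nu} = 0.
# 'A' and 'k' are illustrative values, not tied to any physical source.
k = (0.0, 0.0, 2.0)                        # wavevector
omega = math.sqrt(sum(c * c for c in k))   # omega = |k|
A = 1.0e-21                                # representative strain amplitude

def h(t, x, y, z):
    """A single metric-perturbation component."""
    phase = k[0] * x + k[1] * y + k[2] * z - omega * t
    return A * math.cos(phase)

def box_h(t, x, y, z, eps=1e-4):
    """Finite-difference wave operator: nabla^2 h - d^2 h / dt^2."""
    def d2(i):
        p = [t, x, y, z]
        pp, pm = p[:], p[:]
        pp[i] += eps
        pm[i] -= eps
        return (h(*pp) - 2.0 * h(*p) + h(*pm)) / eps**2
    return d2(1) + d2(2) + d2(3) - d2(0)

# Vanishes (up to discretization noise) precisely because omega = |k|.
print(box_h(0.3, 0.1, -0.2, 0.5))
```

By linearity, any superposition of such modes then solves the vacuum wave equation as well.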
\subsection{Transverse traceless (TT) gauge in globally vacuum spacetimes}
\label{sec:TTgauge}
We now specialize to globally vacuum spacetimes in which $T_{\mu\nu}=0$
everywhere, and which are asymptotically flat (for our purposes,
$h_{\mu\nu} \to 0$ as $r \to \infty$). Equivalently, we specialize to the
space of homogeneous, asymptotically flat solutions of the linearized
Einstein Eq. (\ref{eq:elin}). For such spacetimes one can, along
with choosing Lorentz gauge, further specialize the gauge to make the
metric perturbation be purely spatial
\begin{equation}
h_{00} = h_{0i} =0
\label{eq:spatial}
\end{equation}
and traceless
\begin{equation}
h = h_i^{\ i} =0.
\label{eq:traceless}
\end{equation}
The Lorentz gauge condition (\ref{eq:lorentzgauge}) then
implies that the spatial metric perturbation is transverse:
\begin{displaymath}
\partial_i h_{ij} =0.
\label{eq:transverse}
\end{displaymath}
This is called the transverse-traceless gauge, or TT gauge. A metric
perturbation that has been put into TT gauge will be written
$\displaystyle{h_{\mu\nu}^{\rm TT}}$. Since it is traceless, there is no distinction
between $\displaystyle{h^{\rm TT}_{\mu\nu}}$ and $\displaystyle{\bar h^{\rm TT}_{\mu\nu}}$.
The conditions (\ref{eq:spatial}) and (\ref{eq:traceless}) comprise 5
constraints on the metric, while the residual gauge freedom in Lorentz
gauge is parameterized by 4 functions that satisfy the wave equation.
It is nevertheless possible to satisfy these conditions,
essentially because the metric perturbation satisfies the linearized
vacuum Einstein equation. When the TT gauge conditions are satisfied,
the gauge is completely fixed.
One might wonder {\it why} we would choose the TT gauge. It is certainly
not necessary; however, it is extremely {\it convenient}, since the TT
gauge conditions completely fix all the local gauge freedom. The
metric perturbation $\displaystyle{h_{\mu\nu}^{\rm TT}}$ therefore contains only
physical, non-gauge information about the radiation. In the TT gauge
there is a close relation between the metric perturbation and the
linearized Riemann tensor $\displaystyle{R_{\mu\nu\rho\sigma}}$ [which is invariant under the
local gauge transformations (\ref{eq:metric_transform}) by Eq.\
(\ref{eq:riemann})], namely
\begin{displaymath}
R_{i0j0} = - \frac{1}{2} {\ddot h}_{ij}^{\rm TT}.
\label{eq:Rtitj}
\end{displaymath}
In a globally vacuum spacetime, all non-zero components of the Riemann
tensor can be obtained from $R_{i0j0}$ via the symmetries of the
Riemann tensor and the Bianchi identity. In a
more general spacetime, there will be components that are not related
to radiation.
Transverse traceless gauge also exhibits the fact that gravitational
waves have two polarization components. For example, consider a GW
which propagates in the $z$ direction: $h^{\rm TT}_{ij} = h^{\rm
TT}_{ij}(t - z)$ is a valid solution to the wave equation $\Box h^{\rm
TT}_{ij} = 0$. The Lorentz condition $\partial_z h^{\rm TT}_{zj} = 0$
implies that $h^{\rm TT}_{zj}(t - z) = \mbox{constant}$. This
constant must be zero to satisfy the condition that $h_{\mu\nu} \to 0$ as
$r \to \infty$. The only non-zero components of $h^{\rm TT}_{ij}$ are
then $h^{\rm TT}_{xx}$, $h^{\rm TT}_{xy}$, $h^{\rm TT}_{yx}$, and
$h^{\rm TT}_{yy}$. Symmetry and the tracefree condition
(\ref{eq:traceless}) further mandate that only two of these are
independent:
\begin{eqnarray*}
h^{\rm TT}_{xx} &=& -h^{\rm TT}_{yy} \equiv h_+(t-z)\;;
\\
h^{\rm TT}_{xy} &=& h^{\rm TT}_{yx} \equiv h_\times(t-z)\;.
\label{eq:pols0}
\end{eqnarray*}
The quantities $h_+$ and $h_\times$ are the two independent waveforms
of the GW (see Figs.\ \ref{fig:2} and \ref{fig:3}) \cite{gravitation,MR}.
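A small numerical sketch (with an illustrative strain amplitude) makes the geometry of the two polarizations concrete: in TT gauge a ring particle at ${\bf x}$ is displaced by $\delta x_i = \frac{1}{2}h^{\rm TT}_{ij}x_j$, and the $\times$ pattern is just the $+$ pattern rotated by $45^\circ$:

```python
import math

def displace(x, y, hp, hc):
    """TT-gauge displacement of a ring particle: dx_i = (1/2) h_ij x_j."""
    dx = 0.5 * (hp * x + hc * y)
    dy = 0.5 * (hc * x - hp * y)
    return dx, dy

def rot(vx, vy, a):
    """Rotate a 2-vector by angle a."""
    c, s = math.cos(a), math.sin(a)
    return c * vx - s * vy, s * vx + c * vy

# Displacing with pure 'x' polarization equals: rotate the particle by
# -45 deg, displace with pure '+', rotate the displacement back.
h = 1e-21   # illustrative amplitude
for kk in range(8):
    th = 2.0 * math.pi * kk / 8.0
    x, y = math.cos(th), math.sin(th)
    dxc, dyc = displace(x, y, 0.0, h)        # pure 'x'
    xr, yr = rot(x, y, -math.pi / 4.0)       # into the '+' frame
    dxp, dyp = displace(xr, yr, h, 0.0)      # pure '+' there
    dxb, dyb = rot(dxp, dyp, math.pi / 4.0)  # back to original frame
    assert abs(dxc - dxb) < 1e-30 and abs(dyc - dyb) < 1e-30
print("x pattern = + pattern rotated by 45 degrees")
```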
\begin{figure}
\begin{center}
\includegraphics[width=0.40\textwidth]{fig1a.eps}
\caption{We show how point particles along a ring move as a result of
the interaction with a GW propagating in the direction perpendicular to the
plane of the ring. This figure refers to a wave
with $+$ polarization.
\label{fig:2}}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.40\textwidth]{fig1b.eps}
\caption{We show how point particles along a ring move as a result of
the interaction with a GW propagating in the direction perpendicular to the
plane of the ring. This figure refers to a wave with $\times$ polarization.
\label{fig:3}}
\end{center}
\end{figure}
To illustrate the effect of GWs on free falling (FF) particles,
we consider a ring of point particles initially at rest with respect to a
FF frame attached to the center of the ring, as shown in Figs.~\ref{fig:2} and \ref{fig:3}.
\section{Interaction of gravitational waves with a detector}
\label{sec:detect}
The usual notion of ``gravitational force'' disappears in GR, replaced instead by the idea that freely falling bodies
follow geodesics in spacetime. Given a spacetime metric $g_{\mu\nu}$ and
a set of spacetime coordinates $x^\mu$, geodesic trajectories are given
by the equation
\begin{eqnarray*}
\frac{d^2 x^\mu}{d\tau^2} +
{\Gamma^\mu}_{\nu\rho}\frac{dx^\nu}{d\tau}\frac{dx^\rho}{d\tau} = 0\;,
\label{eq:geod_eqn}
\end{eqnarray*}
where $\tau$ is the proper time as measured by an observer travelling
along the geodesic. By writing the derivatives in the above geodesic
equation in terms of coordinate time $t$ rather
than proper time $\tau$, and by combining the time ($\mu=0$) equation with the
spatial ($\mu=j$) equations, we obtain an equation for the coordinate
acceleration:
\begin{eqnarray}
\frac{d^2 x^i}{dt^2} &=& - (\Gamma^i_{00} + 2 \Gamma^i_{0j} v^j +
\Gamma^i_{jk} v^j v^k) +\nonumber\\&& v^i (\Gamma^0_{00} + 2 \Gamma^0_{0j} v^j +
\Gamma^0_{jk} v^j v^k),
\label{eq:geod_eqn1}
\end{eqnarray}
where $v^i = dx^i/dt$ is the coordinate velocity.
Let us now specialize to linearized theory, with the non-flat part of
our metric dominated by a GW in TT gauge. Further, let us specialize
to non-relativistic motion for our test body. This implies that $v^i
\ll 1$, and to a good approximation we can neglect the velocity
dependent terms in Eq.\ (\ref{eq:geod_eqn1}):
\begin{displaymath}
\frac{d^2 x^i}{dt^2} + {\Gamma^i}_{00} = 0\;.
\end{displaymath}
In linearized theory and TT gauge,
\begin{displaymath}
{\Gamma^i}_{00} = \frac{1}{2}\left(2\partial_t h^{\rm
TT}_{i0} - \partial_i h^{\rm TT}_{00}\right) = 0\, ,
\label{eq:linGamma}
\end{displaymath}
since $h^{\rm TT}_{i0} = h^{\rm TT}_{00} = 0$. Hence, we find that $\displaystyle{d^2x^i/dt^2 = 0}$.
This result could suggest that the GW has no effect. This is not true: it
just tells us that, in TT gauge, the {\it coordinate location} of a
slowly moving, freely falling (hereinafter FF) body is unaffected by the GWs. In
essence, the coordinates move with the waves.
This result illustrates why, in GR, it is important to
focus upon coordinate-invariant observables (a naive interpretation
of the above result would be that freely falling bodies are not
influenced by GWs). In fact the GWs cause the {\it proper separation}
between two FF particles to oscillate, even if the {\it
coordinate separation} is constant. Consider two spatial FF
particles, located at $z = 0$, and separated on the $x$ axis
by a coordinate distance $L_c$. Consider a GW in TT gauge that
propagates down the $z$ axis, $h^{\rm TT}_{\mu\nu}(t,z)$. The proper
distance $L$ between the two particles in the presence of the GW is
given by
\begin{eqnarray}
L &=& \int_0^{L_c} dx\,\sqrt{g_{xx}} = \int_0^{L_c} dx\,\sqrt{1 +
h^{\rm TT}_{xx}(t,z = 0)}
\nonumber\\
&\simeq& \int_0^{L_c} dx\,\left[1 + \frac{1}{2} h^{\rm TT}_{xx}(t,z =
0)\right]\nonumber\\&&= L_c\left[1 + \frac{1}{2} h^{\rm TT}_{xx}(t,z =
0)\right]\;.
\label{eq:waveeffect}
\end{eqnarray}
Notice that we use the fact that the coordinate location of each
particle is fixed in TT gauge. In a gauge in which the particles move
with respect to the coordinates, the limits of integration would have
to vary. Eq. (\ref{eq:waveeffect}) tells us that the proper
separation of the two particles oscillates with a fractional length
change $\delta L/L$ given by
\begin{equation}
\frac{\delta L}{L} \simeq \frac{1}{2} h^{\rm TT}_{xx}(t,z = 0)\;.
\label{eq:strainans}
\end{equation}
Although we used TT gauge to perform this calculation, the result is
gauge independent; we will derive it in a different gauge momentarily.
Notice that $h^{\rm TT}_{xx}$ acts as a strain, a fractional length
change. The magnitude $h$ of a wave is often referred to as the
``wave strain''. The proper distance we have calculated here is a
particularly important quantity since it directly relates to the
accumulated phase which is measured by laser interferometric GW
observatories. The
``extra'' phase $\delta \phi$ accumulated by a photon that travels
down and back the arm of a laser interferometer in the presence of a
GW is $\displaystyle{\delta\phi = 4\pi \delta L/\lambda}$, where $\lambda$ is the
photon's wavelength and $\delta L$ is the distance the end mirror
moves relative to the beam splitter\footnote{This description of the
phase shift only holds if $L \ll \lambda$, so that the metric
perturbation does not change value very much during a light travel
time. This condition will be violated in the high frequency regime
for space-based GW detectors; a more careful analysis of the phase
shift is needed in this case {\cite{lhh00}}.}. We now give a
different derivation of the fractional length change
(\ref{eq:strainans}) based on the concept of {\it geodesic deviation}.
Consider a geodesic in spacetime given by $x^\mu = z^\mu(\tau)$, where
$\tau$ is the proper time, with four velocity $u^\mu(\tau) =
dz^\mu/d\tau$. Suppose we have a nearby geodesic $x^\mu(\tau) = z^\mu(\tau)
+ L^\mu(\tau)$, where $L^\mu(\tau)$ is small. We can regard the
coordinate displacement $L^\mu$ as a vector ${\vec L} = L^\mu \partial_\mu$
on the first geodesic; this is valid to first order in ${\vec L}$.
Without loss of generality, we can make the connecting vector be
purely spatial: $L^\mu u_\mu =0$. Spacetime curvature causes the
separation vector to change with time, the geodesics will move
further apart or closer together, with an acceleration given by the
geodesic deviation equation
\begin{equation}
u^\nu \nabla_\nu (u^\rho \nabla_\rho L^\mu) = - {R^\mu}_{\nu\rho\sigma}[{\vec z}(\tau)] u^\nu
L^\rho u^\sigma\;;
\label{eq:geod_dev}
\end{equation}
see, e.g., Ref.\cite{jh03}. This equation is valid to
linear order in $L^\mu$; fractional corrections to this equation will
scale as $L / {\cal L}$, where ${\cal L}$ is the lengthscale over
which the curvature varies.
For application to GW detectors, the shortest lengthscale ${\cal
L}$ is the wavelength $\lambda$ of the GWs. Thus, the geodesic
deviation equation will have fractional corrections of order $L /
\lambda$. For ground-based detectors $L \lesssim $ a few km, while
$\lambda \gtrsim 3000 {\rm km}$; thus
the approximation will be valid. For detectors with $L \gtrsim
\lambda$ (e.g. the space based detector LISA) the analysis here is not
valid and other techniques must be used to analyze the detector.
A convenient coordinate system to analyze the geodesic deviation
equation (\ref{eq:geod_dev}) is the {\it local proper reference frame}
of the observer who travels along the first geodesic. This coordinate
system is defined by the requirements
\begin{equation}
z^i(\tau) = 0,\ \ \ \ \ g_{\mu\nu}(t,{\bf 0}) = \eta_{\mu\nu}, \ \ \ \
\Gamma^\mu_{\nu\rho}(t,{\bf 0}) =0,
\label{eq:lprf}
\end{equation}
which imply that the metric has the form
\begin{equation}
ds^2 = - dt^2 + d {\bf x}^2 + O\left(\frac{{\bf x}^2}{{\cal R}^2}
\right).
\label{eq:lprfmetric}
\end{equation}
Here ${\cal R}$ is the radius of curvature of spacetime, given by
${\cal R}^{-2} \sim ||R_{\mu\nu\rho\sigma}||$. It also follows from the gauge
conditions (\ref{eq:lprf}) that proper time $\tau$ and coordinate time
$t$ coincide along the first geodesic, that ${\vec u} = \partial_t$
and that $L^\mu =(0,L^i)$.
Consider now the proper distance between the two geodesics, which are
located at $x^i=0$ and $x^i = L^i(t)$. From the metric
(\ref{eq:lprfmetric}) we see that this proper distance is just $|{\bf
L}| = \sqrt{L_i L_i}$, up to fractional corrections of order $L^2 /
{\cal R}^2$. For a GW of amplitude $h$ and wavelength $\lambda$ we
have ${\cal R}^{-2} \sim h / \lambda^2$, so the fractional errors are
$\sim h L^2 / \lambda^2$. (Notice that ${\cal R} \sim {\cal
L}/\sqrt{h}$, so the wave's curvature scale ${\cal R}$ is much larger
than the lengthscale ${\cal L}$ characterizing its variations.) Since we are
restricting attention to detectors with $L \ll \lambda$, these
fractional errors are much smaller than the fractional distance
changes $\sim h$ caused by the GW.
Therefore, we can simply identify $|{\bf L}|$ as the proper
separation.
We now evaluate the geodesic deviation equation (\ref{eq:geod_dev})
in the local proper reference frame coordinates. From the conditions
(\ref{eq:lprf}) it follows that we can replace the covariant time
derivative operator $u^\mu \nabla_\mu$ with $\partial / (\partial t)$.
Using ${\vec u} = \partial_t$ and $L^\mu = (0,L^i)$, we get
\begin{equation}
\frac{d^2 L^i(t)}{dt^2} =- {R}_{i0j0}(t,{\bf 0}) L^j(t) \;.
\label{eq:observable}
\end{equation}
Note that the key quantity entering into the equation, $R_{i0j0}$, is
gauge invariant in linearized theory, so we can use any convenient
coordinate system to evaluate it. Using the expression
(\ref{eq:Rtitj}) for the Riemann tensor in terms of the TT gauge
metric perturbation $h_{ij}^{\rm TT}$ we find
\begin{displaymath}
\frac{d^2 L^i}{dt^2} = \frac{1}{2}\frac{d^2h^{\rm TT}_{ij}}{dt^2}L^j\;.
\label{eq:geod_dev_detector}
\end{displaymath}
Integrating this equation using $L^i(t) = L^i_0 + \delta L^i(t)$ with
$|\delta {\bf L}| \ll |{\bf L}_0|$ gives
\begin{equation}
\delta L^i(t) = \frac{1}{2}h^{\rm TT}_{ij}(t) L_0^j\;.
\label{eq:response}
\end{equation}
This equation is ideal to analyze an interferometric GW detector.
We choose Cartesian coordinates such that the interferometer's two
arms lie along the $x$ and $y$ axes, with the beam splitter at the
origin. For concreteness, let us imagine that the GW propagates down
the $z$-axis. Then, as discussed in Sec.\ \ref{sec:TTgauge}, the only
non-zero components of the metric perturbation are $h^{\rm TT}_{xx} =
-h^{\rm TT}_{yy} = h_+$ and $h^{\rm TT}_{xy} = h^{\rm TT}_{yx} =
h_\times$, where $h_+(t-z)$ and $h_\times(t-z)$ are the two
polarization components. We take the ends of one of the
interferometer's two arms as defining the two nearby geodesics; the
first geodesic is defined by the beam splitter at ${\bf x}=0$, the
second by the end-mirror. From Eq.\ (\ref{eq:response}), we then find
that the distance $L = | {\bf L}|$ of each arm's end from the beam
splitter varies with time as
\begin{eqnarray*}
\frac{\delta L_x}{L} &=& \frac{1}{2} h_+\;,
\nonumber\\
\frac{\delta L_y}{L} &=& -\frac{1}{2} h_+\;.
\end{eqnarray*}
(Here the subscripts $x$ and $y$ denote the two different arms, not
the components of a vector). These distance variations are then measured
via laser interferometry. Notice that the GW (which is typically a
sinusoidally varying function) acts tidally, squeezing along one axis
and stretching along the other. In this configuration, the detector is
sensitive only to the $+$ polarization of the GW. The $\times$
polarization acts similarly, except that it squeezes and stretches
along a set of axes that are rotated with respect to the $x$ and $y$
axes by $45^\circ$. The force lines corresponding to the two
different polarizations are illustrated in Fig.\
{\ref{fig:forcelines}}.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.20\textwidth]{fig2a.eps} &
\includegraphics[width=0.20\textwidth]{fig2b.eps} \\
\end{tabular}
\caption{Lines of force associated to the $+$ (left panel) and $\times$ (right panel) polarizations.\label{fig:forcelines}}
\end{center}
\end{figure}
Of course, we don't expect nature to provide GWs that so perfectly
align with our detectors. In general, we will need to account for the
detector's {\it antenna pattern}, meaning that we will be sensitive to
some weighted combination of the two polarizations, with the weights
depending upon the location of a source on the sky, and the relative
orientation of the source and the detector \cite{KT87}.
Finally, in our analysis so far of detection we have assumed that the
only contribution to the metric perturbation is the GW contribution.
However, in reality time-varying near zone gravitational fields
produced by sources in the vicinity of the detector will also be
present. From Eq. (\ref{eq:observable}) we see that the quantity
that is actually measured by interferometric detectors is the
space-time-space-time or electric-type piece $R_{i0j0}$ of the Riemann
tensor (or more precisely the time-varying piece of this within the
frequency band of the detector). From the general expression
of Riemann tensor (see \cite{gravitation}), we see that $R_{i0j0}$
contains contributions from both $h_{ij}^{\rm TT}$ describing GWs, and
also additional terms describing the time-varying near zone
gravitational fields. There is no way for the detector to separate
these two contributions, and the time-varying near zone gravitational
fields produced by motions of bedrock, air, human bodies, and
tumbleweeds can all contribute to the output of the detector and act
as sources of noise \cite{ht1998,tw1999,tcreighton}.
\subsection{The generation of gravitational waves}
\label{sec:lin_with_source}
GWs are generated by the matter source term on the
right hand side of the linearized Einstein equation
\begin{equation}
\Box {\bar h}_{\mu\nu} = -16\pi T_{\mu\nu}\;,
\label{eq:LinEinstein_TT}
\end{equation}
cf.\ Eq.\ (\ref{eq:elin}) (presented here in Lorentz gauge). In this
section we will compute the leading order contribution to the spatial
components of the metric perturbation for a source whose internal
motions are slow compared to the speed of light (``slow-motion
sources''). We will then compute the TT piece of the metric
perturbation to obtain the standard quadrupole formula for the emitted
radiation.
Eq. (\ref{eq:LinEinstein_TT}) can be solved by using a Green's
function. A wave equation with source generically takes the form
\begin{displaymath}
\Box f(t, {\bf x}) = s(t, {\bf x})\;,
\label{eq:wave_source}
\end{displaymath}
where $f(t,{\bf x})$ is the radiative field, depending on time $t$ and
position ${\bf x}$, and $s(t,{\bf x})$ is a source function.
Green's function $G(t,{\bf x};t',{\bf x}')$ is the field which arises
due to a delta function source; it tells how much field is generated
at the ``field point'' $(t,{\bf x})$ per unit source at the ``source
point'' $(t',{\bf x}')$:
\begin{displaymath}
\Box G(t, {\bf x}; t',{\bf x}') = \delta(t - t')\delta({\bf x} - {\bf x}')\;.
\label{eq:wave_greens}
\end{displaymath}
The field which arises from our actual source is then given by
integrating Green's function against $s(t,{\bf x})$:
\begin{displaymath}
f(t, {\bf x}) = \int dt'd^3 x'\,G(t, {\bf x}; t',{\bf x}')\,s(t',{\bf
x}')\;\;.
\end{displaymath}
The Green's function associated with the wave operator $\Box$ is very
well known (see, e.g., {\cite{jackson}}):
\begin{displaymath}
G(t, {\bf x}; t', {\bf x}') = -\frac{\delta(t' - [t - |{\bf x} - {\bf
x}'|/c])}{4\pi|{\bf x} - {\bf x}'|}\;.
\label{eq:BoxGreen}
\end{displaymath}
The quantity $t - |{\bf x} - {\bf x}'|/c$ is the {\it retarded time};
it takes into account the lag associated with the propagation of
information from the source point ${\bf x}'$ to the field point ${\bf x}$. The
speed of light $c$ has been restored here to emphasize the causal
nature of this Green's function; we set it back to unity in what
follows.
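One can verify numerically that the retarded solution built from this Green's function is a vacuum wave away from the source; in the sketch below $g$ is an arbitrary smooth pulse standing in for the source history:

```python
import math

# The retarded solution f = g(t - r)/(4 pi r), with g an arbitrary
# smooth pulse, satisfies the homogeneous wave equation away from the
# source point.  We check Box f ~ 0 at a field point by central
# differences.
def g(u):
    return math.exp(-u * u)

def f(t, x, y, z):
    r = math.sqrt(x * x + y * y + z * z)
    return g(t - r) / (4.0 * math.pi * r)

def box_f(t, x, y, z, eps=1e-3):
    p = [t, x, y, z]
    def d2(i):
        pp, pm = p[:], p[:]
        pp[i] += eps
        pm[i] -= eps
        return (f(*pp) - 2.0 * f(*p) + f(*pm)) / eps**2
    # Box = nabla^2 - d^2/dt^2 acting on f
    return d2(1) + d2(2) + d2(3) - d2(0)

print(box_f(1.0, 1.0, 1.0, 1.0))  # ~ 0 up to discretization error
```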
Applying this result to Eq.\ ({\ref{eq:LinEinstein_TT}}), we find
\begin{displaymath}
{\bar h}_{\mu\nu}(t,{\bf x}) = 4\int d^3x' \frac{T_{\mu\nu}(t - |{\bf x} - {\bf x}'|,
{\bf x}')}{|{\bf x} - {\bf x}'|}\;.
\label{eq:naivesolution}
\end{displaymath}
As already mentioned, when projected transverse and traceless, the
radiative degrees of freedom are contained entirely in the spatial part
of the metric. First, consider the spatial part of the solution:
\begin{displaymath}
{\bar h}_{ij}(t,{\bf x}) = 4\int d^3x' \frac{T^{ij}(t - |{\bf x} - {\bf
x}'|,{\bf x}')} {|{\bf x} - {\bf x}'|}\;.
\label{eq:spatialsolution}
\end{displaymath}
We have raised indices on the right-hand side, using the rule that the
position of spatial indices in linearized theory is irrelevant.
We now evaluate this quantity at large distances from the source.
This allows us to replace the factor $|{\bf x} - {\bf x}'|$ in the
denominator with $r = |{\bf x}|$. The corresponding fractional errors
scale as $\sim L / r$, where $L$ is the size of the source; these
errors can be neglected. We also make the same replacement in the
time argument of $T_{ij}$:
\begin{displaymath}
T_{ij}(t-|{\bf x} - {\bf x}'|,{\bf x}') \approx T_{ij}(t-r,{\bf x}').
\label{eq:slowmotion}
\end{displaymath}
Using the formula $|{\bf x} - {\bf x}'| = r - n^i x^{'\,i} + O(L^2/r)$
where $n^i = x^i/r$, we see that the fractional errors generated by
the replacement (\ref{eq:slowmotion}) scale as $L/\tau$, where $\tau$
is the timescale over which the system is changing. This quantity is
just the velocity of internal motions of the source (in units with
$c=1$), and is therefore small compared to one by our assumption.
These replacements give
\begin{equation}
{\bar h}_{ij}(t,{\bf x}) = \frac{4}{r}\int d^3x'\,T^{ij}(t - r,{\bf x}')\;,
\label{eq:almost_quadrupole}
\end{equation}
which is the first term in a multipolar expansion of the radiation field.
Eq. (\ref{eq:almost_quadrupole}) almost gives us the quadrupole
formula that describes GW emission (at leading order). To get the
remaining part there, we need to manipulate this equation a bit. The
stress-energy tensor must be conserved, which means $\partial_\mu T^{\mu\nu}
= 0$ in linearized theory. Breaking this up into time and space
components, we have the conditions
\begin{displaymath}
\partial_t T^{00} + \partial_i T^{0i} = 0\,,
\end{displaymath}
\begin{displaymath}
\partial_t T^{0i} + \partial_j T^{ij} = 0\,.
\label{eq:cons_of_stress1}
\end{displaymath}
From this, it follows that
\begin{equation}
\partial^2_t T^{00} = \partial_k\partial_l T^{kl}\;.
\label{eq:cons_of_stress2}
\end{equation}
Multiplying both sides of this equation by $x^i x^j$, we first
manipulate the left-hand side:
\begin{displaymath}
\partial^2_t T^{00} x^i x^j = \partial_t^2\left(T^{00} x^i x^j\right)\;.
\end{displaymath}
Next, manipulate the right-hand side of Eq.\ (\ref{eq:cons_of_stress2}); multiplying by $x^i x^j$, we obtain:
\begin{displaymath}
\partial_k\partial_l T^{kl} x^i x^j =
\partial_k\partial_l \left(T^{kl} x^i x^j\right) -
2\partial_k\left(T^{ik} x^j + T^{kj} x^i\right) + 2 T^{ij}\;.
\end{displaymath}
This identity is easily verified by expanding the derivatives and
applying the identity $\partial_i x^j = {\delta_i}^j$. We thus have
\begin{displaymath}
\partial_t^2\left(T^{00} x^i x^j\right) =
\partial_k\partial_l \left(T^{kl} x^i x^j\right) -
2\partial_k\left(T^{ik} x^j + T^{kj} x^i\right) + 2 T^{ij}\;.
\end{displaymath}
This yields
\begin{eqnarray*}
\frac{4}{r} \int d^3 x'\,T_{ij} &=&
\frac{4}{r}\int d^3x'
\left[\frac{1}{2}\partial_t^2\left(T^{00} x^{\prime i} x^{\prime
j}\right) +\right.\nonumber\\
& &\left. +\partial_k\left(T^{ik} x^{\prime j} + T^{kj} x^{\prime
i}\right) - \frac{1}{2} \partial_k\partial_l \left(T^{kl}
x^{\prime i}x^{\prime j}\right)\right]
\nonumber\\
&=& \frac{2}{r}\int d^3x'\,\partial_t^2\left(T^{00} x^{\prime i}
x^{\prime j}\right)
\nonumber\\
&=& \frac{2}{r}\frac{\partial^2}{\partial t^2}\int d^3x'\,T^{00}
x^{\prime i} x^{\prime j}
\nonumber\\
&=& \frac{2}{r}\frac{\partial^2}{\partial t^2}\int
d^3x'\,\rho\,x^{\prime i} x^{\prime j}\;.
\label{eq:oh_so_close}
\end{eqnarray*}
In going from the first to the second line, we used the fact that the
second and third terms under the integral are divergences. Using
Gauss's theorem, they can thus be recast as surface integrals; taking
the surface outside the source, their contribution is zero. In going
from the second to the third line, we used the fact that the
integration domain is not time dependent, so we can take the
derivatives out of the integral. Finally, we used the fact that
$T^{00}$ is the mass density $\rho$. Defining the second moment
$Q_{ij}$ of the mass distribution via
\begin{equation}
Q_{ij}(t) = \int d^3x'\,\rho(t,{\bf x}') \,x^{\prime i}x^{\prime j}\;,
\label{eq:quad_moment1}
\end{equation}
and combining Eqs.\ (\ref{eq:almost_quadrupole}) and
(\ref{eq:oh_so_close}) we get
\begin{equation}
{\bar h}_{ij}(t,{\bf x}) = \frac{2}{r}\frac{d^2Q_{ij}(t-r)}{dt^2}\;.
\label{eq:h_from_I}
\end{equation}
When we subtract the trace from $Q_{ij}$, we obtain the {\it
quadrupole moment} tensor:
\begin{displaymath}
{\cal Q}_{ij} = Q_{ij} - \frac{1}{3}\delta_{ij} Q, \qquad
Q = Q_{ii}\;.
\label{eq:quad_moment}
\end{displaymath}
To complete the derivation, we must project out the non-TT pieces of
the right-hand side of Eq.\ (\ref{eq:h_from_I}).
Since we are working to leading order in $1/r$, at each field point
${\bf x}$, this operation reduces to algebraically projecting the
tensor perpendicularly to the local direction of propagation ${\bf n}
= {\bf x} / r$, and subtracting off the trace.
It is useful to introduce the projection tensor,
\begin{displaymath}
P_{ij} = \delta_{ij} - n_i n_j\;.
\end{displaymath}
This tensor eliminates vector components parallel to ${\bf n}$,
leaving only transverse components. Thus,
\begin{displaymath}
{\bar h}^T_{ij} = {\bar h}_{kl}P_{ik}P_{jl}
\end{displaymath}
is a transverse tensor. Finally, we remove the trace; what remains is
\begin{equation}
h^{\rm TT}_{ij} = {\bar h}_{kl} P_{ik} P_{jl} -
\frac{1}{2}P_{ij}P_{kl}{\bar h}_{kl}\;.
\label{eq:quad_formula1}
\end{equation}
Substituting Eq.\ (\ref{eq:h_from_I}) into (\ref{eq:quad_formula1}),
we obtain our final quadrupole formula:
\begin{displaymath}
h^{\rm TT}_{ij}(t,{\bf x}) = \frac{2}{r}\frac{d^2{\cal
Q}_{kl}(t-r)}{dt^2}\left[P_{ik}({\bf n})P_{jl}({\bf n}) -
\frac{1}{2}P_{kl}({\bf n})P_{ij}({\bf n})\right] \, ,
\end{displaymath}
or, restoring the factors of $G$ and $c$,
\begin{equation}
h^{\rm TT}_{ij}(t,{\bf x}) = \frac{2G}{rc^4} \ddot{{\cal Q}}_{kl}(t-r)P_{ijkl} \, ,
\label{eq:quad_formula}
\end{equation}
where $P_{ijkl}({\bf n}) \equiv P_{ik}P_{jl} - \frac{1}{2}P_{ij}P_{kl}$.
One can now search for wave solutions of (\ref{eq:LinEinstein_TT}) from a
system of masses undergoing arbitrary motions, and then obtain the
power radiated. The result, assuming that the source dimensions are
very small with respect to the wavelength (quadrupole approximation
\cite{landau}), is that the power ${\displaystyle
\frac{dE}{d\Omega}}$ radiated per unit solid angle is
\begin{equation}
\frac{dE}{d\Omega}=\frac{G}{8\pi c^{5}}\left(\frac{d^{3}Q_{ij}}{dt^{3}}\right)^{2}\label{eq:P}\end{equation}
If one sums (\ref{eq:P}) over the two allowed
polarizations, one obtains
\begin{eqnarray}
&&\sum_{pol}\frac{dE}{d\Omega}= \frac{G}{8\pi c^{5}}\left[\frac{d^{3}Q_{ij}}{dt^{3}}\frac{d^{3}Q_{ij}}{dt^{3}}-2n_{i}\frac{d^{3}Q_{ij}}{dt^{3}}n_{k}\frac{d^{3}Q_{kj}}{dt^{3}}+\right.\nonumber \\
& & \left.-\frac{1}{2}\left(\frac{d^{3}Q_{ii}}{dt^{3}}\right)^{2}+\frac{1}{2}\left(n_{i}n_{j}\frac{d^{3}Q_{ij}}{dt^{3}}\right)^{2}+\frac{d^{3}Q_{ii}}{dt^{3}}n_{j}n_{k}\frac{d^{3}Q_{jk}}{dt^{3}}\right]\nonumber\\\label{eq:sommatoria}\end{eqnarray}
where $\hat{n}$ is the unit vector in the radiation direction.
The total radiation rate is obtained by integrating
(\ref{eq:sommatoria}) over all directions of emission; the result
is
\begin{equation}
{\cal F}^{GW}=\frac{dE}{dt}=-\frac{G\left\langle {\cal Q}_{ij}^{(3)}{\cal Q}^{(3)ij}\right\rangle }{5c^{5}}\label{eq:dEdt}\end{equation}
where the superscript (3) denotes the third derivative with respect to
time and the symbol $\left\langle \,\right\rangle$ indicates an average
over several wavelengths.
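With the trace-free moment ${\cal Q}_{ij}$ normalized as above, a circular binary in units $G=c=1$ radiates the standard luminosity $\frac{32}{5}\mu^2 a^4\Omega^6$; the sketch below (with illustrative parameters) checks this normalization by taking third time derivatives with finite differences:

```python
import math

# Quadrupole luminosity check for a circular binary (G = c = 1):
# (1/5) <Qtf'''_ij Qtf'''_ij> should equal (32/5) mu^2 a^4 Om^6.
# mu, a, Om are illustrative parameters.
mu, a, Om = 0.25, 1.0, 1.0

def Qtf(t):
    """Trace-free quadrupole moment of the relative orbit."""
    x, y = a * math.cos(Om * t), a * math.sin(Om * t)
    v = [x, y, 0.0]
    r2 = x * x + y * y
    return [[mu * (v[i] * v[j] - (r2 / 3.0 if i == j else 0.0))
             for j in range(3)] for i in range(3)]

def Qtf3(t, eps=1e-3):
    """Third time derivative by the central stencil
    f''' ~ [f(t+2e) - 2f(t+e) + 2f(t-e) - f(t-2e)] / (2 e^3)."""
    q = [Qtf(t + k * eps) for k in (-2, -1, 1, 2)]
    return [[(q[3][i][j] - 2.0 * q[2][i][j] + 2.0 * q[1][i][j]
              - q[0][i][j]) / (2.0 * eps**3)
             for j in range(3)] for i in range(3)]

def power(t):
    q3 = Qtf3(t)
    return sum(q3[i][j] ** 2 for i in range(3) for j in range(3)) / 5.0

analytic = 32.0 / 5.0 * mu**2 * a**4 * Om**6
print(power(0.4), analytic)  # for a circular orbit the flux is constant
```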
\subsection{Extension to sources with non-negligible self gravity}
\label{sec:quad1}
In our derivation of the quadrupole formula (\ref{eq:quad_formula}) we assumed
the validity of the linearized Einstein equations. In particular, the
derivation is not applicable to systems with weak (Newtonian) gravity
whose dynamics are dominated by self-gravity, such as binary star
systems\footnote{Stress energy conservation in linearized gravity,
$\partial^\mu T_{\mu\nu} =0$, forces all bodies to move on geodesics of
the Minkowski metric.}. This shortcoming of the above
linearized-gravity derivation of the quadrupole formula was first
pointed out by Eddington. However, it is very straightforward to
extend the derivation to encompass systems with non-negligible self gravity.
In full GR, we define the quantity ${\bar h}^{\mu\nu}$ via
\begin{displaymath}
\sqrt{-g} g^{\mu\nu} = \eta^{\mu\nu} - {\bar h}^{\mu\nu},
\end{displaymath}
where $\eta^{\mu\nu} \equiv {\rm diag}(-1,1,1,1)$.
When gravity is weak, this definition coincides with
our previous definition of ${\bar h}^{\mu\nu}$ as a trace-reversed metric
perturbation. We impose the harmonic gauge condition
\begin{equation}
\partial_\mu (\sqrt{-g} g^{\mu\nu}) = \partial_\mu {\bar h}^{\mu\nu} =0.
\label{eq:harmonic}
\end{equation}
In this gauge, the Einstein equation can be written as
\begin{equation}
\Box_{\rm flat} {\bar h}^{\mu\nu} = - 16 \pi ( T^{\mu\nu} + t^{\mu\nu} ),
\label{eq:enonlin}
\end{equation}
where $\Box_{\rm flat} \equiv \eta^{\mu\nu} \partial_\mu \partial_\nu$ is the
flat-spacetime wave operator, and $t^{\mu\nu}$ is a pseudotensor that is
constructed from ${\bar h}^{\mu\nu}$. Taking a coordinate divergence of
this equation and using the gauge condition (\ref{eq:harmonic}), stress-energy conservation can be written
\begin{equation}
\partial_\mu (T^{\mu\nu} + t^{\mu\nu}) =0.
\label{eq:nonlincons}
\end{equation}
Eqs. (\ref{eq:harmonic})- (\ref{eq:enonlin}) and
(\ref{eq:nonlincons}) are precisely the same equations as are used in
the linearized-gravity derivation of the quadrupole formula, except
for the fact that the stress energy tensor $T^{\mu\nu}$ is replaced by
$T^{\mu\nu} + t^{\mu\nu}$. Therefore the derivation of the last subsection
carries over, with the modification that the formula
(\ref{eq:quad_moment1}) for $Q_{ij}$ is replaced by
\begin{displaymath}
Q_{ij}(t) = \int d^3 x' \left[T^{00}(t,{\bf x}') + t^{00}(t,{\bf
x}')\right] x^{\prime i}x^{\prime j}.
\end{displaymath}
In this equation the term $t^{00}$ describes gravitational binding
energy, roughly speaking. For systems with weak gravity, this term is
negligible in comparison with the term $T^{00}$ describing the
rest-masses of the bodies. Therefore the quadrupole formula
(\ref{eq:quad_formula}) and the original definition
(\ref{eq:quad_moment1}) of $Q_{ij}$ continue to apply to the more
general situation considered here.
\subsection{Dimensional analysis}
The rough form of the leading GW field that we just derived, Eq.\
(\ref{eq:quad_formula}), can be deduced using simple physical
arguments. First, we define some moments of the mass distribution.
The zeroth moment is just the mass itself:
\begin{displaymath}
M_0 \equiv \int \rho\,d^3x = M\;.
\end{displaymath}
More accurately, this is the total mass-energy of the source.
Next, we define the dipole moment:
\begin{displaymath}
M_1 \equiv \int \rho\,x_i\,d^3x = ML_i\;.
\end{displaymath}
$L_i$ is a vector with the dimension of length; it describes the
displacement of the center of mass from the origin we chose. As such,
$M_1$ is clearly not a very meaningful quantity --- we can change its
value simply by choosing a different origin.
If our mass distribution exhibits internal motion, then moments of the
{\it mass current}, $j_i = \rho v_i$, are also important. The first
moment is the spin angular momentum:
\begin{displaymath}
S_1 \equiv \int \rho v_j\,x_k\,\epsilon_{ijk}\,d^3x = S_i\;.
\end{displaymath}
Finally, we look at the second moment of the mass distribution:
\begin{displaymath}
M_2 \equiv \int \rho\,x_i\,x_j\,d^3x = M L_{ij}
\end{displaymath}
where $L_{ij}$ is a tensor with the dimension length squared.
Using dimensional analysis and simple physical arguments, it is simple
to see that the first moment that can contribute to GW emission is
$M_2$. Consider first $M_0$. We want to combine $M_0$ with the
distance to our source, $r$, in such a way as to produce a
dimensionless wavestrain $h$. The only way to do this (bearing in
mind that the strain should fall off as $1/r$, and restoring factors
of $G$ and $c$) is to put
\begin{displaymath}
h \sim \frac{G}{c^2}\frac{M_0}{r}\;.
\label{eq:hnewton}
\end{displaymath}
Conservation
of mass-energy tells us that $M_0$ for an isolated source cannot vary
dynamically. This $h$ cannot be radiative; it corresponds to a
Newtonian potential, rather than a GW.
Let us now consider the moment $M_1$. In order to get the right dimensions, we
must take one time derivative:
\begin{displaymath}
h \sim \frac{G}{c^3}\frac{d}{dt}\frac{M_1}{r}\;.
\end{displaymath}
The extra factor of $c$ converts the dimension of the time derivative
to space, so that the whole expression is dimensionless. Think
carefully about the derivative of $M_1$:
\begin{displaymath}
\frac{dM_1}{dt} = \frac{d}{dt}\int\rho\,x_i\,d^3x =
\int\rho\,v_i\,d^3x = P_i\;.
\end{displaymath}
This is the total momentum of our source. Our guess for the form of a
wave corresponding to $M_1$ becomes
\begin{equation}
h \sim \frac{G}{c^3}\frac{P}{r}\;.
\label{eq:hboost}
\end{equation}
Also this formula cannot describe a GW. The momentum of an
isolated source must be conserved. By boosting into a different
Lorentz frame, we can always set $P = 0$. Terms like this can only be
gauge artifacts; they do not correspond to radiation. Indeed, terms
like (\ref{eq:hboost}) appear in the metric of a moving BH,
and correspond to the relative velocity of the BH and the observer, \cite{membrane}.
Dimensional analysis tells us that radiation from
$S_1$ must take the form
\begin{displaymath}
h \sim \frac{G}{c^4}\frac{d}{dt}\frac{S_1}{r}.
\end{displaymath}
Conservation of angular momentum tells us that the total spin of an
isolated system cannot change, so we must reject also this term.
Finally, we examine $M_2$:
\begin{displaymath}
h \sim \frac{G}{c^4}\frac{d^2}{dt^2}\frac{M_2}{r}\;.
\end{displaymath}
There is {\it no} conservation principle that allows us to reject this
term. This is
the quadrupole formula we derived earlier, up to numerical factors.
In ``normal'' units, the prefactor of this formula turns out to be
$G/c^4$ --- a small number divided by a {\it very} big number. In
order to generate interesting amounts of GWs, the variation of the quadrupole moment must be enormous. The only interesting sources of GWs will
be those which have very large masses undergoing extremely rapid
variation; even in this case, the strain we expect from typical sources
is tiny. The smallness of GWs reflects the fact that gravity is the
weakest of the fundamental interactions.
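To make the smallness of the $G/c^4$ prefactor concrete, one can put numbers into the last formula for a hypothetical laboratory source; the dumbbell parameters below are illustrative assumptions of ours, not taken from the text.

```python
G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s

prefactor = G / c**4   # ~ 8e-45 in SI units: the famous suppression factor

# Hypothetical lab source: a 1-tonne dumbbell of metre size spun at 10 Hz,
# observed from 10 m.  Roughly, d^2 M_2 / dt^2 ~ M L^2 Omega^2.
M, L, Omega, r = 1.0e3, 1.0, 2.0 * 3.141592653589793 * 10.0, 10.0
h_lab = prefactor * M * L**2 * Omega**2 / r
```

The strain `h_lab` comes out around $10^{-39}$, hopelessly below any conceivable sensitivity, which is why only astrophysical masses in rapid motion matter as GW sources.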
\subsection{Numerical estimates}
\label{subsec:numerstrain}
Consider a binary star system, with stars of mass $m_1$ and $m_2$ in a
circular orbit with separation $R$. The quadrupole moment is given by
\begin{equation}
{\cal Q}_{ij} = \mu\left(x_i x_j - \frac{1}{3}R^2\delta_{ij}\right)\;,
\end{equation}
where
${\bf x}$ is the relative displacement, with $|{\bf x}| = R$. We
use the center-of-mass reference frame, and
choose the coordinate axes so that the binary lies in the $xy$ plane,
so $x = x_1 = R\cos\Omega t$, $y = x_2 = R\sin\Omega t$, $z = x_3 =
0$. Let us further choose to evaluate the field on the $z$ axis, so
that ${\bf n}$ points in the $z$-direction. The projection operators
in Eq.\ (\ref{eq:quad_formula}) then simply serve to remove the $zj$
components of the tensor. Bearing this in mind, the quadrupole
formula (\ref{eq:quad_formula}) yields
\begin{displaymath}
h^{\rm TT}_{ij} = \frac{2 \ddot {\cal Q}_{ij}}{r}\;.
\end{displaymath}
The quadrupole moment tensor is
\begin{displaymath}
{\cal Q}_{ij} = \mu R^2\left[
\begin{array}{ccc}
\cos^{2}\Omega t-\frac{1}{3} & \cos\Omega t \sin\Omega t & 0\\
\cos\Omega t\sin\Omega t & \sin^2\Omega t - \frac{1}{3} & 0 \\
0 & 0 & -\frac{1}{3} \\
\end{array}
\right]\;;
its second derivative is
\begin{displaymath}
\ddot{\cal Q}_{ij} = -2\Omega^2\mu R^2
\left[\begin{array}{ccc}
\cos2\Omega t & \sin2\Omega t & 0 \\
\sin2\Omega t &-\cos2\Omega t & 0 \\
0 & 0 & 0\\\end{array}\right]\;.
\end{displaymath}
The magnitude $h$ of a typical non-zero component of $h^{\rm TT}_{ij}$
is
\begin{displaymath}
h = \frac{4\mu\Omega^2 R^2}{r} = \frac{4\mu M^{2/3}\Omega^{2/3}}{r}\;.
\end{displaymath}
We used Kepler's 3rd law\footnote{In units with $G = 1$, and for
circular orbits of radius $R$, $R^3\Omega^2 = M$.} to replace $R$ with
powers of the orbital frequency $\Omega$ and the total mass $M = m_1 +
m_2$.
The combination of masses here, $\mu M^{2/3}$,
appears quite often in studies of GW emission from binaries; it
motivates the definition of the {\it chirp mass}:
\begin{equation}
{\cal M} = \mu^{3/5}M^{2/5}\;.
\end{equation}
For the purpose of numerical estimate, we will take the members of
the binary to have equal masses, so that $\mu = M/4$:
\begin{displaymath}
h = \frac{M^{5/3}\Omega^{2/3}}{r}\;.
\end{displaymath}
Finally, we insert numbers corresponding to plausible sources:
\begin{eqnarray*}
h &\simeq& 10^{-21}\left(\frac{M}{2\,M_\odot}\right)^{5/3}
\left(\frac{1\,\mbox{hour}}{P}\right)^{2/3}
\left(\frac{1\,\mbox{kiloparsec}}{r}\right)
\nonumber\\
&\simeq& 10^{-22}\left(\frac{M}{2.8\,M_\odot}\right)^{5/3}
\left(\frac{0.01\,\mbox{sec}}{P}\right)^{2/3}
\left(\frac{100\,\mbox{Megaparsecs}}{r}\right)\;.
\end{eqnarray*}
The first line corresponds roughly to the mass, distance and orbital
period ($P = 2\pi/\Omega$) expected for the many close binary white
dwarf systems in our Galaxy. Such binaries are so common that they
are likely to be a confusion-limited source of GWs for space-based
detectors, acting in some cases as an effective source of noise. The
second line contains typical parameter values for binary neutron stars
that are on the verge of spiralling together and merging. Such waves
are targets for the ground-based detectors that have recently begun
operations. The {\it tiny} magnitude of these waves illustrates why
detecting GWs is so difficult.
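The two numerical lines above can be reproduced directly from $h = G^{5/3}M^{5/3}\Omega^{2/3}/(c^4 r)$ with the factors of $G$ and $c$ restored. The short Python check below is ours; the parameter choices simply mirror the two fiducial sources quoted in the text.

```python
import math

G, c = 6.674e-11, 2.998e8
Msun, kpc = 1.989e30, 3.086e19   # kg, metres

def strain(M_total_kg, P_orbit_s, r_m):
    """h ~ G^{5/3} M^{5/3} Omega^{2/3} / (c^4 r) for an equal-mass circular binary."""
    Om = 2.0 * math.pi / P_orbit_s
    return (G * M_total_kg)**(5.0/3.0) * Om**(2.0/3.0) / (c**4 * r_m)

h_wd = strain(2.0 * Msun, 3600.0, 1.0 * kpc)    # Galactic binary white dwarf
h_ns = strain(2.8 * Msun, 0.01, 1.0e5 * kpc)    # NS-NS binary at 100 Mpc
```

Both values land at the quoted orders of magnitude, $h_{\rm wd}\sim 10^{-21}$ and $h_{\rm ns}\sim 10^{-22}$, within a factor of a few.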
The emission of GWs costs energy and, to compensate for the loss of energy,
the radial separation $R$ between the two bodies must decrease. We shall
now derive how the orbital frequency and the GW frequency change in time,
using Newtonian dynamics and the balance equation
\begin{equation}
\frac{d E_{\rm orbit}}{d t} = - P\,,
\label{eq:84}
\end{equation}
where $P$ is the radiated power.
At Newtonian order, $E_{\rm orbit} = - m_1\,m_2/(2R)$. Thus, $\dot{R} = -2/3\,(R\,\Omega)\,
(\dot{\Omega}/\Omega^2)$. As long as $\dot{\Omega}/\Omega^2 \ll 1$, the radial
velocity is smaller than the tangential velocity and the binary's motion
is well approximated by an adiabatic sequence of quasi-circular orbits.
Eq. (\ref{eq:84}) implies that the orbital frequency varies as
\begin{equation}
\frac{\dot{\Omega}}{\Omega^2}=\frac{96}{5}\,\nu\,\left (\frac{G M\Omega}{c^3} \right )^{5/3}\,,
\label{eq}
\end{equation}
where $\nu=\mu/M$ is the symmetric mass ratio,
and the GW frequency $f_{\rm GW} = \Omega/\pi$ varies as
\begin{equation}
\dot{f}_{\rm GW} = \frac{96}{5}\pi^{8/3}\,\left (\frac{G\,{\cal M}}{c^3} \right )^{5/3}\,f_{\rm GW}^{11/3}\,.
\label{eq:85}
\end{equation}
Introducing the time
to coalescence $\tau = t_{\rm coal} -t$, and integrating Eq.~(\ref{eq:85}), we get
\begin{equation}
f_{\rm GW} \simeq 130 \left (\frac{1.21 M_\odot}{\cal M}\right )^{5/8}\,
\left (\frac{1 {\rm sec}}{\tau} \right )^{3/8}\,{\rm Hz}\,,
\label{eq:86}
\end{equation}
where $1.21 M_\odot$ is the chirp mass of a NS-NS binary.
Eq.(\ref{eq:86}) predicts, { \it e.g.} coalescence times of
$ \sim 17 {\rm min}, 2 {\rm sec}, 1 {\rm msec}$, for $f_{\rm GW} \sim
10, 100 ,10^3$ Hz. Using the above equations,
it is straightforward to compute the relation between the radial
separation and the GW frequency. We find
\begin{equation}
R \simeq 300 \left (\frac{M}{2.8 M_\odot} \right )^{1/3}\,
\left (\frac{100\, {\rm Hz}}{f_{\rm GW}} \right )^{2/3}\, {\rm km}\,.
\label{eq:87}
\end{equation}
Finally, a useful quantity is the number of GW cycles, defined by
\begin{equation}
{\cal N}_{\rm GW} = \frac{1}{\pi} \int_{t_{\rm in}}^{t_{\rm fin}}
\Omega(t)\,dt = \frac{1}{\pi} \int_{\Omega_{\rm in}}^{\Omega_{\rm fin}}
\frac{\Omega}{\dot{\Omega}}\,d\Omega\,.
\label{cycles}
\end{equation}
Assuming $\Omega_{\rm fin} \gg \Omega_{\rm in}$, we get
\begin{equation}
{\cal N}_{\rm GW} \simeq 10^4\,\left (\frac{{\cal M}}{1.21 M_\odot} \right )^{-5/3}\,
\left (\frac{f_{\rm in}}{10 {\rm Hz}} \right )^{-5/3}\,.
\label{eq:88}
\end{equation}
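The leading-order chirp relations above are easy to check numerically. The sketch below (ours) integrates the leading-order chirp equation in closed form, with $G$ and $c$ restored, giving the time to coalescence and the accumulated cycles for the fiducial NS-NS chirp mass ${\cal M}=1.21\,M_\odot$.

```python
import math

G, c = 6.674e-11, 2.998e8
Msun = 1.989e30

def tau_coalescence(f_gw, Mc_kg):
    """Time to coalescence from GW frequency f_gw (leading, Newtonian order)."""
    tM = G * Mc_kg / c**3                       # chirp mass expressed in seconds
    return (5.0 / 256.0) * tM**(-5.0/3.0) * (math.pi * f_gw)**(-8.0/3.0)

def n_cycles(f_in, f_fin, Mc_kg):
    """GW cycles accumulated between f_in and f_fin, integrating f/f_dot."""
    tM = G * Mc_kg / c**3
    return (f_in**(-5.0/3.0) - f_fin**(-5.0/3.0)) / \
           (32.0 * math.pi**(8.0/3.0) * tM**(5.0/3.0))

Mc = 1.21 * Msun
tau_10 = tau_coalescence(10.0, Mc)      # ~ 17 minutes left at 10 Hz
tau_100 = tau_coalescence(100.0, Mc)    # ~ 2 seconds left at 100 Hz
N = n_cycles(10.0, 1.0e4, Mc)           # of order 10^4 cycles from 10 Hz
```

The outputs reproduce the coalescence times and the ${\cal N}_{\rm GW}\sim 10^4$ estimate quoted in the text.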
\section{Gravitational waves from sources in Newtonian motion}
We now have all the ingredients needed to characterize gravitational radiation with respect to the motion of the sources, {\it i.e.} with respect to different types of stellar encounters. Let us start with the Newtonian cases.
With the above formalism, it is possible to estimate the amount of
energy emitted in the form of GWs from a system of massive objects
interacting among them \cite{pm1,pm2}. Considering the quadrupole components for two bodies interacting in a Newtonian gravitational field, we have:
\begin{equation}
\begin{array}{lll}
Q_{xx}=\mu r^2(3\cos{^2\phi}\sin{^2\theta}-1)~,\\ \\
Q_{yy}=\mu r^2(3\sin{^2\phi}\sin{^2\theta}-1)~,\\ \\
Q_{zz}=\frac{1}{2} r^2 \mu (3 \cos2 \theta+1) ~,\\ \\
Q_{xz}=Q_{zx}=r^2 \mu (\frac{3}{2} \cos\phi \sin2\theta)~,\\ \\
Q_{yz}=Q_{zy}=r^2 \mu (\frac{3}{2} \sin 2\theta \sin \phi)~,\\ \\
Q_{xy}=Q_{yx}=r^2 \mu \left(\frac{3}{2} \sin ^2\theta \sin2\phi\right)~,
\end{array}\label{eq:quadrupoli}
\end{equation}
where the masses $m_{1}$ and $m_{2}$ have spherical coordinates
$\{r_{i}\sin\theta\cos\phi,\; r_{i}\sin\theta\sin\phi,\:
r_{i}\cos\theta\}$ with $i=1,2$. We will work in the equatorial plane
($\theta=\pi/2$). The origin of the motions is
taken at the center of mass. Such components can be differentiated
with respect to time, as in Eq.(\ref{eq:dEdt}), in order to derive the amount of gravitational radiation in the various Newtonian orbital motions.
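A minimal numerical encoding of Eq.~(\ref{eq:quadrupoli}) is useful when the subsequent time derivatives are taken numerically rather than analytically. The sketch below is our own illustration; it builds $Q_{ij}=\mu(3x_ix_j-r^2\delta_{ij})$ from the spherical coordinates of the reduced mass.

```python
import math

def quadrupole(mu, r, phi, theta=math.pi / 2):
    """Q_ij = mu (3 x_i x_j - r^2 delta_ij) for the reduced two-body problem,
    with spherical coordinates (r, theta, phi); theta = pi/2 is the equatorial plane."""
    pos = (r * math.sin(theta) * math.cos(phi),
           r * math.sin(theta) * math.sin(phi),
           r * math.cos(theta))
    return [[mu * (3.0 * pos[i] * pos[j] - (r * r if i == j else 0.0))
             for j in range(3)] for i in range(3)]
```

By construction the tensor is symmetric and traceless, as the components listed in Eq.~(\ref{eq:quadrupoli}) require.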
\subsection{Gravitational wave luminosity from circular and elliptical orbits}
The first case we are going to consider is that of closed circular and elliptical orbits.
Using Eq.(\ref{eq:traiettoria}), let us derive the angular
velocity equation
\begin{displaymath}
\dot{\phi}=\frac{\sqrt{G l (m_{1}+m_{2})} (\epsilon \cos\phi+1)^2}{l^2}
\label{eq:angularvelo}
\end{displaymath}
and then, from Eqs.(\ref{eq:quadrupoli}), the third derivatives of quadrupolar components
for the elliptical orbits are:
\begin{eqnarray*}
\frac{d^{3}Q_{xx}}{dt^{3}}&=&\beta(24 \cos\phi+\epsilon (9 \cos 2 \phi+11)) \sin \phi\,, \\
\frac{d^{3}Q_{yy}}{dt^{3}}&=&-\beta(24 \cos\phi+\epsilon(13+9 \cos 2 \phi)) \sin\phi\,,\\
\frac{d^{3}Q_{zz}}{dt^{3}}&=&-2\beta \epsilon \sin\phi\,, \\
\frac{d^{3}Q_{xy}}{dt^{3}}&=&\beta(24 \cos\phi+\epsilon (11+9 \cos 2 \phi)) \sin\phi\,, \\
\end{eqnarray*}
where
\begin{displaymath}
\beta=\frac{(G\, l\, (m_{1}+m_{2}))^{3/2} \mu\, (\epsilon
\cos\phi+1)^2}{l^4}\,.
\end{displaymath}
Since
\begin{eqnarray*}
Q_{ij}^{(3)}Q^{(3)ij}=\frac{G^3}{l^5}\left[ (m_{1}+m_{2})^3 \mu ^2 (1+\epsilon \cos\phi)^4\right.\\
\left(415 \epsilon ^2+3 (8 \cos\phi+3 \epsilon \cos2 \phi) \right.\\
\left.(72 \cos\phi+\epsilon (70+27 \cos2 \phi)))
\sin^{2}\phi\right]
\end{eqnarray*}
the total power radiated is given by
\begin{displaymath}
\frac{dE}{dt}=\frac{G^4}{45c^5l^5}f(\phi)\,,\end{displaymath}
where
\begin{eqnarray*}
f(\phi)&=&(m_{1}+m_{2})^3 \mu ^2 (1+\epsilon \cos\phi)^4\times\\ &&
(415 \epsilon ^2+3 (8 \cos\phi+3 \epsilon \cos2 \phi)\times \\ &&
(72 \cos\phi+\epsilon (70+27 \cos2 \phi)))
\sin^{2}\phi.
\end{eqnarray*}
The total energy emitted in the form of gravitational radiation,
during the interaction, is :
\begin{displaymath}
\Delta E=\int^{\infty}_0 \left|\frac{dE}{dt}\right| dt~.
\end{displaymath}
From Eq.(\ref{eq:momang1}), we can adopt the angle $\phi$ as a
suitable integration variable. In this case, the energy emitted
for $\phi_1<\phi<\phi_2$ is
\begin{displaymath}
\Delta E(\phi_1,\phi_2)
=\frac{G^4}{45c^5l^5}\int^{\phi_2}_{\phi_1}f(\phi)~d\phi~,\label{eq:integraleenergia}
\end{displaymath}
and the total energy can be determined from the previous relation
in the limits $\phi_1\rightarrow 0$ and $\phi_2\rightarrow
\pi$. Thus, one has
\begin{displaymath}
\Delta E=\frac{G^4 \pi (m_{1}+m_{2})^3 \mu ^2}{l^5c^5}F(\epsilon)~
\end{displaymath}
where $F(\epsilon)$ depends on the initial conditions only and it is
given by
\begin{displaymath}
F(\epsilon)=\frac{ \left(13824+102448 \epsilon ^2+59412 \epsilon ^4+2549 \epsilon ^6\right)}{2880}.
\end{displaymath}
In other words, the gravitational wave luminosity strictly depends
on the configuration and kinematics of the binary system.
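The enhancement of the radiated energy with eccentricity can be read off $F(\epsilon)$ directly; the snippet below (our own) simply evaluates the polynomial.

```python
def F_ell(eps):
    """Eccentricity function multiplying the radiated energy, elliptical case."""
    return (13824.0 + 102448.0 * eps**2 + 59412.0 * eps**4 + 2549.0 * eps**6) / 2880.0

# A moderately eccentric encounter radiates several times more energy
# than a circular one with the same semi-latus rectum.
ratio = F_ell(0.7) / F_ell(0.0)   # ~ 5.7
```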
\subsection{Gravitational wave luminosity from parabolic and hyperbolic orbits}
In this case, we use Eq.(\ref{eq:traie}) and Eq.
(\ref{eq:quadrupoli}) to calculate the quadrupolar formula for
parabolic and hyperbolic orbits. The angular velocity is
\begin{displaymath}
\dot{\phi}=l^2 L (\epsilon \cos\phi+1)^2,
\label{eq:angularvelo1}
\end{displaymath}
and the quadrupolar derivatives are
\begin{eqnarray*}
\frac{d^{3}Q_{xx}}{dt^{3}}&=&\rho(24 \cos\phi+\epsilon(9 \cos 2 \phi+11)) \sin \phi, \\
\frac{d^{3}Q_{yy}}{dt^{3}}&=&-\rho(24 \cos\phi+\epsilon (13+9 \cos2 \phi)) \sin\phi,\\
\frac{d^{3}Q_{zz}}{dt^{3}}&=&-2\rho\epsilon\sin\phi, \\
\frac{d^{3}Q_{xy}}{dt^{3}}&=&-\frac{3}{2}\rho(\epsilon \cos \phi+1)^2 (5 \epsilon\cos \phi+8 \cos 2 \phi+3 \epsilon \cos3 \phi),\\
\end{eqnarray*}
where
\begin{displaymath}
\rho=l^4 L^3 \mu (\epsilon \cos\phi+1)^2\,.
\end{displaymath}
The radiated power is given by
\begin{eqnarray*}
\frac{dE}{dt}&=&-\frac{G \rho^2}{120 c^5} \times \\ &&[ 314
\epsilon ^2+(1152 \cos\phi+187 \epsilon \cos 2 \phi \\ &&-3
(80 \cos 3 \phi+30 \epsilon \cos4 \phi+48 \cos 5 \phi+9 \epsilon \cos6 \phi)) \epsilon
\\ &&-192 \cos4 \phi+576],
\end{eqnarray*}
then
\begin{displaymath}
\frac{dE}{dt}=-\frac{G l^8 L^6 \mu ^2 }{120 c^5 }f(\phi),
\end{displaymath}
where $f(\phi)$, in this case, is
\begin{eqnarray*}
f(\phi)&=& 314
\epsilon ^2+(1152 \cos\phi+187 \epsilon \cos 2 \phi-3
(80 \cos 3 \phi\\ &&+30 \epsilon \cos4 \phi+48 \cos 5 \phi+9 \epsilon \cos6 \phi)) \epsilon\\ &&-192 \cos4 \phi+576\,.
\end{eqnarray*}
Then using Eq. (\ref{eq:dEdt}), the total energy emitted in the
form of gravitational radiation, during the interaction as a
function of $\phi$, is given by
\begin{eqnarray*}
\Delta E(\phi_1,\phi_2)
=-\frac{G l^8 L^6 \mu ^2}{480 c^5}\int^{\phi_2}_{\phi_1}f(\phi)~d\phi~,
\label{eq:integraleenergia1}
\end{eqnarray*}
and the total energy can be determined from the previous relation
in the limits $\phi_1\rightarrow -\pi$ and $\phi_2\rightarrow\pi$ in the parabolic case. Thus, one has
\begin{eqnarray*}
\Delta E= -\frac{G l^8 L^6 \pi \mu^2}{480 c^5}F(\epsilon)~,
\end{eqnarray*}
where $F(\epsilon)$ depends on the initial conditions only and it is
given by
\begin{displaymath}
F(\epsilon)= \left(1271 \epsilon ^6+24276 \epsilon ^4+34768
\epsilon ^2+4608\right)~.
\end{displaymath}
In the hyperbolic case, we have that the total energy is determined in the limits
${\displaystyle \phi_1\rightarrow -\frac{3\pi}{4}}$ and ${\displaystyle \phi_2\rightarrow\frac{3\pi}{4}}$, i.e.
\begin{displaymath}
\Delta E=-\frac{G l^8 L^6\mu^2}{201600 c^5}F(\epsilon)~,
\end{displaymath}
where $F(\epsilon)$ depends on the initial conditions only and is
given by
\begin{eqnarray*}
F(\epsilon)&=& [315 \pi (1271 \epsilon ^6+24276
\epsilon ^4+34768 \epsilon
^2+4608)+\\ & & +16 \epsilon[\epsilon[\epsilon(926704 \sqrt{2}-7
\epsilon (3319 \epsilon ^2-32632 \sqrt{2} \epsilon
\\ &&+55200))-383460]+352128 \sqrt{2}]]~.
\end{eqnarray*}
As above, the gravitational wave luminosity strictly depends on
the configuration and kinematics of the binary system.
\subsection{Gravitational wave amplitude from elliptical orbits}
Besides luminosity, we can also characterize the GW amplitude starting from the motion of the sources. In the case of a binary system, the single amplitude components
are straightforwardly found to be
\begin{displaymath}
\begin{array}{llllllll}
h^{11}=-\frac{2G}{Rc^4}\frac{G (m_{1}+m_{2}) \mu (13 \epsilon \cos \phi+12 \cos2 \phi+\epsilon (4 \epsilon +3 \cos3 \phi))}{2 l}~,
\\ \\
h^{22}=\frac{2G}{Rc^4}\frac{G (m_{1}+m_{2}) \mu (17 \epsilon \cos\phi+12 \cos2 \phi+\epsilon (8 \epsilon +3 \cos3 \phi))}{2 l}
~,
\\ \\
h^{12}=h^{21}=-\frac{2G}{Rc^4}\frac{G (m_{1}+m_{2}) \mu (13 \epsilon \cos\phi+12 \cos2 \phi+\epsilon (4 \epsilon +3 \cos3 \phi))}{2
l}
~,
\end{array}
\end{displaymath}
so that the expected strain amplitude
$h\simeq(h_{11}^2+h_{22}^2+2h_{12}^2)^{1/2}$ turns out to be
\begin{eqnarray*}
h&=&\frac{G^2 (m_{1}+m_{2}) \mu}{c^4 Rl}\times\\ &&
(3 (13 \epsilon \cos\phi+12 \cos2 \phi+\epsilon (4 \epsilon
+3
\cos3 \phi))^2\\ &&+(17 \epsilon \cos\phi+
12 \cos2 \phi+\epsilon (8 \epsilon +3 \cos3 \phi))^2)^{\frac{1}{2}}
~,
\end{eqnarray*}
which, as before, strictly depends on the initial conditions of
the stellar encounter. A remark is in order at this point. A
monochromatic gravitational wave has, at most, two independent
degrees of freedom. In fact, in the TT gauge, we have $h_{+} =
h_{11} = -h_{22}$ and $h_{\times} = h_{12} = h_{21}$ (see e.g.
\cite{bla}). As an example, the amplitude of gravitational wave is
sketched in Fig. \ref{fig:ellisse800pc} for a stellar encounter, in Newtonian motion,
close to the Galactic Center. The adopted initial parameters are
typical of a close impact and are assumed to be $b=1$ AU for the impact parameter and
$v_{0}=200$ km s$^{-1}$ for the initial velocity, respectively. Here, we have fixed
$m_{1}=m_{2}=1.4M_{\odot}$. The impact parameter is defined as
$L=bv$ where $L$ is the angular momentum and $v$ the incoming
velocity. We have chosen a typical velocity of a star in the
galaxy and we are considering, essentially, compact objects with
masses comparable to the Chandrasekhar limit $(\sim
1.4M_{\odot})$. This choice is motivated by the fact that
ground-based experiments like VIRGO or LIGO expect to detect
typical GW emissions from the dynamics of these objects or from
binary systems composed by them (see e.g. \cite{maggiore}).
\begin{figure}
\includegraphics[scale=0.5]{ellisse800pc.eps}
\caption{The gravitational wave-forms from elliptical orbits shown
as function of the polar angle $\phi$. We have fixed
$m_{1}=m_{2}=1.4M_{\odot}$. $m_{2}$ is considered at rest while
$m_{1}$ is moving with initial velocity $v_{0}=200$ km s$^{-1}$ and
an impact parameter $b=1$ AU. The distance of the GW source is
assumed to be $R=8$ kpc and the eccentricity is $\epsilon=
0.2,0.5, 0.7.$ } \label{fig:ellisse800pc}
\end{figure}
\subsection{Gravitational wave amplitude from parabolic and hyperbolic orbits}
The single amplitude components for
parabolic and hyperbolic orbits are
\begin{displaymath}
\begin{array}{llllllll}
h^{11}=-\frac{G l^2 L^2\mu}{Rc^4}(13 \epsilon \cos \phi+12 \cos2 \phi+\epsilon (4 \epsilon +3 \cos3 \phi))
~,
\\ \\
h^{22}=\frac{Gl^2 L^2\mu}{Rc^4}(17 \epsilon \cos\phi+12 \cos2 \phi+\epsilon (8 \epsilon +3 \cos3 \phi))
~,
\\ \\
h^{12}=h^{21}=-\frac{3Gl^2 L^2\mu}{Rc^4} (4 \cos \phi+\epsilon (\cos2 \phi+3)) \sin\phi~,
\end{array}
\end{displaymath}
and then the expected strain amplitude is
\begin{eqnarray*}
h&=&\frac{G\, l^2 L^2 \mu }{c^4 R}\times
(10 \epsilon ^4+9 \epsilon ^3\cos 3 \phi+59 \epsilon ^2
\cos2 \phi \\ &&+59 \epsilon ^2+\left(47 \epsilon
^2+108\right) \epsilon\cos\phi +36)^{\frac{1}{2}}
~,
\end{eqnarray*}
which, as before, strictly depends on the initial conditions of
the stellar encounter. We note that the gravitational wave
amplitude has the same analytical expression for both cases and
differs only for the value of $\epsilon$ which is $\epsilon=1$ if
the motion is parabolic and the polar angle range is
$\phi\in(-\pi,\pi)$, while it is $\epsilon>1$, with $\phi$ restricted
to the range between the asymptotes, for hyperbolic orbits. In these cases, we have
non-returning objects.
The amplitude of the gravitational wave is sketched in Figs.
\ref{fig:parabola} and \ref{fig:iperbole8} for stellar encounters
close to the Galactic Center. As above, we consider a close
impact and assume $b=1$ AU, $v_{0}=200$ km s$^{-1}$ and
$m_{1}=m_{2}=1.4M_{\odot}$. In summary, we can say that, also in the case of Newtonian motion of the sources, the orbital features characterize the GW luminosities and amplitudes.
\begin{figure}
\includegraphics[scale=0.5]{parabola.eps}
\caption{The gravitational wave-forms for a parabolic encounter as
a function of the polar angle $\phi$. As above,
$m_{1}=m_{2}=1.4M_{\odot}$ and $m_{2}$ is considered at rest.
$m_{1}$ is moving with initial velocity $v_{0}=200$ km s$^{-1}$
with an impact parameter $b=1$ AU. The distance of the GW source
is assumed at $R=8$ kpc. The eccentricity is $\epsilon=1$. }
\label{fig:parabola}
\end{figure}
\begin{figure}
\includegraphics[scale=0.5]{iperbole8.eps}
\caption{The gravitational wave-forms for hyperbolic encounters as
function of the polar angle $\phi$. As above, we have fixed
$m_{1}=m_{2}=1.4M_{\odot}$. $m_{2}$ is considered at rest while
$m_{1}$ is moving with initial velocity $v_{0}=200$ km s$^{-1}$ and
an impact parameter $b=1$ AU. The distance of the source is
assumed at $R=8$ kpc. The eccentricity is assumed with the values
$\epsilon=1.2,1.5,1.7$ .} \label{fig:iperbole8}
\end{figure}
\section{Gravitational waves from sources in relativistic motion}
It is straightforward to extend the above considerations to orbital motions containing post-Newtonian corrections. It is clear that the GW luminosity and amplitude strictly depend on the parameter $v/c$, considered at various orders of approximation, and, as discussed above, the global features of the orbits fully characterize the gravitational emission. We now study how the waveforms depend on the dynamics of binary and colliding systems and how relativistic corrections modulate the features of the gravitational radiation.
\subsection{Inspiralling waveform including post-Newtonian corrections}
\label{sec6.2}
As we have shown in the above section, the PN method involves an expansion around the Newtonian
limit, keeping terms of higher order in the small
parameter~\cite{TD87,PNreview,buonanno}
\begin{equation}
\epsilon \sim \frac{v^2}{c^2} \sim \left |h_{\mu \nu} \right |
\sim \left |\frac{\partial_0 h}{\partial_i h} \right |^2 \sim
\left |\frac{T^{0i}}{T^{00}}\right | \sim \left |\frac{T^{ij}}{T^{00}}
\right |\,.
\end{equation}
In order to be able to determine the dynamics of binary systems with a precision
acceptable for detection, it has been necessary to compute the force determining
the motion of the two bodies and the amplitude of the gravitational radiation
with a precision going beyond the quadrupole formula.
For nonspinning BHs, the two-body equations of motion and the GW
flux are currently known through 3.5PN order~\cite{PNnospin}.
Specifically, if we restrict
the discussion to circular orbits, as Eq.~(\ref{eq}) shows,
there exists a natural {\it adiabatic} parameter $\dot{\Omega}/\Omega^2 \cong {\cal O}[(v/c)^5]$.
Higher-order PN corrections to Eq.~(\ref{eq}) have been
computed~\cite{PNnospin,PNspin}, yielding the general equation:
\begin{equation}
\frac{\dot{\Omega}}{\Omega^2} = \frac{96}{5}\,\nu\,v_\Omega^{5}\,
\sum_{k=0}^7 {\Omega}_{(k/2)\mathrm{PN}}\,v_\Omega^{k}\,
\label{omegadot}
\end{equation}
where $G=1=c$ and where we define $v_\Omega \equiv (M\,\Omega)^{1/3}$. The PN-order term ${\Omega}_{(k/2)\mathrm{PN}}$ is given, up to $k=7$, by
\begin{equation}
{\Omega}_{0\mathrm{PN}} = 1\,,
\label{omegadotSTpn0}
\end{equation}
\begin{equation}
{\Omega}_{0.5\mathrm{PN}} = 0\,,\\
\label{omegadotSTpn05}
\end{equation}
\begin{equation}
{\Omega}_{1\mathrm{PN}} = -\frac{743}{336} -\frac{11}{4}\,\nu\,, \\
\label{omegadotSTpn1}
\end{equation}
\begin{equation}
{\Omega}_{1.5\mathrm{PN}} = 4\pi + \left[-\frac{47}{3}\frac{S_L}{M^2}
-\frac{25}{4}\frac{\delta m}{M}\frac{\Sigma_L}{M^2}\right]\,,\\
\label{omegadotSTpn15}
\end{equation}
\begin{eqnarray}
{\Omega}_{2\mathrm{PN}} &=&
\frac{34\,103}{18\,144}+\frac{13\,661}{2\,016}\,\nu+\frac{59}{18}\,\nu^2 - \nonumber \\
&& \frac{1}{48}\, \nu\,\chi_1\chi_2\left[247\,(\hat {S}_1\cdot\hat{ S}_2)- 721\,
(\boldsymbol{\hat{L}}\cdot\hat{S}_1)(\boldsymbol{\hat{L}}\cdot\hat{S}_2)\right]\,,\nonumber\\
\label{omegadotSTpn2}
\end{eqnarray}
\begin{eqnarray}
{\Omega}_{2.5\mathrm{PN}} &=& -\frac{1}{672}\,(4\,159 +15\,876\,\nu)\,\pi +
\left[\left(-\frac{31811}{1008}+\right.\right.\nonumber\\ && \left.\left.\frac{5039}{84}\nu\right)\frac{S_L}{M^2}+ \left(-\frac{473}{84}+\frac{1231}{56}\nu\right)\frac{\delta
m}{M}\frac{\Sigma_L}{M^2}\right]\,, \nonumber\\
\label{omegadotSTpn25}
\end{eqnarray}
\begin{eqnarray}
{\Omega}_{3\mathrm{PN}} &=&
\left(\frac{16\,447\,322\,263}{139\,708\,800}-\frac{1\,712}{105}\,\gamma_E+\frac{16}{3}\pi^2\right)+\nonumber\\ &&
\left(-\frac{56\,198\,689}{217\,728}+ \frac{451}{48}\pi^2 \right)\nu
+\nonumber\\ &&\frac{541}{896}\,\nu^2-\frac{5\,605}{2\,592}\,\nu^3
-\frac{856}{105}\log\left[16v^{2}\right]\,,\nonumber\\
\label{omegadotSTpn3}
\end{eqnarray}
\begin{eqnarray}
{\Omega}_{3.5\mathrm{PN}} &=& \left (
-\frac{4\,415}{4\,032}+\frac{358\,675}{6\,048}\,\nu+\frac{91\,495}{1\,512}\,\nu^2
\right )\,\pi\,.\nonumber\\
\label{omegadotSTpn35}
\end{eqnarray}
We denote by $\boldsymbol{L} = \mu \,\bf{X} \times \bf{V}$ the
Newtonian angular momentum (with $\bf{X}$ and $\bf{V}$, as above, the two-body center-of-mass
radial separation and relative velocity), and $\boldsymbol{\hat{L}} = \boldsymbol{L} /
|\boldsymbol{L}|$; $\bf{S}_1 =\chi_1\,m_1^2\,\hat{S}_1$ and $\bf{S}_2
=\chi_2\,m_2^2\,\hat{S}_2$ are the spins of the two bodies (with
$\hat{S}_{1,2}$ unit vectors, and $0 < \chi_{1,2} < 1$ for BHs) and
\begin{equation}
\label{spins}
\mathbf{S} \equiv \mathbf{S}_1 + \mathbf{S}_2\,, \quad \mathbf{\Sigma} \equiv M\left[\frac{\mathbf{S}_2}{m_2} -
\frac{\mathbf{S}_1}{m_1}\right]\,.
\end{equation}
Finally, $\delta m = m_1-m_2$ and $\gamma_E=0.577\ldots$ is Euler's constant.
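For a quick feel of the relative size of the PN corrections, the nonspinning part of Eqs.~(\ref{omegadotSTpn0})-(\ref{omegadotSTpn35}) can be coded up directly. The sketch below (our own) sets all spin terms to zero and uses the convention $v_\Omega=(M\Omega)^{1/3}$, so that the leading term reproduces Eq.~(\ref{eq}).

```python
import math

def omega_dot_over_omega2(M_omega, nu):
    """Adiabatic parameter dOmega/dt / Omega^2 through 3.5PN, nonspinning, G=c=1.
    M_omega is the dimensionless M*Omega; nu is the symmetric mass ratio."""
    v = M_omega**(1.0/3.0)
    gE = 0.5772156649015329          # Euler's constant
    coeffs = {
        0: 1.0,
        1: 0.0,
        2: -743.0/336.0 - (11.0/4.0) * nu,
        3: 4.0 * math.pi,
        4: 34103.0/18144.0 + (13661.0/2016.0) * nu + (59.0/18.0) * nu**2,
        5: -(1.0/672.0) * (4159.0 + 15876.0 * nu) * math.pi,
        6: (16447322263.0/139708800.0 - (1712.0/105.0) * gE
            + (16.0/3.0) * math.pi**2)
           + (-56198689.0/217728.0 + (451.0/48.0) * math.pi**2) * nu
           + (541.0/896.0) * nu**2 - (5605.0/2592.0) * nu**3
           - (856.0/105.0) * math.log(16.0 * v**2),
        7: (-4415.0/4032.0 + (358675.0/6048.0) * nu
            + (91495.0/1512.0) * nu**2) * math.pi,
    }
    return (96.0/5.0) * nu * v**5 * sum(ck * v**k for k, ck in coeffs.items())
```

At small $M\Omega$ the result reduces to the leading quadrupole-order expression, while for $v\sim 0.1$ the corrections already shift it at the percent level.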
\begin{table*}[t]
\caption{Post-Newtonian contributions to the number of GW
cycles accumulated from $\Omega_\mathrm{in} =
\pi\times 10\,\mathrm{Hz}$ to $\Omega_\mathrm{fin} =
\Omega^\mathrm{ISCO}=1/(6^{3/2}\,M)$ for binaries detectable by
LIGO and VIRGO. We denote $\kappa_{i} = \hat{S}_i \cdot \boldsymbol{\hat{L}}$ and
$\xi = \hat{\mathbf{S}}_1\cdot
\hat{\mathbf{S}}_2$.
\label{tab:1}}
\begin{center}
{\scriptsize
\begin{tabular}{|l|c|c|}\hline
& \multicolumn{1}{c|}{$(10+10)M_\odot$} &
\multicolumn{1}{c|}{$(1.4+1.4)M_\odot$} \\ \hline\hline
Newtonian & $601$ & $16034$ \\
1PN & $+59.3$ & $+441$\\
1.5PN & $-51.4 + 16.0\, \kappa_1\,\chi_1 + 16.0\, \kappa_2\,\chi_2$
& $ -211 + 65.7\,\kappa_1\,\chi_1 + 65.7\, \kappa_2\,\chi_2$ \\
2PN & $+4.1 - 3.3\, \kappa_1\,\kappa_2\,\chi_1\,\chi_2 + 1.1\, \xi\,\chi_1\,\chi_2$
& $+ 9.9 - 8.0\, \kappa_1\,\kappa_2\,\chi_1\,\chi_2 + 2.8
\,\xi\,\chi_1\,\chi_2$ \\
2.5PN & $-7.1 + 5.5\, \kappa_1\,\chi_1 + 5.5\,
\kappa_2\,\chi_2$ & $-11.7 + 9.0\, \kappa_1\,\chi_1 + 9.0\,
\kappa_2\,\chi_2$ \\
3PN & $+2.2$ & $+2.6$ \\
3.5PN & $-0.8$ & $-0.9$ \\ \hline
\end{tabular}}\end{center}
\end{table*}
It is instructive to compute the relative contribution of the
PN terms to the total number of GW cycles accumulating
in the frequency band of LIGO/VIRGO. In Table~\ref{tab:1},
we list the figures obtained by plugging Eq.~(\ref{omegadot})
into Eq.~(\ref{cycles}) \cite{buonanno}. As final frequency, we use the
innermost stable circular orbit (ISCO) of a point particle in the
Schwarzschild geometry [$f_{\rm GW}^{\rm ISCO} \simeq 4400/(M/M_\odot)$ Hz].
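The quoted ISCO scaling is straightforward to verify; the one-liner below (ours) restores units in $\Omega^{\rm ISCO}=1/(6^{3/2}\,M)$ and uses $f_{\rm GW}=\Omega/\pi$.

```python
import math

G, c = 6.674e-11, 2.998e8
Msun = 1.989e30

def f_gw_isco(M_kg):
    """GW frequency at the Schwarzschild ISCO: f = c^3 / (6^{3/2} pi G M)."""
    return c**3 / (6.0**1.5 * math.pi * G * M_kg)
```

For one solar mass this gives about $4.4$ kHz, and for a $(10+10)\,M_\odot$ binary about $220$ Hz, consistent with the $4400/(M/M_\odot)$ Hz scaling quoted above.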
\subsection{The full waveform: inspiral, merger and ring-down}
\label{sec6.3}
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.5\textwidth]{fig4a.eps}
\end{tabular}
\caption{We sketch the curvature potential
as a function of the tortoise coordinate
$r^*$ associated with metric perturbations of
a Schwarzschild BH. \label{fig:7}}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.3\textwidth]{fig4b.eps}\\
\end{tabular}
\caption{The potential peaks at the last
unstable orbit for a massless particle (the light ring).
Ingoing modes propagate toward the BH horizon, whereas
outgoing modes propagate away from the source. \label{fig:8}}
\end{center}
\end{figure}
After the two BHs merge, the system settles down to a Kerr BH
and emits quasi-normal modes (QNMs),~\cite{qnm,Press}. This phase is commonly known as
the ring-down (RD) phase. Since the QNMs have complex frequencies totally
determined by the BH mass and spin, the RD waveform is a
superposition of damped sinusoids. The inspiral and
RD waveforms can be computed analytically. What about the
merger? Since the nonlinearities dominate, the merger
can be properly described only
through numerical simulations of the Einstein equations. However,
before numerical relativity (NR) results became available, some
analytic approaches were proposed.
In the test-mass limit, $\nu \ll 1$, it was realized long
ago~\cite{Davis,Press} that the basic physical reason underlying
the presence of a universal merger signal is that
when a test particle falls below $ 3 M$ (the
unstable light storage ring of Schwarzschild), the GW that
it generates is strongly filtered by the curvature potential
barrier centered around it (see Fig.~\ref{fig:7}).
For the equal-mass case $\nu = 1/4$, Price and Pullin~\cite{CLA}
proposed the so-called close-limit approximation,
which consists in switching from the two-body description to the
one-body description (perturbed-BH) close to the light-ring
location. Based on these observations,
the effective-one-body (EOB) resummation scheme~\cite{EOB}
provided a first {\it example} of full
waveform by (i) resumming the PN Hamiltonian, (ii)
modeling the merger as a very short (instantaneous) phase
and (iii) matching the end of the plunge (around the light-ring)
with the RD phase (see Ref.~\cite{lazarus} where similar
ideas were developed also in NR).
The matching was initially done using {\it only} the least
damped QNM whose mass and spin were determined by the binary BH
energy and angular momentum at the end of the plunge.
An example of full waveform is given in Figs.~\ref{fig:7} and \ref{fig:6}.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.35\textwidth]{fig5a.eps}
\end{tabular}
\caption{GW signal from an equal-mass
nonspinning BH binary as predicted at 2.5PN order by Buonanno and Damour (2000)
in Ref.~\cite{EOB}.
The merger is assumed almost instantaneous and one QNM is included.
\label{fig:6}}
\end{center}
\end{figure}
Today, with the results available from NR, we are in the position
to assess how close the analytic waveforms are to the numerical ones for the
inspiral, merger and RD phases. In Fig.~\ref{fig:6}, we show some
first-order comparisons between the EOB-analytic and NR
waveforms~\cite{BCP} (see also Ref.~\cite{Goddshort}). Similar
results for the inspiral phase but using PN
theory~\cite{PNnospin,PNspin} (without resummation) at 3.5PN order
are given in Refs.~\cite{BCP,Goddshort}. So far, the agreement is
qualitatively good, but more accurate simulations, starting with
the BHs farther apart, are needed to draw robust conclusions.
These comparisons suggest that it should be possible to
design purely analytic templates, with the full numerics used to guide the
patching together of the inspiral and RD waveforms.
This is an important avenue to template construction as
eventually hundreds of thousands of waveform
templates may be needed to extract the signal from
the noise, an impossible demand for NR alone.
\begin{figure}
\begin{center}
\begin{tabular}{cc}\includegraphics[width=0.35\textwidth]{fig5b.eps}
\end{tabular}
\caption{ GW signal from an equal-mass
BH binary with a small spin $\chi_1=\chi_2 = 0.06$ obtained in
full GR by Pretorius~\cite{BCP}.
\label{fig:6}}
\end{center}
\end{figure}
\section{Gravitational waves with gravitomagnetic corrections}
In this section, we are going to study the evolution of compact
binary systems, formed through the capture of a moving (stellar)
mass $m$ by the gravitational field of a massive black hole (MBH)
of mass $M$, with $m \ll M$. One expects that small compact
objects ($1\div 20 M_{\odot}$) from the surrounding stellar
population will be captured by these black holes, following
many-body scattering interactions, at a relatively high rate
\cite{Sigurdsson,Sigurdsson2}. It is well known that the capture
of stellar-mass compact objects by MBHs could constitute,
potentially, a very important target for LISA
\cite{Danzmann,freitag}. However, the dynamics has to be carefully
discussed in order to consider and select all the effects coming from
standard stellar-mass objects inspiralling onto MBHs.
In the first part of this review, we have shown that, in the
relativistic weak field approximation, when considering higher
order corrections to the equations of motion, gravitomagnetic
effects in the theory of orbits, can be particularly significant,
leading also to chaotic behaviors in the transient regime dividing
stable from unstable trajectories. Generally, such contributions
are discarded since they are considered too small. However, in a
more accurate analysis, this is not true, and gravitomagnetic
corrections could give a peculiar characterization of the dynamics \cite{SMFL}.
Under these effects, orbits remain rather eccentric until
the final plunge, and display both extreme relativistic
perihelion precession and Lense-Thirring
\cite{Thirring1,Thirring,iorio} precession of the orbital plane
due to the spin of the MBH, as well as orbital decay.
it is illustrated how the measured GW-waveforms can effectively
map out the spacetime geometry close to the MBH. In
\cite{CDDIN,SF}, the classical orbital motion (without
relativistic corrections in the motion of the binary system) has
been studied in the extreme mass ratio limit $m\ll M$, assuming
the stellar system density and richness as fundamental parameters.
The conclusions were that
\begin{itemize}
\item the GW-waveforms have been
characterized by the orbital motion (in particular, closed or open
orbits give rise to very different GW-production and waveform
shapes);
\item in rich and dense stellar clusters, a large
production of GWs can be expected, so that these systems could be
very interesting for the above mentioned ground-based and space
detectors;
\item the amplitudes of the strongest GW signals are
expected to be roughly an order of magnitude smaller than LISA's
instrumental noise.
\end{itemize}
We investigate the GW emission by binary systems,
in the extreme mass ratio limit, using the quadrupole approximation,
considering orbits affected by both nutation and precession
effects, and taking into account gravitomagnetic terms in the
weak field approximation of the metric. We will see that
gravitational waves are emitted with a ``peculiar'' signature
related to the orbital features: such a signature may be a ``burst''
waveform, with a maximum in correspondence of the periastron
distance, or a modulated waveform, according to the orbit
stability. Here we face this problem by discussing in detail the
dynamics of such a phenomenon, which could greatly improve the
statistics of possible GW sources.
Besides, we give estimates of the distributions of these sources
and their parameters. It is worth noticing that the captures
occur when objects, in the dense stellar cusp surrounding a
galactic MBH, undergo a close encounter, so that the trajectory
becomes tight enough that orbital decay through emission of GWs
dominates the subsequent evolution. According to Refs.
\cite{cutler,cutler1}, for a typical capture, the initial
orbital eccentricity is extremely large (typically $1-e\sim
10^{-6}{-}10^{-3}$) and the initial pericenter distance very small
($r_{\rm p}\sim 8-100 M$, where $M$ is the MBH mass
\cite{FreitagApJ}). The subsequent orbital evolution may (very
roughly) be divided into three stages. In the first and longest
stage the orbit is extremely eccentric, and GWs are emitted in
short ``pulses'' during pericenter passages. These GW pulses
slowly remove energy and angular momentum from the system, and the
orbit gradually shrinks and circularizes. After $\sim 10^3-10^8$
years (depending on the two masses and the initial eccentricity)
the evolution enters its second stage, where the orbit is
sufficiently circular: the emission can be viewed as continuous.
Finally, as the object reaches the last stable orbit, the
adiabatic inspiral transits to a direct plunge, and the GW signal
cuts off. Radiation reaction quickly circularizes the orbit over
the inspiral phase; however, initial eccentricities are large
enough that a substantial fraction of captures will maintain high
eccentricity until the final plunge. It has been estimated
\cite{cutler1} that about half of the captures will plunge with
eccentricity $e\gtrsim 0.2$. While individually-resolvable
captures will mostly be detectable during the last $\sim 1-100$
yrs of the second stage (depending on the stellar mass $m$ and the
MBH mass), radiation emitted during the first stage will
contribute significantly to the confusion background. As we shall
see, the above scenario is heavily modified, since the
gravitomagnetic effects play a crucial role: the orbital
shapes are far from being simply circular or elliptic, and are no
longer closed.
\subsection{Gravitational waves amplitude considering orbits with gravitomagnetic corrections}
Direct signatures of gravitational radiation are given by
GW-amplitudes and waveforms. In other words, the identification
of a GW signal is strictly related to the accurate selection of
the waveform shape by interferometers or any possible detection
tool. Such an achievement could give information on the nature of
the GW source, on the propagating medium, and, in principle, on
the gravitational theory producing such a radiation \cite{Dela}.
Considering the formulas of previous Section, the GW-amplitude
can be evaluated by
\begin{equation}
h^{jk}(t,R)=\frac{2G}{Rc^4}\ddot{Q}^{jk}~, \label{ampli1}
\end{equation}
$R$ being the distance between the source and the observer and,
due to the above polarizations, $\{j,k\}=1,2$.
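As an order-of-magnitude illustration of Eq.(\ref{ampli1}), for a nearly circular binary the quadrupole formula reduces to the standard estimate $h\sim 4G^{2}\mu M/(c^{4}R\,r)$. The sketch below is only indicative; the orbital separation used is an assumed illustrative value, not one of the configurations integrated later.

```python
# Rough quadrupole amplitude, h ~ 4 G^2 mu M / (c^4 R r), for a circular
# binary of reduced mass mu around an MBH of mass M, seen at distance R.
G, c, M_SUN, PC = 6.674e-11, 2.998e8, 1.989e30, 3.086e16  # SI units

def h_estimate(mu_msun, M_msun, r_m, R_m):
    mu, M = mu_msun * M_SUN, M_msun * M_SUN
    return 4.0 * G**2 * mu * M / (c**4 * R_m * r_m)

# mu = 1.4 Msun around a 3e6 Msun MBH at ~8 kpc (the SgrA* distance),
# with an assumed illustrative separation of 1e-4 pc
print(h_estimate(1.4, 3.0e6, 1.0e-4 * PC, 8.0e3 * PC))  # -> a few times 1e-20
```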
From Eq.(\ref{ampli1}), it is straightforward to show that, for a
binary system where $m\ll M$ and orbits have gravitomagnetic
corrections, the Cartesian components of GW-amplitude are
\begin{eqnarray*}
\nonumber
h^{xx}&=& 2 \mu [(3 \cos ^2\phi \sin ^2\theta-1)
\dot{r}^2+6 r ( \dot{\theta}\cos ^2\phi \sin 2 \theta
\\ &&-\dot{ \phi}\sin ^2\theta \sin2 \phi) \dot{r}
+r((3 \cos ^2\phi \sin ^2\theta-1)\ddot{r}\\ &&+3
r (\dot{\theta}^2\cos2 \theta \cos ^2\phi -\dot{\phi} \dot{\theta} \sin2
\theta \sin2\phi\\ &&
-\sin\theta
(\sin\theta(\dot{ \phi}^2\cos 2 \phi+\ddot{ \phi} \cos\phi \sin\phi)-\ddot{ \theta}\cos\theta \cos ^2\phi
)))],
\end{eqnarray*}
\begin{eqnarray*}
\nonumber
h^{yy}&=&2 \mu[(3 \sin ^2\theta \sin ^2\phi-1)
\dot{r}^2+6 r ( \dot{ \phi}\sin 2 \phi \sin ^2\theta
\\ &&+ \dot{\theta}\sin 2 \theta \sin ^2\phi)
\dot{r}+ r ((3 \sin ^2\theta \sin ^2\phi
-1) \ddot{r}\\ &&+3 r ( \dot{\theta}^2\cos 2 \theta \sin ^2\phi
+\dot{ \phi}
\dot{\theta}\sin 2 \theta \sin 2 \phi
\\ &&+ \sin \theta (\ddot{\theta}\cos \theta
\sin ^2\phi+\sin \theta ( \dot{ \phi}^2\cos 2 \phi +\ddot\phi\cos \phi \sin \phi ))))],
\end{eqnarray*}
\begin{eqnarray*}
\nonumber
h^{xy}&=&h^{yx}=3 \mu[\cos 2 \phi \sin \theta(4\dot{\theta} \dot{ \phi} \cos \theta
+ \ddot{ \phi}\sin \theta )
r^2\\ &&+2 \dot{r} (2 \dot{ \phi} \cos 2 \phi \sin ^2\theta
+\dot{\theta}\sin 2 \theta \sin 2 \phi )r
\\ &&+\frac{1}{2} \sin 2 \phi (2 \ddot{r} \sin ^2\theta
+r( 2\dot{\theta}^2 \cos 2 \theta-4 \dot{ \phi}^2\sin ^2\theta
\\ &&+\ddot{\theta}\sin 2 \theta ))
r+ \dot{r}^2\sin ^2\theta \sin 2 \phi],
\end{eqnarray*}
where we are assuming geometrized units. The above formulas have
been obtained from Eqs.(\ref{ddr}), (\ref{ddphi}) and (\ref{ddtheta}).
The gravitomagnetic corrections give rise to signatures on the
GW-amplitudes that, in the standard Newtonian orbital motion, are
not present (see for example \cite{CDDIN,SF}). On the
other hand, as discussed in the Introduction, such corrections
cannot be discarded in peculiar situations, such as dense stellar
clusters or the vicinity of galactic central regions.
We are going to evaluate these
quantities and results are shown in Figs. \ref{Fig:03},
\ref{Fig:04}, \ref{Fig:05}, \ref{Fig:07}.
\begin{figure*}[t!]
\begin{tabular}{|c|c|}
\hline
\tabularnewline
\includegraphics[scale=0.4]{Dz_NO_500.eps}
\includegraphics[scale=0.4]{Dz_500.eps}
\tabularnewline
\hline
\includegraphics[scale=0.4]{Dx_Dy_step_500.eps}
\includegraphics[scale=0.4]{basic_orbit_500mu.eps}
\tabularnewline
\hline
\end{tabular}
\caption {\footnotesize{Plots of $z_{NO}(t)$ (left upper panel) and
$z_{Grav}(t)$ (right upper panel). It is interesting to see the
differences of about five orders of magnitude between the two
plots. At the beginning, the effect is very small but, orbit by
orbit, it grows and, for a suitable interval of coordinated time,
the effect cannot be neglected (see the left bottom panel in which
the differences in $x$ and $y$, starting from the initial orbits
up to the last ones, by steps of about 1500 orbits, are reported).
The internal red circle represents the beginning, the middle one
is the intermediate situation (green) and the blue one is the
final result of the correlation between $\Delta x$ versus $\Delta
y$, being $\Delta x=x_{Grav}-x_{NO}$ and $\Delta
y=y_{Grav}-y_{NO}$. The bottom right panel shows the basic
orbit.}}\label{Fig:03}
\end{figure*}
\begin{figure}[!ht]
\begin{tabular}{|c|c|c|}
\hline
\tabularnewline
\includegraphics [scale=0.60]{En_95_M_1.4_r0_500_delta_GW.eps}
\tabularnewline
\hline
\end{tabular}
\caption {Plot of the differences of total gravitational waveform
$h$, with and without the gravitomagnetic orbital correction, for
a neutron star of $1.4 M_{\odot}$ orbiting around an MBH. The
waveform has been computed at the Earth-distance from SgrA$^*$
(the central Galactic Black Hole). The example we are showing has
been obtained solving the systems for the following parameters and
initial conditions: $\mu\approx1.4 M_{\odot}$, $r_{0}$, $E=0.95$,
$\phi_{0}=0$, $\theta_{0}=\frac{\pi}{2}$, $\dot{\theta}_{0}=0$,
$\dot{\phi_{0}}=-\frac{1}{10}\dot{r}_{0}$ and
$\dot{r}_{0}=-\frac{1}{100}.$ It is worth noticing that frequency
modulation gives cumulative effects after suitable long times.
}\label{Fig:04}
\end{figure}
\begin{figure*}[!ht]
\begin{tabular}{|c|c|c|}
\hline
\tabularnewline
\includegraphics[scale=0.25]{En_95_M_1.4_r0_20_theta_p_0_V_c_T.eps}
\includegraphics[scale=0.25]{En_95_M_1.4_r0_20_theta_p_0quadQ.eps}
\includegraphics[scale=0.25]{En_95_M_1.4_r0_20_theta_p_0GWQ.eps}
\tabularnewline
\hline
\tabularnewline
\includegraphics[scale=0.25]{En_95_M_1.4_r0_1500_theta_p_0_V_c_T.eps}
\includegraphics[scale=0.25]{En_95_M_1.4_r0_1500_theta_p_0quadQ.eps}
\includegraphics[scale=0.25]{En_95_M_1.4_r0_1500_theta_p_0GWQ.eps}
\tabularnewline
\hline
\tabularnewline
\includegraphics[scale=0.25]{En_95_M_1.4_r0_2500_theta_p_0_V_c_T.eps}
\includegraphics[scale=0.25]{En_95_M_1.4_r0_2500_theta_p_0quadQ.eps}
\includegraphics[scale=0.25]{En_95_M_1.4_r0_2500_theta_p_0GWQ.eps}
\tabularnewline
\hline
\end{tabular}
\caption {Plots along the panel lines from left to right of field
velocities along the axes of maximum covariances, total
gravitational emission waveform $h$ and gravitational waveform
polarizations $h_{+}$ and $h_{\times}$ for a neutron star of $1.4
M_{\odot}$. The waveform has been computed for the Earth-distance
from Sagittarius A$^*$ (the central Galactic Black Hole). The
plots we are showing have been obtained solving the system for the
following parameters and initial conditions: $\mu\approx1.4
M_{\odot}$, $E=0.95$, $\phi_{0}=0$, $\theta_{0}=\frac{\pi}{2}$,
$\dot{\theta_{0}}=0$, $\dot{\phi_{0}}=-\frac{1}{10}\dot{r}_{0}$
and $\dot{r}_{0}=-\frac{1}{100}$. From top to bottom of the
panels, the orbital radius is $r_0=20\mu,\,1500\mu,\,2500\mu$.
See also Table I.}\label{Fig:05}
\end{figure*}
\begin{figure*}[!ht]
\begin{tabular}{|c|c|c|}
\hline
\tabularnewline
\includegraphics[scale=0.25]{En_95_M_10_r0_20_theta_p_0_V_c_T.eps}
\includegraphics[scale=0.25]{En_95_M_10_r0_20_theta_p_0quadQ.eps}
\includegraphics[scale=0.25]{En_95_M_10_r0_20_theta_p_0GWQ.eps}
\tabularnewline
\hline
\includegraphics[scale=0.25]{En_95_M_10_r0_1000_theta_p_0_V_c_T.eps}
\includegraphics[scale=0.25]{En_95_M_10_r0_1000_theta_p_0quadQ.eps}
\includegraphics[scale=0.25]{En_95_M_10_r0_1000_theta_p_0GWQ.eps}
\tabularnewline
\hline
\tabularnewline
\includegraphics[scale=0.25]{En_95_M_10_r0_2500_theta_p_0_V_c_T.eps}
\includegraphics[scale=0.25]{En_95_M_1.4_r0_2500_theta_p_0quadQ.eps}
\includegraphics[scale=0.25]{En_95_M_10_r0_2500_theta_p_0GWQ.eps}
\tabularnewline
\hline
\end{tabular}
\caption {Plots along the panel lines from left to right of field
velocities along the axes of maximum covariances, total
gravitational emission waveform $h$ and gravitational waveform
polarizations $h_{+}$ and $h_{\times}$ for a Black Hole (BH) of
$10 M_{\odot}$. The waveform has been computed for the
Earth-distance to SgrA$^*$. The plots we are showing have been
obtained solving the system for the following parameters and
initial conditions: $\mu\approx10 M_{\odot}$,
$E=0.95$, $\phi_{0}=0$,
$\theta_{0}=\frac{\pi}{2}$, $\dot{\theta}_{0}=0$, $\dot{\phi}_{0}=-\frac{1}{10}\dot{r}_{0}$
and $\dot{r}_{0}=-\frac{1}{100}$. From top to bottom of the
panels, the orbital radius is $r_0=20\mu,\,1000\mu,\,2500\mu$.
See also Table I.}\label{Fig:07}
\end{figure*}
\subsection{Numerical results}
Now we have all the ingredients to estimate the effects of
gravitomagnetic corrections on the GW-radiation. Calculations have
been performed in geometrized units in order to better evaluate
the relative corrections with respect to the case without gravitomagnetism. For the
numerical simulations, we have assumed the fiducial systems
constituted by a $m=1.4M_{\odot}$ neutron star or $m=10M_{\odot}$
massive stellar object orbiting around a MBH $M\simeq 3\times
10^6M_{\odot}$, as SgrA$^*$. In the extreme mass-ratio limit, this
means that the reduced mass ${\displaystyle \mu=\frac{mM}{m+M}}$ is
about $\mu \approx 1.4M_{\odot}$ and $\mu \approx10M_{\odot}$, respectively.
The computations have been performed starting with orbital radii
measured in the mass unit and scaling the distance according to
the values shown in Table I. As it is possible to see in Table I,
starting from $r_{0}=20\mu$ up to $2500\mu$, the orbital
eccentricity ${\displaystyle
\bm{e}=\frac{r_{max}-r_{min}}{r_{max}+r_{min}}}$ evolves towards a
circular orbit. In Table I, the GW-frequencies, in mHz, as
well as the $h$ amplitude strains and the two polarizations
$h_{+}$ and $h_{\times}$ are shown. The values are the mean values
of the GW $h$ amplitude strains ($h=\frac{h_{max}+h_{min}}{2}$)
and the maxima of the polarization waves (see Figs. \ref{Fig:05}
and \ref{Fig:07}). In Fig. \ref{Fig:09}, the fiducial LISA
sensitivity curve is shown \cite{LISA} considering the confusion
noise produced by White Dwarf binaries (blue curve). We show also
the $h$ amplitudes (red diamond and green circles for $\mu\approx
1.4 M_{\odot}$ and $\approx 10 M_{\odot}$, respectively). It is
worth noticing that, due to the very high signal-to-noise ratio, the
binary systems which we are considering turn out to be extremely
interesting, in terms of detection probability, for the LISA
interferometer (see Fig. \ref{Fig:09}).
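The eccentricity entries of Table I follow directly from the turning points of the numerically integrated radial coordinate; as a minimal sketch (the radial samples below are invented placeholders, not the actual integration output):

```python
# Orbital eccentricity from the radial turning points,
# e = (r_max - r_min) / (r_max + r_min), as used in Table I.
def eccentricity(r_samples):
    r_max, r_min = max(r_samples), min(r_samples)
    return (r_max - r_min) / (r_max + r_min)

r_of_t = [20.0, 35.2, 60.8, 101.3, 60.8, 35.2, 20.0]  # placeholder radii
print(eccentricity(r_of_t))  # -> about 0.67
```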
\begin{table*}[!ht]
\caption{GW-amplitudes and frequencies as function of eccentricity
$e$, reduced mass $\mu$, orbital radius $r_0$ for the two cases
of fiducial stellar objects $m\simeq 1.4 M_{\odot}$ and $m\simeq
10 M_{\odot}$ orbiting around a MBH of mass $M\simeq 3\times
10^6M_{\odot}$.}
\begin{tabular}{|c|c|} \hline
\textbf{$1.4 M_{\odot} $} & \textbf{$10 M_{\odot} $}\\
\begin{tabular}{|c|c|c|c|c|c}
\hline
$\frac{r_{0}}{\mu} $ & $ e $ & $ f(mHz) $ & $ h $ & $ h_{+} $ & $ h_{\times} $\\
\hline
\hline
$20 $ & $ 0.91 $ & $ 7.7\cdot 10^{-2} $ & $ 2.0\cdot 10^{-22} $ & $ 5.1\cdot 10^{-23} $ & $ 5.1\cdot 10^{-22} $\\
$200 $ & $ 0.79 $ & $1.1\cdot 10^{-1} $ & $ 1.2\cdot 10^{-20} $ & $ 2.2\cdot 10^{-21} $ & $ 3.1\cdot 10^{-20} $\\
$500 $ & $ 0.64 $ & $1.4\cdot 10^{-1}$ & $ 6.9\cdot 10^{-20}$ & $ 8.7\cdot 10^{-21}$ & $ 1.7\cdot 10^{-19}$\\
$1000 $ & $ 0.44 $ & $ 1.9\cdot 10^{-1} $ & $ 2.6\cdot 10^{-19} $ & $ 6.4\cdot 10^{-20} $ & $ 6.4\cdot 10^{-19} $\\
$1500 $ & $ 0.28 $ & $ 2.3\cdot 10^{-1} $ & $ 4.8\cdot 10^{-19} $ & $ 3.6\cdot 10^{-20} $ & $ 1.2\cdot 10^{-18} $\\
$2000 $ & $ 0.14 $ & $ 2.7\cdot 10^{-1} $ & $ 5.9\cdot 10^{-19} $ & $ 4.9\cdot 10^{-20} $ & $ 1.3\cdot 10^{-18} $\\
$2500 $ & $ 0.01 $ & $ 3.1\cdot 10^{-1} $ & $ 5.9\cdot 10^{-19} $ & $ 1.7\cdot 10^{-20} $ & $ 9.2\cdot 10^{-19} $\\
\end{tabular}
&
\begin{tabular}{c|c|c|c|c|}
\hline
$ e $ & $ f(mHz) $ & $ h $ & $ h_{+} $ & $ h_{\times} $\\
\hline
\hline
$0.98 $ & $ 3.2\cdot 10^{-2} $ & $ 1.5\cdot 10^{-18} $ & $ 1.6\cdot 10^{-19} $ & $ 4.3\cdot 10^{-18} $\\
$0.87 $ & $ 9.2\cdot 10^{-2} $ & $ 1.5\cdot 10^{-16} $ & $ 2.5\cdot 10^{-18} $ & $ 4.1\cdot 10^{-16} $\\
$0.71 $ & $ 1.4\cdot 10^{-1}$ & $ 8.5\cdot 10^{-16}$ & $ 7.0\cdot 10^{-18}$ & $ 2.4\cdot 10^{-15}$\\
$ 0.49 $ & $ 1.9\cdot 10^{-1} $ & $ 2.0\cdot 10^{-15} $ & $ 1.6\cdot 10^{-17} $ & $ 5.6\cdot 10^{-15} $\\
$ 0.32 $ & $ 2.3\cdot 10^{-1} $ & $ 2.7\cdot 10^{-15} $ & $ 2.5\cdot 10^{-17} $ & $ 7.4\cdot 10^{-15} $\\
$ 0.19 $ & $ 2.6\cdot 10^{-1} $ & $ 2.8\cdot 10^{-15} $ & $ 3.3\cdot 10^{-17} $ & $ 7.6\cdot 10^{-15} $\\
$ 0.08 $ & $ 2.9\cdot 10^{-1} $ & $ 2.1\cdot 10^{-15} $ & $ 4.0\cdot 10^{-17} $ & $ 5.6\cdot 10^{-15} $\\
\end{tabular}
\\
\hline
\end{tabular}
\end{table*}
\begin{figure}[!ht]
\begin{tabular}{|c|}
\hline
\tabularnewline
\includegraphics[scale=0.5]{Lisa_GW_h.eps}
\tabularnewline
\hline
\end{tabular}
\caption {Plot of estimated mean values of GW-emission in terms
of strain $h$ for two binary sources at the Galactic Center
SgrA$^*$ with reduced mass $\mu\approx1.4M_{\odot}$ (red
diamonds) and $\mu\approx 10M_{\odot}$(green circles). The blue
line is the foreseen LISA sensitivity curve. The waveforms have
been computed for the Earth-distance to SgrA$^*$. The examples we
are showing have been obtained solving the systems for the
parameters and initial conditions reported in Figs. \ref{Fig:05},
\ref{Fig:07} and in Table I.}\label{Fig:09}
\end{figure}
\section{Rate and event number estimations in dense stellar systems}
At this point, it is important to give some estimates of the
number of events where gravitomagnetic effects could be a
signature for orbital motion and gravitational radiation. From
the GW emission point of view, close orbital encounters,
collisions and tidal interactions have to be dealt with on average if
we want to investigate the gravitational radiation in a dense
stellar system. On the other hand, dense stellar regions are the
favored target for the LISA interferometer \cite{freitag}, so it is
extremely useful to provide suitable numbers before its launch.
To this end, it is worth giving the stellar encounter rate
producing GWs in astrophysical systems like dense globular
clusters or the Galactic Center. In general, stars are
approximated as point masses. However, in dense regions of stellar
systems, a star can pass so close to another that they raise
tidal forces which dissipate their relative orbital kinetic energy
and the Newtonian mechanics or the weak field limit of GR cannot
be adopted as good approximations. In some cases, the loss of
energy can be so large that stars form binary (the situation which
we have considered here) or multiple systems; in other cases, the
stars collide and coalesce into a single star; finally stars can
exchange gravitational interaction in non-returning encounters.
To investigate and parameterize all these effects, one has to
compute the collision time $t_{coll}$, where $1/t_{coll}$ is the
collision rate, that is, the average number of physical collisions
that a given star suffers per unit time. As a rough approximation,
one can restrict to stellar clusters in which all stars have the
same mass $m$.
Let us consider an encounter with initial relative velocity
$\mathbf{v}_{0}$ and impact parameter $b$. The angular momentum
per unit mass of the reduced particle is $L=bv_{0}$. At the
distance of closest approach, which we denote by $r_{coll}$, the
radial velocity must be zero, and hence the angular momentum is
$L=r_{coll}v_{max}$, where $v_{max}$ is the relative speed at
$r_{coll}$. It is easy to show that \cite{binney}
\begin{equation}
b^{2}=r_{coll}^{2}+\frac{4Gmr_{coll}}{v_{0}^{2}}\,.\label{eq:b}\end{equation}
If we set $r_{coll}$ equal to the sum of the radii of the two
stars, a collision will occur if the impact parameter is less
than the value of $b$, as determined by Eq.(\ref{eq:b}).
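A minimal numerical sketch of Eq.(\ref{eq:b}) shows how gravitational focusing enhances the effective cross-section $\pi b^{2}$ over the geometric one $\pi r_{coll}^{2}$ for slow encounters (solar-type stars are assumed here for illustration):

```python
# Gravitational-focusing impact parameter, Eq. (b):
# b^2 = r_coll^2 + 4 G m r_coll / v0^2   (SI units)
G, M_SUN, R_SUN = 6.674e-11, 1.989e30, 6.957e8

def b_squared(r_coll, m, v0):
    return r_coll**2 + 4.0 * G * m * r_coll / v0**2

# two solar-type stars, r_coll = 2 R_sun, relative speed 10 km/s
b2 = b_squared(2.0 * R_SUN, M_SUN, 1.0e4)
print(b2 / (2.0 * R_SUN)**2)  # focusing enhancement over the geometric value
```

For such slow encounters the enhancement is of order a few thousand, which is why focusing dominates the second term of the collision rate derived below.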
The function $f(\mathbf{v}_{a})d^{3}\mathbf{v}_{a}$ gives the
number of stars per unit volume with velocities in the range
$(\mathbf{v}_{a},\,\mathbf{v}_{a}+d^{3}\mathbf{v}_{a})$. The number of encounters per
unit time with impact parameter less than $b$, which are suffered
by a given star, is $f(\mathbf{v}_{a})d^{3}\mathbf{v}_{a}$ times
the volume of the cylinder with radius $b$ and length $v_{0}$, that
is,
\begin{equation}
\int f(\mathbf{v}_{a})\pi
b^{2}v_{0}d^{3}\mathbf{v}_{a},\label{eq:integrale}\end{equation}
where $v_{0}=\left|\mathbf{v-v}_{a}\right|$ and $\mathbf{v}$ is
the velocity of the considered star. The quantity in
Eq.(\ref{eq:integrale}) is equal to $1/t_{coll}$ for a star with
velocity $\mathbf{v}$: to obtain the mean value of $1/t_{coll}$,
we average over $\mathbf{v}$ by multiplying (\ref{eq:integrale})
by $f(\mathbf{v})/\nu$, where $\nu=\int
f(\mathbf{v})d^{3}\mathbf{v}$ is the number density of stars and
the integration is over $d^{3}\mathbf{v}$.
Thus
\begin{eqnarray}
\frac{1}{t_{coll}}&=& \frac{\nu}{8\pi^{2}\sigma^{6}}\int
e^{-(v^{2}+v_{a}^{2})/2\sigma^{2}}\times\nonumber\\&&\left(r_{coll}^{2}\left|\mathbf{v-v}_{a}\right|+
\frac{4Gmr_{coll}}{\left|\mathbf{v-v}_{a}\right|}\right)d^{3}\mathbf{v}\,d^{3}\mathbf{v}_{a}\,.
\label{eq:invtcoll}\end{eqnarray} Replacing the variable
$\mathbf{v}_{a}$ by $\mathbf{V}=\mathbf{v-v}_{a}$, the argument of
the exponential is then
$-\left[\left(\mathbf{v}-\frac{1}{2}\mathbf{V}\right)^{2}+\frac{1}{4}V^{2}\right]/\sigma^{2}$,
and if we replace the variable $\mathbf{v}$ by ${\displaystyle
\mathbf{v}_{cm}=\mathbf{v}-\frac{1}{2}\mathbf{V}}$ (the center of
mass velocity), then one has
\begin{equation}
\frac{1}{t_{coll}}=\frac{\nu}{8\pi^{2}\sigma^{6}} \int
e^{-\left(v_{cm}^{2}+\frac{1}{4}V^{2}\right)/\sigma^{2}}\left(r_{coll}^{2}V+
\frac{4Gmr_{coll}}{V}\right)d^{3}\mathbf{v}_{cm}\,d^{3}\mathbf{V}\,.\label{eq:invtcoll1}\end{equation}
The integral over $\mathbf{v}_{cm}$ is given by
\begin{equation}
\int
e^{-v_{cm}^{2}/\sigma^{2}}d^{3}\mathbf{v}_{cm}=\pi^{3/2}\sigma^{3}\,.\label{eq:intint}\end{equation}
Thus
\begin{equation}
\frac{1}{t_{coll}}=\frac{\pi^{1/2}\nu}{2\sigma^{3}}\int_{0}^{\infty}e^{-V^{2}/4\sigma^{2}}
\left(r_{coll}^{2}V^{3}+4GmVr_{coll}\right)dV\,.\label{eq:invtcoll2}\end{equation}
The integrals can be easily calculated and then we find
\begin{equation}
\frac{1}{t_{coll}}=4\sqrt{\pi}\nu\sigma
r_{coll}^{2}+\frac{4\sqrt{\pi}\nu
Gmr_{coll}}{\sigma}\,.\label{eq:invtcooll3}\end{equation} The
first term of this result can be derived from the kinetic theory.
The rate of interaction is $\nu\Sigma\left\langle V\right\rangle$,
where $\Sigma$ is the cross-section and $\left\langle
V\right\rangle $ is the mean relative speed. Substituting
$\Sigma=\pi r_{coll}^{2}$ and $\left\langle V\right\rangle
=4\sigma/\sqrt{\pi}$ (which is appropriate for a Maxwellian
distribution with dispersion $\sigma$) we recover the first term
of (\ref{eq:invtcooll3}). The second term represents the
enhancement in the collision rate by gravitational focusing, that
is, the deflection of trajectories by the gravitational attraction
of the two stars.
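The mean relative speed $\left\langle V\right\rangle =4\sigma/\sqrt{\pi}$ quoted above can be verified by a quick Monte Carlo draw of Maxwellian velocities (the sample size and seed are arbitrary):

```python
import math, random

# Monte Carlo check of <|v1 - v2|> = 4 sigma / sqrt(pi) for stars with
# isotropic Gaussian velocities of one-dimensional dispersion sigma.
random.seed(1)
sigma, n = 1.0, 200_000

def velocity():
    return [random.gauss(0.0, sigma) for _ in range(3)]

mean_v = sum(math.dist(velocity(), velocity()) for _ in range(n)) / n
print(mean_v, 4.0 * sigma / math.sqrt(math.pi))  # the two should agree closely
```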
If $r_{*}$ is the stellar radius, we may set $r_{coll}=2r_{*}$. It
is convenient to introduce the escape speed from the stellar surface,
${\displaystyle v_{*}=\sqrt{\frac{2Gm}{r_{*}}}}$, and to rewrite
Eq.(\ref{eq:invtcooll3}) as
\begin{equation}
\Gamma=\frac{1}{t_{coll}}=16\sqrt{\pi}\nu\sigma
r_{*}^{2}\left(1+\frac{v_{*}^{2}}{4\sigma^{2}}\right)=16\sqrt{\pi}\nu\sigma
r_{*}^{2}(1+\Theta),\label{eq:invtcoll4}\end{equation}
where
\begin{equation}
\Theta=\frac{v_{*}^{2}}{4\sigma^{2}}=\frac{Gm}{2\sigma^{2}r_{*}}\label{eq:safronov}\end{equation}
is the Safronov number \cite{binney}. In evaluating the rate, we
are considering only those encounters producing gravitational
waves, for example, in the LISA range, i.e. between $10^{-4}$ and
$10^{-1}$ Hz (see e.g. \cite{Rub}). Numerically, we have
\begin{eqnarray}
&&\Gamma \simeq 5.5\times 10^{-10} \left(\frac{v}{10 {\rm km
s^{-1}}}\right) \left(\frac{\Sigma}{{\rm AU}^2}\right) \left(\frac{{\rm
10 pc}}{R}\right)^3 {\rm
yrs^{-1}}\,,\nonumber\\ && \qquad\Theta\ll 1\,,\label{eq:thetamin}
\end{eqnarray}
\begin{eqnarray}
&&\Gamma \simeq 5.5\times 10^{-10} \left(\frac{M}{10^5 {\rm
M_{\odot}}}\right)^2 \left(\frac{v}{10 {\rm km s^{-1}}}\right)
\left(\frac{\Sigma}{{\rm AU}^2}\right)\times\nonumber\\ && \left(\frac{{\rm 10
pc}}{R}\right)^3 {\rm yrs^{-1}}\,,\qquad\Theta\gg 1\,.\label{eq:thetamagg}
\end{eqnarray}
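A minimal sketch evaluating the Safronov number of Eq.(\ref{eq:safronov}) for a solar-type star in the Galactic-Center cluster considered below ($\sigma\simeq 150$ km s$^{-1}$):

```python
# Safronov number Theta = G m / (2 sigma^2 r_*), Eq. (safronov), SI units.
G, M_SUN, R_SUN = 6.674e-11, 1.989e30, 6.957e8

def safronov(m, sigma, r_star):
    return G * m / (2.0 * sigma**2 * r_star)

# solar-type star, velocity dispersion 150 km/s
print(safronov(M_SUN, 150.0e3, R_SUN))  # -> about 4.2
```

This is close to the value $\Theta=4.3$ adopted below for the stellar cluster at the Galactic Center.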
If $\Theta\gg 1$, the energy dissipated exceeds the relative kinetic
energy of the colliding stars, and the stars coalesce into a
single star. This new star may, in turn, collide and merge with
other stars, thereby becoming very massive. As its mass increases,
the collision time is shortened and there may be a runaway
coalescence leading to the formation of a few supermassive objects
per cluster. If $\Theta\ll 1$, much of the mass in the colliding
stars may be liberated, forming new stars or a single
supermassive object (see \cite{Belgeman,Shapiro}). Both cases are
interesting for LISA purposes.
Note that in quasi-collisions (where
gravitomagnetic effects, in principle, cannot be discarded), i.e.
encounters of two stars in which the minimal separation is several
stellar radii, violent tides will be raised on the surface of each
star. The energy that excites the tides comes from the relative
kinetic energy of the stars. This effect is important for
$\Theta\gg 1$, since the loss of a small amount of kinetic energy may
leave the two stars with negative total energy, that is, as a
bound binary system. Successive pericenter passages will
dissipate more energy by GW radiation, until the binary orbit is
nearly circular, with a negligible or null GW emission.
Let us apply these considerations to the Galactic Center which can
be modelled as a system of several compact stellar clusters, some
of them similar to very compact globular clusters with high
emission in X-rays \cite{townes}.
For a typical globular cluster around the Galactic Center, the
expected event rate is of the order of $2\times 10^{-9}$
yrs$^{-1}$, which may be increased at least by a factor $\simeq
100$ if one considers the number of globular clusters in the whole
Galaxy. If the stellar cluster at the Galactic Center is taken
into account and assuming the total mass $M\simeq 3\times 10^6$
M$_{\odot}$, the velocity dispersion $\sigma\simeq $ 150 km
s$^{-1}$ and the radius of the object $R\simeq$ 10 pc (where
$\Theta=4.3$), one expects to have $\simeq 10^{-5}$ open orbit
encounters per year. On the other hand, if a cluster with total
mass $M\simeq 10^6$ M$_{\odot}$, $\sigma\simeq $ 150 km s$^{-1}$
and $R\simeq$ 0.1 pc is considered, an event rate number of the
order of unity per year is obtained. These values could be
realistically achieved by data coming from the forthcoming space
interferometer LISA. As a secondary effect, the above waveforms
could constitute the ``signature'' to classify the different
stellar encounters, thanks to the differences in their shapes (see
Figs. \ref{Fig:05} and \ref{Fig:07}).
\section{Discussion, conclusions and perspectives}
We have considered the two-body problem in Newtonian and relativistic theory of orbits in view of characterizing the gravitational radiation, starting from the motion of the sources.
We have reported several results concerning the equations
of motion, and the associated Lagrangian formulation, of compact
binary systems. These equations are necessary when constructing
the theoretical templates for searching and analyzing the GW
signals from inspiralling compact binaries in VIRGO-LIGO and LISA
type experiments. By the two-body problem, we mean the
problem of the dynamics of two structureless, non-spinning
point-particles, characterized solely by two mass parameters $m_1$
and $m_2$, moving under their mutual, purely gravitational
interaction. Surely this problem, because of its conceptual
simplicity, is among the most interesting ones to be solved within
any theory of gravity. Actually, there are two aspects of the
problem: the first sub-problem consists in obtaining the equation
of the binary motion, the second is to find the (hopefully exact)
solution of that equation. We referred to the equation of motion
as the explicit expression of the acceleration of each of the
particles in terms of their positions and velocities. It is well
known that in Newtonian gravity, the first of these sub-problems
is trivial, as one can easily write down the equation of motion
for a system of $N$ particles, while the second one is difficult,
except in the two-body case $N = 2$, which represents, in fact, the
only situation amenable to an exact treatment of the solution. In
GR, even writing down the equations of motion in the simplest case
$N = 2$ is difficult. Unlike in Newton's theory, it is impossible
to express the acceleration by means of the positions and
velocities, in a way which would be valid within the {\it exact}
theory. Therefore we are obliged to resort to approximation
methods. It is no shame, however, to supplement the exact theory of
GR with approximation methods. It is fair to say
that many of the great successes of this theory, when confronted
to experiments and observations, have been obtained thanks to
approximation methods. Furthermore, the beautiful internal workings
of GR also show up when using approximation
methods, which often deserve some theoretical interest in their
own, as they require interesting mathematical techniques. Here we
have investigated the equation of the binary motion in the
post-Newtonian approximation, {\it i.e.} as a formal expansion
when the velocity of light $c$ tends to infinity. As a consequence
of the equivalence principle, which is incorporated by hand in
Newton's theory and constitutes the fundamental basis of GR, the acceleration of particle 1 should not depend on $m_1$ (nor
on its internal structure), in the {\it test-mass} limit where the
mass $m_1$ is much smaller than $m_2$. This is, of course, satisfied
by the Newtonian acceleration, which is independent of $m_1$, but
this leaves the possibility that the acceleration of particle 1,
in higher approximations, does depend on $m_1$, via the
so-called self-forces, which vanish in the test-mass limit.
Indeed, this is what happens in the post-Newtonian and
gravitomagnetic corrections, which show explicitly many
terms proportional to (powers of) $m_1$. Though the approximations
and corrections to the orbits are really a consequence of GR, they
should be interpreted using the common-sense language of Newton.
That is, having chosen a convenient general-relativistic
(Cartesian) coordinate system, like the harmonic coordinate system
adopted above, we have expressed the results in terms of the
coordinate positions, velocities and accelerations of the bodies.
Then, the trajectories of the particles can be viewed as taking
place in the absolute Euclidean space of Newton, and their
(coordinate) velocities as being defined with respect to absolute
time. Not only is this interpretation the most satisfactory one
from a conceptual point of view, but it represents also the most
convenient path for comparing the theoretical predictions and the
observations. For instance, the Solar System dynamics at the first
post-Newtonian level is defined, following a recent resolution of
the International Astronomical Union, in a harmonic coordinate
system, the Geocentric Reference System (GRS), with respect to
which one considers the {\it absolute} motion of the planets and
satellites. But because the equations come from GR, they are
endowed with the following properties, which make them truly {\it relativistic}.
\begin{itemize}
\item The one-body problem in GR corresponds to the Schwarzschild solution, so the
equations possess the correct {\it perturbative} limit, that given by the geodesics of the Schwarzschild
metric, when the mass of one of the bodies tends to zero.
\item Because GR admits the Poincar\'e group as a global symmetry (in the case of
asymptotically flat space-times), the harmonic-coordinate equations of motion stay invariant when
we perform a global Lorentz transformation.
\item Since the particles emit gravitational radiation there are some terms in the equations which
are associated with radiation reaction. These terms appear
at order $2.5$PN, i.e. $c^{-5}$, and were
discarded in our discussion (where $5 = 2s + 1$, $s = 2$
being the helicity of the graviton). They correspond to an {\it
odd}-order PN correction, which is not invariant under time
reversal. By contrast, the {\it even} orders, such as
1PN, correspond to a dynamics which is conservative.
\item GR is a non-linear theory (even in vacuum), and some part of the gravitational
radiation which was emitted by the particles in the past scatters off the static gravitational field
generated by the rest-masses of the particles, or interacts gravitationally with itself.
\end{itemize}
From all these considerations, the post-Newtonian equations were
also obtained, for the motion of the centers of mass of extended
bodies, using a technique that can be qualified as more {\it
physical} than the surface-integral method, as it takes explicitly
into account the structure of the bodies.
Particularly interesting is considering
gravitomagnetic effects in the geodesic motion. In particular, one can
consider the orbital effects of higher-order terms in $v/c$
which is the main difference with respect to the standard approach
to gravitomagnetism. Such terms are often discarded but, as we
have shown, they could give rise to interesting phenomena in tightly
bound systems such as binary systems of evolved objects (neutron
stars or black holes). They could be important for objects falling
toward extremely massive black holes as those seated in the
galactic centers \cite{cutler,cutler1}. The leading parameter for
such corrections is the ratio $v/c$ which, in several physical
cases, cannot simply be discarded. For a detailed discussion see,
for example, \cite{capozzlamb,capozzre,sereno,sereno1}. Apart from
the standard periastron precession effects, such terms induce
nutations and are capable of affecting the stability basin of the
orbital phase space. As shown, the global structure of such a
basin is extremely sensitive to the initial angular velocities,
the initial energy and mass conditions which can determine
possible transitions to chaotic behaviors. Detailed studies on the
transition to chaos could greatly aid in gravitational wave
detections in order to determine the shape, the spectrum and the
intensity of the waves (for a discussion see \cite{levin,gair}).
In the second part of this review, we have summarized many of the
most important topics in the theory of GWs. Linearized theory is
adequate to describe the propagation of GWs and to model their
interaction with our detectors. A variety of
formalisms have been developed.
\begin{itemize}
\item {\it Newtonian theory.} The emission of gravitational waves from stellar encounters in the Newtonian regime, for bodies interacting on elliptical, hyperbolic and parabolic orbits, is studied in the quadrupole approximation. Analytical expressions are then derived for the gravitational wave luminosity, the total energy output and the gravitational radiation amplitude produced in tight impacts where two massive objects closely interact at an impact distance of $1\,\mathrm{AU}$.
\item {\it Post-Newtonian theory.} PN theory is one of the most
important of these formalisms, particularly for modeling binary
systems. Roughly speaking, PN theory analyzes sources using an
iterated expansion in two variables: The ``gravitational
potential'', $\Phi \sim M/r$, where $M$ is a mass scale and $r$
characterizes the distance from the source; and velocities of
internal motion, $v$. (In linearized theory, we assume $\Phi$ is
small but place no constraints on $v$.) Newtonian gravity emerges
as the first term in the expansion, and higher order corrections
are found as the expansion is iterated to ever higher order. Our
derivation of the quadrupole formula gives the leading order term
in the PN expansion of the emitted radiation. See \cite{lb02}
and references therein for a comprehensive introduction to and
explication of this subject.
\item {\it Gravitomagnetic corrections.} The gravitomagnetic effect could give rise to interesting phenomena in tightly
bound systems such as binaries of evolved objects (NSs or BHs).
The effects turn out to be particularly interesting if the
velocity $v$ is in the range
$(10^{-1} \div 10^{-3})\,c$. They could be important
for objects captured and falling toward extremely massive black
holes such as those at the Galactic Center. Gravitomagnetic
orbital corrections, after long integration time, induce
precession and nutation and then modification on the wave-form. In
principle, GW emission could present signatures of gravitomagnetic
corrections after suitable integration times, in particular for the
forthcoming LISA space laser interferometric GW antenna.
\end{itemize}
To conclude, Henri Poincar\'e \cite{poincare} once remarked that
real problems can never be classified as solved or unsolved ones,
but that they are always {\it more or less solved}. This remark
applies particularly well to the problem of motion, which has had a
chequered history. Even the Newtonian problem of motion, which
appeared to be well understood after the development of the powerful
methods of classical mechanics \cite{tisserand}, embarked on an
entirely new career after the work of Poincar\'e, which has led to many
further developments (see \cite{arnold,gallavotti}). The
Einsteinian problem of motion has not even reached a classical
stage where the basic problems appear as well understood. At first
sight the best developed approximation method in GR, the PN one,
would seem to constitute such a classical stage, but the literature
on the PN problem of motion is full of repetitions, errors or
ambiguities. We wanted to conclude this review by giving a list of
issues that need to be clarified. We renounced this project
because, if one wishes to look at the work done with a critical
eye, nearly all aspects of the problem of motion and GWs need to
be thoroughly re-investigated for mathematical, physical or
conceptual reasons, so that the list of open problems would be very
long, consistent with the remark of Poincar\'e. One thing is certain:
the problem of motion and GWs is no longer a purely theoretical
problem. Thanks to the improvement in the precision of position
measurements in the Solar System, and to the discovery of the
binary pulsar 1913+16, which is a relativistic laboratory, the
problem has become an important tool of modern astrophysics. It is
therefore of some urgency, not only to complete and unify the work
already done, but also to develop new approaches aimed both at the
formal and conceptual clarification of the basic issues and at obtaining more accurate explicit results.
\section{Acknowledgments}
We thank S. Capozziello and L. Milano for fruitful discussions and for the useful suggestions on the topics of this review.
\newpage
\section{Introduction}
Let $G$ be a finite simple graph on $[n]$. Herzog et al. in \cite{HH1}
and independently Ohtani in \cite{oh}, introduced the notion of
binomial edge ideal corresponding to a finite simple graph. Let
$S=K[x_1, \ldots, x_n,y_1, \ldots, y_n]$, where $K$ is a field. The
binomial edge ideal of the graph $G$ is $J_G =(x_i y_j - x_j y_i :
\{i,j\} \in E(G), \; i <j)$. Researchers have been trying to establish
connections between combinatorial invariants associated to $G$ and
algebraic invariants associated to $J_G$. In particular, connections
have been established between homological invariants such as depth,
codimension, Betti numbers and Castelnuovo-Mumford regularity of $J_G$
with certain combinatorial invariants associated to $G$, see for
example \cite{her1,HH1,JNR,KM3,MM,Rauf,KM1,KM2}. In \cite[Theorem
1.1]{MM}, Matsuda and Murai proved that for any graph $G$ on vertex
set $[n]$, $l \leq \reg(S/J_G) \leq n-1$, where $l$ is the length of
longest induced path in $G$. They conjectured that $\reg(S/J_G) = n-1$
if and only if $G$ is the path graph. This conjecture was settled in
affirmative by Kiani and Saeedi Madani in \cite{KM3}. For a graph $G$,
let $c(G)$ denote the number of maximal cliques of $G$. If $G$ is a
closed graph, i.e., if $J_G$ has a quadratic Gr\"obner basis, then
Saeedi Madani and Kiani proved that $\reg(S/J_G) \leq c(G)$,
\cite{KM1}. They conjectured that $\reg(S/J_G) \leq c(G)$ for any
finite simple graph $G$, \cite{KM2}. In \cite{KM5}, Saeedi Madani and
Kiani proved the conjecture for generalized block graphs.
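For small graphs, the generating set of $J_G$ can be listed directly from the edge set. The following Python sketch (ours, purely illustrative; the generators are printed as strings rather than as elements of $S$) enumerates the binomials $x_iy_j - x_jy_i$:

```python
# Enumerate the binomial generators x_i y_j - x_j y_i (i < j) of the
# binomial edge ideal J_G of a finite simple graph G on [n],
# given as an edge list. Generators are returned as plain strings.
def binomial_edge_generators(edges):
    gens = []
    for u, v in edges:
        i, j = min(u, v), max(u, v)   # enforce the convention i < j
        gens.append(f"x{i}*y{j} - x{j}*y{i}")
    return gens

# Example: the path graph on vertices 1, 2, 3
print(binomial_edge_generators([(1, 2), (2, 3)]))
# ['x1*y2 - x2*y1', 'x2*y3 - x3*y2']
```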
Another homological invariant associated with an ideal $I$ is the
depth of $S/I$. While not much is known about the depth of binomial
edge ideals, there are some results on the structure of certain
classes of graphs whose binomial edge ideal is Cohen-Macaulay, which
corresponds to the highest possible depth. Bolognini et al. studied
the structure of bipartite graphs and characterized the
Cohen-Macaulayness of the binomial edge ideals of bipartite graphs in
\cite{dav}. They introduced a family of bipartite graphs, denoted by
$F_m$ and a family of non-bipartite graphs, called $k$-fan graphs,
denoted by $F_k^W(K_n)$, whose binomial edge ideals are Cohen-Macaulay
(see Sections 2 and 3 for the definition).
There are very few classes of graphs for which the regularity of their
binomial edge ideals is known. The upper and lower bounds known are,
in general, far from being sharp for most of the classes of graphs.
In this article, we compute the regularity of the binomial edge ideals
of Cohen-Macaulay bipartite graphs. First, we show that
the $k$-fan graphs $F^W_k(K_n)$ satisfy the upper bound conjectured
by Saeedi Madani and Kiani. It may be noted that $F^W_k(K_n)$ is not
necessarily a generalized block graph or a closed graph. We
also obtain a subclass which attains the upper bound, (Theorem
\ref{3.3}). We then compute the regularity of $k$-pure fan graphs,
(Theorem \ref{3.4}). In \cite{dav}, it
was proved that if $G$ is a connected bipartite graph, then $J_G$ is
Cohen-Macaulay if and only if $G = G_1 * \cdots * G_s$, where $G_i = F_m$ or
$G_i = F_{m_1} \circ \cdots \circ F_{m_t}$ for some $m \geq 1$ and
$m_j \geq 3$, see Section 2 for the
definition of the operations $\circ$ and $*$. By
\cite[Theorem 3.1]{JNR}, it is known that if $G = G_1 * G_2$, then
$\reg(S/J_G) = \reg(S/J_{G_1}) + \reg(S/J_{G_2})$. Therefore, to
compute the regularity of Cohen-Macaulay bipartite graphs, we need to
understand the regularity behavior under the operation $\circ$.
We first show that
$\reg(S/J_{F_m}) = 3$ if $m \geq 2$, (Proposition \ref{4.1}).
We then compute the regularity of the intermediate graphs such as
$F_{m_1} \circ \cdots \circ F_{m_t}\circ H$, where $H$ is either $F_n$
or a fan graph $F_k^W(K_n)$ for some $n \geq 3$, (Theorem \ref{4.6}).
Using these information, we obtain a precise expression for the
regularity of binomial edge ideals of Cohen-Macaulay bipartite graphs,
(Theorem \ref{cm-bipartite}).
\section{Preliminaries}
In this section, we recall some notation and fundamental results on
graphs and the corresponding binomial edge ideals which are used
throughout this paper.
Let $G$ be a finite simple graph with vertex set $V(G)$ and edge set
$E(G)$. A graph $G$ is said to be \textit{bipartite} if
there is a bipartition of $V(G)=V_1 \sqcup V_2$ such that for each
$i=1,2$, no two of the vertices of $V_i$ are adjacent.
For a subset $A \subseteq V(G)$, $G[A]$ denotes the induced
subgraph of $G$ on the vertex set $A$, that is, for $i, j \in A$, $\{i,j\} \in E(G[A])$ if and only if $ \{i,j\} \in E(G)$.
For a vertex $v$, $G \setminus v$ denotes the induced subgraph of $G$
on the vertex set $V(G) \setminus \{v\}$. A vertex $v \in V(G)$ is
said to be a \textit{cut vertex} if $G \setminus v$ has strictly more
connected components than $G$.
A subset $U$ of $V(G)$ is said to be a
\textit{clique} if $G[U]$ is a complete graph.
A vertex $v$ is said to be a \textit{free vertex} if it belongs to
exactly one maximal clique. For a vertex $v$, $N_G(v) = \{u \in V(G) ~
: ~ \{u,v\} \in E(G)\}$ (neighborhood of $v$), $N_G[v] = N_G(v)
\cup \{v\}$ and $\deg_G(v) = |N_G(v)|$. A vertex $v$ is said to be
a pendant vertex if $\deg_G(v) =1$. For a vertex $v$, $G_v$ is the
graph on vertex set $V(G)$ and edge set $E(G_v) =E(G) \cup \{
\{u,w\}: u,w \in N_G(v)\}$.
We say that a graph $G$ is Cohen-Macaulay if $S/J_G$ is Cohen-Macaulay.
For a graph $G$, by \textit{regularity of} $G$, we mean the regularity
of the binomial edge ideal of $G$.
For every $m\geq 1$, $F_m$ denotes the graph on the vertex set $[2m]$ and edge set $E(F_m) =\{ \{2i,2j-1\} : i=1,\ldots,m,j=i,\ldots,m\}$.
It was shown that the graphs $F_m$'s form basic building blocks of
Cohen-Macaulay bipartite graphs, see \cite{dav} for details.
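The edge set of $F_m$ is easy to enumerate from the definition above; here is a small Python sketch (function name is ours) which also confirms the count $|E(F_m)| = \sum_{i=1}^m (m-i+1) = m(m+1)/2$:

```python
def F_m_edges(m):
    """Edge set {2i, 2j-1}, 1 <= i <= j <= m, of the bipartite graph F_m."""
    return [(2 * i, 2 * j - 1) for i in range(1, m + 1)
                               for j in range(i, m + 1)]

edges = F_m_edges(3)
print(edges)        # [(2, 1), (2, 3), (2, 5), (4, 3), (4, 5), (6, 5)]
print(len(edges))   # 6, i.e. m(m+1)/2 for m = 3
```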
Here we recall from \cite{dav} the two operations, denoted by $*$ and
$\circ$, which are important in the study of Cohen-Macaulay bipartite
graphs.
\textbf{Operation $*$} : For $i=1,2$, let $G_i$ be a graph with at
least one free vertex $f_i$. We denote by $G =(G_1,f_1) * (G_2,f_2)$
the graph obtained by identifying the vertices $f_1$ and $f_2$.
\textbf{Operation $\circ$} : For $i=1,2$, let $G_i$ be a graph with at least one pendant vertex $f_i$ and $v_i$ be its neighbor with
$\deg_{G_i}(v_i) \geq 2$. Then we define $G=(G_1,f_1) \circ (G_2, f_2)$
to be the graph obtained from $G_1$ and $G_2$ by removing the pendant
vertices $f_1,f_2$ and identifying the vertices $v_1$ and $ v_2$.
In the above notation, we may suppress $f_1$ and $f_2$ whenever it is
not necessary to emphasize them. Below, we illustrate the definition
of $*$ and $\circ$ with an example.
Let $G$ and $H$ be the graphs as given below:
\vskip 2mm \noindent
\begin{minipage}{\linewidth}
\begin{minipage}{0.3\linewidth}
\captionsetup[figure]{labelformat=empty}
\begin{figure}[H]
\begin{tikzpicture}[scale=.6]
\draw (4,2)-- (4,0);
\draw (4,0)-- (3,-2);
\draw (4,0)-- (5,-2);
\draw (3,-2)-- (5,-2);
\draw (5,-2)-- (6,0);
\draw (5,-2)-- (7,-2);
\draw (3,-2)-- (5,-2);
\draw (9,0)-- (9,-2);
\draw (9,-2)-- (10,0);
\draw (10,0)-- (10,-2);
\draw (10,-2)-- (11,0);
\draw (11,0)-- (11,-2);
\draw (9,-2)-- (11,0);
\draw (9,0)-- (9,-2);
\draw (9,-2)-- (11,0);
\begin{scriptsize}
\fill (4,2) circle (1.5pt);
\draw (4.28,2.26) node {$v_1$};
\fill (4,0) circle (1.5pt);
\draw (4.28,0.26) node {$v_2$};
\fill (3,-2) circle (1.5pt);
\draw (3.18,-2.24) node {$v_3$};
\fill (5,-2) circle (1.5pt);
\draw (5.04,-2.32) node {$v_4$};
\fill (6,0) circle (1.5pt);
\draw (6.28,0.26) node {$v_5$};
\fill (7,-2) circle (1.5pt);
\draw (7.16,-2.26) node {$v_6$};
\fill (9,-2) circle (1.5pt);
\draw (9.14,-2.26) node {$u_2$};
\fill (10,-2) circle (1.5pt);
\draw (10.16,-2.26) node {$u_4$};
\fill (11,-2) circle (1.5pt);
\draw (11.1,-2.24) node {$u_6$};
\fill (9,0) circle (1.5pt);
\draw (9.12,0.44) node {$u_1$};
\fill (10,0) circle (1.5pt);
\draw (10.06,0.4) node {$u_3$};
\fill (11,0) circle (1.5pt);
\draw (11.06,0.42) node {$u_5$};
\draw (5.04,-3.02) node {$G$};
\draw (10.16,-3.02) node {$H$};
\end{scriptsize}
\end{tikzpicture}
\end{figure}
\end{minipage}
\begin{minipage}{0.40\linewidth}
\captionsetup[figure]{labelformat=empty}
\begin{figure}[H]
\begin{tikzpicture}[scale=.6]
\draw (4,2)-- (4,0);
\draw (4,0)-- (3,-2);
\draw (4,0)-- (5,-2);
\draw (3,-2)-- (5,-2);
\draw (5,-2)-- (6,0);
\draw (5,-2)-- (7,-2);
\draw (3,-2)-- (5,-2);
\draw (7,-2)-- (9,-2);
\draw (9,-2)-- (10,0);
\draw (10,0)-- (10,-2);
\draw (10,-2)-- (11,0);
\draw (11,0)-- (11,-2);
\draw (9,-2)-- (11,0);
\draw (7,-2)-- (9,-2);
\draw (9,-2)-- (11,0);
\begin{scriptsize}
\fill (4,2) circle (1.5pt);
\draw (4.28,2.26) node {$v_1$};
\fill (4,0) circle (1.5pt);
\draw (4.28,0.26) node {$v_2$};
\fill (3,-2) circle (1.5pt);
\draw (3.18,-2.24) node {$v_3$};
\fill (5,-2) circle (1.5pt);
\draw (5.14,-2.22) node {$v_4$};
\fill (6,0) circle (1.5pt);
\draw (6.28,0.26) node {$v_5$};
\fill (7,-2) circle (1.5pt);
\draw (7.16,-2.26) node {$v_6$};
\fill (9,-2) circle (1.5pt);
\draw (9.14,-2.26) node {$u_2$};
\fill (10,-2) circle (1.5pt);
\draw (10.16,-2.26) node {$u_4$};
\fill (11,-2) circle (1.5pt);
\draw (11.1,-2.24) node {$u_6$};
\fill (7,-2) circle (1.5pt);
\draw (7.12,-1.56) node {$u_1$};
\fill (10,0) circle (1.5pt);
\draw (10.06,0.4) node {$u_3$};
\fill (11,0) circle (1.5pt);
\draw (11.06,0.42) node {$u_5$};
\draw (7.12,-3.02) node {$G*H$};
\end{scriptsize}
\end{tikzpicture}
\end{figure}
\end{minipage}
\begin{minipage}{.25\linewidth}
\captionsetup[figure]{labelformat=empty}
\begin{figure}[H]
\begin{tikzpicture}[scale=.6]
\draw (4,2)-- (4,0);
\draw (4,0)-- (3,-2);
\draw (4,0)-- (5,-2);
\draw (3,-2)-- (5,-2);
\draw (5,-2)-- (5,0);
\draw (3,-2)-- (5,-2);
\draw (5,-2)-- (6,0);
\draw (6,0)-- (6,-2);
\draw (6,-2)-- (7,0);
\draw (7,0)-- (7,-2);
\draw (5,-2)-- (7,0);
\draw (5,-2)-- (7,0);
\begin{scriptsize}
\fill (4,2) circle (1.5pt);
\draw (4.28,2.26) node {$v_1$};
\fill (4,0) circle (1.5pt);
\draw (4.26,0.32) node {$v_2$};
\fill (3,-2) circle (1.5pt);
\draw (3.18,-2.24) node {$v_3$};
\fill (5,-2) circle (1.5pt);
\draw (4.78,-2.22) node {};
\fill (5,0) circle (1.5pt);
\draw (5.24,0.36) node {$v_5$};
\fill (5,-2) circle (1.5pt);
\draw (4.95,-2.22) node {$v_4=u_2$};
\fill (6,-2) circle (1.5pt);
\draw (6.16,-2.26) node {$u_4$};
\fill (7,-2) circle (1.5pt);
\draw (7.1,-2.24) node {$u_6$};
\fill (6,0) circle (1.5pt);
\draw (6.06,0.4) node {$u_3$};
\fill (7,0) circle (1.5pt);
\draw (7.06,0.42) node {$u_5$};
\draw (5.50,-3.02) node {$G\circ H$};
\end{scriptsize}
\end{tikzpicture}
\end{figure}
\end{minipage}
\end{minipage}
The graph $G*H$ given above is obtained by identifying the vertices
$v_6$ of $G$ and $u_1$ of $H$. By deleting the vertices $v_6$ of $G$ and
$u_1$ of $H$ and identifying the vertices $v_4$ and $u_2$, we obtain
$G\circ H$ as given above.
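The two gluings can be mimicked on edge-list representations. The sketch below is ours: vertices of the second graph are tagged to keep the two vertex sets disjoint before the identification, and no validity checks (freeness of $f_i$, pendantness, $\deg(v_i)\ge 2$) are performed:

```python
def star(E1, f1, E2, f2):
    """(G1, f1) * (G2, f2): identify the free vertices f1 and f2."""
    relabel = lambda v: f1 if v == f2 else ("H", v)
    return E1 + [(relabel(u), relabel(w)) for u, w in E2]

def circ(E1, f1, v1, E2, f2, v2):
    """(G1, f1) o (G2, f2): delete the pendant vertices f1, f2 and
    identify their neighbours v1 and v2."""
    relabel = lambda v: v1 if v == v2 else ("H", v)
    kept1 = [e for e in E1 if f1 not in e]
    kept2 = [(relabel(u), relabel(w)) for u, w in E2 if f2 not in (u, w)]
    return kept1 + kept2

# Two paths 1-2-3 glued by o along their pendant ends give a path again.
E = [(1, 2), (2, 3)]
print(circ(E, 3, 2, E, 1, 2))   # [(1, 2), (2, ('H', 3))]
```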
For a subset $T$ of $[n]$, let $\bar{T} = [n]\setminus T$ and let $c_G(T)$
denote the number of connected components of $G[\bar{T}]$. Let $G_1,\ldots,G_{c_G(T)}$ be the connected
components of $G[\bar{T}]$. For each $i$, let $\tilde{G}_i$ denote the complete graph on $V(G_i)$ and
$$P_T(G) = \Big(\bigcup_{i\in T} \{x_i,y_i\}, \; J_{\tilde{G}_1},\ldots, J_{\tilde{G}_{c_G(T)}}\Big).$$
It was shown by Herzog et al. that $J_G = \bigcap_{T \subseteq [n]} P_T(G)$, \cite{HH1}.
If, for each $i \in T$, $i$ is a cut vertex of the graph $G[\bar{T} \cup \{i\}]$,
then we say that $T$ has the cut point property. Set $\mathscr{C}(G) =\{\emptyset \}
\cup \{ T: T \; \text{has the cut point property} \}$.
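The quantity $c_G(T)$, and hence the components entering $P_T(G)$, can be computed by a breadth-first search on the induced subgraph $G[\bar{T}]$. A minimal Python sketch (ours):

```python
from collections import deque

def c_G(n, edges, T):
    """Number of connected components of the induced subgraph G[[n] \\ T]."""
    verts = set(range(1, n + 1)) - set(T)
    adj = {v: [] for v in verts}
    for u, w in edges:
        if u in verts and w in verts:
            adj[u].append(w)
            adj[w].append(u)
    seen, components = set(), 0
    for start in verts:
        if start in seen:
            continue
        components += 1            # found a new component; flood-fill it
        queue = deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
    return components

# Path 1-2-3: T = {2} removes the cut vertex, leaving two components,
# so T = {2} has the cut point property.
print(c_G(3, [(1, 2), (2, 3)], {2}))   # 2
```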
Throughout this paper, we use a short exact sequence which allows us
to use induction.
\begin{remark}\label{2.5}
Let $G$ be a finite simple graph and $v$ be a vertex which is not a
free vertex in $G$. In \cite[Lemma 4.8]{oh}, it was shown that
$J_G = Q_1 \cap Q_2$, where $Q_1 = J_{G_v}$, $Q_2 = (x_v,y_v) +
J_{G\setminus v}$ and $Q_1 +Q_2 = (x_v,y_v) + J_{G_v \setminus
v}$. This gives rise to the following short exact sequence:
\begin{equation}\label{2.6}
0 \longrightarrow \dfrac{S}{J_{G} } \longrightarrow
\dfrac{S}{Q_1} \oplus \dfrac{S}{Q_2} \longrightarrow \dfrac{S}{Q_1
+Q_2 } \longrightarrow 0.
\end{equation}
\end{remark}
The following basic property of regularity is used repeatedly in this
article.
\begin{lemma}\label{2.4}
Let $S$ be a standard graded ring and $M,N$ and $P$ be finitely generated graded $S$-modules.
If $ 0 \rightarrow M \xrightarrow{f} N \xrightarrow{g} P \rightarrow 0$ is a
short exact sequence with $f,g$
graded homomorphisms of degree zero, then
\begin{enumerate}
\item $\reg(M) \leq \max\{\reg(N),\reg(P)+1\}$.
\item $\reg(M) = \reg(N)$, if $\reg(N) >\reg(P)$.
\end{enumerate}
\end{lemma}
\section{Regularity of $F_k^W(K_n)$}
In \cite{dav}, Bolognini et al. introduced a family of chordal graphs namely the
fan of a complete graph $K_n$.
\begin{definition}
Let $K_n$ be the complete graph on the vertex set $[n]$ and $W =
\{v_1, \ldots , v_r\} \subseteq [n]$. Then $F^W(K_n)$ is the graph
obtained from $K_n$ by the following operation: for every $i = 1,
\ldots ,r$, attach a complete graph $K_{a_i}$ to $K_n$ in such a way
that $V(K_n) \cap V(K_{a_i}) = \{v_1, \ldots, v_i\}$, for some $a_i >
i$. We say that the graph $F^W(K_n)$ is obtained by adding a fan to
$K_n$ on the set $W$ and $\{ K_{a_1},\ldots, K_{a_r}\} $ is the branch of
that fan on $W$.
Let $K_n$ be the complete graph on $[n]$ and $W_1 \sqcup \cdots \sqcup
W_k$ be a partition of a subset $W \subseteq [n]$. Let $F_k^W(K_n)$
be the graph
obtained from $K_n$ by adding a fan on each set $W_i$. For each $i\in
\{1,\ldots , k \}$, set $W_i =\{v_{i,1},\ldots , v_{i,r_i}\}$ and
$\{K_{a_{i,1}},\ldots , K_{a_{i,r_i}}\}$ be the branch of the fan on
$W_i$. The graph $F_k^W(K_n)$ is called a $k$-fan of $K_n$ on the set
$W$.
A branch $\{K_{a_{i,1}},\ldots,K_{a_{i,r_i}}\} $ of the fan on $W_i$
is said to be a pure branch if for each $j =1,\ldots,r_i$, $a_{i,j} =
j+1$. A fan on the set $W_i$ is said to be a pure fan if its branch is
pure. If for each $i \in \{1,\ldots,k\}$, the fan on the set $W_i$ is pure,
then $F_k^W(K_n)$ is said to be a $k$-pure fan graph of $K_n$ on $W$.
\end{definition}
\begin{example}
Let $G_1$ and $G_2$ be the graphs as shown in the figure below.
\begin{minipage}{\linewidth}
\begin{minipage}{0.25\linewidth}
\definecolor{qqqqff}{rgb}{0,0,1}
\captionsetup[figure]{labelformat=empty}
\begin{figure}[H]
\begin{tikzpicture}[scale=.6]
\draw (-1,4)-- (-0.94,2.68);
\draw (-0.94,2.68)-- (0.02,1.6);
\draw (0.02,1.6)-- (1.04,2.74);
\draw (1.04,2.74)-- (1,4);
\draw (1,4)-- (0,5);
\draw (0,5)-- (-1,4);
\draw (-1,4)-- (-2,5);
\draw (-1,4)-- (-1.16,5.66);
\draw (-1.16,5.66)-- (0,5);
\draw (0,5)-- (0,6);
\draw (0,6)-- (-1,4);
\draw (-1,4)-- (1,4);
\draw (1,4)-- (0,6);
\draw (0,5)-- (-0.94,2.68);
\draw (-0.94,2.68)-- (1.04,2.74);
\draw (1.04,2.74)-- (0,5);
\draw (-1,4)-- (0.02,1.6);
\draw (0.02,1.6)-- (1,4);
\draw (1,4)-- (-0.94,2.68);
\draw (-1,4)-- (1.04,2.74);
\draw (0,5)-- (0.02,1.6);
\draw (1.04,2.74)-- (2.14,2.92);
\draw (1.04,2.74)-- (1.34,1.38);
\draw (1.34,1.38)-- (0.02,1.6);
\begin{scriptsize}
\fill (0,5) circle (1.5pt);
\draw(0.14,5.26) node {$2$};
\fill (-1,4) circle (1.5pt);
\draw(-1.26,3.96) node {$1$};
\fill (1,4) circle (1.5pt);
\draw(1.16,4.26) node {$3$};
\fill (1.04,2.74) circle (1.5pt);
\draw(1.18,3) node {$4$};
\fill (-0.94,2.68) circle (1.5pt);
\draw(-1.22,2.62) node {$6$};
\fill (0.02,1.6) circle (1.5pt);
\draw(-0.04,1.38) node {$5$};
\fill (-2,5) circle (1.5pt);
\fill (-1.16,5.66) circle (1.5pt);
\fill (0,6) circle (1.5pt);
\fill (2.14,2.92) circle (1.5pt);
\fill (1.34,1.38) circle (1.5pt);
\end{scriptsize}
\end{tikzpicture}
\caption{$G_1$}
\end{figure}
\end{minipage}
\begin{minipage}{0.25\linewidth}
\captionsetup[figure]{labelformat=empty}
\begin{figure}[H]
\definecolor{qqqqff}{rgb}{0,0,1}
\begin{tikzpicture}[scale=.6]
\draw (-1,4)-- (-0.94,2.68);
\draw (-0.94,2.68)-- (0.02,1.6);
\draw (0.02,1.6)-- (1.04,2.74);
\draw (1.04,2.74)-- (1,4);
\draw (1,4)-- (0,5);
\draw (0,5)-- (-1,4);
\draw (-1,4)-- (-2,5);
\draw (-1,4)-- (-1.16,5.66);
\draw (-1.16,5.66)-- (0,5);
\draw (0,5)-- (0,6);
\draw (0,6)-- (-1,4);
\draw (-1,4)-- (1,4);
\draw (1,4)-- (0,6);
\draw (0,5)-- (-0.94,2.68);
\draw (-0.94,2.68)-- (1.04,2.74);
\draw (1.04,2.74)-- (0,5);
\draw (-1,4)-- (0.02,1.6);
\draw (0.02,1.6)-- (1,4);
\draw (1,4)-- (-0.94,2.68);
\draw (-1,4)-- (1.04,2.74);
\draw (0,5)-- (0.02,1.6);
\draw (1.04,2.74)-- (2.14,2.92);
\draw (1.04,2.74)-- (1.34,1.38);
\draw (1.34,1.38)-- (0.02,1.6);
\draw (-2,5)-- (-2.28,3.96);
\draw (-2.28,3.96)-- (-1,4);
\draw (-1.52,5.08)-- (-1,4);
\draw (-1.52,5.08)-- (0,5);
\draw (-1.52,5.08)-- (-1.16,5.66);
\draw (1.04,2.74)-- (0.88,1.88);
\draw (0.88,1.88)-- (1.34,1.38);
\draw (0.88,1.88)-- (0.02,1.6);
\begin{scriptsize}
\fill (0,5) circle (1.5pt);
\draw (0.32,5.22) node {$2$};
\fill (-1,4) circle (1.5pt);
\draw (-1.22,3.74) node {$1$};
\fill (1,4) circle (1.5pt);
\draw (1.54,4.24) node {$3$};
\fill (1.04,2.74) circle (1.5pt);
\draw (1.44,3.18) node {$4$};
\fill (-0.94,2.68) circle (1.5pt);
\draw (-1.2,2.42) node {$6$};
\fill (0.02,1.6) circle (1.5pt);
\draw (-0.08,1.26) node {$5$};
\fill (-2,5) circle (1.5pt);
\fill (-1.16,5.66) circle (1.5pt);
\fill (0,6) circle (1.5pt);
\fill (2.14,2.92) circle (1.5pt);
\fill (1.34,1.38) circle (1.5pt);
\fill (-2.28,3.96) circle (1.5pt);
\fill (-1.52,5.08) circle (1.5pt);
\fill (0.88,1.88) circle (1.5pt);
\end{scriptsize}
\end{tikzpicture}
\caption{$G_2$}
\end{figure}
\end{minipage}
\begin{minipage}{0.4\linewidth}
Let $W = \{1,2,3\} \sqcup \{4,5\}$. Then it can be seen that
$G_1=F_2^W(K_6)$ is a $2$-pure fan graph while $G_2=F_2^W(K_6)$ is a $2$-fan
graph which is not a pure fan graph.
\end{minipage}
\end{minipage}
\end{example}
In \cite[Lemma 3.2]{dav}, it was proved that $F_k^W(K_n)$ is
Cohen-Macaulay. It may be noted that if $|W_i| > 1$ for some $i$, then
$F_k^W(K_n)$ is neither a closed graph nor a generalized block graph.
In this section, we prove that the regularity of the $k$-fan graph
$F_k^W(K_n)$ is at most the number of maximal cliques in it, i.e., the
class of $k$-fan graphs satisfies the regularity upper bound conjecture
of Saeedi Madani and Kiani. If $k=1$, then we denote $F_1^W(K_n)$ by
$F^W(K_n)$.
\begin{theorem}\label{3.3}
With the above notation, let $G =F_k^W(K_n)$ be a $k$-fan graph of the
complete graph $K_n$ on $W,$ where $n\geq2$. Then
$\reg(S/J_G) \leq c(G)$. Moreover, if for each $i \in \{1,\ldots,k\} $
and for each $ j \in \{1,\ldots, r_i\}$, $ a_{i,j} >j+1 $, then
equality holds.
\end{theorem}
\begin{proof}
We prove the assertions by induction on $k$. For $k=1$, set $W_1 = \{v_1, \ldots , v_{r_1}\}$.
We proceed by induction on $|W_1|= r_1$. If $r_1 =1$, then the result
follows from \cite[Theorem 3.1]{JNR}. Assume that $r_1 >1$ and that the result is true for $r_1 -1$.
Set $G_1= K_{a_1} \setminus v_1$ and $G_2 =F^{W_1 \setminus \{v_1\}}(K_n \setminus v_1)$. Since $G_2$ is the fan graph
of $K_n \setminus v_1$ on $W_1 \setminus \{v_1\}$, by induction, $\reg(S/J_{G_2}) \leq c(G_2)$.
Note that $G=cone(v_1,G_1 \sqcup G_2)$. By \cite[Theorem 3.19]{KM5}, $$\reg(S/J_G) = \reg(S/J_{G_1}) + \reg(S/J_{G_2}) \leq 1+ c(G_2) =c(G).$$
Now, assume that $k >1$ and the result is true for $k-1$. We proceed by induction on $r_k$. If $r_k =1$, then $G=K_{a_{k,1}} * F_{k-1}^{W\setminus \{v_{k,1}\}}(K_n)$.
By induction on $k$, $\reg(S/J_{F_{k-1}^{W \setminus \{v_{k,1}\}}(K_n)}) \leq c(F_{k-1}^{W \setminus \{v_{k,1}\}}(K_n))$.
By \cite[Theorem 3.1]{JNR}, $$\reg(S/J_G) =\reg(S/J_{K_{a_{k,1}}})+ \reg(S/J_{F_{k-1}^{W \setminus \{v_{k,1}\}}(K_n)}) \leq 1+ c(F_{k-1}^{W \setminus \{v_{k,1}\}}(K_n))=c(G).$$
Assume that $r_k >1$ and the result is true for $r_k -1$. Since $v=v_{k,1}$ is not a free vertex, by Remark \ref{2.5}, $J_G =Q_1 \cap Q_2$,
$Q_1 = J_{G_{v}}$, $Q_2 =(x_v,y_v)+J_{G\setminus v}$ and $Q_1+Q_2 = (x_v,y_v)+J_{G_v \setminus v}$.
Let $G' = F_k^{W \setminus \{v\}}(K_n \setminus v)$. Then $G \setminus v = (K_{a_{k,1}} \setminus v) \sqcup G'$.
By induction on
$r_k$, $\reg(S/J_{G'}) \leq c(G')$ and therefore, $$\reg(S/Q_2)=\reg(S/J_{G\setminus v})=\reg(S/J_{K_{a_{k,1}} \setminus v})+\reg(S/J_{G'})
\leq 1+c(G\setminus v) = c(G).$$
Let $H$ be the complete graph on the vertex set $N_G[v]$. Note that $G_{v}$ is a $(k-1)$-fan graph of $H$ on $U = W_1 \sqcup \cdots \sqcup W_{k-1}$ and
by induction on $k$, $\reg(S/Q_1)=\reg(S/J_{G_v}) \leq c(G_v)< c(G)$.
Also, $G_{v} \setminus v$ is a $(k-1)$-fan graph of $H\setminus v$ on the set $U = W_1 \sqcup \cdots \sqcup W_{k-1}$. By induction on $k$,
$\reg(S/{Q_1+Q_2})=\reg(S/J_{G_v \setminus v}) \leq c(G_v \setminus v) < c(G)$. Hence, by the short exact sequence (\ref{2.6}) and Lemma \ref{2.4},
$\reg(S/J_G) \leq c(G)$.
\end{proof}
Now, we compute the regularity of a $k$-pure fan graph.
This result, along with the regularity of the $F_m$'s, helps us to compute
the regularity of Cohen-Macaulay bipartite graphs.
\begin{theorem}\label{3.4}
Let $G=F_k^W(K_n)$ be a $k$-pure fan graph of $K_n$ on $W,$
where $n\geq2$. Then $\reg(S/J_G) =k+1$.
\end{theorem}
\begin{proof}
We prove this by induction on $k$. For $k=1$, let $W_1 = \{v_1, \ldots , v_{r_1}\}$ and $\{K_{a_1},\ldots,K_{a_{r_1}}\}$ be the
branches of the fan on $W_1$.
We prove this assertion by induction on $|W_1|= r_1$. If $r_1 =1$, then the result
follows from \cite[Theorem 3.1]{JNR}. Assume that $r_1 >1$ and the result is true for $r_1 -1$.
Write $K_{a_1} =\{v_1,w\}$.
Set $G' =F^{W_1 \setminus \{ v_1\}}(K_n \setminus v_1)$. Since $G'$ is the $1$-pure fan graph
of $K_n \setminus v_1$ on $W_1 \setminus \{v_1\}$, it follows from the induction hypothesis that $\reg(S/J_{G'}) =2$.
Note that $G=cone(v_1,w \sqcup G')$. Therefore,
by \cite[Theorem 3.19]{KM5}, $\reg(S/J_G) = \reg(S/J_{G'}) =2$.
Now, assume that $k >1$ and the result is true for $k-1$. If $r_k =1$, then
$G=K_{a_{k,1}} * F_{k-1}^{W \setminus \{v_{k,1}\}}(K_n)$. By induction on $k$, $\reg(S/J_{F_{k-1}^{W \setminus \{v_{k,1}\}}(K_n)}) =k$.
By \cite[Theorem 3.1]{JNR}, $$\reg(S/J_G) =\reg(S/J_{K_{a_{k,1}}})+ \reg(S/J_{F_{k-1}^{W\setminus \{v_{k,1}\}}(K_n)})=k+1.$$
Assume that $r_k >1$ and the result is true for $r_k -1$. Since $v=v_{k,1}$ is not a free vertex, by Remark \ref{2.5}, $J_G =Q_1 \cap Q_2$,
$Q_1= J_{G_{v}}$, $Q_2=(x_v,y_v)+ J_{G\setminus v}$ and $Q_1+Q_2 = (x_v,y_v)+J_{G_{v} \setminus v}$.
Note that $ G\setminus v =w_{k,1} \sqcup G''$, where $G''$
is the $k$-pure fan graph of $K_n \setminus v$ on $W\setminus \{v\}$ and $K_{a_{k,1}}=\{v,w_{k,1}\}$.
By induction on $r_k$, $\reg(S/J_{G''}) =k+1$ and therefore, $$\reg(S/Q_2)=\reg(S/J_{G\setminus v})=\reg(S/J_{G''}) =k+1.$$
Let $H$ be the complete graph on the vertex set $N_G[v]$. Note that
$G_v$ is a $(k-1)$-pure fan graph of $H$
on $W' = W_1 \sqcup \cdots \sqcup W_{k-1}$. By induction on $k$, $\reg(S/Q_1)= \reg(S/J_{G_{v}}) =k$.
Also, by induction on $k$,
$\reg(S/{Q_1+Q_2})=\reg(S/J_{G_v \setminus v})
=k$.
Now, using the short exact sequence (\ref{2.6}) and Lemma \ref{2.4},
we get $\reg(S/J_G) =k+1$.
\end{proof}
It was proved in \cite[Theorem 1.1]{MM} that $\reg(S/J_G) \geq l$,
where $l$ is the length of a longest induced path in $G$. Note that if $k \geq 2$,
then a longest induced path of $F_k^W(K_n)$ has length $3$. We
conclude this section by
obtaining an improved lower bound for this class of graphs.
\begin{corollary}
Let $G=F_k^W(K_n)$ be a $k$-fan graph of the complete graph $K_n$ on the set $W$, where $n\geq2$. Then $\reg(S/J_G) \geq k+1$.
\end{corollary}
\begin{proof}
Let $A=[n]
\sqcup \{w_{i,1}: i=1,\ldots,k\}$, where $w_{i,1} \in V(K_{a_{i,1}})
\setminus [n]$. Then $G[A]$ is the induced subgraph of $G$ obtained
by attaching a whisker to each of $k$ vertices of $K_n$. By applying \cite[Corollary
2.2]{MM} and \cite[Theorem 3.1]{JNR}, we get $\reg(S/J_G) \geq \reg(S/J_{G[A]}) = k+1$.
\end{proof}
\section{Regularity of Cohen-Macaulay bipartite graphs}
In this section, we compute the regularity of binomial edge ideals of
Cohen-Macaulay bipartite graphs. As a first step, we compute the
regularity of $F_m$, for $m\geq2$, which are the basic building blocks
of a Cohen-Macaulay bipartite graph.
Note that $F_1$ is $K_2$, therefore, $\reg(S/J_{F_1}) =1$.
\begin{proposition}\label{4.1}
For each $m\geq 2$, $\reg(S/J_{F_m}) =3$.
\end{proposition}
\begin{proof}
We prove the assertion by induction on $m$.
Observe that $F_2$ is a path on $4$ vertices, therefore
$\reg(S/J_{F_2}) = 3$.
Assume now that $m \geq 3$ and that the result is true for $m-1$. Since $v=2m-1$ is not a free vertex of $F_m$, by Remark \ref{2.5},
$J_{F_m} = Q_1 \cap Q_2$,
$Q_1=J_{{{(F_m)}_v}}$, $Q_2= (x_v,y_v)+J_{{F_m} \setminus v}$ and
$Q_1+Q_2 = (x_v,y_v)+ J_{{{(F_m)}_v} \setminus v}$.
Note that ${(F_m)}_v = F^{W'}(H)$ is the $1$-pure fan graph of $H$ on the set $W'=\{2,4,\ldots,2m\}$,
where $H$ is a complete graph on vertex set $N_{F_m}[v]$.
By Theorem \ref{3.4}, $\reg(S/Q_1)=
\reg(S/J_{F^{W'}(H)}) =2$. Since ${F_m} \setminus v = F_{m-1} \sqcup \{2m\}$, by induction on $m$, $\reg(S/Q_2)= \reg(S/J_{F_{m-1}}) =3$.
Note that ${(F_m)}_v \setminus v = F^{W'}(H\setminus v)$ is the $1$-pure fan graph of $H\setminus v$ on $W'$. It follows from Theorem \ref{3.4} that
$\reg(S/{Q_1+Q_2}) = \reg(S/J_{{{(F_m)}_v} \setminus v}) =2$. Thus, by the short exact sequence (\ref{2.6}) and Lemma \ref{2.4}, $\reg(S/J_{F_m})=3$. Hence, the assertion follows.
\end{proof}
It may be noted that any maximal induced path of $F_m$ has length $3$.
Therefore, one can say that the $F_m$'s have minimal regularity, in the
sense that they attain the lower bound given by Matsuda and Murai
\cite{MM}.
\begin{remark}\label{4.2}
For the operation $\circ$, in \cite{dav}, the authors
assumed that $\deg_{G_i}(v_i) \geq 3$, for each $i$. By allowing
$\deg_{G_i}(v_i) = 2$, we can apply the operation $\circ$ with $F_2$ as one
of the graphs.
If $F_{m_1}$ is a graph with $m_1\geq 2$, then $F_{m_1} \circ F_2 =F_{m_1}*F_1$. By \cite[Theorem 3.1]{JNR},
$\reg(S/J_{F_{m_1} \circ F_2})
=\reg(S/J_{F_{m_1} * F_1}) = 3+1 =4$.
\end{remark}
We now compute the regularity of
$F_{m_1} \circ F_{m_2}$
in terms of the regularities of $F_{m_1}$ and $F_{m_2}$.
\begin{proposition}\label{4.3}
Let $m_1,m_2\geq 3$ and $G=(F_{m_1},f_1) \circ (F_{m_2},f_2)$. Then
\[\reg(S/J_G)=\reg(S/J_{F_{m_1 -1}})+\reg(S/J_{F_{m_2 -1}}) =6.\]
\end{proposition}
\begin{proof}
Let $V(F_{m_1}) = \{u_1, \ldots, u_{2m_1}\}$ and $V(F_{m_2}) =
\{w_1, \ldots, w_{2m_2}\}$. In $F_{m_1}$, there are two vertices of
degree $1$, namely $u_1$ and $u_{2m_1}$. The same holds for $F_{m_2}$.
It may be noted that the graphs obtained by different choices of $f_1$ and
$f_2$ are isomorphic. Hence, without loss of generality, we may assume
that $f_1 = u_{2m_1}$ and $f_2 = w_{2m_2}$. Let $v = u_{2m_1-1} =
w_{2m_2-1}$ in $G$.
Since $v$ is not a free vertex of the graph $G$, by Remark \ref{2.5},
there exist $Q_1 = J_{G_v} $ and $Q_2 =(x_v,y_v) +J_{G\setminus v}$ so
that $J_G = Q_1 \cap Q_2$ and $Q_1+Q_2 = (x_v,y_v)+J_{G_v \setminus v}$.
Let $H$ be the complete graph on vertex set $N_G[v].$
Note that $G_v = F_2^W(H)$ is a $2$-pure fan graph of $H$ on $W =
N_G(v)$, $G \setminus v = F_{m_1-1} \sqcup F_{m_2-1}$ and $G_v
\setminus v = F_2^W(H \setminus v)$ which is a $2$-pure fan of $H
\setminus v$ on $W$. Therefore, it follows from Theorem \ref{3.4} and
Proposition \ref{4.1} that
\[
\reg(S/Q_1)=3, \reg(S/Q_2) = \reg(S/J_{F_{m_1 -1}})+\reg(S/J_{F_{m_2 -1}})=6 \text{ and }
\reg(S/{Q_1+Q_2}) =3.\]
Hence, it follows from the short exact sequence (\ref{2.6}) and Lemma
\ref{2.4} that $$\reg(S/J_G) =\reg(S/J_{F_{m_1 -1}})+\reg(S/J_{F_{m_2 -1}})= 6.$$
\end{proof}
For the rest of the section, we assume that $F_k^W(K_n)$ is a
$k$-pure fan graph.
\begin{proposition}\label{4.4}
For $m,n\geq3$, let $G=(F_m,f_1) \circ (F_k^W(K_n),f_2)$, where $W =
W_1 \sqcup \cdots \sqcup W_k \subseteq [n]$. Write $v = v_1 = v_2$ in $G$.
Assume that $|W_i|\geq 2$ for some $i$ and $v \in W_i$. Then
\[\reg(S/J_G)= \reg(S/J_{F_{m -1}})+\reg(S/J_{F_k^W(K_n) \setminus
\{v,f_2\}})=k+4.\]
\end{proposition}
\begin{proof}
Without loss of generality, assume that $|W_1| \geq 2$ and $v \in
W_1$.
Since $v$ is not a free vertex of $G$, by Remark \ref{2.5}, there exist
$Q_1 = J_{G_v}$ and $Q_2 =(x_v,y_v)+J_{G\setminus v}$ such that
$J_G=Q_1 \cap Q_2$ and $Q_1+Q_2 = (x_v,y_v)+ J_{G_v \setminus v}$. Let
$H$ be the complete graph on $N_G[v]$. Note that $G_v =F_k^{W'}(H)$
is a $k$-pure fan of $H$ on $W'=N_{F_m \setminus f_1}(v) \sqcup
(W \setminus W_1)$.
By Theorem \ref{3.4}, $\reg(S/Q_1)= \reg(S/J_{G_v}) = k+1$. Since $K_{a_{1,1}} =\{v,f_2\}$ and
$G\setminus v = F_{m-1} \sqcup (F_k^{W}(K_n)\setminus \{v,f_2\})$, by
Proposition \ref{4.1} and Theorem \ref{3.4}, we get
$$\reg(S/Q_2) =\reg(S/J_{G\setminus v}) = \reg(S/J_{F_{m-1}})+ \reg(S/J_{F_k^W(K_n) \setminus \{v,f_2\}}) =k+4.$$
Since $G_v \setminus v$ is an induced subgraph of $G_v$,
$\reg(S/(Q_1+Q_2)) = \reg(S/J_{G_v \setminus v}) \leq \reg(S/J_{G_v}) = k+1$.
Hence we conclude from the short exact sequence (\ref{2.6}) and
Lemma \ref{2.4} that
$$\reg(S/J_G) =\reg(S/J_{F_{m -1}})+\reg(S/J_{F_k^W(K_n)\setminus \{v,f_2\}})= k+4.$$
\end{proof}
\begin{remark}\label{4.5}
\begin{enumerate}
\item In Proposition \ref{4.4}, we assumed that $m\geq 3$. If $m=2$,
then $F_m$ is a path of length $3$.
Since $G=F_2 \circ F_k^W(K_n) = F_1 * F_k^W(K_n)$, by \cite[Theorem 3.1]{JNR},
$$\reg(S/J_G) =\reg(S/J_{F_1})+\reg(S/J_{F_k^W(K_n)})=k+2.$$
\item We also assumed that $|W_i| \geq 2$ for some $i$.
If $|W_i|=1$ for each $i$, then observe that $G \setminus v = F_{m-1} \sqcup
F_{k-1}^{W \setminus \{v\}}(K_n \setminus v)$ and hence
$\reg(S/Q_2) = k+3$. Note that the regularities of $S/Q_1$ and
$S/(Q_1+Q_2)$ remain the same. Therefore, it follows that
$\reg(S/J_G) = k+3$.
\end{enumerate}
\end{remark}
We now study the regularity of graphs obtained by composing several
$F_m$'s with a pure fan graph using the operation $\circ$.
\begin{theorem}\label{4.6}
Let $n \geq 3$ and $H$ denote either $F_n$ or $F_k^W(K_n)$ with $W = W_1 \sqcup
\cdots \sqcup W_k$ and $|W_i| \geq 2$ for some $i$. Let $G= F_{m_1} \circ \cdots \circ F_{m_t} \circ (H,f)$ be a graph with $t\geq2$
and for each $i\in [t]$, $m_i \geq 3$. Let $V(F_{m_1} \circ \cdots
\circ F_{m_t}) \cap V(H)= \{v\}$ and $f$ be a pendant vertex in $N_H(v)$.
If $H = F_k^W(K_n)$, then assume that $v\in W_i$ and $|W_i| \geq 2$.
Then
$$\reg(S/J_G) = \reg(S/J_{F_{m_1 -1}})+ \reg(S/J_{F_{m_2 -2}})+\cdots + \reg(S/J_{F_{m_t -2}})+\reg(S/J_{H\setminus \{v,f\}}).$$
\end{theorem}
\begin{proof}
For each $i \in \{1,\ldots,t\}$ and $j \in \{1,2\}$,
let $f_{i,j}$ denote the two pendant vertices of $F_{m_i}$ and for each
$i \in\{1,\ldots,t-1\}$, let $V(F_{m_i}) \cap V(F_{m_{i+1}})=\{v_{i,i+1}\}$, i.e., $F_{m_i} \circ F_{m_{i+1}}$
is the graph obtained from $F_{m_i}$ and $F_{m_{i+1}}$ by removing the pendant vertices $f_{i,2},f_{i+1,1}$
and identifying the vertex $2m_i-1$ of $F_{m_i}$ with the vertex $2$ of $F_{m_{i+1}}$; the identified vertex is $v_{i,i+1}$.
Following Remark \ref{2.5}, set
$J_G = Q_1 \cap Q_2$, $ Q_1 = J_{G_v}$, $Q_2 =(x_v,y_v)+J_{G\setminus v} $
and $Q_1+Q_2 =(x_v,y_v)+ J_{G_v \setminus v}$.
We proceed by induction on $t\geq 2$. Let $t=2$. Let
$H=F_k^W(K_n)$ and $H'$ be the complete graph on $N_G[v]$. Without
loss of generality, assume that $|W_1| \geq 2$ and $v \in W_1$. Note that
$G_v = F_{m_1} \circ G'$, where $G'=F_k^{W'}(H')$ is the $k$-pure fan
graph of $H'$ on $W'=N_{F_{m_2} \setminus f_{2,2}}(v) \sqcup (W
\setminus W_1)$. Since $m_2 \geq 3$, it follows from Proposition \ref{4.4} that $\reg(S/Q_1) =\reg(S/J_{G_v})
=k+4$. Note that $G\setminus v = F_{m_1} \circ F_{m_2 -1} \sqcup
H\setminus \{v,f\}$. By Proposition \ref{4.3} and Remark \ref{4.2},
$$\reg(S/Q_2) =\reg(S/J_{G\setminus v}) =
\reg(S/J_{F_{m_1-1}})+\reg(S/J_{F_{m_2-2}})+ k+1.$$ Since $G_v
\setminus v$ is an induced subgraph of $G_v$, $\reg(S/J_{F_{m_1-1}})
= 3$ and $\reg(S/J_{F_{m_2-2}}) \geq 1$, it follows from the short
exact sequence (\ref{2.6}) and Lemma \ref{2.4} that
$$\reg(S/J_G)=\reg(S/J_{F_{m_1-1}})+\reg(S/J_{F_{m_2-2}})+ k+1.$$
Assume now that $H=F_n$. Without loss of generality, assume that $v =
2n-1$. Let $H''$ be the complete graph on $N_G[v]$
and let $G''=F_2^{W''}(H'')$ be the $2$-pure fan of
$H''$ on $W''=N_{F_{m_2} \setminus f_{2,2}}(v) \sqcup N_{F_{n} \setminus f}(v)$.
Then $G_v = F_{m_1} \circ G''$.
By Proposition \ref{4.4}, $\reg(S/Q_1) =\reg(S/J_{G_v}) =6$.
Since $G\setminus v = F_{m_1} \circ F_{m_2 -1} \sqcup F_{n-1}$, by
Proposition \ref{4.3} and Remark \ref{4.2},
$$\reg(S/Q_2) =\reg(S/J_{G\setminus v}) =
\reg(S/J_{F_{m_1-1}})+\reg(S/J_{F_{m_2-2}})+ \reg(S/J_{F_{n-1}}) \geq
7.$$
Note that $G_v\setminus v$ is an induced subgraph of $G_v$. Thus, by \cite[Corollary 2.2]{MM}, $\reg(S/(Q_1+Q_2)) = \reg(S/J_{G_v \setminus v}) \leq \reg(S/J_{G_v}) = 6$.
Hence, it follows from the short exact sequence (\ref{2.6}) and
Lemma \ref{2.4} that
$$\reg(S/J_G)=\reg(S/J_{F_{m_1-1}})+\reg(S/J_{F_{m_2-2}})+ \reg(S/J_{F_{n-1}}).$$
Now, assume that $t\geq3$ and that the result is true for smaller values of $t$.
Let $H=F_k^W(K_n)$ and $H_1$ be the complete graph on $N_G[v]$.
Note that $G_v = F_{m_1} \circ \cdots \circ F_{m_{t-1}} \circ G_1$,
where $G_1=F_k^{U}(H_1)$ is the $k$-pure fan of
$H_1$ on
$U=N_{F_{m_t} \setminus f_{t,2}}(v) \sqcup (W \setminus
W_1)$, $G\setminus v = F_{m_1} \circ \cdots \circ F_{m_{t-1}} \circ
F_{m_t-1} \sqcup H\setminus \{v,f\}$ and $G_v \setminus v = F_{m_1} \circ \cdots \circ F_{m_{t-1}}
\circ F_k^U(H_1\setminus v)$. Hence by induction on $t$,
\begin{eqnarray*}
\reg(S/J_{G_v}) & =& \reg(S/J_{F_{m_1 -1}})+ \reg(S/J_{F_{m_2
-2}})+\cdots + \reg(S/J_{F_{m_{t-1} -2}})+k+1; \\
\reg(S/J_{G\setminus v}) & =&
\reg(S/J_{F_{m_1-1}})+ \reg(S/J_{F_{m_2 -2}}) +\cdots +
\reg(S/J_{F_{m_{t-1}-2}})\\
& & +\reg(S/J_{F_{m_t-2}})+ k+1; \\
\reg(S/J_{G_v \setminus v}) & =& \reg(S/J_{F_{m_1 -1}})+ \reg(S/J_{F_{m_2 -2}})+\cdots + \reg(S/J_{F_{m_{t-1} -2}})+k+1.
\end{eqnarray*}
By (\ref{2.6}) and Lemma \ref{2.4}, we get
$$\reg(S/J_G)=\reg(S/J_{F_{m_1 -1}})+ \reg(S/J_{F_{m_2 -2}})+\cdots + \reg(S/J_{F_{m_{t} -2}})+k+1.$$
Now assume that $H=F_n$. Let $H_2$ be the complete graph on vertex set $N_G[v]$.
Note that $G_v = F_{m_1} \circ \cdots \circ F_{m_{t-1}} \circ G_2$,
where $G_2=F_2^{U'}(H_2)$ is the $2$-pure fan of
$H_2$ on $U'=N_{F_{m_t} \setminus f_{t,2}}(v) \sqcup N_{F_{n}
\setminus f}(v) $, $G\setminus v = F_{m_1} \circ \cdots \circ
F_{m_{t-1}} \circ F_{m_t -1} \sqcup F_{n-1}$ and $G_v \setminus v =
F_{m_1} \circ \cdots \circ F_{m_{t-1}} \circ F_2^{U'}(H_2\setminus
v)$. Hence by induction on $t$,
\begin{eqnarray*}
\reg(S/J_{G_v}) & = & \reg(S/J_{F_{m_1 -1}})+ \reg(S/J_{F_{m_2 -2}})+\cdots +
\reg(S/J_{F_{m_{t-1} -2}})+3;\\
\reg(S/J_{G\setminus v}) & = &
\reg(S/J_{F_{m_1-1}})+\reg(S/J_{F_{m_2-2}})+\cdots+ \reg(S/J_{F_{m_t
-2}}) + \reg(S/J_{F_{n-1}}); \\
\reg(S/J_{G_v \setminus v}) & = & \reg(S/J_{F_{m_1 -1}})+\reg(S/J_{F_{m_2 -2}})+\cdots + \reg(S/J_{F_{m_{t-1} -2}})+3.
\end{eqnarray*}
Using the short exact sequence (\ref{2.6}) and Lemma \ref{2.4}, we
conclude that
$$\reg(S/J_G)= \reg(S/J_{F_{m_1-1}})+\reg(S/J_{F_{m_2-2}})+\cdots+ \reg(S/J_{F_{m_t -2}}) + \reg(S/J_{F_{n-1}}).$$ Hence, the assertion follows.
\end{proof}
Now, we obtain a precise expression for the regularity of binomial edge ideals of Cohen-Macaulay bipartite graphs.
By \cite[Theorem 6.1]{dav}, if $G$ is a connected Cohen-Macaulay
bipartite graph, then there exists a positive integer $s$ such that
$G=G_1*\cdots*G_s$, where $G_i=F_{n_i}$ or $G_i=F_{m_{i,1}} \circ \cdots \circ F_{m_{i,t_i}}$,
for some $n_i\geq1$ and $m_{i,j}\geq 3$ for each $j=1,\ldots,t_i$. Let
$A=\{i\in[s]: G_i =F_{n_i} , n_i \geq2\}$, $B=\{i\in[s]: G_i =F_{n_i}
, n_i =1\}$ and
$C=\{i\in [s]: G_i=F_{m_{i,1}} \circ \cdots \circ F_{m_{i,t_i}},
t_i\geq 2\}$. For each $i\in C$, let $C_i=\{j \in \{2, \ldots, t_i-1\}
~ : ~m_{i,j} \geq 4 \}\sqcup \{1,t_i\}$ and
$C_i'=\{j \in \{2, \ldots, t_i-1\} ~ : ~
m_{i,j} = 3 \}$. Set $\alpha = |A| + \sum_{i \in C} |C_i|$ and $\beta
= |B| + \sum_{i \in C}|C_i'|$.
\begin{theorem}\label{cm-bipartite}
Let $G=G_1*\cdots*G_s$ be a connected Cohen-Macaulay bipartite graph.
Let $\alpha$ and $\beta$ be as defined above. Then
$\reg(S/J_G) =3\alpha+\beta$.
\end{theorem}
\begin{proof}
By \cite[Theorem 3.1]{JNR}, $$\reg(S/J_G) = \sum_{i=1}^s
\reg(S/J_{G_i}). $$ By Proposition \ref{4.1}, $\reg(S/J_{G_i}) = 3$ for $i
\in A$. If $i
\in B$, then $\reg(S/J_{G_i}) = 1$. If $i \in C$, then it follows from
Theorem \ref{4.6} that $\reg(S/J_{G_i}) = 3|C_i| + |C_i'|$. Hence the
assertion follows.
\end{proof}
We illustrate our result in the following example.
Let $G= F_3 \circ F_4 \circ F_3 \circ F_3 \circ F_3$ be the graph
shown in the figure below.
\begin{figure}[H]
\begin{tikzpicture}[scale=.6]
\draw (-4,0)-- (-4,2);
\draw (-4,0)-- (-3,2);
\draw (-4,0)-- (-2,2);
\draw (-3,2)-- (-3,0);
\draw (-3,0)-- (-2,2);
\draw (-2,2)-- (-1,0);
\draw (-2,2)-- (0,0);
\draw (-1,0)-- (-1,2);
\draw (0,0)-- (0,2);
\draw (-1,2)-- (0,0);
\draw (-1,2)-- (1,0);
\draw (0,2)-- (1,0);
\draw (-2,2)-- (1,0);
\draw (1,0)-- (2,2);
\draw (1,0)-- (3,2);
\draw (2,2)-- (2,0);
\draw (2,0)-- (3,2);
\draw (3,2)-- (4,0);
\draw (3,2)-- (5,0);
\draw (4,2)-- (4,0);
\draw (4,2)-- (5,0);
\draw (5,0)-- (6,2);
\draw (5,0)-- (7,2);
\draw (6,2)-- (6,0);
\draw (7,2)-- (7,0);
\draw (7,2)-- (6,0);
\begin{scriptsize}
\fill (-3,2) circle (1.5pt);
\fill (-1,2) circle (1.5pt);
\fill (-3,0) circle (1.5pt);
\fill (-1,0) circle (1.5pt);
\fill (-2,2) circle (1.5pt);
\fill (0,0) circle (1.5pt);
\fill (0,2) circle (1.5pt);
\fill (1,0) circle (1.5pt);
\fill (2,2) circle (1.5pt);
\fill (2,0) circle (1.5pt);
\fill (3,2) circle (1.5pt);
\fill (4,2) circle (1.5pt);
\fill (4,0) circle (1.5pt);
\fill (5,0) circle (1.5pt);
\fill (6,2) circle (1.5pt);
\fill (6,0) circle (1.5pt);
\fill (7,2) circle (1.5pt);
\fill (7,0) circle (1.5pt);
\fill (-4,2) circle (1.5pt);
\fill (-4,0) circle (1.5pt);
\end{scriptsize}
\end{tikzpicture}
\end{figure}
Note that $G$ is a Cohen-Macaulay bipartite graph. With respect to the
notation in Theorem \ref{cm-bipartite}, $A = \emptyset = B$ and $C =
\{1\}$. Also, we have $|C_1| = 3$ and $|C_1'| = 2$. Therefore,
by Theorem \ref{cm-bipartite},
$\reg(S/J_G) =11$.
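The bookkeeping of Theorem \ref{cm-bipartite} is easy to mechanize. The following Python sketch (the function name and the encoding of the decomposition $G_1 * \cdots * G_s$ are our own, not from the text) computes $3\alpha+\beta$ from the blocks:

```python
# Sketch of the count in the theorem above (our own encoding): each block
# G_i is either ("F", n_i) for G_i = F_{n_i}, or ("comp", [m_1, ..., m_t])
# for G_i = F_{m_1} o ... o F_{m_t} with t >= 2 and every m_j >= 3.
def reg_cm_bipartite(blocks):
    alpha = beta = 0
    for kind, data in blocks:
        if kind == "F":
            if data >= 2:       # i in A: F_{n_i} contributes 3
                alpha += 1
            else:               # i in B: F_1 = K_2 contributes 1
                beta += 1
        else:                   # i in C
            m, t = data, len(data)
            # C_i: inner indices j with m_j >= 4, plus the two end indices
            c_i = sum(1 for j in range(1, t - 1) if m[j] >= 4) + 2
            # C_i': inner indices j with m_j = 3
            c_i_prime = sum(1 for j in range(1, t - 1) if m[j] == 3)
            alpha += c_i
            beta += c_i_prime
    return 3 * alpha + beta

# The example G = F_3 o F_4 o F_3 o F_3 o F_3: |C_1| = 3, |C_1'| = 2.
print(reg_cm_bipartite([("comp", [3, 4, 3, 3, 3])]))  # 11
```

This reproduces $\reg(S/J_G)=3\cdot 3+2=11$ for the example above.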
\vskip 2mm
\noindent
\textbf{Acknowledgements:} The second author thanks the National Board
for Higher Mathematics, India for the financial support. We have extensively
used SAGE \cite{sage} and Macaulay 2 \cite{M2} for computational
purposes.
\bibliographystyle{plain}
\section{Introduction}
A (finite) \emph{generalized quadrangle} (GQ) is an incidence structure
$\S=(\P,\B,\I)$ in which $\mathcal{P}$ and $\mathcal{B}$ are disjoint non-empty sets of objects called
points and lines, and for which $\mathrel{\mathrm I} \subseteq (\mathcal{P} \times
\mathcal{B}) \cup (\mathcal{B} \times \mathcal{P})$ is a symmetric point-line incidence relation
satisfying the following axioms:
\begin{itemize}
\item[(i)] each point is incident with $1+t$ lines $(t \geqslant 1)$ and
two distinct points are incident with at most one line;
\item[(ii)] each line is incident with $1+s$ points $(s \geqslant 1)$ and
two distinct lines are incident with at most one point;
\item[(iii)] if $x$ is a point and $L$ is a line not incident with $x$,
then there is a unique pair $(y,M) \in \mathcal{P} \times \mathcal{B}$ for which $x \mathrel{\mathrm I} M \mathrel{\mathrm I}
y \mathrel{\mathrm I} L$.
\end{itemize}
The integers $s$ and $t$ are the parameters of the GQ and $\mathcal{S}$ is said to
have order $(s,t)$. If $s=t$, then $\mathcal{S}$ is said to have order $s$. If $\mathcal{S}$
has order $(s,t)$, then $|\mathcal{P}| = (s+1)(st+1)$ and $|\mathcal{B}| =
(t+1)(st+1)$ (see e.g. \cite{PayneThas84}).
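These two counts can be sanity-checked by double counting incident point-line pairs: each point lies on $1+t$ lines and each line carries $1+s$ points, so the two products must agree. A small Python sketch (our own, not from \cite{PayneThas84}):

```python
def gq_counts(s, t):
    """Point and line counts of a GQ of order (s, t)."""
    return (s + 1) * (s * t + 1), (t + 1) * (s * t + 1)

# Counting incident (point, line) pairs in both ways must give the same
# number: |P| * (1 + t) = |B| * (1 + s).
for s, t in [(2, 2), (2, 4), (3, 9), (5, 5), (4, 16)]:
    points, lines = gq_counts(s, t)
    assert points * (t + 1) == lines * (s + 1)
```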
An {\em ovoid} of a GQ $\mathcal{S}$ is a set $\mathcal{O}$ of points of $\mathcal{S}$ such that every line
is incident with exactly one point of the ovoid. An ovoid of a GQ of order
$(s,t)$ has necessarily size $1+st$. A {\em partial ovoid} of a
GQ is a set $\mathcal{K}$ of points such that every line contains {\em at most} one point
of $\mathcal{K}$. A partial ovoid $\mathcal{K}$ is called {\em maximal} if and only if $\mathcal{K} \cup
\{P\}$ is not a partial ovoid for any point $P \in \mathcal{P} \setminus \mathcal{K}$, in other words, if
$\mathcal{K}$ cannot be extended. It is clear that any partial ovoid of a GQ of order
$(s,t)$ contains $1+st-\rho$ points, $\rho \geq 0$, with $\rho = 0$ if and only
if $\mathcal{K}$ is an ovoid.
It is a natural question to study {\em extendability} of partial ovoids, i.e. can one always extend a partial ovoid of size $1+st-\epsilon$ (e.g. to an ovoid) if $\epsilon$ is not too big? The following theorem is a typical example.
\begin{theorem}[{\cite[2.7.1]{PayneThas84}}]\label{theo1}
Let $\S=(\P,\B,\I)$ be a GQ of order $(s,t)$. Any partial ovoid of size $st-\rho$, $0 \leq \rho < \frac{t}{s}$ is contained in a uniquely defined ovoid of $\mathcal{S}$.
\end{theorem}
Note that if no ovoids of a particular GQ exist, then Theorem~\ref{theo1} implies an upper bound on the size of partial ovoids.
The following theorem deals with the limit situation, and will be of use in Section~\ref{sec:geom}.
\begin{theorem}[{\cite[2.7.2]{PayneThas84}}]\label{theo2}
Let $\S=(\P,\B,\I)$ be a GQ of order $(s,t)$. Let $\mathcal{K}$ be a maximal partial ovoid of size
$st-t/s$ of $\mathcal{S}$. Let $\mathcal{B}'$ be the set of lines incident with no point
of $\mathcal{K}$, and let $\mathcal{P}'$ be the set of points on at least one line of $\mathcal{B}'$ and
let $\mathrel{\mathrm I}'$ be the restriction of $\mathrel{\mathrm I}$ to points of $\mathcal{P}'$ and lines of $\mathcal{B}'$. Then
$\mathcal{S}'=(\mathcal{P}',\mathcal{B}',\mathrel{\mathrm I}')$ is a subquadrangle of order $(s,\rho=t/s)$.
\end{theorem}
Consider the parabolic quadric $\mbox{\rm Q}(4,q)$ in the $4$-dimensional projective space $\mbox{\rm PG}(4,q)$.
This quadric is the set of points and lines that are totally isotropic with relation to a non-singular
quadratic form on $\mbox{\rm PG}(4,q)$, which is, up to coordinate transform, unique, and its points and
lines constitute an example of a generalized quadrangle of order $q$.
It is known, (see e.g. \cite{PayneThas84}) that this GQ has
ovoids. A particular example of an ovoid is any elliptic quadric $\mbox{\rm Q}^-(3,q)$
contained in it and obtained by a hyperplane section of $\mbox{\rm Q}(4,q)$.
When $q$ is prime, these are the only ovoids \cite{BallGovaertsStorme}; when $q$
is a prime power, other examples are known, see e.g. \cite{DeBeuleKleinMetsch11}
for a list of references. The classification of ovoids of $\mbox{\rm Q}(4,q)$, for $q$ prime, is essentially due to
the computation of intersection numbers (modulo $p$) of a hypothetical ovoid with elliptic quadrics, and the use of this information in a combinatorial argument.
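The sizes involved here are consistent: an ovoid of the order-$q$ quadrangle $\mbox{\rm Q}(4,q)$ has $1+st = 1+q^2$ points, which matches the point count of an elliptic quadric $\mbox{\rm Q}^-(3,q)$ in $\mbox{\rm PG}(3,q)$. A quick check (our own sketch):

```python
def ovoid_size(s, t):
    # an ovoid of a GQ of order (s, t) has 1 + s*t points
    return 1 + s * t

def elliptic_quadric_points(q):
    # Q^-(3, q) has q^2 + 1 points in PG(3, q)
    return q**2 + 1

# For the parabolic quadric Q(4, q), which has order (q, q), the hyperplane
# sections Q^-(3, q) have exactly the right size to be ovoids.
for q in (3, 5, 7, 9, 11):
    assert ovoid_size(q, q) == elliptic_quadric_points(q)
```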
Applying Theorem~\ref{theo1} to the GQ $\mbox{\rm Q}(4,q)$ implies that a partial ovoid
of size $q^2$ cannot be maximal. It is shown in \cite{DeBeuleGacs} that maximal partial
ovoids of $\mbox{\rm Q}(4,q)$, $q=p^h$, $p$ an odd prime, $h > 1$, do not exist. The natural question arises
whether maximal partial ovoids exist when $h=1$. Curiously, this is the case when $p \in \{3,5,7,11\}$, but
no examples are known for $q > 11$, \cite{Penttila}. In this paper we give a slightly alternative
proof of the non-existence result for $h > 1$. Further, we compute the intersection numbers
of a hypothetical maximal partial ovoid of size $q^2-1$ with elliptic quadrics embedded in $\mbox{\rm Q}(4,q)
$, for $q$ an odd prime. This yields structural information on the existing examples, and it is our
hope that this information could contribute to finally proving their uniqueness and non-existence for
$p > 11$.
\section{Non-existence for $q > p$}
We follow almost the same approach as in \cite{DeBeuleGacs}. Therefore we need to introduce an
alternative representation of the GQ $\mbox{\rm Q}(4,q)$.
An {\em oval} of $\mbox{\rm PG}(2,q)$ is a set of $q+1$ points $\mathcal{C}$, such that no three points
of $\mathcal{C}$ are collinear. When $q$ is odd, it is known that all ovals of
$\mbox{\rm PG}(2,q)$
are conics. When $q$ is even, several other examples and infinite families are
known, see e.g. \cite{Cheroweb}. The GQ $T_2(\mathcal{C})$ is defined as follows. Let
$\mathcal{C}$ be an oval of $\mbox{\rm PG}(2,q)$, embed $\mbox{\rm PG}(2,q)$ as a plane in
$\mbox{\rm PG}(3,q)$ and denote this plane by $\pi_{\infty}$. Points are defined as
follows:
\begin{itemize}
\item[(i)] the points of $\mbox{\rm PG}(3,q) \setminus \mbox{\rm PG}(2,q)$;
\item[(ii)] the planes $\pi$ of $\mbox{\rm PG}(3,q)$ for which $|\pi \cap \mathcal{C}| =
1$;
\item[(iii)] one new symbol $(\infty)$.
\end{itemize}
Lines are defined as follows:
\begin{itemize}
\item[(a)] the lines of $\mbox{\rm PG}(3,q)$ which are not contained in $\mbox{\rm PG}(2,q)$
and meet $\mathcal{C}$ (necessarily in a unique point);
\item[(b)] the points of $\mathcal{C}$.
\end{itemize}
Incidence between points of type (i) and (ii) and lines of type (a) and
(b) is the inherited incidence of $\mbox{\rm PG}(3,q)$. In addition, the point
$(\infty)$ is incident with no line of type (a) and with all lines of type
(b). It is straightforward to show that this incidence structure is a
GQ of order $q$. The following theorem (see e.g. \cite{PayneThas84}) allows us to
use this representation.
\begin{theorem}\label{theo3}
The GQs $T_2(\mathcal{C})$ and $\mbox{\rm Q}(4,q)$ are isomorphic if and only if $\mathcal{C}$ is a conic of
the plane $\mbox{\rm PG}(2,q)$.
\end{theorem}
From now on we suppose that $\mathcal{C}$ is a conic. Let $\mathcal{K}$ be a maximal partial ovoid of size
$k$ of $T_2(\mathcal{C})$. Since $\mbox{\rm Q}(4,q)\cong T_2(\mathcal{C})$ has a collineation group acting transitively on the
points (see e.g. \cite{HirschfeldThas91}), we can suppose that $(\infty) \in \mathcal{K}$. This
implies that $\mathcal{K}$ contains no points of type (ii). It is clear that no two points of type (i)
of $\mathcal{K}$ determine a line meeting $\pi_{\infty}$ in a point of $\mathcal{C}$. Hence the existence of
$\mathcal{K}$ implies the existence of a set $U$ of $k-1$ points of type (i) such that no two
points determine a line meeting $\pi_{\infty}$ in $\mathcal{C}$. It is easy to see that the
converse is also true: from a set $U$ of $k-1$ points in $\mbox{\rm PG}(3,q) \setminus \pi_{\infty}$ with the
property that all lines joining at least two points of $U$ are disjoint from $\mathcal{C}$, we can find a
partial ovoid $\mathcal{K}$ of $T_2(\mathcal{C})$ of size $k$ by adding $(\infty)$ to $U$.
The maximality of $\mathcal{K}$ is equivalent to the maximality of $U$.
Hence the existence of a maximal partial ovoid of size $q^2-1$ of $\mbox{\rm Q}(4,q)$ is equivalent to
the existence of a set $U$ of $q^2-2$ affine points, not determining the points of a conic
at infinity. In \cite{DeBeuleGacs}, it is shown that such a set $U$ can always be extended
when $q > p$. In fact, only the assumption that at least $p+2$ points are not determined
is used. In this paper we will assume that the points of a conic are not determined and that $U$ is not extendable,
compute the range of a certain polynomial, and find a contradiction when $q > p$.
In the third section, we will describe the use of this particular polynomial to compute the
intersection numbers modulo $p$ of the point set $U$ with planes of $\mbox{\rm AG}(3,q)$. This will yield intersection
numbers modulo $p$ of the maximal partial ovoid of size $q^2-1$ with elliptic
quadrics embedded in $\mbox{\rm Q}(4,q)$.
From now on, let $\mathcal{K}$ denote a partial ovoid of $\mbox{\rm Q}(4,q)$, $q=p^h$, $p$ an odd prime and $h \geq 1$.
Let $U$ denote the point set of $\mbox{\rm PG}(3,q) \setminus \pi_{\infty}$ corresponding to the partial ovoid $\mathcal{K}$ as described above.
\begin{lemma}\label{le:pigeon}
If a plane $\pi \neq \pi_{\infty}$ of $\mbox{\rm PG}(3,q)$ meets $\mathcal{C}$ in at least one point, then $|\pi \cap U | \leq q$.
\end{lemma}
\begin{proof}
Let $P \in \mathcal{C} \cap \pi$. If $|\pi \cap U | > q$, then at least one of the $q$ lines of
$\pi$ through $P$ must contain two points of $U$. This contradicts the fact
that $U$ does not determine any point of $\mathcal{C}$. Hence, $|\pi \cap U| \leq q$.
\end{proof}
We choose $\pi_{\infty}$ to be the plane with equation $X_3 = 0$. Then any line $l$ of $\pi_{\infty}$
is determined by the equation $yX_0 + zX_1 + wX_2 = 0$, $(y,z,w) \in \mathbb{F}_q^3 \setminus \{(0,0,0)\}$.
We denote such a line as $l(y,z,w)$. Any plane $\pi \neq \pi_{\infty}$ through $l(y,z,w)$ is determined
by the equation $yX_0 + zX_1 + wX_2 +xX_3= 0$. We denote such a plane as $\pi(x,y,z,w)$.
Write the point set $U$ as $U=\{ (a_i,b_i,c_i,1):i=1,\dots ,q^2-2 \}\subset \mbox{\rm PG}(3,q) \setminus \pi_{\infty}$. We define the
R\'edei polynomial associated to the point set $U$ as follows:
\[
R(X,Y,Z,W)=\prod _{i=1}^{q^2-2}(X+a_iY+b_iZ+c_iW)=
X^{q^2-2}+\sum _{i=1}^{q^2-2}\sigma _i(Y,Z,W)X^{q^2-2-i}\,.
\]
Here $\sigma _i(Y,Z,W)$ is the $i$-th elementary symmetric polynomial of the
multi-set $\{ a_iY+b_iZ+c_iW:i\}$ and is either zero or has degree $i$.
\begin{lemma}\label{le:div}
Suppose that the line $l(y,z,w)$ meets $\mathcal{C}$ in at least one point. Then $R(X,y,z,w) \mid (X^q-X)^q$.
\end{lemma}
\begin{proof}
If $x \in \mathbb{F}_q$ is a root of $R(X,y,z,w) = 0$, then its multiplicity equals $|\pi(x,y,z,w)
\cap U| \leq q$ (the latter by Lemma~\ref{le:pigeon}). Since $|U| = q^2-2$, each of the $q$
planes $\pi(x,y,z,w)$, $x \in \mathbb{F}_q$, contains points of $U$, hence $R(X,y,z,w)=0$ has each
element $x \in \mathbb{F}_q$ as a root, with multiplicity at most $q$, and the lemma follows.
\end{proof}
Since we suppose that $q$ is odd and $|U|=q^2-2$, after the affine translation
\[ a_i \mapsto a_i - \frac{\sum a_i}{q^2-2}, \qquad
b_i \mapsto b_i - \frac{\sum b_i}{q^2-2}, \qquad
c_i \mapsto c_i - \frac{\sum c_i}{q^2-2}, \]
not affecting the (non-)determined points at infinity (note that $q^2-2 \equiv -2 \not\equiv 0 \pmod{p}$ since $p$ is odd, so the division by $q^2-2$ makes sense in $\mathbb{F}_q$), we may assume that $\sum a_i = \sum b_i = \sum c_i = 0$, which is equivalent to $\sigma_1(Y,Z,W) \equiv 0$.
\begin{lemma}\label{le:sigma2}
If a line $l(y,z,w)$ has at least one common point with $\mathcal{C}$, then
\begin{equation}\label{eq:sigma2}
R(X,y,z,w)(X^2-\sigma _2(y,z,w))=(X^q-X)^q.
\end{equation}
\end{lemma}
\begin{proof}
From Lemma~\ref{le:div} we know that
\[
R(X,y,z,w)(X-S)(X-S')=(X^q-X)^q,
\]
where $S$ and $S'$ are not necessarily different and depend on $y,z,w$.
Considering the first three terms on both sides and taking into account that
$\sigma_1(Y,Z,W) \equiv 0$, we have $(X-S)(X-S')=X^2-\sigma _2(y,z,w)$.
\end{proof}
\begin{lemma}\label{le:sigmas}
Suppose that the line $l(y,z,w)$ meets $\mathcal{C}$ in at least one point. Then
\[\sigma_{2l+1}(y,z,w) = 0,\, 0 \leq l \leq \frac{q^2-3}{2},\]
\[\sigma_{2l}(y,z,w) = \sigma_2^l(y,z,w),\, 0 \leq l \leq \frac{q^2-q-2}{2},\]
\[\sigma_{q^2-q+2k}(y,z,w) =
\sigma_2^{\frac{q^2-q+2k}{2}}(y,z,w)-\sigma_2^k(y,z,w),\, 0 \leq k \leq
\frac{q-3}{2}. \]
\end{lemma}
\begin{proof}
Expanding both sides of Equation~\eqref{eq:sigma2} and comparing
coefficients, using $\sigma_1(Y,Z,W) \equiv 0$, proves the lemma.
\end{proof}
\begin{corollary}\label{cor:sigmas}
\[\sigma_{2l+1}(Y,Z,W) \equiv 0,\, 0 \leq l \leq \frac{q-1}{2},\]
\[\sigma_{2l}(Y,Z,W) \equiv \sigma_2^l(Y,Z,W), \,0 \leq l \leq \frac{q-1}{2}.\]
\end{corollary}
\begin{proof}
Consider any line $l(y,z,w)$ meeting $\mathcal{C}$ in at least one point. By Lemma~\ref{le:sigmas},
the equations of the corollary are true after substituting $Y=y$, $Z=z$, $W=w$.
But for each point $P \in \mathcal{C}$, each line $l(y,z,w)$ on $P$ gives a substitution for
which the equations are true. Dually, this means that the points of at least $q+1$
different lines are a solution of the equations of the corollary. Since the degree of each equation
is at most $q$, by the theorem of B\'ezout, each curve represented by an equation must
contain $q+1$ lines as a component. But then its degree must be at least $q+1$.
Hence, the polynomials are identically zero.
\end{proof}
We define now the polynomials $S_j$, $j=0,\ldots, q-1$ as follows.
\[
S_j(Y,Z,W) := \sum_{i=1}^{q^2-2} (a_iY+b_iZ+c_iW)^j\,.
\]
The Newton identities describe a relation between the polynomials $S_j(Y,Z,W)$ and $\sigma_i(Y,Z,W)$ as follows:
\[
k\sigma_k(Y,Z,W) \equiv \sum_{j=1}^k(-1)^{j-1}S_j(Y,Z,W) \sigma_{k-j}(Y,Z,W) \,.
\]
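This is the classical Newton recursion between power sums and elementary symmetric polynomials. A small numerical check (our own sketch, over the integers rather than over $\mathbb{F}_q$, with our own helper names):

```python
from math import prod
from itertools import combinations

def elem_sym(vals, j):
    """j-th elementary symmetric polynomial of vals (e_0 = 1)."""
    return sum(prod(c) for c in combinations(vals, j)) if j > 0 else 1

def power_sum(vals, j):
    """j-th power sum p_j = sum of v**j over vals."""
    return sum(v ** j for v in vals)

# Newton's identity: k * e_k = sum_{j=1}^{k} (-1)^(j-1) * p_j * e_{k-j}.
vals = [3, -1, 4, 1, -5, 9]
for k in range(1, len(vals) + 1):
    rhs = sum((-1) ** (j - 1) * power_sum(vals, j) * elem_sym(vals, k - j)
              for j in range(1, k + 1))
    assert k * elem_sym(vals, k) == rhs
```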
\begin{lemma}\label{le:sj}
\[S_{2l+1}(Y,Z,W) \equiv 0,\, 0 \leq l \leq \frac{q-1}{2},\]
\[S_{2l}(Y,Z,W) \equiv -2 \sigma_2^l(Y,Z,W), \, 0 \leq l \leq \frac{q-1}{2}.\]
\end{lemma}
\begin{proof}
Using Corollary~\ref{cor:sigmas}, the Newton identities, the fact that $S_1(Y,Z,W) \equiv \sigma_1(Y,Z,W)$, $\sigma_0 = 1$, and induction, the lemma follows.
\end{proof}
\begin{lemma}
If $\sigma_2(Y,Z,W)$ is reducible then the set $U$ is extendable.
\end{lemma}
\begin{proof}
Suppose that $\sigma_2(Y,Z,W)$ is reducible. By equation~\eqref{eq:sigma2}, $\sigma_2(y,z,w)$
must be a square for any $(y,z,w)$ such that $l(y,z,w)$ meets $\mathcal{C}$ in at least one point. So there
are triples $(y,z,w)$, contained in a line (the dual of the pencil of lines through a point $P \in \mathcal{C}$) for
which $\sigma_2(y,z,w)$ is a square. It follows that $\sigma_2(Y,Z,W) = (AY+BZ+CW)^2$.
Now define $U^* := U \cup \{(A,B,C,1),(-A,-B,-C,1)\}$. Consider any point $P \in \mathcal{C}$ and any line $l
(y,z,w)$ on $P$. From Equation~\eqref{eq:sigma2} it follows that each plane on $l$ now contains
exactly $q$ points of $U^*$. But if $P$ is a point determined by $U^*$, then there exists a line $m$
on $P$ containing $r \geq 2$ points of $U^*$. But all $q+1$ planes on $m$ contain exactly $q$
points of $U^*$, so $q^2 = |U^*|=r+(q+1)(q-r)$, a contradiction. Hence, $U^*$ does not determine
the points of $\mathcal{C}$.
\end{proof}
\begin{theorem}
If $U$ is not extendable, then $q=p$.
\end{theorem}
\begin{proof}
We define
\begin{eqnarray*}
\chi(X,Y,Z,W) & := & \sum_{i=1}^{q^2-2} (X+a_iY+b_iZ+c_iW)^{q-1} \nonumber\\
& = & \sum_{i=1}^{q^2-2}\sum_{j=0}^{q-1}{q-1 \choose j}X^{q-1-j} (a_iY+b_iZ+c_iW)^j \nonumber \\
& = & \sum_{j=0}^{q-1}(-1)^jX^{q-1-j} S_j(Y,Z,W) \nonumber\\
& = & -2 \sum_{k=0}^{\frac{q-1}{2}} X^{q-1-2k} \sigma_2^k(Y,Z,W) = -2 \frac{X^{q+1} - \sigma_2^{\frac{q+1}{2}}(Y,Z,W)}{X^2-\sigma_2(Y,Z,W)}\,,\label{eq:chi}
\end{eqnarray*}
where we used Lemma~\ref{le:sj} to obtain the second-to-last equality. If $U$ is not extendable, then $\sigma_2(Y,Z,W)$ is not reducible, so the range of $\sigma_2(Y,Z,W)$ is the complete field $\mathbb{F}_q$. Hence, for each non-square $\nu \in \mathbb{F}_q$, we can find a triple $(y,z,w)$ such that $\sigma_2(y,z,w)=\nu$. Then $\sigma_{2}^{\frac{q+1}{2}}(y,z,w) = -\sigma_2(y,z,w)$ and
\begin{equation}\label{eq:chix}
\chi(X,y,z,w) = -2 \frac{X^{q+1}+\sigma_2(y,z,w)}{X^2-\sigma_2(y,z,w)}
\end{equation}
It is now easy to see that the range of $\chi(X,Y,Z,W)$ will contain at least $\frac{q+1}{2}$ different elements of $\mathbb{F}_q$. On the other hand,
\[ \chi(x,y,z,w) = q^2-2 - |U \cap \pi(x,y,z,w)| \mbox{\rm{ mod }} p\,,\] for any 4-tuple $(x,y,z,w) \not \in \{(1,0,0,0),(0,0,0,0)\}$. So the right hand side is necessarily an element of $\mathbb{F}_p$, a contradiction with the range of $\chi(X,Y,Z,W)$ if $q > p$.
\end{proof}
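The geometric-series identity used in the computation of $\chi$, namely $-2\sum_{k=0}^{(q-1)/2} X^{q-1-2k}\sigma_2^k = -2\,(X^{q+1}-\sigma_2^{(q+1)/2})/(X^2-\sigma_2)$, can be spot-checked exhaustively for small primes; the helper below is our own illustration, not part of the paper.

```python
def chi_closed_form_matches(q):
    """Exhaustively check, in F_q (q an odd prime), that
    -2 * sum_{k=0}^{(q-1)/2} x^(q-1-2k) * s^k
    equals -2 * (x^(q+1) - s^((q+1)/2)) / (x^2 - s) whenever x^2 != s."""
    m = (q - 1) // 2
    for s in range(q):
        for x in range(q):
            den = (x * x - s) % q
            if den == 0:
                continue  # the closed form is only claimed for x^2 != s
            lhs = (-2) * sum(pow(x, q - 1 - 2 * k, q) * pow(s, k, q)
                             for k in range(m + 1)) % q
            rhs = (-2) * (pow(x, q + 1, q) - pow(s, m + 1, q)) * pow(den, -1, q) % q
            if lhs != rhs:
                return False
    return True
```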
\section{The intersection numbers for $q$ a prime}\label{sec:geom}
Suppose now that $q=p$, $p$ an odd prime. We consider the possible values of $\chi(x,y,z,w)$. Consider a plane $\pi(x,y,z,w)$.
\begin{itemize}
\item[(a)] Suppose that $\sigma_2(y,z,w) = 0$. Then $\chi(X,y,z,w) = -2X^{q-1}$, hence $\chi(x,y,z,w)=0$ if $x=0$ and $\chi(x,y,z,w)=-2$ if $x \neq 0$.
\item[(b)] Suppose that $\sigma_2(y,z,w)$ is a square different from $0$. If $x^2 \neq \sigma_2(y,z,w)$ then $\chi(x,y,z,w) = -2$. If $x^2=\sigma_2(y,z,w)$ then $\chi(x,y,z,w) = -1$.
\item[(c)] Suppose that $\sigma_2(y,z,w)$ is a non-square. Then
\[ \chi(x,y,z,w) = -2 \frac{x^2+\sigma_2(y,z,w)}{x^2-\sigma_2(y,z,w)} \neq 0 \]
\end{itemize}
\begin{lemma}\label{le:dual}
The curve $\sigma_2(Y,Z,W) = 0$ is the dual of $\mathcal{C}$.
\end{lemma}
\begin{proof}
Theorem~\ref{theo2} ensures that the set of lines of $\mbox{\rm Q}(4,q)$, not meeting $\mathcal{K}$, is the set of lines
of a hyperbolic quadric embedded as a hyperplane section in $\mbox{\rm Q}(4,q)$. We denote this hyperbolic
quadric as $\mbox{\rm Q}^+$. Since $\mathcal{K} = \{(\infty)\} \cup U$, clearly $(\infty) \not \in \mbox{\rm Q}^+$, and from the proof
of Theorem~\ref{theo3} in \cite{PayneThas84}, it follows that $\mbox{\rm Q}^+$ is represented in $T_2(\mathcal{C})$ as
a hyperbolic quadric meeting $\pi_{\infty}$ in $\mathcal{C}$. We denote this quadric as $\mbox{\rm Q}_T^+$. The
hyperbolic
quadric $\mbox{\rm Q}_T^+$ contains exactly $q+1$ points of type (ii). Consider such a point, represented by
the plane $\pi$.
The two lines of type (a) of $\mbox{\rm Q}_T^+$ incident with $\pi$, are contained in $\pi$, and do
not meet $U$. But the other $q-1$ lines of $T_2(\mathcal{C})$, incident with $\pi$, do meet $U$ in exactly
one point. Hence the plane $\pi$ must contain exactly $q-2$ points of $U$. If $\pi$ is
represented by the 4-tuple $(x,y,z,w)$, then $\chi(x,y,z,w) = q^2-2-|\pi \cap U| \mbox{\rm{ mod }} q$. So if $|\pi
\cap U| = q-2$, then $\chi(x,y,z,w)=0$ and by the above overview of the range of $\chi$,
the planes $\pi(x,y,z,w)$ that represent a point of type (ii) of $\mbox{\rm Q}_T^+$, are exactly those
for which $\sigma_2(y,z,w) = 0 = x$. But the planes that represent points of type (ii) of
$\mbox{\rm Q}_T^+$ are planes that meet $\mathcal{C}$ in a tangent line. Hence,
$\sigma_2(y,z,w)=0$ if and only if $l(y,z,w)$ is a tangent line to $\mathcal{C}$.
\end{proof}
\begin{corollary}
A plane $\pi(x,y,z,w)$ represents an elliptic quadric containing $(\infty)$ if and only if $\sigma_2
(y,z,w)$ is a non-square.
\end{corollary}
\begin{proof}
From the proof of Theorem~\ref{theo3}, it follows that an elliptic quadric containing $(\infty)$ is
represented in $T_2(\mathcal{C})$ by a plane meeting $\pi_{\infty}$ in a line external to $\mathcal{C}$. The Corollary
now follows from Lemma~\ref{le:dual}.
\end{proof}
\begin{corollary}\label{cor:el}
If an elliptic quadric $\mbox{\rm Q}^- \subset \mbox{\rm Q}(4,q)$ contains a point of $\mathcal{K}$, then
\[
|\mbox{\rm Q}^- \cap \mathcal{K}| \mbox{\rm{ mod }} p \in \{-1+2\frac{x^2+\nu}{x^2-\nu}\,\mid\, \mbox{\rm $\nu$ running over the
non-squares, $x \in \mathbb{F}_q$}\}
\]
\end{corollary}
\begin{proof}
If an elliptic quadric contains a point of $\mathcal{K}$, we can choose it to be the point $(\infty)$. Then
\begin{equation}\label{eq:gen}
|\pi(x,y,z,w) \cap U| \mbox{\rm{ mod }} q = -2 - \chi(x,y,z,w) = -2 + 2\frac{x^2+\nu}{x^2-\nu}\,,
\end{equation}
$\nu = \sigma_2(y,z,w)$, which is non-square.
\end{proof}
Consider now any point $P \in \mbox{\rm Q}(4,q) \setminus \mbox{\rm Q}^+$. Then $P^\perp \cap \mbox{\rm Q}^+$ is a conic $C_P$,
and $C_P^\perp = \{P,P'\}$, $P \neq P' \in \mbox{\rm Q}(4,q) \setminus \mbox{\rm Q}^+$. We call $P'$ the antipode of $P$.
Consider now the point $(\infty)$, this is collinear with the points of type (ii) of $\mbox{\rm Q}_T^+$. But for
each point of type (ii) of $\mbox{\rm Q}_T^+$, represented by a plane $\pi(x,y,z,w)$, we have seen that $x=0$.
Hence the point $(0,0,0,1)$ is contained in the planes representing the points of type (ii) of $\mbox{\rm Q}_T^
+$, so, the points of type (ii) of $\mbox{\rm Q}_T^+$ are collinear with $(0,0,0,1)$. Hence, the point
$(0,0,0,1)$
is the antipode of the point $(\infty)$.
\begin{lemma}
If an elliptic quadric $\mbox{\rm Q}^- \subset \mbox{\rm Q}(4,q)$ contains a point of $\mathcal{K}$ and its antipode, then
$|\mbox{\rm Q}^- \cap \mathcal{K}| \equiv -3 \mbox{\rm{ mod }} q$.
\end{lemma}
\begin{proof}
A point and its antipode are non-collinear, and the collineation group of $\mbox{\rm Q}(4,q)$ acts transitively
on the pairs of non-collinear points. So in the $T_2(\mathcal{C})$ representation, if an elliptic quadric
contains a point of $\mathcal{K}$, this can be chosen $(\infty)$ while its antipode can be chosen to be the
point $(0,0,0,1)$. For a plane $\pi(x,y,z,w)$ containing $(0,0,0,1)$, we have $x=0$. The lemma
now follows from Corollary~\ref{cor:el}.
\end{proof}
We remark that the computed intersection numbers (modulo $q$) do not exclude elliptic quadrics
that contain no point of $\mathcal{K}$. We list the range for intersection numbers modulo $q$ found in
Corollary~\ref{cor:el} for $q \in \{5,7,11\}$. Recall that these numbers are valid for elliptic quadrics
containing at least one point of $\mathcal{K}$. Hence $0$ means a positive multiple of $q$ in reality.
\begin{itemize}
\item $q=5$: $\{0,2,3\}$
\item $q=7$: $\{2,3,4,6\}$
\item $q=11$: $\{0,4,5,8,9,10\}$.
\end{itemize}
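The ranges listed above follow directly from Corollary~\ref{cor:el} and can be reproduced with a few lines of code (our own illustration): enumerate $-1+2(x^2+\nu)/(x^2-\nu)$ over all non-squares $\nu$ and all $x \in \mathbb{F}_q$.

```python
def intersection_numbers_mod_q(q):
    """Range of -1 + 2*(x^2 + nu)/(x^2 - nu) in F_q, with nu a non-square."""
    squares = {x * x % q for x in range(q)}
    out = set()
    for nu in range(1, q):
        if nu in squares:
            continue
        for x in range(q):
            den = (x * x - nu) % q  # never 0: x^2 is a square, nu is not
            out.add((-1 + 2 * (x * x + nu) * pow(den, -1, q)) % q)
    return out
```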
We used an explicit description of the known examples (\cite{kc}) to compute the intersection
numbers with all elliptic quadrics. We list the results. In this list, for $q=5$ and $q=11$, we see that
there are elliptic quadrics containing no point of $\mathcal{K}$. However, this $0$ is {\bf not} related to a $0$
in the above list.
\begin{itemize}
\item $q=5$: $\{0,2,3,5,8,12\}$
\item $q=7$: $\{2,3,4,6,9,10,18\}$
\item $q=11$: $\{0,4,5,8,9,10,11,15,16,20,30\}$.
\end{itemize}
As a final remark, we notice that the number of different intersection numbers is relatively large
compared with $q$. On the other hand, an elliptic quadric containing a point of $\mathcal{K}$ and its
antipode always meets $\mathcal{K}$ in $-3 \mbox{\rm{ mod }} q$ points. In the above list, we notice for each $q$ only
two different intersection numbers congruent to $-3 \mbox{\rm{ mod }} q$. This might suggest that point--antipode
pairs play a special role, and indeed, for the known examples, it is true that when a point
belongs to $\mathcal{K}$, then its antipode also belongs to $\mathcal{K}$ \cite[Theorem 12]{kc}. Unfortunately, the above combinatorial information seems too weak to prove such a characterisation. It is our feeling that such a characterisation could be helpful in proving non-existence for larger $q$. We note
that in \cite{kc}, where a completely different approach is used, a comparable conclusion
on the pairs point-antipode is made. Finally, we also mention the work in \cite{DWT}, where
the non-existence for larger $q$ is shown under the extra assumption that $(q^2-1)^2$
divides the order of the automorphism group of the maximal partial ovoid.
\section*{Acknowledgement}
The author thanks the department of Computer Science at E\"otv\"os Lor\'and University in
Budapest, and especially P\'eter Sziklai, Tam\'as Sz\H{o}nyi and Zsuzsa Weiner for their hospitality.
\section{Introduction}
The Internet of Things (IoT) is a networking paradigm that connects a wide range of devices to the Internet, from wearable gadgets that capture data from the body \cite{amiri-azimi-acmhealth20} to sensors and devices that interact with the environment in smart homes, as well as surveillance systems in smart cities \cite{7976279}. Optimal energy consumption at battery-powered nodes has always been a challenging topic for researchers in this area. The rapid growth of hardware technologies has made it possible for devices to become smaller, which poses new energy challenges that need to be addressed \cite{9246553}. This single constraint imposes several others with regard to the choice of routing protocol, network coverage, and longevity \cite{s17071574}. A well-known network topology management method used to tackle this challenge is clustering \cite{s21030873}, where the nodes are grouped into several clusters and one or more cluster heads (CHs) are elected in the network \cite{7976279}. Many clustering algorithms have been proposed in the context of homogeneous IoT and Wireless Sensor Networks (WSNs) \cite{5679898, 7976279}. However, most of these algorithms neglect the diversity of energy profiles and assume that all nodes are supplied with the same energy level, which causes fast energy depletion in nodes with weak power sources. To be usable in the new paradigm of IoT networks, the available clustering methods need to be modified to account for the heterogeneity of nodes in IoT environments \cite{s17071574}.
In this paper, we propose a distributed energy-efficient clustering method named HetEng that detects nodes with high remaining power and continually distributes energy consumption by rotating the CH role using a statistical approach. We evaluated our approach on the MATLAB platform, and our method shows a significant improvement in the number of alive nodes and in residual energy.
The remainder of this paper is organized as follows: related work is presented in Section 2, and the network model used to formulate the problem in Section 3. Our detailed modifications and improvements, built on Smart-BEEM, are introduced in Section 4. Performance evaluation and numerical results are presented in Section 5. Finally, Section 6 discusses the conclusion and future work.
\section{Related Works}
Various energy-efficient approaches have been proposed in the clustering context, each using specific methods for the selection of the CH and for routing between the CH and the nodes. The reviewed works are categorized in Table~\ref{tab:methods}.
\begin{table}[htbp]
\centering
\caption {Energy-efficient clustering methods}
\resizebox{.9999\columnwidth}{!}{%
\begin{tabular}{p{3.1cm}||p{5cm}}
\hline
\rowcolor{gray!20}
\textbf{Clustering Approach} & \textbf{Notable Methods} \\
\hline
Duty Cycle Ratio & EnergIoT \cite{LI2017124}\\
\hline
Data compression \& fusion & LIDAR \cite{MALAMBO20191} \\
\hline
Meta-Heuristic & SCE\_PSO \cite{10.1007/s11276-018-1679-2}, \cite{https://doi.org/10.1002/spe.2797}, FAMACROW \cite{GAJJAR2016235}\\
\hline
Mobility&\!\!\cite{7502988}\\
\hline
Routing \& Hierarchical & LEACH\cite{926982}, Modified-LEACH\cite{https://doi.org/10.1049/iet-wss.2017.0099}\\
\hline
Statistical/Mathematical & HEED \cite{1347100}, BEE(M) \cite{6878886}, Smart-BEEM \cite{s17071574}\\
\hline
\end{tabular}
}
\label{tab:methods}%
\end{table}
According to Table~\ref{tab:methods}, duty-cycle methods have been proposed to reduce energy consumption in the Internet of Things, where nodes switch between active and sleep modes; many heuristic algorithms exist for this purpose \cite{LI2017124}. Compression and data-fusion methods, which are mainly heuristic in nature, are also used to reduce energy consumption \cite{8645769}. Clustering methods shorten transmission distances: data are transferred to a nearby cluster head rather than directly to the distant sink. Selecting the optimal cluster head using heuristic and meta-heuristic algorithms is effective in reducing energy consumption; the selection can be based on parameters such as temperature, residual energy, load, and the number of remaining nodes. Another challenge that can be added to clustering algorithms is mobility, which is evaluated using further parameters such as maximum lane speed and traffic flow rate. In \cite{https://doi.org/10.1002/spe.2797}, a heuristic similar to existing mathematical methods is presented for the optimal selection of cluster heads, thereby reducing energy consumption in IoT.
In \cite{khamforoosh2011clustered}, the authors divide the nodes into a number of clusters according to the LEACH algorithm. The cluster heads then generate a minimum spanning tree according to Prim's algorithm, and the tree is continually rebalanced using the AVL algorithm.
In \cite{s17071574}, the authors propose a smart clustering algorithm (Smart-BEEM), based on BEE(M) from their earlier work, to achieve energy efficiency and support the Quality of user Experience (QoE) in communication in cluster-based IoT networks. It is a context- and user-behavior-aware approach, aiming to simplify the selection of beneficial communication interfaces and cluster heads for data transmission on the part of IoT devices.
In another work, Hy-IoT provides an efficient hybrid energy-aware clustering communication protocol for green IoT network computing, along with a real IoT network architecture for testing the proposed protocol against existing ones. Efficient cluster head selection boosts the utilization of each node's energy content, thereby increasing network lifetime and the rate of packet transmission to the base station. Hy-IoT uses different weighted election probabilities for the selection of a cluster head based on the heterogeneity level of the region. Besides prolonging network lifetime, it increases throughput with respect to SEP, LEACH, and Z-SEP \cite{SADEK2018166}.
\section{Network Model}
Here, we define the clustering approach adopted in this research and formalize the problem. An IoT network contains various IoT devices (e.g., smartwatches and sensors with different battery capacities). Therefore, we define \(N= \{N_1, \dots, N_i\}\) as a heterogeneous set of battery-powered IoT nodes in a network field of dimensions \(Z\times{Y}\). As an example, Figure~\ref{fig:fig1} illustrates a network field of \(100m\times{100m}\). Due to their heterogeneous energy levels, the nodes have various energy capacities, which we denote as \(E= \{E_1,\dots, E_i\}\) (in joules). On that basis, it is more challenging here than in a homogeneous network scenario to calculate and select a high-energy node to take the CH role, since some nodes have higher (or effectively unlimited) initial energy and some have lower. Therefore, previous work on traditional clustering may not detect optimal CHs efficiently in IoT scenarios. Moreover, increasing the number of iterations in the CH selection procedure imposes a higher overhead on the entire network, due to the large number of broadcast packets containing information about residual energy and about the density of surrounding nodes. In each iteration, each node calculates a grade that indicates its probability of being elected CH based on its own energy level. Since the grade of a node must reach 1.0 before it can be declared a final CH, packets are sent to the CHs or gateways according to the node positions. In a homogeneous network, the initial energy of all cluster nodes is 100\% in the first iteration, denoted $T$; it then drops (e.g., to 80\%) over the following iterations $T+1, \ldots, n$, continuing to decrease until it reaches 0\%, at which point the node becomes inactive.
This affects network coverage and density, resulting in unbalanced energy consumption across different areas, which inevitably leads to electing low-energy nodes as CHs. In contrast, in heterogeneous IoT networks it is possible to elect high-energy nodes for the CH role. Figure \ref{fig:fig1} illustrates the network and node distribution in the initial phase.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.9\linewidth]{figures/fig1.png}
\caption{Network Environment at the initial state}
\label{fig:fig1}
\end{figure}
\section{Proposed Approach}
Clustering should be performed so that highly energetic nodes located in denser areas receive more packets from their neighboring nodes. It is therefore necessary to calculate the residual energy values of the nodes so that re-clustering is performed according to their states and energy statuses. The proposed statistically based method modifies CH selection by rotating the CH role: the real residual energy of a node (in joules) is divided by the average energy of its surroundings, and energy consumption is distributed among the initially high-energy nodes. In the first round, nodes with high absolute energy values (in joules), rather than percentages, are selected. Then, using the mean energy status of the neighbors, energy consumption is distributed so that high-energy nodes take on the task of sending packets to the gateways, reducing the energy consumed within the cluster. With these parameters taken into account, the proposed model for CH selection is shown in Eq.~\ref{eq1}.
\begin{multline}
CH_{\text{prob}}=\\
C_{\text{prob}}\times{\frac{E_{\text{rest current node in }j}}{M_{E_{\text{rest neighbor nodes in }j}}}}\times{\frac{E_{\text{rest}}}{\sqrt{\frac{1}{N-1}\sum_{i=1}^{N}(E_i-M)^2}}}
\label{eq1}
\end{multline}
In Eq.~\ref{eq1}, \(CH_{\text{prob}}\) is the probability of taking the \(CH\) role, which is based on the following factors. \(C_{\text{prob}}\) is a fixed probability value of 5\%. \(E_{\text{rest}}\) is the residual energy of the node, measured in joules. \(M\) is the mean energy of the neighboring nodes, also in joules, which appears in the denominator of the first ratio; in this part of the algorithm, the residual energy of the node is compared with the residual energy of the surrounding nodes in its cluster. In the next term of the equation, the residual energy of the node is again considered in joules, but this time the sample deviation of the neighboring energies is used. In the deviation term, \(N\) is the number of surrounding nodes and \(E_i\) is the residual energy of each neighbor, from which the neighborhood mean is subtracted.
To estimate the relative spread of residual energy in the network, we use the sample deviation within each cluster; that is, the residual energy of a node is compared with the sample deviation of the energies of its remaining neighbors. In effect, the node's energy is measured relative to the variability among its neighbors: a ratio greater than one means that the examined node has more energy than is typical of its neighbors and is therefore a candidate to be elected CH. In Eq.~\ref{eq2}, another term is added, which serves as an additional criterion for selecting the CH; on that basis, the number of repetitions in the competition rounds with other nodes is also taken into account.
\begin{multline}
CH_{\text{prob}}=\\C_{\text{prob}}\times{\frac{E_{\text{rest current node in }j}}{M_{E_{\text{rest neighbor nodes in }j}}}}\times{\frac{E_{\text{rest}}}{\sqrt{\frac{1}{N-1}\sum_{i=1}^{N}(E_i-M)^2}}}\times\\
\min\left(\frac{\text{node degree}}{D_{\text{avg}}},1\right)
\label{eq2}
\end{multline}
In Eq.~\ref{eq2}, the three parts of the previous formula are repeated, except that in this step, the position of the node and the density of its surrounding nodes are also considered when deciding whether it may be selected as a CH. Through the third part of the formula, nodes whose neighborhoods are at least as dense as the network average are favored as CHs for transferring data. ${D_{\text{avg}}}$ is the average density of the surrounding nodes, calculated as in Eq. (3).
\begin{equation}
D_{\text{avg}}= \pi R^2 \times \frac{Num_{\text{Devices}}}{Area}
\end{equation}
In Eq. (3), $R$ is the communication radius of the node, ${Num_{\text{Devices}}}$ is the number of battery-powered devices in the network, and $Area$ is the size of the operating environment of the sensors and devices. The density term is capped at $1$, the value at which the node becomes eligible for election as the final CH. When a node is evaluated, three conditions are considered, as given in Eq.~\ref{eq:conditions}.
\begin{multline}
\begin{cases}
C_1:\ \text{Random}(0,1)\leq{C_{\text{prob}}}\\
C_2:\ \frac{E_{\text{rest in }j}}{M_{E_{\text{rest other nodes in neighbourhood in }j}}}\times\\
\quad\quad\ \frac{E_{\text{rest}}}{\sqrt{\frac{1}{N-1}\sum_{i=1}^{N}(E_i-M)^2}}\geq{1}\\
C_3:\ \frac{\text{Node Degree}}{D_{\text{avg}}}\geq{1}
\end{cases}
\label{eq:conditions}
\end{multline}
If at least two of the three conditions in Eq.~\ref{eq:conditions} are met, the node is selected as the
final CH. If the node in question is unable to communicate with any other node, it prepares itself
for the transfer as the final CH without having to participate in the competition. Table II presents a truth table of the possible scenarios \cite{s17071574}.
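For illustration only, the scoring rule of Eqs. (1)--(3) and the decision rule of Eq.~\ref{eq:conditions} can be sketched in a few lines of Python. The function names and example values are our own (the paper's experiments were run in MATLAB); note that Python's \texttt{statistics.stdev} computes exactly the sample-deviation term $\sqrt{\frac{1}{N-1}\sum_{i}(E_i-M)^2}$ and requires at least two neighbors.

```python
import math
import random
import statistics

def ch_score(c_prob, e_rest, neighbor_energies, node_degree,
             radius, num_devices, area):
    """Illustrative CH score per Eqs. (1)-(3); names and values are ours."""
    m = statistics.mean(neighbor_energies)        # M: mean neighbor energy (J)
    sd = statistics.stdev(neighbor_energies)      # sample deviation (>= 2 neighbors)
    d_avg = math.pi * radius ** 2 * num_devices / area   # Eq. (3)
    density = min(node_degree / d_avg, 1.0)       # third term of Eq. (2), capped at 1
    return c_prob * (e_rest / m) * (e_rest / sd) * density

def ch_status(c_prob, energy_term, degree_term, rng=random.random):
    """Eq. (4) / Table II: a node is a final CH when at least two of C1-C3 hold."""
    c1 = rng() <= c_prob           # C1: random draw against the fixed probability
    c2 = energy_term >= 1          # C2: combined energy ratios
    c3 = degree_term >= 1          # C3: node degree relative to D_avg
    return "Final" if (c1 + c2 + c3) >= 2 else "Tentative"
```

For example, with $C_{\text{prob}}=0.05$, a node holding 5\,J among neighbors holding 2, 4, and 6\,J, degree 60, radius 25\,m, 300 devices, and a $100\times100$\,m field, the density term is capped at 1 and the score evaluates to $0.05\times\frac{5}{4}\times\frac{5}{2}=0.15625$.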
\begin{table}[htbp]
\label{tb:CH_statuses1}
\caption{Cluster head statuses}
\begin{center}
\begin{tabular}{M{1.5cm}|M{1.5cm}|M{1.5cm}|M{1.5cm}}
\hline
\rowcolor{gray!20}
\textbf{\(C_1\)} & \textbf{\(C_2\)} & \textbf{\(C_3\)} & \textbf{Status} \\
\hline
\hline
1 & 1 & 1 & Final \\
0 & 1 & 1 & Final \\
1 & 0 & 1 & Final \\
1 & 1 & 0 & Final \\
1 & 0 & 0 & Tentative \\
\end{tabular}
\end{center}
\end{table}
The simulation parameters are listed in Table III. The network communication interfaces are NAI1--NAI5, which include Wi-Fi, Bluetooth, ZigBee, LTE, and NB-LTE, each with its own characteristics. Detailed information on these communication technologies and on the power consumption models is given in \cite{s17071574}.
\begin{table}[htbp]
\caption{Simulation Parameters}
\begin{center}
\begin{tabular}{c|c|c p{5cm}}
\hline
\rowcolor{gray!20}
\textbf{Type} & \textbf{Parameters} & \textbf{Values}\\
\hline \hline
Area & Network Area & From (0,0) - (100, 100) \\
Number of nodes & N & 300 \\
Initial energy & Energy & Heterogeneous \\
Transmission range & R & various \\
Gateway & Sink & At (50, 175) \\
Default cluster radius & & 25m \\
Data packet size & & 100 bytes \\
Broadcast packet size & & 25 bytes \\
Data header size & & 25 bytes \\
Each round & & 5 TDMA frames \\
CH data compress rate & & 0.8 \\
Duration & & 1000 rounds \\
Interfaces & & NAI1-NAI5 \\
\end{tabular}
\end{center}
\end{table}
\section{Performance Evaluation}
In recent years, several works have aimed to simulate various layers of the cloud computing stack: IoT \cite{10.5555/1941192}, \cite{4116633}, edge \cite{8654084}, \cite{ Yousefpour_2019}, fog \cite{https://doi.org/10.1002/spe.2509}, and cloud-based scenarios \cite{khan2021perfsim}, considering different levels of system detail and complexity. However, in the context of clustering algorithms, the majority of papers use MATLAB \cite{matlab} as their base simulation platform, owing to the statistical nature of their evaluations. For the same reason, we also used MATLAB as the base simulation platform, and all conditions, including network size, simulation area, number of nodes, and network distribution, are assumed to be the same for all compared algorithms. The final results were obtained by averaging ten simulation runs.
In this study, we call the proposed algorithm HetEng and compare it statistically with several algorithms, including HEED, LEACH, BEE, BEEM, and Smart-BEEM. As initial energy levels were generated randomly in the simulation environment, the running phase was kept the same for all algorithms. For large areas, the proposed method exhibited an increase in network lifetime compared to the other algorithms. We also observed that network lifetime is generally longer in networks with more nodes than in ones with fewer nodes. The results indicate an increase in the number of alive nodes and in residual energy for the proposed algorithm compared to the others. In large networks, however, network coverage tracks network lifetime much more closely, because overall energy consumption in the network is significantly higher.
\begin{figure}[htbp]
\centering
\includegraphics[width=1\linewidth]{figures/Residual.png}
\caption{Comparison of residual energy among different algorithms}
\label{fig:fig2}
\end{figure}
In Fig.~\ref{fig:fig2}, the superiority of the proposed algorithm over all the compared algorithms can be observed: on average, a 3\% improvement over the Smart-BEEM algorithm in the residual energy of the nodes. The slope of the proposed algorithm is also closer to linear than those of the other algorithms, which results from the closer-to-normal distribution of energy consumption. Moreover, the HEED and LEACH algorithms exhibited the steepest slopes, in that order, indicating the lowest amounts of energy remaining after 1000 rounds.
\begin{figure}[hbtp]
\centering
\includegraphics[width=1\linewidth]{figures/Alive.png}
\caption{Alive nodes state in the network}
\label{fig:fig3}
\end{figure}
Fig.~\ref{fig:fig3} shows how the normal distribution of the CH role and the use of high-energy nodes for sending packets spread energy consumption over the network. As depicted in Fig.~\ref{fig:fig3}, the number of alive nodes improved by 6.6\% on average in the proposed algorithm with respect to Smart-BEEM. The slope of the HetEng curve is closer to normal and linear than those of the compared algorithms, demonstrating slower energy depletion among the network nodes.
\begin{figure}[htbp]
\centering
\includegraphics[width=1\linewidth]{figures/coverage.png}
\caption{Network coverage among different algorithms}
\label{fig:fig4}
\end{figure}
As indicated in Fig.~\ref{fig:fig4}, in terms of network coverage the proposed algorithm exhibited a slight improvement in some scenarios, depending on factors such as node distribution and initial energy states; on average, however, its performance was equal to that of Smart-BEEM and SelfCon. This results from the use of five communication protocols in all the compared algorithms, which greatly affects network coverage. The diagram exhibits more regular slopes for Smart-BEEM, SelfCon, and HetEng than for the other compared algorithms.
\begin{figure}[htbp]
\centering
\includegraphics[width=1\linewidth]{figures/percentage.png}
\caption{CH percentage in each algorithm}
\label{fig:fig5}
\end{figure}
Fig.~\ref{fig:fig5} shows the probability of a node being selected as CH in each algorithm. As can be seen, the HEED algorithm exhibited the largest number of jumps and changes at the beginning of the competition, and the probability constantly decreased over time with the loss of alive nodes. The five algorithms BEE, BEEM, Smart-BEEM, SelfCon, and HetEng were closer to linear and exhibited almost equal probabilities of taking the role.
\begin{figure}[htbp]
\centering
\includegraphics[width=1\linewidth]{figures/iter.png}
\caption{Iteration Status}
\label{fig:fig6}
\end{figure}
Fig.~\ref{fig:fig6} shows the importance of reducing the number of iterations in CH selection and its impact on network overhead. As indicated, the HEED algorithm exhibited the largest number of rounds and the largest fluctuations, due to its greedy cluster head selection mechanism and its focus on distributing the role among all nodes (including those with low energy). HetEng reduced the iteration count by 1\% on average in comparison with Smart-BEEM and SelfCon. It should be noted that the LEACH algorithm exhibited the smallest number of competition rounds, the optimum among all the algorithms, owing to its use of random selection instead of competition rounds.
\section{Conclusion and Future Works}
In this paper, we proposed a distributed energy-efficient cluster head (CH) selection scheme for low-power clustered IoT networks that supports heterogeneity and detects the most suitable high-energy nodes to play the CH role. Despite the many advancements in hardware and miniaturization technologies, energy consumption, network longevity, and, most importantly, network coverage remain the main challenges in such networks. Many of the challenges present in low-power networks, such as wireless sensor networks, persist in IoT networks, where they are even more severe due to the much more complex scenarios. In the proposed algorithm, the real energy variance is calculated using the sample deviation, and each node is compared to its surrounding nodes in the cluster, which makes it possible to manage the network in terms of longevity, energy consumption, and network coverage under different conditions. We reduced the number of iterations by 1\% and used average values to distribute energy consumption among high-energy nodes, which resulted in: (1) a 6.6\% reduction of energy consumption over the whole network rather than at each individual node, and (2) improved coverage with respect to the compared algorithms by preventing the rapid depletion of energy at low-energy nodes. Finally, two issues need to be addressed to extend the present study to more realistic environments: (1) mobility, which is an important factor in IoT networks, and (2) Quality of Service (QoS).
\section*{Acknowledgment}
The authors would like to express their highest gratitude to Mr. Michel Gokan Khan from Karlstad University in Sweden for his genuine support and constructive feedback.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:into}
The axial, scalar and tensor charges of the nucleon are needed to
interpret the results of many experiments and probe new physics. In
this paper, we extend the calculations presented in
Refs.~\cite{Bhattacharya:2015wna,Bhattacharya:2015esa,Bhattacharya:2016zcn}
by analyzing eleven ensembles of $2+1+1$ flavors of highly improved
staggered quarks (HISQ)~\cite{Follana:2006rc} generated by the MILC
collaboration~\cite{Bazavov:2012xda}. These now include a second
physical mass ensemble at $a=0.06$~fm, and an ensemble with
$a=0.15$~fm and $M_\pi \approx 310$~MeV. We have also increased the
statistics significantly on six other ensembles using the truncated-solver
method with bias correction~\cite{Bali:2009hu,Blum:2012uh}. The
resulting high-statistics data provide better control over various
sources of systematic errors, in particular the two systematics: (i)
excited-state contamination (ESC) in the extraction of the
ground-state matrix elements of the various quark bilinear operators
and (ii) the reliability of the chiral-continuum-finite volume (CCFV)
extrapolation used to obtain the final results that can be compared to
phenomenological and experimental values. With improved simultaneous
CCFV fits, we obtain $g_A^{u-d} =1.218(25)(30)$, $g_S^{u-d}
=1.022(80)(60)$ and $g_T^{u-d} = 0.989(32)(10)$ for the isovector
charges in the $\overline{MS}$ scheme at 2~GeV. The first error
includes statistical and all systematic uncertainties except that due
to the ansatz used for the final CCFV extrapolation, which is given by
the second error estimate. We also update our estimates for the
connected contributions to the flavor diagonal charges $g_{A,T}^{u}$
and $g_{A,T}^{d} $, and the isoscalar combination $g_T^{u+d} $.
Throughout the paper, we present results for the charges of the
proton, which by convention are called nucleon charges in the
literature. From these, results for the neutron, in our isosymmetric
formulation with $m_u = m_d$, are obtained by the $u \leftrightarrow d$
interchange.
The axial charge, $g_A^{u-d}$, is an important parameter that
encapsulates the strength of weak interactions of nucleons. It enters
in many analyses of nucleon structure and of the Standard Model (SM)
and beyond-the-SM (BSM) physics. For example, it impacts the
extraction of the Cabibbo-Kobayashi-Maskawa (CKM) matrix element
$V_{ud}$, tests the unitarity of the CKM matrix, and is needed for the
analysis of neutrinoless double-beta decay. Also,
the rate of proton-proton fusion, the first step in the thermonuclear
reaction chains that power low-mass hydrogen-burning stars like the
Sun, is sensitive to it. The current best determination of the ratio
of the axial to the vector charge, $g_A/g_V$, comes from measurement
of neutron beta decay using polarized ultracold neutrons (UCN) by the UCNA
collaboration, $1.2772(20)$~\cite{Mendenhall:2012tz,Brown:2017mhw}, and by PERKEO
II, $1.2761{}^{+14}_{-17}$~\cite{Mund:2012fq}. Note that, in the SM,
$g_V=1$ up to second order corrections in isospin
breaking~\cite{Ademollo:1964sr,Donoghue:1990ti} as a result of the
conservation of the vector current.
Given the accuracy with which $g_A^{u-d}$ has been measured in
experiments, our goal is to calculate it directly with $O(1\%)$
accuracy using lattice QCD. The result presented in this paper,
$g_A^{u-d}=1.218(25)(30)$, is, however, about $1.5\sigma$ ($5\%$)
smaller than the experimental value. In Sec.~\ref{sec:comparison}, we
compare with the result $g_A^{u-d} = 1.271(13)$ by the CalLat
collaboration. We show that the data on seven HISQ
ensembles analyzed by both collaborations agree within $1\sigma$ and
the final difference is due to the chiral and continuum
extrapolation--the fits are weighted differently by the data points
that are not common. Based on the analysis of the size of the various
systematics in Sec.~\ref{sec:errors}, and on the comparison with
CalLat calculation, we conclude that our analysis of errors is
realistic. Our goal, therefore, is to continue to quantify and control
the various sources of errors to improve precision.
The Standard Model does not contain fundamental scalar or tensor
interactions. However, loop effects and new interactions at the TeV
scale can generate effective interactions at the hadronic scale that
can be probed in decays of neutrons, and at the TeV scale itself at
the LHC. Such scalar and tensor interactions contribute to the
helicity-flip parameters $b$ and $b_\nu$ in the neutron decay
distribution~\cite{Bhattacharya:2011qm}. Thus, by combining the
calculation of the scalar and tensor charges with the measurements of
$b$ and $b_\nu$ in low energy experiments, one can put constraints on
novel scalar and tensor interactions at the TeV scale as described in
Ref.~\cite{Bhattacharya:2011qm}. To optimally bound such scalar and
tensor interactions using measurements of $b$ and $b_\nu$ parameters
in planned experiments targeting $10^{-3}$
precision~\cite{abBA,WilburnUCNB,Pocanic:2008pu}, the level of
precision required in $g_S^{u-d}$ and $g_T^{u-d}$ is at the $10\%$
level as explained in
Refs.~\cite{Bhattacharya:2011qm,abBA,WilburnUCNB,Pocanic:2008pu}.
Future higher-precision measurements of $b$ and $b_\nu$ would require
correspondingly higher-precision calculations of the matrix elements
to place even more stringent bounds on TeV-scale couplings.
In a recent work~\cite{Bhattacharya:2015wna}, we showed that
lattice-QCD calculations have reached a level of control over all
sources of systematic errors needed to yield the tensor charge with
the required precision. The errors in the scalar 3-point functions are
about a factor of 2 larger. In this paper we show that by using the
truncated solver method with bias
correction~\cite{Bali:2009hu,Blum:2012uh} (called TSM for brevity
henceforth) to obtain high statistics on all ensembles, we are also
able to control the uncertainty in $g_S^{u-d}$ to the required 10\%
level. These higher-statistics results also improve upon our previous
estimates of the axial and the tensor charges.
The matrix elements of the flavor-diagonal tensor operators are needed
to quantify the contributions of the $u,\ d, \ s, \ c$ quark electric
dipole moments (EDM) to the neutron electric dipole moment
(nEDM)~\cite{Bhattacharya:2015wna,Pospelov:2005pr}. The nEDM is a very
sensitive probe of new sources of $T$ and $CP$ violation that arise in
most extensions of the Standard Model designed to explain nature at
the TeV scale. Planned experiments aim to reduce the current bound on
the nEDM of $2.9 \times 10^{-26}\ e$~cm~\cite{Baker:2006ts} to around
$ 10^{-28}\ e$~cm. Improving the bound will put stringent constraints on many BSM
theories provided the matrix elements of novel $CP$-violating
interactions, of which the quark EDM is one, are calculated with the
required precision. In
Refs.~\cite{Bhattacharya:2015wna,Bhattacharya:2016zcn}, we showed that
the disconnected contributions are negligible, so we update the
connected contributions to the flavor diagonal tensor charges for the
light $u$ and $d$ quarks that are taken to be degenerate.
The tensor charges are also extracted as the zeroth moment of the
transversity distributions. These are measured in many experiments,
including Drell-Yan and semi-inclusive deep inelastic scattering
(SIDIS), and describe the net transverse polarization of quarks in a
transversely polarized nucleon. There exists an active program at
Jefferson Lab (JLab) to measure them~\cite{Dudek:2012vr}. It is,
however, not straightforward to extract the transversity distributions
from the data taken over a limited range of $Q^2$ and Bjorken $x$;
consequently, additional phenomenological modeling is required. Lattice QCD
results for $g_T^{u}$, $g_T^{d}$, $g_T^{s}$ and $g_T^{u-d}$
are the most accurate at present as already discussed in
Ref.~\cite{Bhattacharya:2016zcn}. Future experiments at JLab and
other experimental facilities worldwide will significantly improve the
extraction of the transversity distributions, and together with
accurate calculations of the tensor charges using lattice QCD
elucidate the structure of the nucleon in terms of quarks and gluons.
The methodology for calculating the isovector charges in an isospin
symmetric theory, that is, measuring the contribution to the matrix
elements of the insertion of the zero-momentum bilinear quark
operators in one of the three valence quarks in the nucleon, is well
developed~\cite{Bhattacharya:2015wna,Bhattacharya:2015esa,Bhattacharya:2016zcn,Lin:2012ev,Syritsyn:2014saa,Constantinou:2014tga}.
Calculation of the flavor-diagonal charges is similar except that it
gets additional contributions from contractions of the operator as a
vacuum quark loop that interacts with the nucleon propagator through
the exchange of gluons. In Ref.~\cite{Bhattacharya:2015wna}, we
showed that these contributions to $g_T^{u,d,s}$ are small, $O(0.01)$,
and consistent with zero within errors. Thus, within current error
estimates, the connected contributions alone provide reliable
estimates for the flavor diagonal charges $g_{T}^{u,d} $ and the
isoscalar combination $g_T^{u+d} $. A detailed analysis
of disconnected contributions to the axial, scalar and tensor charges
will be presented in a separate paper.
This paper is organized as follows. In Sec.~\ref{sec:Methodology}, we
describe the parameters of the gauge ensembles analyzed and the
lattice methodology. The fits used to isolate excited-state
contamination are described in Sec.~\ref{sec:excited}. The
renormalization of the operators is discussed in
Sec.~\ref{sec:renorm}. Our final results for the isovector charges and
the connected parts of the flavor-diagonal charges are presented in
Sec.~\ref{sec:results}. Our estimation of errors is revisited
in Sec.~\ref{sec:errors}, and a comparison with previous works is given
in Sec.~\ref{sec:comparison}. In Sec.~\ref{sec:est}, we provide
constraints on novel scalar and tensor interactions at the TeV scale
using our new estimates of the charges and precision beta decay experiments and
compare them to those from the LHC. Our final conclusions are
presented in Sec.~\ref{sec:conclusions}.
\section{Lattice Methodology}
\label{sec:Methodology}
\begin{table*}[tbp]
\begin{center}
\renewcommand{\arraystretch}{1.2}
\begin{ruledtabular}
\begin{tabular}{l|ccc|cc|cccc}
Ensemble ID & $a$ (fm) & $M_\pi^{\rm sea}$ (MeV) & $M_\pi^{\rm val}$ (MeV) & $L^3\times T$ & $M_\pi^{\rm val} L$ & $\tau/a$ & $N_\text{conf}$ & $N_{\rm meas}^{\rm HP}$ & $N_{\rm meas}^{\rm LP}$ \\
\hline
$a15m310 $ & 0.1510(20) & 306.9(5) & 320.6(4.3) & $16^3\times 48$ & 3.93 & $\{5,6,7,8,9\}$ & 1917 & 7668 & 122,688 \\
\hline
$a12m310 $ & 0.1207(11) & 305.3(4) & 310.2(2.8) & $24^3\times 64$ & 4.55 & $\{8,10,12\}$ & 1013 & 8104 & 64,832 \\
$a12m220S$ & 0.1202(12) & 218.1(4) & 225.0(2.3) & $24^3\times 64$ & 3.29 & $\{8, 10, 12\}$ & 946 & 3784 & 60,544 \\
$a12m220 $ & 0.1184(10) & 216.9(2) & 227.9(1.9) & $32^3\times 64$ & 4.38 & $\{8, 10, 12\}$ & 744 & 2976 & 47,616 \\
$a12m220L_O$ & 0.1189(09) & 217.0(2) & 227.6(1.7) & $40^3\times 64$ & 5.49 & $\{8,10,12,14\}$ & 1010 & 8080 & 68,680 \\
$a12m220L$ & & & & & & $\{8,10,12,14\}$ & 1000 & 4000 & 128,000 \\
\hline
$a09m310 $ & 0.0888(08) & 312.7(6) & 313.0(2.8) & $32^3\times 96$ & 4.51 & $\{10,12,14,16\}$ & 2263 & 9052 & 114,832 \\
$a09m220 $ & 0.0872(07) & 220.3(2) & 225.9(1.8) & $48^3\times 96$ & 4.79 & $\{10,12,14,16\}$ & 964 & 7712 & 123,392 \\
$a09m130 $ & 0.0871(06) & 128.2(1) & 138.1(1.0) & $64^3\times 96$ & 3.90 & $\{10,12,14\}$ & 883 & 7064 & 84,768 \\
$a09m130W$ & & & & & & $\{8,10,12,14,16\}$ & 1290 & 5160 & 165,120 \\
\hline
$a06m310 $ & 0.0582(04) & 319.3(5) & 319.6(2.2) & $48^3\times 144$& 4.52 & $\{16,20,22,24\}$ & 1000 & 8000 & 64,000 \\
$a06m310W$ & & & & & & $\{18,20,22,24\}$ & 500 & 2000 & 64,000 \\
$a06m220 $ & 0.0578(04) & 229.2(4) & 235.2(1.7) & $64^3\times 144$& 4.41 & $\{16,20,22,24\}$ & 650 & 2600 & 41,600 \\
$a06m220W$ & & & & & & $\{18,20,22,24\}$ & 649 & 2596 & 41,546 \\
$a06m135 $ & 0.0570(01) & 135.5(2) & 135.6(1.4) & $96^3\times 192$& 3.7 & $\{16,18,20,22\}$ & 675 & 2700 & 43,200 \\
\end{tabular}
\end{ruledtabular}
\caption{Parameters, including the Goldstone pion mass
$M_\pi^{\rm sea}$, of the eleven 2+1+1-flavor HISQ ensembles generated
by the MILC collaboration and analyzed in this study are quoted from
Ref.~\cite{Bazavov:2012xda}. All fits are made versus $M_\pi^{\rm
val}$ and finite-size effects are analyzed in terms of $M_\pi^{\rm
val} L$. Estimates of $M_\pi^{\rm val}$, the clover-on-HISQ pion
mass, are the same as given in Ref.~\cite{Bhattacharya:2015wna} and
the error is governed mainly by the uncertainty in the lattice
scale. In the last four columns, we give, for each ensemble, the
values of the source-sink separation $\tau$ used in the
calculation of the three-point functions, the number of
configurations analyzed, and the number of measurements made using
the high precision (HP) and the low precision (LP) truncation of the
inversion of the clover operator. The second set of calculations,
$a09m130W$, $a06m310W$ and $a06m220W$, have been done with the
larger smearing size $\sigma$ that is given in
Table~\protect\ref{tab:cloverparams}. The new $a12m220L$ simulations
replace $a12m220L_O$ for reasons explained in the text.}
\label{tab:ens}
\end{center}
\end{table*}
\begin{table}[htbp]
\centering
\begin{ruledtabular}
\begin{tabular}{l|lc|c|c}
\multicolumn1c{ID} & \multicolumn1c{$m_l$} & $c_{\text{SW}}$ & Smearing & RMS smearing \\
& & & Parameters & radius \\
\hline
$a15m310 $ & $-0.0893$ & 1.05094 & \{4.2, 36\} & 4.69 \\
\hline
$a12m310 $ & $-0.0695$ & 1.05094 & \{5.5, 70\} & 5.96 \\
$a12m220S$ & $-0.075$ & 1.05091 & \{5.5, 70\} & 5.98 \\
$a12m220 $ & $-0.075$ & 1.05091 & \{5.5, 70\} & 5.96 \\
$a12m220L$ & $-0.075$ & 1.05091 & \{5.5, 70\} & 5.96 \\
\hline
$a09m310 $ & $-0.05138$ & 1.04243 & \{7.0,100\} & 7.48 \\
$a09m220 $ & $-0.0554$ & 1.04239 & \{7.0,100\} & 7.48 \\
$a09m130 $ & $-0.058$ & 1.04239 & \{5.5, 70\} & 6.11 \\
$a09m130W$ & $-0.058$ & 1.04239 & \{7.0,100\} & 7.50 \\
\hline
$a06m310 $ & $-0.0398$ & 1.03493 & \{6.5, 70\} & 7.22 \\
$a06m310W $ & $-0.0398$ & 1.03493 & \{12, 250\} & 12.19 \\
$a06m220 $ & $-0.04222$ & 1.03493 & \{5.5, 70\} & 6.22 \\
$a06m220W $ & $-0.04222$ & 1.03493 & \{11, 230\} & 11.24 \\
$a06m135 $ & $-0.044$ & 1.03493 & \{9.0,150\} & 9.56 \\
\end{tabular}
\end{ruledtabular}
\caption{The parameters used in the
calculation of the clover propagators. The hopping parameter for
the light quarks, $\kappa_l$, in the clover action is given by
$2\kappa_{l} = 1/(m_{l}+4)$. $m_l$ is tuned to achieve $M_\pi^{\rm
val} \approx M_\pi^\text{sea}$. The parameters used to construct
Gaussian smeared sources, $\{\sigma, N_{\text{KG}}\}$, are given in
the fourth column where $N_{\text{KG}}$ is the number of
applications of the Klein-Gordon operator and the width of the
smearing is controlled by the coefficient $\sigma$, both in Chroma
convention~\cite{Edwards:2004sx}. The resulting root-mean-square
radius of the smearing, defined as $\sqrt{\int r^2 \sqrt{S^\dag S}
dr /\int \sqrt{S^\dag S} dr} $, is given in the last column. }
\label{tab:cloverparams}
\end{table}
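The hopping-parameter relation quoted in the caption, $2\kappa_l = 1/(m_l+4)$, is simple to evaluate numerically; the sketch below (Python, purely illustrative, with $m_l$ values copied from the table rows) shows the arithmetic:

```python
def kappa(m_l):
    """Hopping parameter from the bare quark mass: 2*kappa_l = 1/(m_l + 4)."""
    return 0.5 / (m_l + 4.0)

kappa_a15m310 = kappa(-0.0893)   # m_l from the a15m310 row
kappa_a06m135 = kappa(-0.044)    # m_l from the a06m135 row
```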
The parameters of the eleven ensembles used in the analysis are
summarized in Table~\ref{tab:ens}. They cover a range of lattice
spacings ($0.06 \lesssim a \lesssim 0.15$~fm), pion masses ($135
\lesssim M_\pi \lesssim 320$~MeV) and lattice sizes ($3.3 \lesssim M_\pi
L \lesssim 5.5$) and were generated using 2+1+1 flavors of HISQ
fermions~\cite{Follana:2006rc} by the MILC
collaboration~\cite{Bazavov:2012xda}. Most of the details of the
methodology, and the strategies for the calculations and the analyses are the
same as described in
Refs.~\cite{Bhattacharya:2015wna,Bhattacharya:2016zcn}. Here we will
summarize the key points to keep the paper self-contained and
highlight the new features and analysis.
We construct the correlation functions needed to calculate the matrix
elements using Wilson-clover fermions on these HISQ ensembles. Such a
mixed action, clover-on-HISQ, is a nonunitary formulation and suffers
from the problem of exceptional configurations at small, but
{\it a priori} unknown, quark masses. We monitor all correlation functions
for such exceptional configurations in our statistical samples. For
example, evidence of exceptional configurations on three $a15m310$
lattices prevents us from analyzing ensembles with smaller $M_\pi$ at
$a = 0.15$~fm using the clover-on-HISQ approach. The same holds for
the physical mass ensemble $a12m130$.
The parameters used in the construction of
the 2- and 3-point functions with clover fermions are given in
Table~\ref{tab:cloverparams}. The Sheikholeslami-Wohlert
coefficient~\cite{Sheikholeslami:1985ij} used in the clover action is
fixed to its tree-level value with tadpole improvement, $c_\text{sw} =
1/u_0^3$, where $u_0$ is the fourth root of the plaquette expectation
value calculated on the hypercubic (HYP)
smeared~\cite{Hasenfratz:2001hp} HISQ lattices.
The masses of light clover quarks were tuned so that the
clover-on-HISQ pion masses, $M^{\rm val}_\pi$, match the HISQ-on-HISQ
Goldstone ones, $M_\pi^{\rm sea}$. Both estimates are given in
Table~\ref{tab:ens}. All fits in $M_\pi^2$ to study the chiral
behavior are made using the clover-on-HISQ $M^{\rm val}_{\pi}$ since
the correlation functions, and thus the chiral behavior of the
charges, have a greater sensitivity to it. Henceforth, for
brevity, we drop the superscript and denote the clover-on-HISQ pion
mass as $M_\pi$. Performing fits using the HISQ-on-HISQ values,
${M_\pi^{\rm sea}}$, does not change the estimates significantly.
The highlights of the current work, compared to
the results presented in Ref.~\cite{Bhattacharya:2016zcn}, are as follows:
\begin{itemize}
\item
The addition of a second physical pion mass ensemble $a06m135$ and
the coarse $a15m310$ ensemble.
\item
The new $a12m220L$ simulations replace the older $a12m220L_O$ data. In
the $a12m220L_O$ calculation, the HP analysis had only been done for
$\tau=10$, while in the new $a12m220L$ data the HP calculation has
been done for all values of source-sink separation $\tau$, and the
bias correction applied. We have also increased the number of LP
measurements on each configuration, and both HP and LP source points
are chosen randomly within and between configurations. Even though the
results from the two calculations are consistent, as shown in
Tables~\ref{tab:2ptmulti},~\ref{tab:results3bareu-d}
and~\ref{tab:results3bareu+d}, for the two reasons
stated above we will henceforth use only the $a12m220L$ data in the
analysis of the charges and other quantities in this and future
papers.
\item
All ensembles are analyzed using the TSM method with much higher statistics
as listed in Table~\ref{tab:ens}. Our implementation of the TSM method is
described in Refs.~\cite{Bhattacharya:2015wna,Yoon:2016dij}.
\item
The new high statistics data for ensembles $a09m310$, $a09m220$ and
$a09m130W$ were generated using the smearing parameter
$\sigma=7$. This corresponds to a r.m.s. radius of $\approx 7.5$ in
lattice units or roughly 0.66~fm. As discussed in Sec.~\ref{sec:excited} and
shown in Figs.~\ref{fig:gA2v3a12}--\ref{fig:gT2v3a06},
increasing $\sigma$ from $5.5$ to $7.0$ reduces the ESC at a given
source-sink separation $\tau$.\looseness-1
\item
The two-point correlation functions are analyzed keeping up to four
states in the spectral decomposition. Previous work was based on
keeping two states.\looseness-1
\item
The three-point functions are analyzed keeping up to three states in
the spectral decomposition. Previous work
was based on keeping two states.
\end{itemize}
We find that the new higher precision data significantly improved the
ESC fits and the final combined CCFV fit used to obtain results in the
limits $a \to 0$, the pion mass $M_\pi \to 135$~MeV and the
lattice volume $M_\pi L \to \infty$.
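As an illustration of how such a simultaneous CCFV extrapolation works in principle, the toy fit below uses a hypothetical lowest-order ansatz, $g = c_0 + c_1 a + c_2 M_\pi^2 + c_3 M_\pi^2 e^{-M_\pi L}$, on synthetic data with known coefficients; the ensemble parameters are approximate values from the table above, and the actual fit forms, correlations, and error analysis used in the paper are more involved:

```python
import numpy as np

# Hypothetical lowest-order CCFV ansatz (illustration only):
#   g(a, Mpi, L) = c0 + c1*a + c2*Mpi^2 + c3*Mpi^2*exp(-Mpi*L)
def design(a, mpi, mpi_l):
    return np.column_stack([np.ones_like(a), a, mpi**2, mpi**2 * np.exp(-mpi_l)])

# lattice spacings (fm), valence pion masses (GeV) and Mpi*L, approximate
a     = np.array([0.15, 0.12, 0.12, 0.09, 0.09, 0.09, 0.06, 0.06, 0.06])
mpi   = np.array([0.320, 0.310, 0.228, 0.313, 0.226, 0.138, 0.320, 0.235, 0.136])
mpi_l = np.array([3.93, 4.55, 4.38, 4.51, 4.79, 3.90, 4.52, 4.41, 3.70])

g = 1.22 - 0.3 * a + 0.8 * mpi**2      # synthetic charges with known coefficients
c, *_ = np.linalg.lstsq(design(a, mpi, mpi_l), g, rcond=None)

# physical point: a -> 0, Mpi -> 0.135 GeV, Mpi*L -> infinity
g_phys = c[0] + c[2] * 0.135**2
```

Because the synthetic data were generated within the ansatz, the least-squares fit recovers the input coefficients; with real data the choice of terms kept in the ansatz is itself a source of systematic error, which is why it is quoted separately.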
\subsection{Correlation Functions}
\label{sec:CorrelationFunctions}
We use the following interpolating operator $\chi$ to create$/$annihilate the nucleon
state:
\begin{align}
\chi(x) = \epsilon^{abc} \left[ {q_1^a}^T(x) C \gamma_5
\frac{(1 \pm \gamma_4)}{2} q_2^b(x) \right] q_1^c(x) \,,
\label{eq:nucl_op}
\end{align}
with $\{a, b, c\}$ labeling the color indices, $C=\gamma_0 \gamma_2$ the charge
conjugation matrix, and $q_1$ and $q_2$ denoting the two different
flavors of light quarks. The nonrelativistic projection $(1 \pm
\gamma_4)/2$ is inserted to improve the signal, with the plus and
minus signs applied to the forward and backward propagation in
Euclidean time, respectively~\cite{Gockeler:1995wg}. At zero
momentum, this operator couples only to the spin-$\frac{1}{2}$ state.
The zero momentum 2-point and 3-point nucleon correlation functions
are defined as
\begin{align}
{\mathbf C}_{\alpha \beta}^{\text{2pt}}(\tau)
&= \sum_{\mathbf{x}}
\langle 0 \vert \chi_\alpha(\tau, \mathbf{x}) \overline{\chi}_\beta(0, \mathbf{0})
\vert 0 \rangle \,,
\label{eq:corr_fun2} \\
{\mathbf C}_{\Gamma; \alpha \beta}^{\text{3pt}}(t, \tau)
&= \sum_{\mathbf{x}, \mathbf{x'}}
\langle 0 \vert \chi_\alpha(\tau, \mathbf{x}) \mathcal{O}_\Gamma(t, \mathbf{x'})
\overline{\chi}_\beta(0, \mathbf{0})
\vert 0 \rangle \,,
\label{eq:corr_fun3}
\end{align}
where $\alpha$ and $\beta$ are spinor indices. The source is
placed at time slice $0$, $\tau$ is the sink time slice, and $t$ is an
intermediate time slice at which the local quark bilinear operator
$\mathcal{O}_\Gamma^q(x) = \bar{q}(x) \Gamma q(x)$ is inserted. The
Dirac matrix $\Gamma$ is $1$, $\gamma_4$, $\gamma_i \gamma_5$ and
$\gamma_i \gamma_j$ for scalar (S), vector (V), axial (A) and tensor
(T) operators, respectively.
In this work, subscripts $i$ and $j$ on gamma matrices run over $\{1,2,3\}$,
with $i<j$.
The nucleon charges $g_\Gamma^q$ are obtained from the ground state
matrix element $ \langle N(p, s) \vert \mathcal{O}_\Gamma^q \vert N(p,
s) \rangle$, that, in turn, are extracted using the spectral
decomposition of the 2- and 3-point correlation functions. They are
related as
\begin{align}
\langle N(p, s) \vert \mathcal{O}_\Gamma^q \vert N(p, s) \rangle
= g_\Gamma^q \bar{u}_s(p) \Gamma u_s(p)
\end{align}
with spinors satisfying
\begin{equation}
\sum_s u_s(p) \bar{u}_s(p) = \frac{E_{\mathbf{p}} \gamma_4 - i\vec{\gamma}\cdot \vec{p} + M_N} {2 E_{\mathbf{p}}}\,.
\end{equation}
To extract the charges, we construct the projected 2- and 3-point correlation functions
\begin{align}
C^{\text{2pt}}(t) & = {\langle \Tr [ \mathcal{P}_\text{2pt} {\mathbf C}^{\text{2pt}}(t) ] \rangle}
\label{eq:2pt_proj} \\
C_{\Gamma}^{\text{3pt}}(t, \tau) & = \langle \Tr [ \mathcal{P}_{\rm 3pt} {\mathbf C}_{\Gamma}^{\text{3pt}}(t, \tau) ]\rangle \, .
\label{eq:3pt_proj}
\end{align}
The operator $\mathcal{P}_\text{2pt} = (1 \pm \gamma_4)/2$ is used to
project onto the positive-parity contribution for the nucleon
propagating in the forward (backward) direction. For the connected
3-point contributions, $\mathcal{P}_{\rm 3pt} =
\mathcal{P}_\text{2pt}(1+i\gamma_5\gamma_3)$ is used. Note that the
$C_{\Gamma}^{\text{3pt}}(t, \tau)$ defined in Eq.~\eqref{eq:3pt_proj}
becomes zero if $\Gamma$ anticommutes with $\gamma_4$, so only $\Gamma
= 1$, $\gamma_4$, $\gamma_i \gamma_5$ and $\gamma_i \gamma_j$ elements
of the Clifford algebra survive. The fits used to extract the masses,
amplitudes and matrix elements from the 2- and 3-point functions,
defined in Eqs.~\eqref{eq:2pt_proj} and~\eqref{eq:3pt_proj}, are
discussed in Sec.~\ref{sec:excited}.
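The statement that only the elements $1$, $\gamma_4$, $\gamma_i\gamma_5$ and $\gamma_i\gamma_j$ survive, i.e., those commuting with $\gamma_4$, can be checked with explicit matrices. The sketch below uses one common Euclidean gamma-matrix convention (conventions differ between lattice codes, so this is illustrative):

```python
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Z, I2 = np.zeros((2, 2), dtype=complex), np.eye(2, dtype=complex)

def gi(s):  # Euclidean gamma_i in a chiral-type basis
    return np.block([[Z, -1j * s], [1j * s, Z]])

g1, g2, g3 = gi(s1), gi(s2), gi(s3)
g4 = np.block([[Z, I2], [I2, Z]])
g5 = g1 @ g2 @ g3 @ g4

def commutes_with_g4(G):
    return np.allclose(G @ g4, g4 @ G)

# The structures named in the text commute with gamma_4 ...
assert all(commutes_with_g4(G) for G in [np.eye(4), g4, g1 @ g5, g1 @ g2])
# ... while gamma_i and gamma_5 anticommute with gamma_4 and drop out
assert not commutes_with_g4(g1) and not commutes_with_g4(g5)
```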
\subsection{High Statistics Using the Truncated Solver Method}
\label{sec:TSM}
We have carried out high-statistics calculations on all the ensembles
using the truncated solver method with bias
correction~\cite{Bali:2009hu,Blum:2012uh}. In this method,
correlation functions are constructed using quark propagators inverted
with high precision (HP) and low precision (LP) using the multigrid
algorithm. The bias corrected correlators on each configuration are
then given by
\begin{align}
C^\text{imp}&
= \frac{1}{N_\text{LP}} \sum_{i=1}^{N_\text{LP}}
C_\text{LP}(\mathbf{x}_i^\text{LP}) \nonumber \\
+& \frac{1}{N_\text{HP}} \sum_{i=1}^{N_\text{HP}} \left[
C_\text{HP}(\mathbf{x}_i^\text{HP})
- C_\text{LP}(\mathbf{x}_i^\text{HP})
\right] \,,
\label{eq:2-3pt_TSM}
\end{align}
where $C_\text{LP}$ and $C_\text{HP}$ are the 2- and 3-point
correlation functions constructed using LP and HP quark propagators,
respectively, and $\mathbf{x}_i^\text{LP}$ and
$\mathbf{x}_i^\text{HP}$ are the source positions for the two kinds of
propagator inversion. The LP stopping criterion, defined as $r_{\rm
LP} \equiv |{\rm residue}|_{\rm LP}/|{\rm source}|$, varied between $10^{-3}$ and $5
\times 10^{-4}$, while that for the HP calculations varied between $10^{-7}$
and $10^{-8}$.
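Schematically, the bias-corrected average of Eq.~\eqref{eq:2-3pt_TSM} on a single configuration can be coded as below (a minimal Python sketch; array shapes and the bookkeeping of source positions are simplified):

```python
import numpy as np

def tsm_estimate(c_lp_all, c_hp, c_lp_at_hp):
    """Bias-corrected TSM average on one configuration.

    c_lp_all   : correlators from all N_LP low-precision sources
    c_hp       : correlators from the N_HP high-precision sources
    c_lp_at_hp : LP correlators recomputed at the same N_HP source positions
    """
    lp_term = np.mean(c_lp_all, axis=0)
    bias_correction = np.mean(np.asarray(c_hp) - np.asarray(c_lp_at_hp), axis=0)
    return lp_term + bias_correction
```

If the LP solves carry a small systematic offset, the second term removes it on average; as stated below, in practice the correction is found to be smaller than the statistical errors.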
As discussed in Ref.~\cite{Yoon:2016dij}, to reduce statistical
correlations between measurements, $N_\text{HP}$ maximally separated
time slices were selected randomly on each configuration and on each
of these time slices, $N_\text{LP}/N_\text{HP}$ LP source positions
were again selected randomly. The numbers of sources, $N_\text{LP}$
and $N_\text{HP}$, used are given in Table~\ref{tab:ens}. An important
conclusion based on all our calculations with $O(10^5)$ measurements
of nucleon charges and form factors carried out so far (see
Refs.~\cite{Bhattacharya:2015wna,Bhattacharya:2016zcn,Yoon:2016dij,Yoon:2016jzj,Rajan:2017lxk}),
is that the difference between the LP and the bias corrected estimates
(or the HP) is smaller than the statistical errors.
To further reduce the computational cost, we also used the coherent
sequential source method discussed in Ref.~\cite{Yoon:2016dij}.
Typically, we constructed four HP or LP sequential sources on four
sink time slices, and added them to obtain the coherent source. A
single inversion was then performed to construct the coherent
sequential propagator. This was then contracted with the four original
propagators to construct four measurements of each three-point
function. All of these propagators were held in the computer memory to
remove the I/O overhead.
Our final errors are obtained using a single elimination jackknife
analysis over the configurations, that is, we first construct the
average defined in Eq.~\eqref{eq:2-3pt_TSM} on each
configuration. Because of this ``binning'' of the data, we do not need
to correct the jackknife estimate of the error for correlations
between the $N_\text{LP}$ LP measurements per configuration.
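The single-elimination jackknife over per-configuration averages can be sketched as follows (Python; a generic estimator is assumed, and the input is the list of already-binned configuration averages):

```python
import numpy as np

def jackknife(per_config, estimator=np.mean):
    """Single-elimination jackknife: one bin per configuration, so
    correlations among LP measurements within a configuration need no
    further correction."""
    x = np.asarray(per_config, dtype=float)
    n = len(x)
    thetas = np.array([estimator(np.delete(x, i)) for i in range(n)])
    err = np.sqrt((n - 1) / n * np.sum((thetas - thetas.mean()) ** 2))
    return estimator(x), err
```

For the simple mean, the jackknife error reproduces the standard error of the mean exactly; its value lies in applying the same machinery to nonlinear estimators such as fit parameters.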
\section{Excited-State Contamination}
\label{sec:excited}
To extract the nucleon charges we need to evaluate the matrix
elements of the currents between ground-state nucleons. The
lattice nucleon interpolating operator given in
Eq.~\eqref{eq:nucl_op}, however, couples to the nucleon, all its
excitations and multiparticle states with the same quantum
numbers. Previous lattice calculations have shown that the
ESC can be large. In our earlier
works~\cite{Bhattacharya:2015wna,Bhattacharya:2016zcn,Yoon:2016jzj,Yoon:2016dij},
we have shown that this can be controlled to within a few percent
using the strategy summarized below.
The overlap between the nucleon operator and the excited states in the
construction of the two- and three-point functions is reduced by using
tuned smeared sources when calculating the quark propagators on the
HYP smeared HISQ lattices. We construct gauge-invariant Gaussian
smeared sources by applying the operator $(1 +
\sigma^2\nabla^2/(4N_{\rm GS}))^{N_{\rm GS}}$, where $\nabla^2$ is the
three-dimensional Laplacian, to a delta-function
source. The input smearing parameters $\{\sigma, N_{\rm GS}\}$ for
each ensemble are given in Table~\ref{tab:cloverparams} along with the
resulting root-mean-square radius defined as $\sqrt{\int r^2 \sqrt{S^\dag S}
dr /\int \sqrt{S^\dag S} dr }$. We find that, as a function of
distance $r$, the modulus of the sum of the values of the twelve
spin-color components at each site, $\sqrt{S^\dag S}$, is well
described by a Gaussian, and we use this ansatz to fit the data. The
results for the root-mean-square radius given in
Table~\ref{tab:cloverparams} show weak dependence on the lattice
spacing or the pion mass for fixed $\sigma$, and are roughly equal to
the input $\sigma$. Throughout this work, the same smearing is used at
the source and sink points.
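A one-dimensional free-field toy version of this smearing illustrates how the iterated operator builds up an approximately Gaussian profile whose width tracks the input $\sigma$ (the real construction is three-dimensional and gauge covariant, and the profile weight $\sqrt{S^\dag S}$ is replaced here by $|S|$):

```python
import numpy as np

def smear_delta_1d(L, sigma, n_iter):
    """Iterate (1 + sigma^2 * lap / (4 n_iter)) n_iter times on a point
    source; free-field 1-d toy of the gauge-invariant Gaussian smearing."""
    s = np.zeros(L)
    s[L // 2] = 1.0
    for _ in range(n_iter):
        lap = np.roll(s, 1) + np.roll(s, -1) - 2.0 * s
        s = s + sigma**2 * lap / (4.0 * n_iter)
    return s

def rms_radius(s):
    """Root-mean-square radius of the profile, weighted by |S|."""
    L = len(s)
    r = np.arange(L) - L // 2
    w = np.abs(s)
    return np.sqrt(np.sum(r**2 * w) / np.sum(w))
```

In this toy the total weight of the source is preserved and the resulting width grows monotonically with $\sigma$, mirroring the trend of the rms radii listed in the table.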
The analysis of the two-point functions, $C^\text{2pt}$, was carried
out keeping four states in the spectral decomposition:
\begin{align}
C^\text{2pt}
&(t,\bm{p}) = \nonumber \\
&{|{\cal A}_0|}^2 e^{-M_0 t} + {|{\cal A}_1|}^2 e^{-M_1 t}\,+ \nonumber \\
&{|{\cal A}_2|}^2 e^{-M_2 t} + {|{\cal A}_3|}^2 e^{-M_3 t}\,,
\label{eq:2pt}
\end{align}
where the amplitudes and the masses of the
four states are denoted by ${\cal A}_i$ and $M_i$, respectively.
In fits including more than two states, the estimates of $M_i$ and the
${\cal A}_i$ for $i \ge 2$ were sensitive to the choice of the
starting time slice $t_{\rm min}$, and the fits were not always
stable. The fits were stabilized using the empirical Bayesian
procedure described in Ref.~\cite{Yoon:2016jzj}. Examples of the
quality of the fits are shown in Figs.~22--29 in
Ref.~\cite{Rajan:2017lxk}. The new results for masses and amplitudes
obtained from 2-, 3- and 4-state fits are given in
Table~\ref{tab:2ptmulti}.
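The behavior of the multi-state decomposition in Eq.~\eqref{eq:2pt} is often visualized through the effective mass, which plateaus at $M_0$ once the excited-state terms have decayed; a small synthetic example (the amplitudes and masses below are made-up numbers for illustration):

```python
import numpy as np

def c2pt_model(t, amps, masses):
    """Multi-state ansatz of Eq. (2pt): sum_i |A_i|^2 exp(-M_i t)."""
    t = np.asarray(t, dtype=float)
    return sum(a**2 * np.exp(-m * t) for a, m in zip(amps, masses))

def m_eff(c):
    """Effective mass log(C(t)/C(t+1)); approaches M_0 at large t."""
    c = np.asarray(c, dtype=float)
    return np.log(c[:-1] / c[1:])
```

For positive excited-state amplitudes the effective mass approaches $M_0$ from above; in the actual analysis, correlated fits with the full four-state ansatz (stabilized by Bayesian priors) are used rather than a plateau average.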
In Fig.~\ref{fig:2pta09m130}, we compare the efficacy of different
smearing sizes in controlling excited states in the 2-point data on
the three ensembles $a09m130$, $a06m310$ and $a06m220$. In each case,
the onset of the plateau with the larger smearing size occurs at
earlier Euclidean time $t$; however, the statistical errors at larger
$t$ are larger. The more critical observation is that, while the estimates of $M_0$
overlap, the mass gaps $a\Delta M_i$ are significantly different in
the two cases. Thus the excited-state parameters are not well determined
even with our high-statistics data of $O(10^5)$ measurements. More
importantly, except for the $a06m310$ case, the mass gap $a \Delta
M_1$ obtained is much larger than $2 a M_\pi$, the value expected if
$N\pi\pi$ is the lowest excitation. Based on these observations, we
conclude that resolving the excited-state spectrum will require a
coupled-channel analysis with much higher-statistics data.
The results of different fits for the bare charges extracted from the
three-point data, given in Table~\ref{tab:results3bareu-d}, indicate
that these differences in the mass gaps do not significantly affect
the extraction of the charges. At the current level of precision, the
variations in the values of the mass gaps and the corresponding values for the
amplitudes compensate each other in fits to the 2- and
3-point data.\looseness-1
\begin{figure*}[tb]
\centering
\subfigure{
\includegraphics[width=0.45\linewidth]{figs/meff_smearing} \qquad
\includegraphics[width=0.45\linewidth]{figs/meff_smearing_wide}
}
\caption{Illustration of the
data for the nucleon $M_{\rm eff}$ versus Euclidean time $t$ and the
results of the 4-state fit to the 2-point correlation function. We
compare the data obtained with two different smearing sizes on three
ensembles. In the right panel we also show results for the
$a06m135$ ensemble. The onset of the plateau in $M_{\rm eff}$
occurs at earlier $t$ with the larger smearing size but the errors
at larger $t$ are also larger.
\label{fig:2pta09m130}}
\end{figure*}
The analysis of the zero-momentum three-point functions,
$C_\Gamma^{(3\text{pt})} (t;\tau)$,
was carried out retaining three states in their spectral decomposition:
\begin{align}
&C^\text{3pt}_{\Gamma}(t_f,t,t_i) = \nonumber\\
& |{\cal A}_0|^2 \langle 0 | \mathcal{O}_\Gamma | 0 \rangle e^{-aM_0 (t_f - t_i)} +{}\nonumber\\
& |{\cal A}_1|^2 \langle 1 | \mathcal{O}_\Gamma | 1 \rangle e^{-aM_1 (t_f - t_i)} +{}\nonumber\\
& |{\cal A}_2|^2 \langle 2 | \mathcal{O}_\Gamma | 2 \rangle e^{-aM_2 (t_f - t_i)} +{}\nonumber\\
& {\cal A}_1{\cal A}_0^* \langle 1 | \mathcal{O}_\Gamma | 0 \rangle e^{-aM_1 (t_f-t)} e^{-aM_0 (t-t_i)} +{}\nonumber\\
& {\cal A}_0{\cal A}_1^* \langle 0 | \mathcal{O}_\Gamma | 1 \rangle e^{-aM_0 (t_f-t)} e^{-aM_1 (t-t_i)} +{}\nonumber\\
& {\cal A}_2{\cal A}_0^* \langle 2 | \mathcal{O}_\Gamma | 0 \rangle e^{-aM_2 (t_f-t)} e^{-aM_0 (t-t_i)} +{}\nonumber\\
& {\cal A}_0{\cal A}_2^* \langle 0 | \mathcal{O}_\Gamma | 2 \rangle e^{-aM_0 (t_f-t)} e^{-aM_2 (t-t_i)} +{}\nonumber\\
& {\cal A}_1{\cal A}_2^* \langle 1 | \mathcal{O}_\Gamma | 2 \rangle e^{-aM_1 (t_f-t)} e^{-aM_2 (t-t_i)} +{}\nonumber\\
& {\cal A}_2{\cal A}_1^* \langle 2 | \mathcal{O}_\Gamma | 1 \rangle e^{-aM_2 (t_f-t)} e^{-aM_1 (t-t_i)} + \ldots \,,
\label{eq:3pt}
\end{align}
where the source point is at $t_i$, the operator is inserted at time
$t$, and the nucleon state is annihilated at the sink time slice
$t_f$. The source-sink separation is $\tau \equiv t_f-t_i$. The
state $|0\rangle$ represents the ground state and $|n\rangle$, with $n
> 0$, the higher states. The ${\cal A}_i$ are the amplitudes for the
creation of state $i$ with zero momentum by the nucleon interpolating
operator $\chi$. To extract the matrix elements, the amplitudes
${\cal A}_i$ and the masses $M_i$ are obtained from the 4-state fits to
the two-point functions. Note that the insertion of the nucleon at
the sink time slice $t_f$ and the insertion of the current at time $t$
are both at zero momentum. Thus, by momentum conservation, only the
zero momentum projections of the states created at the source
time slice contribute to the three-point function.
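The truncated decomposition in Eq.~\eqref{eq:3pt} is simply a double sum over the retained states. The following Python sketch (not part of the analysis code; all amplitudes, masses and matrix elements are placeholders) evaluates it with the source at $t_i=0$ and the sink at $t_f=\tau$:

```python
import numpy as np

def c3pt(t, tau, A, M, gme):
    """Sketch of the truncated spectral decomposition in Eq. (3pt),
    with the source at t_i = 0 and the sink at t_f = tau.
    A[i], M[i] are the (real) amplitudes and masses in lattice units
    taken from the 2-point fit; gme[i][j] = <i|O_Gamma|j> is symmetric
    at zero momentum. Which gme[i][j] are left free distinguishes the
    2*-, 2- and 3*-state fits described in the text."""
    val = 0.0
    for i in range(len(A)):
        for j in range(len(A)):
            val += (A[i] * A[j] * gme[i][j]
                    * np.exp(-M[i] * (tau - t) - M[j] * t))
    return val
```

For a $2^\ast$-style fit, for example, one would set every entry of `gme` to zero except `gme[0][0]` and `gme[0][1] = gme[1][0]`.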
We calculate the three-point correlation functions for a number of
values of the source-sink separation $\tau$ that are listed in
Table~\ref{tab:ens}. To extract the desired matrix element $\langle 0
| \mathcal{O}_\Gamma | 0 \rangle$, we fit the data at all $\tau$ and
$t$ simultaneously using the ansatz given in Eq.~\eqref{eq:3pt}. In
this work, we examine three kinds of fits, $2^\ast$-, 2- and
$3^\ast$-state fits. The $2^\ast$-state fit corresponds to keeping
terms of the type $\matrixe{0}{\mathcal{O}_\Gamma}{0}$ and
$\matrixe{0}{\mathcal{O}_\Gamma}{1}$. The 2-state fits also include
$\matrixe{1}{\mathcal{O}_\Gamma}{1}$, and the $3^\ast$-state fits
further add the $\matrixe{0}{\mathcal{O}_\Gamma}{2}$ and
$\matrixe{1}{\mathcal{O}_\Gamma}{2}$ type terms.\looseness-1
In the simultaneous fit to the data versus $t$ and multiple $\tau$ to
obtain $\matrixe{0}{\mathcal{O}_\Gamma}{0}$, we skip $\mathop{t_{\rm skip}}\nolimits$ points
adjacent to the source and the sink to remove points with the largest
ESC. The same $\mathop{t_{\rm skip}}\nolimits$ is used for each $\tau$. The $\mathop{t_{\rm skip}}\nolimits$ selected
is a compromise between wanting to include as many points as possible
to extract the various terms given in Eq.~\eqref{eq:3pt} with
confidence, and the errors in and stability of the full covariance
matrix used in the fit. In particular, the choice of $\mathop{t_{\rm skip}}\nolimits$ on the
$a=0.06$~fm ensembles is the smallest value for which the covariance
matrix was invertible and reasonable. These values of $\mathop{t_{\rm skip}}\nolimits$, tuned
for each ensemble, are given in Table~\ref{tab:results3bareu-d}.
To visualize the ESC, we
plot the data for the following ratio of correlation functions
\begin{equation}
R_\Gamma(t,\tau) = \frac{C_{\Gamma}^{\text{3pt}}(t, \tau) }{C^{\text{2pt}}(\tau)} \to g_\Gamma \,,
\label{eq:ratio}
\end{equation}
in Figs.~\ref{fig:gA2v3a12}--\ref{fig:gT2v3a06} and show the various
fits corresponding to the results in Table~\ref{tab:results3bareu-d}.
In the limit $t \to \infty$ and $\tau-t \to \infty$, this ratio
converges to the charge $g_\Gamma $. At short times, the ESC is
manifest in all cases. For sufficiently large $\tau$, the data should
exhibit a flat region about $\tau/2$, and the value should become
independent of $\tau$. The current data for $g_A$, $g_S$ and $g_T$,
with $\tau$ up to about 1.4~fm, do not provide convincing evidence of
this desired asymptotic behavior. To obtain
$\matrixe{0}{\mathcal{O}_\Gamma}{0}$, we use the three-state ansatz given
in Eq.~\eqref{eq:3pt}.
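The behavior of the ratio in Eq.~\eqref{eq:ratio} can be illustrated with a toy two-state model (all numbers below are invented for illustration and are not fit results): the midpoint value $R(\tau/2,\tau)$ approaches the charge only slowly as $\tau$ grows.

```python
import numpy as np

# Toy illustration of how the ratio of Eq. (eq:ratio) approaches the
# charge only for large t and tau - t. All numbers are invented.
A = np.array([1.0, 0.7])        # amplitudes A_0, A_1
M = np.array([0.45, 0.85])      # masses aM_0, aM_1
g = np.array([[1.25, -0.15],
              [-0.15, 0.90]])   # <i|O|j>, symmetric at zero momentum

def c2pt(tau):
    return float(np.sum(A**2 * np.exp(-M * tau)))

def c3pt(t, tau):
    return sum(A[i] * A[j] * g[i, j] * np.exp(-M[i]*(tau - t) - M[j]*t)
               for i in range(2) for j in range(2))

def ratio(t, tau):
    return c3pt(t, tau) / c2pt(tau)

# The midpoint value rises monotonically toward g[0,0] = 1.25.
midpoints = [ratio(tau // 2, tau) for tau in (8, 16, 32)]
```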
On the three ensembles, $a09m130$, $a06m310$ and $a06m220$, we can
compare the data with two different smearing sizes given in
Table~\ref{tab:ens}. We find a significant reduction in the ESC in the
axial and scalar charges on increasing the smearing
size. Nevertheless, the 2- and $3^\ast$-state fits and the two
calculations give consistent estimates for the ground state matrix
elements. The agreement between these four estimates has increased our
confidence in the control over ESC. The results for $g_S^{u-d}$,
obtained using $2$-state fits, have larger uncertainty as discussed in
Sec.~\ref{sec:poor}, but are again consistent except those from the
$a06m220$ ensemble.
This higher statistics study of the ESC confirms many features discussed in Ref.~\cite{Bhattacharya:2016zcn}:
\begin{itemize}
\item
The ESC is large in both $g_A^{u-d}$ and $g_S^{u-d}$, and the
convergence to the $\mathop{\tau \to \infty}\nolimits$ value is monotonic and from below.
\item
The ESC in $g_T^{u-d}$ is $\lesssim 10\%$ for $\tau > 1$~fm, and the
convergence to the $\mathop{\tau \to \infty}\nolimits$ value is also monotonic but from above.
\item
The ESC in $g_A^{u-d}$ and $g_S^{u-d}$ is reduced on increasing the
size of the smearing, but $g_T^{u-d}$ is fairly insensitive to the smearing
size.
\item
For a given number of measurements at the same $\tau$ and $t$, the
statistical precision of $g_T^{u-d}$ is slightly better than that of
$g_A^{u-d}$. The data for $g_S^{u-d}$ is noisy, especially at the
larger values of $\tau$. On many ensembles, it does not exhibit a
monotonic increase with $\tau$. To get $g_S^{u-d}$ with the same precision as
$g_A^{u-d}$ would currently require $\approx 5$ times the statistics.
\item
The data for each charge and for each source-sink separation $\tau$
becomes symmetric about $\tau/2$ with increasing statistical
precision. This is consistent with the $\cosh(t-\tau/2)$ behavior
predicted by Eq.~\eqref{eq:3pt} for each transition matrix element.
\item
The variations in the results with the fit ranges selected for fits to
the two-point functions and the number, $\mathop{t_{\rm skip}}\nolimits$, of points skipped in
the fits to the three-point data decrease with the increased
statistical precision.
\item
Estimates from the $2$- and the $3^\ast$-state fits overlap for all
fourteen measurements of $g_A^{u-d}$ and $g_T^{u-d}$.
\item
The $3^\ast$-state fits for $g_S^{u-d}$ are not stable in all cases and many
of the parameters are poorly determined. To extract our best estimates, we use
2-state fits.
\item
The largest excited-state contribution comes from the $\langle 0 |
\mathcal{O}_\Gamma | 1 \rangle$ transition matrix elements. We, therefore,
discuss a poor person's recipe to get estimates based on the $2^\ast$
fits in Sec.~\ref{sec:poor} that are useful when data at only one
value of $\tau$ are available.
\end{itemize}
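The $\cosh(t-\tau/2)$ behavior noted in the bullets above follows directly from Eq.~\eqref{eq:3pt}: with real amplitudes and a Hermitian current the two conjugate transition terms are equal, and each pair combines (with $t$ measured from the source, $t_i=0$) as

```latex
{\cal A}_1{\cal A}_0 \langle 1 | \mathcal{O}_\Gamma | 0 \rangle
\left( e^{-aM_1(\tau-t)}\, e^{-aM_0 t} + e^{-aM_0(\tau-t)}\, e^{-aM_1 t} \right)
= 2\, {\cal A}_1{\cal A}_0 \langle 1 | \mathcal{O}_\Gamma | 0 \rangle\,
  e^{-a(M_0+M_1)\tau/2}\, \cosh\!\left[ a\Delta M_1 \left( t - \tfrac{\tau}{2} \right) \right],
```

with $a\Delta M_1 = a(M_1-M_0)$, which is manifestly symmetric about $t=\tau/2$.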
Our conclusion on ESC is that with $O(10^5)$ measurements, $3^\ast$
fits, the choice of smearing parameters used and the values of $\tau$
simulated, the excited-state contamination in $g_A^{u-d}$ and
$g_T^{u-d}$ has been controlled to within a couple of percent, i.e.,
the size of the quoted errors. The errors in $g_S^{u-d}$ are at the
5\%--10\% level, and we take results from the 2-state fit as our best
estimates. In general, for calculations by other groups when data
with reasonable precision are available only at a single value of
$\tau$, we show that the $2^\ast$ fit gives a much better estimate
than the plateau value.
\subsection{A poor person's recipe and $g_S^{u-d}$}
\label{sec:poor}
Our high statistics calculations allow us to develop the following
poor person's recipe for estimating the ground state matrix element
when data are available only at a single value of $\tau$. To
illustrate this, we picked two values with $\tau \approx 1$~fm ($\tau
=\{6,7\}, \{8,10\}, \{10,12\}, \{16,18,20\}$ in lattice units for the $a\approx
0.15, 0.12, 0.09, 0.06$ ensembles) for which we have reasonably
precise data at all values of $t$ and for all three isovector
charges. We then compared the estimates of the charges from the
$2^\ast$ fit to data at these values of $\tau$ with our best estimate
from the $3^\ast$ fit (2-state for $g_S^{u-d}$) to the data at
multiple $\tau$ and $t$. Fits for all ensembles are shown in
Figs.~\ref{fig:gA2v3a12}--\ref{fig:gT2v3a06} and the results collected
in Table~\ref{tab:results3bareu-d}.
In the case of $g_A^{u-d}$ and $g_T^{u-d}$ we get overlapping results
converging to the $3^\ast$ value. This suggests that, within
our statistical precision, all the excited-state terms that behave as
$\cosh \Delta M(t-\tau/2)$ in the spectral decomposition are
well-approximated by the single term proportional to $\langle 0|{
\cal{O}} | 1 \rangle$ in the $2^\ast$ fit. Isolating this ESC is,
therefore, essential. Also the remainder, the sum of all the terms
independent of $t$, is small. This explains why the values of the
excited state matrix elements $\langle 1| {\cal{O} } | 1 \rangle$ and
$\langle 0| {\cal{O} } | 2 \rangle$, given in Table~\ref{tab:bareEME},
are poorly determined.
We further observe that in our implementation of the lattice
calculations---HYP smoothing of the lattices plus the Gaussian
smearing of the quark sources---the product $(M_1-M_0) \times \tau$ is
$ \gtrsim 1$ for $\tau \approx 1$~fm, i.e., $(M_1-M_0) \gtrsim
200$~MeV. Since this condition holds for the physical nucleon
spectrum, it is therefore reasonable to expect that the charges
extracted from a $2^\ast$ fit to data with $\tau \gtrsim 1$~fm are a
good approximation to the $\mathop{\tau \to \infty}\nolimits$ value, whereas the value at the
midpoint $t=\tau/2$ (called the plateau value) is not. This is
supported by the data for $g_A^{u-d}$ and $g_T^{u-d}$ shown in
Table~\ref{tab:results3bareu-d}; there is much better consistency
between the $3^\ast$ results and $2^\ast$ fits to data with a
single value of $\tau \gtrsim 1$~fm versus the plateau value.
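A minimal numerical sketch of such a $2^\ast$ fit at a single $\tau$ (synthetic data, placeholder amplitudes and masses): since the model is linear in the two free matrix elements, an ordinary least-squares solve suffices.

```python
import numpy as np

# Sketch of a 2*-style fit at one source-sink separation tau.
# A and M are assumed known from the 2-point fit; only <0|O|0> and
# <0|O|1> are free. All numbers are synthetic placeholders.
A = np.array([1.0, 0.7])
M = np.array([0.45, 0.85])
tau, tskip = 12, 2

def model(t, g00, g01):
    ground = A[0]**2 * g00 * np.exp(-M[0] * tau)
    trans = A[0] * A[1] * g01 * (np.exp(-M[1]*(tau - t) - M[0]*t)
                                 + np.exp(-M[0]*(tau - t) - M[1]*t))
    return ground + trans

t = np.arange(tskip, tau - tskip + 1)       # skip tskip points at each end
data = model(t, 1.25, -0.15)                # fake "measured" 3-point data
basis = np.stack([model(t, 1.0, 0.0), model(t, 0.0, 1.0)], axis=1)
g_fit, *_ = np.linalg.lstsq(basis, data, rcond=None)   # -> (g00, g01)
```

A real analysis would, of course, use the full covariance matrix of the data in the fit, as described above.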
In this work, the reason for considering such a recipe is that
estimates of $g_S^{u-d}$ have much larger statistical errors, because
of which the data at the larger values of $\tau$ do not, in all cases,
exhibit the expected monotonic convergence in $\tau$ and have large
errors. As a result, increasing $n$ in an $n$-state fit to data
with multiple values of $\tau$ does not always give a
better or more converged value. We, therefore, argue that to obtain the
best estimates of $g_S^{u-d}$ one can make judicious use of this
recipe, i.e., use $2^\ast$ fits to the data with the largest value of
$\tau$ that conforms with the expectation of monotonic convergence
from below. In our case, based on such analyses we conclude that the
2-state fits are more reliable than $3^\ast$ fits for
$g_S^{u-d}$. These fourteen values of $g_S^{u-d}$ used in the final
analysis are marked with the superscript ${}^\dag$ in
Table~\ref{tab:results3bareu-d}. The same strategy is followed for
obtaining the connected contribution to the isoscalar charges,
$g_{S}^{u+d}$, that are given in Table~\ref{tab:results3bareu+d}.
\begin{table*}
\centering
\begin{ruledtabular}
\begin{tabular}{c|ccc|ccc|ccc}
ID & $g_A^{u}$ & $g_A^{d}$ & $g_A^{u-d}$ & $g_S^{u}$ & $g_S^{d}$ & $g_S^{u-d}$ & $g_T^{u}$ & $g_T^{d}$ & $g_T^{u-d}$ \\
\hline
$a15m310 $ & 0.937(06) & $-$0.313(04) & 1.250(07) & 3.10(08) & 2.23(06) & 0.87(03) & 0.901(06) & $-$0.219(04) & 1.121(06) \\
\hline
$a12m310 $ & 0.946(15) & $-$0.328(09) & 1.274(15) & 3.65(13) & 2.69(09) & 0.96(05) & 0.859(12) & $-$0.206(07) & 1.065(13) \\
$a12m220S$ & 0.934(43) & $-$0.332(27) & 1.266(44) & 5.23(49) & 4.23(40) & 1.00(26) & 0.816(44) & $-$0.249(33) & 1.065(39) \\
$a12m220 $ & 0.947(22) & $-$0.318(13) & 1.265(21) & 4.83(35) & 3.72(29) & 1.11( 9) & 0.847(17) & $-$0.201(11) & 1.048(18) \\
$a12m220L$ & 0.942(09) & $-$0.347(08) & 1.289(13) & 4.21(29) & 3.34(26) & 0.87(04) & 0.846(11) & $-$0.203(05) & 1.069(11) \\
\hline
$a09m310 $ & 0.930(07) & $-$0.308(04) & 1.238(08) & 3.60(12) & 2.58(10) & 1.02(03) & 0.824(07) & $-$0.203(03) & 1.027(07) \\
$a09m220 $ & 0.945(12) & $-$0.334(06) & 1.279(13) & 4.46(19) & 3.41(16) & 1.05(04) & 0.799(10) & $-$0.203(05) & 1.002(10) \\
$a09m130 $ & 0.919(20) & $-$0.350(16) & 1.269(28) & 5.87(49) & 4.71(41) & 1.16(13) & 0.765(20) & $-$0.196(10) & 0.961(22) \\
$a09m130W$ & 0.935(14) & $-$0.336(08) & 1.271(15) & 5.28(17) & 4.23(14) & 1.05(06) & 0.797(12) & $-$0.203(06) & 1.000(12) \\
\hline
$a06m310 $ & 0.923(25) & $-$0.320(15) & 1.243(27) & 4.48(33) & 3.24(24) & 1.24(11) & 0.785(20) & $-$0.197(11) & 0.982(20) \\
$a06m310W$ & 0.906(22) & $-$0.310(16) & 1.216(21) & 4.06(16) & 2.94(11) & 1.12(07) & 0.784(15) & $-$0.192(08) & 0.975(16) \\
$a06m220 $ & 0.912(13) & $-$0.323(13) & 1.235(18) & 4.40(13) & 3.29(09) & 1.11(07) & 0.779(10) & $-$0.197(10) & 0.975(12) \\
$a06m220W$ & 0.917(24) & $-$0.341(15) & 1.257(24) & 4.32(21) & 3.55(18) & 0.77(09) & 0.764(21) & $-$0.198(11) & 0.962(22) \\
$a06m135 $ & 0.917(22) & $-$0.323(13) & 1.240(26) & 5.26(22) & 4.26(15) & 1.00(13) & 0.768(17) & $-$0.183(10) & 0.952(19) \\
\end{tabular}
\end{ruledtabular}
\caption{Results for the
bare connected contributions to the various charges.}
\label{tab:resultsbare}
\end{table*}
\subsection{Transition and excited state matrix elements}
\label{sec:excitedME}
The only transition matrix element that has been estimated with some
degree of confidence is $\langle 0 | \mathcal{O}_\Gamma | 1 \rangle$
as can be inferred from the results given in
Table~\ref{tab:bareEME}. Also including information from
Figs.~\ref{fig:gA2v3a12}--\ref{fig:gT2v3a06}, our qualitative
conclusions on it are as follows:
\begin{itemize}
\item
Estimates of $\langle 0 | \mathcal{O}_A | 1 \rangle$ vary between
$-0.1$ and $-0.3$ and account for the negative
curvature evident in the figures. All
ground-state estimates of $g_A^{u-d}$ converge from below.
\item
Estimates of $\langle 0 | \mathcal{O}_S | 1 \rangle$ vary between
$-0.2$ and $-0.5$ and account for the larger
negative curvature observed in the figures. All
ground-state estimates of $g_S^{u-d}$ also converge from below.
\item
Estimates of $\langle 0 | \mathcal{O}_T | 1 \rangle$ vary between 0.1
and 0.3 and account for the positive curvature evident in the
figures. The ground-state estimates of $g_T^{u-d}$ converge from
above in all cases.
\end{itemize}
Our long term goal is to improve the precision of these calculations
to understand and extract an infinite volume continuum limit value for the
transition matrix elements.
\subsection{A caveat in the analysis of the isoscalar charges $g_{A,S,T}^{u+d}$ keeping only the connected contribution}
\label{sec:PQ}
In this paper, we have analyzed only the connected contributions to
the isoscalar charges $g_{A,S,T}^{u+d}$. The disconnected
contributions are not included as they are not available for all the
ensembles, and are analyzed for different, typically smaller, values
of source-sink separation $\tau$ because of the lower
quality of the statistical signal. Since the proper way to extract
the isoscalar charges is to first add the connected and disconnected
contributions and then perform the fits using the lattice QCD spectral
decomposition to remove excited state contamination, analyzing only
the connected contribution introduces an approximation. Isoscalar
charges without a disconnected contribution can be defined in a
partially quenched theory with an additional quark with flavor
$u^\prime$. However, in this theory the Pauli exclusion principle does
not apply between the $u$ and $u^\prime$ quarks. The upshot of this is
that the spectrum of states in the partially quenched theory is
larger, for example, an intermediate $u^\prime u d$ state would be the
analogue of a $\Lambda$ baryon\footnote{We thank Stephen Sharpe for providing
a diagrammatic illustration of such additional states.}. Thus, the
spectral decomposition for this partially quenched theory and QCD is
different. The problem arises because our $n$-state fits assume the QCD
spectrum since we take the amplitudes and masses of states from the
QCD 2-point function when fitting the 3-point function using
Eq.~\eqref{eq:3pt}. One could make fits to 3-point functions leaving
all the parameters in Eq.~\eqref{eq:3pt} free, but then even 2-state
fits become poorly constrained with current data.
We assume that, in practice, the effect due to using the QCD rather
than the partially quenched QCD spectra to fit the connected
contribution versus $t$ and $\tau$ to remove ESC is smaller than the
quoted errors. First, the difference between the plateau value in our
largest $\tau$ data and the $\tau \to \infty$ value is a few percent
effect, so that any additional systematic is well within the quoted
uncertainty. Furthermore, for the tensor charges the disconnected
contribution is tiny and consistent with zero, so for the tensor
charges one can ignore this caveat. For the axial and scalar charges,
the disconnected contribution is between 10\%--20\% of the connected, so
we are neglecting possible systematic effects due to extrapolating the
connected and disconnected contributions separately.
\begin{table*}
\centering
\begin{ruledtabular}
\begin{tabular}{c|ccc|cc|ccc}
& \multicolumn{3}{c|} {Axial} & \multicolumn{2}{c|} {Scalar} & \multicolumn{3}{c} {Tensor} \\
ID & $\langle 0 | \mathcal{O}_A | 1 \rangle$ & $\langle 1 | \mathcal{O}_A | 1 \rangle$ & $\langle 0 | \mathcal{O}_A | 2 \rangle$
& $\langle 0 | \mathcal{O}_S | 1 \rangle$ & $\langle 1 | \mathcal{O}_S | 1 \rangle$
& $\langle 0 | \mathcal{O}_T | 1 \rangle$ & $\langle 1 | \mathcal{O}_T | 1 \rangle$ & $\langle 0 | \mathcal{O}_T | 2 \rangle$ \\
\hline
$a15m310 $ & $-$0.044( 37) & $-$2.06(1.3) & $-$0.08( 5) & $-$0.37( 3) & $ $ 3.6(4.6) & 0.31( 4) & $-$2.72(1.2) & $-$0.18( 7) \\
\hline
$a12m310 $ & $-$0.208( 94) & $ $1.40(2.4) & $ $0.07( 4) & $-$0.72( 9) & $ $ 8.5(10.) & 0.32( 8) & $-$0.82(2.2) & $ $0.08( 4) \\
$a12m220S$ & $-$0.119( 77) & $ $1.46(60) & $ $0.03(10) & $-$0.42(13) & $ $ 3.8(5.7) & 0.19( 8) & $ $0.13(62) & $ $0.10(11) \\
$a12m220 $ & $-$0.047( 52) & $ $0.33(76) & $-$0.08( 5) & $-$0.38(11) & $-$ 2.8(3.6) & 0.21( 5) & $ $0.07(59) & $ $0.12( 4) \\
$a12m220L$ & $-$0.084( 25) & $-$0.21(73) & $-$0.05( 3) & $-$0.38(12) & $ $ 4.6(2.7) & 0.19( 2) & $-$0.04(43) & $ $0.09( 4) \\
\hline
$a09m310 $ & $-$0.095( 20) & $-$1.45(1.9) & $ $0.11( 6) & $-$0.39( 4) & $ $ 0.7(1.5) & 0.20( 2) & $ $0.17(1.1) & $ $0.04( 6) \\
$a09m220 $ & $-$0.153( 34) & $-$0.44(98) & $ $0.07( 4) & $-$0.47( 5) & $ $ 1.4(1.0) & 0.16( 3) & $ $0.44(60) & $ $0.13( 3) \\
$a09m130 $ & $-$0.092( 26) & $ $0.65(19) & $ $0.03( 4) & $-$0.42( 7) & $ $ 2.0(1.2) & 0.17( 3) & $ $0.78(14) & $ $0.08( 4) \\
$a09m130W$ & $-$0.098( 26) & $-$0.46(94) & $ $0.06( 6) & $-$0.28( 4) & $ $ 2.2(2.2) & 0.18( 3) & $ $0.37(71) & $ $0.11( 6) \\
\hline
$a06m310 $ & $-$0.075( 41) & $ $0.18(51) & $-$0.00( 1) & $-$0.41( 6) & $ $ 1.2(1.4) & 0.14( 5) & $-$0.20(60) & $-$0.08( 9) \\
$a06m310W$ & $-$0.093(124) & $-$0.56(4.5) & $-$0.02(35) & $-$0.44( 9) & $ $10.6(15.) & 0.22(12) & $ $0.41(3.9) & $ $0.04(36) \\
$a06m220 $ & $-$0.184( 40) & $ $0.43(38) & $ $0.28(13) & $-$0.32( 4) & $-$ 0.3(1.1) & 0.09( 4) & $ $0.33(32) & $ $0.05(12) \\
$a06m220W$ & $-$0.249(127) & $ $1.2(2.2) & $ $0.32(25) & $-$0.33(14) & $ $23.4(20.) & 0.29(13) & $-$1.86(3.0) & $-$0.17(25) \\
$a06m135 $ & $-$0.137( 47) & $ $0.81(41) & $ $0.20(13) & $-$0.32( 6) & $ $ 2.4(3.1) & 0.12( 5) & $ $0.82(39) & $ $0.07(12) \\
\end{tabular}
\end{ruledtabular}
\caption{Estimates of the leading ratios $\langle 0 |
\mathcal{O}_\Gamma | 1 \rangle /\langle 0 | \mathcal{O}_\Gamma | 0
\rangle$, $\langle 1 | \mathcal{O}_\Gamma | 1 \rangle /\langle 0 |
\mathcal{O}_\Gamma | 0 \rangle$, and $\langle 0 | \mathcal{O}_\Gamma
| 2 \rangle /\langle 0 | \mathcal{O}_\Gamma | 0 \rangle$ for the
transition and excited state matrix elements in the case of the
isovector charges. For the scalar charge, $\langle 0 |
\mathcal{O}_\Gamma | 2 \rangle /\langle 0 | \mathcal{O}_\Gamma | 0
\rangle$ is not given since our final results are from the $2$-state
fit that are marked with ${}^\dag$ in
Table~\protect\ref{tab:results3bareu-d}. }
\label{tab:bareEME}
\end{table*}
\section{Renormalization of Operators}
\label{sec:renorm}
The renormalization constants $Z_A$, $Z_V$, $Z_S$ and $Z_T$ of the
isovector quark bilinear operators are calculated in the
regularization-independent symmetric momentum-subtraction (RI-sMOM)
scheme~\cite{Martinelli:1994ty,Sturm:2009kb}. We followed the
methodology given in
Refs.~\cite{Bhattacharya:2015wna,Bhattacharya:2016zcn} and refer the
reader to it for details. Results based on the six ensembles,
{\it a12m310, a12m220, a09m310, a09m220, a06m310} and {\it a06m220},
obtained in Refs.~\cite{Bhattacharya:2015wna,Bhattacharya:2016zcn}
are summarized in Table~\ref{tab:Zfinal} along with the new results
on the $a15m310$ ensemble. We briefly summarize
the method below for completeness.\looseness-1
The calculation was done as follows: starting with the lattice results
obtained in the RI-sMOM scheme at a given Euclidean four-momentum
squared $Q^2$, we first convert them to the $\overline{\text{MS}}$
scheme at the same scale (horizontal matching) using two-loop
perturbative relations expressed in terms of the coupling constant
$\alpha_{\overline{\text{MS}}}(Q^2)$~\cite{Gracey:2011fb}. This
estimate at $\mu^2=Q^2$, is then run in the continuum in the
$\overline{\text{MS}}$ scheme to $2\mathop{\rm GeV}\nolimits$ using the 3-loop anomalous
dimension relations for the scalar and tensor
bilinears~\cite{Gracey:2000am,Agashe:2014kda}. These data are labeled
by the $Q^2$ in the original RI-sMOM scheme and suffer from artifacts
due to nonperturbative effects and the breaking of the Euclidean
$O(4)$ rotational symmetry down to the hypercubic group. To get the
final estimate, we fit these data versus $Q^2$ using an ansatz
motivated by the form of possible artifacts as discussed in
Refs.~\cite{Bhattacharya:2015wna,Bhattacharya:2016zcn}.
We find that the final renormalization factors at fixed $a$
show no significant dependence on $M_\pi$. We, therefore,
average the results at different $M_\pi$ to get the mass-independent
values at each $a$.
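One simple way such a mass-independent value could be formed is an error-weighted average; the sketch below is illustrative only (the averaging procedure actually used, and the numbers, are not specified here).

```python
import numpy as np

def weighted_average(vals, errs):
    """Error-weighted average of per-ensemble Z estimates at fixed a;
    one simple way to form a mass-independent value (illustrative)."""
    vals = np.asarray(vals, dtype=float)
    w = 1.0 / np.asarray(errs, dtype=float)**2
    return np.sum(w * vals) / np.sum(w), 1.0 / np.sqrt(np.sum(w))

# e.g. two hypothetical Z_T estimates at different M_pi, same a:
zbar, dzbar = weighted_average([0.94, 0.96], [0.04, 0.04])
```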
In Table~\ref{tab:Zfinal}, we also give the results for the ratios
$Z_A/Z_V$, $Z_S/Z_V$, and $Z_T/Z_V$ that show much smaller $O(4)$
breaking, presumably because some of the systematics cancel. From the
individual data and the two ratios, $Z_\Gamma /Z_V$ and
$g_\Gamma/g_V^{u-d}$, we calculate the renormalized charges in two
ways: $Z_\Gamma \times g_\Gamma$ and $(Z_\Gamma /Z_V) \times
(g_\Gamma/g_V^{u-d})$ with $Z_V g_V^{u-d} = 1$ by virtue of the
conservation of the vector current. These two sets of renormalized
charges are given in Table~\ref{tab:resultsrenormIV}.
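As a concrete check of the first path, multiplying the bare $a09m130W$ value $g_A^{u-d,{\rm bare}}=1.271(15)$ by $Z_A=0.95(4)$ at $a=0.09$~fm, with the relative errors combined in quadrature, reproduces the tabulated $1.207(53)$. The sketch below neglects any correlation between the bare charge and $Z$:

```python
import numpy as np

def renormalize(g_bare, dg, Z, dZ):
    """Z * g with relative errors added in quadrature (correlations
    between the bare charge and Z are neglected in this sketch)."""
    val = Z * g_bare
    err = val * np.hypot(dg / g_bare, dZ / Z)
    return val, err

# a09m130W axial charge: g_A^bare = 1.271(15), Z_A(a=0.09 fm) = 0.95(4)
gA, dgA = renormalize(1.271, 0.015, 0.95, 0.04)   # ~ 1.207(53)
```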
\begin{table*}
\centering
\begin{ruledtabular}
\begin{tabular}{c|cccc|ccc}
ID & $Z_A^{u-d}$& $Z_S^{u-d}$& $Z_T^{u-d}$& $Z_V^{u-d}$& $Z_A^{u-d}/Z_V^{u-d}$ & $Z_S^{u-d}/Z_V^{u-d}$ & $Z_T^{u-d}/Z_V^{u-d}$ \\
\hline
$a=0.15$~fm & $0.96(2)$ & $0.94(4)$ & $0.95(3)$ & $0.92(2)$ & $1.05(2)$ & $1.02(5)$ & $1.02(3)$ \\
$a=0.12$~fm & $0.95(3)$ & $0.90(4)$ & $0.94(4)$ & $0.91(2)$ & $1.045(09)$ & $0.986(09)$ & $1.034(34)$ \\
$a=0.09$~fm & $0.95(4)$ & $0.88(2)$ & $0.98(4)$ & $0.92(2)$ & $1.034(11)$ & $0.955(49)$ & $1.063(29)$ \\
$a=0.06$~fm & $0.97(3)$ & $0.86(3)$ & $1.04(3)$ & $0.95(1)$ & $1.025(09)$ & $0.908(40)$ & $1.100(25)$ \\
\end{tabular}
\end{ruledtabular}
\caption{The final mass-independent isovector
renormalization constants $Z_A^{u-d}$, $Z_S^{u-d}$, $Z_T^{u-d}$,
$Z_V^{u-d}$ and the ratios $Z_A^{u-d}/Z_V^{u-d}$,
$Z_S^{u-d}/Z_V^{u-d}$ and $Z_T^{u-d}/Z_V^{u-d}$ in the
$\overline{\text{MS}}$ scheme at 2~GeV at the four values of the
lattice spacing used in our analysis. Results for the $a=0.12$, $a=0.09$ and $a=0.06$~fm ensembles are reproduced from
Ref.~\cite{Bhattacharya:2016zcn}.}
\label{tab:Zfinal}
\end{table*}
\begin{table*}
\centering
\begin{ruledtabular}
\begin{tabular}{c|ccc|ccc|cc}
& \multicolumn{3}{c|} {$g_\Gamma^{u-d,{\rm bare}}/g_V^{u-d,{\rm bare}}\times Z_\Gamma^{u-d}/Z_V^{u-d}$} & \multicolumn{3}{c|} {$g_\Gamma^{u-d,{\rm bare}} \times Z_\Gamma^{u-d}$} & \multicolumn{2}{c} {} \\
ID & $g_A^{u-d}$ & $g_S^{u-d}$ & $g_T^{u-d}$ & $g_A^{u-d}$ & $g_S^{u-d}$ & $g_T^{u-d}$ & $g_V^{u-d,{\rm bare}}$ & $Z_V g_V^{u-d,{\rm bare}}$ \\
\hline
$a15m310 $ & 1.228(25) & 0.828(049) & 1.069(32) & 1.200(26) & 0.816(044) & 1.065(34) & 1.069(04) & 0.983(22) \\
\hline
$a12m310 $ & 1.251(19) & 0.891(045) & 1.035(37) & 1.210(41) & 0.865(058) & 1.001(44) & 1.064(05) & 0.968(22) \\
$a12m220S$ & 1.224(44) & 0.916(233) & 1.019(53) & 1.203(56) & 0.903(237) & 1.001(56) & 1.081(18) & 0.983(27) \\
$a12m220 $ & 1.234(25) & 1.024(086) & 1.011(38) & 1.202(43) & 1.001(096) & 0.985(45) & 1.071(09) & 0.975(23) \\
$a12m220L$ & 1.262(17) & 0.807(039) & 1.035(36) & 1.225(41) & 0.786(052) & 1.005(44) & 1.067(04) & 0.971(21) \\
\hline
$a09m310 $ & 1.235(15) & 0.936(054) & 1.054(30) & 1.176(50) & 0.893(031) & 1.007(42) & 1.045(03) & 0.962(20) \\
$a09m220 $ & 1.260(19) & 0.958(063) & 1.015(30) & 1.215(53) & 0.926(044) & 0.982(41) & 1.053(03) & 0.969(21) \\
$a09m130 $ & 1.245(32) & 1.050(128) & 0.969(35) & 1.206(57) & 1.019(116) & 0.942(44) & 1.052(08) & 0.969(22) \\
$a09m130W$ & 1.249(21) & 0.952(074) & 1.011(30) & 1.207(53) & 0.923(058) & 0.980(44) & 1.052(06) & 0.968(22) \\
\hline
$a06m310 $ & 1.233(30) & 1.090(104) & 1.046(33) & 1.205(46) & 1.065(100) & 1.021(36) & 1.043(06) & 0.991(12) \\
$a06m310W$ & 1.205(24) & 0.984(074) & 1.037(30) & 1.180(42) & 0.964(071) & 1.014(34) & 1.035(11) & 0.983(15) \\
$a06m220 $ & 1.206(21) & 0.959(071) & 1.022(27) & 1.198(41) & 0.953(066) & 1.014(32) & 1.050(07) & 0.997(12) \\
$a06m220W$ & 1.241(26) & 0.672(082) & 1.018(34) & 1.220(45) & 0.661(080) & 1.000(37) & 1.039(09) & 0.987(13) \\
$a06m135 $ & 1.220(27) & 0.876(120) & 1.005(30) & 1.203(45) & 0.864(118) & 0.990(35) & 1.042(10) & 0.990(14) \\
\hline
11-point fit & 1.218(25) & 1.022(80) & 0.989(32) & 1.197(42) & 1.010(74) & 0.966(37) & & \\
$\chi^2/$d.o.f. & 0.21 & 1.43 & 0.10 & 0.05 & 1.12 & 0.20 & & \\
10-point fit & 1.215(31) & 0.914(108) & 1.000(41) & 1.200(56) & 0.933(108) & 0.994(48) & & \\
$\chi^2/$d.o.f. & 0.24 & 1.30 & 0.09 & 0.06 & 1.15 & 0.09 & & \\
$10^\ast$-point fit & 1.218(25) & 1.021(80) & 0.989(32) & 1.197(43) & 1.009(74) & 0.966(37) & & \\
$\chi^2/$d.o.f. & 0.23 & 1.67 & 0.11 & 0.06 & 1.31 & 0.17 & & \\
$8$-point fit & 1.245(42) & 1.214(130) & 0.977(67) & 1.172(94) & 1.123(105) & 0.899(86) & & \\
$\chi^2/$d.o.f. & 0.20 & 1.14 & 0.13 & 0.06 & 0.87 & 0.13 & & \\
\end{tabular}
\end{ruledtabular}
\caption{Results for the renormalized isovector charges calculated in
two ways, $g_\Gamma^{u-d,{\rm bare}}/g_V^{u-d,{\rm bare}} \times
Z_\Gamma^{u-d}/Z_V^{u-d}$ and $g_\Gamma^{u-d,{\rm bare}} \times
Z_\Gamma^{u-d}$. The errors are obtained by adding in quadrature the
errors in the bare matrix elements and in the renormalization constants
given in Table~\protect\ref{tab:Zfinal}. The unrenormalized charges
are given in Table~\protect\ref{tab:resultsbare}. In the last two
columns, we also give the results for the bare, $g_V^{u-d,{\rm
bare}}$ and the renormalized, $Z_V g_V^{u-d,{\rm bare}}$, vector
charge. The latter should be unity as it is conserved. The
deviations are found to be up to 4\%. Results of the four CCFV
fits (11-point, 10-point, $10^\ast$-point, and the $8$-point
defined in the text) are given in the bottom eight rows. }
\label{tab:resultsrenormIV}
\end{table*}
We are also interested in extracting flavor diagonal charges which can
be written as a sum over isovector ($u-d$) and isoscalar ($u+d$)
combinations. These combinations renormalize with the corresponding
isovector, $Z^{\rm isovector}$, and isoscalar, $Z^{\rm isoscalar}$,
factors that are, in general,
different~\cite{Bhattacharya:2005rb}.\footnote{In general, one
considers the singlet and non-singlet combinations in a $N_f$-flavor
theory. In this paper, we are only analyzing the insertions on $u$
and $d$ quarks that are taken to be degenerate, so it is convenient
to use the 2-flavor labels, isosinglet ($u+d$) and isovector
($u-d$).} Only the isovector renormalization constants are given in
Table~\ref{tab:Zfinal}.
In perturbation theory, the difference between $Z^{\rm isovector}$ and
$Z^{\rm isoscalar}$ appears at two loops, and is therefore expected to
be small. Explicit calculations in
Refs.~\cite{Alexandrou:2017qyt,Alexandrou:2017oeh,Green:2017keo} show
that $Z^{\rm isosinglet} \approx Z^{\rm isovector}$ for the axial and
tensor charges. Since the two agree to within a percent, we will
assume $Z_{A,T}^{\rm isoscalar} = Z_{A,T}^{\rm isovector}$ in this
work, and renormalize both isovector ($u-d$) and isoscalar ($u+d$)
combinations of charges using $ Z^{\rm isovector}$. In the case of
the tensor charges, this approximation is even less significant since
the contribution of the disconnected diagrams to the charges is
consistent with zero within errors~\cite{Bhattacharya:2015wna}.
In the case of the scalar charge, the difference between $Z^{\rm
isosinglet}$ and $Z^{\rm isovector}$ can be large due to the
explicit breaking of the chiral symmetry in the Wilson-clover action
which induces mixing between flavors. This has not been fully
analyzed for our clover-on-HISQ formulation, so only the bare results
for $g_S^{u-d}$ and $g_S^{u+d}$, and the renormalized results for
$g_S^{u-d}$ are presented in this work.
\begin{table*}
\centering
\begin{ruledtabular}
\begin{tabular}{c|cc|ccc}
ID & $g_A^{u}$ & $g_A^{d}$ & $g_T^{u}$ & $g_T^{d}$ & $g_T^{u+d}$ \\
\hline
$a15m310 $ & 0.920(19) & $-$0.307(07) & 0.860(26) & $-$0.209(07) & 0.649(21) \\
\hline
$a12m310 $ & 0.929(17) & $-$0.322(09) & 0.835(30) & $-$0.200(10) & 0.635(26) \\
$a12m220S$ & 0.904(42) & $-$0.321(27) & 0.781(51) & $-$0.238(33) & 0.543(68) \\
$a12m220 $ & 0.924(24) & $-$0.311(14) & 0.818(32) & $-$0.194(12) & 0.624(30) \\
$a12m220L$ & 0.922(12) & $-$0.340(09) & 0.819(29) & $-$0.216(08) & 0.600(26) \\
\hline
$a09m310 $ & 0.928(12) & $-$0.308(05) & 0.845(24) & $-$0.208(07) & 0.637(19) \\
$a09m220 $ & 0.931(15) & $-$0.329(08) & 0.810(24) & $-$0.205(08) & 0.604(20) \\
$a09m130 $ & 0.901(23) & $-$0.344(17) & 0.772(29) & $-$0.198(12) & 0.574(28) \\
$a09m130W$ & 0.919(17) & $-$0.330(09) & 0.806(25) & $-$0.205(09) & 0.601(23) \\
\hline
$a06m310 $ & 0.916(27) & $-$0.317(16) & 0.836(29) & $-$0.210(13) & 0.626(31) \\
$a06m310W$ & 0.897(24) & $-$0.307(17) & 0.833(26) & $-$0.204(10) & 0.629(25) \\
$a06m220 $ & 0.890(16) & $-$0.316(13) & 0.816(22) & $-$0.206(11) & 0.609(21) \\
$a06m220W$ & 0.905(25) & $-$0.336(16) & 0.809(30) & $-$0.209(12) & 0.600(30) \\
$a06m135 $ & 0.902(23) & $-$0.318(13) & 0.811(26) & $-$0.193(11) & 0.618(26) \\
\hline
11-point fit & 0.895(21) & $-$0.320(12) & 0.790(27) & $-$0.198(10) & 0.590(25) \\
$\chi^2/$d.o.f. & 0.29 & 0.52 & 0.20 & 0.67 & 0.38 \\
10-point fit & 0.890(27) & $-$0.324(17) & 0.810(36) & $-$0.201(16) & 0.608(37) \\
$\chi^2/$d.o.f. & 0.33 & 0.59 & 0.12 & 0.77 & 0.37 \\
$10^\ast$-point fit & 0.895(21) & $-$0.319(12) & 0.790(27) & $-$0.197(10) & 0.592(25) \\
$\chi^2/$d.o.f. & 0.34 & 0.57 & 0.09 & 0.57 & 0.16 \\
\end{tabular}
\end{ruledtabular}
\caption{Results for the renormalized connected part of the flavor
diagonal charges, $g_\Gamma^{\rm bare}/g_V^{{u-d},{\rm bare}} \times
Z_\Gamma^{u-d}/Z_V^{u-d}$. The final errors are obtained by adding
in quadrature the errors in estimates of the ratios $g_\Gamma^{\rm
bare}/g_V^{{u-d},{\rm bare}}$ to the errors in the ratios of the
renormalization constants, $Z_\Gamma^{u-d}/Z_V^{u-d}$ given in
Table~\protect\ref{tab:Zfinal}. Results for $g_T^{u+d}$ are
presented assuming that the disconnected contributions, shown to be
tiny in Ref.~\protect\cite{Bhattacharya:2015wna}, can be
neglected. Results of three CCFV fits (the 11-point, the 10-point, and
the $10^\ast$-point defined in the text) are given
in the bottom six rows. }
\label{tab:resultsrenormFD}
\end{table*}
\section{Continuum, chiral and finite volume fit for the charges $g_A$, $g_S$, $g_T$}
\label{sec:results}
To obtain estimates of the renormalized charges given in
Tables~\ref{tab:resultsrenormIV} and~\ref{tab:resultsrenormFD} in the
continuum limit ($a\rightarrow 0$), at the physical pion mass ($M_{\pi^0}
= 135$~MeV) and in the infinite volume limit ($L \rightarrow \infty$), we
need an appropriate physics motivated fit ansatz. To parametrize the
dependence on $M_\pi$ and the finite volume parameter $M_\pi L$, we
resort to results from finite volume chiral perturbation theory
($\chi$PT)~\cite{Bernard:1992qa,Bernard:1995dp,Bernard:2006gx,Bernard:2006te,Khan:2006de,Colangelo:2010ba,deVries:2010ah}.
For the lattice discretization effects, the corrections start with the
term linear in $a$ since the action and the operators in our
clover-on-HISQ formalism are not fully $O(a)$ improved. Keeping just
the leading correction term in each, plus possibly the chiral
logarithm term discussed below, our approach is to make a simultaneous
fit in the three variables to the data from the eleven ensembles. We call
these the CCFV fits. For the isovector charges and the flavor diagonal
axial and tensor charges, the ansatz is
\begin{align}
g_{A,S,T}^{u-d} (a,M_\pi,L) &= c_1 + c_2a + c_3 M_\pi^2 + c_3^{\rm log} M_\pi^2 \ln \left(\frac{M_\pi}{M_\rho}\right)^2 \nonumber \\
&+ c_4 M_\pi^2 \frac{e^{-M_\pi L}}{X(M_\pi L)} \,,
\label{eq:extrapgAST}
\end{align}
where $M_\rho$ in the chiral logarithm is the renormalization scale.
The coefficients, $c_3^{\rm log}$, are known in $\chi$PT, and with
lattice QCD data at multiple values of $M_\pi$ and at fixed $a$ and
$M_\pi L$ one can compare them against values obtained from the
fits. As shown in Fig.~\ref{fig:conUmD-extrap11}, the $M_\pi$
dependence of all three isovector charges is mild and adequately fit
by the lowest order term. Since the coefficients $c_3^{\rm log}$ predicted by
$\chi$PT are large, including them requires also including still higher
order terms in $M_\pi$ to fit the mild dependence. In our case, with
data at just three values of $M_\pi$ and the observed mild dependence
between 320 and 135~MeV, including more than one free parameter is not
justified by the Akaike Information Criterion (AIC), which requires a
reduction of $\chi^2$ by two units for each extra parameter. In short,
we cannot test the predictions of $\chi$PT. For example, in a fit
including the chiral log term and a $M_\pi^3$ term, the two additional
terms essentially negate each other over the range of the data, i.e.,
between 320--135~MeV. If the large $\chi$PT value for the coefficient $c_3^{\rm log}$
of the chiral log is used as an input, then the fit pushes the
coefficient of the $M_\pi^3$ term to also be large to keep the net
variation within the interval of the data small. Furthermore, as can be seen from
Table~\ref{tab:chiralfit}, even the coefficients of the leading order
terms are poorly determined for all three charges. This is because the
variations between points and the number of points are both small. For
these reasons, including the chiral logarithm term to analyze the
current data does not add predictive capability, nor does it provide a
credible estimate of the uncertainty due to the fit ansatz, nor does it
test the $\chi$PT value of the coefficient $c_3^{\rm log}$. Consequently,
the purpose of our chiral fit reduces to getting the value at
$M_\pi=135$~MeV. We emphasize that this is obtained reliably with
just the leading chiral correction since the fits are anchored by the
data from the two physical pion mass ensembles.
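The AIC rule invoked above can be stated compactly: with ${\rm AIC} = \chi^2 + 2k$ for $k$ free parameters, an extra parameter is justified only if it lowers $\chi^2$ by more than two units, i.e., only if it lowers the AIC. A minimal sketch (the $\chi^2$ values below are illustrative, not from our fits):

```python
# Sketch of the Akaike Information Criterion rule: an extra fit
# parameter is justified only if it lowers AIC = chi^2 + 2*k, where k
# is the number of free parameters, i.e. only if it reduces chi^2 by
# more than two units. The chi^2 values used in examples are illustrative.

def aic(chi2, n_params):
    """Akaike Information Criterion for a fit with n_params free parameters."""
    return chi2 + 2 * n_params

def extra_param_justified(chi2_small, k_small, chi2_big, k_big):
    """True if the larger model (k_big > k_small) has strictly lower AIC."""
    return aic(chi2_big, k_big) < aic(chi2_small, k_small)
```
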
The finite-volume correction, in general, consists of a number of
terms, each with different powers of $M_\pi L$ in the denominator and
depending on several low-energy constants (LEC)~\cite{Khan:2006de}. We
have symbolically represented these powers of $M_\pi L$ by $X(M_\pi
L)$. Since the variation of this factor is small
compared to the exponential over the range of $M_\pi L$ investigated,
we set $X(M_\pi L) = {\rm constant}$ and retain only the
appropriate overall factor $M_\pi^2 e^{-M_\pi L}$, common to all the
terms in the finite-volume expansion, in our fit ansatz. The {\it a posteriori}
justification for this simplification is that no significant finite
volume dependence is observed in the data as shown in Fig.~\ref{fig:conUmD-extrap11}.
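Since the ansatz of Eq.~\eqref{eq:extrapgAST} with $c_3^{\rm log}=0$ is linear in the coefficients $c_1$--$c_4$, the simultaneous fit reduces to weighted linear least squares. The following sketch shows the structure of such a CCFV fit; it is illustrative only (the normal-equations solver and all inputs are placeholders, not our actual analysis code):

```python
# Illustrative CCFV fit of g(a, Mpi, MpiL) = c1 + c2*a + c3*Mpi^2
#                                          + c4*Mpi^2*exp(-Mpi*L),
# i.e. Eq. (extrapgAST) with c3log = 0. The model is linear in c1..c4,
# so a weighted least-squares fit reduces to solving the 4x4 normal
# equations N c = v with N = B^T W B and v = B^T W y.
import math

def solve4(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def ccfv_fit(data):
    """data: list of (a_fm, Mpi_GeV, MpiL, g, sigma). Returns [c1, c2, c3, c4]."""
    n = 4
    N = [[0.0] * n for _ in range(n)]
    v = [0.0] * n
    for a, mpi, mpiL, g, sig in data:
        B = [1.0, a, mpi ** 2, mpi ** 2 * math.exp(-mpiL)]  # basis functions
        w = 1.0 / sig ** 2                                  # inverse-variance weight
        for i in range(n):
            v[i] += w * B[i] * g
            for j in range(n):
                N[i][j] += w * B[i] * B[j]
    return solve4(N, v)
```

As a check, fitting exact synthetic data generated from known coefficients recovers them to machine precision.
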
We have carried out four fits with different selections of the
fourteen data points and for the two constructions of the renormalized
charges. Starting with the 14 calculations, we first construct a
weighted average of the pairs of points from the three $a09m130$,
$a06m310$ and $a06m220$ ensembles. For errors, we adopt the Schmelling
procedure~\cite{Schmelling:1994pz} assuming maximum correlation
between the two values from each ensemble. This gives us eleven data points
to fit.
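The maximally correlated average used above can be sketched as follows. This is a minimal illustration of the conservative 100\% correlation limit, in which the error of the weighted mean does not shrink below the individual errors; the full Schmelling procedure~\cite{Schmelling:1994pz} is more general:

```python
# Minimal sketch of averaging a pair of determinations from the same
# ensemble, assuming maximum (100%) correlation between the two values.
# The inverse-variance-weighted mean keeps its usual form, while with
# correlation coefficient rho = 1 the variance of the average is
#   sigma_avg^2 = (sum_i w_i * s_i)^2 / (sum_i w_i)^2,  w_i = 1/s_i^2,
# so the error does not decrease by 1/sqrt(2) as it would for
# independent measurements.

def correlated_average(x1, s1, x2, s2):
    """Weighted mean and its error for two fully correlated values."""
    w1, w2 = 1.0 / s1 ** 2, 1.0 / s2 ** 2
    mean = (w1 * x1 + w2 * x2) / (w1 + w2)
    err = (w1 * s1 + w2 * s2) / (w1 + w2)  # rho = 1: no sqrt(2) reduction
    return mean, err
```

For two equal errors the combined error equals the individual one, and in general it lies between the smaller and larger input errors.
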
\begin{itemize}
\item
The fit with all the data points is called the 11-point fit. This is
used to obtain the final results.
\item
Remove the coarsest $a15m310$ ensemble point from the analysis. This
is called the 10-point fit.
\item
Remove the $a12m220S$ point as it has
the largest errors and the smallest volume. This is called the
$10^\ast$-point fit.
\item
To compare results for $g_A^{u-d}$ with those from the CalLat collaboration~\cite{Chang:2018uxx}
(see Sec.~\ref{sec:comparison}), we perform an $8$-point fit that
neglects the data from the three $a\approx 0.06$~fm ensembles.
\end{itemize}
The results from these four fits and for the two ways of constructing
the renormalized isovector charges are given in
Table~\ref{tab:resultsrenormIV}. We find that the six estimates for
$g_A^{u-d}$ and $g_T^{u-d}$ from the 11-point, 10-point and
$10^\ast$-point fits with the two ways of renormalization overlap
within $1\sigma$. As discussed in Sec.~\ref{sec:comparison},
for $g_A^{u-d}$, the $a15m310$ point plays an
important role in the comparison with the CalLat results.
For the final results, we use the 11-point fit to the isovector charges
renormalized using $g_\Gamma^{\rm bare}/g_V^{\rm bare} \times
Z_\Gamma/Z_V$ as some of the systematics cancel in the double
ratio. These fits are shown in Fig.~\ref{fig:conUmD-extrap11}.
The lattice artifact that has the most impact on the final values is
the dependence of $g_A^{u-d}$ and $g_S^{u-d}$ on the lattice spacing
$a$. As shown in Fig.~\ref{fig:conUmD-extrap11}, in these cases the
CCFV fit coincides with the fit versus just $a$ (pink and grey bands
overlap in such cases). On the other hand, one can see from the middle
panels, showing the variation versus $M_\pi^2$, that had we only
analyzed the data versus $M_\pi^2$ (grey band), we would have gotten a
higher value for $g_A^{u-d}$ and a lower one for $g_S^{u-d}$, and both
with smaller errors. Our conclusion is that, even when the observed
variation is small, it is essential to perform a simultaneous CCFV fit
to remove the correlated contributions from the three lattice
artifacts.
The data for $g_T^{u-d}$ continue to show very little sensitivity to
the three variables and the extrapolated value is
stable~\cite{Bhattacharya:2016zcn}. A large part of the error in the
individual data points, and thus in the extrapolated value, is now due
to the poorly behaved two-loop perturbation theory used to match the
RI-sMOM to the $\overline{\rm MS}$ scheme in the calculation of the
renormalization constant $Z_T$. Further precision in $g_T^{u-d}$,
therefore, requires developing more precise methods for calculating the
renormalization constants.
Overall, compared to the results presented in
Ref.~\cite{Bhattacharya:2016zcn}, our confidence in the CCFV fits for
all three charges has improved with the new higher precision data.
The final results for the isovector charges in the $\overline{\rm MS}$
scheme at 2~GeV from the 11-point fit to data given in
Table~\ref{tab:resultsrenormIV} and renormalized using $g_\Gamma^{\rm
bare}/g_V^{\rm bare} \times Z_\Gamma/Z_V$ are:
\begin{align}
g_A^{u-d} &= 1.218(25) \,, \nonumber \\
g_S^{u-d} &= 1.022(80) \,, \nonumber \\
g_T^{u-d} &= 0.989(32) \,.
\label{eq:gFinal}
\end{align}
These results for $g_S^{u-d}$ and $g_T^{u-d}$ meet the target of
ten percent uncertainty needed to leverage precision neutron decay
measurements of the helicity flip parameters $b$ and $b_\nu$ at the
$10^{-3}$ level to constrain novel scalar and tensor couplings,
$\epsilon_S$ and $\epsilon_T$, arising at the TeV
scale~\cite{Bhattacharya:2011qm,Bhattacharya:2016zcn}.
\begin{figure*}[tb]
\subfigure{
\includegraphics[width=0.32\linewidth]{fig2/gAovergV_a_lo_fv_hlabel}
\includegraphics[width=0.32\linewidth]{fig2/gAovergV_mpisq_lo_fv_nolabel}
\includegraphics[width=0.32\linewidth]{fig2/gAovergV_mpiL_lo_fv_nolabel}
}
\subfigure{
\includegraphics[width=0.32\linewidth]{fig2/gSovergV_a_lo_fv_hlabel}
\includegraphics[width=0.32\linewidth]{fig2/gSovergV_mpisq_lo_fv_nolabel}
\includegraphics[width=0.32\linewidth]{fig2/gSovergV_mpiL_lo_fv_nolabel}
}
\subfigure{
\includegraphics[width=0.32\linewidth]{fig2/gTovergV_a_lo_fv_hlabel}
\includegraphics[width=0.32\linewidth]{fig2/gTovergV_mpisq_lo_fv_nolabel}
\includegraphics[width=0.32\linewidth]{fig2/gTovergV_mpiL_lo_fv_nolabel}
}
\caption{The 11-point CCFV
fit using Eq.~\protect\eqref{eq:extrapgAST} to the data for the
renormalized isovector charges $g_A^{u-d}$, $g_S^{u-d}$, and
$g_T^{u-d}$ in the $\overline{{\rm MS}}$ scheme at 2~GeV. The
result of the simultaneous extrapolation to the physical point
defined by $a\rightarrow 0$, $M_\pi \rightarrow M_{\pi^0}^{{\rm
phys}}=135$~MeV and $M_\pi L \rightarrow \infty$ are marked by a red
star. The pink error band in each panel is the result of the
simultaneous fit but shown as a function of a single variable. The
overlay in the left (middle) panels with the dashed line within the
grey band is the fit to the data versus $a$ ($M_\pi^2$), i.e.,
neglecting dependence on the other two variables. The symbols used to plot the data are
defined in the left panels.}
\label{fig:conUmD-extrap11}
\end{figure*}
\begin{figure*}[tb]
\subfigure{
\includegraphics[width=0.32\linewidth]{fig3/gAovergV_u_a_lo_fv_hlabel}
\includegraphics[width=0.32\linewidth]{fig3/gAovergV_u_mpisq_lo_fv_nolabel}
\includegraphics[width=0.32\linewidth]{fig3/gAovergV_u_mpiL_lo_fv_nolabel}
}
\hspace{0.04\linewidth}
\subfigure{
\includegraphics[width=0.32\linewidth]{fig3/gAovergV_d_a_lo_fv_hlabel}
\includegraphics[width=0.32\linewidth]{fig3/gAovergV_d_mpisq_lo_fv_nolabel}
\includegraphics[width=0.32\linewidth]{fig3/gAovergV_d_mpiL_lo_fv_nolabel}
}
\caption{The 11-point
CCFV fit using Eq.~\protect\eqref{eq:extrapgAST} to the connected
data for the flavor diagonal charges $g_A^{u}$ and $g_A^{d}$
renormalized in the $\overline{{\rm MS}}$ scheme at 2~GeV. Only the
data for $g_A^u$ show a notable dependence on the lattice spacing
$a$. The rest is the same as in
Fig.~\protect\ref{fig:conUmD-extrap11}.\looseness-1
\label{fig:extrap-gA-diagonal}}
\end{figure*}
\begin{figure*}[tb]
\subfigure{
\includegraphics[width=0.32\linewidth]{fig4/gTovergV_u_a_lo_fv_hlabel}
\includegraphics[width=0.32\linewidth]{fig4/gTovergV_u_mpisq_lo_fv_nolabel}
\includegraphics[width=0.32\linewidth]{fig4/gTovergV_u_mpiL_lo_fv_nolabel}
}
\hspace{0.04\linewidth}
\subfigure{
\includegraphics[width=0.32\linewidth]{fig4/gTovergV_d_a_lo_fv_hlabel}
\includegraphics[width=0.32\linewidth]{fig4/gTovergV_d_mpisq_lo_fv_nolabel}
\includegraphics[width=0.32\linewidth]{fig4/gTovergV_d_mpiL_lo_fv_nolabel}
}
\caption{The 11-point
CCFV fit using Eq.~\protect\eqref{eq:extrapgAST} to the connected
data for the flavor diagonal charges $g_T^{u}$ and $g_T^{d}$
renormalized in the $\overline{{\rm MS}}$ scheme at 2~GeV. Only the
data for $g_T^u$ show a notable dependence on $M_\pi$. The rest is the same as in
Fig.~\protect\ref{fig:conUmD-extrap11}.\looseness-1
\label{fig:extrap-gT-diagonal}}
\end{figure*}
Results of the 11-point, 10-point, and $10^\ast$-point
fits to the connected contributions to the flavor-diagonal charges
$g_{A,T}^{u,d}$, renormalized using the corresponding isovector renormalization factors $Z_{A,T}^{\rm isovector}$,
are given in Table~\ref{tab:resultsrenormFD}.
Their behavior
versus the lattice spacing and the pion mass is shown in
Figs.~\ref{fig:extrap-gA-diagonal}
and~\ref{fig:extrap-gT-diagonal} using the 11-point fits, again with
$c_3^{\rm log}=0$ in the ansatz given in Eq.~\eqref{eq:extrapgAST}.
The data exhibit the following features:
\begin{itemize}
\item
The noticeable variation in the axial charges is in $g_A^u$ versus $a$
which carries over to $g_A^{u-d}$.
\item
The flavor diagonal charges $g_T^{u,d}$ show little variation except for the
small dependence of $g_T^u$ on $M_\pi^2$ which carries over to $g_T^{u-d}$.
\end{itemize}
Our final results from the 11-point fits for the connected parts of
the flavor diagonal charges for the proton are \looseness-1
\begin{align}
g_A^{u,{\rm conn}} &= 0.895(21) \qquad\ g_A^{d,{\rm conn}} = -0.320(12) \,, \nonumber \\
g_T^{u,{\rm conn}} &= 0.790(27) \qquad\ g_T^{d,{\rm conn}} = -0.198(10) \,.
\label{eq:FDconnected}
\end{align}
Estimates for the neutron are given by the $u \leftrightarrow d$
interchange.
We again remind the reader that the disconnected contributions for the
flavor diagonal axial charges are $O(15\%)$ and will be discussed elsewhere. The
disconnected contribution to $g_T^{u+d}$ is small (comparable to the
statistical errors) and $Z_T^{u-d} \approx Z_T^{u+d}$. Thus, the
results for $g_T^{u,d}$ and $g_T^{u+d}$ are a good approximation to
the total contribution. The new estimates given here supersede
the values presented in
Refs.~\cite{Bhattacharya:2015wna,Bhattacharya:2015esa}.
\begin{table*}[tb]
\begin{center}
\renewcommand{\arraystretch}{1.2}
\begin{ruledtabular}
\begin{tabular}{c|c|c|c|c|c}
& $c_1$ & $c_2$ & $c_3$ & $c_4$ & $g_\Gamma$ \\
& & fm${}^{-1}$ & GeV${}^{-2}$ & GeV${}^{-2}$ & \\
\hline
$g_A^{u-d}$ & 1.21(3) & 0.41(26) & 0.18(33) &$-$32(19) & 1.218(25) \\
\hline
$g_S^{u-d}$ & 1.02(1) &$-$1.57(75) & 0.22(1.12) & 24(54) & 1.022(80) \\
\hline
$g_T^{u-d}$ & 0.98(3) & 0.11(38) & 0.55(45) & 5(29) & 0.989(32) \\
\end{tabular}
\end{ruledtabular}
\caption{Values of the fit parameters in the
CCFV ansatz defined in Eq.~\eqref{eq:extrapgAST} with $c_3^{\rm
log}=0$. The results are given for the 11-point fit used to
extract the three isovector charges. }
\label{tab:chiralfit}
\end{center}
\end{table*}
\section{Assessing additional error due to CCFV fit ansatz}
\label{sec:errors}
In this section we reassess the estimation of errors from various
sources and provide an additional systematic uncertainty in the
isovector charges due to using a CCFV ansatz with only the leading
order correction terms. We first briefly review the systematics that
are already addressed in our analysis leading to the results in Eq.~\eqref{eq:gFinal}:
\begin{itemize}
\item
Statistical and excited-state contamination (SESC): Errors from these
two sources are jointly estimated in the 2- and $3^\ast$ state
fits. The 2- and $3^\ast$ state fits for $g_A^{u-d}$ and $g_T^{u-d}$
give overlapping results and in most cases the error estimates from
the quoted $3^\ast$-state fits are larger. For $g_S^{u-d}$, we compare
the 2- and $2^\ast$-state fits. Based on these comparisons, an
estimate of the magnitude of possible residual ESC is given in the
first row of Table~\ref{tab:errors} for all three charges.
\item
Uncertainty in the determination of the renormalization constants
$Z_\Gamma$: The results for the $Z$'s and an estimate of the possible
uncertainty presented in Ref.~\cite{Bhattacharya:2016zcn} have not
changed. These are reproduced in Tables~\ref{tab:Zfinal}
and~\ref{tab:errors}, respectively. With the increase in statistical
precision of the bare charges, the uncertainty in the $Z_\Gamma$ is
now a significant fraction of the total uncertainty in
$g_{A,S,T}^{u-d}$.
\item
Residual uncertainties due to the three systematics: the extrapolation to
$a\to 0$, the extrapolation to $M_\pi L \to \infty$, and the variation with $M_\pi$.
Estimates of errors in the simultaneous CCFV fit using the lowest
order corrections (see Eq.~\eqref{eq:extrapgAST}) are given in rows
3--5 in Table~\ref{tab:errors}. These are, in most cases, judged to
be small because the variation with respect to the three variables,
displayed in Fig.~\ref{fig:conUmD-extrap11}, is small. With increased
statistics and the second physical mass ensemble, $a06m135$, our
confidence in the CCFV fits and the error estimates obtained with
keeping only the lowest-order corrections in each variable has
increased significantly. The exception is the dependence of
$g_S^{u-d}$ on $a$ as highlighted by the dependence of the
extrapolated value on whether the $a15m310$ point is included
(11-point fit) or excluded (10-point fit).
\end{itemize}
Adding the guesstimates for these five systematic uncertainties, given
in rows 1--5, in quadrature leads to an error estimate given
in the sixth row in Table~\ref{tab:errors}. This is
consistent with the errors quoted in Eq.~\eqref{eq:gFinal} and
reproduced in the seventh row of Table~\ref{tab:errors}. We, therefore,
regard the fits and the error estimates given in Eq.~\eqref{eq:gFinal} as
adequately capturing the uncertainty due to the five systematics discussed above.
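As a numerical consistency check, the quadrature combination in the sixth row of Table~\ref{tab:errors} can be reproduced directly from the per-source entries in rows 1--5:

```python
# Combining the five per-source guesstimates of Table (tab:errors) in
# quadrature reproduces the sixth row. The entries for each isovector
# charge are, in order: SESC, Z, a, chiral, finite volume.
import math

errors = {
    "gA": [0.02, 0.01, 0.02, 0.01, 0.01],
    "gS": [0.03, 0.04, 0.04, 0.01, 0.01],
    "gT": [0.01, 0.03, 0.01, 0.02, 0.01],
}

def quadrature(parts):
    """Combine independent error estimates in quadrature."""
    return math.sqrt(sum(e * e for e in parts))
```

This gives $0.033$, $0.066$ and $0.040$ for $g_A^{u-d}$, $g_S^{u-d}$ and $g_T^{u-d}$, respectively, matching the table.
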
The $\chi^2/{\rm d.o.f.}$ of all four fits for the axial and tensor
charges given in Table~\ref{tab:resultsrenormIV} are already very
small. Therefore, adding higher order terms to the ansatz is not
justified as per the Akaike Information
Criterion~\cite{Akaike:1100705}. Nevertheless, to be conservative, we
quote an additional systematic uncertainty due to the truncation of
the CCFV fit ansatz at the leading order in each of the three
variables, by examining the variation in the data in
Fig.~\ref{fig:conUmD-extrap11}.
For $g^{u-d}_A$, the key reason for the difference between our extrapolated value and
the experimental results is the data on the $a\approx 0.06$~fm
lattices. As discussed in Sec.~\ref{sec:comparison}, an extrapolation
in $a$ with and without these ensembles gives $g^{u-d}_A=1.218(25)$
and $g^{u-d}_A=1.245(42)$, respectively. The difference, $0.03$, is
roughly half the total spread between the fourteen values of
$g^{u-d}_A$ given in Table~\ref{tab:resultsrenormIV}. We, therefore,
quote $0.03$ as the additional uncertainty due to the truncation of
the fit ansatz.
The dominant variation in $g^{u-d}_S$ is again versus $a$, and, as
stated above, the result depends on whether the $a15m310$ point is
included in the fit. We, therefore, take half the difference, $0.06$,
between the 11-point and 10-point fit values as the additional
systematic uncertainty. One gets a similar estimate by taking the
difference in the fit value at $a=0.06$~fm and $a=0$. For $g^{u-d}_T$,
the largest variation is versus $M_\pi^2$. Since we have data from two
ensembles at $M_\pi \approx 135$~MeV that anchor the chiral fit, we
take half the difference in the fit values at $M_\pi=135$ and $220$~MeV as
the estimate of the additional systematic uncertainty.\looseness-1
These error estimates, rounded up to two decimal places, are given in the last row of
Table~\ref{tab:errors}. Including them as a second systematic
error, our final results for the isovector charges in the
$\overline{\rm MS}$ scheme at 2~GeV are:
\begin{align}
g_A^{u-d} &= 1.218(25)(30) \,, \nonumber \\
g_S^{u-d} &= 1.022(80)(60) \,, \nonumber \\
g_T^{u-d} &= 0.989(32)(10) \,.
\label{eq:gFinal2}
\end{align}
Similar estimates of possible extrapolation uncertainty apply also to
results for the connected contributions to the flavor diagonal charges
presented in Eq.~\eqref{eq:FDconnected}. Their final analysis,
including disconnected contributions,
will be presented in a separate publication.
\begin{table}
\centering
\begin{ruledtabular}
\begin{tabular}{c|ccc}
Error From & $g_A^{u-d}$ & $g_S^{u-d}$ & $g_T^{u-d}$ \\
\hline
SESC & $0.02$ $\Uparrow$ & $0.03$ $\Uparrow$ & $0.01$ $\Downarrow$ \\
$Z$ & $0.01$ $\Downarrow$ & $0.04$ $\Uparrow$ & $0.03$ $\Downarrow$ \\
$a$ & $0.02$ $\Downarrow$ & $0.04$ $\Uparrow$ & $0.01$ $\Downarrow$ \\
Chiral & $0.01$ $\Uparrow$ & $0.01$ $\Downarrow$ & $0.02$ $\Downarrow$ \\
Finite volume & $0.01$ $\Uparrow$ & $0.01$ $\Uparrow$ & $0.01$ $\Uparrow$ \\
\hline
Guesstimate error & $0.033$ & $0.066$ & $0.04$ \\
\hline
Error quoted & $0.025$ & $0.080$ & $0.032$ \\
\hline
Fit ansatz & $0.03$ & $0.06$ & $0.01$ \\
\end{tabular}
\end{ruledtabular}
\caption{Estimates of the error budget for the three isovector charges
due to each of the five systematic effects described in the text.
The symbols $\Uparrow$ and $\Downarrow$ indicate the direction in
which a given systematic is observed to drive the central value
obtained from the 11-point fit. The sixth row gives a guesstimate of
error obtained by combining these five systematics in quadrature.
This guesstimate is consistent with the actual errors obtained from
the 11-point fit and quoted in Eq.~\protect\eqref{eq:gFinal} and
reproduced in the seventh row. The last row gives the additional
systematic error assigned to account for possible uncertainty
due to using the CCFV fit ansatz with just the lowest order
correction terms as described in the text. }
\label{tab:errors}
\end{table}
Our new estimate $g_S^{u-d}= 1.022(80)(60)$ is in very good agreement
with $g_S^{u-d}= 1.02(8)(7)$ obtained by Gonzalez-Alonso and
Camalich~\cite{Gonzalez-Alonso:2013ura} using the conserved vector
current (CVC) relation $g_S/g_V = (M_N-M_P)^{\rm QCD}/ (m_d-m_u)^{\rm
QCD}$ with the FLAG lattice-QCD estimates~\cite{FLAG:2016qm} for the
two quantities on the right hand side. The superscript QCD denotes
that the results are in a theory with just QCD, i.e., neglecting
electromagnetic corrections. Using CVC in reverse, our predictions for
$(M_N-M_P)^{\rm QCD}$, using lattice QCD estimates for $m_u$ and
$m_d$, are given in Table~\ref{tab:Mn-Mp}. The uncertainty in these
estimates is dominated by that in $g_S^{u-d}$.\looseness-1
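For illustration, the first row of Table~\ref{tab:Mn-Mp} can be reproduced with naive error propagation. Treating all quoted errors as uncorrelated is a simplification (the FLAG mass errors are in general correlated), but it reproduces the quoted $2.58(32)$~MeV:

```python
# Using the CVC relation in reverse: (M_N - M_P)^QCD = gS * (m_d - m_u)^QCD.
# Inputs: gS = 1.022(80)(60) with the two errors combined in quadrature,
# and the 2+1-flavor FLAG masses m_d = 4.68(14)(7) MeV, m_u = 2.16(9)(7) MeV.
# Treating all errors as uncorrelated is a simplification.
import math

def quad(*errs):
    """Combine errors in quadrature."""
    return math.sqrt(sum(e * e for e in errs))

gS, gS_err = 1.022, quad(0.080, 0.060)        # combined error -> 0.100
md, md_err = 4.68, quad(0.14, 0.07)           # MeV
mu, mu_err = 2.16, quad(0.09, 0.07)           # MeV

dm = md - mu                                  # m_d - m_u = 2.52 MeV
dm_err = quad(md_err, mu_err)

dM = gS * dm                                  # (M_N - M_P)^QCD central value
dM_err = dM * quad(gS_err / gS, dm_err / dm)  # relative errors in quadrature
```
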
\begin{table}[ht]
\begin{center}
\renewcommand{\arraystretch}{1.2}
\begin{ruledtabular}
\begin{tabular}{c|c|l}
$M_N-M_P$ & $N_f$ & $\{m_d,m_u\}^{\rm QCD}$ \\
(MeV) & Flavors & (MeV) \\
\hline
$2.58(32)$ & 2+1 & $m_d = 4.68(14)(7),m_u = 2.16(9)(7)$~\protect\cite{FLAG:2016qm} \\
$2.73(44)$ & 2+1+1 & $m_d = 5.03(26),m_u = 2.36(24)$~\protect\cite{FLAG:2016qm} \\
$2.41(27)$ & 2+1 & $m_d - m_u = 2.41(6)(4)(9)$~\protect\cite{Fodor:2016bgu} \\
$2.63(27)$ & 2+1+1 & $m_d = 4.690(54),m_u = 2.118(38)$~\protect\cite{Bazavov:2018omf}
\end{tabular}
\end{ruledtabular}
\caption{Results for the mass difference $(M_N-M_P)^{\rm QCD}$ obtained using the CVC relation with
our estimate $g_S^{u-d}= 1.022(80)(60)$ and lattice results for the up and down quark masses
from the FLAG review~\cite{FLAG:2016qm} and recent results~\protect\cite{Fodor:2016bgu,Bazavov:2018omf}. }
\label{tab:Mn-Mp}
\end{center}
\end{table}
\section{Comparison with Previous Work}
\label{sec:comparison}
A summary of lattice results for the three isovector charges for
$N_f=2$-, 2+1- and 2+1+1-flavors is shown in Figs.~\ref{fig:PASTgA},~\ref{fig:PASTgS} and~\ref{fig:PASTgT}.
They show the steady improvement in results from lattice QCD. In this
section we compare our results with two calculations published after
the analysis and the comparison presented in
Ref.~\cite{Bhattacharya:2016zcn}, and that include data from physical
pion mass ensembles. These are the
ETMC~\cite{Alexandrou:2017oeh,Alexandrou:2017qyt,Alexandrou:2017hac}
and CalLat results~\cite{Chang:2018uxx}.
\begin{figure}
\begin{center}
\includegraphics[width=.47\textwidth]{figs/gAcomp-explat-2018-06-23-mag}
\end{center}
\vspace{-0.5cm}
\caption{A summary of results for the axial isovector charge,
$g_A^{u-d}$, for $N_f=2$-, 2+1- and 2+1+1-flavors. Note the much
finer x-axis scale for the plot showing experimental results for
$g_A^{u-d}$. The lattice results (top panel) are from: PNDME'18
(this work); PNDME'16~\protect\cite{Bhattacharya:2016zcn};
CalLat'18~\protect\cite{Chang:2018uxx};
LHPC'14~\protect\cite{Green:2012ud};
LHPC'10~\protect\cite{Bratt:2010jn};
RBC/UKQCD'08~\protect\cite{Lin:2008uz};
Lin/Orginos'07~\protect\cite{Lin:2007ap};
ETMC'17~\protect\cite{Alexandrou:2017oeh,Alexandrou:2017hac};
Mainz'17~\protect\cite{Capitani:2017qpc};
RQCD'14~\protect\cite{Bali:2014nma};
QCDSF/UKQCD'13~\protect\cite{Horsley:2013ayv};
ETMC'15~\protect\cite{Abdel-Rehim:2015owa} and
RBC'08~\protect\cite{Yamazaki:2008py}. Phenomenological and other
experimental results (middle panel) are from:
AWSR'16~\protect\cite{Beane:2016lcm} and
COMPASS'15~\protect\cite{Adolph:2015saz}. The results from neutron
decay experiments (bottom panel) have been taken from:
Brown'17~\protect\cite{Brown:2017mhw};
Mund'13~\protect\cite{Mund:2012fq};
Mendenhall'12~\protect\cite{Mendenhall:2012tz};
Liu'10~\protect\cite{Liu:2010ms};
Abele'02~\protect\cite{Abele:2002wc};
Mostovoi'01~\protect\cite{Mostovoi:2001ye};
Liaud'97~\protect\cite{Liaud:1997vu};
Yerozolimsky'97~\protect\cite{Erozolimsky:1997wi} and
Bopp'86~\protect\cite{Bopp:1986rt}. The lattice-QCD estimates in
red indicate that estimates of excited-state contamination, or
discretization errors, or chiral extrapolation were not
presented. When available, systematic errors have been added to
statistical ones as outer error bars marked with dashed lines. }
\label{fig:PASTgA}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=.47\textwidth,clip]{figs/gScomp-explat-2018-06-23-mag}
\end{center}
\vspace{-0.5cm}
\caption{A summary of results for the isovector scalar charge, $g_S^{u-d}$, for
$N_f=2$-, 2+1- and 2+1+1-flavors. The lattice results are from:
PNDME'18 (this work);
PNDME'16~\protect\cite{Bhattacharya:2016zcn};
LHPC'12~\protect\cite{Green:2012ej};
PNDME'11~\protect\cite{Bhattacharya:2011qm};
ETMC'17~\protect\cite{Alexandrou:2017qyt} and
RQCD'14~\protect\cite{Bali:2014nma}. The estimates based on the
conserved vector current and phenomenology are taken from
Gonzalez-Alonso'14~\protect\cite{Gonzalez-Alonso:2013ura} and
Adler'75~\protect\cite{Adler:1975he}. The rest is the same as in Fig.~\protect\ref{fig:PASTgA}.
}
\label{fig:PASTgS}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=.47\textwidth,clip]{figs/gTcomp-explat-2018-06-23-mag}
\end{center}
\vspace{-0.5cm}
\caption{A summary of results for the isovector tensor charge, $g_T^{u-d}$, for
$N_f=2$-, 2+1- and 2+1+1-flavors. The lattice and phenomenology
results are quoted from:
PNDME'18 (this work);
PNDME'16~\protect\cite{Bhattacharya:2016zcn};
PNDME'15~\protect\cite{Bhattacharya:2015wna};
LHPC'12~\protect\cite{Green:2012ej};
RBC/UKQCD'10~\protect\cite{Aoki:2010xg};
ETMC'17~\protect\cite{Alexandrou:2017qyt};
RQCD'14~\protect\cite{Bali:2014nma} and
RBC'08~\protect\cite{Yamazaki:2008py}.
The phenomenological estimates are taken from the following sources:
Kang'15~\protect\cite{Kang:2015msa};
Goldstein'14~\protect\cite{Goldstein:2014aja};
Pitschmann'14~\protect\cite{Pitschmann:2014jxa};
Anselmino'13~\protect\cite{Anselmino:2013vqa};
Bacchetta'13~\protect\cite{Bacchetta:2012ty} and
Fuyuto'13~\protect\cite{Fuyuto:2013gla}.
The rest is the same as in Fig.~\protect\ref{fig:PASTgA}.
}
\label{fig:PASTgT}
\end{figure}
The ETMC results $g_A^{u-d}=1.212(40)$, $g_S^{u-d}=0.93(33)$ and
$g_T^{u-d}=1.004(28)$~\cite{Alexandrou:2017oeh,Alexandrou:2017qyt,Alexandrou:2017hac}
were obtained from a single physical mass ensemble generated with
2-flavors of maximally twisted mass fermions with a clover term at
$a=0.0938(4)$~fm, $M_\pi=130.5(4)$~MeV and $M_\pi L = 2.98$. Assuming
that the number of quark flavors and finite volume corrections do not
make a significant difference, one could compare them against our results
from the $a09m130W$ ensemble with similar lattice parameters:
$g_A^{u-d}=1.249(21)$, $g_S^{u-d}=0.952(74)$ and
$g_T^{u-d}=1.011(30)$. We remind the
reader that this comparison is at best qualitative since estimates
from different lattice actions are only expected to agree in the
continuum limit.\looseness-1
Based on the trends observed in our CCFV fits shown in
Figs.~\ref{fig:conUmD-extrap11}--\ref{fig:extrap-gT-diagonal}, we
speculate where one may expect to see a difference due to the lack of
a continuum extrapolation in the ETMC results. The quantities that
exhibit a significant slope versus $a$ are $g_A^{u-d}$ and
$g_S^{u-d}$. Again, under the assumptions stated above, we would
expect ETMC values $g_A^{u-d}=1.212(40)$ to be larger and
$g_S^{u-d}=0.93(33)$ to be smaller than our extrapolated values given
in Eq.~\eqref{eq:gFinal}. We find that the scalar charge (ignoring the large error) fits the
expected pattern, but the axial charge does not.
We also point out that the ETMC error estimates are taken from a
single ensemble and a single value of the source-sink separation using
the plateau method. Our results from the comparable calculation on the
$a09m130W$ ensemble with $\tau=14$ (see Figs.~\ref{fig:gA2v3a09}
and~\ref{fig:gT2v3a09} and results in
Table~\ref{tab:results3bareu-d}), have much smaller errors.
The more detailed comparison we make is against the CalLat result
$g_A^{u-d} = 1.271(13)$~\cite{Chang:2018uxx} that agrees with the
latest experimental average, $g_A^{u-d} = 1.2766(20)$. The important
question is, since the CalLat calculations were also done using the
same 2+1+1-flavor HISQ ensembles, why are the two results, after CCFV
fits, different?
To understand why the results can be different, we first review the
notable differences between the two calculations. CalLat uses (i)
M\"obius domain wall versus clover for the valence quark action. This
means that their discretization errors start at $a^2$ versus $a$ for
PNDME. They also have no uncertainty due to the renormalization factor
since $Z_A/Z_V=1$ for the M\"obius domain wall on HISQ formalism. (ii)
They use gradient flow smearing with $t_{gf}/a=1$ versus one HYP
smearing to smooth high frequency fluctuations in the gauge
configurations. This can impact the size of statistical errors. (iii)
Different construction of the sequential propagator. CalLat inserts a
zero-momentum projected axial current simultaneously at all time slices
on the lattice to construct the sequential propagator. Their data are,
therefore, for the sum of contributions from insertions on {\it all}
time slices on the lattice, i.e., including contact terms and insertion
on time slices outside the interval between the source and the sink.
CalLat fits this summed three-point function versus only
the source-sink separation $\tau$ using the 2-state fit
ansatz. (iv) The ranges of $\tau$ for which the data have the maximum
weight in the respective $n$-state fits are very different in the two
calculations. The CalLat results are obtained from data at much
smaller values of $\tau$, which accounts for the smaller error
estimates in the data for $g_A^{u-d}$. (v) CalLat analyzes the coarser
$a\approx 0.15$, $0.12$ and $0.09$~fm ensembles. At $a \approx 0.15$~fm,
we can only analyze the $a15m310$ ensemble due to the presence of
exceptional configurations in the clover-on-HISQ formulation at
lighter pion masses. On the other hand, computing resources have so far
prevented CalLat from analyzing the three fine $a\approx 0.06$~fm and
the physical mass $a09m130$ ensembles.
A combination of these factors could easily explain the $\approx 5\%$
difference in the final values. The surprising result, shown in
Table~\ref{tab:CalLat}, is that estimates on the seven ensembles
analyzed by both collaborations are consistent and do not show a
systematic difference. (Note again that results from two different lattice
formulations are not, {\it a priori}, expected to agree at finite
$a$.) These data suggest that differences at the $1\sigma$ level (see
also our analysis in Table~\ref{tab:errors}) are conspiring to produce
a 5\% difference in the extrapolated value. Thus, one should look for
differences in the details of the CCFV fit.
We first examine the extrapolation in $a$. A CCFV fit keeping our data
from only the eight $a\approx 0.15$, $0.12$ and $0.09$~fm ensembles
gives a larger value, $g_A^{u-d} = 1.245(42)$, since the slope versus
$a$ changes sign, as is apparent from the data shown in
the top three panels of Fig.~\ref{fig:conUmD-extrap11}. Thus the three
$a\approx 0.06$~fm ensembles play an important role in our continuum
extrapolation.
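The sensitivity of the continuum value to the fine ensembles can be illustrated with a toy weighted linear fit in $a$ (only the lowest-order $a$-dependence of a CCFV-type fit, shown in isolation). All numbers below are made up by us purely to mimic the sign-flip behavior described in the text; they are not our lattice data.

```python
def continuum_extrapolate(a_vals, g_vals, g_errs):
    """Weighted least-squares fit g(a) = c0 + c1*a; returns (c0, c1).

    c0 is the continuum (a -> 0) value.  This is only the lowest-order
    a-dependence used in a CCFV-type fit, shown here in isolation.
    """
    w = [1.0 / e**2 for e in g_errs]
    Sw = sum(w)
    Sx = sum(wi * a for wi, a in zip(w, a_vals))
    Sy = sum(wi * g for wi, g in zip(w, g_vals))
    Sxx = sum(wi * a * a for wi, a in zip(w, a_vals))
    Sxy = sum(wi * a * g for wi, a, g in zip(w, a_vals, g_vals))
    c1 = (Sw * Sxy - Sx * Sy) / (Sw * Sxx - Sx**2)
    c0 = (Sy - c1 * Sx) / Sw
    return c0, c1

# Made-up data: dropping the finest point flips the fitted slope and
# raises the extrapolated continuum value, mimicking the effect in the text.
a_all = [0.15, 0.12, 0.09, 0.06]
g_all = [1.23, 1.24, 1.25, 1.21]   # illustrative values only
e_all = [0.02, 0.02, 0.02, 0.02]
c0_all, slope_all = continuum_extrapolate(a_all, g_all, e_all)
c0_coarse, slope_coarse = continuum_extrapolate(a_all[:3], g_all[:3], e_all[:3])
```

With these invented numbers, the coarse-only fit has a negative slope and extrapolates high, while including the fine point flips the slope and pulls the continuum value down.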
Our initial concern was possible underestimation of statistical errors
in results from the $a \approx 0.06$~fm lattices. This prompted us to
analyze three crucial ensembles, $a09m130$, $a06m310$ and $a06m220$, a
second time with different smearing sizes and different random
selection of source points. The consistency between the pairs of data
points on these ensembles suggests that statistical fluctuations are
not a likely explanation for the size of the undershoot in
$g_A^{u-d}$. The possibility that these ensembles are not large enough
to have adequately explored the phase space of the functional
integral, and the results are possibly biased, can only be checked
with the generation and analysis of additional lattices.
The chiral fits are also different in detail. In our data, the errors
in the points at $M_\pi \approx 310$, 220 and 130 MeV are similar,
consequently all points contribute with similar weight in the fits. The
errors in the CalLat data from the two physical mass ensembles
$a15m130$ and $a12m130$ are much larger, and the fits are predominantly
weighted by the data at the heavier masses $M_\pi \approx 400$, 350,
310 and 220~MeV. Also, CalLat finds a significant change in the value
between the $M_\pi \approx \{400,\ 350,\ 310\}$~ MeV and $M_\pi
\approx 220$~MeV points, and this concerted change, well within
$1\sigma$ errors in individual points, produces a larger dependence on
$M_\pi$. In other words, it is the uniformly smaller values on the
$M_\pi \approx \{400,\ 350,\ 310\}$~MeV ensembles compared to the data
at $M_\pi\approx 220$~MeV that makes the CalLat chiral fits different and
the final value of $g_A^{u-d}$ larger.
\begin{figure*}
\begin{center}
\includegraphics[width=.49\textwidth,clip]{figs/fig-ST}
\includegraphics[width=.49\textwidth,clip]{figs/plot-LHC}
\end{center}
\vspace{-0.5cm}
\caption{Current and projected $90 \%$ C.L. constraints on
$\epsilon_S$ and $\epsilon_T$ defined at 2~GeV in the
$\overline{\text{MS}}$ scheme. (Left) The beta-decay constraints are
obtained from the recent review article
Ref.~\protect\cite{Gonzalez-Alonso:2018omy}. The current and future
LHC bounds are obtained from the analysis of the $pp \to e + MET +
X$. We have used
the ATLAS results~\protect\cite{Aaboud:2017efa}, at $\sqrt{s} =
13$~TeV and integrated luminosity of 36 fb$^{-1}$. We find that the
strongest bound comes from the cumulative distribution with a cut on
the transverse mass at 2 TeV. The projected future LHC bounds are
obtained by assuming that no events are observed at transverse mass
greater than 3 TeV with an integrated luminosity of 300
fb$^{-1}$. (Right) Comparison of current LHC bounds from $pp \to e +
MET + X$ versus $pp \to e^+ e^- + X$. }
\label{fig:eSeT}
\end{figure*}
To summarize, the difference between our and CalLat results comes from the
chiral fit and the continuum extrapolation. The difference in the
chiral fit is a consequence of the ``jump'' in the CalLat data between
$M_\pi = \{400,\ 350,\ 310\}$ and the $220$~MeV data. The CalLat data
at $M_\pi \approx 130$~MeV do not contribute much to the fit because
of the much larger errors. We do not see a similar jump between our
$M_\pi \approx 310$ and $220$~MeV or between the 220 and the 130~MeV
data as is evident from Fig.~\ref{fig:conUmD-extrap11}. Also, our
four data points at $M_\pi \approx 310$~MeV show a larger spread. The
difference in the continuum extrapolation is driven by the smaller
estimates on all three fine $a \approx 0.06$~fm ensembles that we have
analyzed. Unfortunately, neither of these two differences in the fits
can be resolved with the current data, especially since the data on 7
ensembles, shown in Table~\ref{tab:CalLat}, agree within
$1\sigma$. Our two conclusions are: (i) figuring out why the $a\approx
0.06$~fm ensembles give smaller estimates is crucial to understanding
the difference, and (ii) with present data, a total error estimate of
$\approx 5\%$ in $g_A^{u-d}$ is realistic.
\begin{table}[ht]
\begin{center}
\renewcommand{\arraystretch}{1.2}
\begin{ruledtabular}
\begin{tabular}{l|c|c}
& This Work & CalLat \\
\hline
$a15m310$ & 1.228(25) & 1.215(12) \\
$a12m310$ & 1.251(19) & 1.214(13) \\
$a12m220S$ & 1.224(44) & 1.272(28) \\
$a12m220$ & 1.234(25) & 1.259(15) \\
$a12m220L$ & 1.262(17) & 1.252(21) \\
$a09m310$ & 1.235(15) & 1.236(11) \\
$a09m220$ & 1.260(19) & 1.253(09) \\
\end{tabular}
\end{ruledtabular}
\caption{The data for the renormalized axial charge $g_A^{u-d}$ for
the proton on the seven 2+1+1-flavor HISQ ensembles that have been
analyzed by us and the CalLat collaboration~\protect\cite{Chang:2018uxx}. The
results are consistent within $1\sigma$ in most cases. }
\label{tab:CalLat}
\end{center}
\end{table}
Even with the high-statistics calculation presented here, the
statistical and excited-state contamination (ESC) errors in the
calculation of the scalar charge are between
5\%--15\% on individual ensembles. As a result, the error after the
continuum extrapolation is about $10\%$. Over time, results for
$g_S^{u-d}$, presented in Fig.~\ref{fig:PASTgS}, do show significant
reduction in the error with improved higher-statistics calculations.
The variation of the tensor charge $g_T^{u-d}$ with $a$ or $M_\pi $ or
$M_\pi L$ is small. As a result, the lattice estimates have been
stable over time as shown in Fig.~\ref{fig:PASTgT}. The first error
estimate in our result, $g_T^{u-d} = 0.989(32)(10) $, is now dominated
by the error in $Z_T$.
\section{Constraining new physics using precision beta decay measurements}
\label{sec:est}
Nonstandard scalar and tensor
charged-current interactions are parametrized by the dimensionless
couplings $\epsilon_{S,T}$~\cite{Bhattacharya:2011qm,Cirigliano:2012ab}:
\begin{eqnarray}
{\cal L}_{\rm CC} &=&
- \frac{G_F^{(0)} V_{ud}}{\sqrt{2}} \ \Big[ \
\epsilon_S \ \bar{e} (1 - \gamma_5) \nu_{\ell} \cdot \bar{u} d
\nonumber \\
&+ &
\epsilon_T \ \bar{e} \sigma_{\mu \nu} (1 - \gamma_5) \nu_{\ell} \cdot \bar{u} \sigma^{\mu \nu} (1 - \gamma_5) d
\Big] ~.
\end{eqnarray}
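The lattice input enters the constraint on $\epsilon_S$ through the uncertainty on $g_S^{u-d}$. The sketch below, which is ours, simply combines the two quoted errors on $g_S^{u-d} = 1.022(80)(60)$ in quadrature (the choice to combine them this way is our assumption) and notes that, schematically, a Fierz-type observable scales as $b \sim g_S\,\epsilon_S$ up to kinematic factors we omit.

```python
from math import hypot

# Combine the two quoted errors on gS = 1.022(80)(60) in quadrature
# (whether to combine them this way is our assumption).
gS = 1.022
gS_err = hypot(0.080, 0.060)   # = 0.10, i.e. the ~10% target accuracy
frac_gS = gS_err / gS
# Since, schematically, a Fierz-type observable scales as b ~ gS * epsS
# (kinematic factors omitted), this fractional error on gS propagates
# directly into the extracted bound on epsS.
```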
These couplings can be constrained by a
combination of low energy precision beta-decay measurements (of the
pion, neutron, and nuclei) combined with our results for the isovector
charges $g_{S}^{\rm u-d}$ and $g_T^{\rm u-d}$, as well as at the
Large Hadron Collider (LHC) through the reactions $pp \to e \nu +
X$ and $pp \to e^+ e^- + X$. The LHC constraint is valid provided the mediator of the new
interaction is heavier than a few TeV.
In Fig.~\ref{fig:eSeT} (left) we show current and projected bounds on
$\{\epsilon_S, \epsilon_T\}$ defined at 2~GeV in the $\overline{\text{MS}}$
scheme. The beta-decay constraints are obtained from the recent
review article Ref.~\cite{Gonzalez-Alonso:2018omy}. The current
analysis includes all existing neutron and nuclear decay measurements,
while the future projection assumes measurements of the various decay
correlations with fractional uncertainty of $0.1\%$, the Fierz
interference term at the $10^{-3}$ level, and neutron lifetime with
uncertainty $\delta \tau_n = 0.1$~s. The current LHC bounds are
obtained from the analysis of the $pp \to e + MET + X$, where $MET$
stands for missing transverse energy. We have used the ATLAS
results~\cite{Aaboud:2017efa}, at $\sqrt{s} = 13$~TeV and integrated
luminosity of 36 fb$^{-1}$. We find that the strongest bound comes from
the cumulative distribution with a cut on the transverse mass at 2
TeV. The projected future LHC bounds are obtained by assuming that no
events are observed at transverse mass greater than 3~TeV with an
integrated luminosity of 300 fb$^{-1}$.
The LHC bounds become tighter on the inclusion of $Z$-like mediated
process $pp \to e^+ e^- + X$. As shown in Fig.~\ref{fig:eSeT} (right),
including both $W$-like and $Z$-like mediated processes, the current
LHC bounds are comparable to future low energy ones, motivating
more precise low energy experiments. In this analysis
we have neglected the NLO QCD corrections~\cite{Alioli:2018ljm}, which
would further strengthen the LHC bounds by $O(10\%)$. Similar bounds are
obtained using the CMS data~\cite{Sirunyan:2018mpc,Sirunyan:2018exx}.
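The projected bound rests on a standard zero-observed-events Poisson counting argument. The sketch below is our reading of that argument; the conversion to a cross-section bound omits acceptance and efficiency factors, which the real analysis must include.

```python
from math import log

# Zero-observed-events Poisson limit: P(0 | s) = exp(-s), so the 90% C.L.
# upper limit on the expected signal yield solves exp(-s_up) = 1 - 0.90.
CL = 0.90
s_up = -log(1.0 - CL)           # = ln(10) ~ 2.30 events
lumi = 300.0                    # fb^-1, the projected integrated luminosity
sigma_up = s_up / lumi          # fb; acceptance/efficiency factors omitted
```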
\section{Conclusions}
\label{sec:conclusions}
We have presented a high-statistics study of the isovector and
flavor-diagonal charges of the nucleon using the clover-on-HISQ lattice
QCD formulation. By using the truncated-solver-with-bias-correction
error-reduction technique together with the multigrid solver, we have
significantly improved the statistical precision of the data. Also, we show stability in the
isolation and mitigation of excited-state contamination by keeping up
to three states in the analysis of data at multiple values of
source-sink separation $\tau$. Together, these two improvements allow
us to demonstrate that the excited-state contamination in the axial
and the tensor channels has been reduced to the 1\%--2\% level. The
high-statistics analysis of eleven ensembles covering the range
0.15--0.06~fm in the lattice spacing, $M_\pi =$ 135--320~MeV in the
pion mass, and $M_\pi L =$ 3.3--5.5 in the lattice size allowed us to
analyze the three systematic uncertainties due to lattice
discretization, dependence on the quark mass and finite lattice
size, by making a simultaneous fit in the three variables $a$,
$M_\pi^2$ and $M_\pi L$. Data from the two physical mass ensembles,
$a09m130$ and $a06m135$, anchor the improved chiral fit. Our final
estimates for the isovector charges are given in
Eq.~\eqref{eq:gFinal2}.
One of the largest sources of uncertainty now is from the calculation
of the renormalization constants for the quark bilinear operators.
These are calculated nonperturbatively in the RI-sMOM scheme over a
range of values of the scale $Q^2$. As discussed in
Ref.~\cite{Bhattacharya:2016zcn}, the dominant systematics in the
calculation of the $Z$'s comes from the breaking of the rotational
symmetry on the lattice and the 2-loop perturbative matching between
the RI-sMOM and the $\overline{\text{MS}}$ schemes.
Our estimate $g_A^{u-d}=1.218(25)(30)$ is about $1.5 \sigma$ (about
$5\%$) below the experimental value $g_A/g_V = 1.2766(20)$. Such low
values are typical of most lattice QCD calculations. The recent
calculation by the CalLat collaboration, also using the 2+1+1-flavor
HISQ ensembles, gives $g_A^{u-d}=1.271(13)$~\cite{Chang:2018uxx}. A
detailed comparison between the two calculations is presented in
Sec~\ref{sec:comparison}. We show in Table~\ref{tab:CalLat} that
results from seven ensembles, which have been analyzed by both
collaborations, agree within $1\sigma$ uncertainty. Our analysis
indicates that the majority of the difference comes from the chiral
and continuum extrapolations, with $1\sigma$ differences in individual
points getting amplified. Given that CalLat have not analyzed the
fine $0.06$~fm ensembles, and that their data on the two physical pion mass
ensembles, $a15m130$ and $a12m130$, have much larger errors and do not
contribute significantly to their chiral fit, we conclude that our error
estimate is more realistic. Further work is, therefore, required to
resolve the difference between the two results.
Our results for the isovector scalar and tensor charges,
$g_S^{u-d}=1.022(80)(60)$ and $g_T^{u-d}=0.989(32)(10)$, have achieved
the target accuracy of 10\% needed to put bounds on scalar and tensor
interactions, $\epsilon_S$ and $\epsilon_T$, arising at the TeV scale
when combined with experimental measurements of $b$ and $b_\nu$
parameters in neutron decay experiments with $10^{-3}$
sensitivity~\cite{Bhattacharya:2011qm}. In Sec.~\ref{sec:est}, we
update the constraints on $\epsilon_S$ and $\epsilon_T$ from both low
energy experiments combined with our new lattice results on
$g_S^{u-d}$ and $g_T^{u-d}$, and from the ATLAS and the CMS
experiments at the LHC. We find that the constraints from low energy
experiments combined with matrix elements from lattice QCD are
comparable to those from the LHC.
For the tensor charges, we find that the dependence on the lattice
size, the lattice spacing and the light-quark mass is small, and the
simultaneous fit in these three variables, keeping just the lowest-order
corrections, has improved over that presented in
Ref.~\cite{Bhattacharya:2015wna}.
We have also updated our estimates for the connected parts of the
flavor-diagonal charges. For the tensor charges, the contribution of
the disconnected diagram is consistent with
zero~\cite{Bhattacharya:2015wna,Bhattacharya:2015esa}, so the
connected contribution, $g_T^{u} = 0.790(27)$ and $g_T^{d} = -
0.198(10)$ for the proton, is a good approximation to the full result that
will be discussed elsewhere.
The extraction of the scalar charge of the proton has larger
uncertainty. The statistical errors in the lattice data for
$g_S^{u-d}(a, M_\pi, M_\pi L)$ are 3--5 times larger than those in
$g_T^{u-d}(a,M_\pi,M_\pi L)$, and the data show significant dependence
on the lattice spacing $a$ and a weaker dependence on the pion mass
$M_\pi$. Our estimate, $g_S^{u-d}=1.022(80)(60)$, is in very good
agreement with the estimate $g_S^{u-d}=1.02(8)(7)$ obtained using the
CVC relation $g_S/g_V = (M_N-M_P)^{\rm QCD}/ (m_d-m_u)^{\rm QCD}$ in
Ref.~\cite{Gonzalez-Alonso:2013ura}. In Table~\ref{tab:Mn-Mp}, we
used our new estimate to update the results for the mass difference
$(M_N-M_P)^{\rm QCD}$ obtained by using the
CVC relation. Taking the recent 2+1 flavor value $m_d - m_u =
2.41(6)(4)(9)$~MeV from the BMW collaboration~\cite{Fodor:2016bgu} gives
$(M_N-M_P)^{\rm 2+1QCD} = 2.41(27)$~MeV, while the 2+1+1-flavor
estimates $m_u=2.118(38)$~MeV and $m_d=4.690(54)$~MeV from the
MILC/Fermilab/TUMQCD collaboration~\cite{Bazavov:2018omf} give
$(M_N-M_P)^{\rm 2+1+1QCD} = 2.63(27)$~MeV.
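The 2+1+1-flavor number quoted above can be cross-checked with a short calculation. The sketch below, which is ours, applies the CVC relation with $g_V = 1$ and naive quadrature error propagation (the choice to combine the two quoted errors on $g_S^{u-d}$, and the quark-mass errors, in quadrature is our assumption).

```python
from math import hypot

# 2+1+1-flavor CVC estimate (M_N - M_P)^QCD = (gS/gV) (m_d - m_u), gV = 1.
gS, dgS = 1.022, hypot(0.080, 0.060)   # this work; total error 0.10
mu, dmu = 2.118, 0.038                 # MeV, MILC/Fermilab/TUMQCD
md, dmd = 4.690, 0.054                 # MeV
dm, ddm = md - mu, hypot(dmu, dmd)
split = gS * dm                        # ~ 2.63 MeV
dsplit = hypot(dm * dgS, gS * ddm)     # ~ 0.27 MeV
```

The lattice error on $g_S^{u-d}$ dominates the total uncertainty on the mass splitting.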
The Kerr metric that describes a rotating black hole is a
solution of the Einstein's field equations of general relativity.
The observed event-horizon-scale images of the supermassive black
hole candidate in the center of the giant elliptical galaxy M87
are consistent with the dark shadow of a Kerr black hole
predicted by general relativity [1]. The motion of a particle in
the vicinity of the Kerr black hole is integrable because of the
existence of four conserved quantities: the energy, the angular
momentum, the rest mass, and a fourth constant of the motion.
This fourth constant is the Carter constant [2], which
is obtained from the separation of variables in the
Hamilton-Jacobi equation.
Observational evidence demonstrates the existence of strong
magnetic fields in the vicinity of the supermassive black hole at
the centre of the Galaxy [3]. The external magnetic fields which
can be considered as a tidal environment are generally believed to
play a crucial role in the transfer of the energy from the
accretion disk to jets. Radiation reaction depending on the
external magnetic field strength causes the accretion of charged
particles from the accretion disk to shift towards the black hole.
An inductive charge introduced by Wald [4] generates an induced
electric field due to a contribution to the Faraday induction from
the parallel orientation of the spin of a black hole and the
magnetic field. When the inductive charge takes the Wald charge,
the potential difference between the horizon of a black hole and
infinity vanishes, and the process of selective accretion is
completed [5, 6]. The effects of the magnetic fields involving
the induced electric field are so weak in comparison to the
gravitational mass effects that they do not change the spacetime
metrics. However, they can essentially affect the motion of
charged test particles in accreting matter if the ratio of the
electric charge and mass of the particle is large. In most cases,
the fourth invariable quantity, the analogue of the Carter constant,
is absent when the external electromagnetic fields
are considered near the black hole. Thus, the dynamics of charged
test particles in the black holes with external electromagnetic
fields is nonintegrable.
Although the magnetic fields in the vicinity of the black holes
destroy the integrability of these spacetimes in many problems,
the radial motions of the charged particles on the equatorial
plane are still integrable and solvable. It is mainly studied by
means of an effective potential. The effective potential seems
simple, but it describes many important properties of the
spacetimes. In particular, unstable circular orbits, stable
circular orbits, and innermost stable circular orbits (ISCOs) on
the equatorial plane are clearly shown through the effective
potential. It is interesting to study these equatorial orbits in
the theory of accretion disks. Accreted material with
sufficient angular momentum relative to an axisymmetric massive
central body is still attracted by the
central body, but the attraction is balanced by the centrifugal
effect of the large angular momentum. This easily forms an
accretion disk. However, accreted material without sufficient
angular momentum falls into the central body [7-9]. Electromagnetic fields could
influence the dynamics of charged particles in accreting matter;
therefore, the ISCOs in the field of a magnetized black hole are
shifted towards the horizon for a suitable spin direction. In
other words, the inner boundary of the accretion disk goes towards
the central body. In view of the importance of the topic on the
effective potential and stable circular orbits on the equatorial
plane, the topic has been addressed in a large number of
works [7-28]. The problems discussed in these existing works
are based on the equatorial plane. In some extended theories of
gravity, such as Brans-Dicke gravity, scale-dependent gravity and
asymptotically safe gravity in the context of black hole physics
[29-35], the effective potentials, unstable circular orbits,
stable circular orbits and ISCOs on a plane slightly different
from the equatorial one can be discussed similarly.
When the external magnetic fields destroy the spacetime's symmetry
(precisely speaking, the external magnetic fields lead to the
absence of the fourth constant, the analogue of the Carter constant),
the generic motion of charged particles on the
non-equatorial plane can be chaotic in some circumstances. If the
external magnetic fields do not destroy the symmetry, no chaotic
dynamics is possible. For example, charged particle motions in the
Kerr-Newman black hole spacetime are regular and nonchaotic
because of the existence of four integrals leading to the
integrability of the system [36]. Chaos describes a dynamical
system's sensitive dependence on initial conditions. The theory of
chaotic scattering in the combined effective potential of the
black hole and the asymptotically uniform magnetic field is useful
to explore the mechanism hidden behind the charged particle
ejection [5]. The energy of the charged particle in such combined
fields is split into one energy mode along the magnetic field line
direction and another energy mode at the perpendicular direction.
The chaotic charged particle dynamics in the combined
gravitational and magnetic fields leads to an energy interchange
between the two energy modes of the charged particle dynamics. As
a result, it can provide sufficient energy to ultra-relativistic
motion of the charged particle along the magnetic field lines.
Based on the importance of studies of the chaotic motion in the
gravitational field of a black hole combined with an external
electromagnetic field, many authors [5, 6, 12, 20, 23, 37-46] are
interested in this field.
The detection of chaotic behavior requires a
computational scheme that yields reliable results. Without doubt,
higher-order numerical integrators such as an eighth- and
ninth-order Runge-Kutta-Fehlberg integrator with adaptive step
sizes can yield high-precision numerical solutions. However, they
are more computationally demanding than lower-order solvers. For
Hamiltonian systems, the most appropriate solvers are symplectic
integrators which respect the symplectic nature of Hamiltonian
dynamics and show no secular drift in energy errors [47-53].
A symplectic integrator method for the numerical
calculation of charged-particle trajectories is well known for
its small error in energy even over long integration times, which
makes it perfectly suited for the description of regular and
chaotic dynamics through Poincar\'{e} section calculations [54].
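A Poincar\'{e} section itself is straightforward to record numerically. The sketch below is a generic illustration of ours (not the integrator of [54]): integrate a flow, detect upward crossings of a section plane, and refine each crossing point by linear interpolation. The demo uses an exactly solvable rotation so the expected crossings are known.

```python
from math import cos, sin

# Generic Poincare-section recorder (our own sketch): collect the points
# where state[idx] crosses zero from below, refined by linear interpolation.
def poincare_points(step, state, nsteps, idx=1):
    pts = []
    prev = list(state)
    for _ in range(nsteps):
        cur = step(prev)
        if prev[idx] < 0.0 <= cur[idx]:
            f = -prev[idx] / (cur[idx] - prev[idx])   # linear interpolation
            pts.append([p + f * (c - p) for p, c in zip(prev, cur)])
        prev = cur
    return pts

# Demo: exact rotation of (x, y) by angle h per step, i.e. x = cos t,
# y = -sin t.  Upward crossings of y = 0 occur at t = pi, 3*pi, 5*pi, ...
h = 0.01
def step(s):
    x, y = s
    return [x * cos(h) + y * sin(h), -x * sin(h) + y * cos(h)]

pts = poincare_points(step, [1.0, 0.0], 2000)   # t up to 20: three crossings
```

For the charged-particle problem the section is typically taken in the $(r, p_r)$ plane at fixed $\theta$, with the same crossing-detection logic.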
Because the variables are inseparable in Hamiltonian systems
associated with curved spacetimes, the standard explicit symplectic
integrators, which require splitting the Hamiltonian into two
exactly solvable parts, do not work. In this case, completely implicit
symplectic methods including the implicit midpoint method [55, 56]
and Gauss-Runge-Kutta methods [41, 54, 57, 58] are often
considered. Explicit and implicit combined symplectic methods
[59-63] take less cost than these completely implicit methods, and
then are also used. Recently, explicit symplectic integrators were
proposed for nonrotating black holes when the Hamiltonians of
these black holes have several splitting parts with analytical
solutions as explicit functions of proper time [64-66]. With the
aid of time transformations, explicit symplectic integrators are
easily available for the Kerr type spacetimes [67].
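The bounded-energy-error property that motivates symplectic schemes can be seen already in a toy example. The sketch below, which is ours and is not a curved-spacetime integrator, applies the second-order St\"ormer-Verlet (leapfrog) scheme to the separable Hamiltonian $H = p^2/2 + q^2/2$ and monitors the energy error over many steps.

```python
# Toy demonstration of the bounded energy error of a symplectic scheme:
# second-order Stormer-Verlet (leapfrog) for the separable Hamiltonian
# H = p^2/2 + q^2/2.  It only illustrates the property discussed in the text.
def leapfrog(q, p, h, nsteps, force=lambda q: -q):
    for _ in range(nsteps):
        p += 0.5 * h * force(q)   # half kick
        q += h * p                # drift
        p += 0.5 * h * force(q)   # half kick
    return q, p

q, p = 1.0, 0.0
E0 = 0.5 * (p**2 + q**2)
h, max_err = 0.01, 0.0
for _ in range(1000):             # 100,000 steps in total
    q, p = leapfrog(q, p, h, 100)
    max_err = max(max_err, abs(0.5 * (p**2 + q**2) - E0))
# max_err stays at the O(h^2) level, with no secular growth.
```

A non-symplectic scheme of the same order would instead show a slow linear drift of the energy over such long integrations.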
The authors of [54] employed the Gauss-Legendre
symplectic solver (i.e., the $s$-stage implicit symplectic Runge-Kutta
method) to study the regular and chaotic dynamics of charged
particles around the Kerr background endowed with an axisymmetric
electromagnetic test field with the aid of Poincar\'{e} sections.
The authors of [68] applied the time-transformed explicit
symplectic integrators introduced in [67] to mainly explore the
effect of the black hole spin on the chaotic motion of a charged
particle around the Kerr black hole immersed in an external
electromagnetic field. Unlike Ref. [68], the present work
particularly focuses on how a small change of the black hole
inductive charge [6] exerts influences on the effective potential,
stable circular orbits and ISCOs on the equatorial plane, and a
transition from order to chaos of orbits on the non-equatorial
plane. The effects of other dynamical parameters such as the
magnetic field parameter are also considered. For this purpose, we
introduce a dynamical model for the description of charged
particles moving around the Kerr black hole immersed in an
external magnetic field in Sect. 2. The effective potential,
stable circular orbits and ISCOs on the equatorial plane are
discussed in Sect. 3. The explicit symplectic integrators are
designed for this problem, and the dependence of the orbital
dynamical behavior on the parameters is shown in Sect. 4. Finally,
the main results are concluded in Sect. 5.
\section{Kerr black hole immersed in external magnetic field}
The Kerr black hole is the description of a rotating black hole
with mass $M$ and angular momentum $a$. In the standard
Boyer-Lindquist coordinates $(t, r, \theta, \phi)$, its time-like
metric is written as $ds^{2}=-c^2d\tau^2$, that is,
\begin{eqnarray}
ds^{2} &=& g_{\alpha\beta}dx^{\alpha}dx^{\beta}=g_{tt}c^2dt^2+2g_{t\phi}cdtd\phi \nonumber \\
&& +g_{rr}dr^2 +g_{\theta\theta}d\theta^2+g_{\phi\phi} d\phi^2.
\end{eqnarray}
These nonzero components in this metric are found in the paper of
[69] as follows:
\begin{eqnarray}
g_{tt} &=& -(1-\frac{2GMr/c^2}{\Sigma}), \nonumber \\
g_{t\phi} &=& -\frac{(2GMr/c^2)a\sin^{2}\theta}{\Sigma}, \nonumber \\
g_{rr} &=& \frac{\Sigma}{\Delta}, ~~~~~~~~~~
g_{\theta\theta}=\Sigma, \nonumber \\
g_{\phi\phi} &=& (\rho^2+\frac{2GMr/c^2}{\Sigma}
a^{2}\sin^{2}\theta)\sin^{2}\theta, \nonumber
\end{eqnarray}
where $\Sigma=r^2+a^2\cos^{2}\theta$, $\Delta=\rho^2-2GMr/c^2$ and
$\rho^2=r^2+a^2$. $\tau$ and $t$ are proper and coordinate times,
respectively. $c$ is the speed of light, and $G$ denotes the
gravitational constant.
Suppose the Kerr black hole is immersed in an external
asymptotically uniform magnetic field, which has strength $B$ and
yields an induced charge $Q$. Set $\xi^{\alpha}_{(t)}$ and
$\xi^{\alpha}_{(\phi)}$ as time-like and space-like axial Killing
vectors. An electromagnetic four-vector potential can be found in
Refs. [6] and [70] and is written as
\begin{equation}
A^{\alpha}=aB\xi^{\alpha}_{(t)}+\frac{B}{2}\xi^{\alpha}_{(\phi)}-\frac{Q}{2}\xi^{\alpha}_{(t)}.
\end{equation}
This potential has two nonzero covariant components
\begin{eqnarray}
A_{t} &=& g_{t\alpha}A^{\alpha}= (aB-\frac{Q}{2})g_{tt}+\frac{B}{2}g_{t\phi}, \\
A_{\phi} &=& g_{\phi\alpha}A^{\alpha}=
(aB-\frac{Q}{2})g_{t\phi}+\frac{B}{2} g_{\phi\phi}.
\end{eqnarray}
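For readers who want to check these expressions numerically, the metric components and Eqs. (3)-(4) can be transcribed directly in geometrized units $G = c = M = 1$. The code below and its function names are ours, written only as a sanity-check sketch.

```python
from math import sin, cos, pi

# Direct numerical transcription (ours) of the Kerr metric components and
# of the potential components in Eqs. (3)-(4), with G = c = M = 1.
def kerr_metric(r, th, a):
    Sigma = r**2 + a**2 * cos(th)**2
    rho2 = r**2 + a**2
    Delta = rho2 - 2.0 * r
    g_tt = -(1.0 - 2.0 * r / Sigma)
    g_tph = -2.0 * r * a * sin(th)**2 / Sigma
    g_rr = Sigma / Delta
    g_thth = Sigma
    g_phph = (rho2 + 2.0 * r * a**2 * sin(th)**2 / Sigma) * sin(th)**2
    return g_tt, g_tph, g_rr, g_thth, g_phph

def em_potential(r, th, a, B, Q):
    g_tt, g_tph, _, _, g_phph = kerr_metric(r, th, a)
    A_t = (a * B - Q / 2.0) * g_tt + (B / 2.0) * g_tph
    A_ph = (a * B - Q / 2.0) * g_tph + (B / 2.0) * g_phph
    return A_t, A_ph
```

A quick check: for $a = Q = 0$ the cross term $g_{t\phi}$ vanishes, so on the equatorial plane $A_t = 0$ and $A_\phi = (B/2)\,r^2$, the uniform-field limit far from the hole.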
When $Q=2aB_W$, the inductive charge is the Wald charge $Q_W$, and
$B_W$ is the magnetic field corresponding to the Wald charge [4].
The induced charge like the Wald charge $Q_W$ is so small that it
has no contribution to the background geometry of the black hole
[71]. However, the induced charge can exert an important influence
on the motion of a charged particle under some circumstances, as
will be shown in later discussions.
The motion of the particle around the rotating black hole embedded
in the external magnetic field is described by the Hamiltonian
\begin{eqnarray}
H &=& \frac{1}{2m}g^{\mu\nu}(p_{\mu}-qA_{\mu})(p_{\nu}
-qA_{\nu}) \nonumber \\
&=& \frac{H_1}{m} +\frac{1}{2m}\frac{\Delta}{\Sigma}p^{2}_{r}
+\frac{1}{2m}\frac{p^{2}_{\theta}}{\Sigma},
\end{eqnarray}
where $p_{r}$ and $p_{\theta}$ are generalized momenta, and $H_1$
is a function of $r$ and $\theta$ [68]:
\begin{eqnarray}
H_1 &=&\frac{1}{2}g_{tt}[f_1(E+qA_t)+f_2(L-qA_{\phi})]^2
\nonumber \\
&& +\frac{1}{2}g_{\phi\phi}[f_2(E+qA_t)+f_3(L-qA_{\phi})]^2 \nonumber \\
&& -g_{t\phi}[f_1(E+qA_t)+f_2(L-qA_{\phi})] \nonumber \\
&& \cdot [f_2(E+qA_t)+f_3(L-qA_{\phi})].
\end{eqnarray}
Here, $f_1$, $f_2$ and $f_3$ are functions of $r$ and $\theta$ as
follows:
\begin{eqnarray}
f_1 &=& \frac{g_{\phi\phi}}{c^2(g_{tt}g_{\phi\phi}-g^{2}_{t\phi})}, \\
f_2 &=& \frac{g_{t\phi}}{c(g_{tt}g_{\phi\phi}-g^{2}_{t\phi})}, \\
f_3 &=& \frac{g_{tt}}{g_{tt}g_{\phi\phi}-g^{2}_{t\phi}}.
\end{eqnarray}
$E=-p_t$ is a constant energy of the particle, and $L=p_{\phi}$ is
a constant angular momentum of the particle. $p_t$ and $p_{\phi}$
are generalized momenta, which satisfy the relations
\begin{eqnarray}
&& \dot{t} = \frac{\partial H}{\partial p_t}= -f_1(E+qA_t)-f_2(L-qA_{\phi}), \\
&& \dot{\phi} = \frac{\partial H}{\partial p_{\phi}}=
f_2(E+qA_t)+f_3(L-qA_{\phi}).
\end{eqnarray}
Because the 4-velocity
$U^{\alpha}=(c\dot{t},\dot{r},\dot{\theta},\dot{\phi})$ is always
identical to the constant $U^{\alpha}U_{\alpha}=-c^2$, the
Hamiltonian (5) remains invariant and obeys the constraint
\begin{equation}
H = -\frac{1}{2} mc^2.
\end{equation}
In fact, this third invariable quantity corresponds to the rest
mass of the particle.
For simplicity, $c$ and $G$ take geometrized units: $c=G=1$.
Dimensionless operations to the Hamiltonian (5) are carried out
through a series of scale transformations: $r\rightarrow rM$,
$t\rightarrow tM$, $\tau\rightarrow \tau M$, $a\rightarrow aM$,
$E\rightarrow Em$, $p_r\rightarrow mp_r$, $L\rightarrow mML$,
$p_{\theta}\rightarrow mMp_{\theta}$, $q\rightarrow mq$,
$B\rightarrow B/M$ and $H\rightarrow mH$. Note that no scale
transformation is given to the inductive charge $Q$. When these
treatments are employed, $M$ and $m$ in all the above-mentioned
expressions are eliminated or taken as geometrized units: $M=m=1$.
The event horizon of the black hole exists for $|a|\leq 1$.
For convenience, we take $Q^{*}=qQ$ and
$B^{*}=qB$.
\section{Effective potential and stable circular orbits}
Apart from the three integrals (10)-(12) in the dimensionless
Hamiltonian (5), the fourth constant (the analogue of the Carter
constant) is absent in general when the external magnetic
field forces are included. The absence of the fourth constant is
mainly caused by the $g_{\phi\phi}$ term in Eq. (4) rather than
the $g_{tt}$ term in Eq. (3). Because $g_{tt}$ is only a function
of $r$, it does not destroy the presence of the fourth constant.
However, $g_{\phi\phi}$ is a function of $r$ and $\theta$ and
therefore the Hamilton-Jacobi equation of Eq. (5) has no separable
form of variables $r$ and $\theta$. This leads to the absence of
the fourth constant. Of course, the $g_{t\phi}$ terms being
functions of $r$ and $\theta$ in Eqs. (3) and (4) also have some
contributions to the absence of the fourth constant. In other
words, the main contribution to the absence of the fourth constant
in the system (5) comes from the external magnetic fields
associated with $B^{*}$. The inductive charges associated with
$Q^{*}$ also exert some influences on the absence of the fourth
constant. Thus, the dimensionless Hamiltonian (5) is
non-integrable. However, it can be integrable for some particular
cases. For instance, radial motions of charged particles on the
equatorial plane $\theta=\pi/2$ are integrable. The radial motions
are described in terms of effective potential $V$, i.e., the
expression of $E$ obtained from Eqs. (5) and (12) with
$p_r=p_{\theta}=0$:
\begin{equation}
V=E=\frac{B}{2 A}+\sqrt{\frac{B^{2}+4 A C+2 A}{4 A^{2}}},
\end{equation}
where $A$, $B$ and $C$ are expressed as
\begin{eqnarray}
A &=& -\frac{1}{2}(f_{1}^{2} g_{tt}+f_{2}^{2} g_{\phi\phi}-2 f_{1} f_{2} g_{t\phi}), \nonumber \\
B &=& B_1+B_2+B_3, \nonumber \\
C &=& \frac{1}{2} g_{tt} C_1+\frac{1}{2} g_{\phi\phi} C_2-g_{t\phi} C_3, \nonumber \\
B_1 &=& g_{tt} q A_{t} f_{1}^{2}+g_{tt} f_{1} f_{2} L-g_{tt} f_{1} f_{2} q A_{\phi}, \nonumber \\
B_2 &=& g_{\phi\phi} q A_{t} f_{2}^{2}+g_{\phi\phi} f_{2} f_{3} L-g_{\phi\phi} f_{2} f_{3} q A_{\phi}, \nonumber \\
B_3 &=& -2 g_{t\phi} q A_{t} f_{1} f_{2}-g_{t\phi} f_{1} f_{3} L+g_{t\phi} q A_{\phi} f_{1} f_{3} \nonumber \\
&& -g_{t\phi} f_{2}^{2} L+g_{t\phi} q A_{\phi} f_{2}^{2}, \nonumber \\
C_1 &=& f_{1}^{2} q^{2} A_{t}^{2}+2 f_{1} f_{2} q L A_{t}-2 f_{1} f_{2} q^{2} A_{t} A_{\phi}\nonumber \\
&& +f_{2}^{2}\left(A_{\phi}^{2} q^{2}-2 A_{\phi} L q+L^{2}\right), \nonumber \\
C_2 &=& f_{2}^{2} q^{2} A_{t}^{2}+2 f_{2} f_{3} q L A_{t}-2 f_{2} f_{3} q^{2} A_{t} A_{\phi}\nonumber\\
&&+f_{3}^{2}\left(A_{\phi}^{2} q^{2}-2 A_{\phi} L q+L^{2}\right), \nonumber \\
C_3 &=& A_{\phi}^{2} f_{2} f_{3} q^{2}-A_{\phi} A_{t} f_{1} f_{3} q^{2}-A_{\phi} A_{t} f_{2}^{2} q^{2}\nonumber\\
&&+A_{t}^{2} f_{1} f_{2} q^{2}-2 A_{\phi} f_{2} f_{3} L q+A_{t} f_{1} f_{3} L q\nonumber\\
&&+A_{t} f_{2}^{2} L q+f_{2} f_{3} L^{2}. \nonumber
\end{eqnarray}
The local minimal values of the effective potential correspond to
stable circular orbits, which satisfy the relation $dr/d\tau=0$
and the following conditions
\begin{equation}
\frac{d V}{d r}=0,
\end{equation}
\begin{equation}
\frac{d^{2} V}{d r^{2}} \geq 0.
\end{equation}
When the equality sign (=) is taken in Eq. (15), the innermost
stable circular orbit (ISCO) is present.
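The conditions (14)-(15) can be verified in the Schwarzschild limit $a = B^{*} = Q^{*} = 0$, where the effective potential reduces to the textbook form $V(r) = \sqrt{(1 - 2/r)(1 + L^2/r^2)}$ in geometrized units, and the ISCO sits at $r = 6$ with the critical angular momentum $L = 2\sqrt{3} = 3.4641$ quoted below for Table 3. The sketch below, written by us, evaluates this reduced form (not the full expression (13)) by finite differences.

```python
from math import sqrt

# Schwarzschild-limit check of the circular-orbit conditions (14)-(15):
# at the ISCO, both dV/dr and d^2V/dr^2 vanish at r = 6 for L = 2*sqrt(3).
L = 2.0 * sqrt(3.0)

def V(r):
    return sqrt((1.0 - 2.0 / r) * (1.0 + L**2 / r**2))

h = 1e-4
r = 6.0
dV = (V(r + h) - V(r - h)) / (2.0 * h)            # ~ 0: circular orbit
d2V = (V(r + h) - 2.0 * V(r) + V(r - h)) / h**2   # ~ 0: marginal stability
E_isco = V(6.0)                                    # = sqrt(8/9) ~ 0.9428
```

For $L$ above the critical value the second derivative at the local minimum becomes strictly positive, giving the stable circular orbits of condition (15).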
Taking parameters $L=2\sqrt{3}$, $a=0.1$, and $Q^{*}=2\times
10^{-4}$ (If $q=0.1$ and $B_W=0.01$, then $Q=0.002$ is the Wald
charge), we plot the effective potentials for several different
magnetic parameters $B^{*}$ in Fig. 1. When the magnetic parameter
$B^{*}$ increases, the left part of the effective-potential curve
moves away from the black hole, while the overall shape of the curve
is not altered. The energies of the unstable or stable circular
orbits become smaller. That is to say, the effective potential for
a larger value of $B^{*}$ is below that for a smaller value of
$B^{*}$. However, the radii of the stable circular orbits in Table
1 get larger as $B^{*}$ increases.
An increase of the inductive charge parameter $Q^{*}$ does not
alter the shape of the effective potential, but makes the left
part of the effective-potential curve move away from the black hole in
Fig. 2. Meanwhile, the energies of the unstable or stable circular
orbits decrease, but the radii of the stable circular orbits
increase in Table 2.
Fig. 3 clearly describes the dependence of the effective potential
on the black hole's spin $a$. The energies of the stable circular
orbits increase when $a$ gets larger. The radii of the stable
circular orbits always increase (see also Table 3).
How does the effective potential vary as the particle's angular
momentum $L$ increases? The effective potential for a larger value
of $L$ always lies above that for a smaller value of $L$, as shown
for the Kerr spacetime in Figs. 4 (b) and (c). Note that there are
critical values of $L$ corresponding to the ISCOs colored red in
Table 3, such as $L=3.4641$ for the Schwarzschild spacetime with
$a=0$. Stable circular orbits exist in Table 3 when the angular
momentum $L$ exceeds the critical value, but not when $L$ is
smaller. As $a$ or $L$ increases, the radii of the stable circular
orbits also increase.
Although the radii of the stable circular orbits increase with an
increase of $a$, the radii of the ISCOs become smaller in Table 3.
In addition, the radii of the ISCOs depend on both the sign and
the magnitude of the particle's angular momentum. When $L>0$ (the
black hole's spin is aligned with the particle's angular
momentum), the orbits are called direct orbits; when $L<0$ (the
spin is opposite to the particle's angular momentum), they are
called retrograde orbits [38]. For given parameters $a$, $Q^{*}$
and $B^{*}$, the radii of the ISCOs for retrograde orbits are
larger than those for direct orbits. As any one of the parameters
$a$, $Q^{*}$ and $B^{*}$ increases, the radii of the ISCOs
decrease for both direct and retrograde orbits. More details on
the ISCOs are listed in Tables 4-6.
\section{Numerical investigations}
In general, the Hamiltonian system (5) describing the motion of
charged particles off the equatorial plane is nonintegrable and
admits no analytical solutions, so numerical integration schemes
must be used. For a Hamiltonian problem, symplectic integrators
are a natural first choice, and explicit symplectic integrators
are preferred because they are generally more efficient than
implicit ones of the same order. However, owing to the difficulty
of separating variables, or of splitting the Hamiltonian into two
integrable parts, in curved spacetimes it is usually implicit
rather than explicit symplectic integrators that are applicable
[41, 54-58].
Recently, Wang et al. [64-66] split the Hamiltonians of
non-rotating black holes surrounded by external magnetic fields
into several parts with analytical solutions as explicit functions
of proper time $\tau$, and successfully constructed the explicit
symplectic integrators for these non-rotating black holes. More
recently, the authors of [67] gave a time transformation to the
Kerr geometry, and designed the explicit symplectic integrators
for the time-transformed Hamiltonian with a desired splitting
form. The time-transformed explicit symplectic integrators were
applied to study the dynamics of charged particles moving around
the Kerr black hole surrounded by external magnetic fields without
the inductive charge $Q$ [68]. Following the two works [67, 68],
we apply explicit symplectic integrators to the Hamiltonian
problem (5).
\subsection{Explicit symplectic integrators}
The authors of [67] introduced a time transformation function
\begin{equation}
d\tau=g(r,\theta)dw, ~~~~ g(r,\theta)=\frac{\Sigma}{r^2},
\end{equation}
where $w$ is a new coordinate time unlike the original coordinate
time $t$. The Hamiltonian (15) becomes
\begin{equation}
K=g(H+p_0)=\frac{\Sigma}{r^2} (H_1+p_0)
+\frac{\Delta}{2r^2}p^{2}_{r} +\frac{1}{2r^2}p^{2}_{\theta}.
\end{equation}
The new Hamiltonian $K$ is a time-transformed Hamiltonian, where
the proper time $\tau$ is viewed as a coordinate $q_0=\tau$ and
its corresponding momentum is $p_0=-H=1/2\neq p_t$. In this case,
$K$ is always identical to zero for any coordinate time $w$, i.e.,
\begin{equation}
K\equiv 0.
\end{equation}
Now, the time-transformed Hamiltonian $K$ in Eq. (17) is split
into five parts
\begin{equation}
K=K_1+K_2+K_3+K_4+K_5,
\end{equation}
where all sub-Hamiltonians are expressed as
\begin{eqnarray}
K_1 &=& \frac{\Sigma}{r^2} (H_1+p_0), \\
K_{2} &=& \frac{1}{2}p^{2}_{r},\\
K_{3} &=& -\frac{1}{r}p^{2}_{r},\\
K_{4} &=&
\frac{a^2}{2r^2}p^{2}_{r}, \\
K_{5} &=& \frac{1}{2r^2}p^{2}_{\theta}.
\end{eqnarray}
$K_2$, $K_3$ and $K_5$ are consistent with those of [67], but
$K_1$ and $K_4$ are not.
Each of the five sub-Hamiltonians $K_1$, $K_2$, $K_3$, $K_4$ and
$K_5$ is solved analytically, and its solutions are explicit
functions of the new coordinate time $w$. The operators associated
with the solutions of $K_1$, $K_2$, $K_3$, $K_4$ and $K_5$ are
$\hat{K}_1$, $\hat{K}_2$, $\hat{K}_3$, $\hat{K}_4$ and
$\hat{K}_5$, respectively. The solutions of the system (5)
advancing a new coordinate time step $\Delta w=h$ are given in
terms of an explicit second-order symplectic integrator
\begin{eqnarray}
S^{K}_2(h) &=& \hat{K}_5 (\frac{h}{2}) \circ
\hat{K}_4(\frac{h}{2})\circ \hat{K}_3(\frac{h}{2})\circ
\hat{K}_2(\frac{h}{2})\circ \hat{K}_1(h) \nonumber
\\ && \circ
\hat{K}_2(\frac{h}{2}) \circ \hat{K}_3(\frac{h}{2})\circ
\hat{K}_4(\frac{h}{2})\circ \hat{K}_5(\frac{h}{2}),
\end{eqnarray}
as was proposed in [67]. The second-order method easily yields a
fourth-order symplectic integrator [72]
\begin{equation}
S^{K}_4(h)=S^{K}_2(\gamma h)\circ S^{K}_2(\delta h)\circ
S^{K}_2(\gamma h),
\end{equation}
where $\delta=1-2\gamma$ and $\gamma=1/(2-\sqrt[3]{2})$.
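The triple-jump composition in Eq. (26) is generic: given any symmetric second-order kernel, the coefficients $\gamma$ and $\delta$ yield a fourth-order map. The sketch below illustrates this on a plain harmonic oscillator (an illustrative stand-in, not the Kerr splitting itself), comparing the bounded energy error of the composed fourth-order map with that of the underlying leapfrog:

```python
import numpy as np

# Triple-jump coefficients: gamma = 1/(2 - 2^{1/3}), delta = 1 - 2*gamma.
GAMMA = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
DELTA = 1.0 - 2.0 * GAMMA

def s2(q, p, h):
    """Second-order leapfrog (kick-drift-kick) for H = p^2/2 + q^2/2."""
    p -= 0.5 * h * q
    q += h * p
    p -= 0.5 * h * q
    return q, p

def s4(q, p, h):
    """Fourth-order composition S2(gamma*h) o S2(delta*h) o S2(gamma*h)."""
    q, p = s2(q, p, GAMMA * h)
    q, p = s2(q, p, DELTA * h)
    q, p = s2(q, p, GAMMA * h)
    return q, p

def max_energy_error(stepper, h, nsteps=20000):
    q, p = 1.0, 0.0
    e0 = 0.5 * (q * q + p * p)
    worst = 0.0
    for _ in range(nsteps):
        q, p = stepper(q, p, h)
        worst = max(worst, abs(0.5 * (q * q + p * p) - e0))
    return worst

err2 = max_energy_error(s2, 0.1)
err4 = max_energy_error(s4, 0.1)
print(err2, err4)  # both bounded (no secular drift); the composed map is far more accurate
```

For the composition to be fourth order, the coefficients must satisfy $2\gamma+\delta=1$ and $2\gamma^{3}+\delta^{3}=0$, which the values above do.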
In fact, the explicit symplectic algorithms (25) and (26) can be
regarded as an extension to the Kerr spacetime of the
time-transformed symplectic method of [73]. The method of Mikkola
[73] was designed to preserve the good performance of symplectic
integrators during close encounters of objects, or for highly
eccentric orbits, in the solar system; his integrators use fixed
steps in the new time, for which they remain symplectic, but
adaptive steps in the original time. By contrast, the time steps
in the method of [67], including the present integrators (25) and
(26), are approximately constant in the proper time $\tau$ because
$g\approx 1$ and $\Delta\tau\approx g\Delta w\approx\Delta w=h$
for $r\gg2$ in Eq. (16). As the authors of
[67] claimed, the time transformation mainly aims to eliminate the
function $\Sigma$ in the denominators of the terms $p_r$ and
$p_{\theta}$ in the Hamiltonian $H$ and to cause the
time-transformed Hamiltonian $K$ to have the desired separable
form.
For comparison with S4, a fourth-order implicit symplectic
algorithm (IM4), composed of three second-order implicit midpoint
methods [56], is applied to the time-transformed Hamiltonian $K$.
The conventional fourth-order explicit Runge-Kutta method (RK4) is
also employed. Of course, IM4 and RK4 are also suitable for the
original Hamiltonian (5).
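For reference, the implicit midpoint rule underlying IM4 advances $z_{n+1}=z_{n}+h\,f\big((z_{n}+z_{n+1})/2\big)$, which must be solved iteratively at every step; this is the source of its extra cost relative to an explicit method. A minimal sketch on a harmonic oscillator (an illustrative assumption, not the Hamiltonian (5)), where the midpoint rule conserves the quadratic energy exactly up to the iteration tolerance:

```python
import numpy as np

def f(z):
    # Hamiltonian vector field of H = (p^2 + q^2)/2, with z = (q, p)
    q, p = z
    return np.array([p, -q])

def midpoint_step(z, h, tol=1e-14, itmax=50):
    """One implicit midpoint step, solved by fixed-point iteration."""
    z_new = z + h * f(z)  # explicit Euler predictor
    for _ in range(itmax):
        z_next = z + h * f(0.5 * (z + z_new))
        if np.max(np.abs(z_next - z_new)) < tol:
            z_new = z_next
            break
        z_new = z_next
    return z_new

z = np.array([1.0, 0.0])
e0 = 0.5 * np.dot(z, z)
drift = 0.0
for _ in range(5000):
    z = midpoint_step(z, 0.1)
    drift = max(drift, abs(0.5 * np.dot(z, z) - e0))
print(drift)  # at the level of the iteration tolerance
```

The implicit midpoint rule preserves quadratic invariants exactly, so the residual drift here reflects only the fixed-point tolerance and round-off.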
The new coordinate time step is given by $h=1$. The parameters are
$E=0.9935$, $L=4.6$, $a=0.5$, $B^{*}=1\times10^{-3}$, and $Q^{*}=1\times10^{-3}$.
The initial conditions are $\theta=\pi/2$ and $p_{r}=0$. If the
initial separation $r$ is given, then the initial value
$p_{\theta}>0$ is obtained from Eq. (12). We take $r=55$ for Orbit
1, and $r=75$ for Orbit 2. When the three algorithms S4, IM4 and
RK4 independently integrate the two orbits in the system (17), the
evolutions of $K$ in Eq. (18) with integration time $w$ are shown
in Figs. 5 (a) and (b). The explicit symplectic method S4 and the
implicit symplectic algorithm IM4 do not show secular drifts in
Hamiltonian errors, but RK4 does. In addition, S4 and IM4 are
almost the same, and have two orders of magnitude smaller errors
than RK4. The accuracy of each algorithm for Orbit 2 in Fig. 5(b)
is better than that for Orbit 1 in Fig. 5(a). Is this because
Orbit 2 is regular and Orbit 1 is chaotic? In fact, the opposite
is true: Orbit 1 is a regular Kolmogorov-Arnold-Moser (KAM) torus,
whereas Orbit 2 is chaotic, as shown by the Poincar\'{e} section
at the plane $\theta=\pi/2$ with $p_{\theta}>0$ in Fig. 5(c). The
better accuracy for Orbit 2 arises instead because Orbit 1 has a
larger average period than Orbit 2; although neither orbit is
exactly periodic, and Orbit 2 is chaotic, both have approximate
average periods. Based on its good computational efficiency, S4 is
employed in the later studies.
\subsection{Dynamics of generic orbits}
Let us consider the effect of a small change of the inductive
charge parameter $Q^{*}$ on the orbital dynamics. If $Q^{*}=1\times10^{-3}$
in Fig. 5(c) is replaced by $Q^{*}=0$, no chaos exists in Fig.
6(a). When $Q^{*}=5\times10^{-4}$, all the orbits in Fig. 6(b) are still
regular. As the inductive charge parameter increases to
$Q^{*}=6\times10^{-4}$ in Fig. 6(c), the pink orbit with the initial
separation $r=100$ is chaotic. For $Q^{*}=8\times10^{-4}$ in Fig. 6(d),
Orbit 2 and the pink orbit with the initial separation $r=100$ are
chaotic. As the inductive charge parameter increases to the Wald
charge $Q^{*}=2aBq=1\times10^{-3}$ in Fig. 5(c), chaos becomes
stronger in the global phase-space structure. These facts show
that a small increase of the inductive charge parameter $Q^{*}$
can easily induce chaos. An explanation of this result is as
follows. The inductive charges in the vicinity of the Wald charge
are so small that they do not contribute to the spacetime
curvature, but they can still exert a noticeable influence on the
motion of charged particles and even enhance the chaotic
properties. As claimed above, the inductive charges contribute
only weakly to the absence of the fourth constant of motion, the
external magnetic fields providing the main contribution.
Nevertheless, when $B^{*}$ takes an appropriate value and the
charge-to-mass ratio of the particle ($q/m$) is large enough, the
inductive charges can contribute to the occurrence of chaos.
A minor change of the magnetic parameter $B^{*}$ also has an
important effect on the orbital dynamics. As $B^{*}$ increases,
the evolution of orbits transits from regular KAM tori for
$B^{*}=3\times10^{-4}$ in Fig. 7(a) to chaos for $B^{*}=8\times10^{-4}$ in Fig.
7(b), and to stronger chaos for $B^{*}=1.1\times10^{-3}$ in Fig. 7(c). An
increase of $B^{*}$ means an increase of the Lorentz force, and
therefore enhances the strength of chaos.
The above demonstrations mainly focus on how the two parameters
$Q^{*}$ and $B^{*}$ exert influences on the dynamical behavior of
orbits. What about the effect of the black hole spin $a$ on the
orbital dynamics? Fig. 8 gives an answer to this question. It is
found that the chaotic properties are gradually weakened and ruled
out as the dragging effects of the spacetime by the rotating black
hole increase. This fact supports the result of [12]. It is shown
again that an increase of the inductive charge parameter $Q^{*}$
is helpful to induce chaos for a given value $a$. Similarly, an
increase of the particle's angular momentum $L$ also results in
weakening and suppressing the chaotic properties, as shown in Fig.
9.
As is well known, chaos becomes stronger as the particle's energy
$E$ increases. This result is confirmed by fast Lyapunov
indicators (FLIs) in Fig. 10. Here, computations of the FLIs are
based on the method of [74]. The FLI is the logarithm of the ratio
of the separation between two nearby trajectories $d(\tau)$ at
proper time $\tau$ to the starting separation $d(0)$:
\begin{equation}\label{Eq:fli}
FLI=\log_{10}\frac{d(\tau)}{d(0)}.
\end{equation}
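As a self-contained illustration of the indicator (using the Chirikov standard map as a stand-in system, not the black-hole model), the tangent vector can be propagated with the variational equations and renormalized at every step, accumulating the logarithm so that the FLI never overflows:

```python
import math

def fli_standard_map(K, theta, p, n=1000):
    """FLI = log10(|v(n)|/|v(0)|) for the Chirikov standard map
    p' = p + K sin(theta), theta' = theta + p', with the tangent
    vector renormalized every step to avoid overflow."""
    dth, dp = 1.0, 0.0            # tangent vector v(0), |v(0)| = 1
    fli = 0.0
    for _ in range(n):
        c = K * math.cos(theta)   # d(p')/d(theta), evaluated before the update
        p += K * math.sin(theta)
        theta = (theta + p) % (2.0 * math.pi)
        dp = dp + c * dth         # variational (tangent) map of the step
        dth = dth + dp
        norm = math.hypot(dth, dp)
        fli += math.log10(norm)   # accumulate log-growth, then renormalize
        dth /= norm
        dp /= norm
    return fli

fli_regular = fli_standard_map(K=0.5, theta=1.0, p=2.0)  # rotational (regular) orbit
fli_chaotic = fli_standard_map(K=7.0, theta=1.0, p=0.0)  # strongly chaotic orbit
print(fli_regular, fli_chaotic)
```

For the regular orbit the separation grows roughly linearly, so the FLI grows like $\log_{10} n$; for the chaotic orbit it grows linearly in $n$.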
Different growth rates of the separation $d(\tau)$ with proper
time $\tau$ allow one to distinguish between ordered and chaotic
orbits. A slow polynomial (algebraic) increase of the separation
indicates the regularity of the considered bounded orbit for
$E=0.9925$ in Fig. 10, whereas a rapid exponential increase is
characteristic of chaos, as for the bounded orbit with $E=0.9935$.
The FLI for $E=0.995$ is smaller than that for $E=0.997$ after the
integration time $w=10^{6}$ (or $\tau=10^{6}$); therefore, the
former orbit is less chaotic than the latter. That is, an increase
of the energy $E$ enhances the strength of chaos.
We find that the FLIs are always smaller than 3.5 for the regular
case, whereas they exceed this value for the chaotic case when the
integration time reaches $w=10^{6}$. Now, we employ the technique
of FLIs to trace how a small variation of one parameter affects a
dynamical transition from order to chaos. Only one of the
parameters is given many different values, and the initial
conditions (except $p_{\theta}$) and the other parameters are
fixed. Each FLI is obtained after the integration time $w=10^{6}$.
The transition from order to chaos occurs when $Q^{*}\geq0.00056$ (Fig.
11(a)), $B^{*}\geq 0.000844$ (Fig. 11(b)), or $E\geq0.99379$ (Fig.
11(c)). However, the transition from chaos to order occurs when
$L\geq5.84789$ (Fig. 11(d)). That is, the strength of chaos is
enhanced as one of the parameters $Q^{*}$, $B^{*}$ and $E$
increases, but weakened as the parameter $L$ increases. The
effects of variations of these parameters on the orbital dynamics
described by the technique of FLIs are consistent with those
described by the technique of Poincar\'{e} sections.
The transition from order to chaos occurs when the black hole's
spin $a\geq0.046$ in Fig. 11(e). Namely, an increase of $a$ leads
to strong chaos. The result is consistent with that of [68], but
unlike that of Fig. 8 in which the dragging effects of the
spacetime weaken the chaotic properties from the global
phase-space structures. The different results of Figs. 8 and 11(e)
are due to distinct choices of the initial conditions
and other parameters. Perhaps, the dependence of the dynamical
behavior on the spin may be different if the chosen initial
conditions and other parameters are varied.
\section{Conclusions}
In this paper, we have mainly focused on the dynamics of charged
particles around a Kerr black hole immersed in an external
electromagnetic field, which can be considered as a tidal
environment.
At first, we discussed the radial motion of charged particles on
the equatorial plane through the effective potential, tracing how
the dynamical parameters influence it. The particle energies at
the local maxima of the effective potential increase with the
black hole spin and the particle angular momentum, whereas they
decrease as either the inductive charge parameter or the magnetic
field parameter increases. In addition, the radii of stable
circular orbits on the equatorial plane always increase with these
parameters, whereas the radii of the ISCOs decrease as any one of
the black hole spin $|a|$, the inductive charge parameter $Q^{*}$
and the uniform magnetic field parameter $B^{*}$ increases.
Then, we investigated the motion of charged particles off the
equatorial plane using a time-transformed explicit symplectic
integrator. The effects of small variations of the parameters on
the regular and chaotic dynamics were studied through the
techniques of Poincar\'{e} sections and fast Lyapunov indicators.
The dynamics depends sensitively on small variations of the
inductive charge parameter, the magnetic field parameter, the
energy and the angular momentum. Chaos is easily induced as the
inductive charge parameter, the magnetic field parameter or the
energy increases, whereas it is weakened as the angular momentum
increases. When the dragging effects of the spacetime increase,
the chaotic properties may be either weakened or enhanced,
depending on the circumstances.
This theoretical work may have potential astrophysical
applications. The unstable and stable circular orbits and the
ISCOs would be helpful for the study of accretion disks. The
theory of chaotic scattering in the combined effective potential
and the asymptotically uniform magnetic field would be applicable
to explaining the mechanism behind the ejection of charged
particles. The existence of magnetic fields involving the induced
electric field might be demonstrated through observational
evidence.
\section*{Acknowledgments}
The authors are very grateful to three referees for valuable
comments and useful suggestions. This research has been supported
by the National Natural Science Foundation of China [Grant Nos.
11973020 (C0035736) and 12133003], the Special Funding for Guangxi
Distinguished Professors (2017AD22006), and the National Natural
Science Foundation of Guangxi (Nos. 2018GXNSFGA281007, and
2019JJD110006).
\section{Introduction}
\label{sec:Intro}
Atomic nuclei in condensed phases behave, in many cases, as quantum objects. For instance, Nuclear Quantum Effects are responsible for the {\em heat capacity problem}, i.e., the deviation from the classical Dulong and Petit law for the heat capacity of solids at low temperatures. The solution of this issue eventually led to the development of the harmonic theory of solids, an accurate quantum theory that lets us compute their thermal properties at temperatures lower than the Debye temperature, and can be corrected to account for anharmonic effects~\cite{BornHuang,AshcroftMermin}. By reducing the description of an insulating solid to a set of independent harmonic oscillators, the {\em phonons}, weakly interacting through anharmonic couplings, this theory also provides a framework for the computation of transport properties, in particular of heat conductivity. In contrast to the very high accuracy that can be achieved for thermal properties, however, the computation of transport properties is considerably more delicate and often requires ad hoc approximations for the lifetime of phonons, which is limited by phonon-phonon scattering processes and the presence of defects.
The general framework of the harmonic theory of solids, originally developed for crystals, can be adapted to {\em disordered solids}. This comes at the expense of employing a numerical approach to characterize the harmonic eigenmodes, which replace phonons and are no longer determined by symmetries. Again, this procedure can be efficiently employed to determine thermal properties, while its application to transport is much more limited. Very often these properties are indeed calculated via classical statistical mechanics approaches (based on classical Molecular Dynamics simulations), whose results are next empirically corrected to account for quantum effects (see, among others, ~\cite{mizuno2016relation}). We also note that, in systems (ordered or disordered) involving light nuclei (e.g., hydrogen in solid ice), the large wavelength associated with light atoms makes the harmonic approximation itself inappropriate. Therefore, an exact calculation should in general be considered even for thermal properties, or for the determination of phase boundaries~\cite{Bronstein2016}.
The harmonic theory of crystalline solids undoubtedly constitutes a remarkable achievement, as many results can be obtained based on an almost fully analytical approach. However, the above limitations in computing transport properties or in applying the theory to disordered structures, point to the necessity of numerical approaches. It would therefore be highly desirable to develop a numerical methodology that could fully take into account the quantum nature of atomic nuclei, allowing us to determine without approximations both thermal and transport properties of any insulating solid.
When interested in thermal properties, an exact numerical method that encompasses all quantum aspects and is valid at any temperature, independently of the strength of anharmonic effects, involves the path integral representation of the partition function~\cite{Barker1979,Chandler1981,Herman1982,Pollock1984}. In the absence of exchange effects (a reasonable hypothesis in most common solids), the determination of thermodynamic properties at the inverse temperature $\beta =(k_B T)^{-1}$ involves the sampling of an equivalent system where each quantum particle is replaced by a discretized "path" consisting of $M$ "imaginary time slices". The method becomes exact in the limit of large $M$, and the sampling of $N$ quantum degrees of freedom at temperature $T$ turns out to be equivalent to that of $N\times M$ classical degrees of freedom at temperature $M\times T$. This sampling can be achieved efficiently using Monte Carlo or Molecular Dynamics methods, leading to the PIMC and PIMD methods, respectively.
Computation of transport properties is more problematic. The standard Green and Kubo statistical mechanics approach to transport coefficients~\cite{Green1952,Kubo1957,Luttinger1964}, obtains the heat conductivity tensor $\kappa$ in a system of volume $V$ at temperature $T$ from a time correlation function of the energy current operator $\bf{J}$ as,
\begin{equation}
\kappa_{\alpha \beta } = \frac{1}{Vk_BT^2}\int_0^\infty dt \langle J_\alpha(t)J_\beta(0)\rangle.
\label{eq:k_ab}
\end{equation}
Unfortunately, the path integral method provides directly static (time-independent) quantities only. A possible solution to this problem has been identified long ago~\cite{Thirumalai1983}, by noting that the PIMC approach can rather supply the analytical continuation of the correlation functions on the {\em imaginary time} axis, simply by computing the correlation between two imaginary time slices along the path. The power spectrum, $S_{AB}(\omega)$, of a real time correlator, $C_{AB}$, between two operators $A$ and $B$ can then be obtained in an apparently straightforward manner by using the identity,
\begin{equation}
C_{AB}(i\tau) = \int_0^\infty d\omega \left[S_{AB}(\omega) e^{-\hbar\omega\tau} + S_{BA}(\omega)e^{-\hbar\omega(\beta-\tau)}\right].
\label{eq:inversion}
\end{equation}
While Eq.~(\ref{eq:inversion}) in principle allows one to obtain $S$ based on the data for $C(i\tau)$, with $\tau$ in $[0,\beta]$, it is well known that the inversion problem is ill-posed, in the sense that determining $S$ with high precision is an extremely difficult task, even if $C$ is known with excellent accuracy. For this reason, the approach pioneered by a few groups in the eighties within the framework of path integral calculations did not spread widely. Many recent studies obtained in various fields~\cite{PhysRevB.57.10287,Bertaina2017,LEVY2017149,PhysRevB.95.014102,PhysRevB.98.134509}, however, indicate that the present computing capabilities should by now allow us to carry out this program satisfactorily, by addressing the two major (and related) difficulties: {\em i)} to obtain with high accuracy the imaginary time correlation, in particular for current operators which suffer from the well known issue of diverging variance~\cite{Herman1982} in the limit of large $M$; and {\em ii)} to solve the ill-posed problem of obtaining the frequency spectrum from the imaginary time correlation functions.
Here we address these two issues based on numerical and analytical calculations of very simple examples, namely a single harmonic oscillator or an ensemble of oscillators with a continuum distribution of frequencies. The interest of this choice is twofold. First, due to its simplicity, we can obtain exact analytical expressions for most quantities of interest, including all time dependent correlations and exact expressions for the discretized path integrals. The availability of these expressions enables a precise control of the different sources of error, which can be both of statistical origin or associated with the discretization itself. Second, the harmonic oscillator is at the heart of the harmonic theory of solids, the natural starting point for any calculation of transport in insulating solids. Completely controlling this case is, therefore, crucial for any serious step forward in this direction.
The manuscript is organized as follows: in Sect.~\ref{sec:PIMC} we introduce the general formalism of the path integral and imaginary time correlations, while in Sect.~\ref{sec:inversion} we present the procedure that we have developed to cope with the inversion problem. In Sect.~\ref{sec:estimators} we next describe a new approach that circumvents the issue of the diverging variance for current-current correlators. Finally, in Sects~\ref{sec:case1} and~\ref{sec:case2} we illustrate the application of these methods to a single harmonic oscillator, followed by the case of a collection of oscillators with a continuum distribution of frequencies, mimicking the density of states of a crystalline solid. In Sect.~\ref{sec:conclusion} we draw our conclusions.
\section{\label{sec:PIMC}The path integral formalism for time correlations}
The path integral Monte Carlo method provides a numerically exact route to the evaluation of thermodynamic properties of quantum systems at finite temperature, $T$. If we consider, for simplicity, a system described by a single degree of freedom $X$ of mass $m$, with Hamiltonian $ \hat{H} = \hat{P}^2/2m + U(\hat{X})$, the average value of an observable $\hat{A}$ is
\begin{equation}
\langle\hat{A} \rangle =\frac{1}{Z(\beta)} \text{Tr} [\hat{A}\, e^{-\beta \hat{H} } ],
\end{equation}
where $Z(\beta)=\text{Tr}[e^{-\beta\hat{H}}]$. In the PIMC approach, the trace is evaluated by expressing the density operator as $e^{-\beta\hat{H}}= (e^{-\beta\hat{H}/M})^M$. In the position representation $\vert X\rangle$, and using the notation $\rho(X,Y,\tau) = \langle X \vert e^{-\tau\hat{H}} \vert Y \rangle $, we can write
\begin{multline}
\langle \hat{A} \rangle = \frac{1}{Z(\beta)}
\int dX_0\ldots dX_M \\
\langle X_0 \vert \hat{A} \vert X_1 \rangle \rho(X_1,X_2,\beta/M)\ldots\rho(X_M,X_0,\beta/M).
\label{eq:average1}
\end{multline}
If an expression for $\rho(X,Y,\tau)$ is known, the observable can be evaluated by sampling the "path" $\{X_0\ldots X_M\}$ with a statistical weight proportional to $\rho(X_1,X_2,\beta/M)\ldots\rho(X_M,X_0,\beta/M)$. As the matrix element $\langle X_0 \vert \hat{A} \vert X_1 \rangle$ of a {\em local} operator $\hat{A}$ involves in general a term $\delta(X_0-X_1)$, the sampling is actually performed over a closed path of $M$ points. In the following we will repeatedly consider the "primitive" approximation, based on the factorisation of the kinetic and potential parts of the density operator and valid in the limit of small $\tau$\cite{Chandler1981},
\begin{equation}
\rho_p(X,Y,\tau) \simeq \sqrt{\frac{m}{2\pi\hbar^2\tau}}\exp\left\{-m\frac{ (X-Y)^2}{2\hbar^2\tau} -\frac{\tau}{2}\left[U(X)+U(Y)\right]\right\}.
\label{eq:primitive-approx}
\end{equation}
This simplified expression can be replaced by a more accurate one if needed, and if the exact value of $\rho$ is known, as it is the case for the harmonic oscillator, the latter can be used to sample the path more efficiently~\cite{feynman1998statistical}.
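For a harmonic potential $U(X)=m\omega^{2}X^{2}/2$, the discretized path weight built from the primitive approximation is Gaussian, $\propto\exp(-\frac{1}{2}\mathbf{x}^{T}A\,\mathbf{x})$ with a cyclic tridiagonal precision matrix $A$, so $\langle X^{2}\rangle=(A^{-1})_{00}$ can be evaluated without any sampling and compared with the exact result $(\hbar/2m\omega)\coth(\beta\hbar\omega/2)$. A minimal sketch (units $\hbar=m=1$):

```python
import numpy as np

def x2_discretized(beta, omega, M):
    """<X^2> for the primitive-discretized harmonic oscillator (hbar = m = 1).
    The path weight is exp(-x^T A x / 2) with a cyclic tridiagonal A;
    <x_j^2> = (A^{-1})_{jj}, identical for all slices by symmetry."""
    A = np.zeros((M, M))
    for j in range(M):
        A[j, j] = 2.0 * M / beta + beta * omega**2 / M
        A[j, (j + 1) % M] = -M / beta
        A[j, (j - 1) % M] = -M / beta
    return np.linalg.inv(A)[0, 0]

beta, omega = 2.0, 1.0
exact = 0.5 / omega / np.tanh(0.5 * beta * omega)
for M in (8, 32, 256):
    print(M, x2_discretized(beta, omega, M), exact)
```

The deviation from the exact value decreases as $1/M^{2}$, which provides a direct check of the discretization error before any Monte Carlo noise enters.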
Here, we are interested in equilibrium time correlation functions that determine the linear response properties of the system. A time correlation involving the observables $A$ at time $t$ and $B$ at time $t=0$ is the equilibrium average of the product of the operators $\hat{A}(t)= e^{itH/\hbar}\hat{A} e^{-itH/\hbar}$, and $\hat{B}(0)=\hat{B}$, which we can write as,
\begin{equation}
C_{AB}(t/\hbar) = \langle\hat{A}(t)\hat{B}(0)\rangle = \frac{1}{Z(\beta)}\text{Tr}[\hat{A}(t)\hat{B}(0)e^{-\beta\hat{H}}].
\end{equation}
Obviously, the splitting method could be applied to the operators $\exp(it\hat{H}/\hbar)$. Unfortunately, the statistical weight associated with the resulting path is imaginary, and therefore it is not suitable for usual sampling methods. If, however, the real time $t$ is replaced by an imaginary time $t=i\tau \hbar$, we can write,
\begin{multline}
C_{AB}(i\tau) = \frac{1}{Z(\beta)}\text{Tr} [\hat{A}e^{-\tau\hat{H}}\hat{B}e^{-(\beta-\tau)\hat{H}}] \\
=\frac{1}{Z(\beta)} \int dX dX'dY dY' \\
\langle X \vert \hat{A} \vert X' \rangle
\rho(X',Y,\tau) \langle Y \vert \hat{B} \vert Y' \rangle
\rho(Y',X,\beta -\tau),
\end{multline}
which is defined for $0 \le \tau \le \beta$, and verifies $ C_{AB}(i\tau)= C_{BA}(i(\beta-\tau))$.
Partitioning again the interval $[0,\beta]$ into $M$ slices of width $\Delta\tau= \beta/M$, the correlation function can be sampled for discrete values of $\tau$ of the form $\tau_k=k\Delta\tau$, with $k=0\ldots M-1$, at a computational cost that is similar to that needed to calculate the thermodynamic observables of Eq.~(\ref{eq:average1}), obtaining
\begin{multline}
C_{AB}(i\tau_k)=
\frac{1}{Z(\beta)}
\int dXdY dX_1... dX_M \langle X \vert \hat{A} \vert X_1 \rangle \rho(X_1,X_2,\Delta \tau)...\\ \rho(X_{k-1},X_k,\Delta\tau)
\langle X_k \vert \hat{B} \vert Y \rangle
\rho(Y,X_{k+1},\Delta\tau)...\rho(X_M,X,\Delta\tau).
\end{multline}
As in Eq.~(\ref{eq:average1}), here the sampling must be performed over the $\{X_1\ldots X_M\}$ coordinates of the path, the $X$ and $Y$ variables being eliminated by the $\delta$-functions contained in the matrix elements of $\hat{A}$ and $\hat{B}$.
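In the harmonic case the same Gaussian structure also yields the discretized imaginary-time correlation without sampling, $C(i\tau_k)=\langle x(\tau_k)x(0)\rangle=(A^{-1})_{0k}$, which can be checked against the exact result $C(i\tau)=\cosh[\omega(\beta/2-\tau)]/[2m\omega\sinh(\beta\omega/2)]$. A hedged sketch in units $\hbar=m=1$:

```python
import numpy as np

beta, omega, M = 2.0, 1.0, 256

# Cyclic precision matrix of the primitive-discretized harmonic path (hbar = m = 1)
A = np.zeros((M, M))
for j in range(M):
    A[j, j] = 2.0 * M / beta + beta * omega**2 / M
    A[j, (j + 1) % M] = -M / beta
    A[j, (j - 1) % M] = -M / beta
G = np.linalg.inv(A)          # <x_j x_0> = G[j, 0]

tau = beta * np.arange(M) / M
C_disc = G[:, 0]
C_exact = np.cosh(omega * (beta / 2 - tau)) / (2 * omega * np.sinh(beta * omega / 2))
err = np.max(np.abs(C_disc - C_exact))
print(err)  # small, and decreasing as M grows
```

The symmetry $C(i\tau)=C(i(\beta-\tau))$ noted above is manifest in both the discrete and the exact expressions.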
\section{A statistical approach to the inversion problem}
\label{sec:inversion}
Once the imaginary time correlations, denoted by $C(\tau)$ from now on, have been obtained for a set of $M$ discrete values $\{\tau_0...\tau_{M-1}\}$ in the interval $[0,\beta]$, the real time correlation functions relevant to describe the system physical response can, in principle, be obtained by inverting Eq.~(\ref{eq:inversion}). This is common to many studies of quantum systems, and generally described as the "analytical continuation" procedure. It is, however, ill-posed, in the sense that if the spectrum $S(\omega)$~\footnote{In this paragraph we drop the $AB$ subscripts in Eq.~(\ref{eq:inversion})} is described by a set of parameters (such as the values of $S$ on a discrete $\omega$-grid, or the coefficients of an expansion in terms of some basis set), and the $C(\tau_k)$ are affected by statistical errors, a very large number of solutions for $S$ compatible with the original data will be found.
This topic is the subject of a vast literature, and it is fair to conclude that no single method emerges as a preferred solution. Generally speaking, most current solutions employ some particular version of a "maximum entropy" approach~\cite{JARRELL1996,Boninsegni1996}. The spectral function, $S_{ME}$, is therefore obtained as an average over the possible $S(\omega)$'s (defined by some finite set of parameters), weighted by the probability that they are the exact model given the data set $(\textbf{C},\sigma^2)$,
\begin{equation}
S(\omega)_{ME} = \int \mathcal{D}S \ p(S|\text{C},\sigma^2)S(\omega).
\label{eq:sme1}
\end{equation}
Here $\mathcal{D}S$ indicates the phase space element associated with the parametrization of $S(\omega)$, $\textbf{C} = (C(\tau_1), C(\tau_2), \dots, C(\tau_M))^{\dagger} \equiv (C_1, C_2, \dots C_M)^{\dagger}$ is the vector containing the data points, and $\sigma^2$ describes the statistical uncertainty of these data in the form of a covariance matrix.
\begin{equation}
p(S|\text{C},\sigma^2) = \frac{p(\textbf{C},\sigma^2|S)}{p(\textbf{C}, \sigma^2)}p(S),
\end{equation}
and making the assumption of Gaussian statistics for the likelihood, we can write,
\begin{equation}
p(\textbf{C}|S,\sigma) \propto e^{-\frac{1}{2}(\textbf{C} - \textbf{C}[S])^{\dagger}(\sigma^2)^{-1} (\textbf{C} - \textbf{C}[S])} = e^{-\frac{1}{2}\chi^2[S]}\label{likelihood},
\end{equation}
which we can interpret as the definition of $\chi^2[S]$. Here $\textbf{C}[S]$ is the expression of the vector $C$, obtained by inserting a known spectrum $S$ into the r.h.s. of Eq.~(\ref{eq:inversion}) and computing the resulting $M$ correlation values. In the case of a spectrum defined by the amplitudes $A(\omega_p)$ for a set of $N_\omega$ discrete frequencies on a regular grid, using Eq.~(\ref{eq:inversion}) we obtain,
\begin{equation}
\tilde{C}[S](\tau_\alpha) = \sum_{p=1}^{N_\omega} A(\omega_p) \left( e^{-\omega_p \tau_\alpha} + e^{-(\beta - \tau_\alpha)\omega_p}\right) \label{eq::correlation fit}.
\end{equation}
In traditional maximum entropy methods, Eq.~(\ref{eq:sme1}) is solved at the saddle point level, by minimizing the functional $\mathcal{F}=\frac{1}{2}\chi^2[S] - H[S]$. Here, $H[S]$ is an entropic functional, which assigns a penalty to irregular solutions that would lead to an overfitting of the statistical errors contained in the data. For a positive spectrum, $H[S]$ is usually chosen as the associated Shannon entropy, with a coefficient controlling the strength of the regularisation.
In this work we employ the so-called "stochastic analytical inference" or "stochastic maximum entropy"~\cite{Fuchs2010} method, where Eq.~(\ref{eq:sme1}) is sampled by Monte-Carlo methods over $\mathcal{D}S$, which can be constrained to positive values of $S$ through the prior probability $p(S)$. The term $\frac{1}{2}\chi^2[S]$ can hence be considered as an effective energy functional, and the method can be refined by introducing an additional parameter in the form of an effective inverse temperature $\Theta$ as,
\begin{equation}
S(\omega,\Theta)_{ME}= {Z(\Theta)^{-1}}\int \mathcal{D}S \ S(\omega) e^{-\frac{1}{2}\Theta \chi^2[S]}.
\label{eq:sme2}
\end{equation}
Here the normalisation $Z(\Theta)= e^{-\Theta F(\Theta)}$ is an effective partition function. Note that the traditional maximum entropy approach corresponds to a mean field version of Eq.~(\ref{eq:sme2}), where one uses as an estimate of the spectrum the minimum of the mean field free energy $F_{MF}(\Theta)= \frac{1}{2}\chi^2[S]- \Theta^{-1} H[S] $. In view of the following analysis, we make the simplifying assumption of uncorrelated data points, so that the covariance matrix is diagonal. As a result, we can write the energy functional $\chi^2[S]$ in the form,
\begin{equation}
\chi^2 = \sum_{\alpha=0}^{M-1}\frac{[C(\tau_\alpha) - \tilde{C}[S](\tau_\alpha)]^2}{\sigma^2(\tau_\alpha)},
\label{eq:chi2}
\end{equation}
with $\sigma^2(\tau_\alpha)$ the statistical uncertainty on the data point $\alpha$. Several arguments~\cite{Fuchs2010} have been invoked for fixing $\Theta=1$. Alternatively, it has been proposed in~\cite{Fuchs2010} to pick for $\Theta$ the value $\Theta^*$ that maximises $Z(\Theta)$, which is argued to also maximise the posterior probability $P(\Theta | \textbf{C})$. This possibility, which corresponds to a balance between energy and entropy dominated solutions, requires however a full free energy calculation.
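A minimal sketch of this stochastic sampling is given below, assuming a single-component Metropolis walk over positive amplitudes $A(\omega_p)$ with $\frac{1}{2}\Theta\chi^2[S]$ as the effective energy; the synthetic two-delta data set, the grids, the step size and the chain length are illustrative assumptions, not the production settings used later.

```python
import numpy as np

rng = np.random.default_rng(1)
beta, M, Nw, Theta = 10.0, 20, 10, 1.0
tau = np.linspace(0.05, beta / 2.0, M)
w = 0.5 * np.arange(1, Nw + 1)                       # regular omega-grid
K = np.exp(-np.outer(tau, w)) + np.exp(-np.outer(beta - tau, w))

A_true = np.zeros(Nw); A_true[1], A_true[5] = 1.0, 0.3  # deltas at w=1 and w=3
sigma = 0.01 * (K @ A_true)                          # 1% relative error bars
C_data = K @ A_true + sigma * rng.standard_normal(M)

def chi2(A):
    return np.sum(((C_data - K @ A) / sigma) ** 2)

A = np.full(Nw, 0.1)                                 # flat positive start
chi2_now = chi2_init = chi2(A)
n_steps, samples = 60000, np.zeros(Nw)
for step in range(n_steps):
    p = rng.integers(Nw)
    trial = A.copy()
    trial[p] += 0.02 * rng.standard_normal()
    if trial[p] >= 0.0:                              # positivity enforced by the prior p(S)
        c2 = chi2(trial)
        if c2 <= chi2_now or rng.random() < np.exp(-0.5 * Theta * (c2 - chi2_now)):
            A, chi2_now = trial, c2
    samples += A
S_me = samples / n_steps                             # Monte-Carlo estimate of S(w, Theta)
```

Averaging the sampled amplitudes yields the analogue of Eq.~(\ref{eq:sme2}); with these settings the chain relaxes from the flat start to configurations with $\chi^2$ of order $M$.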
At variance with these proposals, we optimise the value of $\Theta$ employing the following procedure. An initial data set, $C(\tau_\alpha)$, is generated with known statistical uncertainty $\sigma^2(\tau_\alpha)$ by using, for instance, a path integral simulation of the considered model. In cases where $C(\tau)$ is known analytically, synthetic data could also be generated starting from the exact solution, and introducing a controlled uncertainty. Starting from these data, the spectrum $S_{ME}(\Theta)$, described by $P$ degrees of freedom $A(\omega_p)$, is obtained through a Monte-Carlo sampling of Eq.~(\ref{eq:sme2}) for a given value of $\Theta$. Note that a well converged Monte-Carlo average will lead to a spectrum $S_{ME}(\Theta)$ with an associated $\chi^2\sim \mathcal{O} (M\epsilon)$, where $\epsilon$ is a residual error, while the average $\langle \chi^2 \rangle \sim\mathcal{O}(M\epsilon+P/\Theta)$. We denote by $\bar{C}_\Theta(\tau_\alpha)$ the correlation function associated with this average spectrum.
In order to determine the optimal choice of $\Theta$, thereby discriminating among different models for $S(\omega)$ (e.g., different finite discretizations on an $\omega$-grid), we combine the maximum entropy approach with a validation procedure borrowed from statistical learning theory~\cite{MEHTA20191}. We therefore generate $P'$ new sets of validation data, $C_{\mathrm{val}, i}(\tau_\alpha)$ ($i=1,\ldots, P'$), by using the same technique (though not necessarily with the same accuracy) that we use to produce the original data set, and determine the associated,
\begin{equation}
\chi^2_{\mathrm{val}} = \frac{1}{P'}\sum_{i=1}^{P'} \sum_{\alpha=0}^{M-1}[\bar{C}_\Theta(\tau_\alpha) - C_{\mathrm{val},i}(\tau_\alpha)]^2 .
\label{eq::chi2 validation}
\end{equation}
We can show that this can be interpreted as a measure of the difference between the estimate $\bar{C}_\Theta(\tau_\alpha)$ and the exact correlation function, denoted by ${C}_{\mathrm {exact}}(\tau_\alpha)$. Indeed, by writing
\begin{equation}
\chi^2_{\mathrm{val}}= \frac{1}{P'} \sum_{i=1}^{P'} \sum_{\alpha=0}^{M-1} [\bar{C}_\Theta(\tau_\alpha) - {C}_{\mathrm{exact}}(\tau_\alpha) + {C}_{\mathrm{exact}}(\tau_\alpha) - C_{\mathrm{val},i}(\tau_\alpha)]^2 ,
\end{equation}
in the limit of large $P'$ and assuming that the average over the validation data returns the exact correlation function, we obtain
\begin{equation}
\chi^2_{\mathrm{val}}= \sum_{\alpha=0}^{M-1} [\bar{C}_\Theta(\tau_\alpha) - {C}_{\mathrm{exact}}(\tau_\alpha)]^2 + \sum_{\alpha=0}^{M-1} \sigma^2_{\mathrm{val}}(\tau_\alpha).
\end{equation}
Here, the first term is the distance of the estimate to the exact data, while the second is the variance of the validation data, which is independent of $\Theta$. The choice of $\Theta$ will therefore be eventually dictated by the behaviour of the first term.
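This decomposition is easy to verify numerically. In the sketch below the "exact" correlator, the deliberately biased estimate $\bar{C}_\Theta$, and the noise level of the validation sets are arbitrary stand-ins chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
M, Pval = 20, 50000
tau = np.linspace(0.0, 1.0, M)
C_exact = np.exp(-tau) + np.exp(-3.0 * tau)          # stand-in for the exact correlator
C_bar = C_exact + 0.02                               # deliberately biased "estimate"
sig_val = 0.01                                       # noise of the validation sets

# P' validation sets: exact data plus independent Gaussian noise
C_val = C_exact + sig_val * rng.standard_normal((Pval, M))
chi2_val = np.mean(np.sum((C_bar - C_val) ** 2, axis=1))

bias_term = np.sum((C_bar - C_exact) ** 2)           # distance to the exact data
var_term = M * sig_val ** 2                          # Theta-independent noise term
print(chi2_val, bias_term + var_term)
```

For large $P'$ the measured $\chi^2_{\mathrm{val}}$ approaches the sum of the two terms, the cross term being suppressed as $1/\sqrt{P'}$.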
\section{\label{sec:estimators}Improved estimators for current correlations}
The computation of transport coefficients typically implies correlation functions involving the momentum operator, a prototypical one being $C_{pp}(\tau) = \langle p(\tau) p(0) \rangle $. In the path integral approach and within the primitive approximation of Eq.~(\ref{eq:primitive-approx}), the momentum operator is expressed as a difference of coordinates, so that the correlation function for $\tau \ne 0$ takes the form $C_{pp}(\tau_k) = -\frac{1}{\Delta\tau^2}\langle (x_{k+1}-x_k)(x_1-x_0)\rangle$, where $x_k \equiv x(\tau_k)$, and $\tau_k = k\Delta\tau \equiv k\frac{\beta}{M}$ is the discretized imaginary time. The MC evaluation of $C_{pp}(\tau_k)$ is hampered by the fact that, when $\Delta\tau$ gets small, relative fluctuations in $(x_{i+1}-x_i)$ become large and the variance of the measured observable grows rapidly (in fact it diverges for $\Delta\tau\rightarrow0$). As the uncertainty $\delta_{MC}$ of the MC estimate of an observable $A$ is related to its variance $\sigma_A^2$ by $\delta_{MC} \propto \sigma_A/\sqrt{\tau_{sim}}$, one is therefore forced to increase the simulation time, $\tau_{sim}$, in order to achieve a given precision.
This problem was identified early in the development of PIMC, when trying to estimate the atoms' kinetic energy, which is $\propto C_{pp}(\tau=0)$. A solution was proposed in~\cite{Herman1982}: instead of directly using the above expression for $C_{pp}(\tau_k)$, the integrals entering the correlation function can be rearranged obtaining a new estimator for $C_{pp}(\tau_k)$, with identical average but smaller variance. The new expression, known in the case of the kinetic energy as the "virial estimator", does not depend explicitly on $\Delta\tau$, and therefore does not suffer from the diverging variance associated with the "naive" estimator.
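The variance problem and its virial cure can be illustrated directly for the kinetic energy of a discretized harmonic oscillator. Since the primitive action is a Gaussian form $X^T\mathbf{A}X$, closed paths can be drawn exactly from the corresponding covariance matrix, avoiding Metropolis sampling altogether; all parameters below are illustrative, with $\hbar=m=1$.

```python
import numpy as np

rng = np.random.default_rng(3)
beta, omega, M, nsamp = 4.0, 1.0, 64, 4000
dtau = beta / M

# Primitive discretized action of the harmonic oscillator is Gaussian, so
# closed paths can be sampled exactly with covariance (1/2) A^{-1}.
idx = np.arange(M)
A = np.zeros((M, M))
A[idx, idx] = 1.0 / dtau + 0.5 * dtau * omega ** 2
A[idx, (idx + 1) % M] = A[idx, (idx - 1) % M] = -0.5 / dtau
X = rng.multivariate_normal(np.zeros(M), 0.5 * np.linalg.inv(A), size=nsamp)

dX = np.roll(X, -1, axis=1) - X
# "Naive" thermodynamic estimator: explicit 1/dtau factors, variance grows with M
K_naive = M / (2.0 * beta) - np.sum(dX ** 2, axis=1) / (2.0 * beta * dtau)
# Virial estimator: K = (1/2) <x V'(x)>, no explicit dtau dependence
K_virial = 0.5 * omega ** 2 * np.mean(X ** 2, axis=1)
k_exact = 0.25 * omega / np.tanh(0.5 * beta * omega)
print(K_naive.var(), K_virial.var())
```

With these settings the naive estimator's variance exceeds the virial one by more than an order of magnitude, while both estimators agree on the mean within statistical errors.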
We now show that the strategy used to obtain the virial estimator can be generalized to any correlation function involving the momentum operator~\cite{PhysRevLett.111.050406}. Specifically, we consider correlation functions of the general form involved in calculation of transport coefficients, e.~g., $C_{pF}(\tau) = \langle ( \hat{p}(\tau)\hat{F}(\tau))_s (\hat{p}(0)\hat{F}(0))_s \rangle$. Here $\hat{F}(\tau)$ is a shorthand notation for a generic local function $F(\hat{X}(\tau))$, which in the case of heat transport would be related to the potential energy. The subscript $s$ indicates that the operator product, which represents an observable quantity, is by convention made Hermitian by symmetrizing the operator, as $( \hat{p}\hat{F})_s = \frac{1}{2}(\hat{p}\hat{F}+\hat{F}\hat{p})$.
Within the primitive approximation and following this definition one obtains,
\begin{multline}
C_{pF}(\tau_k) = - \frac{1}{\Delta\tau^2}m^2 \langle (x_{k+1} - x_{k}) F(x_{k}) (x_{1} - x_{0}) F(x_0) \rangle \\
+ \frac{1}{2\Delta\tau}m \langle (x_{k+1} - x_{k}) F(x_{k}) F'(x_0)\rangle \\
+ \frac{1}{2\Delta\tau}m \langle (x_{1} - x_{0}) F(x_{0}) F'(x_{k})\rangle
- \frac{1}{4} \langle F'(x_k) F'(x_0) \rangle.
\label{eq::pF_correlation}
\end{multline}
This expression is valid for $k\ge 1$, while the case $k=0$ must be treated separately, along similar lines.
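The divergence of the leading term's variance can be checked numerically. The sketch below evaluates the first term of the naive estimator for $F(x)=x^2/2$ (i.e., the potential-energy case, with $\hbar=m=\omega=1$) on exactly sampled discretized harmonic-oscillator paths, for two values of $M$ at fixed $\beta$; the parameters are illustrative assumptions.

```python
import numpy as np

def leading_term_variance(M, beta=4.0, omega=1.0, nsamp=4000, seed=4):
    """Variance of the 1/dtau^2 term of the naive p-F estimator, F(x) = x^2/2,
    on exactly sampled discretized harmonic-oscillator paths (hbar = m = 1)."""
    rng = np.random.default_rng(seed)
    dtau = beta / M
    idx = np.arange(M)
    # Gaussian primitive action: paths drawn from covariance (1/2) A^{-1}
    A = np.zeros((M, M))
    A[idx, idx] = 1.0 / dtau + 0.5 * dtau * omega ** 2
    A[idx, (idx + 1) % M] = A[idx, (idx - 1) % M] = -0.5 / dtau
    X = rng.multivariate_normal(np.zeros(M), 0.5 * np.linalg.inv(A), size=nsamp)
    k = M // 2                                        # tau_k = beta / 2
    F = 0.5 * X ** 2                                  # F = V for m = omega = 1
    term = (X[:, k + 1] - X[:, k]) * F[:, k] * (X[:, 1] - X[:, 0]) * F[:, 0] / dtau ** 2
    return term.var()

v16, v64 = leading_term_variance(16), leading_term_variance(64)
print(v16, v64)   # the variance grows roughly as 1/dtau^2
```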
The MC calculation of Eq.~(\ref{eq::pF_correlation}) suffers from the same numerical problem as the momentum correlations, the variance of the leading term in $1/\Delta\tau$ diverging as $\Delta\tau$ approaches zero. In order to improve the estimator, we have generalized the procedure originally used for the kinetic energy calculations ($C_{pp}(0)$), and obtain a new estimator with reduced variance for general correlation functions. We start from the first term in Eq.~(\ref{eq::pF_correlation}), which has the strongest dependence on $\Delta \tau$, and can be expressed as,
\begin{multline}
\frac{1}{\Delta\tau^2}\langle F(x_k)(x_{k+1}-x_k)F(x_0)(x_1-x_0)\rangle =\\=\frac{1}{\Delta\tau^2 Z} \int dx_0 \int dx_1 \dots \int dx_M F(x_k) (x_{k+1}-x_k) F(x_0) (x_1-x_0)\\ \rho_0(x_1-x_0; \Delta\tau)\dots \rho_0(x_M- x_{M-1}; \Delta\tau) \exp\left[-\Delta\tau \sum _{j=0}^{M-1} V(x_j)\right].
\end{multline}
We now transform the set of coordinates $\{x_0, x_i\}$ to $\{x_0, y_i\}$, such that $y_i = x_{i+1}-x_i$. The constraint $x_{M} \equiv x_0$ is accounted for by introducing a term $\delta\left(\sum_{i=0}^{M-1} y_i\right)$, leading to
\begin{multline}
\frac{1}{\Delta\tau^2}\langle F(x_k)(x_{k+1}-x_k)F(x_0)(x_1-x_0)\rangle =\\=\frac{1}{\Delta\tau^2 Z} \int dx_0 \int dy_0 \dots \int dy_{M-1} \delta\left(\sum_{i=0}^{M-1} y_i\right) F\left(\sum_{i=0}^{k-1}y_i +x_0\right) \\ y_k F(x_0)y_0 \rho_0(y_0; \Delta \tau)\dots \rho_0(y_{M-1};\Delta \tau) \exp[-\Delta\tau W],
\end{multline}
with
\begin{equation}
W = \sum _{j=0}^{M-1} V\left(\sum_{i = 0}^j y_i + x_0\right).
\end{equation}
By using the identity:
\begin{equation}
\frac{1}{\hbar\Delta\tau}y_k \rho_0(y_k;\Delta\tau)= -\partial _{y_k} \rho_0(y_k;\Delta\tau),
\end{equation}
we can integrate by parts with respect to $y_k$. Our next step is based on the observation that the derivative of the $\delta$ function with respect to $y_k$ can be distributed over all coordinates, i.e., $\partial_{y_k}\delta\left(\sum y_j\right) = \frac{1}{M} \sum_i \partial_{y_i}\delta\left(\sum y_j\right)$. A second integration by parts over each of the $y_i$ variables eventually leads to
\begin{multline}
\frac{1}{\Delta\tau^2}\langle F(x_k)(x_{k+1}-x_k)F(x_0)(x_1-x_0)\rangle
= \\ \left\langle F(x_k)(x_1-x_0)F(x_0)\left[\frac{1}{M}\sum_{j=1}^{M-1} j
V'(x_j)-
\sum_{j=k+1}^{M-1} V'(x_j)\right]
\right\rangle-\\- \frac{k}{(\Delta\tau M)}\langle F'(x_k)(x_1-x_0)F(x_0)\rangle
-\frac{1}{(\Delta\tau M)}\langle F(x_k)F(x_0)\rangle. \label{eq::virial_expression}
\end{multline}
For the special case $F(x) \equiv 1$, we can show that Eq.~(\ref{eq::virial_expression}) reduces to a virial-like formula for the momentum correlations, $C_{pp}(\tau_k)=\langle x_k V'(x_0)\rangle$ (see App.~\ref{sec:appendixA}). Repeating the procedure for the terms linear in $\frac{1}{\Delta\tau}$, such as the second term in Eq.~(\ref{eq::virial_expression}), we can write the correlation in a form that no longer depends explicitly on $\Delta \tau$ (recall that $M\Delta \tau =\beta$ is a constant). The calculations, together with the expressions appropriate for the special case $k=0$, are sketched in App.~\ref{sec:appendixA}.
In contrast with the initial expression Eq.~(\ref{eq::pF_correlation}), all terms are now well-defined as $\Delta\tau \rightarrow 0$. We note, however, that the number of terms involved in the first part of Eq.~(\ref{eq::virial_expression}) increases linearly with $M=\beta/\Delta \tau$, so that the gain following our manipulation is not immediately obvious. The argument that Eq.~(\ref{eq::virial_expression}) indeed leads to a variance reduction is the following: If all the $M$ contributions to the first term were independent, its variance would scale as $\Delta \tau\times M$, where $\Delta \tau$ comes from the term $\langle \vert x_1-x_0\vert \rangle$, and the factor $M$ accounts for the $M$ contributions in the sum. As the segments in the path are correlated, even if this estimate is only approximate it still indicates that the variance remains finite even for $\Delta \tau \rightarrow 0$. We explicitly verify the variance reduction numerically for the harmonic oscillator in the following section.
We conclude this section by stressing that the above derivation to improve generic estimators involving momentum operators is by no means limited to the harmonic oscillator, but remains valid in general, in particular for the case of interacting particles and also beyond the use of the primitive approximation in the path integral.
\section{\label{sec:case1}Case study I: the single harmonic oscillator}
\subsection{Computing correlation functions}
\label{sec:computing}
We now apply the methods described above to our test cases. We start by considering the canonical example of a single quantum harmonic oscillator of frequency $\omega_0$ in one dimension, with potential energy $V=\frac{1}{2} m\omega_0^2 X^2$, and focus on the time correlation function of an operator with the structure of an energy current, e.~g., $C_{pV}(\tau) = \langle (p(\tau)V(\tau))_s (p(0)V(0))_s \rangle$. The PIMC approach within the primitive approximation allows us to extract the values of the imaginary time correlation function $C_{pV}(\tau_k)$, at $M$ discrete time values, $\tau_k= k\beta/M$, with $k=0,\dots,M-1$. Two main sources of inaccuracy are associated with this procedure: a systematic error, due to the use of the primitive approximation for the density matrix, and the statistical uncertainty due to finite sampling. In the following we show how both issues can be controlled.
For a harmonic oscillator, the systematic deviation due to the discretization of the imaginary time $\Delta\tau=\beta/M$ can be assessed directly, by comparing the result expected from the PIMC approach (which in this case can be obtained exactly) with the analytical expression for the correlation function $C_{pV}(\tau)$, which corresponds to the continuous limit $M\rightarrow \infty$. By applying the canonical formalism for the harmonic oscillator, we indeed obtain,
\begin{multline}
C^{\text{exact}}_{pV}(\tau) =\left( \frac{m\hbar^3\omega_0^3}{256}\right) \frac{1}{\sinh^3(\beta \omega_0/2)}\times \\
\left[12\cosh\left(\frac{3\beta\omega_0}{2}-3\omega_0 \tau\right)\right. \\
\left.+2\left(4e^{-\beta\omega_0}+e^{-2\beta\omega_0}+1\right)e^{\beta\omega_0} \cosh\left(\frac{\beta\omega_0}{2} -\omega_0 \tau\right) \right].
\label{eq::exact pv correlation}
\end{multline}
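As a consistency check, Eq.~(\ref{eq::exact pv correlation}) implies that the spectrum of $C_{pV}$ consists of exactly two delta functions, at $\omega_0$ and $3\omega_0$: a least-squares fit of the exact correlator with the two corresponding kernels $e^{-\omega\tau}+e^{-(\beta-\tau)\omega}$ must therefore be essentially exact. A short numerical sketch, in units $\hbar=m=\omega_0=1$:

```python
import numpy as np

def c_pv_exact(tau, beta, w0=1.0, m=1.0, hbar=1.0):
    """Exact C_pV of the single harmonic oscillator (continuous-time result)."""
    pref = m * hbar ** 3 * w0 ** 3 / (256.0 * np.sinh(0.5 * beta * w0) ** 3)
    b = 12.0 * np.cosh(1.5 * beta * w0 - 3.0 * w0 * tau)
    b += 2.0 * (4.0 * np.exp(-beta * w0) + np.exp(-2.0 * beta * w0) + 1.0) \
         * np.exp(beta * w0) * np.cosh(0.5 * beta * w0 - w0 * tau)
    return pref * b

beta = 10.0
tau = np.linspace(0.0, beta, 101)
C = c_pv_exact(tau, beta)

# Two-kernel least-squares fit: the residual should be at machine-precision level
K = np.column_stack([np.exp(-w * tau) + np.exp(-(beta - tau) * w) for w in (1.0, 3.0)])
amps = np.linalg.lstsq(K, C, rcond=None)[0]
residual = np.linalg.norm(K @ amps - C) / np.linalg.norm(C)
print(amps, residual)
```

The fit also confirms the symmetry $C(\tau)=C(\beta-\tau)$ and the positivity of the two spectral weights.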
In order to calculate the exact expression of the correlation function within the primitive approximation of the discretized path integral, we first note that all the integrals involved in the calculation are Gaussian. By using the discretized representation for the momentum operator, one writes $C_{pV}(\tau)$ as a thermodynamic average of products of the variables $x$. Wick's theorem allows us to recast such correlations $\langle x_1 \dots x_{2n}\rangle$ into products of pair correlation functions $\langle x_ix_j\rangle$, which are easily accessible as $\langle x_ix_j\rangle \propto (\mathbf{A}^{-1})_{ij}$. Here $\mathbf{A}$ is the symmetric $M \times M$ matrix defining the Gaussian weight, $\langle x_ix_j\rangle = Z^{-1}\int dX\, x_ix_j\,\text{e}^{-X^T\mathbf{A}X}$, with $Z=\int dX\, \text{e}^{-X^T\mathbf{A}X}$. We can therefore use numerical methods to calculate the matrix elements, as discussed in App.~\ref{sec:appendixB}. The relative difference between the two calculations is illustrated in Fig.~\ref{fig::discretization_error_pV}.
We observe that, for a sufficiently small value of $\beta/M$, the deviation is virtually unaffected by a change of $\beta$.
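A minimal sketch of this Gaussian computation is given below; we assume the weight is written as $\text{e}^{-X^T\mathbf{A}X}$, so that Wick's theorem gives the covariance $\frac{1}{2}\mathbf{A}^{-1}$ (the prefactor depends on the chosen convention), and we only check the diagonal element $\langle x^2\rangle$ against its continuous limit.

```python
import numpy as np

def pi_x2(M, beta, omega):
    """<x_0^2> from the Gaussian weight exp(-X^T A X): Wick gives cov = (1/2) A^{-1}."""
    dtau = beta / M
    idx = np.arange(M)
    A = np.zeros((M, M))
    A[idx, idx] = 1.0 / dtau + 0.5 * dtau * omega ** 2
    A[idx, (idx + 1) % M] = A[idx, (idx - 1) % M] = -0.5 / dtau   # periodic path
    return 0.5 * np.linalg.inv(A)[0, 0]

beta, omega = 5.0, 1.0
exact = 1.0 / (2.0 * omega * np.tanh(0.5 * beta * omega))   # continuous-limit <x^2>
err = {M: abs(pi_x2(M, beta, omega) - exact) for M in (32, 256)}
print(err)   # Trotter error shrinks as the discretization is refined
```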
\begin{figure}[t]
\center{\includegraphics[width=1. \linewidth]{fig01.pdf}}
\caption{
Relative discretization error, $1-C_{pV}^{\mathrm{PI}}(\tau)/ C_{pV}^{\mathrm{exact}}(\tau)$, between the path integral, $C_{pV}^{\mathrm{PI}}(\tau)$, and the exact results, $C_{pV}^{\mathrm{exact}}(\tau)$, for the energy current correlation function, as a function of $\beta/M$. We show the data corresponding to the imaginary times $\tau=0$ and $\tau=\hbar\beta/2$, and indicate with symbols and solid lines the results for $\beta=3$ and $10$, respectively.
}
\label{fig::discretization_error_pV}
\end{figure}
\begin{figure}[b]
\center{\includegraphics[width=1. \linewidth]{fig02.pdf}}
\caption{
Difference between the exact correlation function $C_{pV}^{\mathrm{exact}}(\tau)$ and the values obtained by Monte Carlo sampling, $C_{pV}^{\mathrm{MC}}(\tau)$, of a path with $M=100$ time slices, for $\beta=1$, illustrating the variance reduction obtained by the improved
estimator discussed in Sect.~\ref{sec:estimators}. We show with line-points the primitive estimator and with the continuous line the improved estimator, both using the same
Monte Carlo data.
}
\label{fig::pv_correlation_beta1}
\end{figure}
\begin{figure}[t]
\center{\includegraphics[width=1. \linewidth]{fig03.pdf}}
\caption{
Reconstruction of the spectral function associated to $C_{pV}(\tau)$ at $\beta=10$ corresponding to the indicated values for the number of delta functions in the model, $N_\omega$, and effective temperature $\Theta=1$. The areas of the filled rectangles indicate the weights of the two delta-functions of the exact spectrum, centered at $\omega_0$ and $3\omega_0$, corresponding to the $\Delta\omega=1$ discretization.
}
\label{fig::sp function b10 discretization}
\end{figure}
\begin{figure}[b]
\center{\includegraphics[width=1. \linewidth]{fig04.pdf}}
\caption{
Reconstructed spectra for the energy current correlation function $C_{pV}(\tau)$ at $\beta=10$, with $N_\omega=25$ and at the indicated values of $\Theta$. The filled rectangles are centered at the positions of the two delta-functions of the exact spectrum, with an area corresponding to their respective weights.}
\label{fig::sp function b10 theta}
\end{figure}
In addition to this quantitative estimate, it is important to note that, for this system, the discretization preserves the qualitative shape of the correlation functions. One can show (see App.~\ref{sec:appendixC}) that the calculation using a finite but large $M$ corresponds to the exact calculation ($M\to\infty$) for slightly shifted oscillator strength and inverse temperature. The Trotter error therefore only introduces small quantitative deviations in the spectral density, but does not give rise to spurious qualitative features such as a broadening of the spectral lines.
We next focus on the second source of error affecting the PIMC calculation, limited sampling. Indeed, error bars on average values are obtained by estimating the variance of the observable, and decrease as $\tau_\text{sim}^{-1/2}$, with $\tau_\text{sim}$ the simulation time. For a given simulation time, the quality of the result therefore crucially depends on the variance of the estimator. We illustrate this point in Fig.~\ref{fig::pv_correlation_beta1}, by comparing calculations for the energy current correlation function, $C_{pV}$, using the naive estimator, Eq.~(\ref{eq::pF_correlation}), and the improved version of Eq.~(\ref{eq::virial_expression}). The data of Fig.~\ref{fig::pv_correlation_beta1} clearly show that the virial estimator leads to a spectacular improvement compared to the naive one, with a statistical error that is now comparable to the systematic one resulting from the discretization.
\begin{figure}[t]
\centering
\includegraphics[width=1. \linewidth]{fig05.pdf}
\caption{
$N_\omega$-dependence of the $\chi^2_{\mathrm{val}}$ extracted from the validation step of the reconstructed spectral functions for $C_{pV}(\tau)$, at $\beta=10$ and with $\Theta=1$. Squares and triangles correspond to shifted grids: for $N_\omega=5$, the red square corresponds to the shift $\delta\omega=0.25$ and the green one to $\delta\omega=0.5$; for $N_\omega=10$, the red triangle corresponds to $\delta\omega=0.1$ and the green one to $\delta\omega=0.25$.
}\label{fig::chi valid b10 discretization}
\end{figure}
\begin{figure}[b]
\center{\includegraphics[width=1. \linewidth]{fig06.pdf}}
\caption{
Main panel: Comparison of the $\chi^2_{\mathrm{val}}$ obtained from our validation for various values of $\Theta$ and $N_\omega$. The area of the circles is proportional to the corresponding value of $ \chi^2_{\mathrm{val}}$. Inset: $\chi^2_{\mathrm{val}}$ as a function of $\Theta$, at the indicated values of $N_\omega$.
}
\label{chi2 valid b10 table}
\end{figure}
\begin{figure}[t]
\center{\includegraphics[width=1. \linewidth]{fig07.pdf}}
\caption{
Spectral reconstruction for $C_{pV}(\tau)$ at $\beta=3$, obtained at the indicated values of the discretization, $N_\omega$, for a fixed $\Theta=1$. The filled rectangles are centered at the positions of the two delta-functions of the exact spectrum for $\Delta\omega=1$, with an area corresponding to their respective weights.
}
\label{fig::sp function b3 discretization}
\end{figure}
\begin{figure}[b]
\center{\includegraphics[width=1. \linewidth]{fig08.pdf}}
\caption{
Spectral reconstructions from $C_{pV}(\tau)$ at $\beta=3$ for $N_\omega=5$ using different values of $\Theta$. The filled rectangles are centered at the positions of the two delta-functions of the exact spectrum, with an area corresponding to their respective weights.
}
\label{fig::sp function b3 theta}
\end{figure}
\begin{figure}[t] \center{\includegraphics[width=1. \linewidth]{fig09.pdf}}
\caption{
Main panel: $\chi^2_{\mathrm{val}}$ from the validation procedure at the corresponding values $\Theta$ and $N_\omega$. The area of the circles is proportional to the value of $ \chi^2_{\mathrm{val}}$. Inset: $\chi^2_{\mathrm{val}}$ as a function of the effective temperature $\Theta$, at the indicated values of $N_\omega$.
}
\label{chi2 valid b3 table}
\end{figure}
\subsection{The inversion problem}
We now use the reconstruction procedure outlined in Sect.~\ref{sec:inversion} to extract the frequency spectrum for the correlation functions obtained in Sect.~\ref{sec:computing}. In order to perform a reconstruction one needs both to define the set of parameters that expresses the spectral density in Eq.~(\ref{eq::correlation fit}) and in the integration measure of Eq.~(\ref{eq:sme2}), and to choose the effective inverse temperature $\Theta$. In the following, we use a discretized model of the spectral density, which is described as a sum of $N_\omega$ delta-functions in the $\omega$-space, see Eq.~(\ref{eq:sme1}). Specifically, we consider a regular grid of $\omega$-values defined on the interval $[0, 5]$, with a fixed spacing between points, $\Delta\omega=5/N_\omega$. In addition, we will consider the possibility of a global shift of the grid by $\delta\omega < \Delta \omega$. Unless specified otherwise, $\delta\omega=0$, and we fix the origin of the grid at $\omega=0$.
The exact expression for the time correlation function, Eq.~(\ref{eq::exact pv correlation}), implies that $C_{pV}(\tau)$ decays exponentially with $\tau$ in the interval $[0,\beta/2]$, with a decay rate $\mathcal{O}(\omega_0)$. Larger values of $\beta$ therefore lead to a larger amplitude in the decay, with the consequence that the contribution of different frequencies can be more easily resolved for larger $\beta$'s. In short, a correlation function of the form $[\exp(-\omega_0\tau) +\exp(-3\omega_0\tau)]$ will be hard to distinguish from $2\exp(-2\omega_0\tau)$ if data are only available in the interval $ [0,1/\omega_0]$. Resolving the two frequencies $\omega_0$ and $3\omega_0$ is therefore essentially impossible if $\beta/2 < 1/\omega_0$.
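This degeneracy argument can be quantified with a toy fit: in log space a single exponential is a straight line, so the maximum residual of a linear fit to $\ln[\exp(-\omega_0\tau)+\exp(-3\omega_0\tau)]$ measures how well one effective frequency can mimic the two true ones. The sketch below uses $\omega_0=1$ and illustrative grids.

```python
import numpy as np

def max_log_dev(tau):
    """Best single-exponential fit (a straight line in log space) to
    exp(-tau) + exp(-3 tau); returns the maximum log-deviation,
    i.e. roughly the maximum relative error of the fit."""
    f = np.exp(-tau) + np.exp(-3.0 * tau)
    slope, intercept = np.polyfit(tau, np.log(f), 1)
    return np.max(np.abs(np.log(f) - (intercept + slope * tau)))

dev_short = max_log_dev(np.linspace(0.0, 1.0, 50))   # data only up to 1/w0
dev_long = max_log_dev(np.linspace(0.0, 5.0, 50))    # data up to 5/w0
print(dev_short, dev_long)
```

On the short window a single exponential reproduces the two-frequency correlator within a few percent, while on the long window the mismatch is several times larger, so the two frequencies only become resolvable when the accessible imaginary-time range grows.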
In order to illustrate this point, we calculate and analyse the spectral function for the energy current correlation functions at the two inverse temperatures $\beta=3$ and $10$, with an imaginary time discretization $\Delta\tau = 0.1$. With this value of $\Delta\tau$, the systematic discretization error is smaller than the statistical error for our simulation time, so it can be safely neglected. The main constraint for the reconstruction comes from the imaginary time interval $[0, 1/\omega_0]$. The relative error of the MC data corresponding to these values of $\tau$ is of $\mathcal{O}(10^{-2})$. For larger $\tau$ the relative error becomes comparable to the data themselves, since $C_{pV}(\tau)$ approaches 0 as $\tau \rightarrow\beta/2$.
We start by considering the case $\beta=10$. First, we evaluate the effect of the grid size, $N_\omega$, on the reconstruction. In Fig.~\ref{fig::sp function b10 discretization} we show the spectra obtained for various values of $N_\omega$, keeping a fixed $\Theta=1$. As mentioned above, there is no {\em a-priori} argument guiding the most appropriate parametrization of the spectrum. In the following we analyze the accuracy of the spectral reconstruction by comparing the values of $\chi^2_{\mathrm{val}}$ defined in Eq.~(\ref{eq::chi2 validation}), using an independent test data set. This is obtained within an additional MC simulation of the correlation function, with the same parameters as the original one. We also consider a data set of the same size, $P'$, as the one that was used to produce $C_{pV}(\tau_k)$.
\begin{figure}[t]
\center{\includegraphics[width=1. \linewidth]{fig10.pdf}}
\caption{
Spectral reconstruction of $C_{pV}^{\text{cont}} (\tau)$ for the continuous distribution of oscillator frequencies, at the indicated values of the discretization $N_\omega$, at fixed $\Theta=1$. The shaded area indicates the exact spectral function.
}
\label{fig::contin spectrum discr}
\end{figure}
\begin{figure}[b]
\center{\includegraphics[width=1. \linewidth]{fig11.pdf}}
\caption{
Spectral reconstruction of $C_{pV}^{\text{cont}} (\tau)$ for the continuous distribution of oscillators, for $N_\omega=10$ and $\Theta=1$ and $10$, respectively. Here we compare the results pertaining to a grid shifted by $\delta\omega= 0.25$ to those with $\delta\omega=0$, the usual (not shifted) case. The shaded area indicates the exact spectral function.
}
\label{fig::contin spectrum shift theta=1 and theta=10}
\end{figure}
In Fig.~\ref{fig::chi valid b10 discretization} we show $\chi^2_{\mathrm{val}}$ as a function of the number of grid points. Clearly, increasing the number of coefficients $A(\omega_i)$ of Eq.~(\ref{eq:sme1}) does not lead to a better spectral reconstruction. On the contrary, by introducing more degrees of freedom, one increases the entropy, and the spectral weight is smeared out excessively. In Fig.~\ref{fig::chi valid b10 discretization} we also show the effect on $\chi^2_{\mathrm{val}}$ of a shift $\delta \omega$. As expected, shifting the nodes away from $\omega_1=\omega_0$ and $\omega_2=3\omega_0$, which are the only frequencies present in the exact spectrum determined by Eq.~(\ref{eq::exact pv correlation}), deteriorates the accuracy of the spectrum obtained through the validation step.
The second parameter determining the quality of the statistical maximum entropy reconstruction is the effective temperature, $\Theta$. In Fig.~\ref{fig::sp function b10 theta} we show the behaviour of the spectral function for a chosen $\omega$-grid at the indicated values of $\Theta$. As expected from Eq.~(\ref{eq:sme1}), by increasing $\Theta$ the result approaches the most probable configuration that describes the correlation function $C_{pV}(\tau)$, reducing entropic effects. In Fig.~\ref{chi2 valid b10 table} we combine the above results for different pairs of parameters ($\Theta$, $N_\omega$), and plot the corresponding $\chi^2_{\mathrm{val}}$. Our validation procedure therefore strongly points to using models with a smaller number of delta functions combined with large values of $\Theta \gg 1$ for the spectral reconstruction. Based on the comparison with the exact spectrum, this choice is also clearly the one that leads to the description of the spectrum in closest agreement with the exact prediction. We conclude that the use of $\chi^2_{\mathrm{val}}$ indeed seems to provide an unbiased estimate of the quality of the reconstruction.
We now consider the spectral reconstruction for $C_{pV}(\tau)$ at $\beta=3$, again clarifying the influence of $\Theta$ and of the lattice discretization $N_\omega$. In Figs.~\ref{fig::sp function b3 discretization} and~\ref{fig::sp function b3 theta} we show selected examples of the resulting spectra. In contrast to the case $\beta=10$, we now observe in general a much stronger broadening of the peaks, which prevents us from resolving the two peak structure for $\Theta=1$, even for sparse $\omega$-grids. However, when combining sparse grids with sufficiently large $\Theta$ in the inversion, one improves towards the correct two peaks structure, as can be seen in Fig.~\ref{fig::sp function b3 theta}. The data shown in Fig.~\ref{chi2 valid b3 table} also indicate that this choice indeed corresponds to the lowest values of $\chi^2_{\mathrm{val}}$, confirming the validity of this indicator.
\begin{figure}[t]
\center{\includegraphics[width=1. \linewidth]{fig12.pdf}}
\caption{
Main panel: $\chi^2_{\mathrm{val}}$ from the validation procedure at the corresponding values $\Theta$ and $N_\omega$. The area of the circles is proportional to the value of $ \chi^2_{\mathrm{val}}$. Blue circles correspond to the results for an $\omega$-grid shifted by $\delta\omega=\Delta\omega / 2$. Inset: $\chi^2_{\mathrm{val}}$ as a function of the effective temperature $\Theta$, at the indicated values of $N_\omega$.
}
\label{fig::cont spectrum valid}
\end{figure}
\section{\label{sec:case2}Case study II: continuum distribution of oscillators}
We now move to our second test model, and study the potential energy current correlation function of a system containing a large number of independent, non-interacting harmonic oscillators. Considering the $C_{pV}$ of Eq.~(\ref{eq::exact pv correlation}) as a function of $\omega_0$, the correlation function for an ensemble of oscillators with a continuum of frequencies can be written as,
\begin{equation}
C_{pV}^{\text{cont}} (\tau) = \int_0^{\omega_{cut}} d\omega_0\; C^{\text{exact}}_{pV}(\tau;\omega_0) g(\omega_0).
\label{eq::exact continous pv correlation}
\end{equation}
The form of the density of states, $g(\omega_0)$, and the value of the frequency cutoff, $\omega_{cut}$, are arbitrary. In the following we consider a Debye-like form, $g(\omega_0)\propto\omega_0^2$, with $\omega_{cut}=1$, and fix $\beta=10$. With this choice, the exact spectrum for the energy current correlation is a superposition of two functions with a compact support, assuming non-zero values in the range $[0,\omega_{cut}]$ and $[0,3\omega_{cut}]$, respectively. As a result, it will display two sharp discontinuities, at $\omega_{cut}$ and $3\omega_{cut}$, respectively.
Contrary to the single oscillator case, here we do not generate the data by Monte Carlo simulation, but we rather employ the exact analytical expression, subsequently adding a Gaussian random noise with a variance proportional to the data themselves, $\sigma_k = 10^{-2}\times C_{pV}^{\text{cont}} (\tau_k)$. This variance is also used as the uncertainty to compute the $\chi^2$ of Eq.~(\ref{eq:chi2}).
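The data generation for this case can be sketched as follows; the quadrature grid, the number of $\tau$ points, and the random seed are illustrative assumptions, with $\hbar=m=1$.

```python
import numpy as np

rng = np.random.default_rng(5)
beta, w_cut, M = 10.0, 1.0, 50
tau = np.linspace(0.0, beta / 2.0, M)

def c_pv(tau, w0, beta):
    """Single-oscillator C_pV (exact continuous-time expression, hbar = m = 1)."""
    pref = w0 ** 3 / (256.0 * np.sinh(0.5 * beta * w0) ** 3)
    return pref * (12.0 * np.cosh(1.5 * beta * w0 - 3.0 * w0 * tau)
                   + 2.0 * (4.0 * np.exp(-beta * w0) + np.exp(-2.0 * beta * w0) + 1.0)
                   * np.exp(beta * w0) * np.cosh(0.5 * beta * w0 - w0 * tau))

# Debye-like density of states g ~ w0^2, normalized on [0, w_cut];
# trapezoidal quadrature over the oscillator frequencies
w0 = np.linspace(1e-4, w_cut, 400)
g = 3.0 * w0 ** 2 / w_cut ** 3
f = g[None, :] * c_pv(tau[:, None], w0[None, :], beta)
dw = w0[1] - w0[0]
C_cont = dw * (f.sum(axis=1) - 0.5 * (f[:, 0] + f[:, -1]))

sigma = 1e-2 * C_cont                                # 1% relative uncertainty
C_data = C_cont + sigma * rng.standard_normal(M)
```

The resulting $C_{pV}^{\text{cont}}(\tau)$ is positive and monotonically decreasing on $[0,\beta/2]$, as expected for a superposition of single-oscillator correlators.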
By following the same workflow discussed above for the single oscillator, we reconstruct the spectral densities for different values of $\Theta$ and number of delta functions in the model, $N_\omega$. In Fig.~\ref{fig::contin spectrum discr}, we show the influence of the discretization $N_\omega$ by fixing the canonical value $\Theta=1$. Following the same procedure as above, we again calculate $\chi^2_{\mathrm{val}}$ for the validation set by generating test correlation functions from the exact result of Eq.~(\ref{eq::exact continous pv correlation}), with the same variance $\sigma_k$. The values of $\chi^2_{\mathrm{val}}$, shown in Fig.~\ref{fig::cont spectrum valid}, indicate again that the statistically sounder reconstructions correspond to sparse grids. Unfortunately, none of the curves of Fig.~\ref{fig::contin spectrum discr} convincingly captures the sharp edges of the exact spectral density; the reconstructions rather resemble two symmetrically broadened peaks. Considering shifted grids (Fig.~\ref{fig::contin spectrum shift theta=1 and theta=10}), in contrast, results in more asymmetric features, clearly improving the reconstruction towards the exact spectrum, as also quantitatively supported by the validation procedure. Note, however, that employing sparse $\omega$-grids considerably limits the frequency resolution, so that the reconstruction in the case of the continuous spectrum with its sharp discontinuities remains quite difficult.
\section{\label{sec:conclusion}Conclusion and outlook}
In this paper, we have examined the reconstruction of spectral functions for transport coefficients, starting from imaginary time correlation functions obtained by path integral Monte Carlo simulations. In particular, we have described a general strategy for wisely expressing improved estimators with reduced statistical variance for imaginary time correlation functions involving current or momentum operators. We have next introduced an inversion procedure based on a stochastic maximum entropy method, a Bayesian approach commonly used for such problems. The outcome of these procedures is in general strongly dependent on the involved parameters, as we have illustrated in the case of the harmonic oscillator spectra employing different values for the effective inverse temperature, $\Theta$, as well as different choices for the grid discretization, $N_\omega$, or offset, $\delta \omega$. Despite their apparent simplicity, the oscillator models studied here provide
challenging benchmarks for the spectral reconstruction due to the sharp undamped delta-functions they contain.
Pure Bayesian approaches suggest eliminating the parameter dependence by using a flat prior with the most general and flexible model for the spectral density, e.g., a large value for $N_\omega$, together with $\Theta=1$, to encompass all possible solutions consistent with the data. In contrast, in our case studies we have shown that the spectra corresponding to these standard choices suffer exceedingly from the usual problems of all maximum entropy reconstructions: broadening or merging of peaks, smoothing out any sharp features in the underlying exact spectrum.
Indeed, in practice, path integral Monte Carlo data are strongly correlated in imaginary time, undermining a true justification of the Bayesian choice $\Theta=1$. Different values of $\Theta$ may therefore be considered to approximate efficiently the true, unknown likelihood function. On the other hand, the use of flexible models for the spectral function, containing a large number of parameters, possibly introduces a large amount of entropy into the Bayesian inversion, such that different parametrizations (linear or logarithmic grids in regions where spectral densities are flat, for instance) in general strongly modify the results. The representation of a model must therefore itself be considered a ``parameter'', making a ``parameter-free'' Bayesian inversion illusory in our view.
In this paper we have addressed exactly the above difficulties, and developed a validation procedure to quantitatively control any parameter dependence of the Bayesian inversion. Our proposal is based on the quantity $\chi^2_{\mathrm{val}}$ constructed from independent data not involved in the maximum entropy inversion, which provides an efficient and readily applicable method to select the optimal choice of parameters, corresponding to the lowest value of $\chi^2_{\mathrm{val}}$.
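The selection step can be sketched in a few lines of Python; the correlation data, parameter labels, and variances below are purely hypothetical stand-ins for the maximum-entropy outputs discussed in the text.

```python
import numpy as np

def chi2_val(c_model, c_val, sigma):
    """Validation chi-square evaluated on independent data, as in Eq. (eq:chi2)."""
    return np.sum(((c_model - c_val) / sigma) ** 2)

# Hypothetical model correlations C(tau_k), one per parameter set (Theta, N_omega),
# obtained from maximum-entropy inversions run outside this snippet.
tau = np.linspace(0.0, 5.0, 50)
c_val = np.exp(-tau)                      # stand-in for independent validation data
sigma = 1e-2 * c_val + 1e-6               # variance proportional to the data themselves
candidates = {
    ("Theta=1", "N_omega=200"): np.exp(-tau) * (1.0 + 0.05 * np.sin(tau)),
    ("Theta=10", "N_omega=12"): np.exp(-tau) * (1.0 + 0.01 * np.sin(tau)),
}
scores = {p: chi2_val(c, c_val, sigma) for p, c in candidates.items()}
best = min(scores, key=scores.get)        # optimal parameters: lowest chi2_val
```

The reconstruction whose model correlation tracks the independent data most closely wins, irrespective of how well it fitted the training set.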
We have shown explicitly that the new validation step clearly identifies a discrete set of two delta functions in the case study of the single harmonic oscillator, and provides unambiguous indications towards the correct asymmetric sharp edges in the case of an underlying continuous frequency spectrum. Also, in both cases, our validation procedure eventually selects models containing just a limited number of parameters, which intrinsically limits the resolution of the reconstruction. Overall, combining in a consistent workflow Bayesian inversion together with an efficient validation procedure able to select model parameters and effective temperature dependence, indeed seems to offer promising perspectives for capturing qualitative and quantitative features in spectral reconstruction.
We conclude by noting that the Green-Kubo method, combined with the harmonic theory of solids and a numerical perturbative treatment of anharmonic effects, has recently proven to be remarkably effective for the determination of heat conductivity at low temperature in systems such as amorphous silicon~\cite{Isaeva2019,Simoncelli2019}. Our hope is to extend those works to arbitrary temperatures and stronger anharmonic effects, on one hand employing path integrals to relax the assumptions underlying the perturbative treatment of anharmonicity, and on the other hand using the strategies for the spectral reconstruction developed in the present paper.
\acknowledgements
{This work has been supported by the project Heatflow (ANR-18-CE30-0019-01) funded by the French ``Agence Nationale de la Recherche''.}
\section*{Data Availability Statement}
The data that support the findings of this study are available from the corresponding author upon reasonable request.
\section{Introduction}
\label{sec:intro}
Black holes (BHs) constitute one of the main implications of General Relativity (GR) and of any metric theory of gravity. The recent observational evidence about their existence and nature, through the detection of gravitational waves (GWs) by the Laser Interferometer Gravitational-Wave Observatory (LIGO) \cite{Abott2016a,Abott2016b,Abott2017} and the first image of the BH located at the center of the galaxy M87 from the Event Horizon Telescope (EHT) collaboration \cite{EHC20191,EHC20192,EHC20193,EHC20194,EHC20195,EHC20196}, sheds new light on these intriguing massive gravitational sources. Such astrophysical objects are well known to be characterized by an event horizon, a one-way membrane separating the smoothly-behaving exterior spacetime from the causally disconnected inner regions hiding essential singularities at their centers.
Actually, there are several techniques to gather information about such objects: the oldest practice consists in observing the interaction between a BH and the surrounding matter through the emission of electromagnetic signals in the X-ray energy band \cite{Done2002}; tracking the motion of stellar objects around a supermassive BH (SBH), as has long been done with Sgr A* \cite{Gillessen2017}; the detection of GW signals through instruments of ever higher sensitivity \cite{Gourgoulhon2019,Abuter2020}; the ability to image the matter dynamics in the vicinity of a BH \cite{EHC20191,EHC20192,EHC20193,EHC20194,EHC20195,EHC20196,Kim2020}. This period, dubbed the \emph{multi-messenger era} for the wealth of complementary observational data, finally offers the possibility to gain more insight into the description of BHs within or outside the GR theory.
There is a huge variety of theoretical compact-object candidates which can mimic all the observational properties of a BH with arbitrary accuracy \cite{Cardoso2019}. In this huge class of BH mimickers, an appealing position is occupied by wormholes (WHs) \cite{Damour2007,Bambi2012,Bambi2013}. They have the peculiar properties of being horizonless and endowed with a traversable bridge connecting two different universes \cite{Visser1995}. The traversability condition in classical GR is linked to the existence of exotic matter, having negative energy and going against the classical laws of physics. A common way to address such an issue is based on quantum mechanics \cite{Hochberg1997,Bronnikov2013,Digrezia2017,Garattini2019} or, if we frame the WH models in alternative theories of gravity, on topological arguments \cite{Lobo2009,Harko2013}.
In the literature, there have been several attempts and new techniques proposed to detect WHs and establish their observational signatures. In order to give a precise idea of such a research field, it is worth citing the following works: Cardoso and Pani \cite{Cardoso2016} noted that WHs with a light ring admit a ringdown stage similar to that of a BH, and their quasinormal-mode spectrum, which is completely different from that of a BH, can eventually show up only at late times; instead, Konoplya and Zhidenko \cite{Konoplya2016} later showed that particular classes of WHs can actually ring similarly or differently to BHs at all times; Paul and collaborators \cite{Paul2019} produced numerical images of a thin accretion disk around both a BH and a WH, determining distinctive features when a WH has accretion disks on both sides of its throat, and qualitatively similar or dramatically different features when the disk is on the same side as the observer (see figures in the paper for details); Dai and collaborators \cite{Dai2019} showed that the gravitational flux propagates from one universe to the other, perturbing the motion of the objects, detectable with an acceleration precision of $10^{-6}\ {\rm m/s^2}$; Banerjee and collaborators \cite{Banerjee2019} calculated that BHs and WHs induce distinctive tidal effects close to the event horizon/throat (a few times higher for WHs), arising from their different geometries; Hashimoto and Dalui \cite{Hashimoto2017,Dalui2019} found that the motion of massive and massless test particles exhibits chaotic behavior near the event horizon due to its surface gravity, permitting one to probe whether a horizon exists.
In this work, we consider static and spherically symmetric WHs in pure GR, acting as mimickers of BH properties. We propose an original procedure to distinguish a WH from a BH by employing the general relativistic Poynting-Robertson (PR) effect, which can be supported by the recent massive amount of observational data.
In high-energy astrophysics, the motion of relatively small-sized test particles (like accretion disk elements, meteors, comets, planets, dust grains) around massive compact objects (like SBHs or stellar BHs) is influenced not only by the gravitational field, but also by the electromagnetic radiation from an emitting source (like an accretion disk, or a hot corona around a BH). Besides such forces, there is also a radiation drag force, termed the PR effect, arising when the matter absorbs and reemits the radiation, thus generating a thrust force opposite to the matter's orbital motion \cite{Ballantyne2004,Ballantyne2005,Worpel2013,Ji2014,Keek2014,Worpel2015}. Such an effect configures as a dissipative force, which removes energy and angular momentum from the affected body \cite{Poynting1903,Robertson1937,Bini2009,Bini2011}. Recent works on this topic include: the extension from the two-dimensional (2D) formulation to the three-dimensional (3D) space in the Kerr metric \cite{Defalco20183d,Bakala2019}; the continuous emission of radiation from a finite source in the Schwarzschild spacetime \cite{Wielgus2019}; the treatment under a Lagrangian formulation, where the Rayleigh potential (describing the radiation dissipative force) has been analytically determined for the first time in the GR literature \cite{DeFalco2018,Defalco2019,DeFalco2019VE}; the proof, within the Lyapunov theory, that the critical hypersurfaces (regions where there is a balance between radiation and gravitational forces) are stable configurations \cite{Defalco2019ST}.
The paper is structured as follows: in Sec. \ref{sec:PReffect} we first derive the equations of motion of a test particle around a static and spherically symmetric WH affected by the general relativistic PR effect, discussing the properties and implications of such dynamics with respect to the Schwarzschild case; in Sec. \ref{sec:diagnostic} we present our proposal to disentangle a WH from a BH by analysing the electromagnetic emission properties from the critical hypersurfaces in the Schwarzschild and WH spacetimes under the PR effect; in Sec. \ref{sec:end} we discuss the obtained results and finally give our conclusions.
\section{General relativistic Poynting-Robertson effect around a static and spherically symmetric wormhole}
\label{sec:PReffect}
In this section, we recall the properties of a Morris-Thorne WH metric (see Sec. \ref{sec:MTmetric}), which will be the geometrical background on which a test particle moves under the general relativistic PR effect. After having derived its equations of motion (see Sec. \ref{sec:GRPReffect}), we analyse the existence of the critical hypersurfaces (see Sec. \ref{sec:CH}), one of the most important implications of the general relativistic PR effect, which we will use in the next sections.
\subsection{The Morris--Thorne Wormhole}
\label{sec:MTmetric}
We consider a static and spherically symmetric WH, whose spacetime is described by the Morris--Thorne metric \cite{Morris1988}. We use the signature $(-,+,+,+)$ for the metric, and geometrical units for the gravitational constant $G$ and the speed of light $c$ ($c = G = 1$). The metric line element, $ds^2=g_{\alpha\beta}dx^\alpha dx^\beta$, expressed in spherical coordinates and in the equatorial plane $\theta=\pi/2$, reads as
\begin{equation} \label{eq:MTmetric}
ds^2=-e^{2\Phi(r)}dt^2+\frac{dr^2}{1-b(r)/r}+r^2d\varphi^2,
\end{equation}
which is parametrized by $\Phi(r)$ and $b(r)$, better known as the \emph{redshift} and \emph{shape functions}, respectively.
\begin{figure*}
\centering
\includegraphics[scale=0.49]{Fig1}
\caption{Sketch of the Morris--Thorne WH geometry including the presence of an accretion disk in the upper universe.}
\label{fig:Fig1}
\end{figure*}
To simplify the calculations and have a direct physical interpretation of the results we will find, we employ as an orthonormal basis of vectors the proper reference frame adapted to the \emph{static observers} (SOs), given by \cite{Morris1988}
\begin{equation} \label{eq:SOframe}
\begin{aligned}
&\boldsymbol{e_{\hat t}}\equiv\boldsymbol{n}= \frac{\boldsymbol{\partial_t}}{N},\quad
\boldsymbol{e_{\hat r}}=\frac{\boldsymbol{\partial_r}}{\sqrt{g_{rr}}},\quad
\boldsymbol{e_{\hat \varphi}}=\frac{\boldsymbol{\partial_\varphi}}{\sqrt{g_{\varphi \varphi }}}.
\end{aligned}
\end{equation}
where $N\equiv(-g^{tt})^{-1/2}=e^{\Phi(r)}$ is the time lapse function \cite{Bini2009,Bini2011}. Throughout the paper, we will denote vector and tensor indices (e.g., $v_\alpha$; $T_{\alpha\beta}$) evaluated in the SO frame by a hat (e.g., $v_{\hat \alpha}$; $T_{\hat{\alpha}\hat{\beta}}$), while scalar quantities (e.g., $f$) measured in the SO frame will be followed by $(n)$ (e.g., $f(n)$).
The geometrical properties of a WH entail some constraints on the functions $b(r)$ and $\Phi(r)$, which are \cite{Morris1988}:
\begin{itemize}
\item the presence of a spatial embedded 2D surface, connecting two spacetimes (defined as \emph{WH neck}, see Fig. \ref{fig:Fig1}). It is described in cylindrical coordinates by the following differential equation
\begin{equation} \label{eq:whshape}
\frac{dz(r)}{dr}=\pm\left[\frac{r}{b(r)}-1\right]^{-1/2},
\end{equation}
where the sign ``$+$'' refers to the upper universe, and the sign ``$-$'' to the lower universe. The surface $z=z(r)$ gives the WH neck shape;
\item \emph{there are no horizons or singularities}, entailing therefore that $\Phi$ is everywhere finite;
\item the \emph{WH throat} is defined as the minimum radius such that $r_{\rm min}=b_0$ and $b(r_{\rm min})=b_0$. Such a definition, substituted in Eq. (\ref{eq:whshape}), gives a divergence, formally in disagreement with the previous point. However, by exploiting the proper radial distance $l$ as measured by SOs \cite{Morris1988}
\begin{equation}\label{eq:ldist1}
\frac{dl}{dr}=\pm\left[1-\frac{b(r)}{r}\right]^{-1/2},
\end{equation}
or in a clearer form as
\begin{equation}\label{eq:ldist2}
l(r)=\pm\bigintss_{b_0}^r\frac{dr}{\left[1-\frac{b(r)}{r}\right]^{1/2}},
\end{equation}
we immediately see how the divergence disappears. Indeed, by imposing that such a distance be finite throughout the spacetime, we require that $1-b(r)/r\ge0$ everywhere;
\item both connected universes are \emph{asymptotically flat} far from the throat in both radial directions, namely $b(r)/r\to0$ and $\Phi\to0$ for $l\to\pm\infty$;
\item another fundamental point is to characterize the material which generates the WH spacetime curvature. We start by defining its stress-energy tensor, as measured in the SO frame, given by
\begin{equation} \label{eq:setm}
T_{\hat{t}\hat{t}}^{\rm (m)}=\rho(r),\quad T_{\hat{r}\hat{r}}^{\rm (m)}=-\tau(r),\quad T_{\hat{\varphi}\hat{\varphi}}^{\rm (m)}=p(r),
\end{equation}
where $\rho$ is the mass-energy density, $\tau$ is the radial tension, and $p$ is the lateral pressure. Now, solving the Einstein field equations and defining $(\cdot)'\equiv d(\cdot)/dr=(1-b(r)/r)^{-1/2}d(\cdot)/dl$, we obtain
\begin{eqnarray}
&&\rho(r)=\frac{b'(r)}{8\pi r^2},\label{eq:rho}\\
&&\tau(r)=\frac{b(r)/r-2[r-b(r)]\Phi'(r)}{8\pi r^2},\label{eq:tau}\\
&&p(r)=\frac{r}{2}\left[(\rho(r)-\tau(r))\Phi'(r)-\tau'(r)\right]-\tau(r).\label{eq:p}
\end{eqnarray}
These equations are extremely important, because they closely relate the matter to the metric functions and vice versa. In particular, Eq. (\ref{eq:rho}) can be easily integrated, giving
\begin{equation}
b(r)=b_0+\int_{b_0}^r8\pi x^2\rho(x)dx=2m(r),
\end{equation}
where we have defined
\begin{equation}
m(r)=\frac{b_0}{2}+\int_{b_0}^r4\pi x^2\rho(x)dx,
\end{equation}
which is the effective mass contained in the sphere of radius $r$. From this equation, the role of the function $b(r)$ becomes clearer: it is linked to the distribution of mass inside the WH. In particular, at spatial infinity we have \cite{Visser1995}
\begin{equation} \label{eq:totmass}
\lim_{r\to\infty}m(r)=\frac{b_0}{2}+\int_{b_0}^\infty4\pi x^2\rho(x)dx=M,
\end{equation}
where $M$ is the total mass of the system.
The traversability property is expressed by the \emph{flaring out condition}, which entails $\tau(r)>\rho(r)$ at the throat, or that the dimensionless function
\begin{equation} \label{eq:csi}
\xi(r)=\frac{\tau(r)-\rho(r)}{|\rho(r)|}
\end{equation}
be non-negative at $r=b_0$, or also that $d^2r/dz^2>0$ at $r=b_0$, giving a particular constraint on the WH shape. All these implications physically translate into having a \emph{negative mass-energy density} inside the throat. This leads to the delicate issue of the existence of \emph{exotic matter}, which, despite several proposals, is still a matter of debate: it seems to be forbidden by the classical laws of physics, but is accepted through quantum field theory arguments. However, as discussed in \cite{Hochberg19981,Hochberg19982,caplobo1,caplobo2}, it is possible to bypass this difficulty by considering modified theories of gravity, where energy conditions are not violated because the additional (geometric) degrees of freedom behave as a fluid, whose energy density can eventually be negatively defined. In other words, standard fluid matter retains its own properties, but violations are prevented by the improved field equations (see, e.g., Refs. \cite{Visser1989,Barcelo1999,Bohmer2012,Capozziello2012,Bahamonde2016,Capozziello2020} for further discussions and approaches). The upshot of the debate is that either exotic matter or modified gravity should be taken into account to obtain realistic WHs.
\end{itemize}
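To make the above quantities concrete, the following Python sketch evaluates the proper radial distance, the effective mass, and the exoticity function for the illustrative Ellis-like choice $b(r)=b_0^2/r$ with $\Phi=0$ (an assumption of this sketch, not a model adopted in the text). For this shape function the quadratures have the closed forms $l(r)=\sqrt{r^2-b_0^2}$ and $m(r)=b_0^2/(2r)$, which the numerics reproduce, and the flaring out condition holds with $\xi(b_0)=2$.

```python
import numpy as np
from scipy.integrate import quad

b0 = 1.0                                   # throat radius (geometrical units)
b = lambda r: b0**2 / r                    # illustrative Ellis-like shape function
db = lambda r: -b0**2 / r**2               # b'(r)

def proper_distance(r):
    """Proper radial distance l(r), upper-universe sign of Eq. (eq:ldist2)."""
    val, _ = quad(lambda x: 1.0 / np.sqrt(1.0 - b(x) / x), b0, r)
    return val

def rho(r):                                # mass-energy density, Eq. (eq:rho)
    return db(r) / (8.0 * np.pi * r**2)

def tau_r(r):                              # radial tension, Eq. (eq:tau) with Phi'(r) = 0
    return (b(r) / r) / (8.0 * np.pi * r**2)

def xi(r):                                 # exoticity function, Eq. (eq:csi)
    return (tau_r(r) - rho(r)) / abs(rho(r))

def mass(r):                               # effective mass m(r) inside radius r
    val, _ = quad(lambda x: 4.0 * np.pi * x**2 * rho(x), b0, r)
    return b0 / 2.0 + val

l_num = proper_distance(3.0)               # ≈ sqrt(3**2 - 1) ≈ 2.828
xi_throat = xi(b0)                         # = 2 > 0: flaring out condition satisfied
```

Note that the integrand of $l(r)$ has an integrable $1/\sqrt{r-b_0}$ singularity at the throat, which the adaptive quadrature handles, while the divergence of $dz/dr$ there never enters.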
In this work we are interested only in the \emph{basic WH criteria}, caring only about the geometrical structure, without considering the \emph{usability criteria}, which are defined to tune the WH for human interstellar travel (e.g., traversability of the WH neck in a relatively short time period, comfortable radial tidal forces) \cite{Morris1988}.
\subsection{The Poynting-Robertson effect in General Relativity}
\label{sec:GRPReffect}
In this section, we aim at deriving the equations of motion of a test particle influenced by the gravitational force from the WH, and by the radiation pressure together with the general relativistic PR effect from a radiation source outside the WH throat. We adopt the following strategy: we first write the equations of motion in the SO frame, see Eq. (\ref{eq:SOframe}), and then transform them into the frame of the static observer at infinity, see Eq. (\ref{eq:MTmetric}). To this end, we make use of the \emph{observer splitting formalism}, which is able to coherently disentangle gravitational from fictitious forces arising from the relative motion of two non-inertial observers \cite{Jantzen1992,Bini1997a,Bini1997b,DeFalco2018}.
We calculate the SO kinematical quantities, which are the acceleration $\boldsymbol{a}(n)=\nabla_{\boldsymbol{n}} \boldsymbol{n}$ and the relative Lie curvature vector $\boldsymbol{k_{(\rm Lie)}}(n)$, whose explicit expressions are \cite{Bini2009,Defalco20183d}
\begin{eqnarray}
a(n)^{\hat r}&=&\Phi'(r)\sqrt{1-b(r)/r},\\
k_{\rm (Lie)}(n)^{\hat r}&=&-\frac{\sqrt{1-b(r)/r}}{r}.
\end{eqnarray}
\subsubsection{Radiation field}
\label{sec:radfield}
We model the radiation field as a coherent flux of photons traveling along null geodesics of the Morris-Thorne metric. The photons depart from a radiation source around the WH (see Fig. \ref{fig:Fig1}), located outside the neck, and only one single photon reaches the test particle at its position at each instant of time. In this case, it is important to underline that the radiation stress-energy tensor $T^{\mu\nu}$ is superimposed on the background geometry, without modifying it. Such a tensor is different from the one occurring in Eq. (\ref{eq:setm}), where we have used the superscript \qm{$(m)$}, and reads as \cite{Bini2009,Bini2011}
\begin{equation}\label{eq:SET}
T^{\mu\nu}=\mathcal{I}^2 k^\mu k^\nu\,,\qquad k^\mu k_\mu=0,\qquad k^\mu \nabla_\mu k^\nu=0,
\end{equation}
where $\mathcal{I}$ is a parameter linked to the radiation field intensity, $\boldsymbol{k}$ is the photon four-momentum field, and the last two equations express the null geodesic condition. In such a spacetime, the energy $E=-k_t$ and the angular momentum $L_z=k_\varphi$ with respect to the polar axis (or any other axis) are conserved quantities along the photon trajectory. Splitting $\boldsymbol{k}$ with respect to the SO frame, we obtain \cite{Bini2009,Bini2011}
\begin{eqnarray}
&&\boldsymbol{k}=E(n)[\boldsymbol{n}+\boldsymbol{\hat{\nu}}(k,n)], \label{photon1}\\
&&\boldsymbol{\hat{\nu}}(k,n)=\sin\beta\ \boldsymbol{e_{\hat r}}+\cos\beta\ \boldsymbol{e_{\hat\varphi}}, \label{photon2}
\end{eqnarray}
where
\begin{equation}
E(n)=\frac{E}{e^{\Phi(r)}},
\end{equation}
is the photon energy measured in the SO frame, $\boldsymbol{\hat{\nu}}(k,n)$ is the photon spatial unit relative velocity with respect to the SO frame, and $\beta$ is the angle, measured in the SO frame, in the azimuthal direction. The radiation field is governed by the impact parameter $\lambda=L_z/E$, associated with the emission angle $\beta$. The radiation field photons are emitted from a spherical rigid surface having a radius $R_\star$, centered at the origin of the spherical coordinates, and rotating rigidly with angular velocity $\Omega_{\mathrm{\star}}$.
The photon impact parameter $\lambda$ and the related photon angle $\beta$ have the following expressions \cite{Bini2011,Bakala2019}
\begin{equation} \label{MT_impact_parameter}
\lambda=\Omega_{\star}\left[\frac{\mathrm{g_{\varphi\varphi}}}{-\mathrm{g_{tt}}}\right]_{r=R_\star},\quad \cos\beta=\frac{e^{\Phi(r)}}{r}\lambda,
\end{equation}
where the label $r=R_\star$ indicates that the metric components $g_{\varphi\varphi},g_{tt}$ must be evaluated at $R_\star$.
From the conservation of the stress-energy tensor, namely $\nabla_\mu T^{\mu\nu}=0$, we are able to determine the parameter $\mathcal{I}$, which has the following expression \cite{Bini2009,Bini2011}
\begin{equation}\label{INT_PAR}
\mathcal{I}^2=\frac{\mathcal{I}_0^2}{\sqrt{r^2-e^{2\Phi(r)}\lambda^2}},
\end{equation}
where $\mathcal{I}_0$ is $\mathcal{I}$ evaluated at the emitting surface.
\subsubsection{Radiation force and test particle acceleration}
\label{sec:radforce}
A test particle moves with a timelike four-velocity $\boldsymbol{U}$ and a spatial velocity $\boldsymbol{\nu}(U,n)$ with respect to the SO frames, which both read as \cite{Bakala2019}
\begin{eqnarray}
&&\boldsymbol{U}=\gamma(U,n)[\boldsymbol{n}+\boldsymbol{\nu}(U,n)], \label{testp}\\
&&\boldsymbol{\nu}=\nu(\sin\alpha\boldsymbol{e_{\hat r}}+\cos\alpha \boldsymbol{e_{\hat\varphi}}),
\end{eqnarray}
where $\gamma(U,n)\equiv\gamma=1/\sqrt{1-||\boldsymbol{\nu}(U,n)||^2}$ is the Lorentz factor, $\nu=||\boldsymbol{\nu}(U,n)||$ is the magnitude of the test particle spatial velocity, and $\alpha$ is the azimuthal angle of the vector $\boldsymbol{\nu}(U,n)$ measured clockwise from the positive $\hat\varphi$ direction in the $\hat{r}-\hat{\varphi}$ tangent plane in the SO frame.
The test particle acceleration $\boldsymbol{a}(U)$ can be calculated by using the observer splitting formalism. Its components can be easily derived in the SO frame by exploiting the spherical symmetry shared with the Schwarzschild equations \cite{Bini2011}, i.e.,
\begin{eqnarray}
a(U)^{\hat t}&=&\gamma^2\nu\sin\alpha\ a(n)^{\hat r}+\gamma^3\nu\frac{d\nu}{d\tau},\\
a(U)^{\hat r}&=&\gamma^2[a(n)^{\hat r}+k_{\rm (Lie)}(n)^{\hat r}\nu^2\cos^2\alpha]\notag\\
&&+\gamma\left(\gamma^2\sin\alpha\frac{d\nu}{d\tau}+\nu\cos\alpha\frac{d\alpha}{d\tau}\right),\\
a(U)^{\hat \varphi}&=&-\gamma^2\nu^2\sin\alpha\cos\alpha k_{\rm (Lie)}(n)^{\hat r}\notag\\
&&+\gamma\left(\gamma^2\cos\alpha\frac{d\nu}{d\tau}-\nu\sin\alpha\frac{d\alpha}{d\tau}\right).
\end{eqnarray}
We assume that the radiation-test particle interaction occurs through Thomson scattering, characterized by a constant momentum-transfer cross section $\sigma$, independent of direction and frequency of the radiation field. We can split the photon four-momentum (\ref{photon1}) in terms of the velocity $\boldsymbol{U}$ as \cite{Bini2009,Bini2011,Defalco20183d,Bakala2019}
\begin{equation}
\boldsymbol{k}=E(U)[\boldsymbol{U}+\boldsymbol{\hat{\mathcal{V}}}(k,U)],
\end{equation}
where
\begin{equation}
E(U)=\gamma E(n)[1-\nu\cos(\alpha-\beta)],
\end{equation}
is the photon energy measured by the test particle. The radiation force $\boldsymbol{{\mathcal F}_{\rm (rad)}}(U)$ can be written as \cite{Bini2009,Bini2011,Defalco20183d,Bakala2019}
\begin{equation} \label{radforce}
{\mathcal F}_{\rm (rad)}(U)^{\hat \alpha}=\tilde{\sigma} \, [\mathcal{I} E(U)]^2\, \hat{\mathcal V}(k,U)^{\hat \alpha},
\end{equation}
where $\tilde{\sigma}=\sigma/m$ and $m$ is the test particle mass. The term $\tilde{\sigma}[\mathcal{I} E(U)]^2$ has the following expression \cite{Bakala2019}
\begin{equation} \label{eq: sigma_tilde}
\tilde{\sigma}[\mathcal{I} E(U)]^2=\frac{ A\,\gamma^2 [1-\nu\cos(\alpha-\beta)]^2}{e^{2\Phi(r)}\sqrt{r^2-e^{2\Phi(r)}\lambda^2}},
\end{equation}
where $A=\tilde{\sigma}[\mathcal{I}_0 E]^2$ is the luminosity parameter, which can be equivalently written as $A/M=L/L_{\rm EDD}\in[0,1]$, where $M$ is the mass defined in Eq. (\ref{eq:totmass}), $L$ is the luminosity emitted at infinity, and $L_{\rm EDD}$ is the Eddington luminosity. The terms $\hat{\mathcal V}(k,U)^{\hat \alpha}$ are the radiation field components, whose expressions are \cite{Bini2011,Bakala2019}
\begin{eqnarray}\label{rad}
&&\hat{\mathcal{V}}^{\hat r}=\frac{\sin\beta}{\gamma [1-\nu\cos(\alpha-\beta)]}-\gamma\nu\sin\alpha, \\
&&\hat{\mathcal{V}}^{\hat\varphi}=\frac{\cos\beta}{\gamma [1-\nu\cos(\alpha-\beta)]}-\gamma\nu\cos\alpha,\\
&&\hat{\mathcal{V}}^{\hat t}=\gamma\nu\left[\frac{\cos(\alpha-\beta)-\nu}{1-\nu\cos(\alpha-\beta)}\right].
\end{eqnarray}
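As a quick numerical sanity check (an addition of this sketch, not part of the original derivation), one can verify that the components above follow from the definition $\boldsymbol{\hat{\mathcal V}}(k,U)=\boldsymbol{k}/E(U)-\boldsymbol{U}$, and that $\boldsymbol{\hat{\mathcal V}}$ is a unit spacelike vector orthogonal to $\boldsymbol{U}$, working directly in the SO orthonormal frame with arbitrary kinematical inputs:

```python
import numpy as np

# arbitrary SO-frame inputs (geometrical units): speed, angles, photon energy
nu, alpha, beta, E_n = 0.3, 0.7, 1.1, 2.0
gamma = 1.0 / np.sqrt(1.0 - nu**2)

# four-momentum k and four-velocity U in the SO orthonormal frame, order (t, r, phi)
k = E_n * np.array([1.0, np.sin(beta), np.cos(beta)])          # Eqs. (photon1)-(photon2)
U = gamma * np.array([1.0, nu * np.sin(alpha), nu * np.cos(alpha)])

E_U = gamma * E_n * (1.0 - nu * np.cos(alpha - beta))          # photon energy seen by the particle
V_num = k / E_U - U                                            # definition of V-hat(k, U)

# closed-form components of Eqs. (rad)
D = 1.0 - nu * np.cos(alpha - beta)
V_t = gamma * nu * (np.cos(alpha - beta) - nu) / D
V_r = np.sin(beta) / (gamma * D) - gamma * nu * np.sin(alpha)
V_p = np.cos(beta) / (gamma * D) - gamma * nu * np.cos(alpha)
```

Contracting with the frame Minkowski metric confirms $U\cdot U=-1$, $k\cdot k=0$, $U\cdot\hat{\mathcal V}=0$ and $\hat{\mathcal V}\cdot\hat{\mathcal V}=1$, as the splitting requires.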
\subsubsection{Equations of motion}
\label{sec:eom}
Collecting all the information derived in the previous sections, we are able to write the equations of motion of a test particle moving in the equatorial plane around a WH and influenced by the radiation force (\ref{radforce}). Imposing that $\boldsymbol{a}(U)=\boldsymbol{{\mathcal F}_{\rm (rad)}}(U)$, we obtain \cite{Bini2009,Bini2011,Defalco20183d,Bakala2019}
\begin{eqnarray}
\frac{d\nu}{d\tau}&=& -\frac{\sin\alpha}{\gamma}a(n)^{\hat r}\label{EoM1}\\
&&+\frac{ A [1-\nu\cos(\alpha-\beta)][\cos(\alpha-\beta)-\nu]}{e^{2\Phi(r)}\sqrt{r^2-e^{2\Phi(r)}\lambda^2}},\nonumber\\
\frac{d\alpha}{d\tau}&=&-\frac{\gamma\cos\alpha}{\nu}\left[a(n)^{\hat r}+k_{\rm (Lie)}(n)^{\hat r}\,\nu^2\right]\label{EoM2}\\
&&+\frac{ A [1-\nu\cos(\alpha-\beta)]\sin(\alpha-\beta)}{e^{2\Phi(r)}\sqrt{r^2-e^{2\Phi(r)}\lambda^2}\ \nu\cos\alpha},\nonumber\\
U^{\hat r}&\equiv&\frac{dr}{d\tau}=\frac{\gamma\nu\sin\alpha}{\sqrt{g_{rr}}}, \label{EoM3}\\
U^{\hat \varphi}&\equiv&\frac{d\varphi}{d\tau}=\frac{\gamma\nu\cos\alpha}{\sqrt{g_{\varphi\varphi}}},\label{EoM4}\\
U^{\hat t}&\equiv&\frac{dt}{d\tau}=\frac{\gamma}{N},\label{time}
\end{eqnarray}
where $\tau$ is the affine parameter (proper time) along the test particle trajectory.
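As an illustration, the system (\ref{EoM1})--(\ref{time}) can be integrated numerically once $\Phi(r)$ and $b(r)$ are specified. The Python sketch below adopts the Schwarzschild-reducing choices $\Phi(r)=\tfrac{1}{2}\ln(1-2M/r)$ and $b(r)=2M$, radially emitted photons ($\lambda=0$), and arbitrary initial data; it is meant only as a minimal template, not as a simulation performed in this paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

M, A, lam = 1.0, 0.05, 0.0          # mass, luminosity parameter, photon impact parameter

def rhs(tau, y):
    nu, alpha, r, phi = y
    gamma = 1.0 / np.sqrt(1.0 - nu**2)
    e2phi = 1.0 - 2.0 * M / r                     # e^{2 Phi(r)} for the Schwarzschild choice
    a_r = (M / r**2) / np.sqrt(e2phi)             # a(n)^r = Phi'(r) sqrt(1 - b(r)/r)
    k_lie = -np.sqrt(e2phi) / r                   # k_Lie(n)^r
    beta = np.arccos(np.sqrt(e2phi) * lam / r)    # photon angle, Eq. (MT_impact_parameter)
    denom = e2phi * np.sqrt(r**2 - e2phi * lam**2)
    c = np.cos(alpha - beta)
    # Eqs. (EoM1)-(EoM4) as written in the text
    dnu = -np.sin(alpha) / gamma * a_r + A * (1.0 - nu * c) * (c - nu) / denom
    dalpha = (-gamma * np.cos(alpha) / nu * (a_r + k_lie * nu**2)
              + A * (1.0 - nu * c) * np.sin(alpha - beta) / (denom * nu * np.cos(alpha)))
    dr = gamma * nu * np.sin(alpha) * np.sqrt(e2phi)   # 1/sqrt(g_rr) = sqrt(1 - b/r)
    dphi = gamma * nu * np.cos(alpha) / r
    return [dnu, dalpha, dr, dphi]

y0 = [0.2, 0.05, 10.0 * M, 0.0]     # initial (nu, alpha, r, phi): arbitrary test values
sol = solve_ivp(rhs, (0.0, 2.0), y0, rtol=1e-8, atol=1e-10)
```

For a generic WH one only needs to replace `e2phi`, `a_r` and the $g_{rr}$ factor with the chosen $\Phi(r)$ and $b(r)$; note that the equations are singular at $\nu=0$ and $\alpha=\pi/2$, so initial data should avoid these values.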
\subsection{Critical hypersurfaces}
\label{sec:CH}
The dynamical system given by Eqs. (\ref{EoM1}) -- (\ref{EoM4}) may admit the existence of a critical hypersurface, a region where the radiation and gravitational forces balance. We already know that in the equatorial plane of the Schwarzschild metric \cite{Bini2009,Bini2011} such hypersurfaces behave as \emph{stable attractors}, namely particular configurations on which the test particle moves stably for all future times.
Imposing that on the critical hypersurface the test particle must move on purely circular orbits (i.e., $\alpha=0,\pi$) and with constant velocity (i.e., $\nu=\rm{const}$), we have that $d\nu/d\tau=0$, and $d\alpha/d\tau=0$, or equivalently that \cite{Bini2009,Bini2011}
\begin{eqnarray}
&&\frac{A [1-\nu\cos(\alpha-\beta)][\cos(\alpha-\beta)-\nu]}{e^{2\Phi(r)}\sqrt{r^2-e^{2\Phi(r)}\lambda^2}}=0,\label{eq:CH1}\\
&&a(n)^{\hat r}+k_{\rm (Lie)}(n)^{\hat r}\,\nu^2\notag\\
&&+\frac{A [1-\nu\cos\beta]\sin\beta}{\gamma e^{2\Phi(r)}\sqrt{r^2-e^{2\Phi(r)}\lambda^2}}=0. \label{eq:CH2}
\end{eqnarray}
From Eq. (\ref{eq:CH1}), we obtain that the velocity of the test particle on the critical hypersurface must be equal to the azimuthal photon velocity
\begin{equation}
\nu=\cos\beta.
\end{equation}
Substituting such a result into Eq. (\ref{eq:CH2}), we derive an implicit equation determining the radius $r_{\rm crit}$ of the critical hypersurface, which is given by
\begin{equation} \label{eq:CH3}
a(n)^{\hat r}+k_{\rm (Lie)}(n)^{\hat r}\cos^2\beta+\frac{A \sin^3\beta}{r\ e^{2\Phi(r)}}=0.
\end{equation}
We already know the properties of Eq. (\ref{eq:CH3}) in the Schwarzschild metric. For photons emitted radially (i.e., $\lambda=0$), the test particle reaches the critical hypersurface and ends its motion at a point on it, since there is a perfect balance between radiation and gravitational forces \cite{Bini2009,Defalco20183d}; instead, in the case where the photons are emitted in an arbitrary direction (i.e., $\lambda\neq0$), when the test particle reaches the critical hypersurface it starts to move on it with the constant velocity given by Eq. (\ref{eq:CH1}) \cite{Bini2011,Bakala2019}. In addition, we know that the critical radius solution of Eq. (\ref{eq:CH3}) is unique\footnote{In the Schwarzschild case, Eq. (\ref{eq:CH3}) can admit three different solutions. One solution is located very far from the BH and another one is close to the event horizon, so they are unphysical. Therefore, there exists only one physical solution \cite{Bini2011,Bakala2019}.}, and that it depends continuously on the luminosity parameter $A$. Therefore, it is always possible to find a critical radius very close to the Schwarzschild event horizon for a particular luminosity parameter $A$ (a crucial property which will be exploited in Sec. \ref{sec:diagnostic}), as one can immediately see in Fig. \ref{fig:Fig4}.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.315]{Fig4}
\caption{Critical radius $r_{\rm crit}$ in terms of the luminosity parameter $A$. The grey region, delimited by the critical radius profiles evaluated at the minimum ($\Omega_{\rm min}=0$) and maximum ($\Omega_{\rm max}=[\sqrt{-g_{tt}/g_{\varphi\varphi}}]_{r=R_\star}$) angular frequencies, defines the region where all the critical radii vary. The dashed red line represents the photon sphere, while the dashed blue line the event horizon.}
\label{fig:Fig4}
\end{figure}
\subsubsection{WH critical hypersurfaces: general remarks}
\label{sec:generalCH}
In our case, Eq. (\ref{eq:CH3}) is generic, being determined once we have an explicit expression for the metric functions $\Phi(r), b(r)$. Therefore, depending on the considered WH, we can have different dynamical systems, and it may happen that Eq. (\ref{eq:CH3}) admits more than one (physical) solution, no solutions (in the worst case), or solutions only in some regions of the spacetime. Let us consider a particular case, where the temporal metric component is a constant function, i.e., $\Phi(r)=\Phi_0\equiv\rm{const}$, while the shape function $b(r)$ remains unspecified. In this particular case, Eq. (\ref{eq:CH3}) becomes
\begin{equation} \label{eq:CH4}
\lambda^2e^{4\Phi_0}r\sqrt{1-\frac{b(r)}{r}}=A(r^2-e^{2\Phi_0}\lambda^2)^{3/2}.
\end{equation}
In this case, when the photons are radially emitted (i.e., $\lambda=0$), Eq. (\ref{eq:CH4}) implies $r=0$, which is not a physical solution for either a BH or a WH, independently of the explicit functional form of the shape function $b(r)$. Therefore, for $\lambda=0$ it is never possible to have critical hypersurfaces. Instead, for $\lambda\neq0$, Eq. (\ref{eq:CH4}) becomes an algebraic equation of sixth order in $r$,
\begin{equation} \label{eq:CH5}
\begin{aligned}
&A^2r^6-3e^{2\Phi_0}\lambda^2A^2r^4+(3A^2e^{4\Phi_0}\lambda^2-e^{8\Phi_0}\lambda^4)r^2\\
&+e^{8\Phi_0}\lambda^4r b(r)-A^2e^{6\Phi_0}\lambda^6=0.
\end{aligned}
\end{equation}
This equation strictly depends on the functional form of $b(r)$. The photon impact parameter $\lambda$ cannot assume arbitrary values, but ranges in a limited interval. First of all, we must have $\lambda\ge0$, and since $\lambda$ depends both on the source's radius $R_\star$ and on its angular velocity $\Omega_\star$, we should constrain these parameters. For reasons which will become clearer in Sec. \ref{sec:diagnostic}, we consider $R_\star=6M$, corresponding to the innermost stable circular orbit (ISCO) in the Schwarzschild metric. The angular velocity $\Omega_\star$ has minimum and maximum values corresponding respectively to $\Omega_{\rm min}=0$ and $\Omega_{\rm max}=[\sqrt{-g_{tt}/g_{\varphi\varphi}}]_{r=R_\star}$. In the Schwarzschild metric we obtain $\Omega_\star\in[0,0.14] M^{-1}\ \rm{rad/s}$, or equivalently a rotation frequency $f_\star\in[0,4126/(M/M_\odot)]$ Hz, which finally gives $\lambda\in[0,7.35]\ M$.
For $\lambda/M\ll1$, retaining only the terms up to order $\lambda^2$, from Eq. (\ref{eq:CH5}) we obtain
\begin{equation}
r^4-3e^{2\Phi_0}\lambda^2r^2+3e^{4\Phi_0}\lambda^2=0,
\end{equation}
where we have assumed that $e^{\Phi}$ and $b(r)$ do not contain terms of order higher than $\lambda^4$. This equation admits no real solutions; therefore, we conclude that for $\lambda/M\ll1$ there are no critical hypersurfaces. The neglected terms would have given just a small contribution close to $r=0$, which is still not an admissible solution for either a BH or a WH. As will become clearer in Sec. \ref{sec:diagnostic}, we are only interested in solutions outside of the event horizon.
For $\lambda/M\gg1$, Eq. (\ref{eq:CH5}) instead reduces to
\begin{equation}
r^2-r b(r)+A^2e^{-2\Phi_0}\lambda^2=0,
\end{equation}
which depends on the explicit functional form of $b(r)$.
\section{Diagnostic to distinguish a black hole from a wormhole}
\label{sec:diagnostic}
In this section, we explain in detail the strategy to distinguish a BH from a WH. The idea is mainly based on the hypothesis that a particular class of WH metrics admits a transition surface layer (located outside the event horizon), which smoothly connects the internal WH solution with the external WH region described by the Schwarzschild metric, see Sec. \ref{sec:geoastro} and Fig. \ref{fig:Fig2}. Therefore, in the WH case metric changes occur in this transition surface layer, while in the BH case the metric remains that of the Schwarzschild spacetime.
We consider the presence of an accretion disk around the central compact object. It represents the source both of the radiation emission and of an intense magnetic field, which produces squeezed vacuum states generating the negative energy required to make the WH both traversable and stable (see Sec. \ref{sec:exomat}). The general relativistic PR effect is very important for explaining the presence of stable critical hypersurfaces in the transition surface layer. We consider the emission properties not only of the disk (as is usually done in the literature), but also of the critical hypersurface in the Schwarzschild BH metric (see Sec. \ref{sec:raytrace}). If a WH is present, either there are no critical hypersurfaces or, even if they exist, they have emission properties different from those of the Schwarzschild case, due to the presence of a different metric. Therefore, if we are able to fit the observational data with this model, it means that there is a BH; otherwise, a WH could exist. \emph{We note that the flux emitted from the critical hypersurface in the Schwarzschild metric is a critical observable, which strongly points to the presence of a BH.}
\subsection{Geometrical and astrophysical setup}
\label{sec:geoastro}
We consider a particular class of static and spherically symmetric WHs\footnote{We refer to the third example in the appendix of Ref. \cite{Morris1988}, entitled \emph{\qm{Solutions with exotic matter limited to the throat vicinity}}.}, contained in the appendix of the work by Morris and Thorne \cite{Morris1988}, which constitutes a subtle example of perfect BH mimickers, indistinguishable through both X-ray electromagnetic and GW emissions.
In such WH metrics, the exotic matter is confined to a small region close to the WH throat (i.e., $b_0\le r\le r_{\rm c}$), outside of which ordinary matter extends up to the Schwarzschild radius $R_{\rm S}=2M$ (i.e., $r_{\rm c}\le r\le R_{\rm S}$). There exists a small transition surface layer (i.e., $R_{\rm S}\le r\le R_{\rm S}+\epsilon$) where it is possible to smoothly connect the inner WH solution with the Schwarzschild metric (i.e., $R_{\rm S}+\epsilon\le r$). Following the results reported by Cardoso and collaborators \cite{Cardoso2016}, we know that a good BH mimicker should have $R_{\rm S}+\epsilon<R_{\rm P}=3M$, corresponding to the photon sphere radius in the Schwarzschild metric. Based on this consideration, the transition surface layer must be located under the photon sphere radius, which consequently translates into having $\epsilon/M<1$ (see Fig. \ref{fig:Fig2} for details).
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.52]{Fig2}
\caption{Schematic representation of one of the Morris-Thorne solutions in terms of the spacetime domain.}
\label{fig:Fig2}
\end{figure}
We note that the transition surface layer must be located outside the Schwarzschild event horizon, otherwise it would create a metric with a horizon, thereby going against the definition of a WH. We have constructed a very extreme case, which makes the identification of a WH very complicated and thorny, in agreement with the current observational state of the art. In addition, the location of the transition surface layer, very close to the event horizon, allows gravity to be probed in extreme field regimes.
In this geometrical background, we consider a thin accretion disk \cite{Shakura1973} located in the equatorial plane around the WH, extending from the Schwarzschild ISCO radius $r_{\rm in}=6M$ to $r_{\rm out}=100M$\footnote{The outer boundary of the disk mainly depends on the particular system under investigation. We consider a sufficiently high value, because the main contributions come from the inner regions closer to the BH, while the distant regions give only minor contributions, which do not drastically change our analysis.}.
We assume that the accretion disk is present only in one universe, and not also on the opposite side. We exclude the possibility of two accretion disks in both universes, otherwise the WH could be immediately detected from X-ray observations \cite{Paul2019}. The accretion disk elements, which move down to the ISCO radius, can be modeled as test particles, having initial position $(r,\varphi)=(r_0,\varphi_0)$ and velocity $(\nu,\alpha)=(\nu_0,\alpha_0)$. They are influenced by the radiation field coming from the accretion disk, which in the 2D case can be reasonably approximated as a ring at the ISCO radius, because the radiation field from other parts of the disk is shielded.
However, it is also possible to have other emitting sources, such as a hot corona around a BH \cite{Fabian2015}.
\subsection{Magnetic field in the accretion disk as a possible mechanism to make a WH traversable and stable}
\label{sec:exomat}
The presence of an accretion disk around a BH is an important source of information about the system under study. Indeed, the accretion disk might play a twofold role: (1) it is the emitting source which generates a radiation field (the source of the general relativistic PR effect); (2) its intense magnetic field makes a WH traversable and stable. Regarding the latter issue, we propose a possible explanation of the WH stress-energy tensor (\ref{eq:setm}), and suggest a possible mechanism through which we can select, among the plethora of known astrophysical BH systems, the possible candidates hosting WHs. We do not enter into the modeling details of such a process, because this goes beyond the aim of this paper.
Following an idea contained in the paper of Morris and Thorne \cite{Morris1988}, besides the possibility of modified gravity discussed above, we think that a situation where quantum fields have negative energy density, thus violating the null energy condition (NEC), is obtained by a \emph{squeezed quantum state of the electromagnetic field}. This phenomenon consists in decreasing the noise in one observable (coinciding in our case with the energy) at the cost of enhancing the noise in the conjugate observable. The result is that the variations in the first observable are reduced below the quantum vacuum zero-point fluctuations, thus entailing negative energy \cite{Morris1988,Drummond2004,Davis2006}. In other words, quantum squeezing makes it possible to withdraw energy from one region of ordinary vacuum at the cost of piling up the remaining energy elsewhere. In addition, such a state is physically reproducible in the laboratory thanks to the nonlinear-optics squeezing technique (see \cite{Davis2006}, and references therein), making it one of the most feasible astrophysical candidates.
We underline again that we are interested in the situation where an element of accreting gas can fall inside the WH, rather than a human traversing it for interstellar travel. We deem that a possible cause for generating the squeezed vacuum states is the presence of strong magnetic fields in the accretion disk \cite{Morris1988}. In addition, since such magnetic fields have steady strength, a WH both \emph{traversable and stable} could be realized. This is not just a hypothesis, because in reality there exist astrophysical BH systems endowed with accretion disk structures showing intense magnetic fields \cite{Piotrovich2015}, such as BHs in active galactic nuclei (AGNs): NGC 7469 ($2.20\times10^5$ G), Akn 564 ($1.26\times10^5$ G), NGC 4051 ($9.85\times10^4$ G), PG 1211+143 ($6.25\times10^4$ G), Mrk 335 ($6.10\times10^5$ G). Adopting Eq. (2.11) in Ref. \cite{Shakura1973}, where $B\sim10^8(M_\odot/M)^{1/2}$ G at the ISCO radius, it is possible to obtain higher magnetic fields, such as $B\sim10^7$ G for $M=100M_\odot$ and $B\sim10^6$ G for $M=10^4M_\odot$.
\subsection{Ray-tracing of emitting surfaces}
\label{sec:raytrace}
From Sec. \ref{sec:CH} we know that it is always possible to have a critical hypersurface very close to the event horizon in the Schwarzschild metric, see Fig. \ref{fig:Fig4}. We study the emission properties of this configuration, together with those of the accretion disk, in the BH case toward a distant observer. This research topic has been only partially treated in the PR effect literature \cite{Bini2012}.
The calculation of the emitted fluxes can be performed by exploiting the \emph{ray-tracing technique}, which relies on tracking a photon trajectory from the emission point to the observer location. In order to carry out such calculations in the Schwarzschild metric, some fundamental effects must be taken into account \cite{Misner1973,Defalco2016}: light bending, gravitational lensing (better known as the solid angle), and gravitational redshift.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.56]{Fig3}
\caption{Ray-tracing geometry in Schwarzschild spacetime.}
\label{fig:Fig3}
\end{figure}
The ray-tracing geometry is depicted in Fig. \ref{fig:Fig3}. We consider a reference frame centered at the BH/WH location, with the $\boldsymbol{x}$- and $\boldsymbol{y}$-axes lying in the equatorial plane and the $\boldsymbol{z}$-axis orthogonal to it. We adopt spherical coordinates, where the radius $r$ joins the center of the coordinates with any point in space, the azimuthal angle $\varphi$ is measured clockwise from the $\boldsymbol{x}$-axis, and the latitudinal angle $\theta$ is measured from the $\boldsymbol{z}$-axis. A static, non-rotating observer is located at infinity, inclined by an angle $i$ with respect to the $\boldsymbol{z}$-axis, with the $\boldsymbol{z_0}$-axis pointing in the observer's direction. The latitudinal angle $\psi$ is measured from the $\boldsymbol{z_0}$-axis and is known in the literature as the \emph{light bending angle} \cite{Misner1973,Beloborodov2002,Defalco2016}.
Let us consider a point $P$ in space at radial distance $R$, where a photon is emitted (see Fig. \ref{fig:Fig3}). At this point, we can define the emission angle $\alpha_{\rm em}$ as the angle formed by the radial versor $\boldsymbol{\hat r}$ and the photon velocity $\boldsymbol{k}$, both applied at the point $P$. The photon follows a null trajectory in the Schwarzschild spacetime (lying in a single plane), reaching the observer location with a photon impact parameter \cite{Misner1973}
\begin{equation} \label{eq:bpho}
b_{\rm ph}=\frac{R\sin\alpha_{\rm em}}{\sqrt{1-2M/R}}.
\end{equation}
It is important to note that this photon impact parameter $b_{\rm ph}$ is different from that of the radiation field $\lambda$, see Eq. (\ref{MT_impact_parameter}).
We will employ highly accurate approximate polynomial ray-tracing equations for the accretion disk, while for the critical hypersurface we are forced to use the original integral formulas, since accurate approximate equations are not available in that region.
We will produce some emission templates for the critical hypersurface (see Sec. \ref{sec:ECH}), the accretion disk (see Sec. \ref{sec:EAC}), and their combined profiles (see Sec. \ref{sec:ET}). Then, we will analyse their behaviors to extract the relevant physical information. We will focus only on the Schwarzschild spacetime, because for other metrics in the transition surface layer (i.e., $2M\le r\le 2M+\epsilon$) it can occur either that there are no critical hypersurfaces, so the matter flows down to the throat, or that, even if they exist, they have different emission properties with respect to the Schwarzschild metric, which the observer at infinity can immediately distinguish (see Sec. \ref{sec:CH}).
We do not perform the same calculations in the WH case, for mainly two reasons: (1) we do not know \emph{a priori} the most suitable metric to be used in the transition surface layer; instead, it can be determined \emph{a posteriori} if the observational data show strong departures from the BH model; (2) the mathematical problem behind this case is very complex and can be the topic of another paper. Indeed, it entails solving the following issues: $(i)$ developing the ray-tracing equations in the new metric, $(ii)$ analysing their properties, $(iii)$ smoothly matching such equations with those of the Schwarzschild metric on the boundary of the transition surface layer.
\subsubsection{Emission from the critical hypersurface}
\label{sec:ECH}
The ray-tracing of the critical hypersurface associates to each point the related light bending angle, calculated through the formula \cite{Defalco2016}
\begin{equation}
\cos\psi=\sin i\cos\varphi.
\end{equation}
The light bending equation \cite{Misner1973,Beloborodov2002,Defalco2016}
\begin{equation} \label{eq:libe}
\psi=\int_R^\infty \frac{dr}{r^2}\left[\frac{1}{b_{\rm ph}^2}-\frac{1}{r^2}\left(1-\frac{2M}{r} \right) \right]^{-\frac{1}{2}},
\end{equation}
is valid for every $R>2M$ and $\alpha_{\rm em}\in[0,\pi/2]$. Through numerical interpolation methods, we determine the emission angle $\alpha_{\rm em}$ \cite{Press2002}. In particular, we must distinguish photons with zero and one turning points \cite{Defalco2016}. Defining $\psi_{\rm p}$ as the light bending angle corresponding to the emission angle $\alpha_{\rm em}=\pi/2$, we have that for $\psi\in[0,\psi_{\rm p}]$ there are zero turning points, while for $\psi\in[\psi_{\rm p},\psi_{\rm max}]$ there is one turning point, where $\psi_{\rm max}$ is the light bending angle corresponding to the maximum emission angle \cite{Defalco2016}
\begin{equation} \label{alphamax}
\alpha_{\rm max}=\pi-\arcsin\left[\frac{3}{2}\sqrt{3\left(1-\frac{2M}{R}\right)}\frac{2M}{R}\right].
\end{equation}
Indeed, for $\alpha_{\rm em}\in[\alpha_{\rm max},\pi]$ the photon is swallowed by the BH and cannot reach the observer. This argument is valid for $R\ge3M$ (disk case), while for $R<3M$ (our case) $\alpha_{\rm max}$ becomes\footnote{For $R\le3M$, it occurs that $\alpha_{\rm max}\le\pi/2$, decreasing down to $\alpha_{\rm max}=0$ at $R=2M$ and forming the so-called \emph{cone of avoidance} \cite{Chandrasekhar1992}.} \cite{Chandrasekhar1992}
\begin{equation} \label{alphamax2}
\alpha_{\rm max}=\arcsin\left[\frac{3}{2}\sqrt{3\left(1-\frac{2M}{R}\right)}\frac{2M}{R}\right].
\end{equation}
This remark is very useful not only to make the $\alpha_{\rm max}$ function smooth, as can be seen in Fig. \ref{fig:Fig6}, but also to perform a correct ray-tracing procedure.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.28]{Fig6}
\caption{Maximum emission angle $\alpha_{\rm max}$ in terms of the emission radius $R$. The dashed red line marks $\alpha=\pi/2$, the threshold for having turning points, while the dashed green line marks the position of the photon sphere.}
\label{fig:Fig6}
\end{figure}
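The two branches of the maximum emission angle, Eqs. (\ref{alphamax}) and (\ref{alphamax2}), can be checked numerically with the following sketch (our own illustrative code, in geometric units with $M=1$ by default); the two branches join continuously at $R=3M$, where $\alpha_{\rm max}=\pi/2$:

```python
import numpy as np

def alpha_max(R, M=1.0):
    """Maximum emission angle alpha_max(R): photons emitted with
    alpha_em > alpha_max are captured by the BH."""
    s = 1.5 * np.sqrt(3.0 * (1.0 - 2.0 * M / R)) * (2.0 * M / R)
    s = min(s, 1.0)  # guard against round-off slightly above 1 at R = 3M
    if R >= 3.0 * M:
        return np.pi - np.arcsin(s)  # Eq. (alphamax), R >= 3M
    return np.arcsin(s)              # Eq. (alphamax2), 2M < R < 3M
```

At $R=2M$ the function drops to zero, reproducing the cone of avoidance mentioned in the footnote above.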
For photons with one turning point, we apply a \emph{symmetrization process} by defining $\psi_{\rm S}=2\psi_{\rm p}-\psi$, because Eq. (\ref{eq:libe}) is defined only for $\psi\in[0,\psi_{\rm p}]$. We thus obtain an emission angle $\alpha_{\rm S}\in[0,\pi/2]$; therefore, to obtain the right emission angle $\alpha_{\rm em}$ corresponding to $\psi$, we use another \emph{symmetrization process}, i.e., $\alpha_{\rm em}=\pi-\alpha_{\rm S}$ \cite{Defalco2016}. In our case we do not consider any symmetrization process, since $\alpha_{\rm em}\le\pi/2$.
Another non-trivial aspect is that in the range $R<3M$ the function under the square root in Eq. (\ref{eq:libe}) is always positive. For practical reasons, with the change of variables $x=R/r$, we rewrite that equation as
\begin{equation} \label{eq:libe2}
\psi=\int_0^1 \frac{\sin\alpha\ dx}{\sqrt{f(x,R,\alpha)}},
\end{equation}
where
\begin{equation} \label{eq:fdef}
f(x,R,\alpha)=1-\frac{2M}{R}-x^2\sin^2\alpha\left(1-\frac{2Mx}{R} \right).
\end{equation}
Fig. \ref{fig:Fig7} confirms this claim.
\begin{figure}[ht!]
\centering
\includegraphics[trim=0cm 2cm 0cm 1cm,scale=0.3]{Fig7}
\caption{Function $f(x,R,\alpha)$, see Eq. (53), plotted in terms of the emission radius $R$ and the integration variable $x$. The blue and yellow surfaces correspond to $f(x,R,0)$ and $f(x,R,\alpha_{\rm max})$, respectively.}
\label{fig:Fig7}
\end{figure}
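The light bending integral in the form of Eq. (\ref{eq:libe2}) lends itself to direct numerical quadrature. The sketch below (our own illustrative implementation, geometric units) evaluates $\psi(\alpha)$, inverts it for $\alpha_{\rm em}$ via a bracketing root finder, and also evaluates the photon impact parameter of Eq. (\ref{eq:bpho}); the bracket for the inversion stays slightly below the capture angle so that $f(x,R,\alpha)>0$ along the whole ray.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def bending_angle(alpha, R, M=1.0):
    """psi(alpha): numerical quadrature of the light bending
    integral in the form of Eq. (eq:libe2), with x = R/r."""
    def integrand(x):
        f = (1.0 - 2.0 * M / R
             - x**2 * np.sin(alpha)**2 * (1.0 - 2.0 * M * x / R))
        return np.sin(alpha) / np.sqrt(f)
    psi, _ = quad(integrand, 0.0, 1.0)
    return psi

def emission_angle(psi, R, M=1.0):
    """Invert psi(alpha) numerically; the upper bracket stays
    slightly below alpha_max (R < 3M) or pi/2 (R >= 3M)."""
    s = 1.5 * np.sqrt(3.0 * (1.0 - 2.0 * M / R)) * (2.0 * M / R)
    a_hi = np.pi / 2 if R >= 3.0 * M else np.arcsin(min(s, 1.0))
    return brentq(lambda a: bending_angle(a, R, M) - psi,
                  1e-8, 0.999 * a_hi)

def impact_parameter(alpha, R, M=1.0):
    """Photon impact parameter b_ph, Eq. (eq:bpho)."""
    return R * np.sin(alpha) / np.sqrt(1.0 - 2.0 * M / R)
```

In the flat-space limit $M\to0$ the integral evaluates to $\psi=\alpha$ exactly, a useful sanity check; for $\psi$ beyond $\psi_{\rm p}$ one would first apply the symmetrization described above.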
We have all the elements for calculating the photon impact parameter $b_{\rm ph}$ and the solid angle formula \cite{Defalco2016}
\begin{equation} \label{EFSA}
d\Omega=\frac{\frac{\cos i}{R^2\ \sin^2\psi}\frac{b_{\rm ph}^2}{\cos\alpha_{\rm em}}} {\int_R^\infty \frac{dr}{r^2}\left[1-\frac{b_{\rm ph}^2}{r^2}\left(1-\frac{2M}{r} \right) \right]^{-\frac{3}{2}}}\ dR\ d\varphi.
\end{equation}
For producing the emission profiles, we only need to calculate the gravitational redshift $(1+z)^{-1}$. The velocity of the test particle on the critical hypersurface with respect to the coordinate time $t$ is \cite{Bini2009,Bini2011}
\begin{equation}
U^\alpha\equiv\frac{dx^\alpha}{dt}=\left(1,0,0,\frac{R-2M}{R^3}\lambda\right).
\end{equation}
The photon velocity is \cite{Misner1973}
\begin{eqnarray}
k_t&=&-E,\\
k_r&=&E\sqrt{1-\frac{b_{\rm ph}^2}{R^2}\left(1-\frac{2M}{R}\right)}\left(1-\frac{2M}{R}\right)^{-1},\\
k_\varphi&=&Eb_{\rm ph},
\end{eqnarray}
and the observer velocity is $V_0^\alpha=(1,0,0,0)$. Therefore, the gravitational redshift is \cite{Misner1973}
\begin{equation} \label{eq:redshift}
(1+z)^{-1}\equiv\frac{V_0^\alpha k_\alpha}{U^\alpha k_\alpha}=\left(1-\frac{R-2M}{R^3}\lambda b_{\rm ph}\right)^{-1}.
\end{equation}
The flux emitted by the critical hypersurface for an observed frequency $\nu_{\rm em}$ can be calculated through \cite{Defalco2016}
\begin{equation} \label{eq:flux}
F_{\nu_{em}}=\int_{\Omega} \frac{\epsilon_0 \xi^{q}}{4\pi}\,(1+z)^{-4}\ d\Omega,
\end{equation}
where $\epsilon_0$ is the surface emissivity varying as a power law of $\xi=r/M$ with index $q$.
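To make these ingredients concrete, the following minimal sketch (our own illustrative code, geometric units, with the emissivity normalization $\epsilon_0$ and index $q$ as free parameters) evaluates the redshift factor of Eq. (\ref{eq:redshift}) and the corresponding integrand of Eq. (\ref{eq:flux}); in the actual quadrature this integrand is multiplied by the solid angle element $d\Omega$ of Eq. (\ref{EFSA}).

```python
import numpy as np

def redshift_factor(R, lam, b_ph, M=1.0):
    """(1+z)^{-1} on the critical hypersurface, Eq. (eq:redshift)."""
    return 1.0 / (1.0 - (R - 2.0 * M) / R**3 * lam * b_ph)

def flux_integrand(R, lam, b_ph, eps0=1.0, q=-3.0, M=1.0):
    """Integrand of Eq. (eq:flux): local emissivity eps0 * xi^q / (4 pi)
    times (1+z)^{-4}; the solid-angle element dOmega is left out here."""
    xi = R / M
    g = redshift_factor(R, lam, b_ph, M)  # g = (1+z)^{-1}
    return eps0 * xi**q / (4.0 * np.pi) * g**4
```

For $\lambda=0$ the redshift factor reduces to unity, consistent with the flux peaked at 1 discussed next.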
For $\lambda=0$, we know that the test particle does not move; therefore, for any emission radius $R$ and observer inclination angle $i$, we have a profile peaked at 1, as we expect and as can be seen in Fig. \ref{fig:Fig8}.
\begin{figure}[ht!]
\centering
\includegraphics[trim=1.3cm 2cm 0cm 2.5cm,scale=0.35]{Fig8}
\caption{Normalized flux of the critical hypersurface for $\lambda=0,i=30^\circ,R=2.5M$, and surface emissivity index $q =-3$.}
\label{fig:Fig8}
\end{figure}
In Fig. \ref{fig:Fig9} we display different templates performed for different $\lambda$ values (i.e., $\lambda=1,5$), observer inclination angles $i$ (i.e., $i=30^\circ,60^\circ,80^\circ$), and for emission radii ranging from very close to the event horizon ($R=2.2M$) to near the photon sphere ($R=2.8M$).
\begin{figure*}[p!]
\centering
\vbox{
\includegraphics[trim=1.3cm 2cm 0cm 0cm,scale=0.6]{Fig9_1}
\includegraphics[trim=1.3cm 2cm 0cm 1cm,scale=0.6]{Fig9_2}}
\caption{Critical hypersurface's normalized fluxes for $R=2.2M$, $\lambda=1,5$, $i=30^\circ,60^\circ,80^\circ$, and $q =-3$.}
\label{fig:Fig9}
\end{figure*}
These profiles are important for obtaining information on the critical hypersurface's features in the BH case. They behave as broad iron line profiles shaped by the PR effect, where a fundamental parameter is the PR-radiation photon impact parameter $\lambda$. Indeed, for $\lambda/M\le1$ the fluxes peak very close to 1, while for $\lambda/M\ge1$ the fluxes depart from it and become broader. Since the gravitational redshift (\ref{eq:redshift}) depends only on the transverse velocity of the matter on the critical hypersurface, the higher the velocity, the broader the profile. The general relativistic effects are enhanced by increasing the observer inclination angle. For astrophysical purposes, it is useful to calculate for each emission profile (determined by $\lambda$ and $R=r_{\rm crit}$) the related luminosity parameter $A(r_{\rm crit},\lambda)/M$ through Eq. (\ref{eq:CH3}), see Table \ref{tab:Table1}. This information permits one to compile a list of luminosities emitted by astrophysical sources, which is very helpful both for obtaining a set of input parameters for our model (i.e., $r_{\rm crit}$ and $\lambda$) and, vice versa, for selecting the systems in which to look for WHs.
\begin{table}[h]
\caption{Different values of $A(r_{\rm crit},\lambda)/M$.}
\centering
\begin{spacing}{1.2}
\begin{tabular}{c||cccccccc}
\hline
\hline
& & & & & & & & \\
$r_{\rm crit}\setminus\lambda$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7\\
& & & & & & & & \\
\hline
$2.1M$ & 0.22 & 0.21 & 0.20 & 0.19 & 0.16 & 0.13 & 0.10 & 0.07 \\
$2.2M$ & 0.30 & 0.29 & 0.26 & 0.22 & 0.17 & 0.11 & 0.05 & 0.01 \\
$2.3M$ & 0.36 & 0.35 & 0.30 & 0.23 & 0.15 & 0.07 & 0.01 & -- \\
$2.4M$ & 0.41 & 0.39 & 0.32 & 0.23 & 0.13 & 0.04 & -- & -- \\
$2.5M$ & 0.45 & 0.42 & 0.34 & 0.23 & 0.11 & 0.02 & -- & -- \\
$2.6M$ & 0.48 & 0.45 & 0.35 & 0.23 & 0.10 & 0.01 & -- & -- \\
$2.7M$ & 0.51 & 0.47 & 0.36 & 0.22 & 0.09 & 0.01 & -- & -- \\
$2.8M$ & 0.54 & 0.49 & 0.37 & 0.22 & 0.08 & 0.00 & -- & -- \\
$2.9M$ & 0.56 & 0.51 & 0.38 & 0.21 & 0.07 & 0.00 & -- & -- \\
\hline
\hline
\end{tabular}
\end{spacing}
\label{tab:Table1}
\end{table}
\subsubsection{Emission from the disk}
\label{sec:EAC}
We consider the emission from the disk modeled by the very broad iron line (Fe K$_{\rm\alpha}$) observed around 6.4 keV in a number of astrophysical systems (see Refs. \cite{Miller2007,Cackett2010}, and references therein), well known in the high-energy literature \cite{Frank2002}. Since the accretion disk extends over $r\ge R_{\rm ISCO}$, it is sufficient to employ the approximate polynomial formulas by De Falco and collaborators \cite{Defalco2016}\footnote{There are other, more accurate formulas proposed in the literature, which can also be employed; they are extremely useful when one is closer to the photon sphere location \cite{Semerak2015,Laplaca2019,Poutanen2019}. In our case, the accuracy of our approximation is sufficient \cite{Defalco2016}.}.
We apply the same ray-tracing scheme discussed in the previous section, but we replace the light bending equation (\ref{eq:libe}) with \cite{Beloborodov2002,Defalco2016}
\begin{equation}
\alpha_{\rm em}=\arccos\left[1-(1-\cos\psi)\left(1-u\right)\right],
\end{equation}
where $u=2M/r$. This formula permits $\alpha_{\rm em}$ to be determined easily, without resorting to any numerical method. We also replace the solid angle formula (\ref{EFSA}) with \cite{Defalco2016}
\begin{equation} \label{AFSA}
\begin{aligned}
d\Omega&\approx \frac{\cos i}{\sin^2\psi\ (1-u)}\ R \left[ 2z+\left(1-2C\right)z^2\right. \\
&\left. +\left(1-C+2C^2-2D\right)z^3\right]\ dR\ d\varphi,
\end{aligned}
\end{equation}
where
\begin{equation}
C=\frac{4-3u}{4(1-u)},\qquad D=\frac{39u^2-91u+56}{56(1-u)^2}.
\end{equation}
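A quick numerical sketch of these approximate formulas (our own illustrative code; $u=2M/r$): in the flat-space limit $u\to0$ the Beloborodov-type relation reduces to $\alpha_{\rm em}=\psi$, and the coefficients $C$ and $D$ both reduce to 1.

```python
import numpy as np

def alpha_em_approx(psi, u):
    """Approximate emission angle (Beloborodov-type formula),
    with u = 2M/r and psi the light bending angle."""
    return np.arccos(1.0 - (1.0 - np.cos(psi)) * (1.0 - u))

def solid_angle_coeffs(u):
    """Coefficients C and D entering the approximate solid angle,
    Eq. (AFSA)."""
    C = (4.0 - 3.0 * u) / (4.0 * (1.0 - u))
    D = (39.0 * u**2 - 91.0 * u + 56.0) / (56.0 * (1.0 - u)**2)
    return C, D
```

For $u>0$ one finds $\alpha_{\rm em}<\psi$, reflecting the bending of rays toward the radial direction by gravity.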
The gravitational redshift (\ref{eq:redshift}) is also replaced with
\begin{equation} \label{eq:redshift2}
\begin{aligned}
(1+z)^{-1}=\frac{\left(1-\frac{2M}{r}-\omega_k^{2}r^{2}\right)^{1/2}}{\left(1+b_{\rm ph}\omega_k
\frac{\sin i\,\sin\varphi}{\sin\psi} \right)},
\end{aligned}
\end{equation}
where we consider matter orbiting around the BH with the Keplerian angular velocity $\omega_k=\sqrt{M/r^3}$. The observed flux is still calculated using Eq. (\ref{eq:flux}).
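A corresponding sketch for the disk redshift factor of Eq. (\ref{eq:redshift2}) (again our own illustrative code; angles in radians, geometric units):

```python
import numpy as np

def disk_redshift_factor(r, b_ph, i, phi, psi, M=1.0):
    """(1+z)^{-1} for a disk element on a Keplerian orbit,
    Eq. (eq:redshift2), with omega_k = sqrt(M/r^3)."""
    omega_k = np.sqrt(M / r**3)
    num = np.sqrt(1.0 - 2.0 * M / r - omega_k**2 * r**2)
    den = 1.0 + b_ph * omega_k * np.sin(i) * np.sin(phi) / np.sin(psi)
    return num / den
```

For a face-on observer ($i=0$) the Doppler term in the denominator vanishes and the factor reduces to $\sqrt{1-3M/r}$, i.e., the combined gravitational and transverse-Doppler redshift of a Keplerian orbit.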
\begin{figure}[ht!]
\centering
\includegraphics[trim=1.3cm 2cm 0cm 0cm,scale=0.4]{Fig5}
\caption{Normalized broad iron line profiles for isotropic radiation emission from a disk extending from $r_{\rm in} =6M$ to $r_{\rm out} =100M$, assuming a surface emissivity index $q =-3$ and observer inclination angles $i=30^\circ,60^\circ,80^\circ$.}
\label{fig:Fig5}
\end{figure}
In Fig. \ref{fig:Fig5}, we show some disk emission plots, exhibiting the characteristic skewed, highly broadened, double-horned line profiles \cite{Fabian1989,Bao1994,Beckwith2004}. The general relativistic effects, together with the transverse Doppler shift and gravitational redshift, strongly shape the iron K$_{\alpha}$ line, thus allowing one to probe the accretion disk dynamics \cite{McClintock2011}. The general relativistic effects are enhanced by increasing the observer inclination angle $i$. The highest peak corresponds to the blue-shifted emission from material on the approaching side, while the other peak is related to the red-shifted emission from matter on the receding side. The broadest part of the line is produced by the fastest motion of matter in the inner regions \cite{Fabian1989}.
\subsubsection{Total emission}
\label{sec:ET}
Combining the results obtained for the critical hypersurface (Figs. \ref{fig:Fig8} and \ref{fig:Fig9}) with those of the disk (Fig. \ref{fig:Fig5}), we can produce the total emission from astrophysical systems hosting BHs, which is what we actually observe.
For $\lambda=0$, we notice in Fig. \ref{fig:Fig10} a small intermediate peak (related to the critical hypersurface) between the two peaks of the accretion disk. The characteristic shape of this system is a \emph{three-horned profile}, where one peak gives important information on the critical hypersurface, and therefore on the metric in which the matter moves. In this particular case, the flux from the critical hypersurface adds to that from the accretion disk, explaining the existence of the intermediate peak.
\begin{figure}[ht!]
\centering
\includegraphics[trim=1.3cm 2cm 0cm 3.5cm,scale=0.35]{Fig10}
\caption{Normalized flux of the total system, using the data of Fig. \ref{fig:Fig8} for the critical hypersurface and Fig. \ref{fig:Fig5} for the disk.}
\label{fig:Fig10}
\end{figure}
In Fig. \ref{fig:Fig11} we display a great variety of total emission profiles, combining the profiles from Figs. \ref{fig:Fig5} and \ref{fig:Fig9}. We find disparate behaviors, all showing, in a more or less pronounced way, the distinctive feature of the three-horned line.
\begin{figure*}[p!]
\centering
\vbox{
\includegraphics[trim=1.3cm 2cm 0cm 0cm,scale=0.6]{Fig11_1}
\includegraphics[trim=1.3cm 2cm 0cm 1cm,scale=0.6]{Fig11_2}}
\caption{Normalized flux of the total system, using the data of Fig. \ref{fig:Fig9} for the critical hypersurface and Fig. \ref{fig:Fig5} for the disk.}
\label{fig:Fig11}
\end{figure*}
Remembering that $\lambda$ is connected with the velocity of the matter moving on the critical hypersurface, we find the following: for high values of $r_{\rm crit}$ and $\lambda$, the peak related to the PR effect is shifted to high energies and is more enhanced than those from the accretion disk, while for small values of both parameters the flux from the critical hypersurface adds to that of the disk. In the latter cases, we have disk-like emission profiles with the presence of a distinctive, higher blue-shifted peak.
This new procedure can be strongly supported both by present observational X-ray data, like XMM-Newton \cite{Beckwith2004,Tomsick2014} and EHT \cite{Chael2016}, and by future missions, like the Advanced Telescope for High-ENergy Astrophysics (ATHENA) \cite{Barcons2017}, the enhanced X-ray Timing and Polarimetry mission (eXTP) \cite{Zhang2016}, and the Imaging X-ray Polarimetry Explorer (IXPE) \cite{Soffitta2013}. In addition, this technique could be combined with other methods of observation, like GW detections or imaging of matter close to a BH, to acquire more, and more precise, information.
\section{Conclusions}
\label{sec:end}
In this work, we have derived the equations of motion (\ref{EoM1}) -- (\ref{time}) of a test particle in a static and spherically symmetric WH spacetime (Morris-Thorne like, see Sec. \ref{sec:MTmetric}) under the influence of the general relativistic PR effect, see Sec. \ref{sec:GRPReffect}. We consider a particular BH-mimicking class of WHs, where the exotic matter is placed in a small region close to the throat, with ordinary matter extending up to $R_{\rm S}=2M$. A small transition surface layer, located within $R_{\rm p}=3M$, permits to smoothly connect the inner solution to the Schwarzschild metric, see Fig. \ref{fig:Fig2}. A particular WH dynamics is determined whenever the redshift $\Phi(r)$ and shape $b(r)$ functions are made explicit. This dynamical system can admit the existence of a critical hypersurface, a region where there is an equilibrium between the radiation and gravitational forces, see Sec. \ref{sec:CH}. We have recalled some useful properties of such stable configurations in the Schwarzschild metric (see Fig. \ref{fig:Fig4}), and then we have investigated some general aspects for $\Phi=\Phi_0\equiv{\rm const}$ (the value usually assumed by the redshift function in the transition surface layer \cite{Morris1988}). We have found that there are no critical hypersurfaces for $\lambda/M\ll1$, while for $\lambda/M\gg1$ their existence strongly depends on the functional form of the shape function $b(r)$.
We have developed a diagnostic to distinguish a BH from the class of WHs outlined above. As astrophysical setup, we have considered an accretion disk around a BH/WH (in only one universe) extending from the ISCO radius to $r_{\rm out}=100M$, see Sec. \ref{sec:geoastro}. The ISCO radius can be considered as the radiation source which alters the geodesic motion of the test particle through the general relativistic PR effect. We deem that the presence of an accretion disk with high magnetic fields, $B\sim10^4-10^7$ G, might be the cause of squeezed vacuum electromagnetic states, a phenomenon which generates negative energy and makes the WH traversable and stable, see Sec. \ref{sec:exomat}. This might be a very useful discriminant to reduce the search for WHs among several astrophysical systems.
Since the critical hypersurface can be located very close to the event horizon of a BH (or in the transition surface layer of a WH), this configuration can be employed to probe the properties of the geometrical background and distinguish the two structures. An observable suited to this purpose is the observed emission profile, obtained by employing the ray-tracing technique from the emission location toward the observer at infinity, see Sec. \ref{sec:raytrace}. We have first analysed the emission properties of the critical hypersurfaces in the Schwarzschild BH metric (see Figs. \ref{fig:Fig8} and \ref{fig:Fig9}), which strongly depend on the value of $\lambda$, the critical hypersurface radius $r_{\rm crit}$, and the observer inclination angle $i$, all of which contribute to enhance the general relativistic effects, see Sec. \ref{sec:ECH}. In this analysis, we have also provided Table \ref{tab:Table1}, which lists some possible luminosities of BH systems, both to relate them to the input parameters of our model (i.e., $r_{\rm crit}$ and $\lambda$) and to narrow down the astrophysical systems in which to look for WHs. Then, we have reported the emission from the accretion disk (see Fig. \ref{fig:Fig5}), exhibiting the characteristic skewed, double-horned iron line profile, see Sec. \ref{sec:EAC}. Finally, combining the two emissions to calculate the \emph{total} flux, we obtain a characteristic three-horned profile (see Figs. \ref{fig:Fig10} and \ref{fig:Fig11}), in which an easily identifiable peak provides important information not only on the critical hypersurface, but also on the metric in which the matter moves. Indeed, if the observational data can be well fitted by this model, we can conclude that there is a BH; if instead such a description fails to interpret the observational data, this points to a change of metric, and the presence of a WH becomes a realistic possibility, see Sec. \ref{sec:ET}.
In addition, such a method can be advantageously supported by recent and near-future observational data, see Sec. \ref{sec:ET}.
This work has not only a practical implication in distinguishing a BH from a WH, but also other interesting applications. First of all, since the critical hypersurface lies very close to the event horizon, it can be extremely useful to infer fundamental properties of the gravitational field, and also of how gravity couples with photons in strong-field regimes. Another original idea consists in employing generic spherically symmetric BH metrics, which are built within a model-independent framework and neither reflect nor require a specific theory of gravity \cite{Rezzolla2014}. They can be used to approximate arbitrary BH spacetimes through a small set of coefficients, which can be recovered from astronomical observations. In this way, we can measure in an agnostic manner possible deviations from GR and hence determine whether alternative theories of gravity are needed. In addition, this approach can be a tool to determine the WH metric in the transition surface layer, if one observes metric changes there.
Finally, we stress that the method devised in this paper is based upon a \emph{toy model}, whose elementary features can be improved in several directions. As future projects, we aim at extending this description both on the diagnostic and on the modeling side: considering a rotating, axially symmetric WH spacetime in GR, eventually setting the description in 3D space, and extending the treatment to modified theories of gravity.
\section*{Acknowledgements}
V.D.F. thanks Osservatorio Astronomico Monte Porzio Catone for hospitality and support, and the Silesian University in Opava for support. V.D.F. and E.B. thank Gruppo Nazionale di Fisica Matematica of Istituto Nazionale di Alta Matematica for support. V.D.F. thanks Prof. Luigi Stella for the useful discussions. V.D.F. and E.B. thank Dr. Viacheslav Emelyanov for the useful suggestions and references on the squeezed vacuum states. S.C. and M.D.L. acknowledge the support of Istituto Italiano di Fisica Nucleare (INFN) {\it iniziative specifiche} MOONLIGHT2 and TEONGRAV.
\section{Introduction}
Multigraded Hilbert schemes parametrize
families of ideals in a polynomial ring that share
the same Hilbert function with respect to some
grading by an abelian group~\cite{HS}. We are interested in the following
particular case.
Let $X = (x_{ij})$ be a $d {\times} n$-matrix of unknowns.
We fix the polynomial ring $K[X]$ over a field $K$ with the
$\mathbb{Z}^n$-grading by column degree, i.e.\ ${\rm deg}(x_{ij}) = e_j$.
In this grading, the Hilbert function of the
polynomial ring $K[X]$ equals
\begin{equation*}
\mathbb{N}^n \rightarrow \mathbb{N} \,,\,\,
(u_1,\ldots,u_n)\, \mapsto\, \prod_{i=1}^n \binom{u_i + d-1}{d-1} .
\end{equation*}
We study the multigraded Hilbert scheme $H_{d,n}$,
which parametrizes $\mathbb{Z}^n$-homogeneous ideals $I$ in $K[X]$
such that $K[X]/I$ has the Hilbert function
\begin{equation}
\label{eqn:ourHF}
\mathbb{N}^n \rightarrow \mathbb{N} \,,\,\,
(u_1,\ldots,u_n)\, \mapsto\, \binom{u_1 {+} u_2 + \cdots + u_n+d-1}{d-1}.
\end{equation}
The key example is the ideal $I_2(X)$
that is generated by the $2 {\times} 2$-minors of $X$,
and whose quotient is indeed $\mathbb{Z}^n$-graded
with Hilbert function~(\ref{eqn:ourHF}).
The Hilbert scheme $H_{d,n}$ has the following geometric interpretation.
Each $\mathbb{Z}^n$-homogeneous ideal in $K[X]$ specifies a subscheme
of the product of projective spaces $(\mathbb{P}^{d-1})^n =
\mathbb{P}^{d-1} \! \times \cdots \times \mathbb{P}^{d-1}$. The subscheme
specified by the ideal $I_2(X)$ is the diagonal embedding of
$\mathbb{P}^{d-1}$ in $(\mathbb{P}^{d-1})^n$. Our Hilbert scheme
$H_{d,n}$ is a natural parameter space for
degenerations of this diagonal in $(\mathbb{P}^{d-1})^n$.
The results obtained in this paper are as follows.
In Section 2 we prove that all ideals $I$ in $H_{d,n}$
are radical and Cohen-Macaulay. This result is derived
by identifying a distinguished Borel-fixed ideal $Z$
with these properties. It confirms a conjecture
on multilinear Gr\"obner bases made by Conca in \cite{Conca}.
In Section 3 we show that $I_2(X)$ and one of its initial monomial ideals
are smooth points on $H_{d,n}$. The
irreducible component containing these points is an
equivariant compactification of the homogeneous space $G^n/G$ where
$G = {\rm PGL}(d)$, and $G \subset G^n$ is the diagonal embedding. For $n = 2$
we recover Thaddeus' construction in \cite{Th} of the space of complete
collineations.
The relationship of our compactification of $G^n/G$
to those constructed by
Lafforgue in \cite{Laf} will be discussed in
Remark \ref{remark:Lafforgue} and Example \ref{example:twothree}.
Section 4 is concerned with the case $d=2$, and we regard its results to be the
main contribution of this paper.
We show that $H_{2,n}$ is irreducible, but singular
for $n \geq 4$, and we determine its combinatorial structure.
Each point in $H_{2,n}$ corresponds to a certain tree of projective lines.
Among these are precisely $2^n (n+1)^{n-2} $ monomial ideals, one for each
tree on $n+1$ unlabeled vertices and $n$ labeled directed edges,
and these form a graph.
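The count $2^n (n+1)^{n-2}$ can be checked for small $n$ by brute-force enumeration. The following script (our illustration, not part of the original argument) enumerates trees on the vertex set $\{0,\dots,n\}$ with $n$ labeled, directed edges and merges those that differ only by a relabeling of the vertices:

```python
from itertools import combinations, permutations, product

def is_tree(edges, nverts):
    """Union-find test: do the given edges form a spanning tree?"""
    parent = list(range(nverts))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra == rb:
            return False
        parent[ra] = rb
    return True

def canonical(directed_edges, nverts):
    """Lexicographically smallest vertex relabeling of the edge list."""
    return min(tuple((p[a], p[b]) for a, b in directed_edges)
               for p in permutations(range(nverts)))

def count_trees(n):
    """Trees with n+1 unlabeled vertices and n labeled directed edges."""
    pairs = list(combinations(range(n + 1), 2))
    seen = set()
    for shape in combinations(pairs, n):
        if not is_tree(shape, n + 1):
            continue
        for labeled in permutations(shape):         # position = edge label
            for dirs in product((0, 1), repeat=n):  # orientation per edge
                seen.add(canonical(
                    tuple((b, a) if d else (a, b)
                          for (a, b), d in zip(labeled, dirs)), n + 1))
    return len(seen)

for n in (2, 3):
    assert count_trees(n) == 2**n * (n + 1)**(n - 2), n
```

Since an automorphism of such a tree must preserve every labeled directed edge, it fixes all vertices; hence the count is $(n+1)^{n-1} \cdot n! \cdot 2^n / (n+1)!$, in agreement with the formula.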
In Section 5 we study the case $d=n=3$. These
are the smallest parameters
for which the
multigraded Hilbert scheme $H_{d,n}$ is reducible.
We show that $H_{3,3}$
is the reduced union of seven irreducible components,
with the main component of dimension $16$
parametrizing degenerations of the diagonal
in $\mathbb{P}^2 {\times} \mathbb{P}^2 {\times} \mathbb{P}^2$.
We list all monomial ideals on
$H_{3,3}$ and their incidence relations.
Section 6 outlines a connection to convexity in
affine buildings and tropical geometry. Extending
previous results in \cite{BY, KT}, we show how
Gr\"obner degenerations on $H_{d,n}$ can be used
to compute special fibers of Deligne schemes.
\section{On a conjecture of Conca}
Our plan is to derive
Conca's Conjecture~4.2 in~\cite{Conca}
from the following result.
\begin{thm}
\label{thm:radical}
All ideals $I$ corresponding to points in $H_{d,n}$ are radical ideals.
\end{thm}
\begin{proof}
We may assume that $K$ is an infinite field.
Let $G = {\rm PGL}(d,K)$, the group of invertible
$d \times d$-matrices modulo scaling, let
$B$ be the Borel subgroup of images of upper
triangular matrices in $G$, and let $T$ be the algebraic torus
of images of diagonal matrices in $G$. Then $T^n$ is
a maximal torus in $G^n$, and $B^n $ is a Borel subgroup in $G^n$.
We consider the action of these groups
on the Hilbert scheme $H_{d,n}$. The $T^n$-fixed points
of $H_{d,n}$ are the monomial ideals that have
the same $\mathbb{Z}^n$-graded Hilbert function as $I_2(X)$.
It suffices to assume that $I$ is such a monomial ideal
because every other ideal $J \in H_{d,n}$ can be degenerated to a
monomial ideal $I = {\rm in}(J)$ via Gr\"obner bases,
and if ${\rm in}(J)$ is radical then so is $J$.
We can further assume that $I$ is {\em Borel-fixed},
which means that $I$ is fixed under the action of $B^n$.
Indeed, if $A_1, A_2, \ldots , A_n$
are generic matrices of $G$ then we replace the
ideal $I$ first by its image $I' = (A_1,A_2, \ldots,A_n) \circ I$,
and then by the initial monomial ideal ${\rm in}(I')$.
The ideal ${\rm in}(I') = {\rm gin}(I)$ is
the {\em multigraded generic initial ideal}.
The same approach as in \cite[\S 15.9.2]{Eis} shows that
${\rm gin}(I)$ is Borel-fixed.
Moreover, if ${\rm gin}(I)$ is radical then so is $I$. Hence,
it suffices to show that every
Borel-fixed ideal $I$ in $H_{d,n} $ is a radical ideal.
Our result will be a direct consequence of the following two claims:
\noindent Claim 1:
{\em There is precisely one Borel-fixed ideal $Z$ in $H_{d,n}$.}
\noindent Claim 2: {\em The unique Borel-fixed ideal $Z$ is radical.}
\smallskip
We first describe the ideal $Z$ and then prove that it has these
properties.
Let $u$ be any vector in the set
\begin{equation}
\label{eqn:ineqal}
U = \left\{(u_1, \ldots, u_n) \in \mathbb{Z}^n :
0 \leq u_i \leq d-1 \hbox{ and }
\textstyle\sum_i u_i = (n-1)(d-1)\right\}.
\end{equation}
We write $Z_u$ for the ideal generated by all unknowns
$x_{ij}$ with $i \leq u_j$ and $1 \leq j \leq n$.
This is a Borel-fixed prime monomial ideal.
Consider the intersection of the prime ideals $Z_u$:
\begin{equation*}
Z \,\,:= \,\,\bigcap_{u \in U} Z_u.
\end{equation*}
The monomial ideal $Z$ is radical and Borel-fixed.
Each of its ${\binom{d+n-2}{d-1}}$ associated
prime ideals $Z_u$ has the same codimension $(n-1)(d-1)$.
We now apply Conca's results in \cite[Section 5]{Conca}.
He showed that $Z$ has the same Hilbert function as $I_2(X)$.
Therefore, the ideal $Z$ is the promised
Borel-fixed ideal in $H_{d,n}$.
More precisely, \cite[Theorem 5.1]{Conca}
states that $Z$ is precisely the
generic initial ideal ${\rm gin}(I_2(X))$ of
the ideal of $2 \times 2$-minors.
We claim that $Z$ is the only
Borel-fixed monomial ideal in $H_{d,n}$.
To show this, we apply results about
the {\em multidegree} in \cite[\S 8.5]{MS}.
The multidegree of the prime ideal $Z_u$ is the monomial
$\, {\bf t}^u = t_1^{u_1} t_2^{u_2} \cdots t_n^{u_n} $.
By \cite[Theorem 8.44]{MS},
$Z_u$ is the only unmixed Borel-fixed monomial ideal
having multidegree ${\bf t}^u$.
By \cite[Theorem 8.53]{MS}, the Borel-fixed
ideal $Z = \cap_u Z_u$ has the multidegree
\begin{equation}
\label{eqn:multidegofZ}
\mathcal{C}\bigl(k[X]/Z;{\bf t}\bigr)
\quad = \quad
\sum_{u \in U} {\bf t}^{u}
\quad = \quad
\sum_{u \in U} t_1^{u_1} t_2^{u_2} \cdots t_n^{u_n} .
\end{equation}
Since the multidegree of a homogeneous ideal is determined by
its Hilbert series \cite[Claim 8.54]{MS}, we conclude that
every ideal $I \in H_{d,n}$ has multidegree~(\ref{eqn:multidegofZ}).
Now, suppose that $I \in H_{d,n}$ is Borel-fixed.
Since $I$ is monomial, each minimal primary component contributes at most one
term ${\bf t}^{u}$ to the multidegree~(\ref{eqn:multidegofZ}). Thus,
by \cite[Theorem 8.53]{MS},
the minimal primes of $I$ are precisely the prime ideals
$Z_u$ where $u$ runs over the elements of $U$.
This implies $\,I \subseteq \sqrt{I} = Z $.
However, since $I$ and $Z$ have
the same Hilbert function in a positive grading, we conclude
that $I = Z$, as desired.
\end{proof}
\begin{remark}
Our proof of Theorem \ref{thm:radical} was based
on an idea that was suggested to us
by Michel Brion. In~\cite{BrionMult}, Brion proves that for any
multiplicity-free subvariety of a flag variety, such as the
diagonal in a product of projective spaces, there exists a flat
degeneration to
a reduced union of Schubert varieties, which is our~$Z$. Although
Theorem \ref{thm:radical}
only applies to the special case of the diagonal in a product of
projective spaces, it establishes reducedness not just for
{\em some}
degeneration but for
{\em any} ideal with the same multigraded Hilbert function.
Our proof
combined the nice argument from~\cite{BrionMult} with the
explicit description of the Borel-fixed monomial ideal given by Conca in~\cite{Conca}. \qed
\end{remark}
We now come to the question asked by Conca in
\cite[Conjecture 1.1]{Conca}. Given any $d \times d$-matrices
$A_1, A_2, \ldots , A_n$ with entries in $K$, we apply them
individually to the $n$ columns of the matrix
$X = (x_{ij})$, form the ideal of $2 \times 2$-minors
of the resulting $d\times n$-matrix,
and then take the initial monomial ideal
\begin{equation}
\label{eqn:concaideal}
\,{\rm in}((A_1, \ldots, A_n) \circ I_2(X))
\end{equation}
with respect to some term order. Conca conjectures that
(\ref{eqn:concaideal}) is always a squarefree monomial ideal.
He proves this for generic $A_i$ by showing that
(\ref{eqn:concaideal}) equals the Borel-fixed ideal $Z$
constructed above. Theorem \ref{thm:radical} implies
the same conclusion under the much weaker hypothesis that the
$A_i$ are invertible.
\begin{cor} \label{cor:itssquarefree}
For any invertible $d \times d$-matrices $A_1, \ldots, A_n$
and any term order on $K[X]$, the monomial ideal
$\,{\rm in}((A_1, \ldots, A_n) \circ I_2(X))$ is squarefree.
\end{cor}
\begin{proof}
Applying invertible matrices $A_i$ to $I_2(X)$
corresponds to taking the orbit of $I_2(X)$
under the action of $G^n $ on $H_{d,n}$.
Therefore, (\ref{eqn:concaideal}) is a monomial ideal
that lies in $H_{d,n}$. By Theorem \ref{thm:radical},
it is radical and hence squarefree.
\end{proof}
Corollary~\ref{cor:itssquarefree} implies
\cite[Conjecture 4.2]{Conca}.
At present, we do not know how to prove Conca's stronger conjecture
to the effect that Corollary
\ref{cor:itssquarefree} holds without the
hypothesis that the matrices $A_i$ are invertible~\cite[Conjecture~1.1]{Conca}.
One idea is
to extend our study to multigraded Hilbert schemes on $K[X]$
whose defining Hilbert function is bounded above by~(\ref{eqn:ourHF}).
\begin{cor} \label{cor:connected}
The multigraded Hilbert scheme
$H_{d,n}$ is connected.
\end{cor}
\begin{proof}
All ideals in $H_{d,n}$ can be connected
to their common generic initial ideal~$Z$
by Gr\"obner degenerations.
\end{proof}
In what follows we take a closer look at the combinatorics
of the ideal~$Z$.
\begin{prop}
The ideal $Z$ is generated by all monomials
$x_{i_1 j_1} x_{i_2 j_2} \cdots x_{i_k j_k}$
where $1 \leq k - 1 \leq i_1,i_2,\ldots,i_k \leq d - 1$,
$j_1 < j_2 < \cdots < j_k$,
and $i_1 + i_2 + \cdots + i_k \leq d(k - 1)$.
The maximum degree of a minimal generator is
${\rm min}(d,n)$.
\end{prop}
\begin{proof}
This is the description of
the ideal $Z$ given by Conca
\cite[\S 5]{Conca}.
\end{proof}
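For small parameters, both descriptions of $Z$ can be compared directly by machine. The sketch below (ours, for illustration) computes the minimal generators of $Z=\bigcap_u Z_u$ as the minimal sets of variables meeting every prime $Z_u$, and checks that they cut out the same squarefree ideal as the monomials listed in the proposition:

```python
from itertools import combinations, product
from math import comb

def z_min_generators(d, n):
    """Minimal generators of Z = cap_u Z_u: minimal sets of variables
    x_{ij}, encoded as pairs (i, j) with 1 <= i <= d-1, that meet
    every prime Z_u = <x_{ij} : i <= u_j> for u in U."""
    U = [u for u in product(range(d), repeat=n)
         if sum(u) == (n - 1) * (d - 1)]
    assert len(U) == comb(d + n - 2, d - 1)   # number of minimal primes
    vars_ = [(i, j) for i in range(1, d) for j in range(1, n + 1)]
    hits = lambda S: all(any(i <= u[j - 1] for i, j in S) for u in U)
    gens = []
    for k in range(1, len(vars_) + 1):        # by increasing degree
        for S in map(frozenset, combinations(vars_, k)):
            if hits(S) and not any(g < S for g in gens):
                gens.append(S)
    return set(gens)

def proposition_generators(d, n):
    """The generating monomials listed in the proposition."""
    gens = set()
    for k in range(2, min(d, n) + 1):
        for cols in combinations(range(1, n + 1), k):
            for rows in product(range(k - 1, d), repeat=k):
                if sum(rows) <= d * (k - 1):
                    gens.add(frozenset(zip(rows, cols)))
    return gens

for d, n in [(2, 2), (2, 4), (3, 3), (4, 3)]:
    mingens = z_min_generators(d, n)
    listed = proposition_generators(d, n)
    # both sets generate the same squarefree ideal ...
    assert all(any(g <= c for g in mingens) for c in listed)
    assert all(any(c <= g for c in listed) for g in mingens)
    # ... and the maximal degree of a minimal generator is min(d, n)
    assert max(len(g) for g in mingens) == min(d, n)
```

The comparison is made at the level of ideals (mutual divisibility of generators) rather than of generating sets, since the list in the proposition may contain redundant generators.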
All ideals $I$ in $H_{d,n}$ share the same Hilbert series
in the ordinary $\mathbb{Z}$-grading,
\begin{equation*}\sum_{r=0}^\infty {\rm dim}_K (K[X]/I)_r \cdot z^r
\,\,\, = \,\,\,
\frac{h(z)}{(1-z)^{n+d-1}} .\end{equation*}
The {\em $h$-polynomial} in the numerator
can be seen from the ideal of $2 {\times} 2$-minors:
\begin{equation*} h(z) \quad = \quad \sum_{i=0}^{{\rm min}(d-1,n-1)}
\binom{d-1}{i} \cdot \binom{n-1}{i} \cdot z^i . \end{equation*}
Note that $\,h(1) = \binom{n+d-2}{d-1} \,$ is the common
scalar degree of the ideals in $H_{d,n}$.
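The identity $h(1) = \binom{n+d-2}{d-1}$ is an instance of Vandermonde's identity; it can be spot-checked numerically (a small illustration of ours):

```python
from math import comb

def h_coeffs(d, n):
    """Coefficients of the h-polynomial of any ideal in H_{d,n}."""
    return [comb(d - 1, i) * comb(n - 1, i)
            for i in range(min(d - 1, n - 1) + 1)]

# h(1) equals the common scalar degree binom(n+d-2, d-1), by Vandermonde
for d in range(2, 8):
    for n in range(2, 8):
        assert sum(h_coeffs(d, n)) == comb(n + d - 2, d - 1)
```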
\begin{cor} \label{cor:CM}
Every ideal $I $ in $H_{d,n}$ is Cohen-Macaulay.
\end{cor}
\begin{proof}
Since $Z$ is the common generic initial ideal of
all ideals $I$, it suffices to show that
the Borel-fixed ideal $Z$ is Cohen-Macaulay.
Let $\Delta_Z$ denote the $(n+d-2)$-dimensional
simplicial complex corresponding to $Z$. The
vertices of $\Delta_Z$ are the
$dn$ matrix entries $x_{ij}$,
and its facets are the $\binom{n+d-2}{d-1}$
sets $\, F_u \, = \,\{ x_{ij} : i > u_j \}\,$
which are complementary to the prime ideals $Z_u$.
We order the facets $F_u$ according to the
lexicographic order on the vectors $u$.
We claim that this ordering of the facets is a
{\em shelling} of $\Delta_Z$. Since the Stanley-Reisner
ring of a shellable simplicial complex is
Cohen-Macaulay \cite[Theorem~III.2.5]{Stanley}, this will imply
Corollary \ref{cor:CM}. To verify the shelling property
we must show that every facet $F_u$ has a unique
subset $\eta_u$ such that the faces of $F_u$
not containing $\eta_u$ are exactly those appearing
as a face of an earlier facet.
If this condition holds then the $h$-polynomial
can be read off from the shelling as follows:
\begin{equation*}
h(z) \,\,\, = \,\,\,
\sum_{u \in U} z^{\# \eta_u}.
\end{equation*}
The unique subset of the facet $F_u$
with these desired properties equals
\begin{equation*}
\eta_u \,\, = \,\,
\{\, x_{ij} \,:\, j > 1 \,\,\hbox{and} \,\ i = u_j+1 < d \,\}. \end{equation*}
Indeed, suppose $G$ is a face common to $F_u$ and $F_{u'}$ for some $u' <
u$. Then $u_j' > u_j$ for some $j > 1$, so $G$ does not contain $x_{u_j+1,j}
\in \eta_u$. Conversely, suppose that $G$
is a face of $F_u$ which does not contain
$\eta_u$, and let $x_{u_j+1,j}$
be any element of $\eta_u \backslash G$.
Since $j > 1$,
\begin{equation*}
F_{u+e_j-e_1} \, = \, F_u \backslash \{x_{u_j+1,j}\}
\cup \{ x_{u_1,1} \} \end{equation*}
is a facet of $\Delta_Z$ which contains $G$
and which comes earlier in our ordering.
\end{proof}
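The bookkeeping in this proof is easy to check by machine for small parameters. The sketch below (ours, for illustration) reads $h(z) = \sum_{u} z^{\# \eta_u}$ off the shelling order, using $\#\eta_u = \#\{\,j \geq 2 : u_j \leq d-2\,\}$, and compares it with the closed formula for the $h$-polynomial:

```python
from itertools import product
from math import comb

def shelling_h(d, n):
    """h-polynomial coefficients read off the shelling:
    h(z) = sum_u z^{#eta_u}, with #eta_u = #{j >= 2 : u_j <= d-2}."""
    U = [u for u in product(range(d), repeat=n)
         if sum(u) == (n - 1) * (d - 1)]
    coeffs = [0] * n
    for u in U:
        e = sum(1 for j in range(1, n) if u[j] <= d - 2)
        coeffs[e] += 1
    return coeffs

for d in range(2, 6):
    for n in range(2, 6):
        expected = [comb(d - 1, i) * comb(n - 1, i)
                    for i in range(min(d - 1, n - 1) + 1)]
        got = shelling_h(d, n)
        assert got[:len(expected)] == expected
        assert all(c == 0 for c in got[len(expected):])
```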
\begin{remark}
The shellability of $\Delta_Z$ was mentioned
in \cite[Remark 5.12]{Conca} but no details
were given there. It would be interesting to
know whether the simplicial complex $\Delta_I$ of
every monomial ideal $I$ in $H_{d,n}$ is shellable.
\end{remark}
\section{Group completions}
In this section we relate our multigraded Hilbert scheme
to classical constructions in algebraic geometry.
For $n=2$ we recover the space of complete collineations
and its GIT construction due to Thaddeus in~\cite{Th}.
Brion~\cite{Brion} extended Thaddeus' work
to the diagonal $X \hookrightarrow X \times X$ of any
rational projective homogeneous variety
$X$.
While the present study is restricted to the case
$X = \mathbb{P}^{d-1}$, we believe that many of our results will extend
to $X \hookrightarrow X^n$ in Brion's setting.
\begin{prop}
\label{prop:grothen}
There is an injective morphism from the multigraded Hilbert scheme~$H_{d,n}$ to a
connected component of the Grothendieck Hilbert scheme of subschemes
of~$(\mathbb{P}^{d-1})^n$.
\end{prop}
\begin{proof}
By~\cite[\S 4]{HS}, there is a natural morphism from the multigraded Hilbert
scheme $H_{d,n}$ to the Grothendieck Hilbert scheme.
Theorem~\ref{thm:radical} shows that every point in $H_{d,n}$ corresponds to a
radical ideal~$I \subset K[X]$. Furthermore, the Hilbert function tells us that
the ideal $I + \langle x_{1j}, \ldots, x_{dj} \rangle$ has (affine) dimension $d+n-2$,
but by Corollary~\ref{cor:CM}, $I$ is pure of dimension $d+n-1$, so no
associated prime of $I$ contains $\langle x_{1j}, \ldots, x_{dj}\rangle$. Thus,
$I$ is uniquely determined by the subscheme it defines in $(\mathbb{P}^{d-1})^n$, so
the morphism is injective.
\end{proof}
\begin{remark}
We do not know whether the morphism in Proposition~\ref{prop:grothen} is an
immersion, nor whether it is always surjective.
\end{remark}
Recall that the $G^n$-action on $H_{d,n}$ transforms ideals as follows:
\begin{equation}
\label{eqn:action}
I \,\,\mapsto \,\, (A_1,A_2,\cdots,A_n) \circ I .
\end{equation}
If $A_1 = A_2 = \cdots = A_n$ then the ideal $I$ is left invariant,
so the stabilizer of any point $I \in H_{d,n}$ contains the
diagonal subgroup $\,G \,=\, \{(A,A,\ldots,A) \}\,$ of $G^n$.
Moreover, the stabilizer of the determinantal ideal $I_2(X)$
is precisely the diagonal subgroup $G$.
We write $\,\overline{G^n/G}\,$ for the closure
of $G^{n} \circ I_2(X)$ in the Hilbert scheme
$H_{d,n}$.
\begin{thm}
The subscheme $\overline{G^{n}/G}$ is an irreducible
component of $H_{d,n}$. It is a compactification of the homogeneous space
$G^{n}/G$, so it has dimension $ (d^2-1)(n-1)$.
\end{thm}
This theorem can be deduced from Proposition \ref{prop:grothen}
using standard algebraic geometry arguments concerning the
tangent sheaf of Grothendieck's Hilbert scheme. What we present below
is a more detailed combinatorial proof based on the identification
of an explicit smooth point in Lemma \ref{lem:diag}.
\begin{proof}
Clearly, the dimension of the tangent space at $I_2(X)$ is
at least $(d^2-1)(n-1)$, the dimension of $G^{n}/G$.
By semi-continuity, it is bounded above by the tangent
space dimension of $H_{d,n}$ at any initial monomial ideal
${\rm in}(I_2(X))$. In Lemma \ref{lem:diag} below, we identify a particular
initial ideal for which this dimension equals $(d^2-1)(n-1)$.
From this we conclude that the tangent space of $H_{d,n}$ at $I_2(X)$
has dimension $\,(d^2-1)(n-1)$. This is precisely the dimension of
the orbit closure $\overline{G^{n}/G}$ of $I_2(X)$. We also conclude
that $I_2(X)$ is a smooth point of $H_{d,n}$, and that the
unique irreducible component of
$H_{d,n}$ containing $I_2(X)$ is the compactified space $\overline{G^{n}/G}$.
\end{proof}
Let $M$ denote the ideal generated by the
quadratic monomials $x_{ik} x_{jl}$ for all
$1 \leq i < j \leq d$ and $1 \leq k < l \leq n$.
We call $M$ the {\em chain ideal}, because
its irreducible components correspond to chains
in the grid from $x_{1n}$ to $x_{d1}$. It is
a point in $\,\overline{G^{n}/G} \subset H_{d,n}\,$
since $M = {\rm in}(I_2(X))$ in the lexicographic order.
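One can verify directly that $M$ lies on $H_{d,n}$ for small parameters, by counting the monomials of a given multidegree that avoid the generators of $M$ and comparing with~(\ref{eqn:ourHF}). A standard monomial of $M$ amounts to a choice of row multisets, one per column, whose entries weakly decrease as the column index increases; the sketch below (ours, for illustration) performs this count by brute force:

```python
from itertools import combinations_with_replacement, product
from math import comb

def hf_chain_ideal(d, n, u):
    """Number of monomials of multidegree u not divisible by any
    generator x_{ik} x_{jl} (i < j, k < l) of the chain ideal M."""
    cols = [list(combinations_with_replacement(range(1, d + 1), uj))
            for uj in u]
    count = 0
    for choice in product(*cols):
        nonempty = [c for c in choice if c]
        # rows must weakly decrease as the column index increases;
        # checking consecutive nonempty columns suffices
        if all(min(a) >= max(b) for a, b in zip(nonempty, nonempty[1:])):
            count += 1
    return count

# the count matches the Hilbert function of the diagonal
for d, n in [(2, 3), (3, 2), (3, 3)]:
    for u in product(range(3), repeat=n):
        assert hf_chain_ideal(d, n, u) == comb(sum(u) + d - 1, d - 1)
```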
\begin{lem} \label{lem:diag}
The tangent space of the multigraded Hilbert scheme
$H_{d,n}$ at the chain ideal $M$ has dimension $\,(d^2-1)(n-1)$.
\end{lem}
\begin{proof}
We claim that the following three classes
$\rho$, $\sigma$ and~$\tau$ form a basis for the tangent space
${\rm Hom}_{K[X]} (M,K[X]/M)_0 $:
\smallskip
\noindent {\em Class $\rho$}:
For each triple of indices $(i,j,l)$
with $1 \leq i < j \leq d$ and $1 < l \leq n$
we define a $K[X]$-module homomorphism
$\,\rho_{ijl}\colon M \rightarrow K[X]/M\,$ by setting
\begin{align*}
\rho_{ijl} ( x_{hk} x_{jl}) &= x_{hk} x_{il} \quad
\hbox{whenever} \,\, i \leq h < j \,\,\hbox{and} \,\,k < l\hbox{, and} \\
\rho_{ijl}(m) &= 0 \quad
\hbox{for all other minimal generators $m$ of $M$}.
\end{align*}
\smallskip
\noindent {\em Class $\sigma$}:
For each triple of indices $(i,j,k)$
with $1 \leq i < j \leq d$ and $1 \leq k < n$
we define a $K[X]$-module homomorphism
$\,\sigma_{ijk}\colon M \rightarrow K[X]/M\,$ by setting
\begin{align*}
\sigma_{ijk} ( x_{ik} x_{hl}) &= x_{jk} x_{hl} \quad
\hbox{whenever} \,\, i < h \leq j \,\,\hbox{and} \,\,k < l\hbox{, and} \\
\sigma_{ijk}(m) &= 0 \quad
\hbox{for all other minimal generators $m$ of $M$}.
\end{align*}
\noindent {\em Class $\tau$}:
For each pair of indices $(i,k)$
with $1 \leq i < d$ and $1 \leq k < n$
we define a $K[X]$-module homomorphism
$\,\tau_{ik}\colon M \rightarrow K[X]/M\,$ by setting
\begin{align*}
\tau_{ik} ( x_{i,k} x_{i+1,k+1}) &= x_{i,k+1} x_{i+1,k} \quad \hbox{and}\\
\tau_{ik}(m) &= 0 \quad
\hbox{for all other minimal generators $m$ of $M$}.
\end{align*}
The above $K[X]$-linear maps are
$\mathbb{Z}^d$-homogeneous of degree zero, and
they are clearly linearly independent over $K$.
There are $(n-1)(d-1)$ maps in the class
$\tau$, and there are $(n-1)\binom{d}{2}$ each in
the classes $\rho$ and $\sigma$.
This adds up to the required total number of
$\,(d^2-1)(n-1) = (n-1)(d-1)+2(n-1)\binom{d}{2}$.
It remains to be seen that every $\mathbb{Z}^d$-graded
$K[X]$-module homomorphism
from~$M$ to $K[X]/M$ of degree zero is a $K$-linear
combination of the above. Suppose that
$\phi\colon M \rightarrow K[X]/M$ is a module homomorphism. Then,
for $i < j$ and
$k < l$, we can uniquely write $\phi(x_{ik} x_{jl})$ as a linear combination
of monomials not in~$M$. Furthermore, by subtracting appropriate multiples of
$\rho_{ijl}$ and $\sigma_{ijk}$, we can assume that the monomials in
the linear combination do
not include
$x_{ik}x_{il}$ or $x_{jk}x_{jl}$. Suppose that for some $n \leq m$, the coefficient of $x_{mk} x_{nl}$
is some non-zero $\alpha\in K$. Either $i < m$ or $n < j$, and the two cases are
symmetric under reversing the order of both the column indices and the row indices, so we
assume the former.
For any $o$ such that $n \leq o \leq m$ and $i < o$, the syzygies imply
\begin{equation*}
\alpha x_{o,k+1}x_{mk}x_{nl}\,\, + \cdots
=\,\, x_{o,k+1} \phi(x_{ik} x_{jl})
\,\,=\,\, x_{jl} \phi(x_{ik} x_{o, k+1}).
\end{equation*}
Since the first term is non-zero in $K[X]/M$, the monomial must be divisible by
$x_{jl}$. Thus, either $j = n$, or both $j=o$
and $l = k+1$. In the first case, taking $o = m$, and using the assumption that
the coefficient of $x_{mk}x_{m,k+1}$ in $\phi(x_{ik}x_{o,k+1})$ is zero, we get
a contradiction.
In the second case, if $j \neq n$, then we must only
have one
choice of $o$ and this forces $i = n = m-1$. Therefore, $\phi$ is a linear
combination of homomorphisms of class $\tau$, and thus the classes of $\rho$,
$\sigma$, and~$\tau$ span the tangent space at $M$.
\end{proof}
\begin{cor}
\label{cor:chain-smooth}
The chain ideal $M$ is a smooth point on $H_{d,n}$.
The unique irreducible component of $H_{d,n}$
containing $M$ is the completion $\overline{G^{n}/G}$.
\end{cor}
We now turn to the case $n = 2$ which is well-studied in the
literature. The compactification $\overline{G^2/G}$
is the classical {\em space of complete collineations}, which was
investigated by Thaddeus in \cite{Th}. In fact, we have:
\begin{cor}
The multigraded Hilbert scheme $H_{d,2}$ is smooth and irreducible.
It coincides with the space of complete collineations:
$\,H_{d,2} = \overline{G^2/G}$.
\end{cor}
\begin{proof}
Up to relabeling, the chain ideal is the
only monomial ideal in $H_{d,2}$. This point is
smooth by Corollary~\ref{cor:chain-smooth}, and hence $H_{d,2}$ is smooth.
Since it is connected by Corollary~\ref{cor:connected},
we conclude that $H_{d,2}$ is also irreducible.
The results in \cite{Th} show that the Grothendieck Hilbert scheme is isomorphic
to the space of complete collineations, and in particular smooth and
irreducible.
Thus, the morphism in Proposition~\ref{prop:grothen} is an isomorphism between
$H_{d,2}$ and the space of complete collineations.
\end{proof}
The representation of $\overline{G^2/G}$ as a multigraded Hilbert scheme
$H_{d,2}$
gives rise to nice polynomial
equations for the space of complete collineations.
Namely, each ideal $I$ in $ H_{d,2}$ is generated by $\binom{d}{2}$
equations of degree $(1,1)$.
As there are $d^2$ monomials in $K[X]_{(1,1)}$,
this describes an embedding of $H_{d,2}$ into
the Grassmannian ${\rm Gr}\big(\binom{d}{2},d^2\big)$.
The subscheme $H_{d,2}$ of this Grassmannian is cut out
by the determinantal equations which are derived
by requiring that the ideal $I$ has the correct number
of first syzygies in degrees $(1,2)$ and $(2,1)$.
\begin{ex} [Equations defining $H_{3,2}$] \rm
We shall realize the $8$-dimensional manifold $H_{3,2}$ as a
closed subscheme of the $18$-dimensional Grassmannian
${\rm Gr}(3,9)$, by giving explicit equations
in the $84$ Pl\"ucker coordinates. Our equations
furnish an explicit projective embedding for
Thaddeus' GIT construction \cite{Th} which is
reviewed further below.
Fix a $3 \times 9$-matrix of unknowns
\begin{equation*}
A \quad = \quad \begin{bmatrix}
a_{11} &a_{12} & a_{13} & a_{21} & a_{22} & a_{23} & a_{31} & a_{32} & a_{33}\\
b_{11} &b_{12} & b_{13} & b_{21} & b_{22} & b_{23} & b_{31} & b_{32} & b_{33}\\
c_{11} &c_{12} & c_{13} & c_{21} & c_{22} & c_{23} & c_{31} & c_{32} & c_{33}
\end{bmatrix}.
\end{equation*}
Consider the ideal $I$ generated by
the three bilinear polynomials in the vector
\begin{equation*} A \cdot
\bigl(
x_{11} x_{12},
x_{11} x_{22},
x_{11} x_{32},
x_{21} x_{12},
x_{21} x_{22},
x_{21} x_{32},
x_{31} x_{12},
x_{31} x_{22},
x_{31} x_{32} \bigr)^T. \end{equation*}
The condition for $I$ to be a point in $H_{3,2}$ is equivalent to
the condition that the rows of the following two $9{\times} 18$-matrices
are linearly dependent:
\begin{equation*} \!\!
\left[\begin{array}{cccccccccccccccccc}
a_{11} \!&\! a_{12} \!&\! a_{13} \!&\! a_{21} \!&\! a_{22} \!&\! a_{23} \!&\! a_{31} \!&\! a_{32}
\!&\! a_{33} \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \! \\ \!
b_{11} \!&\! b_{12} \!&\! b_{13} \!&\! b_{21} \!&\! b_{22} \!&\! b_{23} \!&\! b_{31} \!&\! b_{32}
\!&\! b_{33} \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \! \\ \!
c_{11} \!&\! c_{12} \!&\! c_{13} \!&\! c_{21} \!&\! c_{22} \!&\! c_{23} \!&\! c_{31} \!&\! c_{32}
\!&\! c_{33} \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \! \\ \!
0 \!&\! 0 \!&\! 0 \!&\! a_{11} \!&\! a_{12} \!&\! a_{13} \!&\! 0 \!&\! 0 \!&\! 0 \!&\! a_{21}
\!&\! a_{22} \!&\! a_{23} \!&\! a_{31} \!&\! a_{32} \!&\! a_{33} \!&\! 0 \!&\! 0 \!&\! 0 \! \\ \!
0 \!&\! 0 \!&\! 0 \!&\! b_{11} \!&\! b_{12} \!&\! b_{13} \!&\! 0 \!&\! 0 \!&\! 0 \!&\! b_{21}
\!&\! b_{22} \!&\! b_{23} \!&\! b_{31} \!&\! b_{32} \!&\! b_{33} \!&\! 0 \!&\! 0 \!&\! 0 \! \\ \!
0 \!&\! 0 \!&\! 0 \!&\! c_{11} \!&\! c_{12} \!&\! c_{13} \!&\! 0 \!&\! 0 \!&\! 0 \!&\! c_{21}
\!&\! c_{22} \!&\! c_{23} \!&\! c_{31} \!&\! c_{32} \!&\! c_{33} \!&\! 0 \!&\! 0 \!&\! 0 \! \\ \!
0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! a_{11} \!&\! a_{12} \!&\! a_{13} \!&\! 0 \!&\! 0
\!&\! 0 \!&\! a_{21} \!&\! a_{22} \!&\! a_{23} \!&\! a_{31} \!&\! a_{32} \!&\! a_{33} \! \\ \!
0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! b_{11} \!&\! b_{12} \!&\! b_{13} \!&\! 0 \!&\! 0
\!&\! 0 \!&\! b_{21} \!&\! b_{22} \!&\! b_{23} \!&\! b_{31} \!&\! b_{32} \!&\! b_{33} \! \\ \!
0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! c_{11} \!&\! c_{12} \!&\! c_{13} \!&\! 0 \!&\! 0
\!&\! 0 \!&\! c_{21} \!&\! c_{22} \!&\! c_{23} \!&\! c_{31} \!&\! c_{32} \!&\! c_{33}
\end{array}\right]
\end{equation*}
\begin{equation*} \!\!
\left[\begin{array}{cccccccccccccccccc}
a_{11} \!&\! a_{21} \!&\! a_{31} \!&\! a_{12} \!&\! a_{22} \!&\! a_{32} \!&\! a_{13} \!&\! a_{23} \!&\! a_{33} \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \! \\ \!
b_{11} \!&\! b_{21} \!&\! b_{31} \!&\! b_{12} \!&\! b_{22} \!&\! b_{32} \!&\! b_{13} \!&\! b_{23} \!&\! b_{33} \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \! \\ \!
c_{11} \!&\! c_{21} \!&\! c_{31} \!&\! c_{12} \!&\! c_{22} \!&\! c_{32} \!&\! c_{13} \!&\! c_{23} \!&\! c_{33} \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \! \\ \!
0 \!&\! 0 \!&\! 0 \!&\! a_{11} \!&\! a_{21} \!&\! a_{31} \!&\! 0 \!&\! 0 \!&\! 0 \!&\! a_{12} \!&\! a_{22} \!&\! a_{32} \!&\! a_{13} \!&\! a_{23} \!&\! a_{33} \!&\! 0 \!&\! 0 \!&\! 0 \! \\ \!
0 \!&\! 0 \!&\! 0 \!&\! b_{11} \!&\! b_{21} \!&\! b_{31} \!&\! 0 \!&\! 0 \!&\! 0 \!&\! b_{12} \!&\! b_{22} \!&\! b_{32} \!&\! b_{13} \!&\! b_{23} \!&\! b_{33} \!&\! 0 \!&\! 0 \!&\! 0 \! \\ \!
0 \!&\! 0 \!&\! 0 \!&\! c_{11} \!&\! c_{21} \!&\! c_{31} \!&\! 0 \!&\! 0 \!&\! 0 \!&\! c_{12} \!&\! c_{22} \!&\! c_{32} \!&\! c_{13} \!&\! c_{23} \!&\! c_{33} \!&\! 0 \!&\! 0 \!&\! 0 \! \\ \!
0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! a_{11} \!&\! a_{21} \!&\! a_{31} \!&\! 0 \!&\! 0 \!&\! 0 \!&\! a_{12} \!&\! a_{22} \!&\! a_{32} \!&\! a_{13} \!&\! a_{23} \!&\! a_{33} \! \\ \!
0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! b_{11} \!&\! b_{21} \!&\! b_{31} \!&\! 0 \!&\! 0 \!&\! 0 \!&\! b_{12} \!&\! b_{22} \!&\! b_{32} \!&\! b_{13} \!&\! b_{23} \!&\! b_{33} \! \\ \!
0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! 0 \!&\! c_{11} \!&\! c_{21} \!&\! c_{31} \!&\! 0 \!&\! 0 \!&\! 0 \!&\! c_{12} \!&\! c_{22} \!&\! c_{32} \!&\! c_{13} \!&\! c_{23} \!&\! c_{33} \!
\end{array}\right]
\end{equation*}
These two matrices are obtained by multiplying the generators of $I$
with the
entries in the two columns of $X = (x_{ij}) $ respectively. This results in
nine polynomials of bidegree $(2,1)$ and nine polynomials of bidegree $(1,2)$,
each having $18$ terms.
Each of these two sets of polynomials must be linearly
dependent because each $I \in H_{3,2}$
has its first syzygies in these two bidegrees.
The $84$ maximal minors of the matrix $A$ are the
Pl\"ucker coordinates $p_{i_1 i_2, j_1 j_2, k_1 k_2}$
on the Grassmannian ${\rm Gr}(3,9)$, where
the indices run from~$1$ to~$3$.
Using Laplace expansion, we write each $9 {\times} 9$-minor of the
two matrices as a cubic polynomial in these Pl\"ucker coordinates.
The condition that the matrices have rank at most eight translates into a
system of homogeneous cubic polynomials in the $84$ unknowns
$p_{i_1 i_2,j_1 j_2, k_1 k_2}$, and these
cubics define the space of complete collineations,
$\overline{G^2/G} = H_{3,2}$, as a subscheme of ${\rm Gr}(3,9)$.
Thaddeus \cite{Th} realizes
$H_{3,2}$ as the (Chow or GIT)
quotient of the Grassmannian ${\rm Gr}(3,6)$ by the one-dimensional subtorus of
$(K^*)^6$ given by the diagonal matrices with entries
$(\,t,\,t,\,t,t^{-1},t^{-1},t^{-1})$.
We can see this in our equations as follows. Let $U = (u_{ij})$ and
$V = (v_{ij})$ be $3 {\times} 3$-matrices of unknowns. Each point in
${\rm Gr}(3,6)$ is
represented as the row space of the $3 {\times} 6$-matrix $[U,V]$.
The group $G^2 = {\rm PGL}(3) \times {\rm PGL}(3)$ acts on
$H_{3,2}$ by translating the
distinguished point
$I_2(X)$ to the ideal generated by the three quadrics
\begin{equation*} \begin{matrix}
& (u_{i1} x_{11} + u_{i2} x_{21} + u_{i3} x_{31})
(v_{j1} x_{12} + v_{j2} x_{22} + v_{j3} x_{32}) \\
- &
(u_{j1} x_{11} + u_{j2} x_{21} + u_{j3} x_{31})
(v_{i1} x_{12} + v_{i2} x_{22} + v_{i3} x_{32})
\end{matrix}
\qquad \hbox{for} \,\,1 \leq i < j \leq 3 . \end{equation*}
The entries of the corresponding $3 \times 9$ matrix $A$ are
\begin{equation*}
a_{i_1 i_2} = u_{1 i_1} \! v_{2 i_2} \!-\! u_{2 i_1} \! v_{1 i_2} ,\,\,
b_{j_1 j_2} = u_{1 j_1} \! v_{3 j_2} \!-\! u_{3 j_1} \! v_{1 j_2} ,\,\,
c_{k_1 k_2} = u_{2 k_1} \! v_{3 k_2} \!-\! u_{3 k_1} \! v_{2 k_2} .
\end{equation*}
Writing $u_{\mu}$ for the $\mu$-th column of the matrix $U$
and $v_{\nu}$ for the $\nu$-th column of $V$, this
translates into the following parametric representation
of $H_{3,2}$:
\begin{equation*}
p_{i_1 i_2, j_1 j_2, k_1 k_2} =
\det[u_{i_1},v_{i_2}, u_{j_1}] \det[v_{j_2}, u_{k_1},v_{k_2}]
- \det[u_{i_1},v_{i_2}, v_{j_2}] \det[u_{j_1}, u_{k_1},v_{k_2}].
\end{equation*}
These are quadratic polynomials in the Pl\"ucker coordinates on
${\rm Gr}(3,6)$.
They are invariant under Thaddeus' torus action and antisymmetric
under swapping each of the three index pairs. Note that of the
$84$ polynomials
only $12$ are actually Pl\"ucker binomials. Of the others, $6$ are zero
(for example, $p_{11,21,31} = 0$) and $66$ are Pl\"ucker monomials
(for example,
$p_{11, 21,32} = \det[u_1, v_1, u_2] \det[v_1, u_3, v_2]$).
The resulting map ${\rm Gr}(3,6) \rightarrow {\rm Gr}(3,9)$
gives an embedding of Thaddeus' quotient $H_{3,2} = {\rm Gr}(3,6)/K^*$.
The cubic relations on ${\rm Gr}(3,9)$ described above
characterize the image of this embedding. \qed
\end{ex}
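The vanishing and monomial claims above, as well as the invariance under Thaddeus' subtorus, can be spot-checked numerically. The following Python sketch (an illustration only, not part of the argument; the integer matrices $U$, $V$ are arbitrary) evaluates the parametric formula for $p_{i_1 i_2, j_1 j_2, k_1 k_2}$ displayed above:

```python
from fractions import Fraction

def col(M, m):
    # m-th column (1-indexed) of a 3x3 matrix given as a list of rows
    return tuple(row[m - 1] for row in M)

def det3(c1, c2, c3):
    # determinant of the 3x3 matrix with columns c1, c2, c3
    (a, d, g), (b, e, h), (c, f, i) = c1, c2, c3
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def p(U, V, i1, i2, j1, j2, k1, k2):
    # the parametric Pluecker coordinate displayed above (indices in 1..3)
    u = lambda m: col(U, m)
    v = lambda m: col(V, m)
    return (det3(u(i1), v(i2), u(j1)) * det3(v(j2), u(k1), v(k2))
          - det3(u(i1), v(i2), v(j2)) * det3(u(j1), u(k1), v(k2)))

U = [[1, 2, 0], [3, 1, 1], [0, 4, 2]]    # arbitrary integer test matrices
V = [[2, 0, 1], [1, 1, 3], [5, 2, 1]]

# p_{11,21,31} vanishes, and p_{11,21,32} is the stated Pluecker monomial
assert p(U, V, 1, 1, 2, 1, 3, 1) == 0
assert p(U, V, 1, 1, 2, 1, 3, 2) == \
    det3(col(U, 1), col(V, 1), col(U, 2)) * det3(col(V, 1), col(U, 3), col(V, 2))

# invariance under the subtorus with diagonal entries (t, t, t, 1/t, 1/t, 1/t)
t = Fraction(7, 3)
Ut = [[t * x for x in row] for row in U]
Vt = [[x / t for x in row] for row in V]
assert p(Ut, Vt, 1, 2, 3, 1, 2, 3) == p(U, V, 1, 2, 3, 1, 2, 3)
```

The torus invariance holds because each determinant mixing one $v$-column with two $u$-columns scales by $t$, while each determinant mixing one $u$-column with two $v$-columns scales by $t^{-1}$, so both products in the formula are unchanged.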
\begin{remark}
\label{remark:Lafforgue}
In the introduction of \cite{Laf},
Lafforgue describes the following compactification of $G^{n}/G$.
We consider the $d \times d$-minors of the
$d \times (dn)$-matrix $\,(A_1,A_2,\ldots,A_n)$.
For each minor there is a corresponding vector ${\bf i}$ in
the set $\, D = \bigl\{
(i_1,i_2,\ldots,i_n) \in \mathbb{N}^n : i_1 + \cdots + i_n = d \bigr\} $,
namely, $i_j$ is the number of columns of $A_j$ occurring in that minor.
We introduce a new unknown $t_{\bf i}$ for each ${\bf i} \in D$,
and we multiply each minor by the corresponding unknown $t_{\bf i}$.
The scaled minors parametrize a subvariety in an affine space of dimension
$$ \binom{nd}{d} \,\,\, = \,\,\,
\sum_{{\bf i} \in D} \binom{d}{i_1} \binom{d}{i_2}\cdots \binom{d}{i_n}. $$
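The displayed identity is an instance of the Vandermonde convolution; a quick numerical check (illustration only):

```python
from itertools import product
from math import comb, prod

def rhs(d, n):
    # sum of binomial products over compositions i_1 + ... + i_n = d
    return sum(prod(comb(d, i) for i in c)
               for c in product(range(d + 1), repeat=n) if sum(c) == d)

# check binom(nd, d) against the sum for several small cases
for d, n in [(2, 2), (2, 3), (3, 3), (4, 2)]:
    assert comb(n * d, d) == rhs(d, n)
```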
This affine variety yields a projective variety $X_{d,n}$ which compactifies
$G^{n}/G$:
\begin{equation}
\label{lafhook}
G^{n}/G\,\,\, \hookrightarrow\,\,\,
X_{d,n} \,\,\,\subset \,\,\,
\prod_{{\bf i} \in D}\mathbb{P}^{\binom{d}{i_1} \binom{d}{i_2}\cdots \binom{d}{i_n}-1}.
\end{equation}
In light of \cite[\S 2]{HS}, we can identify
$X_{d,n}$ with the partial multigraded Hilbert scheme
$(H_{d,n})_D$ obtained by restricting $H_{d,n}$ to the subset
of degrees $D \subset\mathbb{Z}^n$. Hence there is a natural
morphism $H_{d,n} \rightarrow X_{d,n}$. This is an isomorphism
for $d = 2$ and $n=2$ but we do not know whether this is always the case.
In general, $X_{d,n}$ is singular, and the main result of \cite{Laf} is
a combinatorial construction that replaces $X_{d,n}$
with another -- less singular -- model $\Omega_{d,n}$.
Yet, as discussed in the erratum to \cite{Laf},
$\,\Omega_{d,n}$ is not smooth for $d,n \geq 4$. \qed
\end{remark}
\section{Yet another space of trees}
This section concerns the case $d=2$.
The Hilbert scheme $H_{2,n}$
parametrizes degenerations of the projective line
in its diagonal embedding $\,\mathbb{P}^1 \hookrightarrow (\mathbb{P}^1)^n$.
Our goal is to prove the following two theorems about the structure of $H_{2,n}$.
\begin{thm} \label{thm:IrrButSing}
The multigraded Hilbert scheme $H_{2,n}$ is irreducible,
so it equals the compactification $ \overline{{\rm PGL}(2)^{n}/{\rm PGL}(2)}$.
However, $H_{2,n}$ is singular for $n \geq 4$.
\end{thm}
Our second theorem explains why we
refer to $H_{2,n}$ as a {\em space of trees}. The qualifier
``yet another'' has been prepended
to emphasize that this is not the
{\em space of phylogenetic trees}. The latter
is familiar to algebraic geometers as a discrete
model for $\overline{\mathcal{M}_{0,n}}$;
see \cite[Theorem~1.2]{GM} for a precise statement.
Following \cite{AS}, there is a natural graph structure
on any multigraded Hilbert scheme, including $H_{2,n}$. The vertices are the
monomial ideals, and for every ideal in $H_{2,n}$ with precisely two initial
monomial ideals there is an edge between the corresponding vertices.
By \cite[Theorem 11]{AS}, this is precisely the induced subgraph
on $H_{2,n}$ of the graph of all monomial ideals.
We note that our graph is not a {\em GKM graph}
in the sense of \cite{GHZ} because the $T^n$-fixed subvarieties
corresponding to edges usually have dimension greater than one.
\begin{thm} \label{thm:YASOT}
There are $\,2^n (n{+}1)^{n-2}\,$ monomial ideals in $H_{2,n}$,
one for each tree on $n{+}1$ unlabeled vertices
with $n$ labeled directed edges. Two trees are connected by
an edge on $H_{2,n}$ if they differ by one of the following operations:
\begin{enumerate}
\item Move any subset of the subtrees attached at a vertex to an adjacent vertex.
\item Swap two edges that meet at a bivalent vertex (preserving orientation).
\end{enumerate}
\end{thm}
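The count in Theorem \ref{thm:YASOT} can be verified by brute force for small $n$. The sketch below (an illustration only, not part of the proof) enumerates trees on $n{+}1$ vertices with labeled directed edges and counts isomorphism classes under relabeling of the (unlabeled) vertices:

```python
from itertools import combinations, permutations, product

def count_directed_edge_labeled_trees(n):
    V = range(n + 1)
    pairs = list(combinations(V, 2))
    seen = set()
    for edges in combinations(pairs, n):          # candidate edge sets
        # union-find: n acyclic edges on n+1 vertices form a tree
        parent = list(V)
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        ok = True
        for a, b in edges:
            ra, rb = find(a), find(b)
            if ra == rb:
                ok = False
                break
            parent[ra] = rb
        if not ok:
            continue
        for labeling in permutations(edges):      # edge label i+1 at slot i
            for orient in product([0, 1], repeat=n):
                tree = tuple((a, b) if o == 0 else (b, a)
                             for (a, b), o in zip(labeling, orient))
                # canonical form: minimum over all vertex relabelings
                canon = min(tuple((s[a], s[b]) for a, b in tree)
                            for s in permutations(V))
                seen.add(canon)
    return len(seen)

# 2^n (n+1)^(n-2) monomial ideals: 4 for n=2, 32 for n=3
for n in [2, 3]:
    assert count_directed_edge_labeled_trees(n) == 2**n * (n + 1)**(n - 2)
```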
In this section we use the following notation for our matrix of variables:
\begin{equation*}
X = \begin{bmatrix}
x_1 & x_2 & \cdots & x_n \\
y_1 & y_2 & \cdots & y_n
\end{bmatrix}.
\end{equation*}
Thus $(x_i:y_i)$ are homogeneous coordinates
on the $i$-th factor in our
ambient space $(\mathbb{P}^1)^n$.
The common Hilbert function (\ref{eqn:ourHF})
of all ideals $I$ in $H_{2,n}$ equals
\begin{equation} \label{eqn:ourHF2}
\mathbb{N}^n\, \rightarrow \,\mathbb{N} \,,\,\,\, (u_1,u_2,\ldots,u_n)\, \mapsto \, u_1 + u_2 + \cdots + u_n + 1 .
\end{equation}
The unique Borel-fixed ideal in $H_{2,n}$ equals
$ \, Z = \langle x_i x_j : 1 \leq i < j \leq n \rangle$.
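As a sanity check on (\ref{eqn:ourHF2}), a monomial of multidegree $u$ survives modulo $Z$ if and only if at most one of its $x$-exponents is positive, and these standard monomials can be counted directly (illustration only):

```python
from itertools import product

def hf_Z(u):
    # monomials of multidegree u in K[X] have the form
    # prod x_i^{a_i} y_i^{u_i - a_i}; such a monomial avoids every
    # generator x_i x_j of Z iff at most one a_i is positive
    count = 0
    for a in product(*(range(ui + 1) for ui in u)):
        if sum(1 for ai in a if ai > 0) <= 1:
            count += 1
    return count

# the count agrees with the Hilbert function u -> u_1 + ... + u_n + 1
for u in [(0, 0, 0), (1, 2, 0), (2, 2, 2), (3, 1, 4)]:
    assert hf_Z(u) == sum(u) + 1
```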
Our first goal is to prove that $H_{2,n}$ is irreducible. This
requires a combinatorial description of the
subvarieties $V(I)$ of $(\mathbb{P}^1)^n$
corresponding to ideals $I \in H_{2,n}$.
Note that
each such subvariety is a reduced curve of multidegree $(1,1,\ldots,1)$ in
$(\mathbb{P}^1)^n$.
\begin{lem} \label{lem:43}
The variety $V(I) \subset (\mathbb{P}^1)^n $ defined by any ideal $I \in H_{2,n}$ is
the reduced union of several copies of
the projective line $\,\mathbb P^1$. For each factor of
$(\mathbb{P}^1)^n$ there is exactly
one component of $V(I)$ which is not constant along this factor, and for this
component, the projection induces an isomorphism.
\end{lem}
\begin{proof}
Consider the projection from $V(I)$ onto the $i$-th factor of $(\mathbb{P}^1)^n$.
We infer
from the Hilbert function (\ref{eqn:ourHF2}) that this projection is an isomorphism
over an open subset of $\mathbb{P}^1$.
Hence there exists a rational map from $\mathbb P^1$ to a unique component $Y$ of
$V(I)$. The fact that the projection $Y \rightarrow \mathbb{P}^1$ is a regular birational morphism implies that
the curve $Y$ is smooth. We conclude that the map $Y \rightarrow \mathbb{P}^1$ is an isomorphism.
\end{proof}
Each component of $V(I)$ can be labeled by the factors onto which it maps
isomorphically. We draw $V(I)$
as a set of intersecting lines, labeled with subsets of the factors.
By Lemma \ref{lem:43}, the labels form a partition of $\{1,2,\ldots,n\}$.
Moreover, since $K[X]/I$ is Cohen-Macaulay, $V(I)$ is connected,
and because only one component is non-constant along any factor, there is no
cycle among its components.
Hence our picture is an edge-labeled tree.
We have the following converse to this description of the points of
$H_{2,n}$.
\begin{prop} \label{prop:construction-ideals-h2n}
Suppose that $Y \subset (\mathbb{P}^1)^n$ is a
union of projective lines, which is connected and such that
each factor of $(\mathbb{P}^1)^n$ has a unique
projective line projecting isomorphically onto it. Then
the radical ideal $I$ defining $Y$ is a point in $H_{2,n}$.
\end{prop}
\begin{proof}
We compute the Hilbert function and show that it coincides with (\ref{eqn:ourHF2}).
We proceed by induction on the number of components.
If $Y$ is irreducible then $Y$ is the translate of the diagonal
$\mathbb{P}^1 $ in $(\mathbb{P}^1)^n$ with some $A_i \in {\rm PGL}(2)$
acting on the $i$-th factor,
and therefore $\,I\, =\, (A_1,\ldots,A_n) \circ I_2(X)\,$ lies in $H_{2,n}$.
Now suppose $Y$ is reducible, let $Y_j$ be one of its components,
and $F_j \subset \{1,\ldots,n\}$ the index set of factors of $(\mathbb{P}^1)^n$
onto which $Y_j$ maps isomorphically. The prime ideal $I_j$ of $Y_j$
is generated by linear forms of multidegrees $\{e_i: i \not\in F_j \}$,
and by the $2 \times 2$-minors of a $2 \times |F_j|$-matrix $X_j$
which consists of the $F_j$ columns of $X$ acted on
by some $A_i \in {\rm PGL}(2)$.
The Hilbert function of $I_j$ is
\begin{equation*}
u \,\mapsto \,1 + \sum_{i \in F_j} u_i.
\end{equation*}
Since $Y$ is a tree of projective lines, there exists a component
$Y_j$ which has only one point of
intersection with the other components. Let
$Y' \subset (\mathbb{P}^1)^n$ be the union of the other components
and $I'$ the radical ideal defining $Y'$. The ideal $I_j$ of
$Y_j$ contains linear forms of degree $e_i$ for every $i \not\in F_j$,
while $I'$ contains linear forms of degree $e_i$ for every $i \in F_j$.
This implies that $I' + I_j$ is a homogeneous prime ideal
generated by linear forms. Its variety equals $Y' \cap Y_j$,
and hence $I'+I_j$ has constant Hilbert function $1$. We conclude
\begin{align*}
HF(I) \,=\, HF(I' \cap I_j) \,& = \,HF(I') + HF(I_j) - HF(I' + I_j) \\
&= \, \bigl( 1 + \sum_{i \in F_j} u_i \bigr) \,\, +\,\,
\bigl( 1 +\sum_{i \not\in F_j} u_i \bigr) \,\,-\,\, 1,
\end{align*}
which is the common Hilbert function (\ref{eqn:ourHF2})
of all ideals in $H_{2,n}$.
\end{proof}
Our discussion shows that each point in $H_{2,n}$
is characterized by the following data.
First, there is a tree of projective lines $Y_j = \mathbb{P}^1$,
labeled by the parts in a partition $\{1,\ldots,n\} = \cup_j F_j$.
These represent the factors of the ambient space $(\mathbb{P}^1)^n$.
The intersection point of two lines determines a marked
point on each of the two lines.
For each line labeled with more than one factor, we have a compatible set of
isomorphisms between those factors.
Given these data, we can compute the ideal $I_j$ of a component $Y_j$ as follows:
\begin{enumerate}
\item Let $X_j$ be the submatrix of $X$ given by the columns
indexed by $F_j$, acted on by the
$2 {\times} 2 $-matrices corresponding to the isomorphisms of~$\mathbb{P}^1$.
\item For each $i \not\in F_j$ locate the
intersection point on $Y_i$ that is nearest to $Y_j$.
Let $\alpha x_i + \beta y_i$ be the linear form defining this intersection point on $Y_i$.
\item
The ideal $I_j$ is generated by these linear forms and the
$2 {\times} 2$ minors of~$X_j$.
The intersection ideal $I = \cap_j I_j$ is
the desired point in $H_{2,n}$.
\end{enumerate}
\begin{ex} \rm
The above algorithm implies that there are infinitely many
${\rm PGL}(2)^n$-orbits on $H_{2,n}$ when $n \geq 5$.
Consider a tree of four lines, $Y_1$, $Y_2$, $Y_3$, and~$Y_4$,
which meet a fifth line in four distinct points, with
coordinates $(0{:}1),(1{:}1),(1{:}0)$ and $(t \!:\! 1)$ on $Y_5 = \mathbb{P}^1$.
Each of these intersection points is identified with the
point $V(x_j) = \{(0 \! : \! 1)\}$ on the line $Y_j$.
Then we have
\begin{equation*}
\begin{matrix}
I_1 = \langle x_2, x_3, x_4,x_5 \rangle &
I_2 = \langle x_1,x_3,x_4,x_5 - y_5 \rangle &
I_3 = \langle x_1, x_2, x_4, y_5 \rangle \\
I_4 = \langle x_1, x_2, x_3, x_5 - t y_5 \rangle \! &
I_5 = \langle x_1, x_2, x_3, x_4 \rangle & \!
I = I_1 \cap I_2 \cap I_3 \cap I_4 \cap I_5
\end{matrix}
\end{equation*}
As $t$ varies over the field $K$, the ideals $I$ lie in different
${\rm PGL}(2)^5$-orbits on $H_{2,5}$ because
the cross ratio of the four points on $Y_5$
is invariant under ${\rm PGL}(2)$. \qed
\end{ex}
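The cross-ratio invariance used in this example is easily illustrated with exact arithmetic; the sketch below (illustration only) applies the standard ${\rm PGL}(2)$-action on $\mathbb{P}^1$, with an arbitrary invertible matrix, to the four points on $Y_5$:

```python
from fractions import Fraction

def moebius(M, p):
    # PGL(2) acting on a point (s : t) of P^1, given as a pair
    (a, b), (c, d) = M
    s, t = p
    return (a * s + b * t, c * s + d * t)

def cross_ratio(p, q, r, s):
    d = lambda u, v: u[0] * v[1] - u[1] * v[0]   # 2x2 determinant
    return (d(p, r) * d(q, s)) / (d(p, s) * d(q, r))

t = Fraction(5, 2)                    # a sample value of the parameter t
pts = [(0, 1), (1, 1), (1, 0), (t, 1)]   # the four points on Y_5
M = ((3, 1), (2, 5))                  # an arbitrary invertible 2x2 matrix

# each determinant scales by det(M), so the cross ratio is unchanged
assert cross_ratio(*pts) == cross_ratio(*(moebius(M, p) for p in pts))
```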
\begin{proof}[Proof of Theorem \ref{thm:IrrButSing}]
We shall prove that $H_{2,n}$ is irreducible. Let $I$ be any ideal in $H_{2,n}$.
We use induction on the number of components of $V(I)$ to show that
$I$ is in the closure of the orbit of $I_2(X)$.
If $V(I)$ is irreducible, then $I$ is in the orbit of $I_2(X)$ by the above
discussion, so we assume that
$V(I)$ has at least two components. We shall
construct another ideal $J \in H_{2,n}$
such that $V(J)$ has one fewer component than $V(I)$
and such that $J$ degenerates to $I$.
Consider the tree picture of $V(I)$ as described above, and
let $Y_1$ be a component
which has exactly one point
of intersection with the other components. Let $Y_2$ be one of these components
intersecting $Y_1$. After relabelling the factors and a change of coordinates,
we can assume that the isomorphisms of the factors associated to $Y_1$ are all
the identity map,
and the same holds for the isomorphisms of factors of $Y_2$. Furthermore,
we can assume that the point $Y_1 \cap Y_2$ is defined
in $(\mathbb{P}^1)^n$ by the ideal
$\,\langle x_i : i \in F_1 \rangle + \langle y_i : i \not\in F_1 \rangle $.
We now replace $Y_2$ with the component
$Y_2'$ labeled by the set $F_2' = F_1 \cup F_2$ (with the identity
isomorphisms), and we call its ideal $I'$. By
Proposition~\ref{prop:construction-ideals-h2n}, $I'$ is in $H_{2,n}$. The ideal $I_2'$
of $Y_2'$ is generated by $\{ x_j : j \not\in F_2'\}$ and the $2 {\times} 2$
minors of the submatrix of $X$ indexed by $F_2'$.
For $t \neq 0$, we
consider the ideal formed by replacing $y_j$ by $ty_j$ in $I'$ for $j \in F_1$
and take the flat limit as $t$ goes to $0$. The limit of $I_2'$ under this
action is $I_1 \cap I_2$, so the limit of $I'$ is contained in $I$. Since
$I$ and the limit of $I'$ have the same Hilbert function, they must be equal.
This proves the first assertion in Theorem~\ref{thm:IrrButSing}.
The second assertion will follow from Corollary~\ref{cor:trivalent-smooth} below.
\end{proof}
We now come to the combinatorial description of monomial ideals $I$
on $H_{2,n}$. Here the tree picture can be simplified.
There are precisely $n$ components $V(I) = Y_1 \cup Y_2 \cup
\cdots \cup Y_n$, and the partition is into singletons $F_i = \{i\}$.
Each line $Y_i$ has only two points where intersections are possible,
namely, $V(x_i) = \{(0\! : \!1)\}$ and $V(y_i) = \{(1\! : \! 0)\}$.
We draw $Y_i$ as an oriented line segment with the
intersection points only at the end points.
The orientation is indicated by an arrow
whose tail represents
$V(x_i)$ and whose head represents $V(y_i)$.
In this manner, each monomial ideal $I$ in $H_{2,n}$
is represented uniquely by a tree $T$ with
$n$ directed labeled edges. The tree $T$
has $n {+} 1$ vertices which remain unlabeled.
This establishes the first part of Theorem~\ref{thm:YASOT}.
Our construction is illustrated for $n=3$ in
Figure \ref{fig:claw-chain}. See Example \ref{example:twothree}
below for a combinatorial discussion
of the two classes of trees shown here.
\begin{figure}
\begin{centering}
\includegraphics{h23trees.eps}
\par\end{centering}
\caption{Edge-labelled trees corresponding to $2$ of the $32$ monomial ideals on
$H_{2,3}$. The tree on the left corresponds to the ideal $\langle y_1y_2,
y_1x_3, x_2x_3\rangle$ and the tree on the right to the Borel-fixed ideal $Z
= \langle x_1x_2, x_1x_3, x_2x_3\rangle$.}
\label{fig:claw-chain}
\end{figure}
We next describe a rule for reading off the generators of
a monomial ideal $I \in H_{2,n}$ from its tree $T$.
For any two distinct indices $i$ and~$j$
in $\{1,\ldots,n\}$ we
set $z_{ij} = x_j$ if the directed edge $j$ is pointing away from
the edge $i$ in $T$ and $z_{ij} = y_j$ otherwise.
This means that the ideal $\,I_i \,$ of the component $Y_i$
is generated by the variables
$\, z_{ij}\,$ for $j \in \{1,\ldots,n\}\backslash \{i\}$.
By intersecting these ideals for all $i$ we obtain the
following combinatorial formula for the ideal $I$.
\begin{remark}
The monomial ideal
associated with the tree $T$ equals
\begin{equation*} I \,\, = \,\,\langle \,z_{ij}z_{ji} \,:\,
1 \leq i < j \leq n \,\rangle. \end{equation*}
Explicitly, the ideal generator corresponding to
the pair $\{i,j\}$ of edges equals
\begin{equation*}
z_{ij} z_{ji} \,\,\, = \,\,\,
\begin{cases}
\,\, y_i y_j & \hbox{ if the edges $i$ and $j$ point towards each other, } \\
\,\, x_i x_j & \hbox{ if the edges $i$ and $j$ point away from each other, }\\
\,\, y_i x_j & \hbox{ if the edge $i$ points to the edge $j$ but not conversely, } \\
\,\, x_i y_j & \hbox{ if the edge $j$ points to the edge $i$ but not conversely. }
\end{cases}
\end{equation*}
\end{remark}
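This rule can be automated. The sketch below (an ad-hoc encoding, not from the text: each edge label maps to a (tail, head) pair of vertex identifiers, and the encoding of the left tree is inferred from the caption of Figure \ref{fig:claw-chain}) computes the generators $z_{ij} z_{ji}$ from a directed edge-labeled tree:

```python
from itertools import combinations

def dist_to_edge(adj, start, edge):
    # BFS distance from a vertex to the nearer endpoint of an edge
    targets = set(edge)
    frontier, seen, d = {start}, {start}, 0
    while frontier:
        if frontier & targets:
            return d
        frontier = {w for v in frontier for w in adj[v]} - seen
        seen |= frontier
        d += 1
    raise ValueError("disconnected")

def generators(tree):
    # tree: dict, edge label -> (tail, head); returns the monomial generators
    adj = {}
    for a, b in tree.values():
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    def z(i, j):
        # z_ij = x_j if edge j points away from edge i, else y_j
        tail, head = tree[j]
        away = dist_to_edge(adj, tail, tree[i]) < dist_to_edge(adj, head, tree[i])
        return ('x' if away else 'y') + str(j)
    return {frozenset([z(i, j), z(j, i)]) for i, j in combinations(tree, 2)}

# left tree of the figure: path w0-w1-w2-w3 with 1: w0->w1, 2: w2->w1, 3: w2->w3
left = {1: (0, 1), 2: (2, 1), 3: (2, 3)}
assert generators(left) == {frozenset(['y1', 'y2']),
                            frozenset(['y1', 'x3']),
                            frozenset(['x2', 'x3'])}

# right tree: the star with all edges directed outwards gives Z
star = {1: (0, 1), 2: (0, 2), 3: (0, 3)}
assert generators(star) == {frozenset(['x1', 'x2']),
                            frozenset(['x1', 'x3']),
                            frozenset(['x2', 'x3'])}
```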
Note that the Borel-fixed ideal $Z$ corresponds to
the star tree with all edges directed outwards.
Our next result concerns tangent spaces of $H_{2,n}$.
\begin{prop} \label{prop:tangentspace}
Let $I $ be the monomial ideal corresponding to a
directed tree $T$ as above.
Then the dimension of the tangent space of $H_{2,n}$ at $I$ is
\begin{equation} \label{eqn:fsum}
\sum_{v} f(\operatorname{degree}(v))
\end{equation}
where the sum is over all vertices of $T$, and the function~$f$ is defined
by
\begin{equation*}
f(a) = \begin{cases} \,3(a-1) & \hbox{if} \,\,\, 1 \leq a \leq 3, \\
\, a(a-1) & \hbox{if} \,\,\, a \geq 3. \end{cases}
\end{equation*}
\end{prop}
The following corollary to this result
completes the proof of Theorem~\ref{thm:IrrButSing}.
\begin{cor} \label{cor:trivalent-smooth}
The monomial ideal $I$ is a smooth point on the
Hilbert scheme $H_{2,n}$
if and only if every vertex in the tree $T$ is at most trivalent.
\end{cor}
\begin{proof}
The tree $T$ has $n+1$ vertices $v$,
and the number of edges is
\begin{equation*}
n \,\, = \,\, \frac{1}{2} \cdot \sum_v \operatorname{degree}(v).
\end{equation*}
Since $H_{2,n}$ is a compactification of ${\rm PGL}(2)^{n-1}$,
its dimension equals
\begin{equation*} {\rm dim}(H_{2,n}) \,\, = \,\,
3(n-1) \,\, = \,\, 6 n - 3(n+1) \,\, = \,\,
\sum_{v} 3(\operatorname{degree}(v) - 1).
\end{equation*}
Since $f(a) \geq 3(a-1)$, with equality if and only if $a \leq 3$,
the sum in (\ref{eqn:fsum}) is equal to ${\rm dim}(H_{2,n})$
if and only if
$\, {\rm degree}(v) \leq 3$ for all vertices $v$.
\end{proof}
\begin{example}
The star tree ideal
$Z = \langle \,x_i x_j \,:\, 1 \leq i < j \leq n \,\rangle\,$
has tangent space dimension $n(n-1)$.
In fact, it is the most singular point on $H_{2,n}$, since
every other ideal degenerates to $Z$.
On the other hand, the chain ideal
$\,M = \langle\, x_i y_j \,: \, 1 \leq i < j \leq n \, \rangle \,$
is a smooth point on $H_{2,n}$, because the tree for $M$ is a
chain of $n$ directed edges. This confirms
Lemma \ref{lem:diag} for $d=2$. \qed
\end{example}
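These two evaluations of (\ref{eqn:fsum}), and the inequality underlying Corollary \ref{cor:trivalent-smooth}, can be checked over all degree sequences of trees on $n{+}1$ vertices, using the standard fact that every positive integer sequence of length $n{+}1$ summing to $2n$ is realized by some tree (illustration only):

```python
def f(a):
    # tangent-space contribution of a vertex of degree a
    return 3 * (a - 1) if a <= 3 else a * (a - 1)

def partitions(total, parts, smallest=1):
    # nondecreasing sequences of `parts` positive integers summing to `total`
    if parts == 1:
        if total >= smallest:
            yield (total,)
        return
    for first in range(smallest, total // parts + 1):
        for rest in partitions(total - first, parts - 1, first):
            yield (first,) + rest

for n in range(2, 9):
    for degs in partitions(2 * n, n + 1):
        s = sum(f(a) for a in degs)
        # the f-sum is at least dim H_{2,n} = 3(n-1), with equality
        # exactly when all vertex degrees are at most 3
        assert s >= 3 * (n - 1)
        assert (s == 3 * (n - 1)) == all(a <= 3 for a in degs)
    chain = [1, 1] + [2] * (n - 1)        # two leaves and n-1 bivalent vertices
    assert sum(f(a) for a in chain) == 3 * (n - 1)
    if n >= 3:
        star = [n] + [1] * n              # the star tree of the ideal Z
        assert sum(f(a) for a in star) == n * (n - 1)
```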
\begin{proof}[Proof of Proposition \ref{prop:tangentspace}]
For any distinct edges $k$ and $\ell$ meeting at a vertex $v$
of the tree $T$, we define the
following tangent directions for $H_{2,n}$ at $I$:
\begin{equation*}
\alpha_{k \ell} \colon z_{ij}z_{ji} \mapsto
\begin{cases}
\, z_{ij} \tilde{z}_{ji} & \mbox{if } i = k \mbox{ and $j$ connects to $v$ via
$\ell$ (including $j=\ell$),} \\
\,\,\,\, 0 & \mbox{otherwise.}
\end{cases}
\end{equation*}
Here we use the convention that ${\tilde z}_{ij} = x_j$ if $z_{ij} = y_j$ and
${\tilde z}_{ij} = y_j$ if $z_{ij} = x_j$.
Moreover, if $v$ is bivalent, i.e.\ $k$ and $\ell$ are the only edges incident
to $v$, define:
\begin{equation*}
\beta_{v} \colon z_{ij} z_{ji} \mapsto \begin{cases}
\, \tilde{z}_{ij} \tilde{z}_{ji} & \mbox{if } \{i,j\} = \{k, \ell\}, \\
\,\,\,\, 0 & \mbox{otherwise.}
\end{cases}
\end{equation*}
If the vertex $v$ has degree $a$, then we have defined $f(a)$ maps.
To show that these maps are indeed tangent directions, we exhibit a one-parameter
deformation of $I$. The tangent vector $\beta_{v}$ is realized by the
curve on $H_{2,n}$ gotten by replacing $z_{k \ell} z_{\ell k}$ with
$z_{k \ell} z_{\ell k} - \epsilon \tilde{z}_{k \ell} \tilde{z}_{\ell k}$
among the generators of $I$. The resulting ideal in $H_{2,n}$
represents the tree of lines gotten by merging the edges
$k$ and $\ell$ to a single $\mathbb{P}^1$ labeled by $\{k,\ell\}$.
The tangent vector $\alpha_{k \ell}$ is realized by
replacing $z_{jk}$ with $z_{jk} - \epsilon \tilde{z}_{jk}$
in all prime components $I_j$ of $I$ such that
$j$ connects to $v$ via $\ell$. The resulting ideal in $H_{2,n}$
represents the tree of lines gotten by sliding the
subtree at $v$ in direction $\ell$ along the
edge labeled~$k$.
As $v$ ranges over all vertices of the tree $T$,
and $k, \ell$ range over all incident edges,
we now have a collection of tangent vectors
whose cardinality equals~(\ref{eqn:fsum}).
To see that these vectors are linearly independent, we note that
$\alpha_{k\ell}$ is the only one of these tangent vectors such that the image of
$z_{k\ell}z_{\ell k}$ has a non-zero coefficient for $z_{k\ell} \tilde z_{\ell k}$, and
that $\beta_v$ is the
only one such that the image of $z_{k\ell}z_{\ell k}$ has a non-zero
coefficient for $\tilde z_{k\ell} \tilde z_{\ell k}$. Thus, a non-trivial linear
combination of the $\alpha_{k\ell}$ and $\beta_v$ cannot be the zero tangent
vector.
It remains to be seen that our tangent vectors
span the tangent space. Suppose there exists
a tangent vector $\phi$ that is not in the span of the
$\alpha_{k\ell}$ and $\beta_v$. After subtracting suitable multiples of these
known tangent vectors, we may assume that
for any pair of adjacent edges $i$ and $j$
there exists a scalar $\nu_{ij}$ such that
$\phi(z_{ij}z_{ji}) = \nu_{ij} \tilde z_{ij} \tilde z_{ji}$,
and furthermore $\nu_{ij} = 0$ if
the node $v$ shared by $i$ and $j$ is bivalent.
Suppose that $v$ has degree at least $3$ and let
$k$ be an edge incident to $v$ distinct
from $i$ and $j$. Then $z_{ij} = z_{kj}$ and hence
\begin{equation*}
\nu_{ij} z_{jk} \tilde z_{ij} \tilde z_{ji}
\, = \, z_{jk} \phi(z_{ij} z_{ji})
\, = \, \phi(z_{jk}z_{kj})z_{ji}
\, = \, \nu_{jk} \tilde z_{jk} \tilde z_{kj} z_{ji}.
\end{equation*}
This implies $\nu_{ij} = \nu_{jk} = 0$.
We conclude that
$\phi(z_{ij}z_{ji}) = 0$ for
any pair of adjacent edges $i$ and $j$.
Now suppose that $i$ and $j$ are not adjacent and write
\begin{equation*}
\phi(z_{ij}z_{ji}) \,\, = \,\, \lambda z_{ij} \tilde{z}_{ji}\, +\, \mu \tilde{z}_{ij} z_{ji}
\, + \, \nu \tilde{z}_{ij} \tilde{z}_{ji}.
\end{equation*}
Let $\ell$ be the edge adjacent to $j$ on the path from $i$ to $j$.
Then $z_{ij} = z_{\ell j}$ and
\begin{equation*} 0 \,=\, z_{ji} \phi(z_{\ell j} z_{j \ell}) \,= \,
\phi(z_{ij}z_{ji}) z_{j \ell}
\,=\, 0 + \mu \tilde z_{ij} z_{ji} z_{j \ell} + \nu \tilde z_{ij} \tilde
z_{ji} z_{j \ell} \quad\, \hbox{modulo $I$.} \end{equation*}
This implies $\mu=\nu=0$ and, by symmetry, $\lambda=0$.
We conclude that $\phi=0$, so our maps $\alpha_{k \ell}$ and
$\beta_v$ form a basis for the tangent space of $H_{2,n}$ at $I$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:YASOT}]
We already saw that the monomial ideals on $H_{2,n}$
are in bijection with trees $T$ with $n{+}1$ unlabeled
vertices and directed edges that are labeled
with $\{1,\ldots,n\}$. To show that there are $2^n (n{+}1)^{n-2}$
monomial ideals, it suffices to show there are $(n{+}1)^{n-2}$
edge-labeled trees on $n{+}1$ vertices. Picking an arbitrary node as the root and
shifting each edge label to the endpoint of that edge farther from the root gives a rooted,
node-labeled tree, of which there are $(n{+}1)^{n-1}$. From the rooted tree, we
can uniquely recover the edge-labeled tree and the choice of the
root, so there are $(n{+}1)^{n-2}$ edge-labeled trees.
We now need to identify
all ideals $I$ in $H_{2,n}$ that possess precisely
two initial monomial ideals.
We already saw two classes of such ideals in the proof of
Proposition \ref{prop:tangentspace}. First, there was the ideal with generator
$z_{k \ell} z_{\ell k} - \epsilon \tilde{z}_{k \ell} \tilde{z}_{\ell k}$
which realizes the deformation $\beta_v$ and swap \# 2 in the statement of
Theorem \ref{thm:YASOT}. We also exhibited an ideal for the deformation $\alpha_{k \ell}$
which realizes the move \# 1 when the subset of subtrees is a singleton.
The general case is subsumed by the following argument.
In light of Theorem \ref{thm:radical} and Proposition \ref{prop:grothen},
Gr\"obner degenerations of ideals $I$ in $H_{2,n}$
correspond to scheme-theoretic limits of the trees
$Y$ with respect to one-parameter subgroups of the
$(K^*)^n$-action on $(\mathbb{P}^1)^n$.
Let $Y$ be a tree on $H_{2,n}$ that has
precisely two degenerations to $(K^*)^n$-fixed trees.
There are two cases to be considered. First suppose that
some component $Y_i$ of $Y$ is labeled by a
subset $F_i \subseteq \{1,\ldots,n\}$ with $|F_i| \geq 2$.
The component $Y_i$ admits $|F_i|!$ distinct degenerations
to a $(K^*)^n$-fixed tree, and, by Proposition \ref{prop:construction-ideals-h2n},
each of these lifts to a degeneration of $Y$. This implies that
the tree $Y$ has $n-1$ edges, and the unique non-singleton label
$F_i$ has cardinality two. Moreover, each intersection point
$Y_j \cap Y_k$ is a torus-fixed point on both $Y_j$ and $Y_k$.
This is precisely the situation in swap \# 2 above.
In the second case to be considered, the
tree $Y$ consists of the $n$ lines $Y_1,\ldots,Y_n$,
each labeled by a singleton. Consider all components $Y_i$ with
intersection points that are not torus-fixed.
For each such component $Y_i$ there exist two
torus degenerations of $Y$ that move the intersection
points on $Y_i$ to the two torus-fixed points on $Y_i$.
Under these degenerations,
the intersection points $Y_j \cap Y_k$ with $i \not\in \{j,k\}$
remain in their positions on $Y_j$ and $Y_k$.
Hence, only one component
$Y_i$ has intersection points that are not torus-fixed.
This is precisely the situation in move \# 1 in
Theorem \ref{thm:YASOT}.
\end{proof}
\begin{example} \label{example:twothree} ($n=3$) \
The Hilbert scheme $H_{2,3}$ has $32$ monomial ideals, corresponding to the
eight
orientations on the claw tree and to the eight orientations on each of the
three labeled bivalent trees. Representatives for the two classes
of trees are shown in Figure \ref{fig:claw-chain}.
The eight orientations of the claw tree can be
arranged into the vertices of a cube. Each edge in the cube is an edge in the
graph corresponding to moving two edges at a time between vertices. Along each
edge, add two vertices corresponding to bivalent trees and four edges from each
of these to each adjacent vertex of the cube. In addition to these operations
corresponding to move \# 1, there are 24~edges corresponding to swap \# 2.
These are arranged into four
hexagons.
The six-dimensional manifold $H_{2,3}$ coincides
with Lafforgue's compactification $X_{2,3}$ in Remark \ref{remark:Lafforgue}.
Here, (\ref{lafhook}) amounts to an embedding
of $\,H_{2,3} \,$ into $\,\mathbb{P}^3 \times \mathbb{P}^3 \times \mathbb{P}^3$.
The equations for this embedding are as follows.
Each $\mathbb{P}^3$ parametrizes one of the three generators
$\, a_{ij} x_i x_j + b_{ij} x_i y_j + c_{ij} y_i x_j + d_{ij} y_i y_j $,
$1 \leq i < j \leq 3$, of an ideal in $H_{2,3}$.
Being a point in $H_{2,3}$ means that these ideal generators
admit two linearly independent
syzygies in degree $(1,1,1)$. This happens if and only if
the following $6 \times 8$-matrix has rank at most four:
$$ \begin{pmatrix}
a_{12} & 0 & b_{12} & 0 & c_{12} & 0 & d_{12} & 0 \\
0 & a_{12} & 0 & b_{12} & 0 & c_{12} & 0 & d_{12} \\
a_{13} & b_{13} & 0 & 0 & c_{13} & d_{13} & 0 & 0 \\
0 & 0 & a_{13} & b_{13} & 0 & 0 & c_{13} & d_{13} \\
a_{23} & b_{23} & c_{23} & d_{23} & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & a_{23} & b_{23} & c_{23} & d_{23}
\end{pmatrix}
$$
By saturating its ideal of $5 \times 5$-minors
with respect to the irrelevant ideal
$\,\bigcap_{1 \leq i < j \leq 3}
\langle a_{ij}, b_{ij}, c_{ij}, d_{ij} \rangle $, we find
that the prime ideal of $X_{2,3} {\subset} (\mathbb{P}^3)^3$ is
generated by nine cubics such as
$ a_{12} a_{13} d_{23}
- a_{12} b_{13} c_{23}
- b_{12} a_{13} b_{23}
+ b_{12} b_{13} a_{23}$.
\qed
\end{example}
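At the distinguished point $I_2(X)$, whose generators $x_i y_j - y_i x_j$ correspond to $(a_{ij}, b_{ij}, c_{ij}, d_{ij}) = (0, 1, -1, 0)$, the displayed matrix attains rank exactly four. A quick exact-arithmetic check (illustration only):

```python
from fractions import Fraction

def rank(rows):
    # Gaussian elimination over the rationals
    M = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                factor = M[i][c] / M[r][c]
                M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def syzygy_matrix(g):
    # g[(i,j)] = (a_ij, b_ij, c_ij, d_ij); rows as displayed in the text
    (a12, b12, c12, d12) = g[(1, 2)]
    (a13, b13, c13, d13) = g[(1, 3)]
    (a23, b23, c23, d23) = g[(2, 3)]
    return [[a12, 0, b12, 0, c12, 0, d12, 0],
            [0, a12, 0, b12, 0, c12, 0, d12],
            [a13, b13, 0, 0, c13, d13, 0, 0],
            [0, 0, a13, b13, 0, 0, c13, d13],
            [a23, b23, c23, d23, 0, 0, 0, 0],
            [0, 0, 0, 0, a23, b23, c23, d23]]

# the diagonal ideal I_2(X): generators x_i y_j - y_i x_j
diag = {pair: (0, 1, -1, 0) for pair in [(1, 2), (1, 3), (2, 3)]}
assert rank(syzygy_matrix(diag)) == 4
```

Rank four for this $6 \times 8$ matrix means a two-dimensional left kernel, matching the two linearly independent syzygies in degree $(1,1,1)$.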
\section{Three projective planes} \label{sec:three-planes}
In this section we study the smallest case where $H_{d,n}$ is reducible,
namely, $n=d=3$.
The multigraded Hilbert scheme $H_{3,3}$
parametrizes degenerations of
the projective plane in its diagonal embedding
$\mathbb{P}^2 \hookrightarrow \mathbb{P}^2 \times \mathbb{P}^2 \times \mathbb{P}^2$.
We use the following notation for the
unknowns $x_{ij}$ in the polynomial ring $K[X]$.
\begin{equation*}
X \quad = \quad
\begin{bmatrix}
x_1 & x_2 & x_3 \\
y_1 & y_2 & y_3 \\
z_1 & z_2 & z_3
\end{bmatrix}
\end{equation*}
\begin{thm} \label{thm:three-planes}
The multigraded Hilbert scheme $H_{3,3}$ is the reduced union of
seven irreducible components,
each of which contains a dense
${\rm PGL}(3)^3$ orbit:
\begin{itemize}
\item The $16$-dimensional main component
$\,\overline{{\rm PGL}(3)^3/{\rm PGL}(3)}\,$ is singular.
\item Three $14$-dimensional smooth components are permuted under the
$S_3$-action on $(\mathbb{P}^2)^3$. At a generic point, the subscheme of
$(\mathbb{P}^2)^3$ is the union of the blow-up of $\mathbb{P}^2$ at a point, two copies of
$\mathbb{P}^2$, and $\mathbb{P}^1 \times \mathbb{P}^1$. An ideal which represents
such a point on this component is
\begin{multline} \label{eqn:ideal-extra-14}
\langle \,x_1, x_2, y_1 z_2 - z_1 y_2, y_1y_3-z_1x_3, y_2y_3-z_2x_3
\, \rangle \,\,\cap \\
\langle x_1, y_1, x_3, y_3\rangle \cap \langle x_1, x_2, x_3, y_3 \rangle \cap
\langle x_2, y_2, x_3, y_3 \rangle.
\end{multline}
\item Three $13$-dimensional smooth components
are permuted under
the $S_3$-action on $(\mathbb{P}^2)^3$. A generic point looks like the union of three copies
of $\mathbb{P}^2$
and $\mathbb{P}^2$ blown up at three points.
A representative ideal is
\begin{equation*} \label{eqn:ideal-extra-13}
\langle x_1, y_1, x_2, z_2 \rangle \cap \langle x_1, y_1, x_3, y_3 \rangle
\cap \langle x_2, y_2, x_3, y_3 \rangle \cap
\langle x_1, x_2, x_3, y_1 y_2 z_3 - z_1 z_2 y_3 \rangle.
\end{equation*}
\end{itemize}
\end{thm}
With some additional notation, we can describe the isomorphism types of
the six extra components. Let $\operatorname{Fl}$ denote the variety of
complete flags in $K^3$ and $\mathcal O_i$ the tautological bundle of
$i$-dimensional vector spaces for $i=1$ or~$2$. Then the second class of
components are
isomorphic to the bundle
\begin{equation} \label{eqn:extra-14-bundle}
\mathcal H_{2,3} \big(\mathbb{P}^2 \rightarrow \mathbb{P}(\mathcal O_2)
\times \mathbb{P}(\mathcal O_2)
\times \mathbb{P}(\mathcal O^{\oplus 3} / \mathcal O(-1))\big)
\,\rightarrow \,\operatorname{Fl} \times \operatorname{Fl} \times \mathbb{P}^2
\end{equation}
where $\mathcal H_{2,3}$ is a bundle whose fibers are each isomorphic to
the Hilbert scheme $H_{2,3}$.
A point in this bundle
is equivalent to a point $x$ in $\operatorname{Fl} \times \operatorname{Fl} \times \mathbb{P}^2$,
together with an ideal in the total coordinate ring of $\mathbb{P}((\mathcal O_2)_x)
\times \mathbb{P}((\mathcal O_2)_x) \times \mathbb{P}((\mathcal O^{\oplus 3}/\mathcal
O(-1))_x)$ with the appropriate Hilbert function~(\ref{eqn:ourHF}).
To relate this formulation to the ideal in~(\ref{eqn:ideal-extra-14}), we
identify linear forms in $K[X]$ with the direct sum of three vector
spaces, each of dimension~$3$, and $\{x_i, y_i, z_i\}$ as choice of basis for
the $i$th summand.
The two flag varieties parametrize the duals of the flags $\langle x_i
\rangle \subset \langle x_i, y_i\rangle$ for $i = 1,2$. The projective space
parametrizes the point whose ideal is $\langle x_3, y_3\rangle$. These
spaces determine all the linear generators in~(\ref{eqn:ideal-extra-14}). The
additional generators in the first component of the intersection
represent a point in $H_{2,3}$, but without a
canonical choice of basis.
The third class of components are isomorphic to the projective bundle:
\begin{equation*}
\mathbb{P}(\mathcal E) \,\rightarrow\, \operatorname{Fl} \times B \times \operatorname{Fl}
\end{equation*}
where $\operatorname{Fl}$ is as before and $B$ is the blow-up of $\mathbb{P}^2 \times \mathbb{P}^2$ along
the diagonal.
We think of the blow-up variety $B$
as the parameter space of two points in
$\mathbb{P}^2$
and a line containing them. The $4$-dimensional vector bundle
$\mathcal E$ is the sum
\begin{equation} \label{eqn:bundle-13}
(\mathcal O_1 \otimes \mathcal O_1 \otimes \mathcal O_2) + (\mathcal O_1 \otimes \mathcal O_2
\otimes \mathcal O_1) + (\mathcal O_2 \otimes \mathcal O_1' \otimes \mathcal O_1)
\end{equation}
inside $\mathcal O_{\operatorname{Fl}}^{\oplus 3} \otimes \mathcal O_{B}^{\oplus 3} \otimes
\mathcal O_{\operatorname{Fl}}^{\oplus 3}$, where $\mathcal O_1$ and $\mathcal O_1'$ are the pullbacks to $B$ of
$\mathcal O(-1)$ on each of the copies of $\mathbb{P}^2$, which parametrize the two points.
The flag varieties parametrize the duals of $\langle x_i
\rangle \subset \langle x_i, y_i\rangle$ for $i = 1, 3$ and $B$ parametrizes the
two points defined by $\langle x_2, y_2\rangle$ and $\langle x_2, z_2\rangle$,
with $\langle x_2 \rangle$ as the line between them.
As before, these vector spaces determine the linear generators of the
components. The bundle $ \mathbb{P}(\mathcal E)$
parametrizes the coefficients of the cubic
generator of the ideal of the blowup of $\mathbb{P}^2$ at two points. This ideal
equals
\begin{equation}
\label{eqn:13-cubic}
\langle \,x_1, x_2, x_3, \,
a y_1 y_2 z_3 + b y_1 y_2 y_3 + c y_1 z_2 y_3 + d z_1 z_2 y_3 \,\rangle
\end{equation}
Note that the middle two terms are linearly independent even when $y_2$ and
$z_2$ coincide.
For generic coordinates, after a change of basis, we can take $b$ and
$c$ to be zero, and after rescaling, we take $a=d=1$. Thus, the
${\rm PGL}(3)^3$ orbit of the ideal~(\ref{eqn:ideal-extra-13}) is dense in the
$13$-dimensional component of $H_{3,3}$.
\begin{proof}[Proof of Theorem~\ref{thm:three-planes}]
The proof is computational. It rests on the
{\tt Singular} code posted at
\url{http://math.berkeley.edu/~dustin/diagonal/h33.sng}.
The computation works regardless
of the characteristic of the field $K$.
It suffices to consider an affine neighborhood
of the unique Borel-fixed ideal $Z$ in $H_{3,3}$.
We employ the
standard method of choosing coordinates by adding
trailing terms with indeterminate coefficients
to the ten monomial generators of $Z$. The ideal defining $H_{3,3}$ is
then derived from the syzygies of $Z$.
We then derive the prime ideals representing each of the
seven components, by translating
the geometric descriptions above
into local coordinates around $Z$. Implicitization
using {\tt Singular} yields the seven prime ideals, and we
check that their intersection equals the
ideal of the Hilbert scheme itself.
We now explain how the parametric representations of the seven components
are derived.
The main component is, by definition, parametrized by the
${\rm PGL}(3)^3$ orbit of
the ideal of $2\times 2$ minors of $X$.
The other components can also be parametrized by ${\rm PGL}(3)^3$
orbits of the representative ideals, but it is also possible
-- and computationally more efficient --
to use parametrizations which do not require localization.
For the $14$-dimensional components, we begin by using local coordinates in
$H_{2,3}$ to define a family of subschemes of $(\mathbb{P}^1)^3$ over $\mathbb A^6$.
Renaming some
of the variables and adding two linear terms, we get the parametrization of the
blow-up of $\mathbb P^2$ and its degenerations in $(\mathbb P^2)^3$, i.e.\ a
neighborhood of a fiber of (\ref{eqn:extra-14-bundle}).
Intersecting with the ideals of linear spaces gives the family in $K[X]$ with
the appropriate Hilbert function. Different choices of
flags can be represented by upper triangular changes of coordinates on each of
the three columns of $X$.
The parametrization of the $13$-dimensional component follows the same
pattern. In a neighborhood of $Z$, up to change of basis, we can take
the two points parametrized by $B$ to be $\langle x_2, y_2 \rangle$ and
$\langle x_2, y_2 - az_3 \rangle$ with $\langle x_2 \rangle$ as the line
between them. In addition to $a$, the other coordinates are the entries
of the upper triangular matrix corresponding to the choice of flag and
to the coefficients of the cubic generator in (\ref{eqn:13-cubic}).
Implicitizing these parametrizations reveals the prime ideals for these seven
components, and their intersection
is found to equal the ideal
of the Hilbert scheme itself.
\end{proof}
\begin{table}
\begin{centering}
\begin{tabular}{|l|c|c|c|c|c|c|}
\hline
& & & \multicolumn{3}{c|}{component} & \\
& T. sp. & planar? & main? & 14-dim.? & 13-dim.? & symm. \\
\hline
1 & 16 & y & y & n & n & 2 \\
2 & 16 & y & y & n & n & 1 \\
3 & 16 & y & y & n & n & 1 \\
4 & 18 & y & y & n & n & 6 \\
5 & 16 & y & y & n & n & 3 \\
6 & 14 & y & n & y & n & 2 \\
\hline
7 & 15 & y & n & y & y & 1 \\
8 & 16 & n & y & n & n & 1 \\
9 & 17 & n & y & y & n & 1 \\
10 & 18 & n & y & n & n & 2 \\
11 & 17 & n & y & y & n & 1 \\
12 & 14 & n & n & y & n & 2 \\
\hline
13 & 18 & n & y & y & y & 2 \\
14 & 18 & n & y & y & n & 2 \\
15 & 18 & n & y & y & y & 1 \\
\hline
16 & 18 & n & y & y & y & 6 \\
\hline
\end{tabular}
\par
\end{centering}
\caption{The symmetry classes of monomial ideals in $H_{3,3}$}
\label{tbl:monomials}
\end{table}
In Table~\ref{tbl:monomials} we show that
$H_{3,3}$ contains $13824$ monomial ideals.
They come in $16$ symmetry classes, and we list them
in four groups, corresponding to the dimension
($12$, $11$, $10$, and~$9$) of the orbit under the action of
${\rm PGL}(3)^3$. The $16$ monomial ideals appear in the same order
as their pictorial representation in Figure~\ref{fig:monomial-poset}.
The second column gives the tangent space dimension of $H_{3,3}$ at that
point, and the third column indicates whether or not the picture is planar.
The triple column shows which components the monomial
ideals live on. The rightmost column shows the order of the
symmetry group of the ideal. Note that the permutation group acting
on $H_{3,3}$ has order $6^4 = 1296$: it permutes the three factors
of $\mathbb{P}^2 {\times} \mathbb{P}^2 {\times} \mathbb{P}^2$ as well as the three
coordinates $\{x_i,y_i,z_i\}$ of each projective plane.
The total number $13824$ of monomial ideals on
$H_{3,3}$ equals $1296$ times the sum of the reciprocals in the last
column of Table~\ref{tbl:monomials}.
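This orbit-counting bookkeeping is easy to verify directly. The sketch below takes the symmetry orders from the last column of Table~\ref{tbl:monomials} and checks that $1296$ times the sum of their reciprocals gives the total count:

```python
from fractions import Fraction

# Orders of the symmetry groups, read off the last column of the table,
# for the 16 symmetry classes in order.
sym_orders = [2, 1, 1, 6, 3, 2, 1, 1, 1, 2, 1, 2, 2, 2, 1, 6]

# Orbit-counting: |group| * sum of 1/|stabilizer| over the classes.
total = 1296 * sum(Fraction(1, s) for s in sym_orders)
assert total == 13824
```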
Every monomial ideal in $H_{3,3}$ corresponds to a polyhedral
complex in the boundary of the direct product
of three triangles, denoted $(\Delta_2)^3$. Using the moment
map of toric geometry, each such polyhedral complex can be
identified with the real positive points of the
corresponding subscheme of $(\mathbb
P^2)^3$. The following conditions characterize those subcomplexes of $(\Delta_2)^3$
whose corresponding monomial ideal has the right multigraded Hilbert function:
\begin{itemize}
\item For every vector of non-negative
integers $(t_1, t_2, t_3)$ summing to $2$, there is exactly one $2$-dimensional
cell consisting of the product of a $t_1$-dimensional cell of $\Delta_2$, a
$t_2$-dimensional cell, and a $t_3$-dimensional cell.
\item The complex contains exactly ten $0$-cells, fifteen $1$-cells,
and six $2$-cells.
\end{itemize}
This characterization is sufficient to show that Table~\ref{tbl:monomials} is
complete. By the condition on the number of $1$-cells, each triangle must meet
at least two of the squares. Thus, each pair of squares must be adjacent or be
connected by a triangle, and so all three squares must meet in a common point.
There are four possible configurations of the squares: either $0$, $1$, $2$,
or~$3$ edges common to multiple squares. The monomial ideals can be enumerated
by considering all possible ways to attach the additional triangles to these
configurations.
The monomial ideals form a poset based on containment within the closures of
their orbits, illustrated in Figure~\ref{fig:monomial-poset}.
Each ideal is drawn as a $2$-dimensional subcomplex of
$(\Delta_2)^3$. The subcomplex is drawn abstractly,
but the embedding amounts to a choice of labellings. The bold lines indicate
that an additional triangle is attached along that edge.
By orbits, we mean orbits under the disconnected group which is
generated by multiplying the first column by an arbitrary matrix and by the
discrete action of permuting the columns. The number on the lower
right is the dimension of the tangent space of $H_{3,3}$ at that
monomial ideal. The ranking is by the dimension of (every component of) the
orbit of the monomial ideal: 9, 10, 11, or~12.
The maximal elements of the poset in Figure~\ref{fig:monomial-poset}
correspond to the ``planar''
complexes, i.e.\ those such that no edge contains more than two 2-cells. By
fixing isomorphisms between the three copies of $\Delta_2$, we get a
projection from $(\Delta_2)^3$ onto $3\Delta_2$.
In the case of ideals 1, 2, 3, 4, and~5 the corresponding subcomplexes
project to tilings of $3\Delta_2$. In these cases, there is
a monomial ideal in the orbit which is in the toric Hilbert scheme of
$\Delta_2\times \Delta_2$, and the corresponding triangulation of
$\Delta_2\times \Delta_2$
is related to the tiling of $3\Delta_2$ by the Cayley trick.
The
tilings in~\cite[Figure~5]{Santos} correspond to the monomial ideals
1, 3, 4, 5, and~2 in this order. Each triangulation is regular,
and the ideals are smoothable, even in
the toric Hilbert scheme. Here, the toric Hilbert scheme is the
subscheme of $H_{3,3}$ obtained by fixing the
Hilbert function of $I_2(X)$ with respect to the finer grading
given by both row degrees and column degrees.
For details, references and further information see
\cite[\S 2]{HS} and \cite[Theorem~2]{Santos}.
\begin{figure}
\begin{centering}
\includegraphics{h33mon.eps}
\par\end{centering}
\caption{Partial ordering of the monomial ideals on $H_{3,3}$}
\label{fig:monomial-poset}
\end{figure}
\section{Deligne schemes and their special fibers}
The original motivation for this project was a discussion
with Annette Werner about tropical convexity and its connection to affine buildings
and moduli of hyperplane arrangements as developed by
Keel and Tevelev \cite{KT}. Our aim was to understand
the Deligne schemes of \cite[\S 1]{KT} and their special fibers in
the concrete language of combinatorial commutative algebra.
We found that the multigraded Hilbert scheme $H_{d,n}$
offers a suitable framework for studying Deligne schemes
and their arithmetic. In this section we briefly discuss the set-up
and the connection to the combinatorial results in \cite{BY, JSY}.
We plan to pursue this further in a joint project with Annette Werner.
Let $K$ be an algebraically closed field with a non-trivial non-archimedean
absolute value, let $k$ be the residue field of $K$,
and $R$ the valuation ring of $K$.
In computational studies (such as~\cite{JSY}) we usually relax
the requirement that $K$ be algebraically closed,
and we work with the Gr\"obner-friendly
scenario $\, K = \mathbb{Q}(z)$, $R = \mathbb{Q}[z]$ and $k= \mathbb{Q}$.
Let $\mathcal{B}$ denote the Bruhat-Tits building associated
with the group ${\rm PGL}(d)$ over $K$ as defined in \cite{JSY, KT}.
The building~$\mathcal{B}$ is an infinite simplicial complex of dimension
$d-1$ whose vertices are the equivalence classes of
$R$-submodules of $K^d$ having maximal rank $d$.
Let $Y = \{Y_1,\ldots,Y_n\}$ be a finite set of vertices of the
affine building~$\mathcal{B}$. Following \cite[Definition 1.8]{KT},
we let $\mathbb{S}_Y$ denote the corresponding join of projective spaces
over $R$, and we write $S_Y$ for its special fiber over $k$.
In the special case when $Y$ is a convex subset of~$\mathcal{B}$,
a classical result of Mustafin~\cite{Mus} states that $\mathbb{S}_Y$
is semi-stable over $R$, which implies that
$S_Y$ has smooth irreducible components with normal crossings.
In this section we allow $Y$ to be any finite set of vertices -- not
necessarily convex -- of the building~$\mathcal{B}$.
Following \cite[1.10]{KT}, we shall call $\mathbb{S}_Y$ the
{\em Deligne scheme} of the subset~$Y \subset \mathcal{B}$.
We now describe the Deligne scheme $\,\mathbb{S}_Y$
and its special fiber $S_Y$ in concrete terms.
The configuration $Y$ is represented by $(Y_1, Y_2, \ldots, Y_n)$, an
$n$-tuple of invertible $d \times d$-matrices
with entries in the field $K$. This data is the input
for the algorithm of \cite{JSY} which computes the
convex hull of $Y$ in $ \mathcal{B}$.
Let $I_2(X)$ be the ideal of $2 {\times} 2$-minors
of a $d {\times} n$-matrix of unknowns.
We consider the transformed ideal $Y \circ I_2(X)$ in $K[X]$,
and we intersect it with $R[X]$:
\begin{equation}
\label{GetSpecFib}
\mathbb{I}_Y \,\,\, = \,\,\, (Y \circ I_2(X)) \,\cap \, R[X] .
\end{equation}
We call $\mathbb{I}_Y \subset R[X]$ the {\em Deligne ideal} of the
point configuration $\,Y \subset \mathcal{B}$.
The image of $\mathbb{I}_Y$ under the specialization
$R \rightarrow k$ is denoted $I_Y \subset k[X]$. We
call $I_Y$ the {\em special fiber ideal} of $Y$.
This nomenclature is justified as follows.
\begin{remark}
\label{rem:deligne}
The Deligne scheme $\,\mathbb{S}_Y$ coincides with
the subscheme of
$\,(\mathbb{P}^{d-1}_R)^n \,$ defined by the Deligne ideal
$\mathbb{I}_Y$, and its special fiber
$S_Y$ coincides with the subscheme of $(\mathbb{P}^{d-1}_k)^n$
defined by the special fiber ideal $I_Y$.
\end{remark}
Remark \ref{rem:deligne} implies that the Deligne scheme $\mathbb{S}_Y$ is
a point in the multigraded Hilbert scheme $H_{d,n}(R)$ over the valuation ring $R$,
and its special fiber $S_Y$ is a point in $H_{d,n}(k)$ over its residue field $k$.
Thus our study in Sections~2 to~5 is relevant for Deligne schemes.
In particular, Theorem \ref{thm:radical} implies:
\begin{cor}
The special fiber $\,S_Y$ of the Deligne scheme $\,\mathbb{S}_Y$ is reduced.
\end{cor}
The formula (\ref{GetSpecFib}) translates into the following Gr\"obner-based
algorithm for computing Deligne schemes and their special fibers
when $K = \mathbb{Q}(z), R = \mathbb{Q}[z], k = \mathbb{Q}$.
The input data is an $n$-tuple $Y(z)$ of
invertible $d \times d$-matrices whose entries are
rational functions in one variable $z$.
The Deligne ideal $\mathbb{I}_{Y(z)}$ is an ideal
in the polynomial ring $\mathbb{Q}[z,X]$. It is computed as follows.
We replace the $j$-th column of the matrix $X = (x_{ij})$
by its image under left multiplication by the $j$-th input matrix $Y_j(z)$.
We then form the $2 \times 2$-minors of the resulting matrix
and multiply each generator by a power of $z$ to clear denominators,
obtaining an ideal~$L$ in $\mathbb{Q}[z,X]$. The Deligne ideal is then
obtained by saturation as follows:
\begin{equation}
\label{sat1}
\mathbb{I}_{Y(z)} \,\, = \,\, \bigl( L : \langle z \rangle^\infty \bigr).
\end{equation}
The special fiber ideal is obtained by setting $z$ to zero in the Deligne ideal:
\begin{equation}
\label{sat2}
I_{Y(z)} \,\, = \,\, \mathbb{I}_{Y(z)}|_{z=0}.
\end{equation}
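The saturation in (\ref{sat1}) can be carried out with any Gr\"obner engine. The following {\tt sympy} sketch (an illustration, not the implementation used in this paper) runs it on a small hypothetical example with $d = 2$, $n = 3$ and diagonal matrices $Y_j(z) = {\rm diag}(1, z^{j-1})$, computing $(L : \langle z \rangle^\infty)$ via the standard elimination trick $(L + \langle 1 - tz \rangle) \cap \mathbb{Q}[z, X]$:

```python
import sympy as sp

t, z = sp.symbols('t z')
x1, x2, x3, y1, y2, y3 = sp.symbols('x1 x2 x3 y1 y2 y3')
gens = (z, x1, x2, x3, y1, y2, y3)

# d = 2, n = 3: column j of X is left-multiplied by Y_j(z) = diag(1, z^(j-1)).
cols = [(x1, y1), (x2, z * y2), (x3, z**2 * y3)]

# 2x2 minors of the transformed matrix generate L in Q[z, X].
minors = [sp.expand(cols[i][0] * cols[j][1] - cols[i][1] * cols[j][0])
          for i in range(3) for j in range(i + 1, 3)]

# (L : z^infinity) = (L + <1 - t*z>) intersect Q[z, X],
# computed by eliminating t with a lex Groebner basis (t largest).
G = sp.groebner(minors + [1 - t * z], t, *gens, order='lex')
sat = [g for g in G.exprs if not g.has(t)]

# The factor of z in the minor of columns 2 and 3 has been divided out:
extra = sp.expand(z * x2 * y3 - x3 * y2)
_, rem = sp.reduced(extra, sat, *gens, order='lex')
assert rem == 0

# Special fiber ideal (sat2): set z = 0 and discard zero generators.
fiber = [g.subs(z, 0) for g in sat]
fiber = [g for g in fiber if g != 0]
```

Setting $z = 0$ in the three transformed minors alone would give only monomials such as $x_2 y_1$; the saturation step contributes the extra binomial generators that make the special fiber flat over the base.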
This algorithm generalizes the construction
of Block and Yu \cite{BY} which concerns the special
case when the configuration $Y(z)$ lies in one apartment
of the Bruhat-Tits building~$\mathcal{B}$.
Algebraically, this means that the $Y_j(z)$ are diagonal matrices,
and, geometrically, the apartment of $\mathcal{B}$ is identified
with the standard triangulation of tropical projective space $\mathbb{T}{\mathbb{P}}^{d-1}$.
If the diagonal entries of the $Y_j(z)$ are monomials with
sufficiently generic exponents, then
(\ref{sat1})-(\ref{sat2}) simply amounts to a
Gr\"obner basis computation for the ideal~$I_2(X)$.
Block and Yu \cite{BY} considered the Alexander dual
of the initial monomial ideal of $I_2(X)$, and they showed
that the minimal free resolution of that Alexander dual is
cellular. It represents the convex hull of $Y(z)$ in
$\mathbb{T}{\mathbb{P}}^{d-1}$, and hence in $ \mathcal{B}$.
We conjecture that this method can be adapted to
compute the convex hull of $Y(z)$ even if
the matrices $Y_j(z)$ do not lie in a common apartment of $\mathcal{B}$.
We note that the algorithm (\ref{sat1})-(\ref{sat2}) is not very practical for computing
the special fiber ideal $I_{Y(z)}$. We found that running the saturation step
(\ref{sat1}) in a naive manner in {\tt Macaulay 2} is too slow for interesting values of $d$ and $n$.
Instead, we propose the following approach. If the
matrices $Y_j(z)$ are ``sufficiently generic,''
then we replace each $Y_j(z)$ by a ``nearby''
matrix of the special form
\begin{equation}
\label{approx1}
\,\,Y_j(z) \quad \approx \quad {\rm diag}(z^{w_{1j}}, \ldots, z^{w_{dj}}) \cdot A_j\, .
\end{equation}
Here the weights $w_{ij}$ are generic rational numbers that
specify a term order~$>$ on the polynomial ring $\mathbb{Q}[X]$, and
$A = (A_1,\ldots, A_n)$ is a tuple of invertible matrices over $\mathbb{Q}$.
The precise meaning of the approximation (\ref{approx1}) is that
\begin{equation}
\label{approx2}
I_{Y(z)}\,\, = \,\, {\rm in}_>(A \circ I_2(X)).
\end{equation}
When this holds
the special fiber $S_{Y(z)}$ of the Deligne scheme
$\mathbb{S}_{Y(z)}$ is given by
a squarefree monomial ideal $I_{Y(z)}$.
The point is that the right hand side of
(\ref{approx2}) can be computed much faster in practice than evaluating
(\ref{sat1}). It amounts to computing a Gr\"obner basis
of the transformed ideal $A \circ I_2(X)$ in the polynomial ring $\mathbb{Q}[X]$.
When the $A_i$ are diagonal matrices over $\mathbb{Q}$ this
is precisely the algorithm of Block and Yu \cite{BY} for convex hulls in $\mathbb{T}{\mathbb{P}}^{d-1}$.
\bigskip
\noindent {\bf Acknowledgements.}
Both authors were supported by the U.S.~National
Science Foundation (DMS-0354321, DMS-0456960 and DMS-0757236).
We thank Michel Brion, Aldo Conca and
Annette Werner for helpful comments.
\section{Introduction}
The importance of the Arctic ice cover in the climate system stems primarily from its influence on the Earth's radiation budget, which it exerts through its relatively large albedo \citep{OneWatt}. Despite its importance, accurately predicting the spatio-temporal evolution of the ice cover, subject to prescribed forcings, still remains challenging. One of the principal challenges associated with making this prediction is the dynamics of the ice cover \citep{OneWatt, Rothrock:1975, rampal2011}.
The ice cover is not continuous, but is made up of a very large number of floes of different shapes, sizes, and thicknesses \citep{Thorndike1975, Rothrock:1984}. It can be inferred from field and satellite observations that sea ice behaves differently at different length scales: At the scale of an individual floe it moves like a deformable solid body, but at basin-wide scales it moves like a highly viscous liquid. This suggests that the following two approaches can be used to study its motion (see Solomon \citep{solomon1970} for a more general discussion):
\begin{enumerate}
\item In the first approach, the motion of individual ice floes is studied by solving Newton's equations for each floe. From these solutions, one then extracts statistical information \citep{rampal2009, agarwal2017} that is used to describe the motion of the ice cover at much larger length scales.
\item And in the second, one takes the ice cover to be a continuum and develops rheological models to relate the internal stress field to the other macroscopic variables, including the thickness distribution of ice \citep{Thorndike1975, TW2015, TW2017}. This is then used in the Cauchy equation, with appropriate boundary conditions, to solve for the velocity field.
\end{enumerate}
A principal aim of both these approaches is to predict the statistical properties of the ice velocity field \citep{thorndike1982, colony1984, colony1985, thorndike1986, rampal2009, agarwal2017}. Both approaches have their advantages and limitations, but the focus since the Arctic Ice Dynamics Joint Experiment (AIDJEX) expedition has been on developing observationally consistent rheological models \citep{Rothrock:1975, Feltham:2008}.
The internal stress field in the ice cover is a consequence of the mechanical interactions between the constituent ice floes \citep{Rothrock:1975}. However, unlike in the kinetic theory of gases, where molecules are assumed to interact only via elastic collisions \citep{Harris}, there are different modes of interaction -- rafting, ridging, shearing, and jostling -- between ice floes \citep{VW08}. This makes it more challenging to develop a Boltzmann-like theory for sea ice velocity, although the development of such a theory has been attempted in the context of Saturn's rings \citep{goldreich1978}, where the last three modes of interaction between ice particles are possible.
The first attempt to include floe-floe interactions into the dynamics of a single ice floe was by Sverdrup \citep{sverdrup1928}. He introduced a frictional force proportional to the floe velocity, but always in the direction opposite to it. However, this formulation is not completely correct as friction can both decelerate and accelerate an object depending on the relative velocity \citep{reed1962, solomon1970}. A more generalized description of the floe-floe interactions was developed by \citet{solomon1970} and \citet{timokhov1970}, who considered deterministic and stochastic drift of ice in one dimension, respectively. The stochasticity in Timokhov's model \citep{timokhov1970} was introduced through the probability for dynamic coagulation to occur when ice floes collided. A drawback of both these models is in the introduction of spatial gradients (of velocity in Solomon's model and of compactness in Timokhov's), which makes it unclear at what length scales the continuum assumption holds in these models. In more recent work, the ice cover has been treated as a two-dimensional granular gas to study the emergence of the internal stress from floe-floe collisions \citep{shen1987}, and flow \citep{feltham2005} and clustering \citep{herman2011} of ice floes in marginal ice zones.
The collective motion of ice floes on length scales much larger than the floe size can be approximated as that of a continuum \citep{Thorndike1975}. In this case, the equations that describe the evolution of the ice cover are the mass balance and Cauchy equations. The principal challenge associated with this approach has been in determining the constitutive equation for the internal stress \citep{Rothrock:1975}. Observations made during the AIDJEX project motivated the development of the elastic-plastic rheological model of sea ice \citep{Coon1974}. Subsequently, other rheological models have been proposed to capture various features observed in pack ice \citep{wilchinsky2004, girard2011, tsamados2013, bouillon2015, dansereau2016}. A detailed discussion of the theoretical underpinnings of some of the rheological models can be found in the reviews by \citet{Rothrock:1975} and \citet{Feltham:2008}.
The principal aim of the current work is to develop a stochastic theory of sea ice motion to capture some of the observed statistical properties \citep{rampal2009}. We achieve this by extending Sverdrup's model for the floe-floe interaction by introducing a Coulomb friction term that accounts for both acceleration and deceleration of a single ice floe. Our model is analogous to that of a Brownian particle subjected to viscous and dry frictional forces in an external force field \citep{de2005, hayakawa2005}. Our formulation of the problem permits us to explicitly calculate the probability density functions (PDFs) of the components of the fluctuating velocity, which are then compared with observations \cite{rampal2009}.
\section{The new theory}
We consider the ice floes to be rigid circular discs with thickness $h$ and radius $R$. Focussing on one of these floes, the governing equations for the horizontal motion are:
\begin{equation}
\frac{d \boldsymbol{x}}{dt} = \boldsymbol{v},
\label{eqn:position}
\end{equation}
and
\begin{equation}
\frac{d}{dt} \left(m \, \boldsymbol{v}\right) = \boldsymbol{F_a} + b \, \boldsymbol{\xi}(t) + \boldsymbol{F_o} - 2 \, m \, \Omega \, \boldsymbol{k} \times \boldsymbol{v} - \mathcal{F} \, \boldsymbol{S}(\boldsymbol{v} - \boldsymbol{\left<v\right>}).
\label{eqn:velocity}
\end{equation}
Here, $\boldsymbol{x} = (x,y)$ is the position vector, $m$ is the mass of the ice floe, $\boldsymbol{v} = (u, v)$ is its two-dimensional velocity, $\boldsymbol{F_a}(\boldsymbol{x},t)$ is the mean wind force, $b \, \boldsymbol{\xi(t)}$ represents the fluctuations in the wind forcing with $b$ being the amplitude of the fluctuations and $\boldsymbol{\xi(t)}$ being Gaussian white noise, $\boldsymbol{F_o}(\boldsymbol{x},t)$ represents the ocean drag force, $\Omega$ is the Coriolis frequency and $\boldsymbol{k}$ is the unit vector along the vertical, $\mathcal{F}(C)$ is the threshold value of the Coulomb friction force due to the neighbouring ice floes, $C \left(\in \left[0, 1\right]\right)$ is the constant compactness of the ice cover, and $\boldsymbol{S}$ is a vector given by
\begin{equation}
\boldsymbol{S}(\boldsymbol{v} - \boldsymbol{\left<v\right>}) = \frac{\boldsymbol{v} - \left<\boldsymbol{v}\right>}{|\boldsymbol{v} - \left<\boldsymbol{v}\right>|},
\label{eqn:sign1}
\end{equation}
where $\left<...\right>$ denotes an ensemble average. The function $\boldsymbol{S}$ is a generalization of the sign function for a two-dimensional vector. Introducing the fluctuating velocity as $\boldsymbol{v'} = \boldsymbol{v} - \boldsymbol{\left<v\right>}$, we can write equation \ref{eqn:sign1} as
\begin{equation}
\boldsymbol{S}(\boldsymbol{v'}) = \frac{\boldsymbol{v'}}{|\boldsymbol{v'}|}.
\end{equation}
For simplicity, we have neglected the forces due to horizontal pressure gradient of the atmosphere and gradients in sea surface height; but, they can be included in the model without any difficulty.
In introducing the floe-floe interaction term in equation \ref{eqn:velocity}, we have made the following assumptions: (a) interactions that only involve pushing and/or shearing between ice floes are important; (b) the role of collisions here is to drive the velocity to its mean value, $\left<\boldsymbol{v}\right>$; and (c) the value of the threshold force varies linearly with compactness, i.e., $\mathcal{F}(C) = \mathcal{F}_0 \, C$, where $\mathcal{F}_0$ is a constant. The reasoning for this model is the following. If $N (\gg 1)$ is the total number of ice floes, then any description of a single ice floe requires us to take into account its interactions with its $n (\ll N)$ nearest neighbours. However, the construction of a system of coupled deterministic/stochastic differential equations for these localized interactions would also require us to take into account the interactions of the neighbouring ice floes with their own nearest neighbours. Hence, any description of the dynamics of a subset of the $N$ ice floes leads to a closure problem, which is similar to the closure problem encountered in the kinetic theory of gases \citep{Harris}. Consequently, some assumptions would have to be made to truncate the problem. The mathematical form of the interactions in equation \ref{eqn:velocity}, along with assumption (b) above, represents a mean-field approximation of these interactions. We should also note here that the mathematical form of the floe-floe interactions in equation \ref{eqn:velocity} permits both acceleration and deceleration of the ice floe depending on the sign of the fluctuation. Furthermore, assuming the ocean drag is proportional to the ice velocity \citep{lepparanta2011} and the ocean is at rest, equation \ref{eqn:velocity} becomes
\begin{equation}
\frac{d \boldsymbol{v}}{dt} = \boldsymbol{F_a} + b \, \boldsymbol{\xi}(t) - \beta \, \boldsymbol{v} - 2 \, \Omega \, \boldsymbol{k} \times \boldsymbol{v} - \mathcal{F} \, \boldsymbol{S}(\boldsymbol{v'}),
\label{eqn:velocity2}
\end{equation}
where $m$ has been set to unity without any loss of generality and $\beta$ is a constant.
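A minimal numerical sketch of equation \ref{eqn:velocity2} is given below, using the Euler--Maruyama scheme. All parameter values are illustrative assumptions, not calibrated to sea ice, and the ensemble mean $\left<\boldsymbol{v}\right>$ is prescribed here as an input (in the full model it solves equation \ref{eqn:velocity3}). In the deterministic, collision-free limit $b = \mathcal{F} = 0$, the floe velocity relaxes to the free-drift balance obtained by setting $d\boldsymbol{v}/dt = 0$.

```python
import numpy as np

def simulate_floe(F_a, beta, Omega, b, Fc, v_mean, v0,
                  dt=1e-3, n_steps=20_000, rng=None):
    """Euler-Maruyama integration of eq. (velocity2) for a single floe (m = 1).
    F_a: mean wind forcing (2-vector); beta: linear ocean-drag coefficient;
    Omega: Coriolis frequency; b: wind-noise amplitude; Fc: Coulomb threshold
    force; v_mean: prescribed ensemble-mean velocity <v>."""
    if rng is None:
        rng = np.random.default_rng(0)
    v = np.array(v0, dtype=float)
    for _ in range(n_steps):
        vp = v - v_mean                                  # fluctuation v' = v - <v>
        speed = np.hypot(*vp)
        S = vp / speed if speed > 1e-12 else np.zeros(2)  # S(v') = v'/|v'|
        coriolis = 2.0 * Omega * np.array([-v[1], v[0]])  # 2 Omega k x v
        drift = F_a - beta * v - coriolis - Fc * S
        v = v + drift * dt + b * np.sqrt(dt) * rng.standard_normal(2)
    return v

# Deterministic, collision-free limit: relaxation to the free-drift balance
# F_a = beta v + 2 Omega k x v.
F_a, beta, Omega = np.array([1.0, 0.0]), 1.0, 0.25
v_end = simulate_floe(F_a, beta, Omega, b=0.0, Fc=0.0,
                      v_mean=np.zeros(2), v0=np.zeros(2))
v_free = np.linalg.solve([[beta, -2 * Omega], [2 * Omega, beta]], F_a)
assert np.allclose(v_end, v_free, atol=1e-6)
```

The rotation of the free-drift velocity away from the wind direction is the familiar Ekman-type deflection produced by the Coriolis term.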
To obtain an equation for the velocity fluctuations, we first take the mean of equation \ref{eqn:velocity2} giving
\begin{equation}
\frac{d \left<\boldsymbol{v}\right>}{dt} = \boldsymbol{F_a} - \beta \, \left<\boldsymbol{v}\right> - 2 \, \Omega \, \boldsymbol{k} \times \left<\boldsymbol{v}\right>.
\label{eqn:velocity3}
\end{equation}
It is seen that the mean velocity of the ice floe is unaffected by the collisions with other ice floes: these collisions lead to both accelerating and decelerating forces, and an average over the ensemble gives zero net force. Subtracting equation \ref{eqn:velocity3} from equation \ref{eqn:velocity2} and neglecting the effect of the Coriolis force on the fluctuations, we get
\begin{equation}
\frac{d \boldsymbol{v'}}{dt} = - \beta \, \boldsymbol{v'} - \mathcal{F} \, \boldsymbol{S}(\boldsymbol{v'}) + b \, \boldsymbol{\xi}(t),
\label{eqn:fluctuations}
\end{equation}
which is the required equation. The corresponding generalized Fokker-Planck equation -- also called the Kramers-Chandrasekhar equation -- is given by
\begin{equation}
\frac{\partial P}{\partial t} + \left(\left<\boldsymbol{v}\right> + \boldsymbol{v'}\right) \cdot \nabla P = \nabla_{\boldsymbol{v'}} \cdot \{\left[\beta \, \boldsymbol{v'} + \mathcal{F} \, \boldsymbol{S}(\boldsymbol{v'})\right] \, P + D \, \nabla_{\boldsymbol{v'}} P\}.
\end{equation}
Here, $P \equiv P(\boldsymbol{x}, \boldsymbol{v'},t)$ is the PDF for the velocity fluctuations and $D = b^2/2$. Assuming $P$ is spatially homogeneous leads to
\begin{equation}
\frac{\partial P}{\partial t} = \nabla_{\boldsymbol{v'}} \cdot \{\left[\beta \, \boldsymbol{v'} + \mathcal{F} \, \boldsymbol{S}(\boldsymbol{v'})\right] \, P + D \, \nabla_{\boldsymbol{v'}} P\},
\label{eqn:FPE}
\end{equation}
which is the required evolution equation for the PDF of the velocity fluctuations.
\section{Results}
\subsection{Stationary solution}
To obtain the stationary solution to equation \ref{eqn:FPE}, we introduce the drift vector $\boldsymbol{\mathcal{D}}$, which is defined as
\begin{equation}
\boldsymbol{\mathcal{D}} \equiv - \left[\beta \, \boldsymbol{v'} + \mathcal{F} \, \boldsymbol{S}(\boldsymbol{v'})\right] = -\left(\beta \, \boldsymbol{v'} + \mathcal{F} \, \frac{\boldsymbol{v'}}{|\boldsymbol{v'}|}\right).
\end{equation}
In component form, this is written as
\begin{equation}
\left(\mathcal{D}_{u'}, \mathcal{D}_{v'}\right) = - \left(\beta \, u' + \mathcal{F} \, \frac{u'}{\sqrt{u'^2 + v'^2}}, \beta \, v' + \mathcal{F} \, \frac{v'}{\sqrt{u'^2 + v'^2}}\right).
\label{eqn:Dcomp}
\end{equation}
Now, it is easily seen from equation \ref{eqn:Dcomp} that
\begin{equation}
\frac{\partial \mathcal{D}_{u'}}{\partial v'} = \frac{\partial \mathcal{D}_{v'}}{\partial u'} = \mathcal{F} \, \frac{u' \, v'}{\left(u'^2+v'^2\right)^{3/2}},
\end{equation}
which implies that $\boldsymbol{\mathcal{D}}$ can be expressed as the gradient of a potential, i.e., $\boldsymbol{\mathcal{D}} = - \nabla_{\boldsymbol{v'}} \Phi$; together with the vanishing of the probability current as $|\boldsymbol{v'}| \to \infty$, this admits a stationary solution of potential form \citep{risken1996}. The stationary solution is readily found to be \citep[see e.g.,][]{risken1996}
\begin{equation}
P(u',v') = \mathcal{N} \, \exp{\left(-\frac{1}{D} \, \Phi\right)},
\end{equation}
where $\mathcal{N}$ is the integration constant determined by requiring that $P$ is normalized, and
\begin{equation}
\Phi = - \left(\int \mathcal{D}_{u'} \, du' + \int \mathcal{D}_{v'} \, dv'\right).
\end{equation}
Using equation \ref{eqn:Dcomp} to evaluate the integrals, we obtain
\begin{equation}
\Phi = \frac{\beta}{2} \, (u'^2 + v'^2) + 2 \, \mathcal{F} \, \sqrt{u'^2 + v'^2},
\end{equation}
and hence
\begin{equation}
P(u',v') = \mathcal{N} \, \exp{\left(-\frac{1}{D} \, \left[\frac{\beta}{2} \, (u'^2 + v'^2) + 2 \, \mathcal{F} \, \sqrt{u'^2 + v'^2}\right]\right)},
\label{eqn:FPE_ss}
\end{equation}
which is the required stationary solution for the most general case. In the following, we consider two different regimes of sea ice drift based on the values of the compactness.
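As a quick numerical check of this solution (not part of the original analysis; all parameter values below are purely illustrative), the normalization constant $\mathcal{N}$ can be evaluated by a one-dimensional quadrature, since the PDF depends on $\boldsymbol{v'}$ only through the speed:

```python
import numpy as np
from scipy.integrate import quad

def norm_const(beta, F, D):
    """Normalization constant N of the stationary PDF in eq. (FPE_ss).

    P depends only on the speed V = |v'|, so the 2D normalization
    reduces to a radial integral: 1/N = 2*pi * int_0^inf e^{-Phi(V)/D} V dV,
    with Phi(V) = beta*V^2/2 + 2*F*V as in the text.
    """
    integrand = lambda V: V * np.exp(-(0.5 * beta * V**2 + 2.0 * F * V) / D)
    inv_norm, _ = quad(integrand, 0.0, np.inf)
    return 1.0 / (2.0 * np.pi * inv_norm)

# Limiting cases recover the two regimes treated next:
# F -> 0 gives N = beta/(2*pi*D); beta -> 0 gives N = 2*F^2/(pi*D^2).
N = norm_const(beta=0.5, F=1.0, D=1.0)  # illustrative values
```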
\subsection{Regime 1: Low value of compactness ($C\approx 0$)}
For very low values of $C$, the effect of the neighbouring ice floes is negligible. Hence, in this regime, $\mathcal{F} = 0$ and the normalized stationary solution is
\begin{equation}
P(u',v') = \frac{\beta}{2 \, \pi \, D} \, \exp{\left(-\frac{\beta}{2 \, D} \, \left(u'^2 + v'^2\right)\right)},
\label{eqn:vel_Gauss}
\end{equation}
which is the well-known solution to the classical Brownian motion problem \citep{chandra1943}. The fluctuations in the ice-floe velocity are due to the fluctuating wind, which has been assumed to be Gaussian in nature. This leads to the ice-floe velocity fluctuations being Gaussian as well.
\subsection{Regime 2: High value of compactness ($C\approx 1$)}
In this regime, the ice floe undergoes continuous collisions with the neighbouring ice floes. Hence, it is natural to assume here that most of the resistance to the random motion of the ice floe comes from its neighbours. Setting the ocean drag force to zero ($\beta = 0$), and denoting the friction coefficient in this fully compact limit by $\mathcal{F}_0$, we get the normalized stationary solution in this regime to be
\begin{equation}
P(u',v') = \frac{2 \, \mathcal{F}_0^2}{\pi \, D^2} \, \exp{\left(-\frac{2 \, \mathcal{F}_0}{D} \, \sqrt{u'^2 + v'^2}\right)}.
\label{eqn:2D_coulomb}
\end{equation}
The PDFs for the individual components can be found using
\begin{equation}
P_{u'} = \int_{-\infty}^{\infty} P(u',v') \, dv' = \int_{-\infty}^{\infty} \frac{\Lambda^2}{2 \, \pi} \, \exp{\left(-\Lambda \, \sqrt{u'^2 + v'^2}\right)} \, dv',
\label{eqn:single_PDF_u}
\end{equation}
and similarly
\begin{equation}
P_{v'} = \int_{-\infty}^{\infty} P(u',v') \, du' = \int_{-\infty}^{\infty} \frac{\Lambda^2}{2 \, \pi} \, \exp{\left(-\Lambda \, \sqrt{u'^2 + v'^2}\right)} \, du',
\label{eqn:single_PDF_v}
\end{equation}
where $\Lambda = \frac{2 \, \mathcal{F}_0}{D}$. We note that $P_{u'}$ and $P_{v'}$ have identical functional forms. The integrals in equations \ref{eqn:single_PDF_u} and \ref{eqn:single_PDF_v} cannot be expressed in terms of elementary functions (they reduce to the modified Bessel function $K_1$), so we compute them numerically.
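A short script of the kind one might use for this tabulation (a sketch, not the authors' code) is given below. As a cross-check it also evaluates the equivalent closed form $P_{u'}(u') = (\Lambda^2/\pi)\,|u'|\,K_1(\Lambda|u'|)$, where $K_1$ is a modified Bessel function; this standard identity is not needed for the fit.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import k1

def P_u_quad(u, lam):
    """Marginal PDF of one velocity component (eq. single_PDF_u) by quadrature."""
    f = lambda v: (lam**2 / (2.0 * np.pi)) * np.exp(-lam * np.hypot(u, v))
    val, _ = quad(f, -np.inf, np.inf)
    return val

def P_u_bessel(u, lam):
    """Same marginal via the modified Bessel function K1 (standard identity)."""
    if u == 0.0:
        return lam / np.pi          # limit of |u| K1(lam |u|) as u -> 0 is 1/lam
    return (lam**2 / np.pi) * abs(u) * k1(lam * abs(u))

lam = 0.238   # (cm/s)^-1; illustrative here, equal to the fit quoted later in the text
vals = [(u, P_u_quad(u, lam), P_u_bessel(u, lam)) for u in (0.5, 2.0, 10.0)]
```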
The components $u'$ and $v'$ enter equation \ref{eqn:2D_coulomb} only through the combination $\sqrt{u'^2 + v'^2}$. This makes obtaining the PDF for the fluctuating speed, $\mathcal{P}(V')$, straightforward; it is readily found to be \citep[see e.g.,][]{Reif}
\begin{equation}
\mathcal{P}(V') = \Lambda^2 \, V' \, \exp{\left(- \Lambda \, V'\right)},
\label{eqn:speed_PDF}
\end{equation}
where $V' = \sqrt{u'^2 + v'^2}$.
\subsection{Comparison with observations}
To compare the PDFs from our theory with observations, we use the results from the analysis of the International Arctic Buoy Program data by \citet{rampal2009}. A total of 450 drifters deployed between 1979 and 2001 were used, and data from only those buoys whose positions were at least 100 km away from the coasts were chosen. \citet{rampal2009} chose a two-dimensional Cartesian co-ordinate system centered at the North Pole, with one of the axes pointing along the Greenwich meridian. For the analysis, the time period for winter was chosen from November to mid May, and for summer from mid June to mid September. Further details on the procedure used to obtain the mean velocity, including the choice of length and time scales for averaging, and the velocity fluctuations can be found in \citet{rampal2009}.
In figure \ref{fig:speed_pdf} we show the comparison between the theoretical PDF for the fluctuating speed and the observational PDF from \citet{rampal2009}. The functional form of the solution (equation \ref{eqn:speed_PDF}) is fit to the observational data, and $\Lambda$, which is the only fitting parameter, is determined from this fit.
\begin{figure}
\centering
\includegraphics[trim = 100 0 150 0, scale=0.18]{speed_PDF.eps}
\caption{Comparison of our theoretical PDF for the fluctuating speed with observations. Circles are data from \citet{rampal2009} and the solid curve is the functional form of the solution from theory (equation \ref{eqn:speed_PDF}). The value of $\Lambda$ obtained from the fit is $0.238$ (cm/s)$^{-1}$. The inset shows the same figure in log-linear plot.}
\vspace{-5mm}
\label{fig:speed_pdf}
\end{figure}
Using the value of $\Lambda = 0.238$ (cm/s)$^{-1}$ from the fit in figure \ref{fig:speed_pdf}, we compute the integral in equation \ref{eqn:single_PDF_u} numerically to determine $P_{u'}$. Noting that $P_{v'}$ and $P_{u'}$ have the same functional forms (equations \ref{eqn:single_PDF_u} and \ref{eqn:single_PDF_v}), we compare the PDF with observations and find that the PDFs for the velocity components are Laplace distributions. This is shown in figure \ref{fig:velocity_pdf}. As noted by \citet{rampal2009}, it is remarkable that the PDFs for $u'$ and $v'$ are approximately the same in each season, showing the fluctuations are isotropic.
\begin{figure}
\centering
\includegraphics[trim = 100 0 150 0, scale=0.18]{velocity_PDF.eps}
\caption{Comparison of our theoretical PDFs for sea-ice velocity fluctuations with observations. Symbols are data from \citet{rampal2009} and the solid curve is the PDF obtained from equation \ref{eqn:single_PDF_u} after numerically evaluating the integral. The value of $\Lambda$ used is $0.238$ (cm/s)$^{-1}$ (see figure \ref{fig:speed_pdf}).}
\vspace{-5mm}
\label{fig:velocity_pdf}
\end{figure}
It is interesting to note that the Gaussian distribution (equation \ref{eqn:vel_Gauss}) is not observed in the 1979--2001 period. However, with the dramatic decline in the ice cover over the last two decades, it might now be possible to test the correctness of this prediction. This is part of our ongoing work.
\section{Conclusions}
We have developed a stochastic theory for the drift of a single ice floe in the Arctic. The floe-floe interactions are introduced through the Coulomb friction term \citep{de2005, hayakawa2005} in the equation of motion. We first obtained the Langevin equation for the velocity fluctuations, and then the corresponding Fokker-Planck equation. We found that for values of compactness close to unity, the stationary PDFs of the individual fluctuating velocity components follow a Laplace distribution, whereas for very small values of compactness the stationary solution is a Gaussian. Comparison of the functional form of the solution for $C \approx 1$ with observations \citep{rampal2009} shows good qualitative agreement. This agreement, despite the many simplifying assumptions made, provides confidence that the mathematical formulation of the problem is physically sound, and that the model captures the leading-order physics associated with the sea-ice velocity fluctuations.
However, a shortcoming of the current model is that it does not take into account the thermal growth and mechanical deformations of the ice floe. This is remedied by writing the mass of the ice floe as $m = \rho_i \, \pi \, R^2 \, h$, where $\rho_i$ is the constant density of the ice floe, and rewriting equation \ref{eqn:velocity} as
\begin{equation}
m \frac{d \boldsymbol{v}}{dt} = - \frac{d m}{dt} \, \boldsymbol{v} + \boldsymbol{F_a} + b \, \boldsymbol{\xi}(t) + \boldsymbol{F_o} - m \, \Omega \, \boldsymbol{k} \times \boldsymbol{v} - \mathcal{F} \, \boldsymbol{S}(\boldsymbol{v} - \boldsymbol{\left<v\right>}).
\label{eqn:coupled}
\end{equation}
The changes in $R(t)$ and $h(t)$ can now be coupled with the momentum equation (equation \ref{eqn:coupled}). This, however, leads to a six-dimensional Fokker-Planck equation which is very challenging to solve -- both analytically and numerically. In such a situation, it might be more prudent to solve the coupled stochastic differential equations for $\boldsymbol{v}$, $h$ and $R$. The previous work on the thickness distribution of sea ice \citep{TW2015, TW2017} and our current theory provide a physical and mathematical framework to explore these coupled problems in future.
\section*{Acknowledgements}
The author thanks A. J. Wells for his critical comments on a previous version of the manuscript, which were helpful in improving this work.
\section{Introduction}
The charged particle pseudorapidity distribution, $dN_{ch}/d\eta$, is a
well defined experimental quantity that reflects the initial conditions
of the system, e.g. parton shadowing and gluon saturation, and also the
effects of rescattering and hadronic final
state interactions: it represents the time-integral of the particle
production throughout the entire collision. With the advent of Cu+Cu
collisions at RHIC, the system size dependence of important observables
can be studied using different collision geometries. The Cu+Cu
results \cite{phobos1} test the simple scaling features observed previously in
Au+Au collisions \cite{brahms2,phobos3}. They significantly extend
the $N_{part}$ range measured in Au+Au collisions, while the two
systems can also be compared at the same $N_{part}$, as illustrated below.
\section{Experimental setup and data analysis}
The Cu+Cu data were collected with the multiplicity array of the
PHOBOS detector \cite{phobos5} during the RHIC 2005 run. The array
consists of single-layered silicon sensors assembled into a long,
tube-shaped Octagon detector surrounding the collision point, and into
three Ring sensors on each side, detecting large-$|\eta|$ particles.
Simulations of the detector performance were based on the HIJING event
generator and GEANT, including the response of the scintillator Paddle
trigger counters.
Data from the Cu+Cu and Au+Au collisions were analyzed using the
`hit-counting' and `analog' methods \cite{phobos6}. The latter was
corrected for multiply-charged fragments emitted at large $\eta$. This
correction decreases with centrality and collision energy, and it is
less than 6\% of the total number of charged particles.
The estimated trigger efficiency is
84$\pm$5\% and 75$\pm$5\% in Cu+Cu collisions at 200 and 62.4~GeV, respectively.
The centrality of the collision was estimated from the Paddle
scintillator signals.
At 22.4 and 19.6~GeV, the pathlength-corrected energy sum
\cite{phobos3} deposited in the Octagon was used ($|\eta|<3.2$).
A Glauber-model calculation was employed to estimate
$\langle N_{part} \rangle$ for each centrality bin.
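The essence of such an estimate can be sketched as a small Monte Carlo Glauber calculation. The code below is purely illustrative: the Woods-Saxon parameters and the nucleon-nucleon cross section are assumed round numbers, not the values used in the PHOBOS analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
A, R, a = 63, 4.2, 0.596     # Cu-63 Woods-Saxon radius and diffuseness in fm (assumed)
sigma_nn = 4.2               # inelastic NN cross section in fm^2 (~42 mb, assumed)
d2 = sigma_nn / np.pi        # two nucleons collide if transverse distance^2 < d2

def sample_nucleus():
    """Transverse positions of A nucleons drawn from a Woods-Saxon density."""
    f = lambda r: r**2 / (1.0 + np.exp((r - R) / a))
    fmax = f(np.linspace(0.0, 3.0 * R, 300)).max()   # approximate envelope
    rs = []
    while len(rs) < A:
        r = 3.0 * R * rng.random()
        if rng.random() * fmax < f(r):               # rejection sampling in r
            rs.append(r)
    rs = np.array(rs)
    costh = rng.uniform(-1.0, 1.0, A)
    phi = rng.uniform(0.0, 2.0 * np.pi, A)
    rho = rs * np.sqrt(1.0 - costh**2)               # transverse radius
    return np.column_stack([rho * np.cos(phi), rho * np.sin(phi)])

def mean_npart(b, n_events=100):
    """<N_part> for Cu+Cu at impact parameter b (fm)."""
    tot = 0
    for _ in range(n_events):
        pos_a = sample_nucleus()
        pos_b = sample_nucleus() + np.array([b, 0.0])
        dist2 = ((pos_a[:, None, :] - pos_b[None, :, :]) ** 2).sum(axis=-1)
        hit = dist2 < d2
        tot += hit.any(axis=1).sum() + hit.any(axis=0).sum()
    return tot / n_events
```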
\section{Results}
The $dN_{ch}/d\eta$ distributions in Cu+Cu collisions for various
collision energies and centralities are shown in Fig. \ref{fig1}. On the
right panel the Cu+Cu and Au+Au collisions are compared, where the
centrality bins are chosen such that $\langle N_{part} \rangle$ in both
systems are similar. One can conclude that although the distributions
agree at the same $\langle N_{part} \rangle$ to first order, there are
differences at large $|\eta|$ and low energies.
In this context, we note that the two nuclear spectator remnants are
larger in Au+Au than in Cu+Cu collisions and thus may be playing a role.
\begin{figure}
\begin{center}
\includegraphics[width=50mm]{Fig1_qm08.eps}
\includegraphics[width=50mm]{Fig2_qm08.eps}
\caption{Left panel: pseudorapidity distributions of primary charged
particles from Cu+Cu collisions at 22.4, 62.4 and 200~GeV collision
energy per nucleon pair for various centrality bins. Right panel:
comparison of $dN_{ch}/d\eta$ distributions in Cu+Cu and Au+Au
collisions - corresponding to the same $\langle N_{part}\rangle$.
90\% C.L. systematic errors are shown as bands.}
\label{fig1}
\end{center}
\end{figure}
The $dN_{ch}/d\eta$ distributions exhibit longitudinal scaling when
observed from the rest frame of one of the colliding nuclei. The
coordinate transformation to the `target' frame approximately
corresponds to a shift by the beam rapidity, $y_{beam}$.
Figure \ref{fig2} compares the $dN_{ch}/d\eta'$ distributions (where
$\eta'=\eta-y_{beam}$) after normalization by $N_{part}$: a) data from
the Cu+Cu and Au+Au systems plotted at the same fraction of the total cross
section (0-6\% most central bin), and b) at the same value of $N_{part}/2A$
(where $A$ is the mass number). Both cases indicate that the scaled
particle density only depends on the collision energy
and geometry, but not on the size of the nuclei.
The $dN_{ch}/d\eta'/\langle N_{part}\rangle$ distributions
for the same centrality in both systems agree within errors, and the
overall agreement improves if the centralities are compared on the
basis of the $N_{part}/2A$ quantity (the fraction of participating
nucleons).
The longitudinal scaling is similarly present in the Cu+Cu and in the
Au+Au data.
\begin{figure}
\begin{center}
\includegraphics[width=100mm]{Fig9_qm08.eps}
\caption{Pseudorapidity distributions in Cu+Cu and Au+Au collisions at
various RHIC energies, normalized by the number of participant pairs,
plotted as a function of $\eta'=\eta-y_{beam}$, for a) the 6\% most
central events and b) central events with similar $N_{part}/2A$. 90\%
C.L. systematic errors are shown for a few typical data points.}
\label{fig2}
\end{center}
\end{figure}
The factorization between collision energy and centrality can be most
precisely studied by examining the ratios of the
$dN/d\eta'/\langle N_{part}\rangle$ distributions in central and
semi-central collisions, denoted by $R_{PC}^{N_{part}}$, at various
energies. The
published Au+Au results \cite{phobos4} are shown by the inset of
Fig.~\ref{fig3}a, exhibiting the same factorization feature as the
recent Cu+Cu data. The above ratio for Cu+Cu data is similar to that in
Au+Au data, except at the highest $\eta'$ values.
Fig.~\ref{fig3}b shows the $R_{PC}$ ratio for Cu+Cu and Au+Au for
centrality bins where the $N_{part}/2A$ values are matched. The latter
quantity characterizes the initial geometry more precisely, and indeed,
the centrality evolution of the $dN_{ch}/d\eta$ distributions measured
in Cu+Cu and Au+Au collisions are most similar if the centrality is
quantified by $N_{part}/2A$.
\begin{figure}
\begin{center}
\includegraphics[width=135mm]{Fig4_qm08.eps}
\caption{The semi-peripheral to central
$dN_{ch}/d\eta'/\langle N_{part}/2\rangle$ ratio for
Cu+Cu and Au+Au collisions at RHIC energies. The centrality bins are
selected a) according to fractional cross section (35-40\%/0-6\%) and
b) such that $N_{part}/2A$ is matched between the two systems. Inset:
the same quantity plotted for Au+Au data for four different energies.
The error bars represent 90\% C.L. systematic errors on the ratio.}
\label{fig3}
\end{center}
\end{figure}
The total number of charged particles, $N_{tot}$, normalized
by $N_{part}$, is presented in Fig.~\ref{fig4} for Cu+Cu collisions at
22.4, 62.4 and 200~GeV collision energy per nucleon pair, and compared
to smaller (p+p, d+Au) and larger (Au+Au) systems as a function of
centrality. One can conclude that $N_{tot}$ scales approximately
linearly with $N_{part}$, and the normalized yield has similar values
for the two heavy colliding systems. The d+Au data do not seem to
interpolate smoothly between the p+p and heavy ion data points.
\begin{figure}
\begin{center}
\includegraphics[width=71mm]{Fig5_qm08.eps}
\caption{
The integrated number of charged particles, scaled by $N_{part}/2$, in
p+p, d+Au, Cu+Cu and Au+Au collisions as a function of centrality
\cite{phobos4,phobos7}. The uncertainty of $N_{part}$ has been included
in the error bars.}
\label{fig4}
\end{center}
\end{figure}
\section{Summary}
Charged particle $\eta$ distributions were presented,
including the recently analysed Cu+Cu
data taken at various collision energies.
The $dN/d\eta'$ distributions scaled by $N_{part}/2$, as well as their
peripheral to central ratio were found to be
independent of the mass number, $A$, of the colliding nuclei if
centrality classes with the same $N_{part}/2A$ (fraction of participant
nucleons) are compared.
\noindent
{\bf Acknowledgements:} This work was partially supported by U.S. DOE grants
DE-AC02-98CH10886, DE-FG02-93ER40802, DE-FG02-94ER40818,
DE-FG02-94ER40865, DE-FG02-99ER41099, and
DE-AC02-06CH11357, by U.S. NSF grants 9603486, 0072204 and 0245011, by
Polish MNiSW grant N N02 282234 (2008-2010), by NSC of Taiwan Contract NSC
89-2112-M-008-024, by Hungarian grants OTKA F49823, NKTH-OTKA H07-C
74248 and by the Magyary Postdoctoral Fellowship.
\vspace{1mm}
\noindent
\section{Density Functional Theory}
The exact density functional of a one dimensional system of hard-particles was developed by Percus \cite{Percus1976}. The free energy is
\begin{equation}
F[\rho^{(1)}]=F_{\text{id}}[\rho^{(1)}]+F_{\text{ex}}[\rho^{(1)}],
\end{equation}
where $F_{\text{id}}$ is the ideal gas contribution and the excess part $F_{\text{ex}}$ accounts for the excluded-volume interactions between the particles:
\begin{eqnarray}
&&\beta F_{\text{id}}[\rho^{(1)}] = \int dx\rho^{(1)}(x)\left(\ln(\Lambda\rho^{(1)}(x))-1\right),\\
&&\beta F_{\text{ex}}[\rho^{(1)}] = \nonumber \\
&&-\frac12\int dx\left(\rho^{(1)}(x-\sigma/2)+\rho^{(1)}(x+\sigma/2)\right)\ln(1-\eta(x)).
\end{eqnarray}
In the above expressions $\beta=1/k_{B}T$ with $k_{B}$ the Boltzmann constant and $T$ the temperature. $\Lambda$ is the (irrelevant) thermal wavelength, $x$ is the space coordinate, and $\eta(x)$ is the local packing fraction, defined as
\begin{equation}
\eta(x)=\int_{x-\sigma/2}^{x+\sigma/2}dx'\rho^{(1)}(x'),
\end{equation}
with $\sigma$ the particle length.
The grand canonical density functional is
\begin{equation}
\beta\Omega[\rho^{(1)}]=F[\rho^{(1)}]+\int dx\rho^{(1)}(x)(V_{\text{ext}}(x)-\mu),
\end{equation}
where $\mu$ is the chemical potential and $V_{\text{ext}}$ is the external potential.
The equilibrium density profiles are those that minimize the grand potential density functional at constant $\mu$. We use a standard conjugate gradient method to minimize $\Omega$. In order to compare the results with the canonical Brownian dynamics (BD) or Monte Carlo (MC) simulations, we find the chemical potential for which the average number of particles equals the number of particles in the simulation. Given the small number of particles, the canonical and grand canonical ensembles are not equivalent: the grand canonical density profiles are combinations of canonical profiles. We show in Fig. \ref{fig_s1} the equilibrium density profiles of a system of $N=10$ particles confined in a pore with $L_x=25\sigma$ in the canonical (MC) and grand canonical (DFT) ensembles. The differences are small and do not justify the large discrepancy between the predictions of Dynamic Density Functional Theory (DDFT) and BD.
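To illustrate what such a minimization involves, the sketch below solves the Euler-Lagrange equation $\rho^{(1)}(x) \propto \exp[-\beta V_{\text{ext}}(x) - \beta\mu_{\text{ex}}(x)]$ by Picard iteration with mixing, using the Percus excess chemical potential. It is not the conjugate gradient scheme used here: for brevity it uses a periodic cell with a cosine external potential instead of hard walls, and all parameters are illustrative.

```python
import numpy as np

sig, dx, L = 1.0, 0.05, 10.0
n = int(round(L / dx)); ns = int(round(sig / dx))
x = dx * np.arange(n)
Vext = 0.5 * np.cos(2.0 * np.pi * x / L)   # beta*V_ext, illustrative
rho_bar = 0.3                              # mean density (fixes the chemical potential)

def window(f):
    """Trapezoidal integral of f over [x - sig/2, x + sig/2] on the periodic grid."""
    out = 0.5 * dx * (np.roll(f, ns // 2) + np.roll(f, -ns // 2))
    for j in range(-(ns // 2) + 1, ns // 2):
        out += dx * np.roll(f, -j)
    return out

def mu_ex(rho):
    """Percus excess chemical potential beta * dF_ex/drho for hard rods."""
    eta = window(rho)
    g = 0.5 * (np.roll(rho, ns // 2) + np.roll(rho, -ns // 2)) / (1.0 - eta)
    return (-0.5 * (np.log(1.0 - np.roll(eta, ns // 2))
                    + np.log(1.0 - np.roll(eta, -ns // 2)))
            + window(g))

rho = np.full(n, rho_bar)
for _ in range(5000):
    rho_new = np.exp(-Vext - mu_ex(rho))
    rho_new *= rho_bar / rho_new.mean()    # enforce the mean density
    residual = np.max(np.abs(rho_new - rho))
    rho = 0.9 * rho + 0.1 * rho_new        # Picard step with mixing
```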
\begin{figure}[htdp]
\includegraphics[width=7cm]{fig_s1}
\caption{Equilibrium density profiles of a system of hard particles confined between hard walls separated by a distance $25\sigma$. Black-solid line: the grand canonical density profile obtained with DFT at a chemical potential $\beta\mu=0.3258$ that corresponds to an average number of particles $\langle N \rangle=10$. Red circles: canonical MC simulation of a system of $N=10$ particles.}
\label{fig_s1}
\end{figure}
\section{Dynamic Density Functional Theory}
In DDFT the time evolution of the density profile is governed by the continuity equation \cite{Tarazona1,Tarazona2}
\begin{equation}
\frac{\partial\rho^{(1)}(\vec{r},t)}{\partial t}=-\nabla\cdot {\bf J}_{\rm ad}(\vec{r},t),
\end{equation}
where $\vec r$ is the coordinates vector, $t$ is the time, and $\vec J_{\rm ad}$ is the adiabatic current given by
\begin{equation}
\xi {\bf J}_{\rm ad}(\vec{r},t)=-\rho^{(1)}(\vec r,t)\left(\nabla\frac{\delta F[\rho^{(1)}]}{\delta\rho^{(1)}(\vec r,t)}+\nabla V_{\rm ext}(\vec r,t)\right),
\end{equation}
where $\xi$ is the friction coefficient and $V_{ext}$ is an external potential.
For the one-dimensional system of particles analysed here, the equation for the time evolution of the density according to DDFT reads
\begin{eqnarray}
&&\xi\frac{\partial\rho^{(1)}(x,t)}{\partial t} = \frac{\partial^2\rho^{(1)}(x,t)}{\partial x^2} +\nonumber \\
&+&\frac{\partial}{\partial x}\left[\rho^{(1)}(x,t)\left(\frac{\rho^{(1)}(x+\sigma,t)}{1-\eta(x+\sigma/2)}\right.\right.
-\left.\left.\frac{\rho^{(1)}(x-\sigma,t)}{1-\eta(x-\sigma/2)}\right)\right] \nonumber\\
&+&\frac{\partial}{\partial x}\left({\rho^{(1)}(x,t)\frac{\partial V_{\rm ext}(x,t)}{\partial x}}\right).
\end{eqnarray}
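A minimal explicit integrator for this equation might look as follows (a sketch, not the scheme used for the figures): forward Euler in time, central differences on a periodic grid, with $\xi = k_B T = 1$ and $V_{\rm ext} = 0$, and grid spacing and time step chosen for diffusive stability.

```python
import numpy as np

sig, dx, L = 1.0, 0.1, 20.0
n = int(round(L / dx)); ns = int(round(sig / dx))
x = dx * np.arange(n)
dt, steps = 2e-3, 500                      # dt < dx^2/2 for diffusive stability

def ddx(f):
    """Central difference on the periodic grid (exactly mass-conserving)."""
    return (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

def eta(rho):
    """Packing fraction: trapezoidal integral of rho over [x - sig/2, x + sig/2]."""
    out = 0.5 * dx * (np.roll(rho, ns // 2) + np.roll(rho, -ns // 2))
    for j in range(-(ns // 2) + 1, ns // 2):
        out += dx * np.roll(rho, -j)
    return out

rho = 0.25 + 0.35 * np.exp(-((x - L / 2.0) ** 2) / 2.0)
mass0 = rho.sum() * dx
for _ in range(steps):
    e = eta(rho)
    inter = (np.roll(rho, -ns) / (1.0 - np.roll(e, -(ns // 2)))
             - np.roll(rho, ns) / (1.0 - np.roll(e, ns // 2)))
    J = -ddx(rho) - rho * inter            # adiabatic current of the DDFT equation
    rho = rho - dt * ddx(J)
mass = rho.sum() * dx
```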
The comparison between simulation and DDFT results for the density are shown in Fig. 1 of the main article.
Figure~\ref{fig2}a,b shows the same comparison for the computed adiabatic contribution.
\begin{figure}
\includegraphics[width=7cm]{fig_s2}
\caption{a) Adiabatic force calculated from BD simulations (black-solid line) and DDFT (red-dashed line) at reduced time $t^*=t_s/\tau_B$=0.5 (top), and 1.0 (bottom) for a system initialized in a parabolic trap. b) Adiabatic force calculated from BD simulations (black-solid line) and DDFT (red-dashed line) at reduced time $t^*$=0.1 (top), and 0.2 (bottom) for a system initialized in a crystal structure.}
\label{fig2}
\end{figure}
\section{Measurements of the Current in Brownian Dynamics Simulations}
In order to measure the current in Brownian dynamics simulations we solve the one-dimensional continuity equation
\begin{equation}
\frac{\partial \rho^{(1)}( x,t)}{\partial t}=-\frac{\partial {\rm J}_{x}(x,t)}{\partial x} .
\end{equation}
The average
$$\langle \frac{\partial \rho^{(1)}(x,t_{s})}{\partial t} \rangle \simeq \langle \frac{\Delta \rho^{(1)}(x,t_{s})}{\Delta t}\rangle $$
is computed over $10^{6}$ independent trajectories at a fixed time $t_{s}$.
In order to carry out the calculation, we divide the one dimensional simulation box in bins of length $x_{bin}$ and accumulate the histogram of the local density changes
$$\Delta \rho^{(1)}(x,t_{s})=\frac{n(x,t_{s})-n(x,t_{s}-\Delta t)}{x_{bin}} \ ,$$
where $n(x,t)$ is the number of particles located in the bin at position $x$ and time $t$.
The density histogram is then divided by the sampling time interval $\Delta t$.
Once the average is calculated the current is obtained with the following integration
\begin{equation}
{\rm J_{x}}(x,t)=-\int_{0}^{x} dx' \langle \frac{\Delta \rho^{(1)}(x',t_{s})}{\Delta t}\rangle \ .
\end{equation}
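This reconstruction can be validated on a case with a known answer -- free diffusion of a Gaussian density, for which $J_x = -D_0\,\partial_x \rho^{(1)}$ exactly. The check below is independent of the BD data; $D_0$ and the snapshot times are illustrative.

```python
import numpy as np

D0, t, dt = 1.0, 1.0, 1e-3
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

def rho(x, t):
    """Freely diffusing unit-mass Gaussian density."""
    s2 = 2.0 * D0 * t
    return np.exp(-x**2 / (2.0 * s2)) / np.sqrt(2.0 * np.pi * s2)

# finite-difference estimate of the density change, as in the BD analysis
drho_dt = (rho(x, t + dt) - rho(x, t)) / dt
# J(x) = -int_{-L}^{x} <drho/dt> dx'; the current vanishes at the left edge
J_rec = -np.cumsum(drho_dt) * dx
# exact diffusive current -D0 * d(rho)/dx for comparison
J_exact = (x / (2.0 * t)) * rho(x, t)
err = np.max(np.abs(J_rec - J_exact))
```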
\begin{figure}
\includegraphics[width=7cm]{fig_s3}
\caption{Evolution in time of the superadiabatic force $I^*_{\rm sad}(x_{\rm peak})=I_{\rm sad}(x_{\rm peak}) \sigma^{2}/k_{B} T$ at densities $\rho \sigma=0.4$ (green squares, $L_x=25 \sigma$) and $\rho \sigma=0.67$ (blue circles, $L_x=15 \sigma$). }
\label{fig3}
\end{figure}
\section{Superadiabatic force}
The total pair force integral $I(\vec r,t)=I_{\rm ad}(\vec r,t)+I_{\rm sad}(\vec r,t)$ is represented as the sum of an adiabatic term $I_{\rm ad}(\vec r,t)$, which contains all contributions that can be described by an equilibrium system, and a superadiabatic term $I_{\rm sad}(\vec r,t)$, which contains contributions that cannot be reduced to an equilibrium description.
Therefore for all equilibrium states the superadiabatic contribution vanishes.
Figure~\ref{fig3} shows the evolution in time of the superadiabatic force at the density peak position $x_{\rm peak}$ for the system initialized in a crystal structure.
The superadiabatic contribution is zero for the equilibrium configurations at $t^*=0$ and $t^*\rightarrow \infty$.
At intermediate times the curve is characterized by a maximum at short times and by an exponential decay of the force at longer times.
\section{Introduction}
Recently a great deal of work has been done in
quantum optics on simulations of continuously
measured systems with dissipation, referred to variously as
quantum jumps, relative state, and Monte Carlo Wavefunction
simulations \cite{Carmichael1,Dalibard,Gardiner,Wiseman1,Carmichael2,Plenio},
which are examples of a class of techniques known as
``quantum trajectories.'' In these techniques, a deterministic
master equation for the density matrix of an open system is replaced
with a stochastic differential equation for a pure quantum state.
Averaging the solutions over all realizations of the noise reproduces
the master equation. Such a stochastic pure-state equation
is known as an ``unraveling'' of the master equation. Unraveling is
not unique; in general, there can be many stochastic equations which
average to the same master equation. A single solution for one realization
of the noise is called a ``quantum trajectory.''
In the original conception, the system was assumed to be monitored by
continuous measurements performed on the environment. The information
from the measurements ``collapsed'' the system density matrix to
a pure state, but the randomness of the measurement outcomes made this
state unpredictable. The effective description of the continuously
monitored system is thus a stochastic differential equation: an unraveling.
Different unravelings correspond to different measurement schemes.
Averaging over all possible measurement outcomes reproduces the master
equation, which is the best prediction that could be made in the absence
of any measurements.
However, even with no knowledge of the environment and its interaction
with the system, one can formally unravel any master equation into
a stochastic differential equation for pure states. In this case,
the unraveling is simply used as a means of solving the master equation.
This provides a useful numerical technique, commonly called ``quantum
Monte Carlo.'' By averaging over many solutions of the quantum
trajectory one can find an approximate solution to the master equation.
A density operator $\rho$ on a Hilbert space of dimension $N$ requires
$N^2-1$ real numbers to represent it; this can be computationally
prohibitive for large $N$, while a single state (of size $2N-2$)
remains practical, even with the necessity
of averaging over many runs of the stochastic equation.
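As a concrete example of the procedure (a standard textbook case, not a system analyzed in this paper), consider a two-level atom decaying at rate $\gamma$ with jump operator $L = \sqrt{\gamma}\,\sigma_-$ and no Hamiltonian. Averaging quantum-jump trajectories over the random emission times reproduces the master-equation decay of the excited-state population, $\langle P_e(t)\rangle = e^{-\gamma t}$:

```python
import numpy as np

rng = np.random.default_rng(2)
gamma, dt, T, ntraj = 1.0, 2e-3, 2.0, 500
nsteps = int(round(T / dt))

pe_sum = np.zeros(nsteps + 1)
for _ in range(ntraj):
    cg, ce = 0.0, 1.0                        # amplitudes; start in |e>
    pe_sum[0] += abs(ce) ** 2
    for k in range(1, nsteps + 1):
        if rng.random() < gamma * abs(ce) ** 2 * dt:
            cg, ce = 1.0, 0.0                # jump: photon emitted, collapse to |g>
        else:
            ce *= np.exp(-0.5 * gamma * dt)  # no-jump non-Hermitian evolution
            norm = np.sqrt(abs(cg) ** 2 + abs(ce) ** 2)
            cg, ce = cg / norm, ce / norm    # renormalize the conditioned state
        pe_sum[k] += abs(ce) ** 2

pe_avg = pe_sum / ntraj                      # trajectory-averaged P_e(t)
tgrid = dt * np.arange(nsteps + 1)
```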
Around the same time that quantum trajectories were introduced,
the consistent (or decoherent) histories formulation of quantum mechanics
was developed by Griffiths, Omn\`es, and Gell-Mann and Hartle
\cite{Griffiths,Omnes1,Omnes2,GMHart1,GMHart2,Omnes3}.
In this formalism, one describes a quantum system in terms of an exhaustive
set of possible histories, which must satisfy a {\it decoherence}
or {\it consistency} criterion.
Histories which satisfy this criterion do not interfere with each other,
and may be assigned probabilities which obey the usual classical
probability sum rules.
Both quantum trajectories and consistent histories describe a quantum system
in terms of alternative possible evolutions; they thus bear a certain
resemblance to each other. What is more, sets of histories corresponding
to possible records of a ``classical'' measuring device will
always decohere. Thus, there will be a set of
consistent histories which correspond to the quantum trajectories of a
continuously measured system.
Exactly such a correspondence has been shown between decoherent
histories and quantum state diffusion (QSD), a particular unraveling of
the master equation, by Di\'osi, Gisin, Halliwell and Percival \cite{DGHP}.
QSD trajectories were shown to correspond to a set of approximately
consistent histories for a specific choice of projection operators
at closely spaced intervals of time. Earlier, Di\'osi \cite{Diosi1} had
shown that a particular type of branch-dependent decoherent history
first examined by Paz and Zurek \cite{PazZurek} corresponded to yet another
type of trajectory, the orthogonal jump unraveling.
Below I will show a
similar correspondence for quantum jumps, and I conjecture that
most useful unravelings correspond to some set of decoherent histories
in an analogous way. I also demonstrate an interesting correspondence
between standard quantum jumps and the orthogonal jumps of Di\'osi.
I have shown the correspondence with decoherent histories
for a simple model in an earlier
paper \cite{Brun}, and there has also been work by Ting Yu
\cite{Yu} from a rather different perspective on the relationship of
quantum jumps and decoherent histories. In this paper, I will analyze
the correspondence between quantum trajectories and decoherent histories
in general using the concept of {\it generalized records} introduced
by Gell-Mann and Hartle \cite{GMHart3}, which provides a unifying
framework for both. Recent work on non-Markovian quantum state
diffusion explicitly includes the correlation between the trajectory
and the state of the environment \cite{Diosi3,Strunz}; in that case,
the environment state is precisely this kind of generalized record.
In section II, I review the formalism of quantum trajectories, giving
the quantum jump unraveling in some detail but also briefly describing
the quantum state diffusion and ortho-jump unravelings. I then
review the formalism of decoherent (consistent) histories, and
contrast it with quantum trajectories. I point out that the usual
quantum trajectory description itself assumes a certain consistency
condition.
In section III, I examine a simple model of a photon counting experiment,
in which the photodetector is represented by a Markovian
environment producing rapid dissipation and decoherence in a field
mode coupled to a quantum system. This representation reproduces
the effects of repeated Von Neumann measurements, and in
the limit of separated timescales between the system and environment
allows one to derive a Markovian
master equation for the system alone. This master equation can be
unraveled into quantum jump or ortho-jump
trajectories as described in sections II B and II C above, and we will
see that there is an equivalence between these unravelings for this model.
In section IV, I start with the same model of a photon counting experiment,
but this time enumerate a set of consistent histories representing
different photodetection records. We see that the probabilities of
these histories are the same as the quantum jump trajectories in section
III, and that these histories do decohere to a high level of accuracy.
In section V, I show that the model detector of section III produces
generalized records, the existence of which guarantees that the
consistency conditions will be met. The existence of generalized records
allows the interpretation of quantum trajectories as consistent histories
even in a case with no measurement devices; I argue this in the case
when the outgoing electromagnetic field from the system itself serves
as a generalized record (though one with many incompatible interpretations).
Finally, in section VI, the results are summarized, and we see that they both
generalize the notion of quantum trajectories and provide a useful
calculational tool for the formalism of decoherent histories.
\section{Quantum trajectories and decoherent histories}
\subsection{Quantum Trajectories}
The systems of interest in quantum trajectories are
described by a Lindblad master equation in the Markov
approximation \cite{Lindblad},
\begin{equation}
{\dot \rho} = - i [{\hat H},\rho] + \sum_m \left( {\hat L}_m \rho {\hat L}_m^\dagger
- {1\over2} {\hat L}_m^\dagger {\hat L}_m \rho
- {1\over2} \rho {\hat L}_m^\dagger {\hat L}_m \right)\ ,
\label{master_eqn}
\end{equation}
where $\rho$ is the reduced density operator
of the system, ${\hat H}$ is the system Hamiltonian, and the $\{{\hat L}_m\}$ are a
set of {\it Lindblad operators} which model the effects of the environment.
(Note that throughout this paper I use units where $\hbar=1$.)
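As a concrete numerical sketch of (\ref{master_eqn}) (my own illustration, not from the text: a damped cavity mode truncated at a few photons, with ${\hat H} = \omega\,{\hat a}^\dagger{\hat a}$, a single Lindblad operator ${\hat L} = \sqrt{\gamma}\,{\hat a}$, a first-order Euler integrator, and arbitrary parameter values):

```python
import numpy as np

# Euler integration of the Lindblad master equation (hbar = 1) for a
# damped cavity mode truncated at nmax photons. Parameters illustrative.
nmax, omega, gamma, dt, steps = 5, 1.0, 0.5, 1e-3, 4000

a = np.diag(np.sqrt(np.arange(1.0, nmax + 1)), k=1)   # annihilation operator
H = omega * (a.conj().T @ a)
L = np.sqrt(gamma) * a

def lindblad_rhs(rho):
    # right-hand side of the master equation for a single Lindblad operator
    comm = -1j * (H @ rho - rho @ H)
    diss = (L @ rho @ L.conj().T
            - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L))
    return comm + diss

rho = np.zeros((nmax + 1, nmax + 1), dtype=complex)
rho[1, 1] = 1.0                       # pure initial Fock state |1><1|
for _ in range(steps):
    rho = rho + dt * lindblad_rhs(rho)

n_mean = np.trace(rho @ (a.conj().T @ a)).real
# the mean photon number decays as exp(-gamma * t), with t = steps * dt
```

Note that the trace is preserved while the purity ${\rm Tr}\,\rho^2$ decays, illustrating the point made below: the master equation does not preserve pure states.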
One of the notable characteristics of master equations
of the form (\ref{master_eqn}) is that
they do not, in general, preserve pure states. Suppose that the system
is initially in the state
\[
\rho(0) = \ket{\psi_0} \bra{\psi_0}.
\]
Over time it will evolve into a mixture which can be written
\begin{equation}
\rho(t) = \sum_\xi \ket{\psi_\xi(t)} p_\xi \bra{\psi_\xi(t)}
\equiv {\rm M}(\ket{\psi_\xi(t)}\bra{\psi_\xi(t)}),
\label{rho_expansion}
\end{equation}
where ${\rm M}()$ is the ensemble mean with probabilities
\[
p_\xi \ge 0,\ \ \sum_\xi p_\xi = 1.
\]
The decomposition (\ref{rho_expansion}) is generally not unique; there
can be many different sets of states $\{\ket{\psi_\xi}\}$ which give the
same density operator $\rho$. This ambiguity leads to an ambiguity in
unraveling the evolution, to which we will return shortly.
Now let us choose a set of states
$\ket{\psi(t,\xi(t))}$, where $\xi(t)$ is a random process and
$\ket{\psi(t,\xi(t))}$ obeys a stochastic evolution equation.
By averaging over all solutions of this stochastic equation
for all realizations of the noise $\xi(t)$, one recovers the
decomposition (\ref{rho_expansion}) of $\rho(t)$.
This stochastic equation will be of the form
\begin{equation}
\ket{d\psi} = \ket{u} dt + \ket{v} d\xi(t),
\end{equation}
where $d\xi(t)$ is a stochastic differential variable representing
the random process $\xi(t)$, and can include continuous diffusive noise,
discrete jumps, or both. The vectors $\ket{u}$ and $\ket{v}$
are functions of the state $\ket\psi$. (In general, there will be
several noise terms with different $d\xi(t)$.)
A single solution $\ket{\psi(t,\xi(t))}$ of this equation
for a single realization of $\xi(t)$
follows a quantum trajectory in Hilbert space.
The complete set of solutions for all $\xi(t)$
constitutes an unraveling of the master equation.
This is a rather formal notion of quantum trajectories, since it is not
clear what physical significance the states $\ket{\psi(t,\xi(t))}$
have, if any. Given the existence of many different expansions
(\ref{rho_expansion}) for the same density operator $\rho(t)$, one must
also admit the possibility of many different possible unravelings. If
the choice of unraveling is not unique, what
physical reality does it have? Fortunately, it is
possible to use the abstract idea of unraveling in practical
situations where $\ket{\psi(t,\xi(t))}$ {\it does} have a physical
interpretation. Indeed, this was the original motivation for developing
quantum trajectories \cite{Carmichael2}.
The non-unitarity of the master equation is caused by the loss
of information from the system to the environment, as their interaction
produces entanglement between the system and
environment degrees of freedom. Within the master equation
description this entanglement is inaccessible;
but if one has experimental access to the environment, it is possible to
extract information about the system by making selected
measurements. If the measurements are performed with sufficient precision
and frequency, and the initial state is known, one can determine
the exact state of the system at later times.
If we include the environment in our description, the joint
system-environment state remains pure:
\begin{equation}
\ket\Psi = \sum_i c_i \ket{a_i} \otimes \ket{b_i},
\label{purification}
\end{equation}
where $\ket{a_i}$ and $\ket{b_i}$ are states in the system and environment
Hilbert spaces, respectively. For any state $\ket\Psi$ it is possible
to find an expression (\ref{purification}) such that the $\ket{a_i}$
are all mutually orthogonal, as are the $\ket{b_i}$; this is the
Schmidt decomposition \cite{Peres}.
However, in general this is not necessary; any
decomposition (\ref{purification}) in which the $\ket{b_i}$ (but not
necessarily the $\ket{a_i}$) are orthogonal will suffice. In particular,
given an observable for the environment, we can choose the $\ket{b_i}$
to be a basis of eigenstates. Then by measuring the environment, the
system goes into the state $\ket{a_i}$ with probability $|c_i|^2$.
In this picture the system remains in a pure state, but its evolution is
no longer deterministic: it is influenced by random measurement
outcomes. This is often referred to as {\it conditional}
(or {\it relative state}) evolution: it is conditioned on the measurement
record. If one averages over all possible measurement outcomes,
a particular decomposition of the
density operator (\ref{rho_expansion}) results.
This exactly matches the earlier notion of a quantum trajectory,
but now the unraveling has a clear-cut physical interpretation:
the state $\ket{\psi(t,\xi(t))}$ represents our knowledge of the system,
conditioned on the random outcomes $\xi(t)$ of a sequence of measurements,
and $p_\xi$ is the probability of this outcome.
Note that not all measurements will satisfy the requirement
(\ref{rho_expansion})! Indeed, only a fairly restricted subclass will
do so. We discuss this restriction in section II E below, and later in section V.
\subsection{Quantum Jumps}
We can make this clearer with a concrete example. Suppose
we consider a quantum optical system, such as a small cavity with
one mirror partially silvered so that radiation can
escape. The external electromagnetic field represents the environment
in this case, while a field mode inside the cavity is the
system. We describe the system evolution by a master equation
\cite{Carmichael2}
\begin{equation}
{\dot \rho} = - i [{\hat H}_0,\rho] + \gamma {\hat a} \rho {\hat a}^\dagger
- (\gamma/2) {\hat a}^\dagger {\hat a} \rho
- (\gamma/2) \rho {\hat a}^\dagger {\hat a},
\label{cavity}
\end{equation}
where ${\hat H}_0$ represents the Hamiltonian of the system mode, ${\hat a}$ is the
annihilation operator, and $\gamma$ represents the rate of dissipation from
the cavity. We will be considering this simple master equation
for most of this paper.
Equation (\ref{cavity}) describes the evolution of the system {\it provided}
that we know nothing about the quanta which escape to the environment,
or choose to ignore that information.
Suppose we place photodetectors outside the cavity in such a way
that every escaped photon is detected. Each detection gives us some
information about the state of the system. In this case, with perfect
detection, the system evolution becomes
\begin{equation}
{d\ket\psi\over dt} = - i {\hat H}_{\rm eff} \ket\psi,
\label{qj_schrod}
\end{equation}
interrupted at random times by sudden quantum jumps
\begin{equation}
\ket\psi \rightarrow {\hat a}\ket\psi,
\label{jump}
\end{equation}
where ${\hat H}_{\rm eff}$ is the {\it effective Hamiltonian}
\begin{equation}
{\hat H}_{\rm eff} = {\hat H}_0 - i (\gamma/2) {\hat a}^\dagger{\hat a},
\end{equation}
and the jumps occur with a probability given by the norm of the state,
as we shall see explicitly below \cite{Carmichael1,Dalibard,Gardiner}.
Equation (\ref{qj_schrod}) has the form of the usual
Schr\"odinger equation, but with a non-hermitian Hamiltonian.
A state that evolves for a time $t$ without jumping is given by
\begin{equation}
\ket{\psi(t)} = {\rm e}^{-i{\hat H}_{\rm eff} t} \ket{\psi(0)}.
\end{equation}
The jumps represent detections of emitted photons, which trigger
a sudden change in our knowledge of the system. The presence of the
non-Hermitian terms in the effective Hamiltonian represents the effect
of {\it not} detecting any photons on our knowledge of the system; a
null measurement thus still affects the system.
This evolution does not preserve the norm of the state.
The actual physical state is
taken to be $\ket{\tilde\psi} = \ket\psi/\sqrt{\bracket{\psi}{\psi}}$,
the renormalized state. An actual physical detector cannot determine
the time of a photon emission with infinite precision; at best, it will
determine the time within a short interval $\Delta t$.
The probability that an initial state $\ket\psi$ evolves for a time $T$
and undergoes $N$ jumps at times $t_1, \ldots, t_N$
(which are assumed to be widely spaced with respect to
$\Delta t$) is
\begin{eqnarray}
p(\ket{\tilde\psi}) = & (\gamma\Delta t)^N
{\rm Tr}\biggl\{ {\rm e}^{-i{\H_{\rm eff}}(T-t_N)} {\hat a} {\rm e}^{-i{\H_{\rm eff}}(t_N - t_{N-1})} {\hat a}
\cdots {\hat a} {\rm e}^{-i{\H_{\rm eff}} t_1} \nonumber\\
& \times \ket\psi\bra\psi {\rm e}^{i{\H_{\rm eff}}^\dagger t_1} {\hat a}^\dagger
\cdots {\hat a}^\dagger {\rm e}^{i{\H_{\rm eff}}^\dagger(T-t_N)} \biggr\}.
\label{jumps_prob}
\end{eqnarray}
This trace is the norm of the state
\begin{equation}
\ket{\psi_{t_1,\ldots,t_N}} =
{\rm e}^{-i{\H_{\rm eff}}(T-t_N)} {\hat a} {\rm e}^{-i{\H_{\rm eff}}(t_N - t_{N-1})} {\hat a}
\cdots {\hat a} {\rm e}^{-i{\H_{\rm eff}} t_1} \ket\psi,
\end{equation}
which is the unrenormalized state resulting from
this particular sequence of jumps $t_1, \ldots, t_N$. The norm of the
unrenormalized state thus gives the probability density for that
state to be realized. The density operator $\rho$ is given by averaging
over all realizations $\ket{\tilde\psi}$ with probability
measure (\ref{jumps_prob}),
\begin{eqnarray}
\rho(t) = && {\rm M}(\ket{\tilde\psi(t)} \bra{\tilde\psi(t)}) \nonumber\\
= && \sum_{\ket{\tilde\psi}} \ket{\tilde\psi} p(\ket{\tilde\psi})
\bra{\tilde\psi} \nonumber\\
= && \sum_{\ket{\psi_{t_1,\ldots,t_N}}} (\gamma\Delta t)^N
\ket{\psi_{t_1,\ldots,t_N}} \bra{\psi_{t_1,\ldots,t_N}}, \nonumber\\
= && \sum_N \gamma^N \int_0^T dt_1 \cdots \int_{t_{N-1}}^T dt_N
\ket{\psi_{t_1,\ldots,t_N}} \bra{\psi_{t_1,\ldots,t_N}}.
\label{mean}
\end{eqnarray}
The density operator defined by (\ref{mean}) solves the master equation
(\ref{cavity}).
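This prescription is easy to simulate; the following is a minimal ``quantum Monte Carlo'' sketch of (\ref{cavity}) (my own illustration, with ${\hat H}_0 = 0$, an illustrative truncation and parameter choice, and the first-order jump probability $\gamma\expect{{\hat a}^\dagger{\hat a}}\,\Delta t$ per time step):

```python
import numpy as np

# Quantum jump unraveling of the cavity master equation with H_0 = 0.
# Each step: jump psi -> a psi with probability gamma <a^dag a> dt,
# otherwise evolve under H_eff = -i (gamma/2) a^dag a; then renormalize.
rng = np.random.default_rng(1)
nmax, gamma, dt, steps, ntraj = 5, 0.5, 4e-3, 1000, 2000

a = np.diag(np.sqrt(np.arange(1.0, nmax + 1)), k=1)
n_diag = np.arange(nmax + 1)

# with H_0 = 0, H_eff is diagonal, so exp(-i H_eff dt) acts elementwise
no_jump = np.exp(-0.5 * gamma * dt * n_diag)

psi = np.zeros((ntraj, nmax + 1))     # states are real here since H_0 = 0
psi[:, 1] = 1.0                       # every trajectory starts in |1>
for _ in range(steps):
    p_jump = gamma * dt * (np.abs(psi) ** 2 @ n_diag)
    jump = rng.random(ntraj) < p_jump
    psi[jump] = psi[jump] @ a.T       # photon detected: psi -> a psi
    psi[~jump] = psi[~jump] * no_jump # null result: H_eff evolution
    psi /= np.linalg.norm(psi, axis=1, keepdims=True)

# the ensemble mean over trajectories approximates rho(t)
rho = (psi.T @ psi) / ntraj
n_mean = (np.diag(rho) * n_diag).sum()
# n_mean should be close to exp(-gamma * steps * dt)
```

Starting from the Fock state $\ket1$, each trajectory ends in either $\ket0$ (a jump occurred) or $\ket1$ (no jump yet), and the ensemble average reproduces the exponential decay of the mean photon number.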
It is possible to rewrite these equations in
explicitly norm-preserving form,
\begin{equation}
\ket{d\tilde\psi} = - i {\hat H}_{\rm eff} \ket{\tilde\psi} dt
+ (\gamma/2)\expect{{\hat a}^\dagger{\hat a}} \ket{\tilde\psi} dt
+ \left( {{\hat a}\over{\sqrt{\expect{{\hat a}^\dagger{\hat a}}}}} - 1 \right) \ket{\tilde\psi} dN,
\label{nonlinear_jumps}
\end{equation}
at the cost of a little extra complexity
and nonlinearity \cite{Dalibard}. This is a stochastic differential
equation, where $dN$ is a stochastic variable which is 0 except at
random times (corresponding to the jumps) when it becomes 1. It
has statistics
\begin{equation}
dN dN = dN,\ \ {\rm M}_{\ket\psi}(dN) = \gamma\expect{{\hat a}^\dagger{\hat a}}\,dt,
\end{equation}
where ${\rm M}_{\ket\psi}$ denotes the ensemble average over all trajectories
which are in state $\ket\psi$ at the given time $t$.
The nonlinear form has useful properties, but
for our purposes the linear form will usually prove more convenient.
\subsection{Other Unravelings}
The quantum jump equation is not the only way of
unraveling the master equation into
quantum trajectories. In fact, there are an infinite number
of such unravelings. Certain choices are more commonly used than others,
however, so we will go over them quickly.
One of the most useful is the {\it quantum state diffusion}
(QSD) equation \cite{GisinPercival}
\begin{equation}
\ket{d\psi} = - i {\hat H} \ket\psi dt
+ \sum_m \left( \expect{{\hat L}^\dagger_m} {\hat L}_m
- {1\over2} {\hat L}^\dagger_m {\hat L}_m
- {1\over2} | \expect{{\hat L}_m} |^2 \right) \ket\psi dt
+ \sum_m \left( {\hat L}_m - \expect{{\hat L}_m} \right) \ket\psi d\xi_m.
\label{qsd_eqn}
\end{equation}
This is an It\^o stochastic differential equation \cite{Gardiner2},
with the $d\xi_m$ representing continuous complex
stochastic processes with ensemble means
\begin{equation}
{\rm M}(d\xi_m) = {\rm M}(d\xi_m d\xi_n) = 0,\ \
{\rm M}(d\xi_m d\xi^*_n) = \delta_{mn} dt.
\end{equation}
It is not difficult to show that this equation also obeys the master
equation (\ref{master_eqn}) in the mean:
\begin{equation}
\rho = {\rm M}(\ket{\psi_\xi}\bra{\psi_\xi}).
\end{equation}
Rather than discrete jumps, the solutions of
this equation undergo continuous diffusion.
This standard form of the QSD equation is nonlinear and norm-preserving.
However, there is a different unraveling known as
{\it linear QSD} with equation
\begin{equation}
\ket{d\psi} = - i {\hat H} \ket\psi dt
- {1\over2} \sum_m {\hat L}^\dagger_m {\hat L}_m \ket\psi dt
+ \sum_m {\hat L}_m \ket\psi d\xi_m.
\label{linear_qsd}
\end{equation}
This does {\it not} preserve the norm of $\ket\psi$, but is also
a valid unraveling of the master equation, and has properties similar
to the nonlinear equation \cite{linearQSD}.
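As a numerical sketch of the linear equation (\ref{linear_qsd}) (again my own illustration, not from the text: a two-level atom with ${\hat H} = 0$ and ${\hat L} = \sqrt\gamma\,\sigma_-$, integrated by a first-order Euler--Maruyama scheme; note that the {\it unnormalized} projectors are averaged, since in the linear unraveling the norm carries the probability):

```python
import numpy as np

# Linear QSD for a two-level atom, basis (|e>, |g>), H = 0,
# single Lindblad operator L = sqrt(gamma) sigma_-.
# Averaging the unnormalized projectors |psi><psi| over the noise
# reproduces the master-equation evolution.
rng = np.random.default_rng(2)
gamma, dt, steps, ntraj = 1.0, 2e-3, 500, 2000

L = np.sqrt(gamma) * np.array([[0.0, 0.0], [1.0, 0.0]])
LdL = L.conj().T @ L

psi = np.zeros((ntraj, 2), dtype=complex)
psi[:, 0] = 1.0                       # all trajectories start excited
sdt = np.sqrt(dt / 2.0)
for _ in range(steps):
    # complex Ito increments with M(dxi dxi*) = dt and M(dxi dxi) = 0
    dxi = rng.normal(0.0, sdt, ntraj) + 1j * rng.normal(0.0, sdt, ntraj)
    psi = psi - 0.5 * dt * (psi @ LdL.T) + (psi @ L.T) * dxi[:, None]

rho = (psi.T @ psi.conj()) / ntraj    # M(|psi><psi|), no renormalization
p_excited = rho[0, 0].real
# p_excited tracks exp(-gamma * steps * dt)
```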
The QSD equation was discovered by Gisin and Percival, following from
earlier work by Gisin in the theory of measurement.
Carmichael \cite{Carmichael2} showed that a similar diffusion equation
arose in the case (\ref{cavity}) from a relative state approach, just
as in quantum jumps, but with direct photodetection replaced by
balanced homodyne detection. It was shown by Wiseman
and Milburn \cite{Wiseman2} and others \cite{Knight} that the exact
QSD equation arises in the case of balanced heterodyne detection.
Thus, one can consider both approaches as giving the state
of a system conditioned on a measurement record, but using different
measurement schemes. We will discuss this further below. (The
linear equation (\ref{linear_qsd}) does not have such a straightforward
interpretation in terms of measurements, though that does not rule out
the possibility that one might yet be found.)
Conversely, Gisin and Percival have shown \cite{GisinPercival}
that jump-like behavior can
be exhibited by the QSD equation by explicitly including a portion of
the photodetector in addition to the system; their explicit inclusion
of part of the environment is similar to the approach of this paper.
Related to QSD are the {\it orthogonal jumps} of Di\'osi \cite{Diosi2}.
These trajectories obey an equation
\begin{eqnarray}
\ket{d\psi} = && - i {\hat H} \ket\psi dt
+ \sum_m \left( \expect{{\hat L}^\dagger_m} {\hat L}_m
- {1\over2} {\hat L}^\dagger_m {\hat L}_m
+ {1\over2} \expect{{\hat L}_m^\dagger {\hat L}_m}
- | \expect{{\hat L}_m} |^2 \right) \ket\psi dt \nonumber\\
&& + \sum_m \left( \ket{\psi_m} - \ket\psi \right) dN_m,
\label{ortho_jumps}
\end{eqnarray}
where the $dN_m$ now represent a stochastic jump process,
\begin{equation}
dN_m dN_n = \delta_{mn} dN_m,\ \ {\rm M}(dN_m) = r_m dt
\end{equation}
and the set of states $\ket{\psi_m}$ are mutually orthogonal and
orthogonal to $\ket\psi$. The formula for these orthogonal states
and their jump rates $r_m$ is complicated; fortunately, for the purposes
of this paper we will not need to worry about it. The deterministic
portion of equation (\ref{ortho_jumps}) is identical to that of the
QSD equation expressed in Stratonovich rather than It\^o form
\cite{Rigo}.
These orthogonal jumps are in some ways the most economical unraveling
of the master equation, in that these jumps occur fairly infrequently
but cause a large change in the state.
This unraveling also has close ties to decoherent histories, as we
shall see. However, relatively little work has been
done on the connection between ortho-jumps and measurements
\cite{Breslin1,Breslin2}.
An important result is that ortho-jumps arise from choosing the
Schmidt decomposition of (\ref{purification}), where the $\ket{a_i}$
as well as the $\ket{b_i}$ are all orthogonal, and measuring the
eigenbasis $\ket{b_i}$. This measurement gives the maximum average
information about the system, but requires a dynamic measurement
scheme, since the basis $\ket{b_i}$ will in general be different
at different times.
This form of ortho-jumps is also explicitly norm-preserving, which
results in a nonlinear stochastic equation, just as for QSD.
Unlike the other equations presented, there is no completely linear
version of this unraveling; the orthogonalization procedure used in
determining the $\ket{\psi_m}$ is intrinsically nonlinear. However,
for certain special problems these $\ket{\psi_m}$ are linear functions
of $\ket\psi$. In this special case it is possible to find a linear
version of ortho-jumps which does not preserve the norm, and which
is closely related to the standard quantum jump equation. We will
examine such a case below.
All of these unravelings correspond to the same Markovian master
equation. If considered simply as a way of solving for the density
operator, it doesn't matter which unraveling one picks; averaging over
any set of trajectories will give the same final result. But
if one actually has a physical description of the environment, and
of the measurements being performed on it, this equivalence
is broken, and a particular choice of unraveling is singled out. This
is the case in this paper, where we assume a particular (albeit simplified)
description of the environment, including the effects of a measurement
apparatus, to model photodetection.
\subsection{Decoherent (or Consistent) Histories}
Let us now turn to consistent histories.
In nonrelativistic quantum mechanics one can specify a set of
possible histories by choosing a sequence of times $t_1, t_2, \ldots, t_n$
and at each time $t_i$ a complete set of orthogonal projection operators
$\{{\hat {\cal P}}^j_{\alpha_j}(t_j)\}$, such that
\begin{equation}
{\hat {\cal P}}^j_{\alpha_j}(t_j) {\hat {\cal P}}^j_{\alpha'_j}(t_j)
= {\hat {\cal P}}^j_{\alpha_j}(t_j) \delta_{\alpha_j \alpha'_j},\ \
\sum_{\alpha_j} {\hat {\cal P}}^j_{\alpha_j}(t_j) = {\hat 1},
\end{equation}
where the ${\hat {\cal P}}$'s are Heisenberg operators.
These projectors represent a complete set of exclusive alternatives at
the given times. A single history corresponds to a choice of
one projection operator ${\hat {\cal P}}^i_{\alpha_i}(t_i)$ at each time $t_i$. This
history can be represented by the sequence of indices $\alpha_1, \ldots,
\alpha_n$, which I will denote by $h$.
This is not the most general type of history. For instance, one
can make the set of projections at a later time depend on the choices
$\{\alpha_i\}$ at earlier times, making the histories
{\it branch-dependent}. We will not need this extra generality in this
paper, but it can be useful in describing measurements which are
conditioned on the results of earlier measurements.
The decoherence (or consistency) criterion is
described by the {\it decoherence functional}
$D[h,h']$, a complex functional on pairs of histories.
Two histories $h$ and $h'$
are said to {\it decohere} if they satisfy the relation
\begin{equation}
D[h,h'] = p(h) \delta_{hh'},
\label{decoherence}
\end{equation}
where $p(h)$ is defined to be the probability of history $h$.
A set of histories $\{h\}$
is said to be complete and consistent if all pairs of histories satisfy
(\ref{decoherence}) and their probabilities sum to unity.
We define a {\it history operator} ${\hat C}_h$
\begin{equation}
{\hat C}_h = {\hat {\cal P}}^n_{\alpha_n}(t_n) {\hat {\cal P}}^{n-1}_{\alpha_{n-1}}(t_{n-1}) \cdots
{\hat {\cal P}}^1_{\alpha_1}(t_1).
\label{history_op}
\end{equation}
In terms of this operator the decoherence functional becomes
\begin{equation}
D[h,h'] = {\rm Tr} \{ {\hat C}_h \rho_0 {\hat C}_{h'}^\dagger \},
\end{equation}
where $\rho_0$ is the initial state. If $\rho_0$ is pure,
$\rho_0 = \ket\psi\bra\psi$, then $D[h,h']$ is an inner product
\begin{equation}
D[h,h'] = \bra\psi{\hat C}_{h'}^\dagger {\hat C}_h\ket\psi,
\label{inner_product}
\end{equation}
and the consistency condition (\ref{decoherence})
amounts to the assertion that the states
$\{{\hat C}_h\ket\psi\}$ are orthogonal for different $h$, with the
probability of a history given by the norm of the corresponding state.
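This inner-product form is easy to evaluate explicitly in a toy model (my own illustration, not from the text): a system qubit whose basis state is copied into a record qubit before the first projection, so that histories differing at $t_1$ remain orthogonal at all later times:

```python
import numpy as np

# Two-time histories for a system qubit plus a record qubit.
# The first unitary copies the system basis into the record (a CNOT);
# the later evolution U2 acts on the system alone. Projections at both
# times are onto system basis states, and the history operators C_h are
# built in the Schrodinger-picture form given in the text.
I2 = np.eye(2)
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)      # system controls record

th = 0.7                                          # arbitrary later rotation
U2 = np.kron(np.array([[np.cos(th), -np.sin(th)],
                       [np.sin(th),  np.cos(th)]]), I2)

psi = np.kron(np.array([np.cos(0.3), np.sin(0.3)]), np.array([1.0, 0.0]))

def branch(a1, a2):
    # C_h |psi> for the history h = (a1, a2)
    return np.kron(P[a2], I2) @ U2 @ np.kron(P[a1], I2) @ CNOT @ psi

hs = [(a1, a2) for a1 in range(2) for a2 in range(2)]
D = {(h, hp): np.vdot(branch(*hp), branch(*h)) for h in hs for hp in hs}
probs = {h: D[(h, h)].real for h in hs}
```

Replacing the CNOT by the identity makes the off-diagonal terms generically nonzero, so the same set of projections would then fail the consistency condition; the record is doing the work.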
\subsection{Consistency of quantum trajectories}
It is often convenient to consider these projections in the Schr\"odinger
rather than the Heisenberg picture. In this case, the ${\hat {\cal P}}$'s are no longer
time-dependent, and instead the time-evolution is given explicitly:
\begin{equation}
{\hat C}_h = {\hat {\cal P}}^n_{\alpha_n} {\rm e}^{-i{\hat H}(t_n - t_{n-1})} {\hat {\cal P}}^{n-1}_{\alpha_{n-1}}
{\rm e}^{-i{\hat H}(t_{n-1} - t_{n-2})} \cdots {\hat {\cal P}}^1_{\alpha_1} {\rm e}^{-i{\hat H}(t_1 - t_0)}.
\label{schrodinger}
\end{equation}
Written in this form, it is clear that the state ${\hat C}_h\ket\psi$ has a
strong resemblance to our previous definition of a quantum trajectory.
This succession of Hamiltonian and projection terms resembles some kind
of pure state evolution, and if no single history
has probability 1 this evolution will have a stochastic component.
However, there are a number of important differences. The first
is the most obvious: in our definition of quantum trajectories, there was
no requirement that they should obey the decoherence criterion. This is
related to the fact that the decomposition (\ref{rho_expansion}) need not
be in terms of orthogonal states. More on this below.
The second difference is that quantum trajectories were
framed in terms of a split between system and environment degrees
of freedom, with the environment traced out of the equations of motion
in the Markovian approximation. These assumptions are not made in
consistent histories, which can be quite general.
The open systems case is important, however
\cite{FeynVern,Zurek,JoosZeh,CaldLegg}, and is worth treating in depth
in consistent histories. Most such treatments to date
\cite{GMHart2,Brun3,DowkHall} involve choosing a set of projection
operators on the system alone, while making
no assertions about the environment degrees of
freedom. That is, one specializes to projections of the form
\begin{equation}
{\hat {\cal P}} = {\hat {\cal P}}_{\rm sys} \otimes {\hat 1}_{\rm env},
\label{sys_projector}
\end{equation}
where ${\hat {\cal P}}_{\rm sys}$ is a projection operator in the Hilbert space of
the system and ${\hat 1}_{\rm env}$ is the identity operator in the Hilbert
space of the environment. The Markovian approximation will only be
valid for certain choices of environments and of the system/environment
interaction, of course; but if one is treating the same systems in both
the quantum trajectories and decoherent histories approach, the approximation
should be good in both cases. One ends up with a reduced description
in terms of the system alone, quite in the spirit of the master equation.
If we physically interpret the quantum trajectories as evolution
conditioned on measurements of the environment, we realize that
(\ref{sys_projector}) is not adequate to describe this situation. To
make statements about the state of the environment, we require
projectors onto the environment, not just the system. In fact, the
correct projectors in this case have the form
\begin{equation}
{\hat {\cal P}} = {\hat 1}_{\rm sys} \otimes {\hat {\cal P}}_{\rm env},
\label{env_projector}
\end{equation}
quite different from the usual discussion of decoherence in open systems.
Thus, to look for a set of histories equivalent to a particular
unraveling, we must retain in our description enough of the environment
to include suitable projections of the form (\ref{env_projector}).
As we shall see, in some cases this may be very little. To reduce
to a description of the system alone, a further tracing out of the
environment is then required.
As far as consistency is concerned, it has not been widely appreciated
that quantum trajectories (as they are commonly defined) must
obey a consistency condition of their own. This is easily seen
by considering the effects of environmental measurements. Rather
than tracing out the environment to get a master equation (\ref{master_eqn}),
we retain a complete description of the system and environment
(\ref{purification}). There is a Hamiltonian ${\hat H}$ for the system and
environment together, such that they tend to become entangled with time:
\begin{equation}
\ket{\Psi(t)} = {\rm e}^{-i{\hat H} t} \ket{\Psi(0)}.
\end{equation}
Suppose we now perform a series of $n$ measurements on the environment
spaced $\Delta t = t/n$ apart in time,
with outcomes corresponding to projections of the form (\ref{env_projector}).
The (unnormalized) joint state conditioned on these outcomes then becomes
\begin{equation}
\ket{\Psi_\xi} = {\hat {\cal P}}_{\xi_n} {\rm e}^{-i{\hat H} \Delta t} {\hat {\cal P}}_{\xi_{n-1}}
\cdots {\hat {\cal P}}_{\xi_1} \ket{\Psi(0)},
\end{equation}
with probability $p_\xi = \bracket{\Psi_\xi}{\Psi_\xi}$. We now want
to impose the condition (\ref{rho_expansion}). In order for this
to hold we must have
\begin{equation}
\sum_\xi {\rm Tr}_{\rm env} \left\{ \ket{\Psi_\xi}\bra{\Psi_\xi} \right\} =
\sum_{\xi,\xi'} {\rm Tr}_{\rm env} \left\{
\ket{\Psi_\xi}\bra{\Psi_\xi'} \right\},
\end{equation}
which implies
\begin{equation}
\sum_{\xi \ne \xi'} {\rm Tr}_{\rm env} \left\{
\ket{\Psi_\xi}\bra{\Psi_\xi'} \right\} = 0.
\label{traj_consistency}
\end{equation}
This equation is by no means automatically satisfied. It is, in fact,
a consistency condition rather like (\ref{decoherence}). In
some ways it is much weaker, since (\ref{traj_consistency}) is only
imposed on sums of off-diagonal terms rather than individual terms
(although it must be satisfied for all times $t$). In another way
it is stronger: the trace in (\ref{traj_consistency}) is only over
the environment, so that an entire operator on the system space is
required to vanish, not just the trace. This is rather like
the partial trace decoherence of Finkelstein \cite{Finkelstein},
discussed in section IV. In physical terms, (\ref{traj_consistency})
means that the measurements made must not alter the state of the
environment such that it then acts back and alters the dynamics
of the system. As we shall see, the quantum jump unraveling (like most
of the commonly considered unravelings) satisfies both (\ref{traj_consistency}) and
(\ref{decoherence}), and in fact satisfies a consistency criterion
stronger than either. The Born-Markov approximation which is generally
made in deriving the master equation explicitly assumes that the state
of the environment is not changed by interacting with the system; so
long as measurements are made only on the outgoing field,
(\ref{traj_consistency}) will be satisfied.
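The condition (\ref{traj_consistency}) can be checked explicitly in a toy model (my own illustration: one system qubit, one environment qubit, with the interaction a simple basis-copying unitary; here each off-diagonal term vanishes individually, which is the stronger term-by-term criterion alluded to above):

```python
import numpy as np

# One system qubit and one environment qubit. The interaction copies
# the system basis into the environment, which is then projectively
# measured in that basis, leaving a record. Each off-diagonal term
# Tr_env{ |Psi_xi><Psi_xi'| } vanishes as an operator on the system.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
psi0 = np.kron(np.array([0.6, 0.8]), np.array([1.0, 0.0]))  # system (x) env |0>
Psi = CNOT @ psi0

P_env = [np.kron(np.eye(2), np.diag([1.0, 0.0])),
         np.kron(np.eye(2), np.diag([0.0, 1.0]))]
branches = [Pe @ Psi for Pe in P_env]             # |Psi_xi>, unnormalized

def ptrace_env(op):
    # partial trace over the environment (second) qubit
    return op.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

off = ptrace_env(np.outer(branches[0], branches[1].conj()))
rho_sys = sum(ptrace_env(np.outer(b, b.conj())) for b in branches)
```

The diagonal terms reassemble the reduced density operator of the system, as required by (\ref{rho_expansion}).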
What makes a set of histories decohere? While a general characterization
is still the subject of research, common mechanisms of decoherence
have been widely studied in simple models, and are
now fairly well understood. Suppose that the state of the system
at the time $t_i$ becomes correlated with some degrees of freedom of
the environment described by a Hilbert subspace ${\cal H}_i$, and the
projection operator ${\hat {\cal P}}_{\alpha_i}^i$ singles out a particular state
of ${\cal H}_i$. If the state of these degrees of freedom is subsequently
unaltered by further interaction with the system, then different choices
of the ${\hat {\cal P}}_{\alpha_i}$ at time $t_i$ will render the different histories
orthogonal, and hence consistent, at all later times.
This gives us the following picture: the total Hilbert space for the
system and environment has the form ${\cal H} =
{\cal H}_{\rm sys} \otimes {\cal H}_1 \otimes \cdots \otimes {\cal H}_N$,
and the system only interacts with a single subsystem ${\cal H}_i$
at a given time $t_i$. We choose a set of projections $\{{\hat {\cal P}}_{\alpha_i}^i\}$
onto the state of ${\cal H}_i$ at each time. Clearly, any history
composed of such projections must decohere. These subsystems
${\cal H}_i$ are called ``generalized records'' by Gell-Mann and Hartle,
because they record the choice ${\hat {\cal P}}_{\alpha_i}^i$ for all later times;
they are ``generalized'' because the particular set of degrees of
freedom represented by ${\cal H}_i$ might be very complicated, and
inaccessible to an experimenter for all practical purposes
\cite{GMHart3}.
This is highly idealized, of course. In a real system it is usually
impossible to identify these subsystems ${\cal H}_i$; the choices of
projection are generally not onto these subsystems directly, but
instead depend on correlations in the state of the system and environment;
the interaction with the system does not have this precise,
time-limited form; nor does the system generally interact directly with
the generalized record. However, in some cases one {\it can} explicitly
identify physical systems which serve as generalized records (for
instance, photons which escape to infinity without interacting further);
and even where one cannot, it is not unreasonable to assume that
something corresponding to them is present \cite{Halliwell}. It
may be that in a real environment no subsystem remains isolated forever,
but isolation can hold effectively on
timescales very long compared to the duration of the experiment,
or even the lifetime of the universe. In particular, situations which
correspond to measurements can almost always be assumed to have some
set of generalized records. Sometimes these are easy to identify:
lines drawn on graph paper by a measurement apparatus are more than
sufficient to cause decoherence of the measurement results
at all later times.
\section{The model}
\subsection{Model of photodetection}
Photomultiplier tubes are very complicated devices, and their complexity
does little to illuminate the detection process. Therefore
I will use a detector model which is simpler to analyze, based on the
techniques used by Haroche et al.\ \cite{Haroche}
in studying cavity QED in microwave cavities. Models of this type
have been used before by a number of authors \cite{McElwaine,Kist}.
Figure 1 illustrates the setup.
The heart of our detector is a lossy cavity which is probed by a beam of
atoms. We assume that the atoms are fairly well localized, so that
their motion can be treated as essentially classical. One mode of this
cavity can be excited by photons leaking in from the outside; this mode
has creation and annihilation operators ${\hat b}^\dagger$ and ${\hat b}$. We assume
that the electronic states of the atoms are prepared in a superposition
of two levels, which we label $\ket0$ and $\ket1$. If there is an atom
in state $\ket1$ in the cavity it shifts the resonance frequency. Cavity
loss results from interaction between the mode and the internal degrees of
freedom of the cavity, which we model as a reservoir of harmonic
oscillators. The time-dependent Hamiltonian for this detector model is
\begin{equation}
{\hat H}_{\rm det}(t) = \omega_{\rm det} {\hat b}^\dagger{\hat b}
+ \sum_j {\rm w}(t-t_j) {\hat b}^\dagger{\hat b} \ket{1_j} \bra{1_j}
+ \sum_k \left[ \gamma_k ({\hat b} + {\hat b}^\dagger) ({\hat r}_k + {\hat r}^\dagger_k) + \omega_k {\hat r}^\dagger_k {\hat r}_k \right],
\label{detector_H}
\end{equation}
where $\omega_{\rm det}$ is the frequency of the cavity mode,
$\ket{1_j}\bra{1_j}$ is a projector onto state $\ket1$ of the $j$th
atom, ${\hat r}^\dagger_k$ and ${\hat r}_k$ are the creation and annihilation operators
for the $k$th reservoir oscillator, $\omega_k$ is its frequency, and
$\gamma_k$ is the coupling to the cavity mode. The function ${\rm w}(t)$
is a window function peaked around $0$, which represents the strength of
the interaction when the atom is inside the cavity.
It is assumed that the traversal
times $t_j$ of the atoms are spaced an average time $\tau$ apart, and
that the width of the window function ${\rm w}(t)$ is narrow compared to
$\tau$. The important quantity is the integral
\begin{equation}
2 \lambda \equiv \int_{-\infty}^\infty {\rm w}(t)\ dt.
\end{equation}
The effects of the reservoir coupling have been widely studied
and are very well understood. If we assume that the temperature of
the reservoir is low compared to the oscillator energy $\omega_{\rm det}$
and the frequencies $\omega_k$ are distributed appropriately, the
reservoir produces an effective dissipation rate $\Gamma_1$
\cite{FeynVern,CaldLegg}.
The incoming atoms are prepared in the state
$\ket{\psi_{\rm in}} = (\ket0 + \ket1)/\sqrt2$. If the cavity mode
is in the number state $\ket{n}$ the atoms leave the cavity in the
state $(\ket0 + \exp(-2i\lambda n)\ket1)/\sqrt2$. This entanglement
between the atoms and the cavity mode suppresses interference terms
between number states $n$ and $n'$
by a factor of $\cos(\lambda(n-n'))$ per atom, where we assume
$|\lambda(n-n')| < \pi/2$. With the atoms spaced approximately
$\tau$ apart on average, this corresponds to a decoherence rate
of $\Gamma_2 = -(1/\tau)\ln\cos(\lambda(n-n'))$.
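The relation between the per-atom suppression factor and the continuous rate $\Gamma_2$ is simple arithmetic, and can be checked directly (the values of $\lambda$, $n-n'$ and $\tau$ below are illustrative only, not tied to any experiment):

```python
import math

# Illustrative parameters: per-atom phase shift lambda_, number difference
# Dn = n - n', and mean atom spacing tau.
lambda_, Dn, tau = 0.3, 1, 0.05

# Suppression of the (n, n') interference term per atom:
per_atom = math.cos(lambda_ * Dn)

# Equivalent continuous decoherence rate Gamma_2 = -(1/tau) ln cos(lambda Dn):
Gamma2 = -math.log(per_atom) / tau

# After k atoms (elapsed time k*tau) the two descriptions coincide:
k = 40
suppression_discrete = per_atom ** k
suppression_continuous = math.exp(-Gamma2 * k * tau)
print(suppression_discrete, suppression_continuous)
```

By construction $\cos^k(\lambda\,\Delta n) = e^{-\Gamma_2 k\tau}$, so the discrete atom-by-atom picture and the rate description agree exactly at multiples of $\tau$.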
We assume that the dissipation time $1/\Gamma_1$ is long compared to the
decoherence time $1/\Gamma_2$, so that every photon that enters
the cavity persists long enough to be detected, but short compared to
the mean interval between photons entering the cavity, so that there is never
more than one photon in the cavity at a time, and there is essentially
no chance of a photon being coherently reabsorbed by the system from
the detector. This last point is important for consistency.
In that limit, we can trace out the atomic and reservoir degrees of
freedom, and get an equation for the cavity mode alone,
\begin{eqnarray}
{\dot\rho}_{\rm det} && = - i\omega_{\rm det} [{\hat b}^\dagger{\hat b},\rho_{\rm det}]
+ \Gamma_1 {\hat b} \rho_{\rm det} {\hat b}^\dagger
- {\Gamma_1\over2} \left({\hat b}^\dagger{\hat b}\rho_{\rm det}
+ \rho_{\rm det}{\hat b}^\dagger{\hat b} \right) \nonumber\\
&& + \Gamma_2 ({\hat b}^\dagger{\hat b}) \rho_{\rm det} ({\hat b}^\dagger{\hat b})
- {\Gamma_2\over2} \left( ({\hat b}^\dagger{\hat b})^2 \rho_{\rm det}
+ \rho_{\rm det} ({\hat b}^\dagger{\hat b})^2 \right).
\label{det_master}
\end{eqnarray}
This equation is valid on timescales long compared to $\tau$.
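One can integrate (\ref{det_master}) numerically in the two-level truncation of the cavity mode as a sanity check: the photon population should decay at $\Gamma_1$, and the coherence at $(\Gamma_1+\Gamma_2)/2$. A minimal sketch (illustrative parameter values):

```python
import numpy as np

# Two-level truncation of the cavity mode; all parameter values illustrative.
w_det, G1, G2 = 2.0, 0.5, 20.0           # omega_det, Gamma_1, Gamma_2

b = np.array([[0, 1], [0, 0]], complex)  # truncated annihilation operator
n = b.conj().T @ b                       # number operator b†b

def dissipator(c, rho):
    """Lindblad dissipator D[c] rho = c rho c† - {c†c, rho}/2."""
    cd = c.conj().T
    return c @ rho @ cd - 0.5 * (cd @ c @ rho + rho @ cd @ c)

def rhs(rho):
    return (-1j * w_det * (n @ rho - rho @ n)
            + G1 * dissipator(b, rho) + G2 * dissipator(n, rho))

psi = np.array([1, 1], complex) / np.sqrt(2)   # (|0> + |1>)/sqrt(2)
rho = np.outer(psi, psi.conj())

T, steps = 0.5, 5000
dt = T / steps
for _ in range(steps):                   # classical 4th-order Runge-Kutta
    k1 = rhs(rho)
    k2 = rhs(rho + 0.5 * dt * k1)
    k3 = rhs(rho + 0.5 * dt * k2)
    k4 = rhs(rho + dt * k3)
    rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Population decays at Gamma_1; the coherence decays at (Gamma_1 + Gamma_2)/2.
pop_expected = 0.5 * np.exp(-G1 * T)
coh_expected = 0.5 * np.exp(-(G1 + G2) / 2 * T)
print(rho[1, 1].real, abs(rho[0, 1]))
```

The same $(\Gamma_1+\Gamma_2)/2$ decay of the off-diagonal components reappears later in the component equations for the system plus output mode.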
In actual experiments the atoms are themselves measured as they
leave the cavity, but
this is unimportant for our present purpose. We can imagine them flying
out into space forever, where they form (in the terminology of Gell-Mann
and Hartle) a generalized record of the detector. Indeed, as we shall
see in section V, the presence of the detector itself is not vital
to derive a quantum trajectory description using decoherent histories.
\subsection{System and output mode}
Consider a quantum system with Hilbert space ${\cal H}_1$, which is
isolated except for a single channel of decay---an interaction
with an external ``output mode,'' which is continuously monitored.
${\cal H}_2$ is the
Hilbert space of the output mode---the cavity mode of (\ref{det_master}).
The combined state of the system plus output mode lies in the product
Hilbert space ${\cal H}_1 \otimes {\cal H}_2$.
This reduced model is illustrated in Figure 2.
The measuring device produces two important
effects. The first is dissipation. Excitations of
the output mode will be absorbed by the measuring device at a rate
$\Gamma_1$ which we assume to be rapid compared to the interaction rate
between the system and output mode.
The absorption time $1/\Gamma_1$ limits the time-resolution of the detector.
The second effect is decoherence. As the state of the output
mode becomes correlated with the atomic
degrees of freedom in the detector, the phase coherence between
the ground and excited states of the output mode is lost.
This loss of coherence is far quicker than the actual rate of energy loss.
The decoherence rate is $\Gamma_2 \gg \Gamma_1$.
Assume a linear interaction between the system and the output mode,
and go to an interaction picture in the rotating wave approximation.
This lets us remove the Hamiltonian of the output mode, and gives
an interaction Hamiltonian
\begin{equation}
{\hat H}_I = \kappa ( {\hat a}^\dagger \otimes {\hat b} + {\hat a} \otimes {\hat b}^\dagger ).
\end{equation}
The total Hamiltonian is
\begin{equation}
{\hat H} = {\hat H}_0 \otimes {\hat 1}
+ \kappa ( {\hat a}^\dagger \otimes {\hat b} + {\hat a} \otimes {\hat b}^\dagger ),
\label{Hamiltonian}
\end{equation}
where ${\hat a}$ and ${\hat b}$ (${\hat a}^\dagger$ and ${\hat b}^\dagger$) are the lowering (raising)
operators for ${\cal H}_1$ and ${\cal H}_2$, respectively. We want
dissipation and decoherence to be rapid compared to
the transfer rate between system and output mode. This
requires that the system should not be too highly excited.
The hierarchy of evolution rates is $\Gamma_2 \gg \Gamma_1
\gg \kappa \expect{{\hat a}^\dagger{\hat a}}$. We thus have a system with separated
timescales.
The system plus output mode obeys a Markovian master equation:
\begin{eqnarray}
{\dot\rho} &=& - i [{\hat H},\rho] + \Gamma_1 {\hat b} \rho {\hat b}^\dagger
- {\Gamma_1\over2} ({\hat b}^\dagger{\hat b}\rho
+ \rho{\hat b}^\dagger{\hat b}) \nonumber\\
&& + \Gamma_2 {\hat n}_2 \rho {\hat n}_2 - {\Gamma_2\over2} ({\hat n}_2^2 \rho
+ \rho {\hat n}_2^2) = {\cal L} \rho,
\label{total_master}
\end{eqnarray}
where $\rho$ is the density matrix for the combined system and output
mode, and ${\cal L}$ is the Liouville superoperator.
The operator ${\hat n}_2 = {\hat b}^\dagger{\hat b}$ acts on the output mode.
Equation (\ref{total_master}) is linear, and so can be formally solved:
\begin{equation}
\rho(t_2) = \exp\biggl\{ {\cal L}(t_2 - t_1) \biggr\} \rho(t_1).
\end{equation}
Since the output mode is heavily damped, we simplify the problem
by retaining only the two lowest states $\ket0$ and $\ket1$,
treating ${\cal H}_2$ as a two-level system. We then
expand the density matrix $\rho$ explicitly in terms of its components
in ${\cal H}_1$ and ${\cal H}_2$:
\begin{equation}
\rho(t) = \rho_{00}(t) \otimes \ket0\bra0 + \rho_{01}(t) \otimes \ket0\bra1
+ \rho_{10}(t) \otimes \ket1\bra0 + \rho_{11}(t) \otimes \ket1\bra1,
\end{equation}
where the $\rho_{ij}$ are operators on ${\cal H}_1$ and the $\ket{i}\bra{j}$
on ${\cal H}_2$. In terms of these components the master equation becomes
\begin{eqnarray}
{\dot\rho}_{00} = && - i[{\hat H}_0,\rho_{00}] - i \kappa {\hat a}^\dagger \rho_{10}
+ i \kappa \rho_{01} {\hat a} + \Gamma_1 \rho_{11}, \nonumber\\
{\dot\rho}_{01} = && - i[{\hat H}_0,\rho_{01}] - i \kappa {\hat a}^\dagger \rho_{11}
+ i \kappa \rho_{00} {\hat a}^\dagger - {\Gamma_1+\Gamma_2\over2} \rho_{01}
= {\dot\rho}_{10}^\dagger, \nonumber\\
{\dot\rho}_{11} = && - i[{\hat H}_0,\rho_{11}] - i \kappa {\hat a} \rho_{01}
+ i \kappa \rho_{10} {\hat a}^\dagger - \Gamma_1 \rho_{11}.
\end{eqnarray}
This model may seem highly simplified compared to an actual
photodetection experiment, but it captures most of the essential
physical principles without bogging down in unnecessary detail. To
discuss photoemission, it is necessary to include some of the environment
degrees of freedom explicitly. This is the function served by the
output mode, which is about as simple as a physical system can be.
The other important feature is the separation of
timescales between the measuring device degrees of
freedom and the effective dynamical timescale of the system.
The usual approximation of ideal measurements as instantaneous depends
on this separation of timescales.
This sort of model has been used in the past to study the spectrum
of emissions and time-correlation functions of an open optical system
\cite{Schack,Brun2}.
The important element in analyzing this model is its time evolution. Given
that $\Gamma_1 \ll \Gamma_2$,
it is convenient to expand the time-evolution superoperator
in the following form:
\begin{eqnarray}
&& {\rm e}^{{\cal L}\delta t} = {\rm e}^{{\cal L}_2 \delta t}
+ \int_0^{\delta t} dt' {\rm e}^{{\cal L}_2(\delta t - t')} {\cal L}_1 {\rm e}^{{\cal L}_2 t'} \nonumber\\
&& + \int_0^{\delta t} dt' \int_{t'}^{\delta t} dt'' {\rm e}^{{\cal L}_2(\delta t - t'')}
{\cal L}_1 {\rm e}^{{\cal L}_2(t'' - t')} {\cal L}_1 {\rm e}^{{\cal L}_2 t'} + \cdots,
\label{expansion}
\end{eqnarray}
where multiplication of superoperators is composition, with the earliest
rightmost. Second-order terms are all that will be needed in this paper.
Here the terms of the master equation have
been separated:
\begin{equation}
{\cal L} = {\cal L}_1 + {\cal L}_2,
\end{equation}
with
\begin{equation}
{\cal L}_1\rho = - i [{\hat H},\rho] + \Gamma_1 {\hat b}\rho{\hat b}^\dagger
- {\Gamma_1\over2}({\hat b}^\dagger{\hat b}\rho + \rho{\hat b}^\dagger{\hat b}),
\end{equation}
and
\begin{equation}
{\cal L}_2\rho = \Gamma_2 {\hat n}_2 \rho {\hat n}_2
- {\Gamma_2\over2} ( {\hat n}_2^2 \rho + \rho {\hat n}_2^2 ).
\end{equation}
The effects of the superoperators ${\cal L}_1$ and ${\rm e}^{{\cal L}_2 t}$ are
given by
\begin{eqnarray}
({\cal L}_1\rho)_{00} && = - i[{\hat H}_0,\rho_{00}] - i\kappa{\hat a}^\dagger\rho_{10}
+ i\kappa\rho_{01}{\hat a} + \Gamma_1 \rho_{11}, \nonumber\\
({\cal L}_1\rho)_{01} && = - i[{\hat H}_0,\rho_{01}] - i\kappa{\hat a}^\dagger\rho_{11}
+ i\kappa\rho_{00}{\hat a}^\dagger - {\Gamma_1\over2} \rho_{01}
= ({\cal L}_1\rho)_{10}^\dagger, \nonumber\\
({\cal L}_1\rho)_{11} && = - i[{\hat H}_0,\rho_{11}] - i\kappa{\hat a}\rho_{01}
+ i\kappa\rho_{10}{\hat a}^\dagger - \Gamma_1 \rho_{11}, \nonumber\\
({\rm e}^{{\cal L}_2 t}\rho)_{00} && = \rho_{00}, \nonumber\\
({\rm e}^{{\cal L}_2 t}\rho)_{01} && = ({\rm e}^{- \Gamma_2 t/2})\rho_{01} =
({\rm e}^{{\cal L}_2 t}\rho)_{10}^\dagger, \nonumber\\
({\rm e}^{{\cal L}_2 t}\rho)_{11} && = \rho_{11}.
\label{evolutions}
\end{eqnarray}
Since the superoperator ${\rm e}^{{\cal L}_2 t}$ is diagonal in the components of
$\rho$, it is particularly simple to insert these expressions into the
expansion (\ref{expansion}) and carry out the integrals explicitly.
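The structure of the expansion (\ref{expansion}) is easiest to verify in the special case of commuting (scalar) generators, where the time-ordered integrals can be done in closed form and the second-order truncation error is manifestly third order in $\delta t$. A sketch with illustrative numbers:

```python
import math

def expansion_error(L1, L2, dt):
    """Error of the second-order truncation for scalar (commuting)
    generators, where the integrals are exact:
    int_0^dt e^{L2(dt-t')} L1 e^{L2 t'} dt' = L1 dt e^{L2 dt}, and the
    double integral gives (L1 dt)^2/2 e^{L2 dt}."""
    exact = math.exp((L1 + L2) * dt)
    approx = math.exp(L2 * dt) * (1 + L1 * dt + (L1 * dt) ** 2 / 2)
    return abs(exact - approx)

# L2 plays the fast (dephasing) part, L1 the slow part; values illustrative.
e1 = expansion_error(0.3, -2.0, 0.1)
e2 = expansion_error(0.3, -2.0, 0.05)
print(e1, e2, e1 / e2)  # halving dt shrinks the error by roughly 8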
If we examine the evolution after a time $\delta t$ where $\Gamma_1 \delta t
\ll 1 \ll \Gamma_2 \delta t$, we see that the off-diagonal terms
$\rho_{01}, \rho_{10}$ are highly suppressed:
\begin{eqnarray}
({\rm e}^{{\cal L}\delta t}\rho)_{00} && \approx \rho_{00} - i[{\hat H}_0,\rho_{00}]\delta t
+ \Gamma_1 \rho_{11} \delta t
+ {2\kappa^2\delta t\over\Gamma_2}(2{\hat a}^\dagger\rho_{11}{\hat a}
- {\hat a}^\dagger{\hat a}\rho_{00} - \rho_{00}{\hat a}^\dagger{\hat a}) \nonumber\\
&& + {2\over\Gamma_2}(i\kappa\rho_{01}{\hat a}
- i\kappa{\hat a}^\dagger\rho_{10} ) - {2i\delta t\over\Gamma_2}[{\hat H}_0,i\kappa\rho_{01}{\hat a}
- i\kappa{\hat a}^\dagger\rho_{10} ]
- {2\Gamma_1\delta t\over\Gamma_2}(i\kappa{\hat a}\rho_{01}
- i\kappa\rho_{10}{\hat a}^\dagger) \nonumber\\
&& - [{\hat H}_0,[{\hat H}_0,\rho_{00}]]\delta t^2/2 - \Gamma_1\rho_{11}\delta t^2/2
- i [{\hat H}_0,\Gamma_1\rho_{11}]\delta t^2 + O(\kappa^2/\Gamma_2^2),
\label{diag0}
\end{eqnarray}
\begin{equation}
({\rm e}^{{\cal L}\delta t}\rho)_{01} \approx {2\over\Gamma_2} (i\kappa\rho_{00}{\hat a}^\dagger
- i\kappa{\hat a}^\dagger\rho_{11}) - {2i\kappa\delta t\over\Gamma_2}
(- i {\hat a}^\dagger[{\hat H}_0,\rho_{11}] + i [{\hat H}_0,\rho_{00}]{\hat a}^\dagger
- \Gamma_1 [{\hat a}^\dagger,\rho_{11}] ),
\label{off_diag}
\end{equation}
\begin{eqnarray}
({\rm e}^{{\cal L}\delta t}\rho)_{11} && \approx \rho_{11} - i[{\hat H}_0,\rho_{11}]\delta t
- \Gamma_1 \rho_{11} \delta t
+ {2\kappa^2\delta t\over\Gamma_2}(2{\hat a}\rho_{00}{\hat a}^\dagger
- {\hat a}{\hat a}^\dagger\rho_{11} - \rho_{11}{\hat a}{\hat a}^\dagger) \nonumber\\
&& + {2\over\Gamma_2}(i\kappa\rho_{10}{\hat a}^\dagger
- i\kappa{\hat a}\rho_{01} ) - {2i\delta t\over\Gamma_2}[{\hat H}_0,i\kappa\rho_{10}{\hat a}^\dagger
- i\kappa{\hat a}\rho_{01} ]
+ {2\Gamma_1\delta t\over\Gamma_2}(i\kappa{\hat a}\rho_{01}
- i\kappa\rho_{10}{\hat a}^\dagger) \nonumber\\
&& - [{\hat H}_0,[{\hat H}_0,\rho_{11}]]\delta t^2/2 + \Gamma_1\rho_{11}\delta t^2/2
+ i [{\hat H}_0,\Gamma_1\rho_{11}]\delta t^2 + O(\kappa^2/\Gamma_2^2).
\label{diag1}
\end{eqnarray}
The off-diagonal terms $\rho_{01},\rho_{10}$ will always be of order
$O(\kappa/\Gamma_2)$. We can therefore consider an approximate set of
differential equations in terms of $\rho_{00}$ and $\rho_{11}$ alone:
\begin{eqnarray}
{\dot\rho_{00}} && = - i {\H_{\rm eff}} \rho_{00} + i \rho_{00} {\H_{\rm eff}}^\dagger
+ \gamma {\hat a}^\dagger\rho_{11}{\hat a} + \Gamma_1 \rho_{11}, \nonumber\\
{\dot\rho_{11}} && = - i {\H_{\rm eff}} \rho_{11} + i \rho_{11} {\H_{\rm eff}}^\dagger
+ \gamma {\hat a}\rho_{00}{\hat a}^\dagger
- (\Gamma_1 + \gamma) \rho_{11},
\label{intermediate}
\end{eqnarray}
where $\gamma = 4\kappa^2/\Gamma_2$ and the effective Hamiltonian
\begin{equation}
{\H_{\rm eff}} = {\hat H}_0 - i {\gamma\over2} {\hat a}^\dagger{\hat a}
\end{equation}
is the same as that which appears in the quantum jumps formalism.
This equation (\ref{intermediate}) is valid on timescales long compared
to $1/\Gamma_2$. We examine it further in section IIID below.
If $\Gamma_1$ is large compared to the other terms of the equation, then
we can make the same sort of argument to show that $\rho_{11}$ will
be highly suppressed (by a factor of roughly $\gamma/\Gamma_1$) compared
to $\rho_{00}$.
In this limit we can therefore adiabatically eliminate
all components other than $\rho_{00}$ \cite{Wiseman2}. If we then
consider a reduced density matrix ${\tilde\rho}$
for the system alone, without the
output mode, it will be essentially equal to $\rho_{00}$
(with a correction of order $\gamma/\Gamma_1$ from $\rho_{11}$).
The equation for ${\tilde\rho}$ then becomes
\begin{equation}
{\dot{\tilde\rho}} = - i [{\hat H}_0,{\tilde\rho}]
+ \gamma {\hat a} {\tilde\rho} {\hat a}^\dagger
- (\gamma/2) {\hat a}^\dagger{\hat a} {\tilde\rho}
- (\gamma/2) {\tilde\rho} {\hat a}^\dagger{\hat a},
\label{adiabatic}
\end{equation}
to first order in $\gamma$.
This equation holds good on timescales long
compared to $1/\Gamma_1$.
Thus, in the adiabatic limit we see that this indirect measurement
scheme for the total system and output mode
does reproduce the usual master equation
(\ref{cavity}) for the system alone. Because
$\gamma$ is small, the damping is weak.
This weakness is related to the quantum Zeno effect: if the
detector had infinite time resolution, the system would
never emit a photon at all \cite{Misra}.
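This reduction can be checked numerically by exponentiating the full Liouvillian of (\ref{total_master}) for a two-level system coupled to the two-level output mode, and comparing the system's excited-state population with the adiabatic prediction $e^{-\gamma t}$. The sketch below uses illustrative parameters obeying $\Gamma_2 \gg \Gamma_1 \gg \kappa$, and a hand-rolled matrix exponential; agreement is expected only up to corrections of order $\Gamma_1/\Gamma_2$:

```python
import numpy as np

# Illustrative parameters with Gamma_2 >> Gamma_1 >> kappa <a†a>.
kappa, G1, G2 = 5.0, 20.0, 400.0
gamma = 4 * kappa**2 / G2                 # effective decay rate

a = np.array([[0, 1], [0, 0]], complex)   # system lowering operator
b = a.copy()                              # output mode (two-level truncation)
I2 = np.eye(2)
A = np.kron(a, I2)                        # a ⊗ 1   (system ⊗ mode ordering)
B = np.kron(I2, b)                        # 1 ⊗ b
N2 = B.conj().T @ B                       # 1 ⊗ b†b
H = kappa * (A.conj().T @ B + A @ B.conj().T)   # H_0 = 0

def liouvillian(H, cs, rates):
    """Matrix of the Lindblad generator on column-stacked rho."""
    d = H.shape[0]
    Id = np.eye(d)
    L = -1j * (np.kron(Id, H) - np.kron(H.T, Id))
    for c, r in zip(cs, rates):
        cd = c.conj().T
        L = L + r * (np.kron(c.conj(), c)
                     - 0.5 * np.kron(Id, cd @ c)
                     - 0.5 * np.kron((cd @ c).T, Id))
    return L

def expm(M):
    """Scaling-and-squaring Taylor exponential (avoids external libraries)."""
    nrm = np.linalg.norm(M, np.inf)
    s = max(0, int(np.ceil(np.log2(nrm))) + 1) if nrm > 0 else 0
    X = M / 2**s
    E = np.eye(M.shape[0], dtype=complex)
    term = np.eye(M.shape[0], dtype=complex)
    for k in range(1, 25):
        term = term @ X / k
        E = E + term
    for _ in range(s):
        E = E @ E
    return E

L = liouvillian(H, [B, N2], [G1, G2])
rho0 = np.zeros((4, 4), complex)
rho0[2, 2] = 1.0                          # |e> ⊗ |0>, basis |g0>,|g1>,|e0>,|e1>
T = 4.0
rho_T = (expm(L * T) @ rho0.reshape(-1, order='F')).reshape((4, 4), order='F')

pop = (rho_T[2, 2] + rho_T[3, 3]).real    # system excited-state population
print(pop, np.exp(-gamma * T))
```

With these values the exact population differs from $e^{-\gamma t}$ by a few percent, the expected size of the neglected $\Gamma_1/\Gamma_2$ corrections (the exact rate is closer to $4\kappa^2/(\Gamma_1+\Gamma_2)$).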
\subsection{Quantum jumps and continuous measurements}
We can unravel equation (\ref{adiabatic}) as described in section
IIB into stochastic trajectories $\ket{{\tilde \psi}(t,\xi)}$.
By averaging $\ket{\tilde\psi}\bra{\tilde\psi}$ over all realizations of
$\xi(t)$ with an appropriate probability measure (\ref{jumps_prob}),
one sees that it does reproduce the master equation
(\ref{adiabatic}) as required \cite{Gardiner}.
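The unraveling itself is easy to simulate for a two-level system with ${\hat H}_0 = 0$: between jumps the state evolves under the non-Hermitian part of ${\H_{\rm eff}}$ and is renormalized; jumps apply ${\hat a}$ at rate $\gamma\expect{{\hat a}^\dagger{\hat a}}$. A minimal Monte Carlo sketch (illustrative parameters; the ensemble average should reproduce $\rho_{ee}(t) = \rho_{ee}(0)\,e^{-\gamma t}$):

```python
import math
import random

# Quantum-jump unraveling for a two-level system, H_0 = 0, a = sigma_-.
random.seed(1)
gamma, T, steps, ntraj = 1.0, 1.0, 400, 4000
dt = T / steps

excited_sum = 0.0
for _ in range(ntraj):
    cg, ce = 1 / math.sqrt(2), 1 / math.sqrt(2)   # (|g> + |e>)/sqrt(2)
    for _ in range(steps):
        p_jump = gamma * abs(ce) ** 2 * dt        # gamma <a†a> dt
        if random.random() < p_jump:
            cg, ce = 1.0, 0.0                     # jump: apply a, renormalize
        else:
            ce *= 1 - gamma * dt / 2              # no-jump: exp(-gamma a†a dt/2)
            norm = math.sqrt(abs(cg) ** 2 + abs(ce) ** 2)
            cg, ce = cg / norm, ce / norm
    excited_sum += abs(ce) ** 2

mean_excited = excited_sum / ntraj
print(mean_excited, 0.5 * math.exp(-gamma * T))   # trajectory average vs master equation
```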
The master equation (\ref{adiabatic}) is valid only as long as
the Markovian approximation remains good. In the case of our toy model,
this means that it is valid only on timescales longer than
$1/\Gamma_1$. Thus, rather than a jump occurring at a time
$t_i$, it is more correct to consider the jump as occurring during an interval
$\Delta t \sim 1/\Gamma_1$ centered on $t_i$. For comparison
to experiment this qualification is unimportant,
but it will prove important in making
comparisons to consistent histories.
In the context of photon-counting experiments one can give a
simple physical interpretation to the individual quantum jump trajectories,
as the state of the system conditioned on the continuous measurement
record from the photon counter. As time passes without the
detection of a photon we gain information about the state of the system;
the lower states become more probable relative to the higher states,
an effect produced by the non-Hermitian part of the effective
Hamiltonian. The jumps represent actual photon detections, in which both
the state of the system and the state of our knowledge change abruptly.
Consider the system plus output mode with Hamiltonian (\ref{Hamiltonian}),
and suppose that von Neumann measurements of the observable ${\hat n}_2$
are performed repeatedly on the output mode, separated by time intervals
$1/\Gamma_2$. If we average over the two possible measurement outcomes,
then the resulting mixed state is
\begin{equation}
\rho \rightarrow {\hat {\cal P}}_0 \rho {\hat {\cal P}}_0 + {\hat {\cal P}}_1 \rho {\hat {\cal P}}_1,
\label{project}
\end{equation}
where ${\hat {\cal P}}_0$ and ${\hat {\cal P}}_1$ are projections onto the states $\ket0$, $\ket1$
of the output mode. The effect of this repeated measurement is to rapidly
suppress the off-diagonal terms of the density operator $\rho$.
The $\Gamma_2$ terms in the master equation (\ref{total_master}) have
the same effect as (\ref{project}) on time scales long compared to
$1/\Gamma_2$, both giving rise to (\ref{intermediate}) in this limit.
One can thus think of these terms being a continuous
approximation to a series of repeated ideal measurements. However,
terms of this form also arise generically in the study of systems interacting
with environments \cite{Zurek}.
They are not unique to measurements. Indeed, this
sort of description can be considered as a model of the measurement
process itself: instead of taking place instantaneously, the measurement
occurs over a short period of time, as the measured system becomes
entangled with the many degrees of freedom of the measuring device.
One must include as well the $\Gamma_1$ terms which cause the emitted
photons to be absorbed (either by the environment or by the measuring
device, depending on the situation described). This could be modeled
simply by having the repeated measurements reset the output mode to $\ket0$
after measuring it, with a rate of $\Gamma_1$ (i.e., after every $n$th
measurement where $n=\Gamma_2/\Gamma_1$). The exact value of $\Gamma_1$
is not very important. It represents the time-resolution of the
measurement device, and all that is required is that photons not be
emitted more rapidly than they can be absorbed. Indeed, if one merely
assumes that the measurement device resets the state to $\ket0$ immediately
after each measurement, but that the record cannot resolve individual
clicks within an interval smaller than $1/\Gamma_1$, the probability of
a given measurement record will be exactly given by (\ref{jumps_prob});
the evolution of the system conditioned on the measurement record follows
the quantum jumps formalism given in section IIB.
\subsection{Ortho-jumps}
Consider the coupled equations (\ref{intermediate}), which
arise by averaging the full master equation
(\ref{total_master}) over times long compared to $1/\Gamma_2$.
These equations are equivalent to a Lindblad master equation for the
system plus output mode
\begin{equation}
{\dot \rho} = - i [{\hat H}_0,\rho] + \sum_{m=1}^3 {\hat L}_m \rho {\hat L}_m^\dagger
- {1\over2} {\hat L}_m^\dagger {\hat L}_m \rho
- {1\over2} \rho {\hat L}_m^\dagger {\hat L}_m,
\label{intermediate_master}
\end{equation}
with Lindblad operators
\begin{eqnarray}
{\hat L}_1 = \sqrt{\gamma}\, {\hat a} \otimes {\hat b}^\dagger, \nonumber\\
{\hat L}_2 = \sqrt{\gamma}\, {\hat a}^\dagger \otimes {\hat b}, \nonumber\\
{\hat L}_3 = \sqrt{\Gamma_1}\, {\hat 1} \otimes {\hat b}.
\end{eqnarray}
Any density operator of the form $\rho = \rho_{00} \ket0\bra0
+ \rho_{11} \ket1\bra1$ will remain of that form for all time.
We can now unravel this master equation into the orthogonal jump
trajectories of Di\'osi. For the moment, let us treat the deterministic
and jump terms separately. Let us also continue to neglect the higher
excitations of the output mode, so that it can be treated simply as
a two-level system. If the system begins in an initial state
$\ket\psi = \ket\phi \otimes \ket0$, then the ortho-jump equation
(\ref{ortho_jumps}) becomes
\begin{eqnarray}
\ket{d\psi} = \left( - i {\hat H}_0 \ket\phi
- {\gamma\over2} {\hat a}^\dagger{\hat a} \ket\phi
+ {\gamma\over2} \expect{{\hat a}^\dagger{\hat a}} \ket\phi \right) \otimes \ket0 dt
+ {\rm jumps}.
\label{ortho_eqn1}
\end{eqnarray}
If the system begins in the state $\ket\psi = \ket\phi \otimes \ket1$
then the equation is nearly identical:
\begin{eqnarray}
\ket{d\psi} = \left( - i {\hat H}_0 \ket\phi
- {\gamma\over2} {\hat a}^\dagger{\hat a} \ket\phi
+ {\gamma\over2} \expect{{\hat a}^\dagger{\hat a}} \ket\phi \right) \otimes \ket1 dt
+ {\rm jumps}.
\label{ortho_eqn2}
\end{eqnarray}
In {\it both cases} we see that the deterministic part of the evolution
is the same on the system part of the state, leaving the output mode
unchanged, and is equivalent to the deterministic part of the
nonlinear quantum jump equation (\ref{nonlinear_jumps}). What then
are the effects of the jumps?
A system in a state $\ket\psi$ can jump into any of a set of orthogonal
states which form a basis for the subspace spanned by the states
$\ket{\psi_j} = ({\hat L}_j - \expect{{\hat L}_j})\ket\psi$. If the system
is initially in the state $\ket\phi \otimes \ket0$ then only ${\hat L}_1$
makes a non-vanishing contribution to that space. Thus, there is only
one type of jump from that initial condition,
\begin{equation}
\ket\phi \otimes \ket0 \rightarrow
{\hat a}\ket\phi \otimes \ket1/\sqrt{\expect{{\hat a}^\dagger {\hat a}}}.
\label{up_jump}
\end{equation}
These jumps occur with a mean rate
$M_{\ket\psi}(dN_1) = \gamma\expect{{\hat a}^\dagger{\hat a}} dt$.
Thus, the effect of this
jump on the system is the same as a standard quantum jump, and they
occur with the same rate.
For a state $\ket\phi \otimes \ket1$ the other two Lindblad operators
${\hat L}_2$ and ${\hat L}_3$ can give rise to jumps. However, because
$\Gamma_1 \gg \gamma$, we can to an excellent approximation neglect the
jumps due to ${\hat L}_2$, which occur very rarely compared to jumps due
to ${\hat L}_3$. Making this approximation, the jumps take the form
\begin{equation}
\ket\phi \otimes \ket1 \rightarrow \ket\phi \otimes \ket0,
\label{down_jump}
\end{equation}
and occur with a mean rate $M(dN_3) = \Gamma_1\, dt$ independent of the state
$\phi$. These downward jumps happen much more rapidly than the upward
jumps due to ${\hat L}_1$, and a trajectory has a near-unity probability
of dropping back down to $\ket0$ within a time of order $1/\Gamma_1$
after jumping up to $\ket1$.
We can thus characterize the typical evolution of these trajectories.
They spend most of their time (by a ratio of roughly $\Gamma_1/\gamma$)
in the lower state $\ket0$, with the system evolving according to the
deterministic terms of equation (\ref{nonlinear_jumps}). At random
intervals, there is a jump in which the system is multiplied by ${\hat a}$
and the output mode jumps to $\ket1$. Within a random time of order
$1/\Gamma_1$ the output mode drops back down to $\ket0$, while the system
continues to evolve according to (\ref{nonlinear_jumps}) undisturbed.
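The occupation statistics of such a trajectory can be sketched as a two-state process with exponential waiting times: upward jumps at rate $\gamma\expect{{\hat a}^\dagger{\hat a}}$ (held fixed at $\gamma$ here for illustration, i.e.\ $\expect{{\hat a}^\dagger{\hat a}} = 1$) and downward jumps at rate $\Gamma_1$. The long-time fraction spent in $\ket1$ is then $\gamma/(\gamma+\Gamma_1) \approx \gamma/\Gamma_1$:

```python
import random

# Two-state dwell-time sketch of the output-mode occupation along an
# ortho-jump trajectory; parameters illustrative.
random.seed(2)
gamma, Gamma1 = 0.1, 10.0
T_total = 20000.0

t, state, time_up = 0.0, 0, 0.0
while t < T_total:
    rate = gamma if state == 0 else Gamma1   # up-rate gamma, down-rate Gamma_1
    dwell = min(random.expovariate(rate), T_total - t)
    if state == 1:
        time_up += dwell
    t += dwell
    state = 1 - state

frac_up = time_up / T_total
print(frac_up, gamma / (gamma + Gamma1))     # fraction of time in |1>
```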
If we trace out the output mode, we see that one of these ortho-jump
trajectories is exactly equivalent to a standard jump trajectory on
the system Hilbert space alone. Each standard jump trajectory will
have many ortho-jump trajectories equivalent to it, each corresponding
to a slightly different time for the output mode to return to the state
$\ket0$. However, as the exact time
has no effect on the system evolution, this multiplicity is
unimportant. The total probability of the ortho-jump trajectories
will be exactly the same as the probability of the equivalent standard
jump trajectory.
This many-to-one relationship is a simple example of a {\it coarse-graining};
the ortho-jump trajectories are more detailed than the standard jump
trajectories, because they include information about the state of the
output mode as well. This concept is very important in the decoherent
histories formulation of the same problem. What is more, it has already
been shown by Paz and Zurek and Di\'osi \cite{Diosi1,PazZurek}
that orthogonal jump trajectories
are equivalent to a particular set of decoherent histories. This new
equivalence to standard jumps (true at least in this simple model) makes
it easy to show how standard jumps can be represented by decoherent
histories.
Strictly speaking, the equivalence shown above in equations
(\ref{ortho_eqn1}--\ref{ortho_eqn2}) is for the nonlinear version
of both unravelings. As mentioned before, in general there is
no linear version of ortho-jumps. However, when neglecting jumps due
to coherent reabsorption we see that this system is a special case,
with all jumps between the orthogonal subspaces labeled by $\ket0$
and $\ket1$.
Because of this there is a linear version of ortho-jumps for this case,
produced simply by dropping all the nonlinear terms in equations
(\ref{ortho_eqn1}--\ref{ortho_eqn2}) and removing the renormalization
in the jump (\ref{up_jump}). The resulting equations are identical in
form to the linear version of standard jumps, but the jumps are no
longer strictly identified with ``clicks'' of a detector. The
coarse-grained version of this linear equation is equivalent to
standard linear quantum jumps on the system alone, just as in the
nonlinear case.
\section{Consistent histories description}
As described in section IID, a particular history
is given by choosing one projection ${\hat {\cal P}}^j_{\alpha_j}(t_j)$ at each
time $t_j$, specified by the sequence of indices $\{\alpha_j\}$ denoted
$h$ for short. The decoherence functional on a pair of histories
$h$ and $h'$ is then given by
\begin{equation}
D[h,h'] = {\rm Tr} \biggl\{ {\hat {\cal P}}^N_{\alpha_N}(t_N) \cdots
{\hat {\cal P}}^1_{\alpha_1}(t_1) \rho(t_0) {\hat {\cal P}}^1_{\alpha_1'}(t_1) \cdots
{\hat {\cal P}}^N_{\alpha_N'}(t_N) \biggr\},
\end{equation}
where $\rho(t_0)$ is the initial density matrix of the system
\cite{GMHart1}.
We specialize to the system plus output mode described in section III.
They are initially in the pure state
$\ket\Psi = \ket{\psi_0} \otimes \ket0$. Since
the degrees of freedom of the environment (e.g., the internal degrees of
freedom of the measuring device) have already been traced out, we
replace the simple Schr\"odinger evolution (\ref{schrodinger}) with the
Liouvillian evolution of master equation (\ref{total_master}),
according to the quantum regression theorem \cite{QRT}.
The decoherence functional for two histories $h$ and $h'$ then has the form
\begin{equation}
D[h,h'] = {\rm Tr} \biggl\{ {\hat {\cal P}}_{\alpha_N} {\rm e}^{{\cal L}\delta t}(
{\hat {\cal P}}_{\alpha_{N-1}} {\rm e}^{{\cal L}\delta t}( \cdots {\rm e}^{{\cal L}\delta t}(
{\hat {\cal P}}_{\alpha_1} \ket\Psi\bra\Psi {\hat {\cal P}}_{\alpha_1'} )
\cdots ) {\hat {\cal P}}_{\alpha'_{N-1}} ) {\hat {\cal P}}_{\alpha_N'} \biggr\},
\label{jump_functional}
\end{equation}
where ${\cal L}$ is the Liouville superoperator from (\ref{total_master}).
We consider histories composed
of the following Schr\"odinger projections:
\begin{equation}
{\hat {\cal P}}_0 = {\hat 1} \otimes \ket0\bra0,\ \
{\hat {\cal P}}_1 = {\hat 1} \otimes \ket1\bra1.
\end{equation}
These projections represent the absence or presence of a photon in the
output mode. These projections are spaced a short time $\delta t$
apart, and each history is composed of $N$ projections, representing a
total time $T = N\delta t$. A single history $h$ is given by the string
$\{\alpha_1,\alpha_2,\ldots,\alpha_N\}$, where $\alpha_j = 0,1$ indicates
whether a photon is present in the output mode at time $t_j = (j-1) \delta t$.
The time-evolution superoperators in (\ref{jump_functional})
tend to evolve pure states
to mixed states. This is counteracted by the effect of the
repeated projections ${\hat {\cal P}}_\alpha$, as we shall see. There are two
important issues to address within the consistent histories formalism:
the probabilities of histories (given by the diagonal terms of the
decoherence functional) and the decoherence of the set of histories as
a whole (given by the off-diagonal terms). We examine them separately.
\subsection{Probability of histories}
From the expressions (\ref{expansion}--\ref{evolutions}),
we can determine the character of the
different histories. The crucial choice is the size
of the spacing $\delta t$ between projections.
Too small and the histories will
not decohere. Too large and all we will see will be standard master
equation evolution, unresolved into trajectories. The interesting regime
is in the range
\begin{equation}
{1\over\Gamma_2} \ll \delta t \ll {1\over\Gamma_1}
\end{equation}
as described in equations (\ref{diag0}--\ref{intermediate}).
On this timescale, the $\Gamma_2$ terms are sufficient to ensure decoherence
while the effects of the $\Gamma_1$
terms are resolved into individual trajectories. The time-evolution
produced by the $\exp({\cal L}\delta t)$ superoperators is given by the
simple equations (\ref{intermediate}), equivalent to the averaged
Lindblad equation (\ref{intermediate_master}).
If the external mode is initially unexcited, with
$\rho = \rho_{00} \otimes \ket0 \bra0$, then
after evolving for a time $\delta t$ the state becomes
\begin{eqnarray}
({\rm e}^{{\cal L}\delta t}\rho)_{00} &&
\approx {\rm e}^{ - i ({\hat H}_0 - i\gamma{\hat a}^\dagger{\hat a}/2 ) \delta t} \rho_{00}
{\rm e}^{ i ({\hat H}_0 + i\gamma{\hat a}^\dagger{\hat a}/2 ) \delta t}, \nonumber\\
({\rm e}^{{\cal L}\delta t}\rho)_{01} && \approx O(\kappa/\Gamma_2), \nonumber\\
({\rm e}^{{\cal L}\delta t}\rho)_{11} && \approx \gamma {\hat a} \rho_{00} {\hat a}^\dagger \delta t.
\label{unexcited}
\end{eqnarray}
Here we see the appearance of the effective Hamiltonian ${\H_{\rm eff}}$, just
as in the quantum jump unraveling.
We can also consider the case when the external mode is initially excited,
with $\rho = \rho_{11} \otimes \ket1 \bra1$. After a time $\delta t$
the state becomes
\begin{eqnarray}
({\rm e}^{{\cal L}\delta t}\rho)_{00} && \approx \Gamma_1 \rho_{11} \delta t
+ \gamma {\hat a}^\dagger \rho_{11} {\hat a} \delta t, \nonumber\\
({\rm e}^{{\cal L}\delta t}\rho)_{01} && \approx O(\kappa/\Gamma_2), \nonumber\\
({\rm e}^{{\cal L}\delta t}\rho)_{11} &&
\approx (1 - (\Gamma_1 + \gamma)\delta t)
{\rm e}^{ - i {\H_{\rm eff}} \delta t} \rho_{11}
{\rm e}^{ i {\H_{\rm eff}}^\dagger \delta t}.
\label{excited}
\end{eqnarray}
Once again the effective Hamiltonian appears, together with two additional
effects. The first is the possibility that the photon in the excited
mode will be absorbed by the measuring device. The second (much smaller)
effect is the possibility that the photon will be coherently re-absorbed
by the system. This process is so weak as to be negligible within the
regime we are considering, and we will henceforth neglect it.
This is the same approximation made in the ortho-jumps unraveling
(\ref{intermediate_master}--\ref{down_jump}) by neglecting the
${\hat L}_2$ contribution to the jumps.
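The short-time expansions (\ref{unexcited}--\ref{excited}) can be checked numerically on a minimal toy analogue: a single two-level mode damped at rate $\gamma$ under a Lindblad equation. The sketch below (all parameters are arbitrary illustrative choices, not taken from the detector model of this paper) verifies that a single Euler step reproduces the first-order populations, $\rho_{11}\to(1-\gamma\,\delta t)\rho_{11}$ and $\rho_{00}\to\rho_{00}+\gamma\,\delta t\,\rho_{11}$, just as in the expansions above.

```python
import numpy as np

# Toy two-level analogue of the short-time expansions: a mode decaying
# at rate gamma under  drho/dt = -i[H,rho]
#   + gamma*(a rho a^dag - {a^dag a, rho}/2),  H = w0 a^dag a,
# truncated to {|0>,|1>}.  All parameters are illustrative choices.
w0, gamma, dt = 1.0, 0.1, 1e-3

a = np.array([[0, 1], [0, 0]], dtype=complex)   # lowering operator
n = a.conj().T @ a                              # number operator
H = w0 * n

def lindblad_step(rho, dt):
    """One Euler step of the Lindblad master equation."""
    comm = -1j * (H @ rho - rho @ H)
    diss = gamma * (a @ rho @ a.conj().T
                    - 0.5 * (n @ rho + rho @ n))
    return rho + dt * (comm + diss)

rho = np.array([[0, 0], [0, 1]], dtype=complex)  # start excited
rho = lindblad_step(rho, dt)

# First-order predictions of the short-time expansion:
assert abs(rho[1, 1].real - (1 - gamma * dt)) < 1e-9  # survival
assert abs(rho[0, 0].real - gamma * dt) < 1e-9        # decay feeding
```

For this diagonal initial state the commutator term vanishes, so the Euler step reproduces the first-order populations exactly.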
By combining the above expressions with the appropriate
projections ${\hat {\cal P}}_0$ and ${\hat {\cal P}}_1$ (which pick out the $\rho_{00}$ or
$\rho_{11}$ component, respectively), we can write down the probabilities
of different possible histories.
Let us examine three illustrative
cases and see how they exactly parallel quantum jump trajectories.
\subsubsection{Evolution without jumps}
Suppose that initially $\rho_{00} = \ket\psi \bra\psi$ while
$\rho_{01} = \rho_{10} = \rho_{11} = 0$, i.e., the system is in a pure
state and no photon has been emitted. Let us consider the history
given by an unbroken string of $N$ ${\hat {\cal P}}_0$ projections, corresponding
to no photon being emitted during a time $N\delta t$.
The probability of such a history is given by the diagonal element
$D[0^N,0^N]$ of (\ref{jump_functional}). We can pick out the $\rho_{00}$
component of (\ref{unexcited}), and see that
after the first time interval $\delta t$
\begin{equation}
{\hat {\cal P}}_0 {\rm e}^{{\cal L}\delta t} (\ket{\psi}\bra{\psi} \otimes \ket0 \bra0) {\hat {\cal P}}_0
\approx
\biggl({\rm e}^{-i {\H_{\rm eff}} \delta t} \ket\psi
\bra\psi {\rm e}^{i {\H_{\rm eff}}^\dagger \delta t} \biggr)
\otimes \ket0 \bra0.
\end{equation}
Repeating this $N$ times and tracing out the output mode
Hilbert space ${\cal H}_2$ we get
\begin{equation}
D[0^N,0^N] \approx {\rm Tr}_1 \biggl\{ {\rm e}^{- i {\H_{\rm eff}} N\delta t} \ket\psi
\bra\psi {\rm e}^{i {\H_{\rm eff}}^\dagger N\delta t} \biggr\},
\label{no_jumps}
\end{equation}
which exactly agrees with the probability of the quantum jump or
ortho-jump trajectories
when no jumps are detected. (${\rm Tr}_1$ indicates a trace over
${\cal H}_1$ only.)
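Expression (\ref{no_jumps}) is easy to verify numerically in the simplest case: a two-level truncation starting in the excited state, for which the no-jump probability should reduce to $e^{-\gamma T}$. The sketch below (a hypothetical two-level toy with illustrative parameters, not the full cavity model) iterates the non-Hermitian propagator $N$ times.

```python
import numpy as np

# No-jump probability of Eq. (no_jumps) for a two-level truncation:
# H_eff = H0 - i*gamma*n/2 is diagonal here, so its propagator can be
# written down exactly.  Starting excited, the squared norm after N
# no-jump steps should be exp(-gamma*T).  Parameters are illustrative.
gamma, T, N = 0.2, 5.0, 500
dt = T / N

h_eff_diag = np.array([0.0, 1.0 - 0.5j * gamma])  # eigenvalues of H_eff
U = np.diag(np.exp(-1j * h_eff_diag * dt))        # e^{-i H_eff dt}

psi = np.array([0.0, 1.0], dtype=complex)         # excited state
for _ in range(N):
    psi = U @ psi                                 # N P_0 projections, no jump

p_no_jump = np.vdot(psi, psi).real                # D[0^N, 0^N]
assert abs(p_no_jump - np.exp(-gamma * T)) < 1e-10
```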
\subsubsection{Evolution up to a single jump at time $N\delta t$}
Here we can make use of the previous result (\ref{no_jumps}) up until
time $N\delta t$, when instead of using projection ${\hat {\cal P}}_0$ we use
${\hat {\cal P}}_1$. This is the same as keeping the $\rho_{11}$ component
of $\exp({\cal L}\delta t)\rho$ instead of the $\rho_{00}$ component at the
final projection time. This yields
\begin{equation}
D[0^N 1,0^N 1] \approx (\gamma\delta t) {\rm Tr}_1 \biggl\{
{\hat a} {\rm e}^{- i {\H_{\rm eff}} N\delta t} \ket\psi
\bra\psi {\rm e}^{i{\H_{\rm eff}}^\dagger N\delta t} {\hat a}^\dagger \biggr\}.
\label{one_jump}
\end{equation}
Once again, this agrees with the probability of the corresponding
quantum jump trajectory. (Note, though, that in this context, $\delta t$
is short compared to the $\Delta t$ in expression (\ref{jumps_prob}). This
is due to the finer-grained nature of this history, about which more below.)
\subsubsection{Evolution after a jump}
What happens after the external mode has ``registered'' as being in the
excited state? Essentially, there are two possibilities: either the
external mode can drop back down to the unexcited state (representing
absorption of the photon by the measuring device) or it will remain in
the excited state. We can examine these two possibilities separately:
\begin{equation}
{\hat {\cal P}}_0 {\rm e}^{{\cal L}\delta t} (\ket{\psi'} \bra{\psi'} \otimes \ket1 \bra1) {\hat {\cal P}}_0
\approx \Gamma_1 \delta t \ket{\psi'} \bra{\psi'} \otimes \ket0 \bra0,
\label{after_jump0}
\end{equation}
\begin{equation}
{\hat {\cal P}}_1 {\rm e}^{{\cal L}\delta t} (\ket{\psi'} \bra{\psi'} \otimes \ket1 \bra1) {\hat {\cal P}}_1
\approx (1 - \Gamma_1 \delta t) {\rm e}^{-i{\H_{\rm eff}}\delta t} \ket{\psi'}
\bra{\psi'} {\rm e}^{i{\H_{\rm eff}}^\dagger\delta t} \otimes \ket1\bra1.
\label{after_jump1}
\end{equation}
So we see that the external mode has a probability of roughly
$\Gamma_1\delta t$ per time $\delta t$ of dropping back down to the ground
state, whereupon it resumes evolution as in (\ref{no_jumps}), and a
probability of $1-\Gamma_1\delta t$ of remaining in the excited state.
In either case, the system component of the state continues to evolve
according to the effective Hamiltonian ${\H_{\rm eff}}$.
This is exactly the same situation we encountered with ortho-jumps
in section IIID. There, we saw that a typical trajectory spent the
majority of time with the output mode in the ground state. Infrequently
this mode absorbs a photon from the system and becomes excited; this
photon is dissipated into the environment in a time of order $1/\Gamma_1$.
The rate for coherent reabsorption by the system is much lower, and can
be neglected.
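The statement that the excited output mode survives for a time of order $1/\Gamma_1$ follows directly from the per-step drop probability $\Gamma_1\delta t$ in (\ref{after_jump0}--\ref{after_jump1}): the number of survived steps is geometrically distributed. A toy Monte Carlo sketch (illustrative parameters only) confirms the mean dwell time:

```python
import numpy as np

# Dwell time of the excited output mode: by Eqs. (after_jump0/after_jump1)
# it drops to the ground state with probability Gamma1*dt in each step of
# length dt, so the number of survived steps is geometrically distributed
# and the mean dwell time should be ~ 1/Gamma1.  Parameters illustrative.
rng = np.random.default_rng(0)
Gamma1, dt, ntraj = 2.0, 1e-3, 200000

steps = rng.geometric(Gamma1 * dt, size=ntraj)  # trials until the drop
dwell = dt * steps
assert abs(dwell.mean() - 1.0 / Gamma1) < 0.01
```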
Thus, we see that this set of histories corresponds exactly to the set of
ortho-jump trajectories at the intermediate timescale $\delta t$. This is,
in fact, a special case of the correspondence shown by Di\'osi. In his
treatment, the histories were {\it branch-dependent} in order to preserve
the orthogonality of the jumps. In ours, the jumps are always between
two known subspaces, and thus are branch independent.
To reproduce standard jumps, we must coarse-grain just as before.
Consider an interval
$\Delta t = M\delta t$
which is long compared to $1/\Gamma_1$ but still short compared to the
dynamical timescales of the system. Let $\omega$ be a frequency which
characterizes the system Hamiltonian ${\hat H}_0$. Then we have
$\Gamma_1 \Delta t \gg 1 \gg \omega \Delta t \gg \omega \delta t$.
Let us sum all histories which include one jump and re-absorption within
this interval (the probability of multiple jumps is small enough to be
neglected), and look at the time-propagators occurring inside the
trace:
\begin{eqnarray}
\sum_{j=0}^M && {\rm Tr}_1 \left\{ \cdots {\rm e}^{-i{\H_{\rm eff}} j\delta t} {\hat a}
{\rm e}^{-i{\H_{\rm eff}}(M-j)\delta t} \cdots \ket\psi \bra\psi \cdots
{\rm e}^{i{\H_{\rm eff}}^\dagger(M-j)\delta t} {\hat a}^\dagger
{\rm e}^{i{\H_{\rm eff}}^\dagger j\delta t} \cdots \right\} \nonumber\\
&& \approx (M+1) {\rm Tr}_1 \left\{ \cdots {\rm e}^{-i{\H_{\rm eff}} M\delta t/2} {\hat a}
{\rm e}^{-i{\H_{\rm eff}} M\delta t/2} \cdots \ket\psi \bra\psi \cdots
{\rm e}^{i{\H_{\rm eff}}^\dagger M\delta t/2} {\hat a}^\dagger
{\rm e}^{i{\H_{\rm eff}}^\dagger M\delta t/2} \cdots \right\},
\label{coarse_grain}
\end{eqnarray}
with an error of order $M\omega^2\Delta t^2$. The exact absorption time
is irrelevant, since the external mode is traced over, the system
dynamics are essentially unaffected, and the probabilities sum to 1. Making
this coarse-graining and combining the above cases gives the probability of
a particular jump record:
\begin{eqnarray}
p(t_1,\ldots,t_N) = && (\gamma\Delta t)^N
{\rm Tr}\biggl\{ {\rm e}^{-i{\H_{\rm eff}}(T-t_N)} {\hat a} {\rm e}^{-i{\H_{\rm eff}}(t_N - t_{N-1})} {\hat a}
\cdots {\hat a} {\rm e}^{-i{\H_{\rm eff}} t_1} \nonumber\\
&& \times \ket\psi\bra\psi {\rm e}^{i{\H_{\rm eff}}^\dagger t_1} {\hat a}^\dagger
\cdots {\hat a}^\dagger {\rm e}^{i{\H_{\rm eff}}^\dagger(T-t_N)} \biggr\},
\label{history_prob}
\end{eqnarray}
which exactly matches the quantum jump expression
(\ref{jumps_prob}).
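One can also confirm that averaging over jump records with the probabilities (\ref{history_prob}) reproduces master-equation evolution. The sketch below samples an ensemble of trajectories for a decaying two-level system, each surviving without a jump with probability $1-\gamma\,\delta t$ per step, and compares the ensemble-averaged excited-state population with $e^{-\gamma t}$; all parameters are illustrative choices.

```python
import numpy as np

# Ensemble of jump records for a decaying two-level system: a jump
# occurs in each step with probability gamma*dt, as dictated by
# Eq. (history_prob); after a jump the system stays in the ground
# state (re-excitation neglected, as in the text).  The trajectory
# average should reproduce the master-equation result exp(-gamma*t).
rng = np.random.default_rng(1)
gamma, T, steps, ntraj = 0.5, 4.0, 400, 20000
dt = T / steps

jumps = rng.random((ntraj, steps)) < gamma * dt   # jump attempt per step
jumped_by = np.cumsum(jumps, axis=1) > 0          # has a jump occurred yet?
surv = np.concatenate(([1.0], 1.0 - jumped_by.mean(axis=0)))

t = np.linspace(0.0, T, steps + 1)
assert np.max(np.abs(surv - np.exp(-gamma * t))) < 0.02
```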
\subsection{Decoherence of histories}
Such a histories description is only meaningful if
the histories decohere. Exact consistency, as in
(\ref{decoherence}), is a difficult criterion to meet. It is
more usual to show that a model is {\it approximately} consistent,
which generally insures that the histories satisfy the probability sum
rules to some level of precision.
One criterion for approximate consistency has been suggested by
Dowker and Halliwell \cite{DowkHall}. If we wish the probability sum
rules to be satisfied to a precision $\epsilon \ll 1$, we require that
\begin{equation}
|D[h,h']|^2 < \epsilon^2 D[h,h] D[h',h'] = \epsilon^2 p(h) p(h'),
\end{equation}
for all unequal pairs of histories $h,h'$. Generally speaking, the
``more different'' a pair of histories is (i.e., the more projections
they differ in), the more suppressed the off-diagonal term.
This is certainly true for this model of photodetection. So it
suffices to look at two histories which are as close as possible
without being identical.
In the case of these ``jump'' histories, this means that these histories
differ at a single time $t_i$, one having a projection ${\hat {\cal P}}_0$, the
other ${\hat {\cal P}}_1$. In the decoherence functional, this is equivalent to
picking out the $\rho_{01}$ or $\rho_{10}$ component of
$\exp({\cal L}\delta t)\ket{\psi'}\bra{\psi'}$ at that time, sandwiched
between identical projectors ${\hat {\cal P}}_0$ or ${\hat {\cal P}}_1$ on either side.
Examining the components given by (\ref{expansion}--\ref{evolutions})
and (\ref{unexcited}--\ref{excited}),
we see that
\begin{equation}
{ |D[h,h']|^2 \over p(h) p(h') } \sim { 1 \over (\Gamma_2 \delta t)^2 },
\end{equation}
so we expect the sum rules to be obeyed with a precision of roughly
$O(1/\Gamma_2\delta t)$. For large $\Gamma_2$ this is more than
adequate.
\subsection{Partial trace decoherence}
Finkelstein \cite{Finkelstein} has suggested an interesting alternative
definition for decoherence in the case of a system-environment split,
which he terms PT or ``Partial Trace'' decoherence. Consider the
operator-valued functional defined by
\begin{equation}
{\bar D}[h,h'] = {\rm Tr}_{\rm env} \biggl\{ {\hat {\cal P}}^N_{\alpha_N}(t_N) \cdots
{\hat {\cal P}}^1_{\alpha_1}(t_1) \rho(t_0) {\hat {\cal P}}^1_{\alpha_1'}(t_1) \cdots
{\hat {\cal P}}^N_{\alpha_N'}(t_N) \biggr\},
\end{equation}
where ${\rm Tr}_{\rm env}$ denotes a partial trace over the environment degrees of
freedom only. ${\bar D}[h,h']$ is therefore an operator in the Hilbert
space ${\cal H}_1 \otimes {\cal H}_2$. The criterion for PT decoherence is
\begin{equation}
{\bar D}[h,h'] = 0, \qquad h \ne h',
\label{pt_decoherence}
\end{equation}
which is a much stronger condition than the usual decoherence criterion;
it means that the different alternative histories are orthogonal in
the environment degrees of freedom alone, implying the existence of
generalized records \cite{GMHart3}.
From the form of the standard decoherence functional and the correspondence
with the ortho-jumps unraveling (\ref{ortho_eqn1}--\ref{ortho_eqn2}),
we see that ${\bar D}[h,h]$ will be a projector onto the state of the system
plus output mode corresponding to the particular trajectory $h$. And from
the equivalence of this to standard jumps, it follows that
\begin{equation}
{\bar D}[h,h] = \ket{\tilde\psi_h} p(h) \bra{\tilde\psi_h}
\otimes (\ket0\bra0\ {\rm or}\ \ket1\bra1),
\end{equation}
where $\ket{\tilde\psi_h}$ is the normalized state produced by the
quantum jump trajectory corresponding to the
history $h$, and $p(h)$ is the probability of this trajectory.
By exactly the same argument as before, the off-diagonal terms vanish
and therefore this system is PT decoherent.
\section{Generalized Records}
As discussed in section IIB, one of the most powerful sources of
decoherence is the creation of entanglement between ``system''
and ``environment'' degrees of freedom. Because these correlations
exist, a careful measurement of the environment could in principle
reveal the exact state of the system; thus, the different possible
system states cannot interfere, and condition (\ref{decoherence})
is assured. Gell-Mann and Hartle term these persistent correlations
``generalized records;'' they are generalized because in practice
the measurement needed to access them might be impossibly difficult.
For instance, suppose a measuring device records a series of results
onto a paper tape. This is a record in the usual sense, and its
existence guarantees decoherence between different possible measurement
outcomes. If we then burn the tape, the record becomes difficult to
access. But nevertheless, in principle the information is not lost; by
cleverly measuring the ashes, air, and outgoing light one might in
principle be able to reconstruct it, and thus decoherence persists.
This ``in principle'' recoverable information is a generalized record.
Suppose our system and environment begin in an initial pure state.
Then the decoherence functional becomes (\ref{inner_product}),
implying that the unnormalized states $\{{\hat C}_h\ket\Psi\}$ are all
orthogonal (or zero). This implies the existence of a set of
orthogonal projection operators ${\hat R}_h$ such that
${\hat R}_h{\hat C}_{h'}\ket\Psi = \delta_{hh'} {\hat C}_h\ket\Psi$. Any such collection
of orthogonal projection operators corresponds to an observable.
Thus, in the pure state case, decoherence implies the existence of
an observable whose value is exactly correlated with which history
occurred. Conversely, the existence of such an observable implies
decoherence. This converse holds even in the case of mixed states.
This observable is the generalized record, and its existence is a
stronger consistency criterion than (\ref{decoherence}). If the
record is contained entirely in the environment degrees of freedom,
so ${\hat R}_h = {\hat 1}_{\rm sys}\otimes{\hat R}_h^{\rm env}$
where ${\hat R}_h^{\rm env}$ is a projector
on the environment Hilbert space alone, it implies the partial trace
decoherence (\ref{pt_decoherence}) of Finkelstein.
In the case of our model, the outgoing atoms clearly serve as such
a generalized record. Suppose the output mode is in the ground state
throughout some period $\Delta t$. In that case, all the outgoing
atoms should be in the state $(\ket0+\ket1)/\sqrt2$. If the output
mode is instead in the excited state, the probability is overwhelming
that at least some of the atoms will be found in the state
$(\ket0 - \ket1)/\sqrt2$. Thus, these atoms serve as a record of
the state of the output mode during the period $\Delta t$.
Of course, if the output mode were only in the excited state for a
short period, the atoms might fail to record the fact. But since
photons in the output mode persist for an average time of order
$1/\Gamma_1 \gg \Delta t$, this ambiguity is unimportant in practice.
What is more, projections on the states of different atoms commute
with each other. Thus we might, in principle, make no assertions about
the state of the output mode, but merely wait until a large number
of atoms had passed through the detector and then project onto their
collective state. This projection would be sufficient to reconstruct
the quantum trajectory in its entirety.
In fact, we can play even more elaborate games than that. There is
no reason, after all, why we should use the same projections on all
the atoms. We might switch from one set to another as we chose. In
the language of quantum trajectories, this would be like switching
our choice of unraveling at different times. Indeed, we could go further
than this: we could make our choice of projections on some atoms
contingent on our results for other atoms. Thus, our choice of unraveling
at one time might depend on the state at another time. What is more,
there is no reason that we could not choose our unraveling at early
times (i.e., the first atoms to pass through) contingent upon results
at later times (i.e., the last atoms to pass through), rather than the
other way around. Ordinary quantum trajectories theory does not consider
this possibility for good reason, since such a measurement scheme would
be extremely difficult in practice, and it is not obvious what is gained
from it; but we can see how the fairly
specific assumptions of quantum trajectories can be progressively
broadened towards the greater generality of consistent histories.
The existence of generalized records insures that our set of histories
will be consistent under all such permutations.
The existence of generalized records also lets us get away from the
whole notion of measurement. Suppose that instead of putting a
photodetector outside our cavity we put nothing there, and allow
emitted photons to escape to infinity. Away from the immediate vicinity
of the system, we can describe the external electromagnetic field
in terms of incoming and outgoing plane waves with creation and
annihilation operators ${\hat a}^\dagger_{\pm k\ell}$ and ${\hat a}_{\pm k\ell}$,
where $\ell$ labels the polarization. A reasonable initial condition
would be that the incoming field is in the vacuum state. The outgoing
field carries away information about the system. Using plane waves
makes it difficult to discuss properties which are local in time;
consider instead the operators ${\hat c}_{x_0k_0\ell}$
and ${\hat c}^\dagger_{x_0k_0\ell}$ defined by
\begin{equation}
{\hat c}_{x_0k_0\ell} = ({\rm const})\times
\int dk \exp\left\{ - {(k-k_0)^2\over\Delta k^2}
+ i k x_0 \right\} {\hat a}_{k\ell},
\end{equation}
which annihilate (create) a Gaussian wave packet
centered at $x_0$ and $k_0$. Needless to say, these states
are overcomplete. But we can still use them to define operators
${\hat F}_C$, which are quasiprojectors onto states with a single photon
in phase space region $C$, and act as the identity
{\it outside} region $C$. If $C$ has a regular boundary, then as
the volume of $C$ becomes large compared to $\hbar$, ${\hat F}_C$ becomes
closer and closer to a true projector. What is more, if $C$ and
$C'$ are nonoverlapping regions, ${\hat F}_C$ and ${\hat F}_{C'}$ approximately
commute \cite{Omnes4}.
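The near-orthogonality underlying these quasiprojectors can be illustrated directly: two Gaussian wave packets whose centers are separated by many widths in phase space have exponentially small overlap. The sketch below checks this numerically (the grid, widths, and separations are arbitrary illustrative choices).

```python
import numpy as np

# Overlap of two Gaussian wave packets of the type created by
# c^dag_{x0 k0}: they become essentially orthogonal once their
# phase-space separation is large compared to the packet widths.
x = np.linspace(-60.0, 60.0, 4096)
dx = x[1] - x[0]

def packet(x0, k0, dk=1.0):
    """Normalized Gaussian packet centered at (x0, k0), width ~ 1/dk."""
    psi = np.exp(-(dk**2) * (x - x0)**2 + 1j * k0 * x)
    return psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)

a = packet(0.0, 5.0)
b = packet(10.0, 5.0)              # displaced by many widths in x
overlap = abs(np.sum(np.conj(a) * b) * dx)
assert overlap < 1e-10             # analytically ~ exp(-50)
```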
Consider now a rectangular region $C$ with length $c\Delta t$ much longer
than the width of the wave packets, located well outside
the near field of the system, and extent in wavenumber space
$l_k \gg \Delta k$ which is large compared to the linewidth of
the emitted photons and limited to outgoing photons, centered on
the cavity wavenumber $\omega_0/c$. Consider histories with projections
${\hat F}_C$ and ${\hat 1} - {\hat F}_C$ at a series of times $t_j = j\Delta t$.
The projection ${\hat 1}-{\hat F}_C$ is essentially a projection on no photon
being emitted, since the probabilities of multiple photons or photons
outside the frequency range are negligibly small. This set of histories
will correspond on the system space to exactly the set of trajectories
(\ref{history_prob}) we derived before.
Time evolution by $\Delta t$ is just equivalent to shifting the region
$C$ ``outward'' by a distance $c\Delta t$. Thus the Heisenberg
projections ${\hat F}_C(t_j)$ which contribute to the history operator
(\ref{history_op}) are actually projectors onto non-overlapping regions,
and hence commute. We could replace them all with the single
projector
\begin{equation}
{\hat R}_{i_1\ldots i_n} = {\hat F}_{i_n}(t_n) \cdots {\hat F}_{i_1}(t_1),
\end{equation}
where $i_j = 0,1$ and ${\hat F}_1(t_j) = {\hat F}_C(t_j)$,
${\hat F}_0(t_j) = {\hat 1} - {\hat F}_C(t_j)$. The observable represented by the
set of projections ${\hat R}$ is a generalized record of the quantum jumps
trajectory.
However, just as in the case of the photodetector, we can imagine
projecting onto many other things besides photon number, each choice
corresponding to a different unraveling. As long as we restrict our
projections to outgoing waves, consistency is guaranteed. If we
instead choose to project onto some combination of incoming and outgoing
waves, the histories would not be consistent, in general. This is just
as if our measurements of the outgoing field had a chance of reflecting
light back into the system; in that case the condition
(\ref{traj_consistency}) would generally not hold, and the measurement
scheme would no longer be an unraveling of the master equation.
Again, we might choose one set of projections at one time and another
at another, so as to change unravelings, and there is no reason in
principle not to make the choice of projections at one time contingent
on the results at another.
This freedom leads to a somewhat curious situation. The outgoing
field from the system constitutes a generalized record of the system
evolution. But what is it a record of? By choosing different
measurement schemes, or more generally different consistent descriptions,
we can reconstruct very different-looking evolutions. It has long
been known that on the one hand there were many different, incompatible
unravelings of the master equation, and on the other many different
incompatible sets of consistent histories. We now see that these can
correspond to different, incompatible generalized records.
\section{Conclusions}
We have seen how a continuous measurement can be described
in terms of decoherent histories, and how the resulting histories
match the corresponding quantum trajectory unraveling of the master
equation. The probabilities of the
decoherent histories match the weights of the given trajectories,
and the off-diagonal terms of the decoherence functional are highly
suppressed, as we would expect from a measurement situation.
Decoherent histories, as is widely known, must obey a
condition (\ref{decoherence}) for their probabilities to have
a consistent interpretation. What has been less widely appreciated
is that quantum trajectories must obey a consistency
condition (\ref{traj_consistency}) of their own. The reason that
this has received so little remark is that all commonly
considered unravelings of the master equation automatically satisfy
this requirement. These schemes rely on measurements of an outgoing
field, which constitutes a generalized record of the system evolution,
guaranteeing that both consistency conditions are obeyed.
The existence of multiple incompatible records creates an interesting
ambiguity in our description. Suppose that instead of measuring the
field as it is emitted from the system, we wait for a long time and
measure the field far away from the system. Our measurements then
correspond to information about the system evolution long before,
a sort of super-delayed-choice experiment. The choice of unraveling
might be delayed until long after the evolution itself is complete.
Were there no quantum trajectories until the measurement occurred?
Does one trajectory ``really'' happen? Do all of them somehow coexist?
These are the sort of interpretational questions
upon which gallons of ink can be spent
without putting an end to argument. From a practical point of view, it
doesn't matter. The fact that these sets of histories are consistent
means that we can choose an unraveling and make arguments based on
the trajectories without fear that inconsistencies will invalidate our
results. While there may be no actual measurement record, in the cases
we consider the existence of generalized records serves the same function.
While these generalized records might be hard to identify, and even
harder to measure (especially if they depart at the speed of light),
they insure that distinct alternatives remain distinct,
and cannot subsequently interfere with each other.
This is a valuable fact, especially since real measurements rarely
approach the level of perfection commonly assumed for pure state
unravelings. If these unravelings decohere from each other, they can
be treated like classical alternatives without fear of
contradictions arising. Imperfect measurements can then
be treated by further coarse-graining of these histories.
What is more, if there are many different unravelings corresponding
to different sets of decoherent histories, the choice of which to use
can become a matter of convenience. In some situations
a jump-like description is most useful; in others, a diffusion equation.
It might be possible to use either one by an appropriate choice of
projection operators on the same environment.
Consistent histories is a powerful formalism, but
producing a full set of histories can be dauntingly difficult, especially
as their number increases exponentially with the number of projections.
In practical calculations, it is often far easier to
randomly choose some subset of ``typical'' histories and average over them.
For this, quantum trajectory equations are supremely well-suited. Once
again, the freedom to choose among different but equivalent formulations
of a problem is a boon to the theorist.
Over the last several years a great deal of work has been done on problems
of quantum measurement, from a wide variety of viewpoints. I believe
that these approaches, to the extent that they are correct,
are connected at a deep level, and that by
understanding the connections their power can be enhanced.
This work is a small step in that direction. I have no doubt that much
more progress is to come.
\section*{Acknowledgments}
I would like to thank Lajos Di\'osi for very valuable
discussions, and Howard Carmichael,
Murray Gell-Mann, Nicolas Gisin, Robert Griffiths,
Jonathan Halliwell, James Hartle, Peter Knight,
Ian Percival, Martin Plenio, and R\"udiger Schack for
suggestions and feedback. Dieter Zeh asked provocative questions.
Financial support was provided by
the UK EPSRC and by NSF Grants PHY-94-07194 and PHY-96-02084.
\section{Introduction}
Dark sectors arising from physics beyond the standard model could provide explanations for various shortcomings of the standard model itself, including dark matter, neutrino masses, the baryon asymmetry, and the strong CP problem. One typical phenomenological consequence is the appearance of new, feebly-interacting bosons (FIBs) that can be searched for experimentally and constrained astrophysically or cosmologically. One class of traditional arguments uses observational consequences of FIB emission from stars, an idea independently advanced by several groups in 1978 \cite{Mikaelian:1978jg,Dicus:1978fp,Vysotsky:1978dc,Sato:1978vy} when the Weinberg-Wilczek axion had been recognized as a consequence of the Peccei-Quinn solution of the strong CP problem. Ever since, the impact of many types of bosons in various astrophysical systems has been studied \cite{Raffelt:1996wa}, sometimes posing interesting conceptual questions about FIB production or propagation in stars.
We here follow up one such case that has emerged in several recent studies of FIB production in supernova (SN) cores \cite{Chang:2016ntp, Chang:2018rso, Lucente:2020whw,Bollig:2020xdr,Caputo:2021rux,Caputo:2022mah, Croon:2020lrf}.
In those studies, the feeble interaction was taken to be strong enough to prevent free escape after production. In analogy to the SN ``neutrino sphere,'' the FIBs emerge from a decoupling region that is traditionally pictured approximately as a black surface for thermal FIB radiation according to the Stefan-Boltzmann (SB) law \cite{Burrows:1990pk}. The relevant temperature $T_{\rm SB}$ is taken to be that of the SN medium at a radius $R_{\rm SB}$ where the FIB optical depth is 2/3, and the radiating surface is $4\pi R_{\rm SB}^2$. We will see that this prescription is rather accurate, as physically it should be, but it has evoked some doubts because clearly there is no hard surface of emission---the radiation must come from a shell with a geometric thickness corresponding to an optical depth of around one.
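As an illustration of this prescription, the decoupling radius $R_{\rm SB}$ can be located numerically as the radius where the optical depth $\tau(r)=\int_r^\infty \Gamma\,dr'$ reaches $2/3$, and the SB temperature is read off there. The sketch below uses purely illustrative power-law profiles for $\Gamma(r)$ and $T(r)$, not a realistic SN model.

```python
import numpy as np

# Toy version of the Stefan-Boltzmann prescription: find the radius
# where the FIB optical depth tau(r) = int_r^inf Gamma dr' reaches 2/3,
# then read off the medium temperature there.  The profiles Gamma(r)
# and T(r) are illustrative power laws, not a real SN model.
r = np.linspace(10.0, 100.0, 100000)        # radius in km
dr = r[1] - r[0]
Gamma = 5.0 * (r / 10.0) ** -6              # absorption rate per km
T = 30.0 * (r / 10.0) ** -1                 # temperature in MeV

tau = np.cumsum(Gamma[::-1])[::-1] * dr     # tau(r) = int_r^rmax Gamma dr'
i_sb = np.argmin(np.abs(tau - 2.0 / 3.0))   # tau is monotone decreasing
R_sb, T_sb = r[i_sb], T[i_sb]

assert tau[0] > 2.0 / 3.0 > tau[-1]         # decoupling radius is bracketed
```

For these power laws the crossing can also be found analytically, $R_{\rm SB}=10\,(15)^{1/5}\,{\rm km}\approx17\,$km, which the grid result reproduces.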
Motivated by this question and doubts in the recent literature we take a fresh look at radiative transfer by FIBs that may or may not have a significant mass. In the diffusion limit, this problem was formulated a long time ago \cite{Raffelt:1988rx}, following the standard theory of radiative transfer by photons.\footnote{A free electronically available textbook is Rutten (2003) \cite{Rutten:2003}. It provides a fantastic annotated biblio\-graphy and references both to the early papers by Schuster, Schwarzschild, Eddington, Rosseland and Milne as well as to many textbooks, explaining their focus and relevance. For our work, we have mostly consulted the classic textbook by Mihalas (1978) \cite{Mihalas:1978} and Appendix~I of Shapiro and Teukolsky (1983) \cite{Shapiro:1983du}. See also Chapter~3, Sec.~3.4, of Refs.~\cite{ThorneBook,ThorneWeb} for some useful definitions of angular moments related to our Sec.~\ref{SubSec:Angular}.} Our focus here is to study explicitly the transition between the free-streaming and trapping (diffusion) limits, both in plane-parallel and spherical geometry. The latter is particularly interesting in a situation when the FIBs are unstable and deposit energy in regions far away from the compact emission volume, i.e., in a situation where the geometric extension of the ``stellar atmosphere'' is not much smaller, or even much larger, than the core radius of a SN or a horizontal-branch or red-giant star \cite{Lucente:2022wai}.
The main simplifying assumption, motivated by the boson interaction being ``feeble'', is to include only FIB absorption and emission from a medium in local thermal equilibrium, but not scattering between different FIB momenta or annihilation. In this case the only particle-physics ingredient is the ``reduced absorption rate'' $\Gamma_\omega$ as a function of FIB energy $\omega$, where $\Gamma_\omega$ is equivalent to the imaginary part of the FIB self-energy, that also depends on the local conditions of the medium such as temperature, density, and chemical composition. In the absence of scattering, the stationary FIB occupation number on a given ray, corresponding to a given mode ${\bf k}$ of the FIB radiation field, can be expressed as an integral along this ray. Global solutions for plane-parallel or spherical geometries then follow as suitable superpositions of such single-ray solutions. In other words, for a given stationary stellar background model, the FIB radiation field is found from a quadrature. Explicit volume-integral expressions, notably in spherical geometry, are the main technical results of our paper. Based on $\Gamma_\omega(r)$ and $T(r)$ as functions of stellar radius, we thus provide integral expressions for the FIB luminosity $L_\omega(r)$. Taken at spatial infinity, $\int d\omega\,L_\omega(\infty)$ provides the total FIB luminosity, e.g., of a SN core. Moreover, one can find the energy loss or deposition at a given radius through the radial variation $dL_\omega(r)/dr$.
Solutions derived from a prescribed and stationary background model are only useful, of course, in a physical situation when the thermal timescale exceeds the dynamical one. If this is not the case, and if the diffusion limit does not apply, the full Boltzmann collision equation needs to be solved, a task that is of course the main numerical effort in core-collapse SN simulations concerning neutrino transport.
Radiative transport by neutrinos, despite their weak interaction, is a much more complicated task than our FIB treatment.
Neutrinos and antineutrinos of the electron and muon flavor can be absorbed and emitted by the medium through charged-current interactions, but neutral-current scatterings as well as annihilation and pair emission and absorption through bremsstrahlung and other processes occur at the same order in the coupling constant $G_{\rm F}^2$. Moreover, besides energy, lepton number of different flavors is transported as well.
In principle, our exercises are straightforward, but the devil is in the details, even for the much simpler problem of FIB transport. The correct expressions are apparently not available in the literature (and incorrect expressions or approximations have been floated), justifying our derivations, at the risk of being seen as a pedagogical exercise of standard radiative-transfer theory. In the same vein we also show explicitly the transition between a volume integral and a quasi-thermal surface integral in the strong-trapping limit. We believe that deriving these results from first principles, starting with the Boltzmann collision equation, is an instructive exercise that offers many interesting insights that may be useful for future studies of astrophysical particle bounds.
\section{Radiative transfer by feebly interacting bosons}
We begin with the Boltzmann collision equation (BCE) for new bosons $a$ (reminiscent of ``axion'') that can be produced, for example, by processes of the type $\gamma+B\to B+a$, i.e., axion-photon conversion through the interaction with fermions (for example semi-Compton scattering on electrons or muons) or with other charged particles as in the Primakoff case; photon coalescence $2\gamma\to a$ is also conceivable. On the other hand, scattering of the type $a+B\to B+a$ plays no role because the interaction is much more feeble than that of photons.
\subsection{Freeze out from first principles}
Ignoring FIB scattering from one momentum mode to another, we can focus on the evolution of a single mode with energy $\omega$ along some ray with spatial coordinate $x$. The BCE for the occupation number $f$ is in this case
\begin{equation}\label{eq:Boltzmann}
(\partial_t+v\partial_x)\,f=\Gamma_{\rm E}(1+f)-\Gamma_{\rm A} f=
\Gamma_{\rm E}-\underbrace{(\Gamma_{\rm A}-\Gamma_{\rm E})}_{\hbox{$\Gamma_{\rm A}^*$}} f,
\end{equation}
where $v$ is the particle velocity. Here $\Gamma_{\rm E}$ is the spontaneous emission rate that appears multiplied with the boson stimulation factor $1+f$, whereas $\Gamma_{\rm A}$ is the absorption rate, and in general both depend on $\omega$ and $x$. In the second expression, the terms proportional to $f$ were consolidated and are proportional to the ``reduced absorption rate'' $\Gamma_{\rm A}^*=\Gamma_{\rm A}-\Gamma_{\rm E}$ that includes the effect of stimulated emission as a negative absorption rate.
If the medium is in local thermal equilibrium, detailed balance implies that locally
$\Gamma_{\rm E}=e^{-\omega/T}\Gamma_{\rm A}$ so that the reduced absorption rate is
\begin{equation}
\Gamma\equiv\Gamma_{\rm A}^*=\Gamma_{\rm A}(1-e^{-\omega/T}),
\end{equation}
which we use as \textit{the\/} absorption rate and which is the quantity that defines the optical depth. The spontaneous emission rate is then expressed as
\begin{equation}\label{eq:GammaE}
\Gamma_{\rm E}=\frac{\Gamma}{e^{\omega/T}-1},
\end{equation}
a relation between emission and absorption corresponding to Kirchhoff's Law.
In a stationary and homogeneous situation, the left-hand side (LHS) of Eq.~\eqref{eq:Boltzmann} vanishes and the equation is solved by a thermal Bose-Einstein distribution $f_{\rm eq}=(e^{\omega/T}-1)^{-1}$. So we may write the BCE instead for the deviation from equilibrium $\Delta f=f-f_{\rm eq}$ in the form
\begin{equation}\label{eq:Boltzmann-2}
(\partial_t+v\partial_x)\,\Delta f=-\Gamma\, \Delta f.
\end{equation}
So it is the reduced absorption rate $\Gamma$ which damps the deviation of $f$ from equilibrium, explaining its central importance for radiative transfer.
In the context of thermal field theory, the boson propagation properties are encoded in its self-energy $\Pi$ within the medium. The imaginary part provides the rate-of-approach to thermal equilibrium as ${\rm Im}\,\Pi=-\omega \Gamma$ \cite{Weldon:1983jn}, once more highlighting the role of the reduced absorption rate as the central interaction parameter.
\subsection{Stationary state}
\label{sec:StationaryState}
Our main interest is a stationary situation, so only the gradient term on the LHS of the BCE survives and we need to solve
\begin{equation}
v\frac{df}{dx}= \Gamma_{\rm E}-\Gamma f,
\end{equation}
where the spontaneous emission rate $\Gamma_{\rm E}$ is given in Eq.~\eqref{eq:GammaE} in terms of the reduced absorption rate $\Gamma$ under the assumption of local thermal equilibrium.
To solve this equation we notice that $T$ and $\Gamma$ are functions of $x$ and we define the optical depth as
\begin{equation}
\tau(x)=\int_{x}^{\infty}\frac{dx'}{\lambda(x')}
\quad\hbox{with}\quad
\lambda(x)=\frac{v}{\Gamma(x)},
\end{equation}
where $\lambda$ is the mean free path (MFP) for a FIB with velocity $v$. For massless FIBs we should use $v=c=1$ everywhere. The optical depth $\tau(x)$ is measured relative to a distant observer at $x=+\infty$ where $\tau(\infty)=0$. So finally one finds the solution
\begin{equation}\label{eq:final-occ}
f(x)=\int_{-\infty}^{x}dx'\,\frac{\Gamma_{\rm E}(x')}{v}\,e^{\tau(x)-\tau(x')}.
\end{equation}
This is the intuitive answer that the occupation number at $x$ is filled by spontaneous production up to this point, reduced by the absorption along the path from production to detection. Here it was assumed that no radiation enters at the boundary at $x=-\infty$, i.e., all radiation is generated by emission within the region of integration.
Instead of $x$ we may use $\tau(x)$ itself as a coordinate along the beam. Notice that this is a monotonically decreasing function of $x$ and thus uniquely invertible to provide $x(\tau)$. The limiting values are $\tau(\infty)=0$ and the maximum $\tau_{\rm max}=\tau(-\infty)$. Notice also that $d\tau(x)/dx=-\Gamma(x)/v=-1/\lambda(x)$. We further introduce the blackbody occupation number at $\tau$ for the local temperature $T(\tau)$,
\begin{equation}
f_{\rm eq}(\tau)=\frac{1}{e^{\omega/T({\tau})}-1}.
\end{equation}
We see that the solution
\begin{equation}\label{eq:fplus}
f(\tau)=\int_{\tau}^{\tau_{\rm max}}d\tau'\,e^{\tau-\tau'}\,f_{\rm eq}(\tau')
\end{equation}
depends only on the temperature profile $T(\tau)$ along the ray. The velocity $v$ no longer appears explicitly because the optical depth is based on $\lambda$ and not on $\Gamma$. If the medium is very opaque so that we cannot see through the star to the other side we may use $\tau_{\rm max}=\infty$. For the occupation number at spatial infinity, corresponding to $\tau=0$, one finds in this opaque limit
\begin{equation}\label{eq:f0}
f(0)=\int_{0}^{\infty}d\tau\,e^{-\tau}\,f_{\rm eq}(\tau).
\end{equation}
In the special case when the medium has the same $T$ everywhere, this is simply $f(0)=f_{\rm eq}$, the Bose-Einstein occupation number. So an optically thick object at temperature $T$ radiates bosons with a thermal Bose-Einstein distribution. However, even if the radiating body has a hard material surface, the quanta do not emerge from that surface itself but from a layer with a thickness of a few MFPs.
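This isothermal limit is easy to verify by direct quadrature. The following minimal Python sketch (our own illustration, not part of the formalism; the midpoint rule and the cutoff $\tau_{\rm max}=40$ are ad hoc choices) evaluates Eq.~\eqref{eq:f0} for constant $T$ and compares with the Bose-Einstein value:

```python
import math

def f_eq(x):
    # Bose-Einstein occupation number for x = omega/T
    return 1.0 / math.expm1(x)

def f_escaping_isothermal(x, tau_max=40.0, n=4000):
    # f(0) = int_0^infty dtau e^{-tau} f_eq(tau); here T is constant,
    # so f_eq does not depend on tau (midpoint rule, truncated at tau_max)
    h = tau_max / n
    return sum(math.exp(-(i + 0.5) * h) * f_eq(x) for i in range(n)) * h

x = 3.0  # omega/T
print(f_escaping_isothermal(x), f_eq(x))  # the two values agree
```

Since $\int_0^\infty d\tau\,e^{-\tau}=1$, the escaping occupation reproduces $f_{\rm eq}$ to the accuracy of the quadrature.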
We also consider the occupation number $f_-(\tau)$ of the ``backward mode'' moving in the opposite direction, toward the star,
\begin{equation}\label{eq:fminus}
f_-(\tau)=\int_{0}^{\tau}d\tau'\,e^{-\tau+\tau'}\,f_{\rm eq}(\tau'),
\end{equation}
where it was assumed that at spatial infinity ($\tau=0$) the backward mode is not occupied. In this notation, the occupation
$f(\tau)$ of the outgoing mode is termed $f_+(\tau)$.
In this discussion we have implicitly assumed that the FIB absorption rate $\Gamma$ depends on the background medium which is geometrically bounded so that it makes sense to use a distant observer as a point of reference when using the optical depth as a measure of distance. However, when FIB decay of the form $a\to2\gamma$ is important, this approach is not justified. We will return to this question in the context of our spherically symmetric solution.
\subsection{Particle flux}
The radiation emerging from a source is usually not described in terms of the occupation numbers of the modes of the radiation field but rather by the corresponding particle or energy flux. The net particle flux in the outgoing direction is
\begin{equation}
\phi=v\,(f_+-f_-)
\end{equation}
whereas the energy flux carries an additional factor $\omega$. Assuming no backward occupation at spatial infinity, the outgoing flux for a distant observer is simply $\phi(0)=v f(0)$ given in Eq.~\eqref{eq:f0}. At intermediate positions, the flux can be expressed as
\begin{equation}\label{eq:convolution}
\phi(\tau)=v\,\int_{0}^{\infty}d\tau'\,{\rm sign}(\tau'-\tau)\,e^{-|\tau'-\tau|}\,f_{\rm eq}(\tau').
\end{equation}
So we find the intuitive result that the flux along some ray is driven by the temperature profile a few MFPs up- and downstream from the point of interest. Formally the function $\phi(\tau)$ on the interval $0\leq\tau<\infty$ is a certain linear transformation of the function $f_{\rm eq}(\tau)$ on that same interval.
\subsection{Example with power-law profile}
We can illustrate FIB freeze-out with a $T$ profile inspired by a realistic Proto Neutron Star (PNS) profile of the form
\begin{equation}\label{eq:PowerLaw}
T(\tau)=T_1\tau^p,
\end{equation}
where $0<p\ll 1$ is a small number for which we use $p=1/5$ and $T_1$ is the temperature at unit optical depth. Moreover, we assume the FIB to be massless so that $v=1$. For a typical boson energy of $\omega=3T_1$ we show the solutions $f_\pm$ as well as the flux $\phi=f_+-f_-$ in Fig.~\ref{fig:f-solution}. We see that the flux escaping from the star corresponds approximately to $f_{\rm eq}$ at $\tau\simeq0.8$, but at this location the actual solution is far away from this value. The approach to the asymptotic solution is slow in the decoupling region.
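The escaping flux and the matching optical depth quoted here can be reproduced with a few lines of Python (our own sketch; the grid parameters and the bisection bracket are arbitrary but sufficient):

```python
import math

def f_eq(tau, x1=3.0, p=0.2):
    # local equilibrium occupation for T(tau) = T1 tau^p and energy omega = 3 T1
    return 1.0 / math.expm1(x1 * tau**(-p))

def escaping_flux(n=20000, tau_max=60.0):
    # phi(0) = f_+(0) = int_0^infty dtau e^{-tau} f_eq(tau), midpoint rule
    h = tau_max / n
    return sum(math.exp(-(i + 0.5) * h) * f_eq((i + 0.5) * h)
               for i in range(n)) * h

phi0 = escaping_flux()

# bisect for the optical depth where f_eq matches the escaping flux
lo, hi = 0.01, 5.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f_eq(mid) < phi0 else (lo, mid)
print(phi0, lo)  # the matching depth comes out near tau ~ 0.8
```

The bisection works because $f_{\rm eq}(\tau)$ increases monotonically with $\tau$ for this rising temperature profile.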
\begin{figure}[h]
\hbox to\textwidth{\hfil\includegraphics[height=0.3\textwidth]{fig1a.pdf}
\hskip12pt\includegraphics[height=0.3\textwidth]{fig1b.pdf}\hfil}
\caption{Solutions for the occupation numbers $f_\pm$ and the flux $\phi$ for a massless boson, using our power-law temperature profile and a typical energy $\omega=3T_1$. The horizontal black line compares the escaping flux (the occupation number at the stellar surface) with the equilibrium one and thus marks the Stefan-Boltzmann optical depth for this energy, here approximately at $\tau\simeq0.8$.}\label{fig:f-solution}
\end{figure}
For illustration we can also go back to coordinate space and show these results as a function of geometric radius. We find it useful to take inspiration from a realistic model of a SN core, following in particular the Garching group's muonic model SFHo-18.8 \cite{Bollig:2020xdr,CCSNarchive} that we employed earlier for other studies \cite{Caputo:2021rux, Caputo:2022mah}. In this case, one can see that the temperature varies approximately as $T=T_1\,(r_1/r)^4$ which is equivalent to $T=T_1 \tau^{1/5}$, implying $\tau=(r_1/r)^{20}$ where we take $r_1=17$~km. In this representation, the approach to the asymptotic solution looks more intuitive, but it remains true that the approach to the asymptotic solution does not happen at the nominal decoupling radius, but is considerably smeared out even though here we have a fixed energy and no energy dependence of the cross section.
So the picture that a Stefan-Boltzmann flux emerges from some narrow geometric range like ``surface emission'' is clearly not accurate. The bosons reaching infinity derive from a broad radial range, equivalent to a broad range of optical depth.
\section{Strong trapping regime and plane-parallel atmosphere}
\label{sec:PlaneParallel}
The single-ray solutions of the previous section provide the full answer to the question of the stationary FIB radiation field based on a source distribution with prescribed properties (no feedback effects by particle emission on the medium). It remains to cast this result into a more explicit form for relevant overall geometries. To discuss radiation decoupling in the strong-trapping limit more explicitly we use a plane-parallel atmosphere, where the temperature and optical depth are only functions of a Cartesian coordinate $z$ perpendicular to the atmospheric layering. In the example shown in Fig.~\ref{fig:f-solution}, inspired by a realistic SN core model, the decoupling radius is some 17~km and the relevant shell has a thickness of a few km, so the plane-parallel approximation should provide a reasonable first description.
\subsection{Intensities vs.\ occupation numbers}
Solving the Boltzmann collision equation was most transparent using occupation numbers which appear directly in Bose stimulation factors. However, in the end we ask for the energy flux at some radial position. In this spirit we turn from occupation numbers to radiation intensities for a mode ${\bf k}$ of the radiation field
\begin{equation}
I_{\bf k}=4\pi\,\frac{\omega^2|{\bf k}|}{(2\pi)^3}\,f_{\bf k},
\end{equation}
where $\omega=(m_a^2+{\bf k}^2)^{1/2}$. Notice that $|{\bf k}|=v\omega$ where $v$ is the boson velocity. We have normalized the intensity such that the integral over energy and directions $\int I_{\bf k}\, d\omega\,d\Omega/4\pi$ is the local energy density. Whether or not to include the factor of $4\pi$ in the definitions of $I_{\bf k}$ and the blackbody intensity $B_\omega$ in Eq.~\eqref{eq:B-source} is a matter of convenience.
When the FIBs are in thermal equilibrium, the occupation number is $f_{\bf k}=1/(e^{\omega/T}-1)$ and the equilibrium (blackbody) intensity is denoted as
\begin{equation}\label{eq:B-source}
B_\omega^v=\frac{\omega^2\sqrt{\omega^2-m_a^2}}{2\pi^2}\,\frac{1}{e^{\omega/T}-1}=v_\omega B_\omega,
\quad\hbox{where}\quad
B_\omega=\frac{\omega^3}{2\pi^2}\,\frac{1}{e^{\omega/T}-1}.
\end{equation}
Here $B_\omega$ is the blackbody intensity for one massless degree of freedom and $v_\omega=\sqrt{1-m_a^2/\omega^2}$ is the velocity for a boson with mass $m_a$. For the massless case, the total energy density~is
\begin{equation}\label{eq:B0}
B=\int_0^\infty \!d\omega\,B_\omega=\frac{\pi^2}{30}\,T^4.
\end{equation}
For a nonvanishing mass, no simple expression exists.
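For the massless case, Eq.~\eqref{eq:B0} is easily checked by direct quadrature (a Python sketch of ours; truncation and grid size are arbitrary choices):

```python
import math

def B_omega(omega, T=1.0):
    # spectral blackbody intensity for one massless boson degree of freedom
    return omega**3 / (2 * math.pi**2) / math.expm1(omega / T)

def B_total(T=1.0, n=100000, omega_max=60.0):
    # int_0^infty domega B_omega, midpoint rule truncated at omega_max * T
    h = omega_max * T / n
    return sum(B_omega((i + 0.5) * h, T) for i in range(n)) * h

print(B_total(), math.pi**2 / 30)  # both ~ 0.329
```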
\subsection{Angular moments}
\label{SubSec:Angular}
The previous single-ray solution applies to a mode propagating in the radial direction, but now we consider one that is inclined by $\mu=\cos\theta$ such that $\mu=+1$ is the outward direction and $\mu=-1$ the inward one. We begin with Eq.~\eqref{eq:final-occ} for the occupation at position $x=z/\cos\theta$ along the ray. As variable of integration we may use $z$, so we use the vertical depth as a measure of propagation distance, or equivalently, the optical depth $\tau$ in the vertical direction. Following the previous steps we find for the outgoing and incoming intensities
\begin{equation}\label{eq:final-occ-4}
I^+_{\omega,\mu}(\tau)=\frac{1}{\mu}\int_{\tau}^{\infty}d\tau'\,e^{(\tau-\tau')/\mu} B^v_\omega(\tau')
\quad\text{and}\quad
I^-_{\omega,\mu}(\tau)=\frac{1}{\mu}\int_{0}^{\tau}d\tau'\,e^{-(\tau-\tau')/\mu} B^v_\omega(\tau'),
\end{equation}
where $\mu>0$ by definition, i.e., we treat inward-moving modes explicitly as backward moving ones with positive $\mu$.
We are mostly interested in the energy flux, but in general one defines angular moments of the type
\begin{equation}\label{eq:moments}
M_\omega^{(n)}=\frac{1}{2}\,\int_{-1}^{+1}d\mu\,(v_\omega\mu)^n\,I_{\omega,\mu}
=\frac{1}{2}\,\int_{0}^{1}d\mu\,(v_\omega\mu)^n\Bigl[I^+_{\omega,\mu}+(-1)^n I^-_{\omega,\mu} \Bigr].
\end{equation}
Traditionally the zeroth moment (the energy density) is called $J_\omega$, the first moment (the energy flux) $H_\omega$, and the second moment $K_\omega$ is related to pressure. For photons $v=1$ and $B_\omega$ acquires a factor of 2 for the two polarization states. The factors of $v$ are understood in the sense that a flux (of energy or particles) requires one power of $v$ compared with the massless case, whereas the pressure, being essentially a flux of momenta, requires one more $v$ factor. Indeed, the spatial part of the stress-energy tensor dimensionally involves (momentum)$^2$.
The angle integrations in Eq.~\eqref{eq:final-occ-4} can be performed explicitly. For the $n^{\rm th}$ moment and using $w=1/\mu$ one finds
\begin{equation}\label{eq:expint}
\int_0^1 d\mu\,\mu^n\,\frac{e^{-t/\mu}}{\mu}=\int_1^\infty dw\,\frac{e^{-t w}}{w^{n+1}}=E_{n+1}(t),
\end{equation}
where $E_m(t)$ is the $m^{\rm th}$ exponential integral, in {\sc Mathematica} notation {\tt ExpIntegralE[m,t]}. It obeys $dE_m(t)/dt=-E_{m-1}(t)$ and $E_m(t)=[e^{-t}-t E_{m-1}(t)]/(m-1)$ for \hbox{$m>1$}. We use $E_m(t)$ only for positive arguments where it is positive and real. To consolidate the $\pm$ cases in Eq.~\eqref{eq:moments} in a single expression it is convenient to define integral kernels of the form
\begin{equation}\label{eq:kernels}
{\sf E}_n(t)=\frac{1}{2}\,{\rm sign}(t)^n\,E_{n+1}(|t|)
\quad\hbox{where}\quad
{\rm sign}(t)=\frac{t}{|t|},
\end{equation}
shown in Fig.~\ref{fig:kernels}. These are even functions of $t$ for even $n$ and odd functions of $t$ for odd $n$. With this notation, the moments of Eq.~\eqref{eq:moments} are
\begin{equation}\label{eq:moments-1}
M_\omega^{(n)}(\tau)=v_\omega^{n+1}\int_{0}^{\infty}d\tau'\,{\sf E}_{n}(\tau'-\tau)\, B_\omega(\tau').
\end{equation}
Notice that one factor of $v_\omega$ comes from $B_\omega^v$ for particles with mass, whereas $B_\omega$ is the massless intensity and thus only a property of the medium profile, not the particle mass. In the massless case,
these are the Schwarzschild-Milne equations, providing us with the moments of the radiation field as linear transformations of the blackbody intensity on the interval $0\leq\tau<\infty$. The $0^{\rm th}$-order case, providing the local energy density, is known as the $\Lambda$-transformation.
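The exponential integrals and the quoted recurrence can be cross-checked numerically; in the sketch below (ours, with an arbitrary grid size), $E_m(t)$ is evaluated directly from the $\mu$-representation of Eq.~\eqref{eq:expint}:

```python
import math

def E(m, t, n=100000):
    # E_m(t) = int_1^infty dw e^{-t w} / w^m = int_0^1 dmu mu^{m-2} e^{-t/mu},
    # evaluated with the midpoint rule in mu
    h = 1.0 / n
    return sum(((i + 0.5) * h)**(m - 2) * math.exp(-t / ((i + 0.5) * h))
               for i in range(n)) * h

t = 0.7
e1, e2, e3 = E(1, t), E(2, t), E(3, t)
# recurrence E_m(t) = [e^{-t} - t E_{m-1}(t)] / (m-1) for m > 1
print(e2, math.exp(-t) - t * e1)
print(e3, (math.exp(-t) - t * e2) / 2)
```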
\begin{figure}[ht]
\vskip12pt\vskip12pt
\centering
\includegraphics[width=0.45\textwidth]{fig2.pdf}
\caption{The $n^{\rm th}$-order integral kernels ${\sf E}_n(s)$ defined in Eq.~\eqref{eq:kernels}.}\label{fig:kernels}
\end{figure}
\clearpage
\subsection{Diffusion regime}
Asymptotically $E_n(t)= t^{-1}e^{-t}$ for $t\to\infty$, independently of $n$. Among other consequences, this implies that integrals over any power $t^m$ weighted with such kernels converge. It also implies that the local values of the moments only depend on the thermal radiation field a few MFPs up- and downstream. In particular, we consider a general function $b(t)$ that we expand as a Taylor series
\begin{equation}
b(t)=\sum_{m=0}^{\infty}\,\frac{b^{(m)}(0)\,t^m}{m!}.
\end{equation}
Then we find
\begin{eqnarray}
\int_{-\infty}^{+\infty}dt\,b(t)\,{\sf E}_n(t)&=& \sum_{m=0}^{\infty}\,\frac{1+(-1)^{m+n}}{2}\,\frac{b^{(m)}(0)}{m+n+1},
\end{eqnarray}
which for the first three moments gives explicitly
\begin{subequations}
\begin{eqnarray}
\int_{-\infty}^{+\infty}dt\,b(t)\,{\sf E}_0(t)&=& \sum_{m=0}^{\infty}\,\frac{1+(-1)^m}{2}\,\frac{b^{(m)}(0)}{m+1}=
b(0)\,\,+\frac{b''(0)}{3}\,\,+\ldots\\
\int_{-\infty}^{+\infty}dt\,b(t)\,{\sf E}_1(t)&=& \sum_{m=0}^{\infty}\,\frac{1-(-1)^m}{2}\,\frac{b^{(m)}(0)}{m+2}=
\frac{b'(0)}{3}+\frac{b'''(0)}{5}+\ldots\\
\int_{-\infty}^{+\infty}dt\,b(t)\,{\sf E}_2(t)&=& \sum_{m=0}^{\infty}\,\frac{1+(-1)^m}{2}\,\frac{b^{(m)}(0)}{m+3}=
\frac{b(0)}{3}\,\,+\frac{b''(0)}{5}\,+\ldots
\end{eqnarray}
\end{subequations}
Of course, this representation only makes sense at large optical depth, where the lower limit of integration can be extended to $-\infty$ and the Taylor expansion is really around a point $\tau\gg 1$. In this case we see that, at leading order, the kernels for the first two moments effectively act as
\begin{equation}
{\sf E}_0(\tau)\simeq\delta(\tau),\quad
{\sf E}_1(\tau)\simeq-{\textstyle\frac{1}{3}}\delta'(\tau),
\end{equation}
assuming the function $b(\tau)$ varies sufficiently slowly. Recall that $\int dx\,f(x)\,\delta'(x)=-f'(0)$.
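These leading-order rules can be verified against the exact kernel integrals. In the following sketch (ours), the odd test function $b(t)=t+t^3$ terminates the series, so the expansion $b'(0)/3+b'''(0)/5=1/3+6/5$ should be reproduced exactly up to quadrature errors:

```python
import math

def E2(t, n=400):
    # E_2(t) = int_0^1 dmu e^{-t/mu}, midpoint rule
    h = 1.0 / n
    return sum(math.exp(-t / ((i + 0.5) * h)) for i in range(n)) * h

def kernel_moment_E1(b, t_max=30.0, n=1500):
    # int_{-inf}^{+inf} dt b(t) sfE_1(t) with the odd kernel
    # sfE_1(t) = (1/2) sign(t) E_2(|t|); only the odd part of b contributes
    h = t_max / n
    return sum(0.5 * (b(t) - b(-t)) * E2(t)
               for t in ((i + 0.5) * h for i in range(n))) * h

b = lambda t: t + t**3
print(kernel_moment_E1(b), 1 / 3 + 6 / 5)  # both ~ 1.5333
```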
So deeply in the trapped regime, many MFPs away from the surface, the net diffusive flux is
\begin{equation}
F_{\rm diff}(\tau,\omega)=\frac{v^2_\omega}{3}\,\frac{d}{d\tau}\,B_\omega(\tau)
\quad\hbox{or}\quad
F_{\rm diff}(z,\omega)=-\frac{v_\omega^2\lambda_\omega}{3}\,\frac{d}{dz}\,B_\omega(z)
\end{equation}
which is driven by the temperature gradient. (We prefer the letter $F$ to $H$ that is traditional in the theory of radiative transfer.) Recall that $z$ is the coordinate perpendicular to the plane-parallel atmospheric layering, that the MFP is $\lambda_\omega=v_\omega/\Gamma_\omega$ with the particle velocity $v_\omega$, that the Jacobian is $dz/d\tau=-\lambda$, and that a factor $v_\omega^2$ comes from the first factor in Eq.~\eqref{eq:moments-1}. The diffusive flux is a good representation of the true flux when the MFP is short compared with the length scale of temperature variation. However, we can formally define $F_{\rm diff}(\tau)$ everywhere, whether or not it is a good approximation of the true flux.
Finally we can define the Stefan-Boltzmann (SB) flux, given by the equilibrium intensity at a given radial position times an average angular flux factor 1/2 and times another factor 1/2 to count only the outward-going modes. $F_{\rm SB}(\tau)$ is the hypothetical FIB flux produced by a black surface at the radial position $\tau$ with the local $T(\tau)$. Of course, the SB flux is simply another way of expressing the local temperature. Overall we define three different fluxes
\begin{subequations}
\begin{eqnarray}
F_{\rm true}(\tau,\omega) &=& v_\omega^2\int_{0}^{\infty}d\tau'\,{\sf E}_1(\tau'-\tau) B_\omega(\tau'),
\label{eq:Ftrue-convolution}
\\[1.5ex]
F_{\rm diff}(\tau,\omega) &=&\frac{v_\omega^2}{3}\,\frac{d}{d\tau}\,B_\omega(\tau),
\label{eq:Fdiff-1}\\[1.5ex]
F_{\rm SB}(\tau,\omega) &=&\frac{v_\omega^2}{4}\,B_\omega(\tau).
\end{eqnarray}
\end{subequations}
For the diffusive flux, we have rediscovered the usual factor 1/3 following directly from the properties of the exponential integrals. We recall that $B_\omega$ is the intensity for one massless boson degree of freedom.
\subsection{Integration over energies for a grey atmosphere}
We are usually not interested in the detailed energy dependence unless there are resonant effects. So we may integrate over energies, but this requires specifying the energy dependence of the FIB interaction rate. The assumption that the reduced absorption rate $\Gamma$ does not depend on energy is called the ``grey-atmosphere approximation'' in the theory of radiative transfer. Moreover, we now consider massless particles with $v=c=1$. The grey-atmosphere approximation is surprisingly well motivated for FIBs absorbed by the Primakoff process as detailed in Sec.~\ref{sec:PrimakoffInteractionModel}. Here we simply use this approach for the purpose of illustration.
The integrated blackbody energy density for a single massless boson degree of freedom was given in
Eq.~\eqref{eq:B0} as $B(\tau)=(\pi^2/30)\,T(\tau)^4$, where the optical depth does not depend on energy by assumption. Then our three fluxes are explicitly
\begin{subequations}\label{eq:three-fluxes}
\begin{eqnarray}
F_{\rm true}(\tau) &=& \int_{0}^{\infty}d\tau'\,{\sf E}_1(\tau'-\tau)\,B(\tau'),
\label{eq:three-fluxes-true}\\[1.5ex]
F_{\rm diff}(\tau) &=& \frac{1}{3}\,\frac{d}{d\tau}\,B(\tau),\\[1.5ex]
F_{\rm SB}(\tau) &=&\frac{1}{4}\,B(\tau).
\end{eqnarray}
\end{subequations}
Besides overall coefficients, the SB flux is a purely local quantity, the diffusive flux a spatial derivative, whereas the true flux involves a nonlocal operator, a convolution over all space, in practice a few units of optical depth upstream and downstream. So these three expressions provide a nicely systematic hierarchy of descriptions of the FIB flux in the trapping limit.
For a distant observer ($\tau=0$) and inserting the definition of ${\sf E}_1(t)$, the true flux is found to be
\begin{equation}\label{eq:Ftrue-distant}
F_{\rm true}(0)=\frac{1}{2}\int_{0}^{\infty}d\tau\,E_2(\tau)\,B(\tau),
\end{equation}
where $E_2(t)$ is the second exponential integral. The interpretation is that of every boson launched isotropically at optical depth $\tau$, the probability to escape to infinity is given by the transmittance ${\sf T}(\tau)=\frac{1}{2}\,E_2(\tau)$. The factor 1/2 accounts for the fact that bosons launched away from the surface cannot escape, whereas the others escape with an angle-averaged probability $E_2(\tau)$. If all bosons were emitted either exactly toward or exactly away from the surface, the transmittance would be $\frac{1}{2}\,e^{-\tau}$; the angular average replaces $e^{-\tau}\to E_2(\tau)$. We recall that $\tau$ here means the optical depth counted directly inward from the surface. The functional form of $\frac{1}{2}\,E_2(\tau)$ is, for positive~$\tau$, the orange curve in Fig.~\ref{fig:kernels} marked ${\sf E}_1$. For large arguments it is $\frac{1}{2}e^{-\tau}/\tau$.
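The interpretation of $\frac{1}{2}E_2(\tau)$ as an angle-averaged escape probability can be checked with a small Monte Carlo (our own sketch; sample size and seed are arbitrary):

```python
import math, random

def E2(t, n=4000):
    # E_2(t) = int_0^1 dmu e^{-t/mu}: angle average of the slant-path factor
    h = 1.0 / n
    return sum(math.exp(-t / ((i + 0.5) * h)) for i in range(n)) * h

def mc_escape(tau, n=200000, seed=1):
    # launch bosons isotropically at vertical optical depth tau; the inward
    # half (mu < 0) never escapes, an outward boson survives with e^{-tau/mu}
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        mu = rng.uniform(-1.0, 1.0)
        if mu > 0:
            acc += math.exp(-tau / mu)
    return acc / n

tau = 1.0
print(mc_escape(tau), 0.5 * E2(tau))  # both ~ 0.074
```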
\subsection{Example with power-law profile}
For illustration we return to the power-law profile of Eq.~\eqref{eq:PowerLaw}. Apart from a global factor that we now leave out, the three fluxes are
\begin{equation}\label{eq:fluxes4p}
F_{\rm true}(\tau,p)=\int_{0}^{\infty}d\tau'\,{\sf E}_1(\tau'-\tau)\,\tau'^{\,4p},\quad
F_{\rm diff}(\tau,p)=\frac{4p}{3}\,\tau^{4p-1},\quad
F_{\rm SB}(\tau,p) =\frac{1}{4}\,\tau^{4p}.
\end{equation}
For a typical case $p=0.2$ we show the fluxes as a function of optical depth and radius in Fig.~\ref{fig:Fluxes4p}. We see that the diffusive and true fluxes become asymptotically close at large optical depth and then separate in the freeze-out region. This is most intuitively clear in the radial plot. The required optical depth for the SB flux to match the escaping true flux is $\tau_{\rm SB}\simeq0.60$.
\begin{figure}[ht]
\centering
\hbox to\textwidth{\hfil\includegraphics[height=0.3\textwidth]{fig3a.pdf}
\hskip6pt\includegraphics[height=0.3\textwidth]{fig3b.pdf}\hfil}
\caption{The fluxes of Eq.~\eqref{eq:fluxes4p} for $p=0.2$. The optical depth where the SB flux matches the escaping true flux is $\tau_{\rm SB}=0.60$. For the radial dependence we used $\tau=(17~{\rm km}/r)^{20}$ as earlier.}\label{fig:Fluxes4p}
\end{figure}
Notice that $p=1/4$ is a special value where $F_{\rm diff}=1/3$ is a constant and $F_{\rm SB}=\tau/4$ increases linearly. We have not used this value to avoid an overly special case. In general, the true flux at the surface ($\tau=0$) is explicitly
\begin{equation}\label{eq:tautau}
F_{\rm true}(0,p)=\frac{p\,\Gamma(4p)}{1+2p}\simeq
\frac{1}{6}+0.06(p-1/4)+1.02(p-1/4)^2+{\cal O}[(p-1/4)^3],
\end{equation}
where we have used an expansion around the special value of $p=1/4$ where this flux is close to a minimum.
The condition $F_{\rm SB}(\tau_{\rm SB})=\frac{1}{4}\tau_{\rm SB}^{4p}=F_{\rm true}(0,p)$ leads to
\begin{equation}
\tau_{\rm SB}(p)=
\left(\frac{4p\,\Gamma(4p)}{1+2p}\right)^{1/4p}\simeq
\frac{2}{3}\,\left[1+2\left(p-\frac{1}{4}\right)\right]
\end{equation}
where the approximation is good on the few-percent level in the entire range $0<p<1$. For the special value $p=1/4$, the result $\tau_{\rm SB}=2/3$ is exact.
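Both the exact expression and the linear approximation for $\tau_{\rm SB}(p)$ are easy to confirm numerically (our own sketch; grid sizes are ad hoc):

```python
import math

def E2(t, n=800):
    # E_2(t) = int_0^1 dmu e^{-t/mu}, midpoint rule
    h = 1.0 / n
    return sum(math.exp(-t / ((i + 0.5) * h)) for i in range(n)) * h

def F_true_surface(p, t_max=30.0, n=2000):
    # F_true(0, p) = (1/2) int_0^infty dtau E_2(tau) tau^{4p}
    h = t_max / n
    return 0.5 * sum(E2(t) * t**(4 * p)
                     for t in ((i + 0.5) * h for i in range(n))) * h

p = 0.2
exact = p * math.gamma(4 * p) / (1 + 2 * p)  # analytic surface flux
tau_SB = (4 * exact)**(1 / (4 * p))          # solve tau^{4p}/4 = F_true(0)
approx = (2 / 3) * (1 + 2 * (p - 0.25))      # linear approximation
print(F_true_surface(p), exact)  # both ~ 0.166
print(tau_SB, approx)            # ~ 0.60 in both cases
```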
In the neutrino decoupling region of a SN core, when diffusive transport is still appropriate, the neutrino flux itself, driven by the temperature gradient, is approximately constant. Therefore, the radiation density of neutrinos scales roughly linearly with neutrino optical depth. As the neutrino scattering rate is proportional to the density, as assumed for our FIBs, the power-law index $p\simeq1/4$ for the temperature as a function of optical depth is well motivated in the neutrino decoupling region and borne out by numerical models.
We may also ask where the emitted radiation reaching a distant observer is actually emitted. Equation~\eqref{eq:Ftrue-distant} implies a distribution proportional to $E_2(\tau)B(\tau)$. For $p=1/5$ and thus $B\propto\tau^{4/5}$, the normalized source distribution is
\begin{equation}
f_{\rm source}(\tau)=\frac{7}{2\,\Gamma(4/5)} \tau^{4/5} E_2(\tau),
\end{equation}
shown in the left panel of Fig.~\ref{fig:SourceDistribution}. As a function of geometric radius once more we assume $\tau=(r_0/r)^{20}$ with $r_0=17\,{\rm km}$, leading to the normalized source distribution
\begin{equation}
f_{\rm source}(r)=\frac{70}{\Gamma(4/5)}\,\frac{1}{r_0} \left(\frac{r_0}{r}\right)^{37}
E_2\bigl[(r_0/r)^{20}\bigr]
\end{equation}
shown in the right panel of Fig.~\ref{fig:SourceDistribution}. The vertical dashed lines show the location of the Stefan-Boltzmann radius.
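As a consistency check (our own sketch), the prefactor $7/[2\,\Gamma(4/5)]$ indeed normalizes $f_{\rm source}(\tau)$ to unity:

```python
import math

def E2(t, n=800):
    # E_2(t) = int_0^1 dmu e^{-t/mu}, midpoint rule
    h = 1.0 / n
    return sum(math.exp(-t / ((i + 0.5) * h)) for i in range(n)) * h

def f_source(tau):
    # normalized source distribution proportional to tau^{4/5} E_2(tau), p = 1/5
    return 7.0 / (2.0 * math.gamma(0.8)) * tau**0.8 * E2(tau)

t_max, n = 30.0, 1500
h = t_max / n
norm = sum(f_source((i + 0.5) * h) for i in range(n)) * h
print(norm)  # ~ 1
```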
\begin{figure}[ht]
\centering
\hbox to\textwidth{\hfil\includegraphics[height=0.3\textwidth]{fig4a.pdf}
\hskip12pt\includegraphics[height=0.3\textwidth]{fig4b.pdf}\hfil}
\caption{Source distribution of bosons reaching a distant observer. For the temperature distribution, the power-law index $p=0.2$ was used as in Fig.~\ref{fig:Fluxes4p}. For the dependence on the geometric radius we used again $\tau=(17~{\rm km}/r)^{20}$. The vertical dashed lines indicate the position of the Stefan-Boltzmann radius of $\tau_{\rm SB}\simeq0.60$ and $r_{\rm SB}\simeq17.43~{\rm km}$.}\label{fig:SourceDistribution}
\end{figure}
We learn from this figure that the bosons reaching infinity originate from a fairly thick shell, not a sharp ``boson sphere.'' The grey-atmosphere model, without any energy dependence of the cross section, provides the ``sharpest'' conceivable emission sphere. For neutrinos, the cross section varies with the square of energy and the ``neutrino sphere'' is much more smeared out and energy dependent.
\subsection{Constant plus linear profile for the radiation density}
The special power-law profile $T(\tau)\propto \tau^{1/4}$ corresponds to a linear profile for the radiation density $B(\tau)\propto\tau$. The next simple profile derives from adding an arbitrary constant
\begin{equation}\label{eq:linear-profile}
B(\tau)=B_0\,\left(\tau+q\right),
\end{equation}
where the letter $q$ is traditionally used. The true flux is found through the convolution of Eq.~\eqref{eq:three-fluxes-true}, leading to a complicated expression in terms of exponential integrals. At the surface ($\tau=0$) one finds the following true flux to be compared with the SB flux
\begin{equation}\label{eq:SB-linear}
F_{\rm true}(0)=B_0\,\left(\frac{1}{6}+\frac{q}{4}\right)
\quad\hbox{while}\quad
F_{\rm SB}=B_0\,\left(\frac{\tau_{\rm SB}}{4}+\frac{q}{4}\right).
\end{equation}
Thus the true flux at the surface is the same as the SB flux at the optical depth $\tau_{\rm SB}=2/3$, independently of the constant $q$. This is the formal derivation of this particular reference number that floats around in the literature. For other temperature profiles and for non-grey atmospheres, $\tau_{\rm SB}=2/3$ is only an estimate.
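This $q$-independence is immediate to verify numerically (our own sketch, with $B_0=1$ and arbitrary grid parameters):

```python
import math

def E2(t, n=800):
    # E_2(t) = int_0^1 dmu e^{-t/mu}, midpoint rule
    h = 1.0 / n
    return sum(math.exp(-t / ((i + 0.5) * h)) for i in range(n)) * h

# precompute E_2 on a tau grid once and reuse it for all q
t_max, n = 40.0, 1500
h = t_max / n
ts = [(i + 0.5) * h for i in range(n)]
e2s = [E2(t) for t in ts]

def F_true_surface(q):
    # F_true(0) = (1/2) int_0^infty dtau E_2(tau) (tau + q) for B = tau + q
    return 0.5 * sum(e2 * (t + q) for t, e2 in zip(ts, e2s)) * h

for q in (0.0, 2 / 3, 2.0):
    tau_SB = 4 * F_true_surface(q) - q  # solve (tau_SB + q)/4 = F_true(0)
    print(q, tau_SB)  # tau_SB ~ 2/3 for every q
```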
\subsection{Self-consistent temperature profile and Eddington case}
In this paper we are considering FIB emission from a star or SN core with prescribed properties. On the other hand, in the trapping regime the FIB transfer of energy is not a perturbative effect, especially when they decouple at a radius larger than the neutrino sphere. In this case, the atmospheric run of temperature is determined by FIB energy transport and, in a stationary state, is determined by the condition $F_{\rm true}(\tau)=\text{constant}$. Finding the corresponding $B(\tau)$ is a formidable mathematical challenge that was solved in different ways as detailed, for example, in the book \cite{Kourganoff:1952}. Expressing the solution in the form of Eq.~\eqref{eq:linear-profile}, the solution $q(\tau)$ is called the Hopf function that can be explicitly expressed, for example, as an integral that can be evaluated numerically.
We mention in passing that there is a surprisingly accurate approximation credited to Milne and Eddington that is given by the constant $q=2/3$. From Eq.~\eqref{eq:SB-linear} we glean that in this case the true surface flux is $F_{\rm true}(0)=B_0/3$ and thus the same as the diffusive flux deep inside. The different flux components are shown in Fig.~\ref{fig:constantflux}, where the constant and linear terms of $B(\tau)$ each provide exactly the flux $B_0/6$ at the surface. While the Eddington profile was chosen to provide the same flux at the surface as deep inside, we see from Fig.~\ref{fig:constantflux} that the flux is surprisingly constant also in the intermediate range. We see that the SB flux, shown as a green line, matches the surface flux (horizontal orange line) at $\tau_{\rm SB}=2/3$ as expected.
\begin{figure}[ht]
\vskip12pt
\centering
\includegraphics[width=0.45\textwidth]{fig5.pdf}
\caption{Fluxes for the Eddington profile $B(\tau)$ of the form Eq.~\eqref{eq:linear-profile} with $q=2/3$. The optical depth where the SB flux matches the escaping true flux is $\tau_{\rm SB}=2/3$ exactly. We show separately the fluxes generated by the linear and constant bits of the radiation density that each contribute exactly $B_0/6$ to the flux at the surface. If one were to use the Hopf function $q(\tau)$ instead of $q=2/3$, the true flux (solid blue line) would exactly equal the constant $B_0/3$ (orange horizontal line) that is also equal to the nominal $F_{\rm diff}$, which is here constant everywhere and shown even in the low-$\tau$ region where the diffusion approximation is not justified.}\label{fig:constantflux}
\end{figure}
\clearpage
\subsection{True-flux convolution in geometric variables}
Estimating the FIB flux with the SB approach is a good approximation that can be done for the energy-integrated flux or, if the monochromatic reduced absorption rate $\Gamma_\omega$ strongly depends on energy, for every $\omega$ separately. However, many of the recent papers that have motivated our study used a time series of numerical SN models that were post-processed to obtain the FIB luminosity in the trapping limit. So if one anyway performs a numerical study of that type, one may as well compute directly the true flux for each energy $\omega$ based on the convolution integral Eq.~\eqref{eq:Ftrue-convolution}.
However, while the optical depth $\tau$ as a measure of distance is very useful for conceptual discussions, it is somewhat abstract for practical implementation. More importantly, it has the disadvantage that $\Gamma_\omega$ is assumed to decrease with increasing radius, as in a stellar medium, so that spatial infinity corresponds to vanishing optical depth. However, for massive FIBs that can decay, for example by $a\to2\gamma$, the absorption rate never vanishes, so the concept of optical depth is not directly appropriate and the FIB flux at infinity vanishes irrespective of the details of the source.
Both issues are resolved by returning to an integral over a geometric variable $z$ which here is the coordinate perpendicular to the plane-parallel atmosphere. The convolution integral of Eq.~\eqref{eq:Ftrue-convolution} for the monochromatic true flux becomes explicitly
\begin{eqnarray}\label{eq:Ftrue-geometric}
F_\omega(z)&=&\int_{-\infty}^{+\infty}dz'\,\Gamma_\omega(z')\,v_\omega B_\omega(z')\,
{\sf E}_1\biggl[\int_{z'}^{z}\!\! dz''\,\frac{\Gamma_\omega(z'')}{v_\omega} \biggr]
\nonumber\\[1.5ex]
&=&\int_{-\infty}^{+\infty}dz'\,Q_\omega(z')\,
{\sf E}_1\biggl[\int_{z'}^{z}\frac{dz''}{\lambda_\omega(z'')} \biggr].
\end{eqnarray}
Here $\omega$ is the energy of a FIB with mass $m_a$ and velocity $v_\omega=(1-m_a^2/\omega^2)^{1/2}$. The reduced absorption rate $\Gamma_\omega(z)$ can also include free decay far away from the source. The thermal intensity $B_\omega(z)=\omega^3/\{2\pi^2[e^{\omega/T(z)}-1]\}$ is the one for a massless boson, normalized such that $\int_0^\infty d\omega\,B_\omega=(\pi^2/30)\,T^4$. Notice that one velocity factor in front of Eq.~\eqref{eq:Ftrue-convolution} has cancelled against $v_\omega^{-1}$ appearing in the Jacobian through $|d\tau_\omega/dz|=1/\lambda_\omega=\Gamma_\omega/v_\omega$. Moreover, $\lambda_\omega(z)$ is the local MFP, based on the reduced absorption rate.
We have also introduced the volume energy loss rate, differential with regard to its variable $\omega$,
\begin{equation}\label{eq:Qdefinition}
Q_\omega(z)=\Gamma_\omega(z)\,v_\omega B_\omega(z),
\end{equation}
where the thermal FIB energy density was defined in Eq.~\eqref{eq:B-source}. Recall that the spontaneous emission rate is $\Gamma_{{\rm E},\omega}=\Gamma_\omega/(e^{\omega/T}-1)$, to be multiplied with the phase-space factor $v_\omega\omega^3/(2\pi^2)$ to obtain the energy emission rate per energy interval $d\omega$. Together this implies Eq.~\eqref{eq:Qdefinition} as a product of the reduced absorption rate times the blackbody FIB intensity. Notice that this applies to any process that absorbs the FIBs, including inverse bremsstrahlung or two-photon decay. The overall normalization (including a factor of $4\pi$ in $B_\omega$) is such that
\begin{equation}
Q(z)=\int_{m_a}^\infty d\omega\, Q_\omega(z)
\end{equation}
is the local energy loss rate per unit volume, for example in units of ${\rm erg}\,{\rm cm}^{-3}\,{\rm s}^{-1}$.
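This normalization is easy to verify numerically. The sketch below (toy units, not from the text) assumes the massless-boson spectral density $B_\omega=\omega^3/[2\pi^2(e^{\omega/T}-1)]$, whose integral is $(\pi^2/30)T^4$, together with a toy energy-independent reduced rate $\Gamma$, so that $Q=\Gamma\,(\pi^2/30)\,T^4$ must emerge:

```python
import numpy as np
from scipy.integrate import quad

T = 1.0          # temperature in arbitrary units
Gamma = 0.1      # toy energy-independent reduced absorption rate

def B_omega(w):
    """Massless spectral blackbody density; int_0^inf dw B_w = (pi^2/30) T^4."""
    return w**3 / (2 * np.pi**2 * np.expm1(w / T))

# Q = int dw Gamma * v * B_w with v = 1 for a massless boson
Q, _ = quad(lambda w: Gamma * B_omega(w), 0, np.inf)
Q_expected = Gamma * np.pi**2 / 30 * T**4
```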
The non-appearance of a velocity factor in the flux expression of Eq.~\eqref{eq:Ftrue-geometric} is slightly confusing. Therefore, as a sanity check, we consider a uniform plane-parallel atmosphere at a constant temperature. The atmosphere ends at a surface at $z=0$. So $B_\omega(z)=0$ for $z>0$ and is constant for $z<0$. Likewise, $\Gamma_\omega$ is constant for $z<0$ and vanishes otherwise. The convolution integral can be solved analytically, although one must distinguish several cases depending on the values of $z$, $z'$ and $z''$. We find explicitly
\begin{equation}\label{eq:Fisothermal}
F_\omega(z)=\frac{v_\omega^2 B_\omega}{4}
\begin{cases}2 E_3(-z/\lambda_\omega) &\hbox{for $z< 0$}, \\
1&\hbox{for $z\geq 0$,}
\end{cases}
\end{equation}
where $E_3$ is the third exponential integral discussed around Eq.~\eqref{eq:expint}. We show this solution in Fig.~\ref{fig:isothermal}, where we see that $F_\omega(z)$ builds up within a few MFPs below the surface and emerges with the Stefan-Boltzmann value $v^2_\omega B_\omega/4$, including a factor $v^2_\omega$ in front of the massless-boson intensity. For a massive particle, the flux is thus reduced in two ways: one explicit factor $v_\omega$ comes from the flux itself, the other from the phase-space density of modes within an energy interval $d\omega$, not from the velocities of individual particles.
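The result can also be verified directly from the angular integral: below the surface of the isothermal half-space, the outward-moving intensity is fully saturated at its thermal value, while the inward-moving one is depleted by the vacuum above, so the net flux is $\propto\int_0^1 d\mu\,\mu\,e^{-|z|/(\lambda_\omega\mu)}=E_3(|z|/\lambda_\omega)$. A minimal numerical sketch of this check (toy units, not from the text):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expn

lam = 1.0        # reduced MFP, arbitrary units
v, B = 1.0, 1.0  # velocity factor and thermal density set to unity

def flux_angular(z):
    """Net flux at depth z < 0: saturated outward minus depleted inward intensity,
    F(z) = (v^2 B / 2) * int_0^1 dmu mu * exp(z / (lam * mu))."""
    val, _ = quad(lambda mu: mu * np.exp(z / (lam * mu)), 0, 1)
    return 0.5 * v**2 * B * val

def flux_analytic(z):
    """Eq. (eq:Fisothermal): F(z) = (v^2 B / 4) * 2 E_3(-z/lam) for z < 0."""
    return 0.25 * v**2 * B * 2 * expn(3, -z / lam)
```

Both routines agree at any depth, and just below the surface both approach the Stefan-Boltzmann value $v_\omega^2 B_\omega/4$.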
\begin{figure}[ht]
\centering
\includegraphics[width=0.45\textwidth]{fig6.pdf}
\caption{Uniform and isothermal medium with a surface at $z=0$. {\em Blue line:} Boson energy flux for a given MFP $\lambda_\omega$ in the medium and no interaction in vacuum. {\em Orange line:} Same MFP in the medium, but a remaining MFP of $3\lambda_\omega$ due to decays in vacuum.}\label{fig:isothermal}
\end{figure}
\subsection{Including boson decay}
We briefly illustrate the case where the FIBs can decay after emerging from the surface of an isothermal medium. So we assume that in the medium the (reduced) MFP is $\lambda_\omega$, caused by all kinds of processes, including photon coalescence. In vacuum, only free decay is possible for which we take schematically an MFP of $3\lambda_\omega$. Performing the convolution, in analogy to
Eq.~\eqref{eq:Fisothermal} we find
\begin{equation}\label{eq:Fisothermaldecay}
F_\omega(z)=\frac{v_\omega^2 B_\omega}{4}
\begin{cases}2 E_3(-z/\lambda_\omega) &\hbox{for $z< 0$} \\
1&\hbox{for $z= 0$}\\
2 E_3(z/3\lambda_\omega) &\hbox{for $z>0$}
\end{cases}
\end{equation}
as shown in Fig.~\ref{fig:isothermal}. The behavior in the medium depends only on the reduced interaction rate, not the individual contributions from different processes. In vacuum, where the source $B_\omega=0$, only vacuum decay is relevant. Notice that the variation with distance is not exponential because the large-argument limit is $E_3(s)\to e^{-s}/s$. The particles still decay exponentially on their trajectories, but the angle average implies that the overall flux decreases faster with distance. This behavior is a consequence of the plane-parallel model because at a large distance from a star, many stellar radii away, the flux decreases exponentially because the trajectories become more and more collinear with distance.
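This faster-than-exponential decline is easy to quantify with the large-argument behavior quoted above; a small numerical illustration (toy numbers in units of the vacuum decay MFP, using {\tt scipy.special.expn}):

```python
import numpy as np
from scipy.special import expn

# Distance from the surface in units of the vacuum decay MFP
s = np.array([1.0, 5.0, 20.0, 100.0])

flux_factor = 2 * expn(3, s)   # angle-averaged survival of the flux
radial_factor = np.exp(-s)     # survival along a single radial trajectory

# Large-argument limit E_3(s) -> e^{-s}/s: the ratio below approaches 1
# from below, so the flux factor 2*E_3(s) ~ (2/s)*exp(-s) at large s
ratio = expn(3, s) * s * np.exp(s)
```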
\subsection{Rosseland average interaction rate for the diffusive flux}
If the reduced MFP depends on energy, the energy-integrated true flux is given by Eq.~\eqref{eq:Ftrue-geometric} after performing the $\int d\omega$ integral. The diffusive flux, on the other hand, is given by the $\int d\omega$ integral of
Eq.~\eqref{eq:Fdiff-1}. In geometric variables, one finds
\begin{equation}
F_{\rm diff}(z)=-\frac{1}{3}\int_{m_a}^\infty d\omega\,v_\omega^2\lambda_\omega(z)\frac{d}{dz}B_\omega(z)
=-\frac{\nabla T}{3}\int_{m_a}^\infty d\omega\,\frac{v_\omega^3}{\Gamma_\omega}\frac{dB_\omega}{dT},
\end{equation}
where $B_\omega$, given in Eq.~\eqref{eq:B-source}, is the spectral blackbody density for a massless boson so that
\begin{equation}
\frac{dB_\omega}{dT}=\frac{1}{2\pi^2}\,\frac{\omega^4 e^{\omega/T}}{T^2(e^{\omega/T}-1)^2}.
\end{equation}
For a massless boson with energy-independent MFP, the flux expression is
\begin{equation}\label{fig:Diffuse-Flux}
F_{\rm diff}(z)=-\frac{\lambda}{3}\,\nabla T\int_{0}^\infty d\omega \frac{dB_\omega}{dT}
=-\frac{\lambda}{3}\,\frac{2\pi^2}{15}\,T^3\nabla T
=-\frac{\lambda}{3}\,\nabla B(z).
\end{equation}
Therefore, if we wish to express the general diffusive flux in terms of an equivalent average MFP, comparing the two expressions yields
\begin{equation}\label{eq:Rosseland}
\lambda_{\rm eff}=\int_{m_a}^\infty d\omega\,\frac{v_\omega^3}{\Gamma_\omega}\,\frac{dB_\omega}{dT}
\bigg/\int_{0}^\infty d\omega \frac{dB_\omega}{dT}
=\frac{15}{4\pi^4}\,\frac{1}{T^5}\int_{m_a}^\infty d\omega\,
\frac{\omega\,(\omega^2-m_a^2)^{3/2}\, e^{\omega/T}}{(e^{\omega/T}-1)^2\,\Gamma_\omega}.
\end{equation}
In radiative transport, this effective MFP corresponds to the Rosseland average of the interaction rate.
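The Rosseland-type average of Eq.~\eqref{eq:Rosseland} is straightforward to implement. The following sketch (toy parameters, not from the text) checks the massless limit with an energy-independent $\Gamma_\omega=\Gamma_0$, where $\lambda_{\rm eff}=1/\Gamma_0$ must emerge, and illustrates that a mass suppresses $\lambda_{\rm eff}$ through the $v_\omega^3$ factor:

```python
import numpy as np
from scipy.integrate import quad

def lambda_eff(Gamma_of_w, m, T):
    """Eq. (eq:Rosseland): lam_eff = 15/(4 pi^4 T^5) *
    int_m^inf dw w (w^2 - m^2)^{3/2} e^{w/T} / ((e^{w/T} - 1)^2 Gamma_w)."""
    def integrand(w):
        x = w / T
        # e^x/(e^x-1)^2 = e^{-x}/(1-e^{-x})^2, written to avoid overflow
        return w * (w**2 - m**2)**1.5 * np.exp(-x) / np.expm1(-x)**2 / Gamma_of_w(w)
    lower = m if m > 0 else 1e-9   # skip the (integrable) w -> 0 endpoint
    val, _ = quad(integrand, lower, np.inf)
    return 15 / (4 * np.pi**4 * T**5) * val

T, Gamma0 = 1.0, 2.0
lam_massless = lambda_eff(lambda w: Gamma0, m=0.0, T=T)     # expect 1/Gamma0
lam_massive = lambda_eff(lambda w: Gamma0, m=2.0 * T, T=T)  # reduced by v^3 < 1
```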
\section{Boson luminosity in spherical geometry}
Our study is motivated by several recent papers concerning the FIB luminosity of a SN core and the associated energy loss. As we have argued in the previous section, the energy loss in the trapping limit can be estimated very well by quasi-thermal emission from a blackbody surface according to the Stefan-Boltzmann law. On the other hand, if one performs a numerical integration over an externally prescribed background model, one may as well use the exact expressions. Going beyond energy loss and asking for the nonlocal mode of energy transfer carried by FIBs, especially if these are radiatively unstable and can deposit energy far away from the point of production, a geometrically correct treatment becomes more important because a plane-parallel approximation fails when the energy is deposited far away from the compact stellar core. A similar question arises in the context of FIB energy loss and transfer in Horizontal Branch (HB) stars, where the nonlocal transfer of energy was described as ``ballistic'' in contrast to that by diffusion \cite{Lucente:2022wai}. Therefore, we now turn to formulating the FIB flux in spherical geometry.
\subsection{Solution on a ray in geometric variables}
The solution for the stationary flux in any geometry derives from the stationary solution on a given ray of the radiation field that was discussed in Sec.~\ref{sec:StationaryState}. Because FIBs are only absorbed or emitted, but not scattered, different momentum modes of the FIB radiation field are decoupled and so a single ray provides the mother of all solutions. Following Sec.~\ref{sec:StationaryState} we thus consider a ray along some chosen FIB momentum direction, use a geometric coordinate~$s$, and express the solutions in terms of intensities instead of occupation numbers,
\begin{subequations}\label{eq:rayintensities}
\begin{eqnarray}
I_\omega^+(s)&=&\frac{1}{v_\omega}\int_{-\infty}^s\! ds'\, Q_\omega(s')\,\exp\biggl[-\int_{s'}^{s}\frac{ds''}{\lambda_\omega(s'')}\biggr],
\\
I_\omega^-(s)&=&\frac{1}{v_\omega}\int_{s}^\infty ds'\, Q_\omega(s')\,\exp\biggl[-\int_{s}^{s'}\!\frac{ds''}{\lambda_\omega(s'')}\biggr],
\end{eqnarray}
\end{subequations}
where $\pm$ refers to the intensities of the FIB modes at the point $s$ along or opposite to the ray which has the direction of increasing $s$. The local FIB energy production rate $Q_\omega(s)=\Gamma_\omega(s)\,v_\omega B_\omega(s)=v_\omega^2 B_\omega(s)/\lambda_\omega(s)$ was defined earlier in Eq.~\eqref{eq:Qdefinition} and we assume that the blackbody intensity $B_\omega(s)$ and the reduced MFP $\lambda_\omega(s)$ are externally prescribed. The intensity at $s$ is simply the integral over the emission from downstream of the respective direction, modified by exponential damping along the way.
\subsection{Spherical volume integration: Observer perspective}
As the first of two ways to calculate the boson flux at radius $r$ in a spherically symmetric star we consider an observer at that radius and ask for the contribution of a given source to the outward or inward energy flux at that location and then integrate over all sources. Using the geometric setup shown in Fig.~\ref{fig:VolumeIntegrals4}, we consider a ray with a coordinate $s$ which is zero at $r$ and positive in the inward direction for later convenience. The ray is tilted relative to the radial direction by an angle $\theta$ with $\mu=\cos\theta$. Both $Q_\omega(r)$ and $\lambda_\omega(r)$ are assumed to be given as a function of stellar radius $r$. The impact parameter of this ray is $b=r\sin\theta$ and half the secant line is $a=r\cos\theta$. At point $s$ on this ray, the distance to the center of the star is given by $R^2=b^2+(a-s)^2$, providing
\begin{equation}\label{eq:rsmu}
R_{r,s,\mu}=\sqrt{r^2+s^2-2rs\mu}.
\end{equation}
The intensities for the two directions on this ray at $s=0$ follow directly from Eq.~\eqref{eq:rayintensities}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.35\textwidth]{fig7.pdf}
\caption{Geometric setting for calculating the boson flux at radius $r$ from the observer perspective.}\label{fig:VolumeIntegrals4}
\end{figure}
However, we are interested in the energy flux in the radial direction, not the intensity, and so we need another factor $v_\omega\mu$, yielding at $s=0$ the energy fluxes
\begin{subequations}\label{eq:Fluxes-observer}
\begin{eqnarray}
F_{\omega,\mu}^+(r)&=&\mu\int_{0}^\infty ds\, Q_\omega(R_{r,s,\mu})\,\exp\biggl[-\int_{0}^{s}\!\frac{ds'}{\lambda_\omega(R_{r,s',\mu})}\biggr],
\label{eq:Fluxes-observer-a}
\\
F_{\omega,\mu}^-(r)&=&\mu\int_{-\infty}^0 \! ds\, Q_\omega(R_{r,s,\mu})\,\exp\biggl[-\int_{s}^{0}\frac{ds'}{\lambda_\omega(R_{r,s',\mu})}\biggr].
\end{eqnarray}
\end{subequations}
Notice that by our choice of direction of the $s$ variable, it is $I^-_\omega$ that contributes to the radially outward flux $F^+_\omega$ and the other way around. Thus defined, both $F^\pm_{\omega,\mu}$ are positive if $\mu>0$ and the total flux is \smash{$F_{\omega,\mu}=F^+_{\omega,\mu}-F^-_{\omega,\mu}$}. To obtain the total flux, this expression must be integrated with the directional measure $\frac{1}{2}\int_0^1 d\mu$ if we define the angle as the one between the ray and the radial direction as in Fig.~\ref{fig:VolumeIntegrals4}. However, as $F_{\omega,\mu}$ consists of a piece with positive and one with negative $\mu$, we may instead use only the first piece and integrate over all $\mu$ so that
\begin{equation}\label{eq:observer-flux}
F_\omega(r)=
\frac{1}2\int_{-1}^{+1}d\mu\,\mu\underbrace{\int_0^{\infty}\!ds\,Q_\omega(R_{r,s,\mu})\,
\exp\left[-\int_0^s \frac{ds'}{\lambda_\omega(R_{r,s',\mu})}\right]}_{\displaystyle v_\omega I_{\omega,\mu}(r)}.
\end{equation}
The second integral is the local intensity $I_{\omega,\mu}(r)$ times the particle velocity $v_\omega$ as a function of direction.
The mono\-chromatic luminosity is $L_\omega(r)=4\pi r^2 F_\omega(r)$.
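For numerical work, Eq.~\eqref{eq:observer-flux} is a nested quadrature. As a sketch (toy Gaussian source, optically thin limit with the damping factor set to unity; not from the text), one can verify that far outside the source $4\pi r^2 F_\omega(r)$ returns the total volume emission:

```python
import numpy as np
from scipy.integrate import quad

sigma = 1.0
Q = lambda R: np.exp(-R**2 / (2 * sigma**2))  # toy Gaussian emissivity

def flux_thin(r):
    """Eq. (eq:observer-flux) with the exponential damping set to 1 (thin limit):
    F(r) = (1/2) int_{-1}^{1} dmu mu int_0^inf ds Q(sqrt(r^2+s^2-2*r*s*mu))."""
    def per_mu(mu):
        val, _ = quad(lambda s: Q(np.sqrt(r**2 + s**2 - 2 * r * s * mu)),
                      0, np.inf)
        return mu * val
    # force subdivision near mu = 1, where the integrand is sharply peaked
    val, _ = quad(per_mu, -1, 1, points=[0.0, 0.9], limit=200)
    return 0.5 * val

r_obs = 8.0  # well outside the source
L = 4 * np.pi * r_obs**2 * flux_thin(r_obs)
L_expected = (2 * np.pi)**1.5 * sigma**3  # total volume emission, int Q dV
```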
As a cross check we consider the strong trapping limit, where the particle MFP is short compared with $r$ and it makes sense to worry only about the region around $r$ with a few MFPs upstream and downstream. In this case, the general volume integral should approach the plane-parallel result. After integrating over emission angles, Eqs.~\eqref{eq:Fluxes-observer} are
\begin{subequations}\label{eq:Fluxes-observer-integrated}
\begin{eqnarray}
F_{\omega}^+(r)&=&\frac{1}{2}\int_0^1 d\mu\,\mu\int_{0}^\infty ds\, Q_\omega(R_{s,\mu})\,\exp\biggl[-\int_{0}^{s}\!\frac{ds'}{\lambda_\omega(R_{s',\mu})}\biggr],
\\
F_{\omega}^-(r)&=&\frac{1}{2}\int_0^1 d\mu\,\mu\int_{-\infty}^0 \! ds\, Q_\omega(R_{s,\mu})\,\exp\biggl[-\int_{s}^{0}\frac{ds'}{\lambda_\omega(R_{s',\mu})}\biggr].
\end{eqnarray}
\end{subequations}
Because the $s$-integral contributes only for $s\ll R$ we may expand the expression Eq.~\eqref{eq:rsmu} for the radial position as
\begin{equation}
R_{s,\mu}=r-s\mu+{\cal O}(s^2/r).
\end{equation}
The variable $s$ along the beam now always appears multiplied with $\mu$ and we introduce $z=-\mu s$, which is the radial distance to the position $r$. Notice that for positive $\mu$ a position at a radius larger than $r$ has negative $s$ by our convention for the direction of the considered ray. So the positive $z$-direction is the outward radiation direction and $R_{s,\mu}\to r+z$.
Because of strong trapping, regions more than a few MFPs upstream or downstream from $r$ are suppressed by the exponential damping factor, so we may nominally extend the $dz$ integral to infinity. Therefore, the fluxes are
\begin{subequations}\label{eq:Fluxes-pp}
\begin{eqnarray}
F_{\omega}^+(r)&=&\int_{-\infty}^0 \! dz\,Q_\omega(r+z)\,
\frac{1}{2}\int_0^1 d\mu\,
\exp\biggl[-\frac{1}{\mu}\int_{z}^{0}\frac{dz'}{\lambda_\omega(r+z')}\biggr],
\\
F_{\omega}^-(r)&=&\int_{0}^\infty dz\,Q_\omega(r+z)\,
\frac{1}{2}\int_0^1 d\mu\,
\exp\biggl[-\frac{1}{\mu}\int_{0}^{z}\frac{dz'}{\lambda_\omega(r+z')}\biggr].
\end{eqnarray}
\end{subequations}
The $d\mu$-integrals can now be expressed in terms of exponential integrals as explained around Eq.~\eqref{eq:expint}. The total flux $F_\omega=F_\omega^+-F_\omega^-$ can then be pieced together and reproduces a convolution integral in analogy to
Eq.~\eqref{eq:Ftrue-geometric}. The analogy becomes perfect if we reinterpret $r$ as our $z$-variable and shift the integration variables accordingly.
\begin{figure}[b!]
\centering
\includegraphics[scale=0.50]{fig8.pdf}
\caption{Geometric setting for calculating the contribution of a given source point to the energy flux at radius $r$.}\label{fig:VolumeIntegrals2}
\end{figure}
\subsection{Spherical volume integration: Source perspective}
We next turn to the second picture, sketched in Fig.~\ref{fig:VolumeIntegrals2}, where we consider emission from a given source and ask for its contribution to the outward and inward luminosities at some radius $r$. We begin with sources inside of $r$, all of which contribute to $L^+_\omega(r)$. We place the source at position $a<r$ on the $y$-axis, radiating isotropically in all directions characterized by the angle $\beta$. Following a ray in the direction $\beta$ with coordinate $s$ (origin at the source), the corresponding radius vector in the $x$-$y$-plane is ${\bf R}^<_{a,\beta,s}=(s\,\sin\beta,a+s\,\cos\beta)$ so that
\begin{equation}
R^<_{a,\beta,s}=\sqrt{a^2+s^2+2sa\cos\beta}.
\end{equation}
By the same geometric consideration, the upper limit of integration is
\begin{equation}\label{eq:smax}
s^{\rm max}_{r,a,\beta}=\sqrt{r^2-a^2\sin^2\beta}-a\cos\beta.
\end{equation}
Therefore, the intensity contribution of this source at radius $r$ is damped by
\begin{equation}
\exp\biggl[-\int_{0}^{s^{\rm max}_{r,a,\beta}}\,\frac{ds}{\lambda_\omega({R^<_{a,\beta,s})}}\biggr].
\end{equation}
The ray punches through the $r$-sphere with an angle $\theta$ and so the outward flux requires a factor $\cos\theta$. On the other hand, if we think of the ray as having a small cross section, the area of intersection with the $r$-sphere is increased by $1/\cos\theta$ and so these two factors cancel. Actually if there were no damping of the emitted radiation, all bosons emitted from the source per unit time must pass the $r$-sphere and so the source contribution to the flux is the same, independently of the source position within the sphere. We only need to calculate the position-dependent average damping factor.
Integrating over all source points within radius $r$,
we find
\begin{equation}\label{eq:Lplusinside}
L^{+,{<}}_\omega(r)=\int_0^r\! da\,4\pi a^2\,Q_\omega(a)\,
\frac{1}{2}\int_{-1}^{+1}d\cos\beta\,
\exp\biggl[-\int_{0}^{s^{\rm max}_{r,a,\beta}}\,\frac{ds}{\lambda_\omega({R^<_{a,\beta,s})}}\biggr]
\end{equation}
for the outward luminosity provided by sources inside the sphere $r$. Notice that in this case, contrary to the observer perspective, it is not possible to uniquely define the flux and we therefore work with the luminosity. The two quantities are, however, easily related: $F^{+,{<}}_\omega(r)= L^{+,{<}}_\omega(r)/4\pi r^2$.
For a source outside the $r$-sphere (right panel in Fig.~\ref{fig:VolumeIntegrals2}), a ray passes through the sphere if the angle $\beta$ is constrained by $\sin\beta<r/a$ so that
\begin{equation}
c^{\rm min}_{r,a}=\cos\beta_{\rm max}=\sqrt{1-r^2/a^2}.
\end{equation}
The distance from the stellar center of a point $s$ on the ray is
\begin{equation}
R^>_{a,\beta,s}=\sqrt{a^2+s^2-2sa\cos\beta}.
\end{equation}
The length on the ray until the first and second points of intersection is
\begin{equation}
s^{(1,2)}_{r,a,\beta}=a\cos\beta\pm \sqrt{r^2-a^2\sin^2\beta}.
\end{equation}
For the flux contribution of a ray intersecting a tilted surface, the same remarks apply as earlier. The first intersection point contributes to the inward flux, the second one to the outward flux. Collecting everything, we find for the total flux
\begin{equation}\label{eq:source-flux}
L_\omega(r)=\int_0^\infty\! da\,4\pi a^2\,Q_\omega(a)\,{\sf E}_\omega(r,a),
\end{equation}
where for $a<r$
\begin{equation}
{\sf E}^<_\omega(r,a)=\frac{1}{2}\int_{-1}^{+1}d\cos\beta\,
\exp\biggl[-\int_{0}^{s^{\rm max}_{r,a,\beta}}\,\frac{ds}{\lambda_\omega({R^<_{a,\beta,s})}}\biggr]
\end{equation}
and for $a>r$
\begin{equation}
{\sf E}^>_\omega(r,a)=\frac{1}{2}\int_{c^{\rm min}_{r,a}}^{+1}d\cos\beta\,\left\lbrace
\exp\biggl[-\int_{0}^{s^{(2)}_{r,a,\beta}}\,\frac{ds}{\lambda_\omega(R^>_{a,\beta,s})}\biggr]
-\exp\biggl[-\int_{0}^{s^{(1)}_{r,a,\beta}}\,\frac{ds}{\lambda_\omega(R^>_{a,\beta,s})}\biggr]
\right\rbrace
\end{equation}
and ${\sf E}_\omega(r,a)=0$ for $r=a$.
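These kernels are easy to tabulate numerically. A minimal sketch for a constant MFP (toy values, not from the text), which also checks the sign structure and the limit of a central source, where every direction sees the same path length $r$ so that ${\sf E}^<_\omega(r,0)=e^{-r/\lambda}$:

```python
import numpy as np
from scipy.integrate import quad

lam, r = 0.5, 1.0  # constant reduced MFP and evaluation radius (toy values)

def kernel(a):
    """Kernel E_omega(r, a) for constant lam: path-length integrals are s/lam."""
    if a < r:
        # source inside: damping over s_max = sqrt(r^2 - a^2 sin^2 b) - a cos b
        def integrand(c):  # c = cos(beta)
            s_max = np.sqrt(r**2 - a**2 * (1 - c**2)) - a * c
            return np.exp(-s_max / lam)
        val, _ = quad(integrand, -1, 1)
    else:
        # source outside: only rays with cos(beta) > c_min intersect the sphere
        c_min = np.sqrt(1 - r**2 / a**2)
        def integrand(c):
            root = np.sqrt(max(r**2 - a**2 * (1 - c**2), 0.0))
            s1, s2 = a * c - root, a * c + root  # first/second intersection
            return np.exp(-s2 / lam) - np.exp(-s1 / lam)
        val, _ = quad(integrand, c_min, 1)
    return 0.5 * val

inside = [kernel(a) for a in (0.01, 0.5, 0.9)]
outside = [kernel(a) for a in (1.1, 1.5, 2.0)]
```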
While this result looks far more complicated than the one found from the observer perspective, its structure is more reminiscent of the plane-parallel case in that we convolve the radial source distribution $Q_\omega(r)$ with the kernel ${\sf E}_\omega(r,a)$ which is positive for $a<r$ and negative for $a>r$ and the local flux is determined by the sources a few MFPs inside and outside the considered radius.
The formal transition to the plane-parallel case is made by assuming the MFP is so small that the $a$-integration contributes only in a thin shell around $r$ and we set $a=r+z$ with $|z|\ll r$. The angle integration for $z<0$ covers only the range $0<\cos\beta<1$ and so for $z<0$ we find
\begin{equation}
{\sf E}^<_\omega(r,z)=\frac{1}{2}\int_{0}^{1}d\cos\beta\,
\exp\biggl[-\int_{0}^{-z/\cos\beta}\,\frac{ds}{\lambda_\omega(r+z+s\cos\beta)}\biggr].
\end{equation}
Substituting $z'=z+s\cos\beta$, this becomes
\begin{equation}
{\sf E}^<_\omega(r,z)=\frac{1}{2}\int_{0}^{1}d\cos\beta\,
\exp\biggl[-\frac{1}{\cos\beta}\int_{z}^{0}\,\frac{dz'}{\lambda_\omega(r+z')}\biggr]
=\frac{1}{2}E_2\biggl[\int_{z}^{0}\,\frac{dz'}{\lambda_\omega(r+z')}\biggr].
\end{equation}
The derivation for $z>0$ is analogous if we notice that the contribution from the second intersection point can be dropped in the present limit. Therefore, ${\sf E}_\omega(r,a)$ in the small-$\lambda$ limit is pieced together from exponential integral functions as in the plane-parallel case.
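This limit provides another convenient numerical cross check. The sketch below (constant MFP $\lambda\ll r$, toy values, not from the text) compares the exact inside-kernel with its plane-parallel form $\tfrac12 E_2$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expn

r, lam = 1.0, 1e-3  # strong trapping: MFP much smaller than the radius

def kernel_inside(z):
    """Exact spherical kernel E^<(r, a) for a = r + z (z < 0), constant MFP."""
    a = r + z
    def integrand(c):  # c = cos(beta); damping over the chord length s_max
        s_max = np.sqrt(r**2 - a**2 * (1 - c**2)) - a * c
        return np.exp(-s_max / lam)
    val, _ = quad(integrand, -1, 1)
    return 0.5 * val

zs = np.array([-0.5e-3, -1e-3, -3e-3])  # a few MFPs below r
spherical = np.array([kernel_inside(z) for z in zs])
planar = 0.5 * expn(2, -zs / lam)       # plane-parallel limit
```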
\subsection{Luminosity at infinity}
Another important limit is the FIB luminosity at infinity, corresponding to the total energy loss of the star or SN core in the form of FIBs. This quantity is only useful when the FIBs are essentially stable, otherwise the flux at infinity always vanishes. In practice, we take $r$ to be much larger than the geometric size of the production region, but much smaller than the MFP against decay. As a consequence, there are no sources outside this very large $r$, so the large-$r$ limit can be taken on the basis of the outward luminosity.
Beginning with the ``observer perspective,'' the starting point is the outgoing flux of Eq.~\eqref{eq:Fluxes-observer-a}, implying a luminosity
\begin{equation}
L_{\omega}^+(r)=4\pi r^2\,\frac{1}{2}\int_0^1d\mu\,\mu\int_{0}^\infty ds\, Q_\omega(R_{r,s,\mu})\,\exp\biggl[-\int_{0}^{s}\!\frac{ds'}{\lambda_\omega(R_{r,s',\mu})}\biggr].
\end{equation}
By assumption, regions far away from the star do not contribute to FIB production or decay, so the range of angles $\theta$ that contribute becomes infinitesimally small for $r\to\infty$. This singularity is avoided by using instead the impact parameter $b=r\sin\theta$ as integration variable. Moreover, the integration along the ray shown in Fig.~\ref{fig:VolumeIntegrals4} is performed in a shifted variable $z=s-r$, which amounts in the limit $r\to\infty$ to putting the zero-point of this variable at the point of intersection of the impact line $b$ with the ray. Using these coordinates, the radial position is $R_{r,s,\mu}\to R_{z,b}=(b^2+z^2)^{1/2}$. The angle integral thus becomes
\smash{$4\pi r^2 \frac{1}{2}\int_0^1 d\mu\,\mu \ldots\to 2\pi \int_0^\infty db\, b \ldots$},
where we have used $\mu=1$, $b=r\sin\theta=r\theta$, and we have extended the $b$-integration to $\infty$ because only regions with $b\ll r$ contribute by assumption. Notice that $2\pi b$ is the circumference of a circle with radius $b$, so the $b$-integration is simply one over the stellar disk in terms of the radius (or impact parameter) on the disk. The volume integration has become one over the stellar disk and, transverse to it in the observer direction, over the new variable $z$. With $L_\omega=L_\omega^+(\infty)$, collecting everything, and re-naming the variable of integration $b\to r$ we find
\begin{equation}\label{eq:Linfty-observer}
L_\omega=\int_0^\infty \!dr\,2\pi r\int_{-\infty}^{+\infty} dz\,
Q_\omega\bigl(\sqrt{r^2+z^2}\bigr)
\,\exp\biggl[-\int_z^{\infty}\!\frac{dz'}{\lambda_\omega\bigl(\sqrt{r^2+z^{\prime\,2}}\bigr)}\biggr].
\end{equation}
The total energy loss finally requires an integration over $d\omega$.
We next turn to the ``source perspective'' and note that for $r\to\infty$ all sources are within the $r$-sphere, so as a starting point we may use Eq.~\eqref{eq:Lplusinside} for the outward flux caused by sources within the $r$-sphere. In the limit $r\to\infty$ the upper limit of integration becomes $s^{\rm max}_{r,a,\beta}\to\infty$. Collecting everything and re-naming the variable of integration $a\to r$, we find
\begin{equation}\label{eq:Linfty-source}
L_\omega=\int_0^\infty\! dr\,4\pi r^2\,Q_\omega(r)\,
\underbrace{\frac{1}{2}\int_{-1}^{+1}\!d\cos\beta\,
\exp\biggl[-\int_{0}^{\infty}\,\frac{ds}{\lambda_\omega\bigl(\sqrt{r^2+s^2+2rs\cos\beta}\bigr)}\biggr]}_{
\displaystyle {\sf T}_\omega(r)=\bigl\langle e^{-\tau_{\omega,\mu}(r)}\bigr\rangle_{\rm angles}}.
\end{equation}
The intuitive meaning is that we perform a volume integral over the radius-dependent energy-loss rate, reduced by the angle-averaged transmittance ${\sf T}_\omega(r)$, where $\tau_{\omega,\mu}(r)$ is the optical depth of the source point in a specific direction of emission.
The expressions for $L_\omega$ from the observer perspective Eq.~\eqref{eq:Linfty-observer} and from the source perspective Eq.~\eqref{eq:Linfty-source} are both intuitive, yet look very different. However, one can show with a direct transformation of the integral expressions that they are indeed the same.
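This equality is also easy to check numerically for a toy model. In the sketch below (all quantities in arbitrary units; not from the text) both $Q_\omega$ and $1/\lambda_\omega$ are taken as Gaussians in radius, for which the line-of-sight integrals in Eqs.~\eqref{eq:Linfty-observer} and \eqref{eq:Linfty-source} reduce to error functions:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

kappa0 = 2.0                     # peak inverse MFP (toy value)
Q = lambda R: np.exp(-R**2 / 2)  # toy emissivity; 1/lam has the same profile

def L_observer():
    """Eq. (eq:Linfty-observer): integral over the stellar disk (impact
    parameter r) and the depth z along the line of sight."""
    def per_b(r):
        # optical depth from z to infinity for 1/lam = kappa0 * exp(-R^2/2)
        tau = lambda z: kappa0 * np.exp(-r**2 / 2) \
                        * np.sqrt(np.pi / 2) * erfc(z / np.sqrt(2))
        val, _ = quad(lambda z: Q(np.sqrt(r**2 + z**2)) * np.exp(-tau(z)),
                      -np.inf, np.inf)
        return 2 * np.pi * r * val
    val, _ = quad(per_b, 0, np.inf)
    return val

def L_source():
    """Eq. (eq:Linfty-source): volume integral weighted by the angle-averaged
    transmittance T(r)."""
    def transmittance(r):
        tau = lambda mu: kappa0 * np.exp(-r**2 * (1 - mu**2) / 2) \
                         * np.sqrt(np.pi / 2) * erfc(r * mu / np.sqrt(2))
        val, _ = quad(lambda mu: np.exp(-tau(mu)), -1, 1)
        return 0.5 * val
    val, _ = quad(lambda r: 4 * np.pi * r**2 * Q(r) * transmittance(r),
                  0, np.inf)
    return val
```

Both routines agree to quadrature accuracy, and for $\kappa_0\to0$ both approach the total volume emission $\int Q\,dV=(2\pi)^{3/2}$.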
\subsection{Transmittance in the strong-trapping limit}
To calculate the FIB flux at infinity in spherical symmetry, the crucial geometric information in Eq.~\eqref{eq:Linfty-source} is encoded in the angle-averaged transmittance
\begin{equation}\label{eq:transmittance}
{\sf T}_\omega(r)=
\frac{1}{2}\int_{-1}^{+1}\!d\mu\,
\exp\biggl[-\int_{0}^{\infty}\,\frac{ds}{\lambda_\omega\bigl(\sqrt{r^2+s^2+2rs\mu}\bigr)}\biggr],
\end{equation}
where we have renamed $\cos\beta\to\mu$. The integral in the exponential is the optical depth $\tau_{r,\mu}$ for a FIB emitted at radius $r$ with direction $\mu=\cos\beta$ relative to the radial direction.
In some recent papers \cite{Bollig:2020xdr, Croon:2020lrf}, the transmittance was estimated as $e^{-\tau(r)}$, where $\tau(r)$ is
the optical depth in the outward-radial direction ($\beta=0$), i.e., the shortest way out. This approximation overestimates the transmittance except at the center of the star, because away from the center the optical depth is larger in all other directions than the radial-outward one. The prescription of Ref.~\cite{Chang:2016ntp}, re-used in Ref.~\cite{Lucente:2020whw}, effectively employs an even larger transmittance. For a specific SN model, the difference between the naive transmittance $e^{-\tau(r)}$ and that of Eq.~\eqref{eq:transmittance} is shown in Fig.~\ref{fig:L-Primakoff} below (dashed vs.\ solid blue line).
While there is no general answer concerning the difference, it is easy to estimate in the strong-trapping limit, where the FIBs essentially emerge from a Stefan-Boltzmann sphere at $\tau(r_{\rm SB})=2/3$, assuming that the absorption rate decreases rapidly with radius at and beyond $r_{\rm SB}$. So if the geometric atmospheric height of the decoupling region is small relative to the decoupling radius, we are back to the plane-parallel atmosphere approximation. In Eq.~\eqref{eq:transmittance} this implies that $s\ll r$ for the contributing range, implying that $\sqrt{r^2+s^2+2rs\mu}\to r+s\mu$. As a variable of integration we choose the vertical depth $z=\mu s$ and we also note that in the strong-trapping limit the transmittance for inward-bound directions vanishes, so the $d\mu$ integral is only over positive $\mu$. Collecting everything, we find in the plane-parallel approximation
\begin{equation}\label{eq:transmittance-2}
{\sf T}(\tau)=
\frac{1}{2}\int_{0}^{1}\!d\mu\,e^{-\tau/\mu}=\frac{1}{2}\,E_2(\tau),
\end{equation}
where $E_2(\tau)$ is the second exponential integral defined in Eq.~\eqref{eq:expint}. In the plane-parallel case, the transmittance depends only on optical depth, not on geometric radial position; here $\tau$ stands for the ``outward radial'' optical depth of a given source. We have also dropped the index $\omega$ for convenience.
We may compare ${\sf T}(\tau)$ with the naive value $e^{-\tau}$ in various cases. For $\tau=0$ we have $e^{-\tau}=1$ and $E_2(\tau)=1$ and so ${\sf T}(0)=1/2$, which is 1/2 times the naive value of~1. The reason is that of all bosons launched at the surface, only the outward-moving ones escape. For very large $\tau$, $E_2(\tau)/\exp(-\tau)\to \tau^{-1}$, so besides the previous factor 1/2 concerning the inward-bound bosons, the naive transmittance is $\tau$ times the true one and thus a vast overestimate because any trajectory that deviates only mildly from the exact radial direction implies much larger absorption. Finally, for the Stefan-Boltzmann value $\tau=2/3$, the ratio is 0.4968. In absolute terms, ${\sf T}(2/3)=0.1239$, meaning that around 1 in 8 FIBs produced at $\tau=2/3$ makes it to infinity. Counting only the outward-bound ones ($\mu>0$), almost exactly one in four escapes.
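These numbers are quickly reproduced with {\tt scipy.special.expn}; a minimal sketch:

```python
import numpy as np
from scipy.special import expn

def T_pp(tau):
    """Plane-parallel angle-averaged transmittance, Eq. (eq:transmittance-2)."""
    return 0.5 * expn(2, tau)

T0 = T_pp(0.0)          # 1/2: only outward-moving bosons escape at the surface
T_SB = T_pp(2.0 / 3.0)  # ~0.124: roughly 1 in 8 escapes from tau = 2/3

# Large tau: the naive estimate exp(-tau) overestimates by about 2*tau
tau_big = 50.0
overestimate = np.exp(-tau_big) / T_pp(tau_big)
```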
\section{Explicit example I: Supernova energy loss through Primakoff production}
As a first example we consider axion-like particles (ALPs) with a generic two-photon interaction encoded in the coupling strength $G_{a\gamma\gamma}$ and with a mass so small that decays are irrelevant and that we can use ultrarelativistic kinematics. In this case they are absorbed or produced only by the Primakoff process $\gamma+Ze\leftrightarrow Ze+a$ on charged particles. It is reasonable to approximate the reduced absorption rate $\Gamma_\omega$ as independent of energy \cite{Caputo:2021rux}, so this case comes close to the ``grey atmosphere'' approximation of radiative transfer theory. We use our expressions to calculate the total energy-loss rate $L_a$ for a prescribed numerical SN model as a function of $G_{a\gamma\gamma}$, compare it with the neutrino luminosity $L_\nu$, and find the two solutions for $G_{a\gamma\gamma}$ where $L_a=L_\nu$. The trapping solution is found to agree very well with the one from the Stefan-Boltzmann argument as anticipated.
\subsection{Interaction model}
\label{sec:PrimakoffInteractionModel}
We now consider massless ALPs that are assumed to interact with the electromagnetic field through the Lagrangian
\begin{equation}
{\cal L}_{a\gamma\gamma}=-\frac{G_{a\gamma\gamma}}{4}\, a\, F_{\mu\nu}\tilde F^{\mu\nu}=G_{a\gamma\gamma}\,a\,{\bf E}\cdot{\bf B},
\end{equation}
where $G_{a\gamma\gamma}$ is a coupling constant with dimension (energy)$^{-1}$. It is the only particle-physics parameter entering our discussion. ALPs are dominantly absorbed by the Primakoff process $a+Ze\to Ze+\gamma$ on charged particles with a rate
\begin{equation}
\Gamma_{\rm A}=Z^2\alpha G^2_{a\gamma\gamma}n_Z f_{\rm S} f_{\rm B},
\end{equation}
where $n_Z$ is the number density of targets, $f_{\rm S}$ a screening factor, and $f_{\rm B}$ a Bose stimulation factor for the final-state photon. The rate has been summed over final-state photon polarizations.
An exact evaluation of this rate for the conditions of a SN core is not available because there are many complications as detailed in Sec.~II.E of Ref.~\cite{Caputo:2021rux}. Electrons as targets are relativistic and degenerate and will be neglected. Charged nuclear targets are not only protons (as had often been assumed), but in the hottest and most important regions also small nuclear clusters. Neglecting electrons one can use Debye-H\"uckel screening \cite{Raffelt:1985nk}, but here as well as in the target phase space we neglect degeneracy effects, probably not a bad approximation in the relevant hottest regions. As suggested in Ref.~\cite{Caputo:2021rux} we finally set $\sum Z^2 n_Z\to (1-Y_n)n_{\rm B}$, where $Y_n$ is the neutron abundance (number of neutrons per baryon), keeping in mind that in general $1-Y_n$ is {\em not} the same as the proton abundance, although we call it the effective proton abundance. The screening factor varies only slowly in the range of energy relative to the screening scale and, given the relatively rough approximations used, we may as well set it to unity. Finally, the Bose stimulation factor is $f_{\rm B}=(1+f_\gamma)$ and because the targets do not recoil much, the photon energy is nearly the same as the ALP energy $\omega$, so $f_\gamma=1/(e^{\omega/T}-1)$ and we note that $f_{\rm B} = 1+f_\gamma=1/(1-e^{-\omega/T})$. Multiplication with $(1-e^{-\omega/T})$ to obtain the reduced absorption rate and collecting everything yields for the latter
\begin{equation}
\Gamma=\underbrace{\alpha G_{a\gamma\gamma}^2}_{\displaystyle \sigma_a}~
\underbrace{(1-Y_n)\,n_{\rm B}}_{\displaystyle \hat n}.
\end{equation}
Numerically, the cross section is
\begin{equation}\label{eq:cross-section}
\sigma_a=2.84 \times 10^{-42}~{\rm cm}^2~G_6^2,
\quad\hbox{where}\quad
G_6=\frac{G_{a\gamma\gamma}}{10^{-6}\,{\rm GeV}^{-1}}.
\end{equation}
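The numerical value follows from $\sigma_a=\alpha G_{a\gamma\gamma}^2$ with the natural-units conversion $1\,{\rm GeV}^{-2}=(\hbar c)^2=3.894\times10^{-28}\,{\rm cm}^2$; a sketch that also inverts the relation:

```python
import math

ALPHA = 1.0 / 137.036        # fine-structure constant
GEV_M2_TO_CM2 = 3.8938e-28   # 1 GeV^-2 in cm^2, i.e. (hbar*c)^2

def sigma_a_cm2(G6):
    """Effective Primakoff cross section sigma_a = alpha*G^2 with G = G6 * 1e-6/GeV."""
    return ALPHA * (G6 * 1.0e-6) ** 2 * GEV_M2_TO_CM2

def G6_from_sigma(sigma_cm2):
    """Invert sigma_a = alpha*G^2 to recover the coupling in units of 1e-6/GeV."""
    return math.sqrt(sigma_cm2 / (ALPHA * GEV_M2_TO_CM2)) / 1.0e-6

print(sigma_a_cm2(1.0))   # ~2.84e-42 cm^2, reproducing Eq. (cross-section)
```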
In this way we are naturally led to the simple case of a grey-atmosphere model, defined by a reduced absorption rate that does not depend on energy. In this case the energy integral in Eq.~\eqref{eq:Linfty-source} can be done explicitly and the ALP luminosity at infinity is
\begin{equation}\label{eq:spherical-integral-Primakoff}
L_a=\int_{0}^{\infty}\!\!dr\,\underbrace{4\pi r^2\,B(r)\,\sigma_a \hat n(r)}_{\displaystyle L'_a(r)}
\,{\sf T}(r),
\end{equation}
where the angle-averaged transmittance ${\sf T}(r)$ following from Eq.~\eqref{eq:transmittance} is
\begin{equation}\label{eq:transmittance-Primakoff}
{\sf T}(r)=
\frac{1}{2}\int_{-1}^{+1}\!d\mu\,
\exp\biggl[-\int_{0}^{\infty}\,ds\,\sigma_a\hat n\bigl(\sqrt{r^2+s^2+2rs\mu}\bigr)\biggr].
\end{equation}
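As an illustration of this double integral, a minimal numerical sketch for a toy exponential profile $\hat n(r)\propto e^{-r/H}$ (hypothetical values, arbitrary units) compares the angle-averaged transmittance of Eq.~\eqref{eq:transmittance-Primakoff} with the naive radial-only estimate $e^{-\tau}$:

```python
import math

# Toy atmosphere (hypothetical): sigma_a * nhat(r) = SIGMA_N0 * exp(-r/H)
SIGMA_N0, H = 5.0, 1.0

def tau_line(r, mu, ds=0.05, s_max=30.0):
    """Optical depth along a straight ray leaving radius r with direction cosine mu."""
    tau, s = 0.0, 0.5 * ds
    while s < s_max:
        radial = math.sqrt(r * r + s * s + 2.0 * r * s * mu)
        tau += SIGMA_N0 * math.exp(-radial / H) * ds
        s += ds
    return tau

def transmittance(r, n_mu=100):
    """Angle-averaged transmittance T(r), midpoint rule in mu."""
    dmu = 2.0 / n_mu
    acc = sum(math.exp(-tau_line(r, -1.0 + (j + 0.5) * dmu)) for j in range(n_mu))
    return 0.5 * acc * dmu

# the angle average lies well below the radial-only estimate deep inside
print(transmittance(1.0), math.exp(-tau_line(1.0, 1.0)))
```

Slanted and inward-bound rays traverse more material, so ${\sf T}(r)$ is always below $e^{-\tau(r)}$; far outside the profile both approach unity.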
All we need to evaluate $L_a$ is a profile of $(1-Y_n)\,n_{\rm B}$ and of the temperature.
\subsection{Supernova model and its ALP flux}
To illustrate these results we evaluate them explicitly for a numerical SN model. We use the Garching muonic SN model SFHo-18.8 at $t_{\rm pb}=1$\,s that was used in several recent studies of SN particle bounds \cite{Bollig:2020xdr,Caputo:2021rux,Caputo:2022mah}. These SN models include muons, which is a generic physical effect, although not crucial for our discussion. The models are spherically symmetric, but include convection in the form of a mixing-length treatment. (The fixed $T$ gradient in the approximate range 8--15~km seen in the left-middle panel of Fig.~\ref{fig:SN-Model} reflects convection.) The final neutron-star baryonic mass is $1.351\,M_\odot$, the final gravitational mass is $1.241\,M_\odot$, so the total amount of liberated gravitational binding energy is the difference which is $1.98\times 10^{53}\,{\rm erg}$. Therefore, the released neutrino energy is at the lower end of the typical range, whereas the duration of neutrino emission is relatively short (due to convection) and the maximum temperature of around 40~MeV reached in the core is relatively small. More details are shown in Refs.~\cite{Bollig:2020xdr,Caputo:2021rux}, whereas the parameters relevant for us are plotted in Fig.~\ref{fig:SN-Model}.
\begin{figure}[ht]
\hbox to\textwidth{\hfill\includegraphics[width=0.4\textwidth]{fig9a.pdf}\kern20pt\includegraphics[width=0.4\textwidth]{fig9b.pdf}\hfill}
\hbox to\textwidth{\hfill\includegraphics[width=0.4\textwidth]{fig9c.pdf}\kern20pt\includegraphics[width=0.4\textwidth]{fig9d.pdf}\hfill}
\hbox to\textwidth{\hfill\includegraphics[width=0.4\textwidth]{fig9e.pdf}\kern20pt\includegraphics[width=0.4\textwidth]{fig9f.pdf}\hfill}
\caption{Supernova model described in the text. {\em Left column:} Baryon density (in terms of nuclear density), effective proton abundance, and temperature as indicated. {\em Right column:} ALP production distribution $L'_a(r)$ in the top panel is for $\sigma_a=10^{-41}\,{\rm cm}^2$ ($\sigma_{41}=1$). On a linear scale it corresponds to the red curve in the bottom panel. The transmittance is shown for the indicated values of $\sigma_{41}$. The bottom panel shows the normalized distributions $L'_a(r)$ for the indicated values of~$\sigma_{41}$.}\label{fig:SN-Model}
\end{figure}
To calculate the ALP luminosity, in principle one should include gravitational effects that are also included in numerical SN models, notably gravitational redshift as outlined in Refs.~\cite{Caputo:2021rux,Caputo:2022mah}. On the other hand, our entire treatment of radiative transfer has ignored such effects and in particular redshift and bending of trajectories. Here we are not performing a precision analysis of particle bounds but rather illustrate the relationship between volume-emission and boson-sphere Stefan-Boltzmann emission. Therefore we continue to ignore gravitational effects.
We express the ALP interaction strength in terms of the cross section Eq.~\eqref{eq:cross-section} that we parameterized in terms of $\sigma_{41}=\sigma_a/10^{-41}\,{\rm cm}^2$. The scale is chosen such that for $\sigma_{41}\simeq 1$ the ALP sphere will be close to the neutrino sphere at a radius of around 17~km. In the right-top panel of Fig.~\ref{fig:SN-Model} we show the ALP production rate $L_a'(r)$ for $\sigma_{41}=1$ defined in Eq.~\eqref{eq:spherical-integral-Primakoff}. In normalized form and on a linear vertical scale it is the same as the red curve in the right-bottom panel. The maximum of emission is near the $T$ maximum. In addition, the central stellar region is geometrically suppressed by the $4\pi r^2$ factor.
In the right-middle panel, we show the transmittance of Eq.~\eqref{eq:transmittance-Primakoff} for the indicated values of $\sigma_{41}$, whereas in the right-bottom panel we show the product $L_a'(r){\sf T}(r)$ in normalized form, i.e., the source distribution of the escaping ALPs. We see that the ALPs always originate from a shell of thickness of a few km. In the free-streaming limit (unit transmittance) this shell is simply given by the product of the $T^4$ and $\hat n$ profiles together with the geometric $4\pi r^2$ factor. For larger coupling strengths, the emitting shell moves outward, driven by the transmittance, which falls steeply toward smaller radii, and by the production rate, which falls steeply toward larger $r$. However, the resulting shell is never very thin. The variation of widths of these normalized curves is also represented by their variation in height and we glean from the plot that the radial region of emission becomes less than a factor of 2 sharper for ``surface emission'' instead of free-streaming volume emission. This conclusion agrees with the schematic plane-parallel atmosphere model shown in Fig.~\ref{fig:SourceDistribution}.
Next we show in Fig.~\ref{fig:L-Primakoff} the ALP luminosity thus derived as a function of the Primakoff cross section. We compare it with the neutrino luminosity $L_\nu=5.68\times10^{52}\,{\rm erg/s}$ of this model. This value corresponds approximately to the neutrino-sphere region around 17\,km, whereas after taking redshift effects into account it is $L_\nu=4.4\times10^{52}\,{\rm erg/s}$ for an observer at infinity. However, as we do not include redshift effects in our ALP luminosity calculation, we compare the luminosities roughly in the local environment.
\begin{figure}[ht]
\centering
\includegraphics[width=0.4\columnwidth]{fig10.pdf}
\caption{ALP luminosity for our unperturbed SN model as a function of the effective Primakoff cross section on protons, to be compared with the neutrino luminosity. The blue solid line uses the full transmittance of Eq.~\eqref{eq:transmittance-Primakoff}, whereas the dashed line uses the naive transmittance $e^{-\tau(r)}$; here $\tau(r)$ is optical depth in the outward radial direction.}\label{fig:L-Primakoff}
\end{figure}
On the free-streaming side, the two luminosities are equal for $\sigma_a=1.0\times10^{-46}\,{\rm cm}^2$, corresponding to $G_{a\gamma\gamma}=0.59\times10^{-8}\,{\rm GeV}^{-1}$. On the trapping side, they are equal for $\sigma_a=1.27\times10^{-41}\,{\rm cm}^2$, corresponding to $G_{a\gamma\gamma}=2.1\times10^{-6}\,{\rm GeV}^{-1}$. In which sense these $G_{a\gamma\gamma}$ values should be seen as constraints has been discussed elsewhere \cite{Caputo:2021rux}. Here we simply take them as the values where the ALP luminosity, calculated on an unperturbed SN model, equals $L_\nu$ of that model.
In the trapping limit, we may compare the ALP flux with the one found from the Stefan-Boltzmann argument. In our model, the SB flux $4\pi r_{\rm SB}^2 (\pi^2/120)T_{\rm SB}^4$ equals $L_\nu$ for $r_{\rm SB}=16.99\,{\rm km}$. The cross section required to achieve $\tau=2/3$ at this radius is $\sigma_a=1.03\times10^{-41}\,{\rm cm}^2$, corresponding to
$G_{a\gamma\gamma}=1.9\times10^{-6}\,{\rm GeV}^{-1}$. Therefore, within 10\% one finds the same coupling strength as one found with the full transmittance-modified volume integration. The errors incurred by all other approximations, for example concerning the Primakoff cross section and concerning the impact of gravity, are of similar magnitude. Therefore, on this level of precision there is no particular benefit in performing the full volume integration that can be numerically cumbersome.
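The matching can be cross-checked numerically. A sketch that inverts the one-dof Stefan-Boltzmann law for the quoted $L_\nu$ and $r_{\rm SB}$ (the implied boson-sphere temperature, not quoted above, comes out near 7.4~MeV; the only subtlety is the conversion of MeV$^4$ to erg\,cm$^{-2}$\,s$^{-1}$):

```python
import math

MEV_ERG = 1.60218e-6   # erg per MeV
HBAR_C  = 1.97327e-11  # MeV * cm
HBAR    = 6.58212e-22  # MeV * s
MEV4_FLUX = MEV_ERG / (HBAR_C**2 * HBAR)   # 1 MeV^4 ~ 6.25e36 erg cm^-2 s^-1

def L_sb(r_cm, T_mev):
    """One-dof Stefan-Boltzmann luminosity 4*pi*r^2 * (pi^2/120) * T^4 in erg/s."""
    return 4.0 * math.pi * r_cm**2 * (math.pi**2 / 120.0) * T_mev**4 * MEV4_FLUX

L_nu, r_sb = 5.68e52, 16.99e5   # erg/s and cm, the values quoted above
T_sb = (L_nu / (4.0 * math.pi * r_sb**2 * (math.pi**2 / 120.0) * MEV4_FLUX)) ** 0.25
print(T_sb)   # ~7.4 MeV at the tau = 2/3 radius
```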
Notice that using the transmittance $e^{-\tau(r)}$ with the optical depth only in the outward-radial direction (dashed line in Fig.~\ref{fig:L-Primakoff}) would lead, for the trapping regime, to the bound $G_{a\gamma\gamma}=6.1\times10^{-6}\,{\rm GeV}^{-1}$, a factor 3 more stringent than the correct one. This further stresses the importance of considering the correct angle-averaged transmittance as already discussed around Eq.~\eqref{eq:transmittance-2}.
\subsection{Energy transfer by ALPs}
Besides the SN energy loss (the luminosity seen by a distant observer) we may also ask for the ALP flux $L_a(r)$ as a function of radius in and near the SN. Its radial variation reveals the energy gain or loss of the local medium caused by ALP emission and absorption. From the source-perspective expression of Eq.~\eqref{eq:source-flux} it follows that the kernel ${\sf E}_\omega(r,r')$, in our present case, does not depend on $\omega$; it depends only on the radial variation of the MFP, which here is independent of temperature, so the kernel is fixed by $\hat n(r)$ and the chosen value of $\sigma_a$.
For illustration we use the trapping limit and specifically $\sigma_{41}=1.27$, where the ALP flux at infinity matches $L_\nu=5.68\times10^{52}\,{\rm erg/s}$. In Fig.~\ref{ALPflux-radial} we show as a blue line the radial flux variation based on the diffusion approximation. As a red line we show the true flux based on Eq.~\eqref{eq:source-flux}. The two curves separate in the decoupling region around 17\,km where $\tau=2/3$. Beyond this region, the true ALP flux is constant. Deeper inside, it agrees with the diffusive result. We see that for radii smaller than the decoupling region, ALPs carry a significant energy flux and so would play a significant role for energy transfer within the star.
\begin{figure}[ht]
\centering
\includegraphics[width=0.4\textwidth]{fig11.pdf}
\caption{Radial variation of ALP luminosity in our SN model for $\sigma_{41}=1.27$. {\em Blue line}: Flux predicted in the diffusion approximation. {\em Red line}: True flux based on Eq.~\eqref{eq:source-flux}. The Stefan-Boltzmann radius of 17.0\,km, where $\tau=2/3$, is marked with a vertical dashed line.}\label{ALPflux-radial}
\end{figure}
\section{Explicit example II: Two-photon decay and photon coalescence}
As a second explicit case we consider ALPs with a mass $m_a$ so large that photon coalescence $2\gamma\to a$ is the main production process, not Primakoff production which we now ignore. In a SN core, this situation pertains for $m_a\gtrsim 60\,{\rm MeV}$ \cite{Lucente:2020whw} or in the core of horizontal-branch stars for $m_a\gtrsim 50\,{\rm keV}$ \cite{Lucente:2022wai}. In this situation, the only information from the stellar model is the temperature profile, whereas for the ALP both the coupling strength and the mass enter.
\subsection{Interaction model}
Once more we consider generic ALPs with a two-photon coupling discussed in Sec.~\ref{sec:PrimakoffInteractionModel}. In the ALP rest frame, the two-photon decay rate is
\begin{equation}
\Gamma_a=\frac{G_{a\gamma\gamma}^2m_a^3}{64\pi},
\end{equation}
which we use as our primary parameter to quantify the interaction strength.
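For orientation, $\Gamma_a$ translates into a rest-frame decay length $\hbar c/\Gamma_a$; a minimal numerical sketch:

```python
import math

HBAR_C_GEV_KM = 1.97327e-19   # GeV * km

def gamma_a_gev(G_gev_inv, m_gev):
    """Rest-frame two-photon decay rate Gamma_a = G^2 m^3 / (64*pi), in GeV."""
    return G_gev_inv**2 * m_gev**3 / (64.0 * math.pi)

def decay_length_km(G_gev_inv, m_gev):
    """Gamma_a^{-1} expressed as a length, hbar*c / Gamma_a."""
    return HBAR_C_GEV_KM / gamma_a_gev(G_gev_inv, m_gev)

print(decay_length_km(2.0e-6, 0.100))   # ~0.01 km for m_a = 100 MeV
```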
The ``absorption'' rate caused by the decay $a\to2\gamma$ for pseudoscalar FIBs was explicitly provided in the Supplementary Material of Ref.~\cite{Caputo:2022mah}. Starting from their Eqs.~(S10) and (S11), the reduced absorption rate is
\begin{equation}\label{eq:Gamma-decay}
\Gamma_\omega=\Gamma_a\,\frac{m_a}{\omega}\,g_{\rm B}(\omega),
\quad\hbox{where}\quad
g_{\rm B}(\omega)=\frac{2T}{v_\omega\omega}
\log\frac{\sinh\frac{(1+v_\omega)\,\omega}{4T}}{\sinh\frac{(1-v_\omega)\,\omega}{4T}}
\end{equation}
and $v_\omega=(1-m_a^2/\omega^2)^{1/2}$ is the boson velocity. Here $g_{\rm B}$ accounts for final-state Bose stimulation in the decay. Compared with the factor $f_{\rm B}$ of Ref.~\cite{Caputo:2022mah}, $g_{\rm B}$ includes $(1-e^{-\omega/T})$ for the {\em reduced} absorption rate. In the limit $T\to0$ it is $g_{\rm B}\to1$ and we are back to the vacuum decay rate. The boson flux arising in Eq.~\eqref{eq:Ftrue-geometric} is here physically produced by photon coalescence $2\gamma\to a$, a process encoded in the reduced absorption rate of Eq.~\eqref{eq:Gamma-decay}. In particular, the temperature of the background medium enters only through $g_{\rm B}$.
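Eq.~\eqref{eq:Gamma-decay} is straightforward to evaluate numerically; a sketch of $g_{\rm B}$, which tends to unity for $T\to0$ and exceeds unity near threshold, where Bose stimulation matters:

```python
import math

def g_bose(omega, m, T):
    """Stimulation factor g_B of Eq. (Gamma-decay); omega, m, T in the same units."""
    v = math.sqrt(1.0 - (m / omega) ** 2)          # boson velocity
    num = math.sinh((1.0 + v) * omega / (4.0 * T))
    den = math.sinh((1.0 - v) * omega / (4.0 * T))
    return (2.0 * T / (v * omega)) * math.log(num / den)

def reduced_rate_over_gamma_a(omega, m, T):
    """Gamma_omega / Gamma_a = (m/omega) * g_B(omega)."""
    return (m / omega) * g_bose(omega, m, T)

print(g_bose(50.0, 30.0, 1.0))   # ~1.0 in the low-temperature (vacuum) limit
```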
The local energy production rate in the form of ALPs is $B_\omega v_\omega \Gamma_\omega$ or explicitly
\begin{equation}
Q_\omega=\frac{\Gamma_a}{\pi^2}\,\frac{m_a\omega T}{e^{\omega/T}-1}\,
\log\frac{\sinh\frac{\omega+\sqrt{\omega^2-m_a^2}}{4T}}{\sinh\frac{\omega-\sqrt{\omega^2-m_a^2}}{4T}},
\end{equation}
for example in units of ${\rm erg}\,{\rm cm}^{-3}\,{\rm s}^{-1}\,{\rm MeV}^{-1}$.
\subsection{Diffusive energy transfer}
To calculate the luminosity $L_\omega(r)$ in Eq.~\eqref{eq:source-flux} we need the MFP, which in our case is explicitly
\begin{equation}
\frac{1}{\lambda_\omega}=\frac{\Gamma_\omega}{v_\omega}
=\Gamma_a\,\frac{2 m_a T}{\omega^2-m_a^2}\,\log\frac{\sinh\frac{\omega+\sqrt{\omega^2-m_a^2}}{4T}}{\sinh\frac{\omega-\sqrt{\omega^2-m_a^2}}{4T}}.
\end{equation}
According to Eq.~\eqref{eq:Rosseland}, the Rosseland average for the effective MFP is
\begin{equation}\label{eq:lambda-eff}
\lambda_{\rm eff}=\frac{1}{\Gamma_a}\,\frac{15}{32\,\pi^4}\,\frac{1}{m_aT^6}
\int_{m_a}^\infty d\omega\, \left(\omega\,
\frac{\omega^2-m_a^2}{\sinh \frac{\omega}{2T}}\right)^2\bigg/
\log\frac{\sinh\frac{\omega+\sqrt{\omega^2-m_a^2}}{4T}}{\sinh\frac{\omega-\sqrt{\omega^2-m_a^2}}{4T}}.
\end{equation}
We show this result as a function of $T/m_a$ in Fig.~\ref{fig:Lambda}. For $T\ll m_a$ the effective MFP is exponentially suppressed. The interpretation is that we have defined it to describe energy transport relative to a massless boson; for $m_a$ large relative to $T$, the production of thermal bosons is suppressed.
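Substituting $\omega=m_a x$ in Eq.~\eqref{eq:lambda-eff} shows that $\lambda_{\rm eff}\Gamma_a$ is a dimensionless function of $t=T/m_a$ alone. A numerical sketch (simple midpoint quadrature; the suppression at small $t$ is visible directly):

```python
import math

def lam_eff_times_gamma(t, n=4000):
    """Dimensionless lambda_eff * Gamma_a as a function of t = T/m_a (Eq. lambda-eff)."""
    x_max = 1.0 + 60.0 * t + 10.0   # the sinh^2 factor cuts off the integrand
    h = (x_max - 1.0) / n
    total = 0.0
    for i in range(n):
        x = 1.0 + (i + 0.5) * h
        v = math.sqrt(1.0 - 1.0 / (x * x))   # boson velocity
        num = (x * (x * x - 1.0) / math.sinh(x / (2.0 * t))) ** 2
        den = math.log(math.sinh(x * (1.0 + v) / (4.0 * t)) /
                       math.sinh(x * (1.0 - v) / (4.0 * t)))
        total += num / den * h
    return 15.0 / (32.0 * math.pi ** 4) * total / t ** 6

# strong suppression of energy transport once T drops well below m_a
print(lam_eff_times_gamma(0.05), lam_eff_times_gamma(0.5))
```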
\begin{figure}[ht]
\centering
\includegraphics[width=0.4\textwidth]{fig12.pdf}
\vskip-6pt
\caption{Effective mean-free path according to Eq.~\eqref{eq:lambda-eff}.}\label{fig:Lambda}
\end{figure}
To estimate the scale of the MFP required to have a significant impact on SN physics, we consider the temperature profile of our numerical SN model shown in Fig.~\ref{fig:SN-Model}. Around a radius of 10\,km, the temperature is around 30\,MeV and the temperature gradient 4\,MeV/km; Eq.~\eqref{eq:Diffuse-Flux} then implies a luminosity carried by ALPs of
$L_a\simeq (\lambda_{\rm eff}/{\rm km})\,66\,L_\nu$, where $L_\nu=5.68\times10^{52}\,{\rm erg}/{\rm s}$ is the neutrino luminosity of this model. In other words, unless $\lambda_{\rm eff}\ll 1\,{\rm km}$, ALPs dominate the energy transport within the SN core. On the other hand, for a sufficiently large ALP mass, the effect is much smaller near the PNS surface where temperatures are much smaller.
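This estimate is easy to reproduce, assuming the one-dof diffusive flux $F=(\lambda_{\rm eff}/3)\,(2\pi^2/15)\,T^3\,|dT/dr|$ (our reading of the diffusion-limit result quoted above); a sketch:

```python
import math

MEV_ERG = 1.60218e-6
HBAR_C  = 1.97327e-11   # MeV * cm
HBAR    = 6.58212e-22   # MeV * s
MEV4_FLUX = MEV_ERG / (HBAR_C**2 * HBAR)   # erg cm^-2 s^-1 per MeV^4

def L_diffusive(r_cm, T_mev, dTdr_mev_per_cm, lam_cm):
    """One-dof diffusive luminosity 4*pi*r^2 * (lam/3) * (2*pi^2/15) * T^3 * |dT/dr|."""
    flux = (lam_cm / 3.0) * (2.0 * math.pi**2 / 15.0) * T_mev**3 * dTdr_mev_per_cm
    return 4.0 * math.pi * r_cm**2 * flux * MEV4_FLUX

# conditions quoted above: r = 10 km, T = 30 MeV, dT/dr = 4 MeV/km, lam_eff = 1 km
L_a = L_diffusive(10.0e5, 30.0, 4.0 / 1.0e5, 1.0e5)
print(L_a / 5.68e52)   # ~66 in units of L_nu
```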
We illustrate this point in Fig.~\ref{fig:DiffusiveLa}, where we show the diffusive ALP flux for $\Gamma_a^{-1}=1\,{\rm km}$ for the indicated range of masses. For small radii, where the temperature increases outward so that the diffusive flux points inward, the negative fluxes are shown as dashed lines. Taking the neutrino decoupling region to be around 17\,km, we see that for $m_a\gtrsim30\,{\rm MeV}$ the ALP flux near the surface is smaller than $L_\nu$, whereas inside it is much larger. To prevent ALPs from dominating energy transfer within the entire SN core for $m_a=100\,{\rm MeV}$ would require $\Gamma_a^{-1}\lesssim 0.01\,{\rm km}$ and thus $G_{a\gamma\gamma}\gtrsim 2\times10^{-6}\,{\rm GeV}^{-1}$.
\begin{figure}[ht]
\centering
\includegraphics[width=0.4\textwidth]{fig13.pdf}
\caption{Diffusive ALP energy flux carried within our numerical SN model shown in Fig.~\ref{fig:SN-Model}, based on $\lambda_{\rm eff}$ given in Eq.~\eqref{eq:lambda-eff} with $\Gamma_a=1\,{\rm km}^{-1}$ and for the masses $m_a=10$, 30, 100, and 300\,MeV (top to bottom). Negative fluxes (inward bound) shown as dashed lines. We also show the neutrino luminosity $L_\nu(r)$ that reaches its final value between 15 and 16~km. Notice that the luminosities are in local variables, not for a distant observer, and they are in units of $5.68\times10^{52}\,{\rm erg/s}$, the local $L_\nu$ near the decoupling radius.}\label{fig:DiffusiveLa}
\end{figure}
In Ref.~\cite{Lucente:2020whw} an ALP exclusion plot is shown in the plane of $m_a$ and $G_{a\gamma\gamma}$, where our region of parameters is allowed. Therefore, there is a range of nominally allowed ALP parameters where they would contribute dominantly to energy transfer within the SN core, but not to energy loss. Whether this effect would actually make an observational difference is another question, but it would probably modify the appearance of convection in the PNS as well as the duration of the neutrino burst.
In any event, we here have an explicit example of a particle that is too heavy and too short-lived to provide a SN energy-loss channel, yet has a significant effect for the energy transfer within the SN core.
\section{Conclusions}
Motivated by several recent studies about the role of feebly-interacting bosons in stars, notably in supernova cores, we have derived the equations for radiative transfer from first principles for such particles. The main simplification compared with photons is motivated by the feebleness of the interaction and leads us to neglect scattering. So we only consider boson emission and absorption by the background medium. We include systematically the effect of the boson mass that may be comparable to the local temperature or even much larger. After solving the Boltzmann collision equation for a single ray of the boson radiation field, solutions for plane-parallel and spherical geometry follow essentially from phase-space integrations, although these are not entirely trivial.
For the case of spherical geometry, the monochromatic boson luminosity at a radius $r$ from the center of the star is expressed in the form
\begin{equation}
L_\omega(r)=\int_0^\infty dr'\,4\pi r^{\prime\,2} Q_\omega(r')\,{\sf E}_\omega(r,r'),
\end{equation}
where $Q_\omega(r)$ is the monochromatic energy-loss rate for the medium conditions at radius $r$ and ${\sf E}_\omega(r,r')$ is an integral kernel that depends on the reduced boson absorption rate as a function of $r$ or equivalently, the corresponding MFP $\lambda_\omega(r)$. One of our main technical results is to provide the integral kernel explicitly.
The luminosity at a given radius depends on $Q_\omega$ a few MFPs upstream and downstream. If this distance is short compared with the radius itself, the energy flux can be understood in the plane-parallel approximation. In this case the integral kernel simplifies considerably and corresponds to standard results in the literature. Moreover, when the MFP is small compared with the scale height of the temperature variation, one obtains the usual diffusion-limit result, where the energy flux is proportional to the MFP and the temperature gradient.
In the trapping limit, the boson luminosity can be seen as emerging from a quasi-thermal emission surface at an optical depth $\tau\simeq2/3$. On the other hand, the bosons still emerge from a shell, not a surface, and thus from a volume of considerable radial extent. We clarify the relation between the two perspectives and also find that the picture of quasi-thermal emission from a surface provides an excellent approximation in practice.
While our derivations and discussions are based entirely on standard radiative transfer theory, not all of our results can be found explicitly in the literature. In this sense we hope that our systematic exposition is useful to the astroparticle community and clarifies some issues that have emerged in the recent literature on FIB emission from stellar bodies.
\exclude{
In this work we have provided a pedagogical derivation of the correct volume emission of bosons from a stellar object. We started from the derivation of the boson occupation number as a consequence of the Boltzmann collision equation. We then discussed the strong trapping regime in the limit of a plane-parallel atmosphere, and introduced some useful concepts and quantities, such as the extinction functions and the boson fluxes under various approximations. Finally, we performed the complete volume integration in spherical symmetry and showed that previously adopted recipes in the literature could lead to too restrictive bounds, due to an overestimate of the escaping flux. This is the main result of the present analysis and we think our recipe should be the routinely adopted one when a numerical evaluation is performed.
We stress that our kernel, as well as the previously adopted ones, assumes the unperturbed model to be a good approximation to the real atmosphere condition. Such an approximation breaks down when the new boson couplings become too large, possibly altering the stellar atmosphere structure which one would need to study ab-initio. To this end, it may not be necessary to perform fully self-consistent SN simulations. It may be enough to study analytic models of the radiating atmosphere where the self-consistent
atmosphere is governed by the new boson energy transfer itself. This would amount to generalize the formalism of this work to the case in which two species are present with different decoupling regions: the neutrinos and a new boson. Such a generalization is left for future research.
}
\section*{Acknowledgements}
We thank Hans-Thomas Janka for helpful discussions on different aspects of this work. AC is supported by the Foreign Postdoctoral Fellowship Program of the Israel Academy of Sciences and Humanities and also acknowledges support from the Israel Science Foundation (Grant 1302/19), the US-Israeli BSF (Grant 2018236), the German-Israeli GIF (Grant I-2524-303.7) and the European Research Council (ERC) under the EU Horizon 2020 Programme (ERC-CoG-2015-Proposal n.~682676 LDMThExp). GR acknowledges support by the German Research Foundation (DFG) through the Collaborative Research Centre ``Neutrinos and Dark Matter in Astro and Particle Physics (NDM),'' Grant SFB-1258, and under Germany's Excellence Strategy through the Cluster of Excellence ORIGINS EXC-2094-390783311. EV thanks the Niels Bohr Institute for hospitality, and acknowledges support by the US Department of Energy (DOE) Grant DE-SC0009937, the Rosenfeld Foundation, and the Carlsberg Foundation (CF18-0183).
\bibliographystyle{bibi}
\section{Introduction}
The theoretical explanation of the existence of self-sustained spherically symmetric plasmoids is still a challenge in plasma physics~\cite{Ste99}. A static bunch of charged particles confined by its own internal forces does not seem to be stable. The hydrodynamical pressure $p$ and the magnetic pressure $B^2/8\pi$ tend to expand a plasmoid (see Ref.~\cite{LanLif82p322}), making its existence impossible in the absence of external forces such as gravity, etc.
However a spherical plasmoid can be implemented in the form of
spherically symmetric oscillations of electrons in
plasma~\cite{DvoDvo07}. This kind of plasma pulsation does not
have a magnetic field (see Sec.~\ref{PHIA} below) and thus does
not lose energy to radiation. This fact can explain the relative stability of spherical plasmoids generated in natural conditions.
It is, however, known that the frequency of free electron oscillations in plasma cannot be less than the Langmuir frequency. For an electron density of $\sim 10^{15}\thinspace\text{cm}^{-3}$ the Langmuir frequency is about $100\thinspace\text{GHz}$. It is rather difficult to create a strong external field of such a high frequency in natural conditions to generate a plasmoid. Therefore one should identify a physical mechanism that acts at the initial stages of the plasmoid evolution, when one can encounter a relatively low-frequency external field, and that makes the plasmoid generation possible. We suggest that plasma superconductivity can be one such mechanism.
The idea that a dense plasma can reveal superconducting properties
was put forward in Ref.~\cite{Dij80Zel08}. The superconductivity
seems to play a key role at the stages of the plasmoid formation.
However it is difficult to find the physical mechanism which would
be responsible for the appearance of the superconducting phase in
rather hot plasma.
It is known that the formation of a bound state of two electrons,
a Cooper pair, underlies the superconductivity phenomenon in
metals. A Cooper pair is formed when the electrostatic repulsive
field of an electron is shielded by the effective attractive
interaction due to the exchange of virtual phonons. Such a bound
state is destroyed when the temperature of a metal exceeds a few kelvin~\cite{Mad80p315}.
Although the temperature of plasmas in natural conditions is far greater than the typical temperature of a superconducting metal, one can find physical processes leading to the appearance of an
effective attraction between electrons. A charged test particle,
e.g. an electron, moving in plasma is known to emit ion acoustic
waves. Therefore a test electron can be surrounded by a cloud of
positively charged ions. Under certain conditions this effective
potential can screen the repulsive interaction between two
electrons and result in the creation of a bound state (see
Ref.~\cite{NamAka85}). This phenomenon, as well as the exchange of
dust acoustic waves, can lead to the effective attraction of dust
particles in a dusty plasma~\cite{Shu01}. Note that the formation
of bound states of electrons in plasma due to the exchange of ion
acoustic waves is analogous to the Cooper pairs
formation~\cite{NamVlaShu95}.
In this work we study spherically symmetric plasma structures
using the method of classical electrodynamics. Note that in
Ref.~\cite{DvoDvo07} we considered a spherical plasmoid as quantum
oscillations of electrons in plasma. Firstly, in Sec.~\ref{PHIA},
we examine the electromagnetic potentials for the spherically
symmetric motion of charged particles and find a gauge in which
the vector potential is zero. Secondly, in Sec.~\ref{CLOSC}, we
study free and forced spherically symmetric oscillations of
electrons on the basis of the system of equations of classical
plasma hydrodynamics. In Sec.~\ref{WAKE}, using the methods of
Ref.~\cite{NamAka85} we calculate the scalar potential created by
a test electron participating in forced oscillations in plasma. We
examine the conditions when the effective potential is attractive
and consider the possibility of forming a bound state. Finally, in
Sec.~\ref{DISC}, we examine possible applications of the obtained
results to the description of natural spherical plasmoids.
\section{A description of
spherically symmetric oscillations of electrons in plasma based on
classical electrodynamics\label{CLPL}}
In this section we will study oscillations of electrons in plasma
which have the spherical symmetry. The method of classical
electrodynamics will be used to describe this process. Firstly, in
Sec.~\ref{PHIA}, we will be interested in the various choices of
electromagnetic potentials for such a system. Then, in
Sec.~\ref{CLOSC}, we will obtain the exact solution of the
hydrodynamical equations which describes oscillations of the electron density, as well as the dispersion relation for these oscillations.
\subsection{Electromagnetic potentials in the system of
radially oscillating particles\label{PHIA}}
It is clear that the scalar potential of an electric field
$\varphi$ in the system making spherically symmetric pulsations
can depend only on the radial coordinate, $\varphi(r,t)$. The
vector potential $\mathbf{A}$ has only a radial component, which is also a function of the radial coordinate alone, $A_r(r,t)$. Due to the existence of spatial dispersion in plasma, oscillations of charged particles should vanish at large distances from the center of the system, i.e. $A_r(r,t) \to 0$ at $r \to \infty$.
Suppose that we have found the potentials $\varphi$ and
$\mathbf{A}$. Now we can make the gauge transformation,
$\mathbf{A}' = \mathbf{A} + \nabla f$ and $\varphi' = \varphi -
(1/c)\partial f/\partial t$, where $f(r,t)=\int_r^\infty
A_r(r',t)\mathrm{d}r'$. This gauge transformation does not change
the electric field. Note that the magnetic field is identically
equal to zero for a spherically symmetric motion of charged
particles, $\mathbf{B}= \nabla \times \mathbf{A} = 0$. For the
chosen function $f$ we obtain that the vector potential can be
eliminated in all the space. Now the electric field has the form,
$\mathbf{E}=-\nabla\varphi'$. In the following we will omit the
prime in the definition of the scalar potential.
\subsection{Classical plasma hydrodynamics description
of a spherical plasmoid\label{CLOSC}}
In the first approximation we assume that only electrons participate in plasma oscillations, since the mobility of the ions is low. In the absence of collisions and other forms of
dissipation the system of the hydrodynamic equations for the
description of plasma oscillations can be presented in the
following way (see Ref.~\cite{Jac65p369}):
\begin{align}
\notag
& \frac{\partial n_e}{\partial t} + \nabla \cdot (n_e\mathbf{v}) = 0, \\
\notag
& \frac{\partial\mathbf{v}}{\partial t} + (\mathbf{v} \cdot \nabla)\mathbf{v} =
-\frac{e}{m}\mathbf{E}-\frac{1}{m n_e}\nabla p, \\
\notag
& \nabla \cdot \mathbf{E} = -4\pi e(n_e-n_i)+4\pi\rho_\mathrm{ext}(\mathbf{r},t), \\
\label{hydrodyn}
& \frac{\partial\mathbf{E}}{\partial t} =
4\pi e n_e \mathbf{v},
\end{align}
where $n_e$ is the electron density, $n_i$ is the ion density,
$p$ is the plasma pressure, $m$ is the electron mass and
$e>0$ is the proton charge. In Eq.~\eqref{hydrodyn} we include a
possible external source $\rho_\mathrm{ext}$ and take into account
that the magnetic field is equal to zero (see Sec.~\ref{PHIA}).
Supposing that $n_i = n_0$ and that the deviation of the electron
density from its equilibrium value is small, $n_e-n_0 = n \ll
n_0$, we can linearize the system~\eqref{hydrodyn} and obtain a
single differential equation for the perturbation of the electron
density,
\begin{equation}\label{closcil}
\frac{\partial^2 n}{\partial t^2}+\omega_e^2 n -
\frac{1}{m}
\left(
\frac{\partial p}{\partial n}
\right)_0 \nabla^2 n = \frac{4\pi e n_0}{m}\rho_\mathrm{ext},
\end{equation}
where $\omega_e=\sqrt{4\pi e^2 n_0/m}$ is the plasma frequency for
electrons and $(\partial p/\partial n)_0$ is the derivative taken
at $n_e=n_0$. The latter quantity depends on the equation of state
of electrons in plasma.
In the absence of an external source, the spherically symmetric
solution of Eq.~\eqref{closcil} has the form
\begin{equation}\label{clsol}
n(r,t) = A \cos(\omega t)\frac{\sin\gamma r}{r},
\end{equation}
where $A$ is a constant chosen to satisfy the condition $|n| \ll
n_0$. The frequency of oscillations $\omega$ and the length-scale
parameter $\gamma$ are related by the following dispersion relation:
\begin{equation}\label{cldisprel}
\omega^2=\omega_e^2+\frac{1}{m}
\left(
\frac{\partial p}{\partial n}
\right)_0\gamma^2,
\end{equation}
which shows that free oscillations with $\omega \geq \omega_e$
exist in plasma. However, if $\rho_\mathrm{ext} \sim \cos\Omega
t$, it is clear that forced oscillations with $\Omega<\omega_e$
can also be excited.
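The solution~\eqref{clsol} and the dispersion relation~\eqref{cldisprel} can be checked by direct substitution into Eq.~\eqref{closcil} with $\rho_\mathrm{ext}=0$; writing $c_s^2 = (1/m)(\partial p/\partial n)_0$ and using the radial Laplacian, the residual vanishes identically. A short SymPy sketch of this check:

```python
import sympy as sp

r, t, A, gamma, w_e, cs2 = sp.symbols('r t A gamma omega_e cs2', positive=True)
# cs2 stands for (1/m)(dp/dn)_0; omega is fixed by the dispersion relation
omega = sp.sqrt(w_e**2 + cs2 * gamma**2)

n = A * sp.cos(omega * t) * sp.sin(gamma * r) / r    # Eq. (clsol)
# spherically symmetric Laplacian: (1/r^2) d/dr (r^2 dn/dr)
lap = sp.diff(r**2 * sp.diff(n, r), r) / r**2

# left-hand side of Eq. (closcil) without the external source
residual = sp.simplify(sp.diff(n, t, 2) + w_e**2 * n - cs2 * lap)
```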
In Ref.~\cite{DvoDvo07} we studied spherically symmetric
oscillations of electrons in plasma using the quantum mechanical
approach. In that paper, we solved the non-linear Schr\"{o}dinger
equation for the wave function normalized to the number density of
electrons, $|\psi(\mathbf{r},t)|^2=n_e(\mathbf{r},t)$. It was
found that in the spherically symmetric case the electron density
has the form, $n_e(r,t)=n_0+A\cos(\omega t)\sin(\gamma
r)/r+\dotsb$, i.e. it is similar to Eq.~\eqref{clsol}. However the
dispersion relation in Ref.~\cite{DvoDvo07} was different from
Eq.~\eqref{cldisprel}. Moreover, quantum oscillations of electrons
reveal a typical size of the system, where the most intense
oscillations occur, $L = \pi/\gamma =
\pi\sqrt{\hbar/2m\omega_e}$, at the critical frequency
$\omega=2\omega_e$. On the contrary, if we use the classical
electrodynamics method, the parameter $\pi/\gamma$, in principle,
can be arbitrary. Of course, Eq.~\eqref{cldisprel} is valid only
for quite long waves, when $\gamma \ll k_e$ (see
Ref.~\cite{Jac65p374}), where $k_e$ is the Debye wave number (see
the definition in Sec.~\ref{WAKE}).
\section{Formation of bound states of electrons at the initial
stages of the spherical plasmoid evolution\label{WAKE}}
Any particle in plasma plays a two-fold role. On the one hand, it
is a test particle moving through the plasma and interacting with
the plasma as a whole rather than with separate plasma particles.
On the other hand, any test particle is itself a part of the plasma
and hence it contributes to the self-consistent electromagnetic
fields in plasma.
In this section we will use the method of test particles (see
Refs.~\cite{NamAka85,NamVlaShu95}) to calculate the effective
potential of a radially oscillating electron.
Let us study the electric field created by electrons participating
in spherically symmetric motion, considering each electron as a
test particle with charge $q$. Each test particle is taken to
interact with the remaining hot electrons, which have the
temperature $T$, and with the cold ions. The permittivity of this
plasma has the form,
\begin{equation}\label{perm}
\varepsilon(\mathbf{k},\omega)=
1+
\left(
\frac{k_e}{k}
\right)^2-
\left(
\frac{\omega_i}{\omega+\mathrm{i}0}
\right)^2,
\end{equation}
where $k_e=\sqrt{4\pi n_0 e^2/T}$ is the Debye wave number and
$\omega_i=\sqrt{4\pi (Z e)^2 n_0/M}$ is the plasma frequency for
ions with the mass $M$ and the charge $Ze$, where $Z$ is the
degree of ionization of an ion.
We suppose that a test particle makes harmonic oscillations around
the point $\mathbf{r}_0$ with the frequency $\Omega$ and the
amplitude $\mathbf{a}$: $\mathbf{r}'(t) = \mathbf{r}_0 +
\mathbf{a}\sin \Omega t$. To find the electric potential created
by one of the charged particles we use the Maxwell equation for
the electric displacement field, $\nabla \cdot
\mathbf{D} = 4 \pi q \delta^3(\mathbf{r}-\mathbf{r}')$.
Expressing the electric field as $\mathbf{E}(\mathbf{k},\omega) =
- \mathrm{i} \mathbf{k} \varphi(\mathbf{k},\omega)$, we obtain the
scalar potential of the system in the form
\begin{align}\label{varphi1}
\varphi(\mathbf{k},\omega) = &
\frac{4 \pi q}{k^2 \varepsilon(\mathbf{k},\omega)}
\notag
\\
& \times
\int \mathrm{d}t
e^{\mathrm{i} \omega t -
\mathrm{i} \mathbf{k} \cdot (\mathbf{r}_0 + \mathbf{a}\sin \Omega t)}.
\end{align}
We recall that the vector potential can be taken to be equal to
zero for the spherically symmetric system (see Sec.~\ref{PHIA}).
Let us decompose the exponential factor in the integrand of
Eq.~\eqref{varphi1} using the Jacobi--Anger expansion in Bessel
functions of the $n$-th order,
\begin{equation}\label{bessel}
e^{-\mathrm{i} \mathbf{k} \cdot \mathbf{a} \sin \Omega t}=
\sum_{n=-\infty}^{+\infty}
J_n(\mathbf{k} \cdot \mathbf{a})e^{-\mathrm{i} n \Omega t}.
\end{equation}
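The expansion above is the Jacobi--Anger identity; it can be verified numerically by truncating the sum, since the terms decay rapidly for $|n| \gg |\mathbf{k}\cdot\mathbf{a}|$. A small check with SciPy:

```python
import numpy as np
from scipy.special import jv  # Bessel function J_n

x, Omega_t = 1.3, 0.7         # x plays the role of k.a, Omega_t of Omega*t
lhs = np.exp(-1j * x * np.sin(Omega_t))
rhs = sum(jv(n, x) * np.exp(-1j * n * Omega_t) for n in range(-40, 41))
err = abs(lhs - rhs)          # truncation error of the Bessel series
```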
Finally on the basis of Eqs.~\eqref{varphi1} and~\eqref{bessel} we
obtain the scalar potential in the form
\begin{align}\label{varphi2}
\varphi(\mathbf{r},t) = &
\frac{q}{2\pi^2}
\sum_{n=-\infty}^{+\infty}
\int \mathrm{d}\omega\mathrm{d}^3\mathbf{k}
e^{-\mathrm{i} \omega t + \mathrm{i} \mathbf{k} \cdot (\mathbf{r}-\mathbf{r}_0)}
\notag
\\
& \times
\delta(\omega-n\Omega)
\frac{J_n(\mathbf{k} \cdot \mathbf{a})}
{k^2\varepsilon(\mathbf{k},\omega)}.
\end{align}
To analyze Eq.~\eqref{varphi2} we present the reciprocal of the
permittivity~\eqref{perm} in the following way:
\begin{equation}\label{iaw}
\frac{1}{\varepsilon(\mathbf{k},\omega)}=
\frac{k^2}{k^2+k_e^2}
\left(
1+\frac{\omega_a^2}{\omega^2-\omega_a^2}
\right),
\end{equation}
where $\omega_a=k\omega_i/\sqrt{k^2+k_e^2}$ is the dispersion
relation for ion acoustic waves.
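Equation~\eqref{iaw} is an algebraic identity that follows from Eq.~\eqref{perm} (dropping the $+\mathrm{i}0$ prescription for real $\omega$); a quick symbolic check:

```python
import sympy as sp

k, ke, w, wi = sp.symbols('k k_e omega omega_i', positive=True)
eps = 1 + (ke / k)**2 - (wi / w)**2             # Eq. (perm), real omega
wa = k * wi / sp.sqrt(k**2 + ke**2)             # ion acoustic dispersion
rhs = k**2 / (k**2 + ke**2) * (1 + wa**2 / (w**2 - wa**2))  # Eq. (iaw)
residual = sp.simplify(1 / eps - rhs)           # should vanish identically
```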
Using Eq.~\eqref{iaw} we present $\varphi$ in Eq.~\eqref{varphi2}
as a sum of two terms, $\varphi = \varphi_D + \varphi_W$, where
\begin{align}\label{debye1}
\varphi_D(\mathbf{r},t) = &
\frac{q}{2\pi^2}
\sum_{n=-\infty}^{+\infty}
\int \mathrm{d}^3\mathbf{k}
e^{-\mathrm{i} n \Omega t + \mathrm{i} \mathbf{k} \cdot (\mathbf{r}-\mathbf{r}_0)}
\notag
\\
& \times
\frac{J_n(\mathbf{k} \cdot \mathbf{a})}{k_e^2+k^2},
\end{align}
is the analog of the Debye-H\"{u}ckel screening potential and
\begin{align}\label{wake1}
\varphi_W(\mathbf{r},t) = &
\frac{q}{2\pi^2}
\sum_{n=-\infty}^{+\infty}
\int \mathrm{d}^3\mathbf{k}
e^{-\mathrm{i} n \Omega t + \mathrm{i} \mathbf{k} \cdot (\mathbf{r}-\mathbf{r}_0)}
\notag
\\
& \times
\frac{\omega_a^2}{(n\Omega)^2-\omega_a^2}
\frac{J_n(\mathbf{k} \cdot \mathbf{a})}{k_e^2+k^2},
\end{align}
is the wake potential due to the emission of ion acoustic
waves~\cite{NamAka85}.
To study the behaviour of the potentials $\varphi_D$ and
$\varphi_W$ we choose a specific coordinate system with
$\mathbf{r}_0 = 0$ and $\mathbf{a} = a\mathbf{e}_z$, where
$\mathbf{e}_z$ is the unit vector along the $z$-axis. We also
decompose the vectors $\mathbf{k}$ and $\mathbf{r}$ using the
cylindrical coordinates: $\mathbf{k} = (k_\rho,k_z,\phi_k)$ and
$\mathbf{r} = (\rho,z,\phi)$.
Taking into account the value of the following integral:
\begin{equation}
\int_0^{2\pi}\mathrm{d}\phi_k e^{\mathrm{i} k_\rho \rho \cos(\phi_k-\phi)}
= 2\pi J_0(k_\rho \rho),
\end{equation}
we rewrite the potential $\varphi_D$ in the form,
\begin{align}\label{debye2}
\varphi_D(\rho,z,t) = &
2q
\sum_{n=0}^{\infty} (-1)^n
\int_0^\infty k_\rho \mathrm{d}k_\rho
\notag
\\
& \times
\frac{e^{-|z|\sqrt{k_\rho^2+k_e^2}}J_0(k_\rho \rho)}{\sqrt{k_\rho^2+k_e^2}}
\notag
\\
& \times
\Big\{
I_{2n}
\left(
a \sqrt{k_\rho^2+k_e^2}
\right)
\cos[2n \Omega t]
\notag
\\
& \pm
I_{2n+1}
\left(
a \sqrt{k_\rho^2+k_e^2}
\right)
\notag
\\
& \times
\sin[(2n+1) \Omega t]
\Big\},
\end{align}
where $I_n(x) = (\mathrm{i})^{-n} J_n(\mathrm{i}x)$ is the
modified Bessel function of the first kind.
In Eq.~\eqref{debye2} the `$+{}$' sign stands for $z>0$ and
`$-{}$' for $z<0$.
The electromagnetic field of a charged linear oscillator in
vacuum, $\varepsilon=1$, was studied in Ref.~\cite{SokTer74}. The
scalar potential found in that book is different from
Eq.~\eqref{debye2} since in Ref.~\cite{SokTer74} the problem of
radiation of a linear oscillator was considered and the Lorentz
gauge for potentials was used. We study the case of the
spherically symmetric motion of plasma. As we demonstrated in
Sec.~\ref{PHIA} such a system does not have any magnetic field and
thus cannot emit radiation. In our situation it is more convenient
to use the gauge in which $\mathbf{A}=0$. That is why we get a
different expression for the scalar potential.
It is interesting to analyze Eq.~\eqref{debye2} in the static
limit. For a test particle at rest, i.e. in the limit $a \to 0$,
only the term with the function $I_0(x)$ survives. It is possible
to show that in this limit Eq.~\eqref{debye2} transforms into
$\varphi_D(\rho,z,t) = (q/r) e^{-k_e r}$,
where $r=\sqrt{z^2+\rho^2}$, i.e. one recovers the usual form of
the Debye-H\"{u}ckel potential.
This analysis justifies our definition of $\varphi_D$.
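The static limit quoted above rests on a Sommerfeld-type integral identity, $\int_0^\infty k_\rho\,\mathrm{d}k_\rho\, e^{-|z|\sqrt{k_\rho^2+k_e^2}}\, J_0(k_\rho\rho)/\sqrt{k_\rho^2+k_e^2} = e^{-k_e r}/r$ with $r=\sqrt{z^2+\rho^2}$ (only the integral is checked here, not the overall prefactor of $\varphi_D$). A numerical verification with SciPy:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

ke, rho, z = 1.0, 0.7, 0.5   # arbitrary test values
r = np.hypot(rho, z)

def integrand(k):
    s = np.sqrt(k**2 + ke**2)
    return k / s * np.exp(-abs(z) * s) * j0(k * rho)

val, _ = quad(integrand, 0.0, np.inf, limit=200)
err = abs(val - np.exp(-ke * r) / r)   # compare with e^{-ke r}/r
```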
Now we study the wake potential~\eqref{wake1} using the same
technique as for $\varphi_D$. We present $\varphi_W$ in
cylindrical coordinates as
\begin{align}\label{wake2}
\varphi_W(\rho,z,t) & =
-2q
\int_0^\infty k_\rho \mathrm{d}k_\rho
\frac{e^{-|z|\sqrt{k_\rho^2+k_e^2}}J_0(k_\rho \rho)}{\sqrt{k_\rho^2+k_e^2}}
\notag
\\
& \times
I_0
\left(
a \sqrt{k_\rho^2+k_e^2}
\right)
\notag
\\
& + 2q
\sum_{n=1}^{\infty}
\left\{
\begin{matrix}
1 \\
(-1)^n \
\end{matrix}
\right\}
\frac{\omega_i^2}{\omega_i^2-(n\Omega)^2}
\notag
\\
& \times
\frac{k_n^3}{k_e^2+k_n^2}
\int_0^1 \mathrm{d}x
J_0(k_n \rho \sqrt{1-x^2})
\notag
\\
& \times
J_n(a k_n x)\sin(k_n|z|x - n \Omega t),
\end{align}
where
\begin{equation}\label{kn}
k_n = k_e \frac{n\Omega}{\sqrt{\omega_i^2-(n\Omega)^2}}.
\end{equation}
The upper multiplier in Eq.~\eqref{wake2} corresponds to $z>0$ and
the lower one to $z<0$. Comparing Eq.~\eqref{debye2} and
Eq.~\eqref{wake2} we can see that the terms containing the
integrals of $I_0(x)$ cancel in the sum of the two potentials.
To derive Eq.~\eqref{wake2} we assume that $n\Omega<\omega_i$,
i.e. the wake potential appears only for sufficiently slow
oscillations. For example, we can consider the forced oscillations
of electrons in plasma described in Sec.~\ref{CLOSC} and suppose
that $\Omega$ is slightly less than $\omega_i$, so that only the
first harmonic is excited.
Let us study Eq.~\eqref{wake2} on the line of the test-particle
oscillations, $\rho = 0$, and at large distances from the test
particle, $|z| \gg a$. Putting $n=1$ we obtain from
Eq.~\eqref{wake2}
\begin{equation}\label{wake3}
\varphi_W(z,t) \approx \mp
q \frac{\Omega^2}{\omega_i^2-\Omega^2}
\frac{k_1 a}{|z|}
\cos(k_1 |z| - \Omega t),
\end{equation}
where the `$-{}$' sign stands for $z>0$ and `$+{}$' for $z<0$.
Returning to the general Eq.~\eqref{debye2} for $\varphi_D$ we
also rewrite it for $\rho = 0$ as
\begin{align}\label{debye4}
\varphi_D(z,t) = &
\frac{2q}{|z|}
\sum_{n=0}^{\infty} (-1)^n
\int_{k_e|z|}^\infty \mathrm{d}x
e^{-x}
\notag
\\
& \times
\Big\{
I_{2n}
\left(
\frac{a}{|z|}x
\right)
\cos[2n \Omega t]
\notag
\\
& \pm
I_{2n+1}
\left(
\frac{a}{|z|}x
\right)
\notag
\\
& \times
\sin[(2n+1) \Omega t]
\Big\}.
\end{align}
Comparing Eqs.~\eqref{wake3} and~\eqref{debye4} we can see that
at large distances the non-Coulomb wake potential dominates over
the Debye-H\"{u}ckel potential. For example, the term with the
function $I_1(x)$ in Eq.~\eqref{debye4} at $|z| \gg a$ and $k_e a
\ll 1$ has the form
\begin{equation}\label{debye5}
\mp \frac{2qa}{z^2}e^{-k_e |z|}\sin(\Omega t),
\end{equation}
which is much smaller than the wake potential~\eqref{wake3}
at $|z|>1/k_e$. The terms containing the functions $I_n(x)$
with $n>1$ give even smaller contributions to $\varphi_D$ than
that in Eq.~\eqref{debye5}.
The wake potential is attractive when $\cos(k_1 |z| - \Omega t)>0$
for $z>0$ and when $\cos(k_1 |z| - \Omega t)<0$ for $z<0$. We
recall that we study the radial pulsation of plasma. Thus the
coordinate $z$ coincides with the radial direction. Suppose that a
test electron attracts another electron situated at a distance $d$
on the side away from the center of the system during a certain
period of time, $\Delta t_1 = \pi/\Omega$. During the next half
period of the wake potential variation, $\Delta t_2 = \pi/\Omega$,
the same test electron will attract a different electron situated
at the same distance $d$ but on the side closer to the center of
the system. It means that a test electron can always attract some
charged particles in plasma.
To form a bound state with another electron in plasma, the
interaction energy of a test electron, $|e\varphi_W|$, should be
greater than the total energy of its oscillations, $E_\mathrm{osc}
= m a^2 \Omega^2/2$. Suppose that the two electrons are at the
distance $d=1/k_e$ and that the amplitude of oscillations is $a =
0.1d \ll d$.
For a plasma consisting of electrons, with temperature $T =
10^3\thinspace\text{K}$ and number density $n_0 =
10^{15}\thinspace\text{cm}^{-3}$, as well as of singly ionized
nitrogen atoms, we obtain that the ratio
$|e\varphi_W|/E_\mathrm{osc}>1$ if $\Omega > 10^{-4}\omega_i$.
Taking into account that $\omega_i \sim
10^{10}\thinspace\text{s}^{-1}$ we obtain that the frequency of
the forced oscillations should be in the region
$10^6\thinspace\text{s}^{-1}<\Omega<10^{10}\thinspace\text{s}^{-1}$.
Thus the effective attraction can take place in atmospheric
plasma for reasonable frequencies of an external field.
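The numerical estimates above can be reproduced directly in CGS--Gaussian units (with the temperature entering the Debye wave number as $k_B T$):

```python
import numpy as np

# CGS-Gaussian constants
e   = 4.803e-10        # electron charge, statC
k_B = 1.381e-16        # Boltzmann constant, erg/K
M   = 14 * 1.661e-24   # mass of a singly ionized nitrogen atom, g

n0 = 1e15              # number density, cm^-3
T  = 1e3               # temperature, K

omega_i = np.sqrt(4 * np.pi * e**2 * n0 / M)      # ion plasma frequency, s^-1
k_e = np.sqrt(4 * np.pi * n0 * e**2 / (k_B * T))  # Debye wave number, cm^-1
```

This gives $\omega_i \approx 1.1\times 10^{10}\,\text{s}^{-1}$ and $k_e \approx 1.4\times 10^{5}\,\text{cm}^{-1}$, consistent with the estimates used in the text.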
\section{Discussion\label{DISC}}
We studied electron oscillations in plasma using classical
electrodynamics. It is found that spherically symmetric
oscillations are possible in the classical case, although the
dispersion relation~\eqref{cldisprel} is different from that
previously found for quantum oscillations~\cite{DvoDvo07}. We
suggest that the radial pulsation of plasma underlies the rare
atmospheric electricity phenomenon called ball lightning
(BL)~\cite{Ste99}.
The radial oscillations of electrons in plasma described in the
present work are unbounded and occur in all of space. This is the
case when a plasmoid, i.e. BL, propagates far away from any
external surface. The situation changes when one considers spherically
symmetric oscillations of plasma in a cavity inside a dielectric
material. This kind of oscillations was studied in
Ref.~\cite{SteYuVla93} in the presence of external electric and
magnetic fields. The exact system of non-linear equations
describing oscillating spatial patterns was obtained and analyzed
both analytically and numerically. The solutions obtained in
Ref.~\cite{SteYuVla93} can correspond to BL passing through
microscopic cracks in a dielectric material, e.g. glass. There are
several reports of such events collected in Ref.~\cite{Sta85}. The
problem of the interaction of BL with external materials will be
analyzed in our future works.
As we demonstrated, in order to generate spherically symmetric
oscillations, one should excite them with rather high frequencies,
$\Omega > \omega_e$ in the classical case (or $\Omega > 2\omega_e$
in the quantum case). The electron plasma frequency is
$(10^{12}-10^{13})\thinspace\text{s}^{-1}$ for $n_0 =
(10^{15}-10^{17})\thinspace\text{cm}^{-3}$. Such high frequencies
are very difficult to obtain in any natural conditions.
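The electron plasma frequencies quoted above follow from $\omega_e=\sqrt{4\pi e^2 n_0/m}$ in CGS--Gaussian units:

```python
import numpy as np

e = 4.803e-10    # electron charge, statC
m = 9.109e-28    # electron mass, g

def omega_e(n0):
    """Electron plasma frequency in s^-1 for n0 in cm^-3."""
    return np.sqrt(4 * np.pi * e**2 * n0 / m)

w_low, w_high = omega_e(1e15), omega_e(1e17)   # ~1.8e12 and ~1.8e13 s^-1
```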
We showed in Sec.~\ref{CLOSC} that forced oscillations are
possible with frequencies less than $\omega_e$. Of course, forced
oscillations will decay as soon as the external force is switched
off. There should be a mechanism which provides the smooth
transition from the generation regime of a spherical plasmoid with
the external harmonic source having $\Omega<\omega_e$ to a regime
with self-sustained oscillations having $\Omega \sim \omega_e$.
We suggest that this mechanism could be a formation of bound
states of electrons in plasma. The motion of bound states of
electrons can result in the appearance of a superconducting state
of plasma inside a spherical plasmoid. Previously the idea that
superconductivity can exist in plasma was put forward in
Ref.~\cite{Mei84}. Without the existence of a superconducting
phase, electrons participating in radial oscillations would lose
their energy very quickly, because of various friction mechanisms,
and would recombine into the initial neutral gas.
In Sec.~\ref{WAKE} we obtained that a test electron oscillating in
plasma with the frequency $\Omega\lessapprox\omega_i$ would emit
ion acoustic waves. Thus the test electron appears to be
surrounded by a cloud of `phonons' that shield its repulsive
potential. Under some conditions the effective interaction between
the test electron and other electrons in plasma turns out to be
attractive. This process can lead to the formation of bound states
of electrons.
Note that the role of `phonons', or acoustic waves, in the
description of the stability of a spherical plasmoid was first
discussed in Ref.~\cite{VlaYak78}. However, in that work, the
phonon exchange between ions and neutral atoms was considered
within the framework of quantum theory. The applications of the
obtained results to the theory of BL were also studied in
Ref.~\cite{VlaYak78}.
The existence of the superconducting phase inside BL was
previously proposed in Ref.~\cite{Dij80Zel08}. However, in those
papers the dense plasma of BL was simply assumed to be in the
superconducting phase and the phenomenological consequences of
this assumption were described. No physical mechanism for the
formation of the superconducting state was proposed. In the
present work we suggest that the exchange of ion acoustic waves
created in spherically symmetric oscillations of electrons in
plasma results in the effective attractive potential between
electrons. This mechanism can lead to the formation of bound
states of electrons and possibly to a superconducting state inside
a spherical plasmoid.
After one switches off the low frequency external field, bound
states of electrons will be destroyed. We assume that during the
superconducting stage of the plasmoid evolution a source of
internal energy should appear. It was suggested in
Ref.~\cite{nuclfus} that nuclear reactions can serve as the energy
source of BL. The recombination of charged particles will be
compensated by the processes of ionization and creation of new
charged particles owing to this internal source of energy of BL.
\section*{Acknowledgments}
The work has been supported by the CONICYT (Chile) through
Programa Bicentenario PSD-91-2006. The author is very thankful to
Sergey Dvornikov and Timur Rashba for helpful discussions.
\section{Introduction}
Complex fluid dynamics problems are generally solved using discretization methods such as Finite Difference, Finite Element, Finite Volume (FV) or spectral element methods. However, it is usually not feasible to use these methods for applications that require to be solved almost in real time, such as on-the-spot decision making, (design) optimization or control~\cite{sartori2016reduced}. The high fidelity Computational Fluid Dynamics (CFD) tools, used for numerical simulations of the Navier--Stokes equations, are too computationally expensive for those purposes. This has motivated the development of reduced order modeling techniques. However, low degree-of-freedom models that are solely based on input-output data do not represent the physics of the underlying systems adequately and, moreover, may be sensitive to operating conditions~\cite{ravindran2000reduced}.
Therefore, techniques, such as Reduced Basis (RB) methods, have been developed that retain the essential physics and dynamics of a high fidelity model that consists of discretized Partial Differential Equations (PDEs) describing the fluid problem~\cite{rozza2007reduced,veroy2003reduced}. The basic principle of these reduced order methods is to project the (parametrized) PDEs onto a low dimensional space, called the reduced basis space, in order to construct a physics-based model that is reduced in size and, therefore, in computational cost~\cite{hesthaven2016certified, quarteroni2015reduced, benner2015survey}.
Fluid flows can be controlled in several ways. As an example, the system configuration can be manipulated by modifying the physical properties. However, in this work the focus is on controlling boundary conditions (BC) that are essential for defining flow problems.
An example of a boundary control application from the nuclear field is the coupling of thermal-hydraulic system codes, i.e. transient simulations that are based on one-dimensional models of physical transport phenomena, with three-dimensional CFD codes~\cite{bandini2015assessment,toti2018coupled}. These types of system codes are, in general, based upon the solution of six balance equations for liquid and steam that are coupled with conduction heat transfer equations and that are supplemented by a suitable set of constitutive equations~\cite{petruzzi2008thermal}.
One of the main purposes of this coupling is to speedup the CFD calculations by only including the region of interest in the CFD model and the rest of the domain in the much faster system code. However, the gain in computational time of such a coupled model is still limited by the CFD part. To overcome this burden, the system codes can be coupled with reduced order models (ROM) of the high fidelity CFD codes. For transient problems, time-dependent boundary conditions of the ROM are then to be controlled based on the BCs obtained from the systems codes.
For industrial applications, the Finite Volume discretization method is widely used by commercial software and open-source codes, as the method is robust~\cite{Eymard} and satisfies locally the conservation laws~\cite{versteeg2007, Fletcher}.
By using a RB technique, the non-homogeneous BCs are, in general, no longer satisfied at the reduced order level. Furthermore, the BCs are not explicitly present in the ROM and therefore they cannot be controlled directly~\cite{lorenzi2016pod}. In the literature~\cite{lorenzi2016pod,graham1999optimal1,kalashnikova2012efficient,Stabile2017CAIM}, different approaches to control the ROM BCs can be found, of which two common approaches are extended and compared in this work: the lifting function method and the penalty method. The aim of the lifting function method~\cite{graham1999optimal1,Stabile2017CAIM} is to homogenize the BCs of the basis functions contained in the reduced subspace, while the penalty method~\cite{lorenzi2016pod,graham1999optimal1,kalashnikova2012efficient,Sirisup} weakly enforces the BCs in the ROM with a penalty factor. A disadvantage of the penalty method is that it relies on a penalty factor that has to be tuned with a sensitivity analysis or numerical experimentation~\cite{Sirisup}. Therefore, an iterative method is presented for tuning the penalty factor, which is, to the best of the authors' knowledge, introduced here for the first time in the context of Finite-Volume based POD-Galerkin reduced order methods. The novelty of this method is that an error tolerance for the enforced BC has to be set instead of an arbitrary value for the factor. Moreover, the factor is determined automatically by iteration rather than manually via numerical experimentation.
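To illustrate the idea behind such an iterative penalty tuning (a toy sketch, not the ROM formulation of this work): the penalty factor is increased until the weakly enforced boundary value matches its target within a prescribed tolerance. Here a Dirichlet value is penalty-enforced on a small one-dimensional finite-difference Poisson problem; the setup and the update rule (multiplying the factor by 10) are illustrative assumptions:

```python
import numpy as np

# Toy problem: -u'' = 1 on [0,1], u(1) = 0 enforced strongly,
# u(0) = g enforced weakly via a penalty factor tau that is
# increased until the BC error drops below a set tolerance.
N, g, tol = 50, 1.0, 1e-4
h = 1.0 / (N - 1)
K = (np.diag(np.full(N, 2.0)) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2
f = np.ones(N)
K[-1, :] = 0.0; K[-1, -1] = 1.0; f[-1] = 0.0   # strong BC at x = 1

tau = 1.0
while True:
    Kp, fp = K.copy(), f.copy()
    Kp[0, 0] += tau            # weak (penalty) enforcement of u(0) = g
    fp[0] += tau * g
    u = np.linalg.solve(Kp, fp)
    bc_err = abs(u[0] - g)
    if bc_err < tol:           # tolerance-based stopping criterion
        break
    tau *= 10.0                # illustrative update of the penalty factor
```

As the factor grows, the penalized boundary value converges to the target, so the loop terminates once the chosen tolerance is met.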
The work is organized as follows: in Section~\ref{sec:FOM} the formulation of the full-order approximation of the PDEs is given and the methodology of the POD-based Galerkin projection is addressed in Section~\ref{sec:ROM}. In Section~\ref{sec:BCs} the two boundary control methods, the lifting function method and the iterative penalty method, are presented. In Section~\ref{sec:setup} the set-up of two numerical experiments, a lid driven cavity and a Y-junction test case, are given and the results are provided and discussed in Sections~\ref{sec:results} and~\ref{sec:discussion}, respectively. Finally, conclusions are drawn in Section~\ref{sec:conclusion} and an outlook for further developments is provided.
\section{Full order model of the incompressible Navier--Stokes equations}\label{sec:FOM}
The fluid dynamics problem is physically described by the unsteady incompressible Navier--Stokes equations. In an Eulerian framework on a domain $\Omega$ $\subset$ $\mathbb{R}$$^d$ with $d$ = 2, 3 and boundary $\Gamma$ = ($\Gamma_{D_U} $ $\cup$ $\Gamma_{N_U} $) $\cap$ ($\Gamma_{D_p}$ $\cup$ $\Gamma_{N_p}$), the governing system of equations is given by
\begin{align} \label{eq:FOM_mat}
\begin{cases}
\frac{\partial\boldsymbol{u}}{\partial t} + \nabla \cdot \left(\boldsymbol{u} \otimes \boldsymbol{u}\right)- \nabla \cdot (\nu \nabla \boldsymbol{u}) = - \nabla p + \boldsymbol{F} &\mbox{in } \Omega \times [0,T], \\
\nabla \cdot \boldsymbol{u} = 0 &\mbox{in } \Omega \times [0,T], \\
\boldsymbol{u}(\boldsymbol{x},0) = \boldsymbol{u}_0(\boldsymbol{x})&\mbox{in } \Omega \times \{0\}, \\
\boldsymbol{u} = \boldsymbol{f}(\boldsymbol{x},t) &\mbox{on } \Gamma_{D_U} \times [0,T], \\
\left(\nabla \boldsymbol{u} \right)\boldsymbol{n} = 0 &\mbox{on } \Gamma_{N_U} \times [0,T],\\
\left(p\boldsymbol{I}\right)\boldsymbol{n} = 0 &\mbox{on } \Gamma_{D_p} \times [0,T], \\
\left(\nabla p \right)\boldsymbol{n} = 0 &\mbox{on } \Gamma_{N_p} \times [0,T],
\end{cases}
\end{align}
\noindent where $\boldsymbol{u}$ = $\boldsymbol{u}(\boldsymbol{x},t)$ represents the vectorial velocity field that is evaluated at $\boldsymbol{x} \in \Omega$ and $p = p(\boldsymbol{x},t)$ is the scalar pressure field normalized by the constant fluid density $\rho$. $\nu$ is the kinematic viscosity and $\boldsymbol{F}$ is a body force term. For velocity, the (time-dependent) non-homogeneous Dirichlet boundary condition on $\Gamma_{D_U}$ is represented by $\boldsymbol{f}(\boldsymbol{x},t)$ and $\boldsymbol{u}_0(\boldsymbol{x})$ denotes the initial condition for the velocity at time $t$ = 0 s. On $\Gamma_{N_U}$ a homogeneous Neumann boundary condition for velocity is applied and $\Gamma_{D_p}$ and $\Gamma_{N_p}$ are the Dirichlet and homogeneous Neumann boundary conditions for pressure. $\boldsymbol{n}$ denotes the outward pointing normal vector on the boundary and $T$ is the total simulation time.
The equations are presented here in a general format. The problem-specific (boundary) conditions are specified in Section~\ref{sec:setup}, in which the numerical experiments are presented.
\subsection{Pressure Poisson equation} \label{sec:PPE}
Standard Galerkin projection-based reduced order models are unreliable when applied to the non-linear unsteady Navier--Stokes equations~\cite{Lassila}. Furthermore, the ROMs need to be stabilized in order to produce satisfactory results for both the velocity and pressure fields~\cite{Sirisup,rozza2007stability,caiazzo2014numerical,Akhtar,bergmann2009enablers,noack2005need}. Two different stabilization techniques are compared in~\cite{Stabile2017CAF}: the supremizer enrichment of the velocity space in order to meet the inf-sup condition (SUP) and the exploitation of a pressure Poisson equation during the projection stage (PPE). The SUP-ROM performed about an order of magnitude worse than the PPE-ROM for the velocity field but better for the pressure field. This difference can be explained by the fact that within a supremizer stabilization technique, the POD velocity space is enriched by supremizer modes that are not necessary for the correct reproduction of the velocity field. As the focus of this work is on controlling velocity boundary conditions, it is decided to use the PPE approach for stabilizing the ROM. Moreover, other approaches to simultaneously deal with velocity and pressure are the pressure stabilised Petrov--Galerkin methods~\cite{caiazzo2014numerical,baiges2014reduced,yano2014space} or assuming that velocity and pressure share the same temporal coefficients~\cite{lorenzi2016pod,bergmann2009enablers}.
For fluid problems that are solved numerically using a Finite Volume discretization technique~\cite{versteeg2007,moukalled2016finite}, often a Poisson Equation is solved for pressure as there is no dedicated equation for pressure in Equation~\ref{eq:FOM_mat}. The PPE is obtained by taking the divergence of the momentum equations and subsequently exploiting the divergence free constraint $\nabla \cdot \boldsymbol{u} = \boldsymbol{0}$. The resulting set of governing full order equations is then given by
\begin{align} \label{eq:PPE}
\begin{cases}
\frac{\partial\boldsymbol{u}}{\partial t} + \nabla \cdot \left(\boldsymbol{u} \otimes \boldsymbol{u}\right)- \nabla \cdot (\nu \nabla \boldsymbol{u}) = - \nabla p + \boldsymbol{F} &\mbox{in } \Omega \times [0,T], \\
\Delta p = - \nabla \cdot \left(\nabla \cdot \left(\boldsymbol{u} \otimes \boldsymbol{u} \right)\right) + \nabla \cdot \boldsymbol{F} &\mbox{in } \Omega \times [0,T], \\
\boldsymbol{n} \cdot \nabla p = - \boldsymbol{n} \cdot \left( \nu \nabla \times \nabla \times \boldsymbol{u} + \frac{\partial\boldsymbol{f}}{\partial t} \right) + \boldsymbol{n} \cdot \boldsymbol{F} &\mbox{on } \Gamma \times [0,T], \\
\end{cases}
\end{align}
In order to simplify the problem, no body force term, $\boldsymbol{F}$, is considered in this work. For more details on the derivation of the PPE the reader is referred to J.-G. Liu et al.~\cite{liu2010stable}. These equations are discretized with the Finite Volume method and solved using a PIMPLE~\cite{ferziger2002computational} algorithm for the pressure-velocity coupling, which is a combination of SIMPLE~\cite{patankar1983calculation} and PISO~\cite{issa1986solution}.
\section{POD-Galerkin reduced order model of the incompressible Navier--Stokes equations}\label{sec:ROM}
There exist several techniques in literature for creating a reduced basis space onto which the full order system (\ref{eq:FOM_mat}) is projected such as the Proper Orthogonal Decomposition (POD), the Proper Generalized Decomposition (PGD) and the Reduced Basis (RB) method with a greedy approach. For more details about the different methods the reader is referred to~\cite{rozza2007reduced,hesthaven2016certified,quarteroni2015reduced,chinesta2011short}.
In this work, the Proper Orthogonal Decomposition method is used to create a reduced set of basis functions, or so-called modes, governing the essential dynamics of the full order model (FOM). For this, full order solutions are collected at certain time instances, the so-called snapshots. These snapshots do not necessarily have to be collected at every time step for which the full order solution is calculated.\\
Subsequently, it is assumed that the solution of the FOM can be expressed as a linear combination of spatial modes multiplied by time-dependent coefficients. The velocity snapshots $\boldsymbol{u}(\boldsymbol{x},t_n)$ and pressure snapshots $p(\boldsymbol{x},t_n)$ at time $t_n$ are approximated, respectively, by
\begin{equation}\label{eq:approx}
\boldsymbol{u}(\boldsymbol{x},t_n) \approx \boldsymbol{u}_r(\boldsymbol{x},t_n) = \sum\limits_{i=1}^{N_r^u} \boldsymbol{\varphi}_i(\boldsymbol{x})a_{i}(t_n), \hspace{0.5cm} p(\boldsymbol{x},t_n) \approx p_r(\boldsymbol{x},t_n) = \sum\limits_{i=1}^{N_r^p} \chi_i(\boldsymbol{x})b_{i}(t_n),
\end{equation}
where $\boldsymbol{\varphi}_i$ and $\chi_i$ are the modes for velocity and pressure, respectively. \\$\boldsymbol{a}(t_n)$ = $\left[a_1(t_n), a_2(t_n), ..., a_{N_r^u}(t_n) \right]^T$ and $\boldsymbol{b}(t_n)$ = $ \left[b_1(t_n), b_2(t_n), ..., b_{N_r^p}(t_n) \right]^T$ are column vectors containing the corresponding time-dependent coefficients. $N_r^u$ is the number of velocity modes and $N_r^p$ the number of pressure modes; thus it is assumed that velocity and pressure can be approximated at reduced order level with a different number of spatial modes. Furthermore, the modes are orthonormal to each other: ${\left( \boldsymbol{\varphi}_i,\boldsymbol{\varphi}_j\right)_{L^2(\Omega)}} = \delta_{ij}$, where $\delta$ is the Kronecker delta. The $L^2$-norm is preferred for discrete numerical schemes~\cite{Stabile2017CAF,busto2020pod}, with ${\left( \cdot,\cdot\right)_{L^2(\Omega)}}$ the $L^2$-inner product of the fields over the domain $\Omega$. \\
The optimal POD basis space for velocity, $E^{POD}_{u}$ = $\left[\boldsymbol{\varphi}_1,\boldsymbol{\varphi}_2, ... ,\boldsymbol{\varphi}_{N_r^u}\right]$ is then constructed by minimizing the difference between the snapshots and their orthogonal projection onto the basis for the $L^2$-norm~\cite{quarteroni2014reduced}. This gives the following minimization problem:
\begin{equation} \label{eq:min}
E^{POD}_{u} = \textrm{arg}\underset{\boldsymbol{\varphi}_1, ... ,\boldsymbol{\varphi}_{N_r^u}}{\textrm{min}} \frac{1}{N_s^u}\sum\limits_{n=1}^{N_s^u} \left\Vert \boldsymbol{u}(\boldsymbol{x},t_n) - \sum\limits_{i=1}^{N_r^u} \left( \boldsymbol{u}(\boldsymbol{x},t_n), \boldsymbol{\varphi_i} (\boldsymbol{x}) \right)_{L^2(\Omega)} \boldsymbol{\varphi}_i(\boldsymbol{x})\right\Vert_{L^2(\Omega)}^2,
\end{equation}
\noindent where $N_s^u$ is the number of collected velocity snapshots and $N_r^u$ (with $1\leq N_r^u$ $\leq$ $N_s^u$) denotes the dimension of the POD space $E^{POD}_{u}$. The POD modes are then obtained by solving the following eigenvalue problem on the snapshots~\cite{Stabile2017CAIM, Lassila,Stabile2017CAF,sirovich1987turbulence}:
\begin{equation} \label{eq:ev}
\boldsymbol{C}\boldsymbol{Q}=\boldsymbol{Q}\boldsymbol{\lambda},
\end{equation}
\noindent where $C_{ij}$ = ${\left( \boldsymbol{u}(\boldsymbol{x},t_i),\boldsymbol{u}(\boldsymbol{x},t_j)\right)_{L^2(\Omega)}}$ for $i$,$j$ = 1, ..., $N_s^u$ is the correlation matrix, $\boldsymbol{Q}$ $\in \mathbb{R}^{N_s^u \times N_s^u}$ is a square matrix of eigenvectors and $\boldsymbol{\lambda}$ $\in \mathbb{R}^{N_s^u \times N_s^u}$ is a diagonal matrix containing the eigenvalues. The POD modes, $\boldsymbol{\varphi}_i$, are then constructed as follows
\begin{equation} \label{eq:POD}
\boldsymbol{\varphi}_i (\boldsymbol{x}) = \frac{1}{N_s^u\sqrt{\lambda_i}} \sum\limits_{n=1}^{N_s^u} \boldsymbol{u}(\boldsymbol{x},t_n) Q_{in}\text{\hspace{0.5cm} for \hspace{0.1cm}}i = 1,...,N_r^u,
\end{equation}
of which the most energetic (dominant) modes are selected. The procedure is the same for obtaining the pressure modes.\\
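The eigenvalue problem of Equation~\ref{eq:ev} and the mode construction of Equation~\ref{eq:POD} can be sketched in a few lines of NumPy. This is an illustrative sketch, not the ITHACA-FV implementation: the $L^2$-inner product is approximated with cell-volume weights, and the scaling is chosen such that the returned modes come out orthonormal in that discrete inner product.

```python
import numpy as np

def pod_modes(U, W, N_r):
    """POD modes from a snapshot matrix.

    U  : (N_cells, N_s) matrix, one velocity snapshot per column
    W  : (N_cells,) cell-volume weights approximating the L2 inner product
    N_r: number of retained modes (1 <= N_r <= N_s)
    """
    # correlation matrix C_ij = (u(t_i), u(t_j))_{L2(Omega)}
    C = U.T @ (W[:, None] * U)
    # eigenvalue problem C Q = Q lambda; eigh returns ascending order,
    # so reverse to get the most energetic eigenpairs first
    lam, Q = np.linalg.eigh(C)
    lam, Q = lam[::-1], Q[:, ::-1]
    # phi_i = sum_n u(t_n) Q_ni / sqrt(lambda_i): orthonormal in the
    # discrete weighted inner product
    Phi = U @ Q[:, :N_r] / np.sqrt(lam[:N_r])
    return Phi, lam
```

The most energetic modes correspond to the largest eigenvalues; the same routine applied to the pressure snapshots yields the pressure modes.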
To obtain a reduced order model, the POD is combined with the Galerkin projection, for which the full order system of equations (Equation~\ref{eq:PPE}) is projected onto the reduced POD basis space. For more details about POD and Galerkin projection methods the reader is referred to~\cite{Stabile2017CAIM,Stabile2017CAF,georgaka2018parametric}.
The following reduced system of momentum equations is then obtained
\begin{equation}\label{eq:ROM}
\boldsymbol{M_r} \dot{\boldsymbol{a}} + \boldsymbol{C_r} (\boldsymbol{a}) \boldsymbol{a} - \nu \boldsymbol{A_r} \boldsymbol{a} + \boldsymbol{B_r} \boldsymbol{b} = 0,
\end{equation}
\noindent where the ``over-dot'' indicates the time derivative and
\begin{equation}\label{eq:ROM_matrices}
\begin{split}
M_{r_{ij}} = {\left( \boldsymbol{\varphi}_i, \boldsymbol{\varphi}_j \right)_{L^{2}(\Omega)}}\text{\hspace{0.5cm} for \hspace{0.1cm}}i,j = 1,...,N_r^u, \\
A_{r_{ij}} = {\left( \boldsymbol{\varphi}_i, \Delta \boldsymbol{\varphi}_j \right)_{L^{2}(\Omega)}}\text{\hspace{0.5cm} for \hspace{0.1cm}}i,j = 1,...,N_r^u, \\
B_{r_{ij}} = {\left( \boldsymbol{\varphi}_i, \nabla \chi_j \right)_{L^{2}(\Omega)}}\text{\hspace{0.5cm} for \hspace{0.1cm}}i = 1,...,N_r^u \text{\hspace{0.1cm} and \hspace{0.1cm}}j = 1,...,N_r^p.
\end{split}
\end{equation}
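At the discrete level each of these inner products reduces to a small weighted matrix product. A minimal sketch, assuming the FOM discretization provides an assembled operator matrix (e.g. the Laplacian for $\boldsymbol{A_r}$) and cell-volume weights; the names are illustrative:

```python
import numpy as np

def project_operator(Phi_left, W, Op, Phi_right):
    """Reduced matrix with entries (phi_i, Op phi_j)_{L2(Omega)},
    approximated as Phi_left^T diag(W) Op Phi_right.

    Phi_left : (N_cells, m) left basis (e.g. velocity modes)
    W        : (N_cells,) cell-volume weights
    Op       : (N_cells, N_cells) discrete operator; the identity yields
               M_r, the assembled Laplacian yields A_r, and for B_r the
               product Op @ Phi_right stands for the discrete gradient
               applied to the pressure modes
    """
    return Phi_left.T @ (W[:, None] * (Op @ Phi_right))
```

With orthonormal modes, projecting the identity operator recovers the identity matrix, which is a convenient sanity check on the weights.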
These reduced matrices can be precomputed during an offline stage except for the non-linear term $\boldsymbol{C}_r$, which is given by
\begin{equation}\label{eq:C_matrix}
\begin{split}
C_{r_{ijk}} = {\left( \boldsymbol{\varphi}_i, \nabla \cdot (\boldsymbol{\varphi}_j\otimes\boldsymbol{\varphi}_k) \right)_{L^{2}(\Omega)}}\text{\hspace{0.5cm} for \hspace{0.1cm}}i,j,k = 1,...,N_r^u.
\end{split}
\end{equation}
This non-linear term is stored as a third order tensor~\cite{Stabile2017CAIM,quarteroni2007numerical} and the contribution of the convective term to the residual of Eq.~\ref{eq:ROM}, $R$, is evaluated at each iteration during the ROM simulations, or so-called online stage, as
\begin{equation}\label{eq:res}
R_i = \boldsymbol{a}^T C_{r_{i\bullet\bullet}}\boldsymbol{a}.
\end{equation}
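In practice the contraction of Equation~\ref{eq:res} is a double tensor-vector product; a one-line NumPy sketch (illustrative, with the tensor already assembled offline):

```python
import numpy as np

def convective_residual(C_r, a):
    """R_i = a^T C_{r,i..} a for the stored third-order convective tensor.

    C_r : (N_r, N_r, N_r) with C_r[i, j, k] = (phi_i, div(phi_j x phi_k))
    a   : (N_r,) current vector of velocity coefficients
    """
    # contract the last two tensor indices with the coefficient vector
    return np.einsum('ijk,j,k->i', C_r, a, a)
```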
The dimension of the tensor $\boldsymbol{C}_r$ (Equation~\ref{eq:C_matrix}) grows with the cube of the number of velocity modes; this approach may therefore lead to high storage costs, especially when a large number of basis functions is employed. Other approaches, such as EIM-DEIM~\cite{xiao2014non,barrault2004empirical} or Gappy-POD~\cite{carlberg2013gnat}, may be more affordable~\cite{Stabile2017CAF}.
As the pressure gradient term is present in the momentum equation the system is also coupled at reduced order level~\cite{Stabile2017CAIM}. The projection of the PPE leads to the following reduced system
\begin{equation}\label{eq:ROM2}
\boldsymbol{D_r} \boldsymbol{b} + \boldsymbol{G_r}(\boldsymbol{a}) \boldsymbol{a} - \nu \boldsymbol{N_r} \boldsymbol{a} - \boldsymbol{T_r} \dot{\boldsymbol{a}} = 0,
\end{equation}
\noindent where
\begin{equation}\label{eq:ROM_matrices2}
\begin{split}
D_{r_{ij}} = {\left( \nabla \chi_i, \nabla \chi_j \right)_{L^{2}(\Omega)}} \text{\hspace{0.5cm} for \hspace{0.1cm}}i,j = 1,...,N_r^p, \\
G_{r_{ijk}} = {\left( \nabla \chi_i, \nabla \cdot (\boldsymbol{\varphi}_j \otimes \boldsymbol{\varphi}_k ) \right)_{L^{2}(\Omega)}} \text{\hspace{0.5cm} for \hspace{0.1cm}}i = 1,...,N_r^p \text{\hspace{0.1cm} and \hspace{0.1cm}}j,k = 1,...,N_r^u, \\
N_{r_{ij}} = {\left( \boldsymbol{n} \times \nabla \chi_i, \nabla \times \boldsymbol{\varphi}_j \right)_{L^{2}(\Gamma)}}\text{\hspace{0.5cm} for \hspace{0.1cm}}i = 1,...,N_r^p \text{\hspace{0.1cm} and \hspace{0.1cm}}j = 1,...,N_r^u,\\
T_{r_{ij}} = {\left( \chi_i, \boldsymbol{n} \cdot \boldsymbol{\varphi}_j \right)_{L^{2}(\Gamma)}} \text{\hspace{0.5cm} for \hspace{0.1cm}}i = 1,...,N_r^p \text{\hspace{0.1cm} and \hspace{0.1cm}}j = 1,...,N_r^u,
\end{split}
\end{equation}
\noindent where the last two terms on the right hand side of Equation~\ref{eq:PPE} are projected on the boundary $\Gamma$.
Following the same strategy as in Equation~\ref{eq:res}, the non-linear term in Equation~\ref{eq:ROM2} is evaluated by storing the third order tensor $\boldsymbol{G_r}$. Equation~\ref{eq:ROM_matrices2} contains only first order derivatives: after the PPE is projected onto the POD space spanned by the pressure modes, the Laplacian term is integrated by parts and the pressure boundary condition is exploited. In that way, the numerical differentiation error can be reduced~\cite{Stabile2017CAF}.
\subsection{Initial conditions} \label{sec:IC}
The initial conditions (IC) for the reduced system of Ordinary Differential Equations (Equations~\ref{eq:ROM} and~\ref{eq:ROM2}), are obtained by performing a Galerkin projection of the full order initial conditions onto the POD basis spaces as follows
\begin{equation}\label{eq:ROM_IC}
\begin{split}
a_i(0) = \left(\boldsymbol{\varphi}_i,\boldsymbol{u}(0)\right)_{L^{2}(\Omega)}, \;\;\;\;\; b_i(0) = \left(\chi_i,p(0)\right)_{L^{2}(\Omega)},
\end{split}
\end{equation}
\noindent for velocity and pressure, respectively. It is important to note that the reduced system of equations is coupled and needs to be solved iteratively. Moreover, the pressure is only defined up to an arbitrary constant, as in the FOM. Therefore, next to an initial condition for velocity, an initial guess for pressure is required for the system to converge more easily and to ensure the consistency between the FOM and the ROM~\cite{busto2020pod}.
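The projection of Equation~\ref{eq:ROM_IC} is again a weighted inner product; a minimal sketch (the cell-volume weights approximating the $L^2$-inner product are an assumption of the discrete setting):

```python
import numpy as np

def project_initial_condition(Phi, W, x0):
    """Coefficients a_i(0) = (phi_i, u(0))_{L2(Omega)}.

    Phi : (N_cells, N_r) modes, W : (N_cells,) weights, x0 : initial field.
    The same call with the pressure modes yields b_i(0).
    """
    return Phi.T @ (W * x0)
```

For modes orthonormal in the discrete inner product, projecting a field that lies in the span of the basis recovers its coefficients exactly.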
In the online stage, the reduced system of equations (Equations~\ref{eq:ROM} and~\ref{eq:ROM2}) is solved for the velocity and pressure coefficients in the time period [$t_{1}$, $t_{\textrm{online}}$], where $t_{\textrm{online}}$ is the final simulation time.
\subsection{Relative error} \label{sec:error}
Three types of fields are considered: the full order fields, $X_{FOM}$, the projected fields, $X_r$, which are obtained by the $L^2$-projection of the snapshots onto the POD bases and the prediction fields obtained by solving the ROM, $X_{ROM}$. For every time instance, $t_n$, the basis projection error, $\|\hat{e}\|_{L^2}(\Omega)$, is given by
\begin{equation}\label{l2_prediction}
\|\hat{e}\|_{L^2(\Omega)}(t_n) = \frac{\|X_{FOM}(t_n)-X_{r}(t_n)\|_{L^{2}(\Omega)}}{\|X_{FOM}(t_n) \|_{L^{2}(\Omega)}},
\end{equation}
\noindent and the prediction error $\|e\|_{L^2}$, is determined by
\begin{equation}\label{l2_projection}
\|e\|_{L^2(\Omega)}(t_n) = \frac{\|X_{FOM}(t_n)-X_{ROM}(t_n)\|_{L^{2}(\Omega)}}{\|X_{FOM}(t_n) \|_{L^{2}(\Omega)}},
\end{equation}
\noindent where $X$ is either representing the velocity or pressure fields.
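Both error measures are plain relative weighted norms; a short sketch with cell-volume weights standing in for the $L^2$-integral:

```python
import numpy as np

def relative_l2_error(X_fom, X_approx, W):
    """||X_FOM - X||_{L2} / ||X_FOM||_{L2}, where X is either the
    projected field (basis projection error) or the ROM prediction."""
    num = np.sqrt(np.sum(W * (X_fom - X_approx) ** 2))
    den = np.sqrt(np.sum(W * X_fom ** 2))
    return num / den
```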
\section{Non-homogeneous (time-dependent) Dirichlet boundary conditions of the incompressible Navier--Stokes equations} \label{sec:BCs}
In a POD-based ROM, the non-homogeneous BCs are, in general, not satisfied by the ROM, since the basis functions, and hence the boundary values they carry, are linear combinations of the snapshots. Furthermore, the BCs are not explicitly present in the reduced system and therefore they cannot be controlled directly~\cite{lorenzi2016pod}. Two common approaches are presented in this section for handling the BCs: the lifting function method and the penalty method~\cite{graham1999optimal1}. The aim of the lifting function method is to have homogeneous POD modes and to enforce the BCs by means of a properly chosen lifting function in the ROM. The penalty method, on the other hand, enforces the BCs in the ROM with a penalty factor. In this work only the velocity BCs are controlled with the two methods.
\subsection{The lifting function method}\label{sec:control}
The lifting function method for the non-homogeneous boundary conditions is often used in the continuous Galerkin finite-element setting to reformulate a boundary control problem into a distributed one~\cite{chirco2019optimal,demkowicz2006computing,bornia2013distributed}. The method imposes the non-homogeneous (Dirichlet) conditions to the problem through lifting. This is done by subtracting the lifting function from the unknown variable in the original PDE problem, solving for the modified variable and adding the lifting function to the solution~\cite{ullmann2014pod}.
In a similar way, this method is used to impose non-homogeneous (Dirichlet) boundary conditions in reduced order models for which the lifted fields are projected onto the reduced bases spanned by the POD modes~\cite{fick2018stabilized}.
In this work, the velocity snapshots are made homogeneous by subtracting suitable lifting functions from all of them on which then the POD is performed. The result is a set of velocity modes that individually fulfill the homogeneous BCs as they are linear combinations of the modified velocity snapshots. The lifting functions, which fulfill the original non-homogeneous boundary conditions, are then added to a linear combination of POD basis functions. As a result, the non-homogeneous Dirichlet boundary conditions are included in the reduced basis space spanned by the POD modes and the lifting functions.
This lifting function method is also known as the ``control function method'' in the literature~\cite{graham1999optimal1,Akhtar,lasiecka1984ritz} for PDE problems whose Dirichlet conditions can be parametrized with a single time-dependent coefficient~\cite{ullmann2014pod}. This is the type of problem that is presented in this work. The method is generalized in~\cite{gunzburger2007reduced} for generic functions with multiple parameters at distinct boundary sections.
The functions to be chosen are system-specific and they have to satisfy the divergence free constraint in order to retain the divergence-free property of the snapshots~\cite{Stabile2017CAIM}.
One way to generate a lifting function, $\tilde{\boldsymbol{\zeta}}_{c}(\boldsymbol{x})$, is by solving a problem as close as possible to the full order problem, where the boundary of interest is set to its value and everywhere else to a homogeneous BC. There are several other ways to compute the lifting function. For instance, the snapshot average can be used, although this does not always lead to a discretely divergence-free function. Alternatively, the solution of the stationary version of the considered problem can be computed~\cite{burkardt2006pod}. Two other common approaches are solving a non-homogeneous Stokes problem~\cite{fick2018stabilized,girault1999analysis,fonn2019fast} or solving a potential flow problem~\cite{eftang2010evaluation,HijaziStabileMolaRozza2020a}.
As one of the characteristics of the POD modes is that they are orthonormal, the lifting functions are normalized as follows
\begin{equation}\label{eq:norm_lift}
\boldsymbol{\zeta}_c(\boldsymbol{x}) = \frac{\tilde{\boldsymbol{\zeta}}_c(\boldsymbol{x})}{\|\tilde{\boldsymbol{\zeta}}_{c}(\boldsymbol{x})\|_{L^{2}(\Omega)}},
\end{equation}
before subtracting them from all snapshots and applying POD. The snapshots are then modified accordingly
\begin{equation}\label{eq:control}
\boldsymbol{u}^\prime(\boldsymbol{x},t)=\boldsymbol{u}(\boldsymbol{x},t)-\sum_{j=1}^{N_{BC}}\boldsymbol{\zeta}_{c_j}(\boldsymbol{x})u_{BC_j}(t) ,
\end{equation}
\noindent where $N_{BC}$ is the number of non-homogeneous BCs, $\boldsymbol{\zeta}_{c}(\boldsymbol{x})$ the normalized lifting functions and $u_{BC}$ is the normalized value of the corresponding Dirichlet boundary condition.
The POD modes, $\boldsymbol{\varphi}^\prime_i$, that satisfy the homogeneous boundary conditions are obtained by solving an eigenvalue problem similar to Equation~\ref{eq:ev} on the homogenized snapshots $\boldsymbol{u}^\prime(\boldsymbol{x},t)$. The control functions are then added as additional modes to the reduced velocity basis
\begin{equation}\label{eq:approx_phi_lifted}
E^{\prime}_{u} = \left[\boldsymbol{\zeta}_{c_1}, ...,\boldsymbol{\zeta}_{c_{N_{BC}}}, \boldsymbol{\varphi}^\prime_1, ..., \boldsymbol{\varphi}^\prime_{N_r^u} \right].
\end{equation}
Consequently, the velocity field at time $t_n$ is approximated by
\begin{equation}\label{eq:approx_temp}
\boldsymbol{u}_r(\boldsymbol{x},t_n)=\sum_{j=1}^{N_{BC}}\boldsymbol{\zeta}_{c_j}(\boldsymbol{x})u_{BC_j}(t_n) + \sum_{i=1}^{N_r^u}\boldsymbol{\varphi}^\prime_i(\boldsymbol{x})a_i(t_n),
\end{equation}
which satisfies the boundary conditions of the problem. $u_{BC}$ can be time-dependent. The Dirichlet boundary condition can be parametrized by assigning a new value to $u_{BC}$ in Equation~\ref{eq:approx_temp}. In other words, the lifting functions can be scaled by a factor.
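The homogenization of Equation~\ref{eq:control} and the reconstruction of Equation~\ref{eq:approx_temp} can be sketched directly in matrix form (illustrative arrays; the lifting functions are assumed already normalized as in Equation~\ref{eq:norm_lift}):

```python
import numpy as np

def homogenize_snapshots(U, Z, u_bc):
    """u'(x, t_n) = u(x, t_n) - sum_j zeta_j(x) u_BC_j(t_n).

    U    : (N_cells, N_s) velocity snapshots
    Z    : (N_cells, N_bc) normalized lifting functions zeta_c
    u_bc : (N_bc, N_s) normalized boundary values per snapshot
    """
    return U - Z @ u_bc

def reconstruct_velocity(Z, u_bc_t, Phi, a_t):
    """u_r(t) = sum_j zeta_j u_BC_j(t) + sum_i phi'_i a_i(t); scaling
    u_bc_t re-parametrizes the Dirichlet condition."""
    return Z @ u_bc_t + Phi @ a_t
```

Subtracting the lifted boundary contribution and adding it back with the same coefficients is an exact round trip, which is the property the method relies on.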
For more details on the lifting function the reader is referred to~\cite{Stabile2017CAIM,georgaka2018parametric}. The overall algorithm for the lifting function method is given below.
\clearpage
\begin{tabularx}{0.95\textwidth}{X}
\toprule \textbf{Algorithm 1: lifting function method} \\\midrule
\hspace{0.0cm}\textbf{OFFLINE PHASE}\\
\hspace{0.0cm}\textbf{Solve full order model:}\\
\hspace{0.0cm}(1) Generate snapshots over a time period [0, $T$] by solving the full order problem of Eq.~\ref{eq:FOM_mat};\\
\hspace{0.0cm}\textbf{Obtain the lifting functions:}\\
\hspace{0.0cm}(2) Generate the lifting functions by solving a flow problem: \\
\hspace{0.6cm} \textbf{for} $i$ = 1 to $N_{BC}$ \textbf{do} \\
\hspace{1.3cm} \textbf{for} $j$ = 1 to $N_{BC}$ \textbf{do} \\
\hspace{2.0cm} \textbf{if} $i$ = $j$ \textbf{then}
\\ \hspace{2.7cm}$\boldsymbol{u}$$\vert$$\Gamma_{D_j}$ = 1 \\
\hspace{2.0cm} \textbf{else} \\
\hspace{2.7cm}$\boldsymbol{u}$$\vert$$\Gamma_{D_j}$ = 0\\
\hspace{2.0cm} \textbf{end if}\\
\hspace{1.3cm} \textbf{end for}\\
\hspace{1.3cm} Solve a flow problem for $\tilde{\boldsymbol{\zeta}}_{c_i}$\\
\hspace{0.6cm} \textbf{end} \textbf{for};\\
\hspace{0.0cm}(3) Normalize the lifting functions to obtain $\boldsymbol{\zeta}_{c}$ as in Eq.~\ref{eq:norm_lift};\\
\hspace{0.0cm}(4) Subtract the normalized lifting functions from the velocity snapshots as in Eq.~\ref{eq:control};\\
\hspace{0.0cm}\textbf{Perform POD:}\\
\hspace{0.0cm}(5) Retrieve the correlation matrix $\boldsymbol{C}$ from the solution snapshots;\\
\hspace{0.0cm}(6) Solve the eigenvalue problem of Eq.~\ref{eq:ev} to obtain the POD modes using Eq.~\ref{eq:POD};\\
\hspace{0.0cm}(7) Add the normalized lifting functions, $\boldsymbol{\zeta}_{c}$, as additional modes to the set of velocity POD modes $\boldsymbol{\varphi}$ according to Eq.~\ref{eq:approx_phi_lifted};\\
\hspace{0.0cm}\textbf{Projection:}\\
\hspace{0.0cm}(8) Project the discretized full order system onto the obtained reduced bases as done in Eq.~\ref{eq:ROM}-~\ref{eq:ROM_matrices2};\\
\vspace{0.1cm}
\hspace{0.0cm}\textbf{ONLINE PHASE}\\
\hspace{0.0cm}\textbf{Solve reduced order model:}\\
\hspace{0.0cm}(9) Project the initial fields for the parametrized BC onto the POD bases to get the initial condition/guesses for the ROM according to Eq.~\ref{eq:ROM_IC};\\
\hspace{0.0cm}(10) Solve the reduced order problem of Eq.~\ref{eq:ROM} with the reduced Poisson equation, Eq.~\ref{eq:ROM2}, for pressure in the time period [$t_{1}$, $t_{\textrm{online}}$]; \\
\hspace{0.0cm}(11) Reconstruct the full order fields from the obtained coefficients using Eq.~\ref{eq:approx_temp};
\\\bottomrule
\end{tabularx}
\subsection{The iterative penalty method}
The penalty method was originally proposed in the context of finite element methods~\cite{lions1973non,babuvska1973finite}. The method transforms a strong non-homogeneous Dirichlet boundary condition into a weak Neumann boundary condition by the means of a small parameter whose inverse is called the penalty factor~\cite{placzek2008hybrid}. Thus, the method uses a penalty parameter to weakly impose the boundary conditions. In the POD-Galerkin reduced order modeling setting, the penalty method has been first introduced by Sirisup and Karniadakis~\cite{Sirisup} for the enforcement of boundary conditions.
For the penalty method, no modification of the snapshots is needed as the velocity Dirichlet BCs are directly enforced as constraints in the reduced system in the following way:
\begin{equation}\label{eq:pen_ROM_min}
\boldsymbol{M_r} \dot{\boldsymbol{a}} + \boldsymbol{C_r} (\boldsymbol{a}) \boldsymbol{a} - \nu \boldsymbol{A_r} \boldsymbol{a} + \boldsymbol{B_r} \boldsymbol{b} + \sum_{l=1}^{N_{BC}} \tau_l\left( \boldsymbol{P1}_l\boldsymbol{a} - u_{BC_l}(t) \boldsymbol{P2}_l \right) = 0 ,
\end{equation}
\noindent where $\tau$ is the penalty factor~\cite{Sirisup} and the additional terms with respect to Equation~\ref{eq:ROM} are projected on the boundary as follows
\begin{equation}\label{eq:D}
\begin{split}
P1_{lij} = \left( \boldsymbol{\varphi}_i, \boldsymbol{\varphi}_j \right)_{L^{2}(\Gamma_l)} \text{\hspace{0.5cm} for \hspace{0.1cm}}l = 1,...,N_{BC} \text{\hspace{0.1cm} and \hspace{0.1cm}} i,j = 1,...,N_r^u,\\
P2_{li} = \left( \boldsymbol{\varphi}_i, \boldsymbol{\phi} \right)_{L^{2}(\Gamma_l)} \text{\hspace{0.5cm} for \hspace{0.1cm}}l = 1,...,N_{BC} \text{\hspace{0.1cm} and \hspace{0.1cm}} i = 1,...,N_r^u,
\end{split}
\end{equation}
where $\boldsymbol{\phi}$ is a unit field. The penalized problem is formulated at reduced order level and, therefore, the penalty method does not depend on the full order snapshots.
In order to have an asymptotically stable solution, the penalty factors $\tau$ should be positive. If $\tau \rightarrow \infty$ the solution generally converges to a true optimal solution of the original unpenalized problem~\cite{hou1999numerical}. Nevertheless, a strong imposition would then be approached and the ROM becomes ill-conditioned~\cite{lorenzi2016pod,epshteyn2007estimation}. Therefore, the penalty factor needs to be chosen above a threshold value for which the method is stable and converges~\cite{epshteyn2007estimation,dai2013analysis}, but otherwise as small as possible to obtain a numerically stable solution. This choice is usually made by numerical experimentation~\cite{lorenzi2016pod,graham1999optimal1,kalashnikova2012efficient, bizon2012reduced}.
Several techniques exist in the literature to optimize this numerical experimentation. Kelley~\cite[page~214]{kelley1962method} used a simple iteration scheme to replace the trial-and-error process: at the end of each iteration the penalty value is multiplied by the absolute value of the ratio between the constraint violation and a preassigned tolerance. The penalty factor obtained in this way is optimal in the sense that it perturbs the original problem by a minimum for the given tolerance~\cite{huettelminimum}.
In this work the experimentation is optimized using a first-order iterative optimization scheme~\cite{leitmann1962optimization}, based on the iteration scheme described in the previous paragraph, to determine the factors. The penalty factors, $\tau$, are updated at each iteration $k$ as follows
\begin{equation}\label{eq:tau}
\tau_l^{k+1}(t_n) = \tau_l^k (t_n) \frac{\left|r_l^k(t_n)\right|}{\epsilon} = \tau_l^k (t_n) \frac{\left|\tilde{u}_{BC_l}^k(t_n)-u_{BC_l}(t_n)\right|}{\epsilon} \text{\hspace{0.5cm} for \hspace{0.1cm}}l = 1,...,N_{BC},
\end{equation}
with $r^k(t_n)$ the residual between $\tilde{u}_{BC}^k$, the value of a certain boundary at the $k^{th}$ iteration, and $u_{BC}$, the enforced boundary condition, at an evaluated time $t_n$. $\tilde{u}_{BC}^k$ is obtained during the online phase by reconstructing the boundary. $\epsilon$ $>$ 0 is the given error tolerance for the residual which has to be set. There is no single approach that can be considered the best for choosing $\epsilon$, as the preferred tolerance depends on the problem and on both physical and geometrical parameters. The eigenvalue truncation error of the POD modes gives a good indication for the value of $\epsilon$. The penalty method is therefore no longer based on an arbitrary value for the penalty factor.
As long as $\left|\tilde{u}_{BC_l}^k(t_n)-u_{BC_l}(t_n)\right|$ $>$ $\epsilon$ the penalty factors grow every update and converge to the smallest penalty factors that satisfy the required tolerance. Thus, if the initial guess for the factor is below the minimum value for $\tau$ for which the boundary condition is enforced in the ROM, the factor is approached from below using this method. For a time-dependent problem it is not needed to determine a penalty factor for all time steps $N_t$. Often the factor determined after the first couple of time steps, $N_{\tau}$, can be used for the whole ROM solution.
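The update of Equation~\ref{eq:tau} can be wrapped in a small fixed-point loop. In this sketch, \texttt{solve\_rom\_boundary} is a hypothetical callback that performs one online ROM solve with the given factor and returns the reconstructed boundary value $\tilde{u}_{BC}$:

```python
def tune_penalty_factor(solve_rom_boundary, tau0, u_bc, eps, max_iter=50):
    """Iterate tau^{k+1} = tau^k |u~_BC^k - u_BC| / eps until the
    reconstructed boundary value meets the tolerance eps."""
    tau = tau0
    for _ in range(max_iter):
        u_tilde = solve_rom_boundary(tau)
        residual = abs(u_tilde - u_bc)
        if residual <= eps:
            break
        # grow (or shrink) the factor by the constraint violation ratio
        tau *= residual / eps
    return tau
```

If the initial guess lies below the stable threshold, the factor grows towards the smallest value that satisfies the required tolerance.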
The step-by-step demonstration of the iterative penalty method is given below by Algorithm 2.
\begin{tabularx}{0.95\textwidth}{X}
\toprule \textbf{Algorithm 2: Iterative penalty method} \\\midrule
\hspace{0.0cm}\textbf{OFFLINE PHASE}\\
\hspace{0.0cm}\textbf{Solve full order model:}\\
\hspace{0.0cm}(1) Generate snapshots over a time period [0, $T$] by solving the full order problem of Eq.~\ref{eq:FOM_mat};\\
\hspace{0.0cm}\textbf{Perform POD:}\\
\hspace{0.0cm}(2) Retrieve the correlation matrix $\boldsymbol{C}$ from the solutions;\\
\hspace{0.0cm}(3) Solve the eigenvalue problem of Eq.~\ref{eq:ev} to obtain the POD modes using Eq.~\ref{eq:POD};\\
\hspace{0.0cm}\textbf{Impose BCs with penalty method:}\\
\hspace{0.0cm}(4) Project the modes on the reduced basis at the boundary of the domain to determine $\boldsymbol{P1}$ and $\boldsymbol{P2}$ for each non-homogeneous Dirichlet boundary condition as in Eq.~\ref{eq:D};\\
\hspace{0.0cm}(5) Solve iteratively for the penalty factor using Eq.~\ref{eq:tau}:\\
\hspace{0.6cm}\textbf{for} $i = 1$ to $N_{\tau}$ \textbf{do} \\
\hspace{1.3cm}\textbf{while} $\left|\tilde{u}_{BC_l}^k(t_i)-u_{BC_l}(t_i)\right| > \epsilon$ \textbf{do}\\
\hspace{2.0cm} $\tau_l^{k+1}(t_i)= \tau_l^k (t_i) \frac{\left|\tilde{u}_{BC_l}^k(t_i)-u_{BC_l}(t_i)\right|}{\epsilon}$ \text{\hspace{0.5cm}}\\
\hspace{1.3cm}\textbf{end while}\\
\hspace{0.6cm}\textbf{end for};\\
\hspace{0.0cm}\textbf{Projection:}\\
\hspace{0.0cm}(6) Project the discretized full order system onto the obtained reduced bases as done in Eq.~\ref{eq:ROM}-~\ref{eq:ROM_matrices2};\\
\vspace{0.1cm}
\hspace{0.0cm}\textbf{ONLINE PHASE}\\
\hspace{0.0cm}\textbf{Solve reduced order model:}\\
\hspace{0.0cm}(7) Project the initial fields for the parametrized BC onto the POD bases to get the initial condition/guesses for the ROM using Eq.~\ref{eq:ROM_IC};\\
\hspace{0.0cm}(8) Solve the reduced order problem of Eq.~\ref{eq:ROM} with the reduced Poisson equation, Eq.~\ref{eq:ROM2}, for pressure in the time period [$t_{1}$, $t_{\textrm{online}}$]; \\
\hspace{0.0cm}(9) Reconstruct the full order fields from the obtained coefficients using Eq.~\ref{eq:approx};
\\\bottomrule
\end{tabularx}\\
It is important to note that the penalty factor can affect the number of iterations needed to solve the reduced system and therefore the convergence and cost of the reduced order model~\cite{nour1987note}.
\section{Numerical simulation tests} \label{sec:setup}
In this section the set-up of the two cases for which the boundary control methods, the lifting function method and the iterative penalty method, are tested is described. The first test case is the classical lid-driven cavity benchmark problem and the second one is a Y-junction with two inlets and one outlet channel whose time-dependent inlet boundary conditions are controlled.
\subsection{Lid-driven cavity flow problem}
The first test case consists of a lid-driven cavity problem. The simulation is carried out on a two-dimensional square domain of length $L$ = 0.1 m on which a (200 $\times$ 200) structured mesh with quadrilateral cells is constructed. The boundary is subdivided into two different parts $\Gamma$ = $\Gamma_{LID}$ $\cup$ $\Gamma_w$ and the boundary conditions for velocity and pressure are set according to Figure~\ref{fig:LID_setup}. The pressure reference value is set to 0 m$^2$/s$^2$ at coordinate (0,0). At the top of the cavity a constant uniform and horizontal velocity equal to $\boldsymbol{u}$ = ($U_{LID}$,0) = (1,0) m/s is prescribed. A no-slip BC is applied at the walls, $\Gamma_w$. The kinematic viscosity is equal to $\nu$ = 1 $\cdot$ \num{e-4} m$^2$/s and the corresponding Reynolds number is 1000, meaning that the flow is considered laminar.
\begin{figure}[h!]
\centering
\captionsetup{justification=centering}
\includegraphics[width=8.0cm]{FiguresFinal/LID_setup.pdf}
\caption{Sketch of the geometry of the 2D square cavity with moving top lid including boundary conditions.}
\label{fig:LID_setup}
\end{figure}
The unsteady full order equations are iteratively solved by the FV method with the $pimpleFoam$ solver of the open source C++ library OpenFOAM 6~\cite{Jasak}. The PIMPLE algorithm is used for the pressure-velocity coupling~\cite{ferziger2002computational}. For the full order simulations, the spatial discretization of all terms is performed with a central differencing scheme (linear). The temporal discretization is treated using a second order backward differencing scheme (BDF2). A constant time step of $\Delta t$ = 5 $\cdot$ \num{e-4} s has been applied and the total simulation time is 10 s. Snapshots of the velocity and pressure fields are collected every 0.01 s, resulting in a total of 1001 snapshots (including 1 for the initial condition). The initial condition field with $U_{LID}$ = 1 m/s is used as a lifting function.
For this test case the same boundary conditions are applied in the ROM as in the FOM for which the snapshots are collected. The temporal discretization of the ROM is performed with a first order Newton's method.
POD, projection of the full order solution on the reduced subspace and the reduced order simulations are all carried out with ITHACA-FV, a C++ library based on the Finite Volume solver OpenFOAM. For more details on the ITHACA-FV code, the reader is referred to~\cite{Stabile2017CAIM,Stabile2017CAF,ITHACA}.
\subsection{Y-junction flow problem}
Junctions are often used for the combination or separation of fluid flows and can be found in all types of engineering applications, ranging from gas transport in pipes to micro flow reactors. As a second test case a Y-junction with one outlet channel and two inlet channels is modeled. The angle between each inlet and the horizontal axis is 60 degrees, as shown in Figure~\ref{fig:Y_setup} on the left~\cite{Stephenson}. The length of the channels is 2 m.
The 2D geometry is split in 6 zones as depicted in Figure~\ref{fig:Y_setup} on the left. On the three rectangular zones a mesh with quadrilateral cells is constructed. The remaining three zones are meshed with hexagonal cells. The different meshes are depicted in Figure~\ref{fig:Y_setup} on the right. The total number of cells is 13046.
The boundary is subdivided into four different parts $\Gamma$ = $\Gamma_{i1}$ $\cup$ $\Gamma_{i2}$ $\cup$ $\Gamma_o$ $\cup$ $\Gamma_w$. The two inlets, $\Gamma_{i1}$ and $\Gamma_{i2}$, have a width of 0.5 m, while the outlet, $\Gamma_o$, has a width of 1 m. The kinematic viscosity is equal to $\nu$ = 1 $\cdot$ \num{e-2} m$^2$/s, meaning that the Reynolds number at the inlet is 50 and the flow is considered laminar. The uniform inlet velocities are time-dependent and the velocity magnitude of the flow at the inlets is set according to Figure~\ref{fig:Y_BCs_eps}.
\begin{figure}[h!]
\centering
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_setup_LR.pdf}
\end{subfigure}%
\begin{subfigure}{.55\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_setup_LR_zoom_in.pdf}
\end{subfigure}%
\caption{(Left) Sketch of the geometry and mesh of Y-junction test case including boundary conditions. (Right) close up of the mesh in different zones.}
\label{fig:Y_setup}
\end{figure}
A homogeneous Neumann boundary condition is applied for pressure at the inlet and wall boundaries. At the outlet, $\Gamma_o$, $p$ = 0 m$^2$/s$^2$ together with a homogeneous Neumann BC for velocity. A no-slip BC is applied at the walls, $\Gamma_w$.
As initial conditions the steady state solution, obtained with the $simpleFoam$ solver, for a velocity magnitude of 1 m/s at both inlets is chosen. The other boundary conditions are the same as for the unsteady simulation described above.
As done previously for the lid-driven cavity case, the unsteady governing equations are iteratively solved by the FV method with the $pimpleFoam$ solver of OpenFOAM 6~\cite{Jasak}. For the full order simulations, the discretization in space is performed with a central differencing scheme for the diffusive term and a combination of a second order central-differencing and upwind schemes for the convective term. The temporal discretization is treated using a second order backward differencing scheme (BDF2). A constant time step of $\Delta t$ = 5 $\cdot$\num{e-4} s has been applied and the total full order simulation time is 12 s, for which snapshots of the velocity and pressure fields are collected every 0.03 s, resulting in a total of 401 snapshots (including 1 for the initial condition). The inlet velocity BCs are time-dependent and the velocity magnitude of, alternately, inlet 1 or 2 is increased or decreased linearly between 1 m/s and 0.5 m/s as shown in Figure~\ref{fig:Y_BCs_eps} on the left.
\begin{figure}[h!]
\centering
\begin{subfigure}{.42\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{FiguresFinal/Y_training_FOM-eps-converted-to.pdf}
\end{subfigure}%
\begin{subfigure}{.58\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{FiguresFinal/Y_training_ROM-eps-converted-to.pdf}
\end{subfigure}%
\caption{Boundary conditions for the Y-junction test case. (Left) inlet velocity BCs for the FOM. (Right) inlet velocity BCs for ROM. }
\label{fig:Y_BCs_eps}
\end{figure}
\newpage
In that way, the ROM is trained for all possible combinations of inlet velocities within the specified range. The inlet boundary conditions of the ROM are then controlled according to Figure~\ref{fig:Y_BCs_eps} on the right, where the inlet velocity magnitude is increased or decreased linearly over time between the maximum of 1 m/s and the minimum of 0.5 m/s. The magnitude of the inlet velocities of the ROM decreases and increases faster or slower over time compared to the training run. In addition, the ROM is tested for a longer time period, 18 s, compared to the full order simulation time of 12 s, so that its long-term performance can be assessed.
The temporal discretization of the ROM is first order and the resulting nonlinear reduced system is solved with Newton's method.
Both the iterative penalty method and the lifting function method are tested. The lifting functions are determined by solving a potential flow problem given by
\begin{align} \label{eq:potential_flow}
\begin{cases}
\nabla \cdot \boldsymbol{u} = 0 &\mbox{in } \Omega, \\
\nabla^2 p = 0 &\mbox{in } \Omega,\\
\left(p(\boldsymbol{x})\boldsymbol{I}\right)\boldsymbol{n} = 0 &\mbox{on } \Gamma_o, \\
\left(\nabla p(\boldsymbol{x}) \right)\boldsymbol{n}= 0 &\mbox{on }\Gamma \setminus \Gamma_o, \\
\left(\nabla \boldsymbol{u}(\boldsymbol{x}) \right)\boldsymbol{n} = 0 &\mbox{on } \Gamma_o,\\
\left(\nabla \boldsymbol{u}(\boldsymbol{x}) \right)\boldsymbol{n} = 0 &\mbox{on } \Gamma_{w}, \\
\boldsymbol{u}(\boldsymbol{x}) = \boldsymbol{g1}(\boldsymbol{x}) &\mbox{on } \Gamma_{i1}, \\
\boldsymbol{u}(\boldsymbol{x}) = \boldsymbol{g2}(\boldsymbol{x}) &\mbox{on } \Gamma_{i2},
\end{cases}
\end{align}
with the magnitude of the inlet velocity at inlet 1, $\Gamma_{i1}$, set to 1 m/s while inlet 2, $\Gamma_{i2}$, is kept at 0 m/s, which yields the first lifting function. To obtain the second lifting function, $\|\boldsymbol{u}\|$ is set to 0 m/s at $\Gamma_{i1}$ and to 1 m/s at $\Gamma_{i2}$. Both lifting functions are shown in Figure~\ref{fig:Y_control}.
\begin{figure}[h!]
\centering
\begin{subfigure}{.35\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{FiguresFinal/Y_U_lift_0_LR.png}
\end{subfigure}%
\begin{subfigure}{.35\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{FiguresFinal/Y_U_lift_1_LR.png}
\end{subfigure}%
\caption{The lifting functions for velocity for the Y-junction.}
\label{fig:Y_control}
\end{figure}
The test case of the Y-junction is more complicated than the lid driven cavity case as not one, but two boundaries need to be controlled, which are also time-dependent. Furthermore, as the channel inlets are placed at an angle, one needs to take into account that the inlet velocity can be decomposed into an x- and a y-component. Therefore, the vectorial lifting functions are split into their components before normalization. Similarly, in the case of the penalty method, four penalty factors are determined: one for each inlet and each direction. This will be further discussed in Section~\ref{sec:discussion}.
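The component splitting described above can be illustrated with a short sketch. The function name and data layout (one $n_\text{cells} \times 2$ array of cell-centred vectors per lifting function) are illustrative assumptions, not the actual ITHACA-FV implementation: each vectorial lifting function is split into an x-only and a y-only field, and each component field is then normalized by its own $L^2$-norm.

```python
import numpy as np

def split_and_normalize(lift):
    """Split a vectorial lifting function (n_cells x 2 array of
    cell-centred velocity vectors) into its x- and y-components and
    normalize each component field by its own L2-norm.

    Illustrative sketch only; names and layout are assumptions.
    """
    components = []
    for k in range(lift.shape[1]):
        comp = np.zeros_like(lift, dtype=float)
        comp[:, k] = lift[:, k]          # keep only one direction
        norm = np.linalg.norm(comp)
        if norm > 0.0:
            comp /= norm                 # unit L2-norm per direction
        components.append(comp)
    return components
```

Applied to both inlet lifting functions, this yields the four scalar-direction control functions matching the four penalty factors.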
\section{RESULTS AND ANALYSIS}\label{sec:results}
\subsection{Lid driven cavity flow problem}
First, the full order simulation for the lid driven cavity test case is performed and 1001 velocity and pressure snapshots are collected, including the initial condition, which are then used to create the POD basis functions. Stabile and Rozza~\cite{Stabile2017CAF} concluded in their research that 10 velocity and pressure modes are enough to retain 99.99$\%$ of the energy contained in the snapshots. Therefore, the same number of modes is used for the reduced basis creation in this work.
Reduced order models are constructed with both the lifting function and the penalty method and compared with a ROM without boundary enforcement. With the iterative procedure, a penalty factor of 0.058 is determined within 2 iterations by evaluating only the first five time steps with a maximum error tolerance, $\epsilon$, of~\num{e-5} on the value of the boundary condition of the ROM, starting from an initial guess of~\num{e-6}. For a similar study of the lid driven cavity benchmark problem, Lorenzi et al.~\cite{lorenzi2016pod} found a factor between~\num{e-5} and~\num{e2} using numerical experimentation. The value found here using the iterative method is thus within the same range. A higher value for the penalty factor can be used, but it is then more likely that the ROM becomes ill-conditioned.
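The iterative determination of the penalty factor can be sketched as a simple multiplicative search. This is a hedged stand-in, not the exact procedure used here: the callable \texttt{bc\_error} abstracts the reduced solve over the first few time steps, and the tenfold growth rule is an assumption.

```python
def tune_penalty_factor(bc_error, tau0=1e-6, tol=1e-5,
                        growth=10.0, max_iter=50):
    """Increase the penalty factor until the ROM reproduces the
    prescribed boundary value within tolerance.

    bc_error(tau): absolute mismatch between the boundary value
    attained by the ROM (over the first few time steps) and the
    prescribed value, for penalty factor tau. Illustrative stand-in
    for a reduced solve; not the actual ITHACA-FV routine.
    """
    tau = tau0
    for _ in range(max_iter):
        if bc_error(tau) < tol:
            return tau
        tau *= growth  # strengthen the boundary penalization
    raise RuntimeError("penalty factor search did not converge")
```

The search is cheap because each candidate factor is evaluated on a handful of reduced time steps only.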
The obtained ROMs are tested for the same initial and boundary conditions as the high fidelity simulation. The evolution in time of the relative $L^2$-error between the reconstructed fields and the full order solutions is plotted in Figure~\ref{fig:LID_L2_error} together with the basis projection.
In case no boundary enforcement method is used, the flow field remains zero throughout the simulation and therefore the relative error is 1. When either the lifting function or the penalty method is used, the relative $L^2$-errors for both the velocity and pressure fields are of the order of \num{e-1} due to the relatively low number of snapshots acquired during the initial part of the transient. The snapshots are equally distributed in time, while this time span exhibits the most non-linear behavior. Therefore, one should concentrate the snapshots in this time span to enhance the performance of the ROM~\cite{Stabile2017CAF}. After about 2 seconds of simulation time, for both boundary control methods, the relative velocity error drops to the order of \num{e-3}. At the final time of the simulation the penalty method performs slightly better than the lifting function method, but the order is the same.
Contrary to velocity, the relative error for pressure remains at about 4$\cdot$\num{e-1} after 2 s of simulation time, while the projection error drops to about \num{e-3}. This has been previously observed by Stabile et al. in~\cite{Stabile2017CAIM}. The PPE stabilization method is less accurate for pressure compared to the supremizer enrichment method. This has also been found by Kean and Schneier~\cite{kean2020error} in the finite element-based ROM setting.
Furthermore, the absolute error between the FOM and the ROMs is shown in Figures~\ref{fig:LID_U} and~\ref{fig:LID_p} for the velocity magnitude and pressure, respectively.
It is observed that both methods lead, for velocity, to an absolute error between the FOM and the ROM of the order of \num{e-2} at the beginning of the simulation and of about \num{e-3} once the flow has reached its steady state solution. Furthermore, the velocity error slightly increases between 5 and 10 s of simulation time. This can also be observed in the $L^2$-error analysis over time in Figure~\ref{fig:LID_L2_error}. For pressure, the error is largest near the top corners of the cavity and is of the order of \num{e-3}. Note that the scale does not show the whole range of absolute errors; this is done to better visualize the error. The maximum error for pressure is about 5$\cdot$\num{e-2} m$^2$/s$^2$ at the top right corner. As the pressure relative to its reference point at (0,0) plotted in Figure~\ref{fig:LID_p} is always less than 1 m$^2$/s$^2$, the relative error plotted in Figure~\ref{fig:LID_L2_error} is greater than the absolute error plotted in Figure~\ref{fig:LID_p}. Furthermore, the error distribution, for both the velocity and pressure fields, is similar over the whole domain, meaning that the methods perform equally well, as previously confirmed by the $L^2$-error analysis over time in Figure~\ref{fig:LID_L2_error}.
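The relative $L^2$-errors reported in this section are of the standard discrete form; a minimal sketch for cell-value arrays follows (uniform cell weighting is an assumption, a finite volume implementation may weight by cell volumes):

```python
import numpy as np

def relative_l2_error(fom_field, rom_field):
    """Relative L2-error of a reduced field with respect to the
    full order reference, computed over all cell values.

    Sketch with uniform weights; cell-volume weighting would be a
    straightforward extension.
    """
    fom = np.asarray(fom_field, dtype=float)
    rom = np.asarray(rom_field, dtype=float)
    return np.linalg.norm(fom - rom) / np.linalg.norm(fom)
```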
\begin{figure}[h!]
\centering
\begin{subfigure}{.49\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_L2_U_pdf.pdf}
\end{subfigure}%
\begin{subfigure}{.49\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_L2_p_pdf.pdf}
\end{subfigure}
\caption{Relative $L^2$-error of velocity (left) and pressure (right) between the FOM and ROM with lifting function and with penalty method.}
\label{fig:LID_L2_error}
\end{figure}
The relative error for the total kinetic energy is determined and plotted in Figure~\ref{fig:LID_KE}. The order of magnitude is the same for both boundary control methods; from time to time the penalty method performs slightly better and vice versa, and the relative error is less than \num{e-2} for the greater part of the simulation.
\begin{figure}[t!]
\centering
\includegraphics[width=0.5\linewidth]{FiguresFinal/LID_KE_pdf.pdf}
\caption{Kinetic energy relative $L^2$-error for the ROM with lifting function and with penalty method.}
\label{fig:LID_KE}
\end{figure}
\newpage
Finally, the computational times for performing the full order simulation (Eq.~\ref{eq:FOM_mat}), calculating the POD modes (Eqs.~\ref{eq:approx}-\ref{eq:POD}), the reduced matrices (Eqs.~\ref{eq:ROM_matrices},~\ref{eq:C_matrix} and~\ref{eq:ROM_matrices2}) and performing the simulation at reduced level (Eq.~\ref{eq:ROM} (lifting function method) or Eq.~\ref{eq:pen_ROM_min} (penalty method) \& Eq.~\ref{eq:ROM2}) are all listed in Table~\ref{tab:times}. Calculating the POD modes, the reduced matrices and the ROM solutions takes more time in the case of the lifting function method than for the penalty method, as the reduced basis space consists of an additional mode, namely the normalized lifting function for the boundary with the lid. Determining the penalty factor with the iterative method takes only 0.11 s. The speedup ratio between the ROM and the FOM is about 270 for the lifting function method and 308 for the penalty method.
\begin{table}[h!]
\caption{Computational time (clock time) for the FOM simulation, POD, calculating reduced matrices offline (Matrices), determining penalty factor with iterative method (Penalty factor) and ROM simulation.}
\centering
\begin{tabular}{lllllll}
\hline
\multicolumn{1}{c}{Method} &FOM &POD &Matrices &Penalty factor &ROM \\ \hline
Lifting &37 min. & 50 s & 8.2 s & - &8.2 s \\
Penalty & 37 min. & 45 s & 6.8 s & 0.11 s &7.2 s \\ \hline
\end{tabular}
\label{tab:times}
\end{table}
\begin{figure}
\centering
\begin{subfigure}{.19\linewidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_U_FOM_02.png}
\end{subfigure}%
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_U_ROM_lifting_02.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_U_ROM_PEN_02.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_U_COMP_lifting_02.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_U_DIFF_PEN_02.png}
\end{subfigure}
\begin{subfigure}{.19\linewidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_U_FOM_1.png}
\end{subfigure}%
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_U_ROM_lifting_1.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_U_ROM_PEN_1.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_U_COMP_lifting_1.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_U_DIFF_PEN_1.png}
\end{subfigure}
\begin{subfigure}{.19\linewidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_U_FOM_5.png}
\end{subfigure}%
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_U_ROM_lifting_5.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_U_ROM_PEN_5.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_U_COMP_lifting_5.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_U_DIFF_PEN_5.png}
\end{subfigure}
\begin{subfigure}{.19\linewidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_U_FOM_10.png}
\end{subfigure}%
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_U_ROM_lifting_10.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_U_ROM_PEN_10.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_U_COMP_lifting_10.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_U_DIFF_PEN_10.png}
\end{subfigure}
\caption{Comparison of the full order velocity magnitude fields (1st column), the ROM fields obtained with the lifting function method (2nd column) and penalty method (4th column) and the difference between the FOM and ROM fields obtained with the lifting function method (3rd column) and penalty method (5th column) at $t$ = 0.2, 1, 5 and 10 s (from top to bottom) for the lid driven cavity problem.}
\label{fig:LID_U}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{.19\linewidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_p_FOM_02.png}
\end{subfigure}%
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_p_ROM_lifting_02.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_p_ROM_PEN_02.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_p_COMP_lifting_02.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_p_DIFF_PEN_02.png}
\end{subfigure}
\begin{subfigure}{.19\linewidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_p_FOM_1.png}
\end{subfigure}%
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_p_ROM_lifting_1.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_p_ROM_PEN_1.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_p_COMP_lifting_1.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_p_DIFF_PEN_1.png}
\end{subfigure}
\begin{subfigure}{.19\linewidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_p_FOM_5.png}
\end{subfigure}%
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_p_ROM_lifting_5.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_p_ROM_PEN_5.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_p_COMP_lifting_5.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_p_DIFF_PEN_5.png}
\end{subfigure}
\begin{subfigure}{.19\linewidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_p_FOM_10.png}
\end{subfigure}%
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_p_ROM_lifting_10.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_p_ROM_PEN_10.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_p_COMP_lifting_10.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/LID_p_DIFF_PEN_10.png}
\end{subfigure}
\caption{Comparison of the full order pressure fields (1st column), the ROM fields obtained with the lifting function method (2nd column) and penalty method (4th column) and the difference between the FOM and ROM fields obtained with the lifting function method (3rd column) and penalty method (5th column) at $t$ = 0.2, 1, 5 and 10 s (from top to bottom) for the lid driven cavity problem. }
\label{fig:LID_p}
\end{figure}
\clearpage
\subsection{Y-junction flow problem}
A full order simulation is performed for the Y-junction test case with varying inlet velocity magnitudes according to Figure~\ref{fig:Y_BCs_eps} on the left. In total, 401 velocity and pressure snapshots are collected, which are then used to create the POD basis functions. To determine the number of basis functions necessary for the creation of the reduced subspace, the cumulative eigenvalues (based on the first 20 most energetic POD modes) are listed in Table~\ref{tab:Y_cumm_ev}.
\begin{table}[h!]
\caption{The cumulative eigenvalues for the Y-junction test case. The second and third columns report the cumulative eigenvalues (total of the first 20 modes) for the velocity and pressure fields, respectively.}
\centering
\begin{tabular}{lll}
\hline
\multicolumn{1}{l}{N modes} & $\boldsymbol{u}$ & $p$ \\ \hline
1 & 0.976478 & 0.967073 \\
2 & 0.998492 & 0.989840 \\
3 & 0.999724 & 0.998781 \\
4 & 0.999859 & 0.999741 \\
5 & 0.999924 & 0.999933 \\
6 & 0.999967 & 0.999975 \\
7 & 0.999989 & 0.999995 \\
10 & 0.999999 & 0.999999 \\ \hline
\end{tabular}
\label{tab:Y_cumm_ev}
\end{table}
Five velocity and pressure modes are sufficient to retain 99.99$\%$ of the energy contained in the snapshots. These first five (homogenized) velocity and pressure modes are plotted in Figure~\ref{fig:Y_modes}. The first velocity magnitude mode has a symmetric pattern; it is close to the time-averaged solution when it has non-homogeneous BCs and looks more like a fluctuation around the mean when it has homogeneous BCs. From the third mode onward, the modes are almost identical, whether or not they have homogeneous BCs.
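The mode selection from the cumulative eigenvalues of Table~\ref{tab:Y_cumm_ev} amounts to the following sketch (function name and interface are illustrative):

```python
import numpy as np

def modes_for_energy(eigenvalues, threshold=0.9999):
    """Return the smallest number of POD modes whose cumulative
    eigenvalue fraction reaches the requested energy threshold."""
    lam = np.asarray(eigenvalues, dtype=float)
    cumulative = np.cumsum(lam) / lam.sum()
    # first index at which the threshold is reached (1-based count)
    return int(np.searchsorted(cumulative, threshold) + 1)
```

With the eigenvalue increments of the velocity column of Table~\ref{tab:Y_cumm_ev}, a threshold of 99.99\% yields five modes, consistent with the choice made above.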
Figure~\ref{fig:Y_ev} shows, for each number of modes, the time-averaged relative $L^2$-error between the FOM and the basis projection: on the left for velocity and on the right for pressure. For velocity this is repeated with a set of homogenized snapshots. As there are two inlet boundary conditions, the first two modes are the normalized lifting functions and all subsequent modes are the homogeneous basis functions obtained with the POD method. Therefore, the average $L^2$-error is still above the order of \num{e-1} for the first two modes, as they do not contain any information about the full order solution. The figure shows that 11 velocity basis functions and 10 pressure basis functions are required to obtain a truncation error of less than \num{e-3}. Taking the previous observation into account as well, these numbers of modes are used for calculating the ROM matrices. \newpage
\begin{figure}[h!]
\centering
\begin{subfigure}{.195\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_U_mode_1.png}
\end{subfigure}%
\begin{subfigure}{.195\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_U_mode_2.png}
\end{subfigure}
\begin{subfigure}{.195\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_U_mode_3.png}
\end{subfigure}
\begin{subfigure}{.195\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_U_mode_4.png}
\end{subfigure}
\begin{subfigure}{.195\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_U_mode_5.png}
\end{subfigure}
\begin{subfigure}{.195\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_U_mode_1_homogenized.png}
\end{subfigure}%
\begin{subfigure}{.195\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_U_mode_2_homogenized.png}
\end{subfigure}
\begin{subfigure}{.195\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_U_mode_3_homogenized.png}
\end{subfigure}
\begin{subfigure}{.195\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_U_mode_4_homogenized.png}
\end{subfigure}
\begin{subfigure}{.195\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_U_mode_5_homogenized.png}
\end{subfigure}
\begin{subfigure}{.195\textwidth}
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_p_mode_1.png}
\end{subfigure}%
\begin{subfigure}{.195\textwidth}
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_p_mode_2.png}
\end{subfigure}
\begin{subfigure}{.195\textwidth}
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_p_mode_3.png}
\end{subfigure}
\begin{subfigure}{.195\textwidth}
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_p_mode_4.png}
\end{subfigure}
\begin{subfigure}{.195\textwidth}
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_p_mode_5.png}
\end{subfigure}
\caption{First 5 POD modes for (top) velocity, (middle) velocity with homogeneous BCs and (bottom) pressure for the Y-junction flow problem. }
\label{fig:Y_modes}
\end{figure}
After applying the Galerkin projection with the obtained modes, the penalty factors are determined using the iterative procedure. Starting from an initial guess of \num{e-6}, the penalty factors found are 5.9$\cdot$\num{e-8} and 88.3 for inlet 1 and 1.1$\cdot$\num{e-7} and 125 for inlet 2 in the x-direction and y-direction, respectively. The factors are determined within 41 iterations for an error tolerance of \num{e-5}, evaluating only the first five time steps. However, it took only 15 iterations to reach an error of 1.00009$\cdot$\num{e-5} with penalty factors 0.0327, 88.3, 0.048 and 124.5. Hence, the error criterion could be relaxed slightly for faster convergence. \newpage
\begin{figure}[h!]
\centering
\includegraphics[width=0.95\linewidth]{FiguresFinal/Y_ave_L2_pdf.pdf}
\caption{The time-averaged $L^2$-error per number of (left) velocity modes (Umodes) and (right) pressure modes (Pmodes) for the Y-junction test case.}
\label{fig:Y_ev}
\end{figure}
Thereafter, three ROMs are obtained: one without boundary enforcement method, one with the lifting function method and one with the penalty method. These are then tested for the time-dependent boundary conditions of Figure~\ref{fig:Y_BCs_eps} on the right. The evolution in time of the relative $L^2$-error between the reconstructed fields and the full order solutions is plotted in Figure~\ref{fig:Y_L2_error}.
In case no boundary enforcement method is used, the relative error for both velocity and pressure is of the order of 1 or larger for the vast part of the simulation.
The relative error is more or less the same for both boundary control methods, as was also observed for the lid driven cavity test case, except around 9 s of simulation time. There, the difference in the relative pressure error between the two methods is largest; the error of the penalty method is about 2 $\cdot$ \num{e-1} larger than that obtained with the lifting function method. However, in the long term the penalty method performs slightly better. This can also be concluded from the relative kinetic energy error in Figure~\ref{fig:Y_KE_eps}. Other than that, the relative velocity error is of the order of \num{e-2} and that for pressure of the order of \num{e-1}.
A possible source of the larger pressure error is that the PIMPLE algorithm, consisting of predictor and corrector steps for pressure and velocity, is used at the full order level, while the coupled (pressure-velocity) system at the reduced order level is solved with Newton's iterative method. This causes a discrepancy between the full order and reduced order model formulations. Nevertheless, the difference between the minimum and maximum relative error for both variables is about one order of magnitude.
Furthermore, the absolute error between the FOM and the ROMs is shown in Figures~\ref{fig:Y_U} and~\ref{fig:Y_p} for the velocity magnitude and pressure, respectively. For velocity, the absolute error between the FOM and the ROM is of the order of \num{e-2} for all plotted simulation times and the absolute error for pressure is of the order of \num{e-1}. For pressure, the error is indeed larger in the case of the penalty method compared to the lifting function method at 9 s of simulation time, as previously observed in Figure~\ref{fig:Y_L2_error}, but in general the error distribution, for both the velocity and pressure fields, is similar over the domain, and thus the methods perform equally well.
\begin{figure}[h!]
\centering
\begin{subfigure}{.49\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_L2_U_pdf.pdf}
\end{subfigure}%
\begin{subfigure}{.49\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_L2_p_pdf.pdf}
\end{subfigure}
\caption{Relative $L^2$-error of velocity (left) and pressure (right) between the FOM and ROM with lifting function and with penalty method for the Y-junction flow problem.}
\label{fig:Y_L2_error}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.50\linewidth]{FiguresFinal/Y_KE_pdf.pdf}
\caption{Kinetic energy relative $L^2$-error for the ROM with lifting function and with penalty method for the Y-junction flow problem.}
\label{fig:Y_KE_eps}
\end{figure}
\begin{figure}[h!]
\centering
\begin{subfigure}{.19\linewidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_U_FOM_3.png}
\end{subfigure}%
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_U_ROM_lift_function_3.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_U_ROM_PEN_3.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_U_COMP_lift_3.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_U_DIFF_PEN_3.png}
\end{subfigure}
\begin{subfigure}{.19\linewidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_U_FOM_9.png}
\end{subfigure}%
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_U_ROM_lift_function_9.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_U_ROM_PEN_9.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_U_COMP_lift_9.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_U_DIFF_PEN_9.png}
\end{subfigure}
\begin{subfigure}{.19\linewidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_U_FOM_18.png}
\end{subfigure}%
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_U_ROM_lift_function_18.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_U_ROM_PEN_18.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_U_COMP_lift_18.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_U_DIFF_PEN_18.png}
\end{subfigure}
\caption{Comparison of the full order velocity magnitude fields (1st column), the ROM fields obtained with the lifting function method (2nd column) and penalty method (4th column) and the difference between the FOM and ROM fields obtained with the lifting function method (3rd column) and penalty method (5th column) at $t$ = 3, 9 and 18 s (from top to bottom) for the Y-junction flow problem. }
\label{fig:Y_U}
\end{figure}
\begin{figure}[h!]
\centering
\begin{subfigure}{.19\linewidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_p_FOM_3.png}
\end{subfigure}%
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_p_ROM_lift_function_3.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_p_ROM_PEN_3.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_p_COMP_lift_3.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_p_DIFF_PEN_3.png}
\end{subfigure}
\begin{subfigure}{.19\linewidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_p_FOM_9.png}
\end{subfigure}%
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_p_ROM_lift_function_9.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_p_ROM_PEN_9.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_p_COMP_lift_9.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_p_DIFF_PEN_9.png}
\end{subfigure}
\begin{subfigure}{.19\linewidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_p_FOM_18.png}
\end{subfigure}%
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_p_ROM_lift_function_18.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_p_ROM_PEN_18.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_p_COMP_lift_18.png}
\end{subfigure}
\begin{subfigure}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FiguresFinal/Y_p_DIFF_PEN_18.png}
\end{subfigure}
\caption{Comparison of the full order pressure fields (1st column), the ROM fields obtained with the lifting function method (2nd column) and penalty method (4th column) and the difference between the FOM and ROM fields obtained with the lifting function method (3rd column) and penalty method (5th column) at $t$ = 3, 9 and 18 s (from top to bottom) for the Y-junction flow problem. }
\label{fig:Y_p}
\end{figure}
\clearpage
Finally, the computational times for performing the full order simulation (Eq.~\ref{eq:FOM_mat}), calculating the POD modes (Eqs.~\ref{eq:approx}-\ref{eq:POD}), the reduced matrices (Eqs.~\ref{eq:ROM_matrices},~\ref{eq:C_matrix} and~\ref{eq:ROM_matrices2}) and performing the simulation at reduced level (Eq.~\ref{eq:ROM} (lifting function method) or Eq.~\ref{eq:pen_ROM_min} (penalty method) \& Eq.~\ref{eq:ROM2}) are listed in Table~\ref{tab:Y_times}. Calculating the reduced matrices and the ROM solutions takes more time in the case of the lifting function method than for the penalty method, as the reduced basis space consists of four additional modes, namely the normalized lifting functions. Determining the penalty factors with the iterative method takes 1.4 s. The speedup ratio between the ROM and the FOM is about 13 for the lifting function method and 24 for the iterative penalty method.
\begin{table}[h!]
\caption{Computational time (clock time) for the FOM simulation, POD, calculating reduced matrices offline (Matrices), determining the penalty factor with the iterative method (Penalty factor) and ROM simulation.}
\centering
\begin{tabular}{lllllll}
\hline
\multicolumn{1}{c}{Method} &FOM &POD &Matrices &Penalty factor &ROM \\ \hline
Lifting &13 min. & 7.6 s & 9.2 s &- &59 s \\
Penalty & 13 min. & 7.9 s & 4.7 s &1.4 s &33 s \\ \hline
\end{tabular}
\label{tab:Y_times}
\end{table}
\section{DISCUSSION}\label{sec:discussion}
The results have shown that the lifting function method and the penalty method perform equally well and lead to similar results. However, each has its own advantages and drawbacks. A disadvantage of the penalty method is that the penalty factor cannot be determined a priori~\cite{graham1999optimal1}. The implementation of an iterative solver to determine the penalty factor does, however, save time compared to performing numerical experimentation manually. On the other hand, even though lifting functions can be determined beforehand, it may be hard to find a function that leads to an accurate ROM; therefore, extensive testing of ROMs for different functions may be needed. In this work, the lifting functions are obtained by solving a potential flow problem and are thus physics-based, unlike the penalty factor, which is an arbitrary value. Moreover, this value needs to be chosen above a certain threshold to enforce the BCs in the ROM, but can lead to an inaccurate ROM solution if it is too high~\cite{Sirisup}. In that case, the penalty method fails for that specific problem.
Finally, an advantage of the penalty method stated in literature~\cite{lorenzi2016pod} is that long-time integration and initial condition issues are less of a problem compared to a lifting function method. Here the ROMs have not been tested for long-term integration, so further research is needed in order to confirm this statement. However, as tested for the Y-junction test case, the ROM is accurate and does not exhibit instabilities even outside the time domain in which snapshots were collected.
For both cases tested in this study, only one full order simulation has been performed for collecting the snapshots. However, in case the BCs of the Y-junction are not time-dependent, snapshots from at least two different offline solves are required for the penalty method. The reason for this is that the boundary conditions are a linear combination of snapshots and can therefore only be scaled, not set to an arbitrary value, when only snapshots from one full order simulation are used for the POD. When several sets of snapshots for different boundary values are required, one can optimize the POD procedure by using a nested POD approach~\cite{georgaka2018parametric}.
It is important to note that the penalty factor is determined during the online phase and does not depend on high-fidelity data. Therefore, no modifications are needed in the case of parametric problems that, for example, use the viscosity as the physical parameter.
In the case of the Y-junction test case, the penalty method can be used to adjust the direction of the inlet flow in the ROM. One penalty factor enforces the x-direction and another the y-direction. Nevertheless, new snapshots for different inlet angles are required as the current POD bases do not contain this information. For the lifting function method, it is often problematic to determine suitable lifting functions that are physical. Ideally, the lifting functions are orthogonal to each other as in the work of Hijazi et al.~\cite{HijaziStabileMolaRozza2020a} who studied a flow past an airfoil with parameterized angle of attack and inflow velocity. They used two lifting functions with orthogonal inflow conditions: $\boldsymbol{\zeta}_{c_1}$ = (0,1) and $\boldsymbol{\zeta}_{c_2}$ = (1,0) on ${\Gamma_i}$, respectively. These lifting functions are obtained by solving two linear potential flow problems. In that way, it is possible to adjust the direction of the flow at an inlet by scaling the associated lifting functions accordingly. However, specifying a purely tangential velocity at the inlets of the Y-junction would result in unphysical lifting functions. Thus, this approach is only suitable for a few problems and will not always lead to physical results.
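To make the scaling argument concrete, the following sketch shows how the coefficients of two orthogonal lifting functions combine to reproduce an inlet velocity at a prescribed angle; the inlet speed and angle here are illustrative values, not taken from the cited work:

```python
import math

# Orthogonal inflow lifting functions on Gamma_i (as in Hijazi et al.):
# zeta_c1 = (0, 1) carries the y-component, zeta_c2 = (1, 0) the x-component.
zeta_c1 = (0.0, 1.0)
zeta_c2 = (1.0, 0.0)

def inlet_coefficients(U, theta):
    """Coefficients of (zeta_c1, zeta_c2) reproducing an inlet velocity
    U*(cos(theta), sin(theta)) on Gamma_i."""
    return U * math.sin(theta), U * math.cos(theta)

# Illustrative inlet: unit speed at 30 degrees from the x-axis.
c1, c2 = inlet_coefficients(1.0, math.radians(30.0))
u_bc = (c1 * zeta_c1[0] + c2 * zeta_c2[0],
        c1 * zeta_c1[1] + c2 * zeta_c2[1])
print(u_bc)  # equals (cos 30deg, sin 30deg)
```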
In the case of non-physical lifting functions, the ROM gets unstable or the ROM solutions are polluted with noise. This strongly depends on the chosen lifting functions.
Moreover, both methods can, in theory, also be used for controlling pressure boundary conditions, but this is not studied in this work.
In this study, a pressure Poisson equation has been incorporated in the ROM as a stabilization method. Even though the ROMs are indeed stable, the relative error for pressure is about an order of magnitude higher than for velocity. Alternatively, the supremizer enrichment of the velocity space technique could be used to stabilize the ROM, which may lead to more accurate pressure fields~\cite{Stabile2017CAF,kean2020error,ballarin2015supremizer}.
Furthermore, the ROMs can be improved by using a second order backward method for the time discretization of the ROM as the FOMs are treated using a second order backward differencing scheme.
Finally, for the Y-junction test case, the full order snapshots and the ROM solutions all have inlet velocities between an identical maximum and minimum value. The ROM could become less stable and accurate in case it is tested for values outside this range. Therefore it is recommended to collect snapshots for the same range as for which the ROM boundary needs to be controlled.
\section{CONCLUSIONS AND PERSPECTIVES}\label{sec:conclusion}
Two boundary control methods are tested: the lifting function method and the iterative penalty method for controlling the velocity boundary conditions of FV-based POD-Galerkin ROMs. The penalty method has been improved by using an iterative solver for the determination of the penalty factors, rather than using numerical experimentation. The factors are determined by the iterative solver in about a second for both test cases. The results of the reconstructed velocity and pressure fields show that both methods perform equally. Moreover, the reduced order model whose boundary conditions are controlled with the iterative penalty method is about two times faster compared to the lifting function method for the Y-junction flow case.
A pressure Poisson equation approach is applied for the reconstruction of the pressure field and to stabilize the ROM. For time-dependent boundary problems, an additional term is added to the ROM formulation.
Finally, a speedup factor, the ratio between the FOM and ROM simulation time, of 308 is obtained with the iterative penalty method and of 270 with the lifting function method for the lid driven cavity test case. The speedup factors are 24 and 13, respectively, for the Y-junction test case.
For further development, the model will be extended for turbulent flows, which will be essential to simulate industrial flow problems. Furthermore, the control of the pressure boundary conditions needs to be investigated, which may be required when coupling 3D CFD problems with 1D system codes. Also, the accuracy of the reconstructed pressure fields can be improved by using a supremizer enrichment approach rather than solving the Pressure Poisson Equation. The effect of supremizer enrichment of the velocity space on the boundary control methods will have to be investigated.
\section*{Acknowledgments}
This work has been partially supported by the ENEN+ project that has received funding from the Euratom research and training Work Programme 2016 - 2017 - 1 \#755576. In addition we acknowledge the support provided by the European Research Council Executive Agency by the Consolidator Grant project AROMA-CFD ``Advanced Reduced Order Methods with Applications in Computational Fluid Dynamics'' - GA 681447, H2020-ERC CoG 2015 AROMA-CFD and INdAM-GNCS projects.
\bibliographystyle{ieeetr}
\section{\large Introduction}
\vskip .5cm
Since their introduction by Schr\"{o}dinger
\cite{schrodinger}, coherent states (CSs) have attracted
considerable attention in the literature
\cite{klauder1,klauder2,klauder3,perelomov,gilmore,GSA}. A variety
of coherent states, e.g., minimum uncertainty coherent state
(MUCS), annihilation operator coherent state (AOCS), displacement
operator coherent state (DOCS) and recently Klauder type CS
\cite{klauder3}, possessing temporal stability, have been
constructed and applied to diverse physical phenomena
\cite{klauder2}. Coherent states of systems possessing nonlinear
energy spectra are of particular interest as their temporal
evolution can lead to revival and fractional revival, leading to
the formation of Schr\"{o}dinger cat and cat-like states
\cite{averbukh,bluhm,robinett}. A celebrated example in quantum
optics of the aforementioned phenomenon is the coherent state in a
Kerr type nonlinear medium \cite{tara}. In quantum mechanical
potential problems, Hamiltonians for potentials like
P\"{o}schl-Teller, Morse and Rosen-Morse (RM) lead to nonlinear
spectra. Time evolution of the CSs for these potentials, a subject
of considerable current interest
\cite{nietoprd3,benedict,roy,fakhri,shapiro,nieto3,nieto2,crawford,kinani,nieto1,hassouni},
can produce the above type of states.
The simplest way to construct CSs is a symmetry based approach
\cite{perelomov}. It is well-known that making use of the
Heisenberg algebra $\left[a,a^\dag\right]=1$, one can construct
all the above types of CSs for the harmonic oscillator, which are
identical to each other. In many physical problems, groups like
SU(2) and SU(1,1) manifest naturally, enabling a straightforward
construction of CSs. For the identification of the symmetry
structure of quantum mechanical potential problems, recourse has
been taken to a number of approaches, starting from the
factorization property of the corresponding differential
equations. For the so-called shape invariant potentials,
supersymmetric (SUSY) quantum mechanics \cite{khare} based raising and
lowering operators have found significant application. A Klauder
type CS, using a matrix realization of the ladder operators, has
also been constructed \cite{klauder3}. The fact that, SUSY ladder
operators act on the Hilbert space of different Hamiltonians, has
led to difficulties \cite{fakhri} in proper operator
identification of the symmetry generators. The ladder operators
have been taken to be functions of quantum numbers, which makes
the corresponding algebraic structure ambiguous. This, in turn,
creates difficulty in establishing a precise connection between
the complete set of states describing the CS and the symmetry of
the potential under consideration. In a number of approaches an
additional angular variable has been employed \cite{alhassid} to
identify SU(1,1) type algebras for describing the infinite number
of states of some of these potentials. Taking advantage of the
shape invariance property, quantum group type algebras have also
been used for describing the Hilbert spaces \cite{aleixo}.
Recently, a general procedure for constructing CSs for potential
problems has been developed by some of the present authors
\cite{charan}. The approach makes use of novel exponential forms
of the solutions of the differential equations associated with
these potentials for identifying the symmetry generators
\cite{guru1}. No additional variables are introduced and unlike
SUSY based approaches one stays in the Hilbert space of a given
quantum problem while unravelling its symmetry structure. The
present paper makes use of this approach to study the DOCS of the
P\"{o}schl-Teller potential. The primary motivation for
considering the P\"{o}schl-Teller potential is two-fold. First of
all, it has a quadratic spectrum leading to a rich revival
structure for its CS, which can lead to the formation of cat-like
states. Secondly, many other potentials can be obtained from the
P\"{o}schl-Teller potential by appropriate limiting procedure and
point canonical transformations. Hence, the CSs obtained in this
case may have relevance to other potentials. The temporal
evolution, auto-correlation and quantum carpet structures
\cite{averbukh,robinett,loinaz} of the CSs are carefully analyzed
for delineating their structure and various time scales present in
this problem. These properties are then contrasted with the
corresponding ones of the AOCS \cite{charan}.
The paper is organized as follows. In the following section, we
briefly outline the procedure to identify the symmetry generators
for quantum mechanical potential problems, based on hypergeometric
and confluent hypergeometric equations. These symmetry generators
are then used for constructing the DOCS for general quantum
mechanical potential problems. Dual nature of the DOCS with the
AOCS is algebraically established. In Section 3, the DOCS for the
P\"{o}schl-Teller potential is constructed and its properties
studied. We identify and analyze the various time scales of the
system in Section 4 and compare the quantum evolution of the CS
with the classical motion. We conclude in Section 5, after
pointing out various directions for further work.
\section{\large Algebraic structure of quantum mechanical potential problems}
\vskip .5cm
As is well-known, the Schr\"{o}dinger equation for a
number of solvable potentials can be connected with the
hypergeometric (HG) and confluent hypergeometric (CHG)
differential equations (DEs). For example, harmonic oscillator,
Coulomb and Morse potentials belong to the CHG class, whereas
P\"{o}schl-Teller and Rosen-Morse belong to the HG class. Below,
we briefly outline the steps of a novel procedure for solving DEs
which connects the solution of a DE with the space of monomials
\cite{guru}. This is subsequently used for identifying the
symmetry generators underlying quantum mechanical potential
problems.
A single variable linear differential equation can be easily cast
in the form,
\begin{equation}
[F(D)+P(x,d/dx)]y(x)=0
\label{mainequation}
\end{equation}
where the first part $F(D)$ is a function of the Euler operator
$D=x d/dx$, possibly including a constant term and $P(x,d/dx)$
contains all other operators present in the DE under study. The
solution can be written in the form,
\begin{equation}
y(x)= C_\lambda \sum_{n=0}^{\infty}(-1)^n
\left[\frac{1}{F(D)}P(x,d/dx)\right]^n x^\lambda\;,
\label{solution}
\end{equation}
with the constraint $F(D)\;x^\lambda=0$ \cite{guru}. Using
Eq.~(\ref{solution}) the polynomial solutions of the HG and CHG
can be written in closed exponential forms \cite{guru1}:
\begin{equation}
_2F_1(-n,b;c;x)\;=\;(-1)^{n}\,\frac{\Gamma{(b+n)}
\Gamma{(c)}}{\Gamma{(c+n)} \Gamma{(b)}}
e^{\frac{1}{(D+b)}P(x,\frac{d}{dx})} x^n\;, \label{hyper}
\end{equation}
and
\begin{equation}
_1F_1(-n;c;x)=\;(-1)^{n}\,\frac{\Gamma{(c)}}{\Gamma{(c+n)}}
e^{P(x,\frac{d}{dx})} x^n\;. \label{conhyper}
\end{equation}
The exponential forms of these solutions are ideal for identifying
algebraic structures of the solution spaces. For that purpose, one
first identifies raising and lowering operators in the space of
monomials. The operators at the level of polynomials can be
obtained through similarity transformations. The simplest lowering
operators at the level of monomial for CHG and HG functions can be
taken \cite{guru1} as
\begin{equation}\label{shjsjsk}
K_-=x\frac{d^2}{dx^2}+c\frac{d}{dx},\;\;and\;\;
\bar{K}_-=\frac{1}{(D+b)}(x\frac{d^2}{dx^2}+c\frac{d}{dx}),
\end{equation}
respectively.
The only criterion in choosing these operators at the monomial level is
that they do not lead to divergent expressions after the
similarity transformation. It can be easily shown that, for the
CHG case, the following generators form an SU(1,1) algebra at the
monomial level:
\begin{equation}\label{CHGalgebra}
K_-=x\frac{d^2}{dx^2}+c\frac{d}{dx},\;\;\; K_+=x,\quad and
\;K_3=x\frac{d}{dx}+\frac{c}{2}.
\end{equation}
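The su(1,1) relations $[K_-,K_+]=2K_3$ and $[K_3,K_\pm]=\pm K_\pm$ for the generators in Eq.~(\ref{CHGalgebra}) can be checked symbolically by acting on a generic monomial $x^m$ (a sketch using SymPy):

```python
import sympy as sp

x, c, m = sp.symbols('x c m', positive=True)
f = x**m  # act on a generic monomial x^m

def K_minus(g):
    # K_- = x d^2/dx^2 + c d/dx
    return x * sp.diff(g, x, 2) + c * sp.diff(g, x)

def K_plus(g):
    # K_+ = multiplication by x
    return x * g

def K_3(g):
    # K_3 = x d/dx + c/2
    return x * sp.diff(g, x) + (c / 2) * g

# su(1,1) relations: [K_-, K_+] = 2 K_3 and [K_3, K_(+/-)] = (+/-) K_(+/-)
comm1 = sp.simplify(K_minus(K_plus(f)) - K_plus(K_minus(f)) - 2 * K_3(f))
comm2 = sp.simplify(K_3(K_plus(f)) - K_plus(K_3(f)) - K_plus(f))
comm3 = sp.simplify(K_3(K_minus(f)) - K_minus(K_3(f)) + K_minus(f))
print(comm1, comm2, comm3)  # each commutator difference vanishes
```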
Similarly, for the HG case, the SU(1,1) generators are given as,
\begin{eqnarray}
\bar{K}_-&=&\frac{1}{(D+b)}(x\frac{d^2}{dx^2}+c\frac{d}{dx}),\nonumber\\
\bar{K}_+&=&(D+b-1)x,\;\;and\;\;K_0=x\frac{d}{dx}+\frac{c}{2}.
\label{HGalgebra}
\end{eqnarray}
Modulo a normalization, the DOCS for the HG type DE can be
written, at the monomial level, as
\begin{equation}
\Phi^{\beta}(x)=e^{\beta \bar{K}_+} x^0,
\end{equation}
Here, $x^0=1$ is the fiducial state satisfying,
\begin{equation}
\bar{K}_-x^0=0.
\end{equation}
To find the CS $\chi (x,\beta)$ at the level of the polynomial, we
make use of the exponential form of the solution in
Eq.~(\ref{hyper}). The DOCS, $\chi (x,\beta)$, can then be written
as,
\begin{eqnarray}
\chi (x,\beta)\;&=&\; e^{-\bar{K}_{-}} e^{\beta \bar{K}_{+}}
x^0 \nonumber\\
&=&\;e^{-\bar{K}_{-}}\;\sum_{n=0}^{\infty}\;\frac{\beta^{n}}{n!}\left[(D+b-1)
x \right]^{n}
x^0\nonumber\\
&=&\sum_{n=0}^{\infty}\frac{\beta^{n}}{n!}
\frac{\Gamma{(b+n)}}{\Gamma{(b)}}e^{-\bar{K}_{-}}x^n\nonumber\\
&=&\sum_{n=0}^{\infty}\frac{\beta^{n}}{n!}
(-1)^n\frac{\Gamma{(c+n)}}{\Gamma{(c)}}\;_2F_1(-n,b,c,x)\;.\nonumber\\
\label{generalcoherentstate}
\end{eqnarray}
It is worth noting that, since the similarity transformation does
not affect the algebraic structure, the SU(1,1) algebras remain
intact at the polynomial level, albeit with different expressions
for the generators.
It is interesting to note that, at the monomial level, the DOCS
found above is nothing but the AOCS of $\tilde{K}_-$, {\it i.e.}
$\tilde{K}_-\Phi^{\beta}(x)=\beta\Phi^{\beta}(x)$, where
\begin{equation}
\tilde{K}_-\;=\;\frac{1}{(D+b)(D+c)}\;\left(x\frac{d^2}{dx^2}+c\frac{d}{dx}\right).
\end{equation}
One notices that $\left[\tilde{K}_-,\bar{K}_+\right]=1$. Hence,
the above procedure is akin to the oscillator construction of
AOCS. We can also identify a $\tilde{K}_+$, which leads to the
oscillator algebra $\left[\bar{K}_-,\tilde{K}_+\right]=1$:
\begin{equation}
\tilde{K}_+\;=\;\left(\frac{D+b-1}{D+c-1}\right)x.
\end{equation}
The AOCS considered earlier \cite{charan} is the eigen state of
$\bar{K}_-$ and is of the form $e^{\beta \tilde{K}_+} x^0$. This
relationship between DOCS and AOCS has been referred earlier as
duality of these two type of CSs \cite{shanta}. Thus far, the
specific nature of the potential has not been invoked. Now, we
shall use this form to find out the CS for P\"{o}schl-Teller (PT)
potentials.
\section{\large Coherent state for the
symmetric-P\"{o}schl-Teller potential}
\vskip .5cm The trigonometric P\"{o}schl-Teller potential belongs
to the HG class having an infinite number of bound states. Hence
it is natural to expect an underlying SU(1,1) algebra as its
spectrum generating algebra. In reference \cite{charan} the AOCS of
the P\"{o}schl-Teller potential has been constructed, making use
of a novel exponential form of the solution of the hypergeometric
differential equation. Below we will concentrate on the
construction of DOCS, following the same procedure and study its
properties. We also compare the properties of DOCS and AOCS.
The eigen values and eigen functions \cite{quesne} of the
symmetric-P\"{o}schl-Teller potential
\begin{equation}
V_{SPT}(y)\;=\;\frac{\hbar^2\alpha^2}{2m}\left[\frac{\rho(\rho-1)}{\cos^2\alpha
y}\right]\;,\;\;\quad \rho >1, \label{SPT}
\end{equation}
are given by,
\begin{eqnarray}
E_{n}^{SPT}&=&\frac{\hbar^2\alpha^2}{2m} (n+\rho)^2,\quad
n=0,1,2,... ~~~\textrm{and}\nonumber\\
\Psi_{n}^{SPT}(\bar{x})&=&\left[\frac{\alpha(n!)(n+\rho)\Gamma{(\rho)}\Gamma{(2
\rho)}}{\sqrt{\pi}\Gamma{(\rho+\frac{1}{2})}\Gamma{(n+2\rho)}}\right]^\frac{1}{2}
(1-\bar{x}^2)^{\frac{\rho}{2}} C_n^\rho (\bar{x}),\nonumber\\
\label{SPTeigenfunction}
\end{eqnarray}
with $\bar{x}=\sin{\alpha y}$. Using the relation
\cite{gradshteyn}
\begin{equation}
C_{n}^{\rho}(1-2x)\;=\;\frac{\Gamma{(2\rho+n)}}{\Gamma{(2\rho)}\Gamma{(n+1)}}\;
_2F_1(-n,b,c,x)
\end{equation}
and $\bar{x}=1-2x$, where $b=2\rho+n$ and $c=\rho+1/2$, we obtain
from Eq.~(\ref{generalcoherentstate})
\begin{equation}
\chi(\bar{x},\beta)=\sum_{n=0}^{\infty}\;\frac{(-\beta)^{n}}{n!}
\left[\frac{\Gamma{(\rho+n+1/2)}\Gamma{(2\rho)}}{\Gamma{(2\rho+n)}\Gamma{(\rho+1/2)}}\right]
C_{n}^{\rho}(\bar{x}) \label{xchharacoherentstate}
\end{equation}
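The Gegenbauer--hypergeometric connection invoked above can be spot-checked numerically for sample parameters; SciPy's `eval_gegenbauer` and `hyp2f1` are used, with $b=2\rho+n$ and $c=\rho+1/2$ as quoted (the particular values of $n$, $\rho$ and $x$ below are illustrative):

```python
from math import isclose
from scipy.special import eval_gegenbauer, gamma, hyp2f1

n, rho, x = 4, 2.5, 0.3          # illustrative parameters
b, c = 2 * rho + n, rho + 0.5    # b = 2*rho + n, c = rho + 1/2

lhs = eval_gegenbauer(n, rho, 1 - 2 * x)  # C_n^rho(1 - 2x)
rhs = gamma(2 * rho + n) / (gamma(2 * rho) * gamma(n + 1)) * hyp2f1(-n, b, c, x)
print(lhs, rhs)  # the two sides agree
```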
Now multiplying Eq.~(\ref{xchharacoherentstate}) by
$(1-\bar{x}^2)^{\rho/2}$ and comparing with
Eq.~(\ref{SPTeigenfunction}), we get the coherent state in energy
eigenfunction basis as,
\begin{equation}
\bar{\chi}(\bar{x},\beta)=\sum_{n=0}^{\infty}\;\;d_{n}
\Psi_n^{\mathrm{SPT}}(\bar{x})\;\;, \label{finalSPT}
\end{equation}
where
\begin{equation}
d_n\;=\;(-\beta)^n
\left[\frac{\Gamma{(\rho+n+1/2)}^2}{\Gamma{(2\rho+n)}\Gamma{(n+1)}(n+\rho)}\right]^{1/2}.
\end{equation}
For comparison, the eigen function distribution for AOCS can be
written as
\begin{equation}
d_n^{AOCS}\;=\;(\gamma)^n
\left[\frac{1}{\Gamma{(2\rho+n)}\Gamma{(n+1)}(n+\rho)}\right]^{1/2}.
\end{equation}
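The two eigenstate distributions can be compared by evaluating $|d_n|^2$ directly from the expressions above; log-gamma arithmetic keeps the Gamma-function ratios stable. In this sketch the coherence parameters $\beta=0.81$ and $\gamma=19$ are illustrative values chosen so that both distributions peak at $n=9$, as in Fig.~\ref{SPTdn2}; they are not necessarily the values used in the figure:

```python
import numpy as np
from scipy.special import gammaln

rho = 15.0
ns = np.arange(0, 300)

def docs_dist(beta):
    # |d_n|^2 for the DOCS, via log-gamma for numerical stability
    logd2 = (2 * ns * np.log(beta) + 2 * gammaln(rho + ns + 0.5)
             - gammaln(2 * rho + ns) - gammaln(ns + 1) - np.log(ns + rho))
    p = np.exp(logd2 - logd2.max())
    return p / p.sum()

def aocs_dist(gam):
    # |d_n^{AOCS}|^2
    logd2 = (2 * ns * np.log(gam)
             - gammaln(2 * rho + ns) - gammaln(ns + 1) - np.log(ns + rho))
    p = np.exp(logd2 - logd2.max())
    return p / p.sum()

p_docs, p_aocs = docs_dist(0.81), aocs_dist(19.0)
print(int(np.argmax(p_docs)), int(np.argmax(p_aocs)))  # both peak at n = 9
```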
\begin{figure}
\centering
\includegraphics[width=3.5in]{dn2.eps}
\caption{The $|d_n|^2$ plots of DOCS and AOCS of
symmetric-P\"{o}schl-Teller potential for
$\rho=15$.}\label{SPTdn2}
\end{figure}
We can also obtain the DOCS of a general trigonometric
P\"{o}schl-Teller potential \cite{klauder3} modulo a normalization
factor, in the same manner as for the symmetric-P\"{o}schl-Teller
case:
\begin{equation}
\chi^{PT}(\bar{x},\beta)\;=\;\sum_{n=0}^{\infty}\;\;d_{n}^{PT}\;\psi_{n}^{PT}(\bar{x})
\end{equation}
where
\begin{eqnarray}
d_{n}^{PT}&=&(-\beta)^{n}\left[\frac{\Gamma{(k+n+1/2)}\Gamma{(\rho+n+1/2)}}{(k+\rho+2
n)\Gamma{(n+1)}\Gamma{(k+\rho+n)}}\right]^{1/2},\nonumber\\
\Psi_{n}^{PT}(\bar{x})&=&\left[\frac{2
\alpha(k+\rho+2n)\Gamma{(n+1)}
\Gamma{(k+\rho+n)}}{\Gamma{(k+n+1/2)}
\Gamma{(\rho+n+1/2)}}\right]^{1/2} \nonumber\\
&\times&(1-\bar{x})^{\rho/2} (\bar{x})^{k/2}
P_{n}^{k-1/2,\rho-1/2} (1-2\bar{x}).
\end{eqnarray}
\vskip.5in
Although the symmetric-P\"{o}schl-Teller potential has
an infinite number of bound states, only a few states contribute
appreciably to the sum, which is peaked around $n=\bar{n}$. In
Fig.~\ref{SPTdn2}, we compare the nature of the distributions of
the eigen states for AOCS and DOCS. For the purpose of comparison,
we have taken the coherence parameters such that the
distributions are comparable. It is found that, both the eigen
state distributions, peaked at $n=9$, involve the same eigen
states (from n=0 to n=30) for the same potential ($\rho=15$). For
AOCS, the distribution resembles a Gaussian distribution and is
more sharply peaked, as compared to the DOCS. Larger $\beta$ value
makes the distribution flatter for DOCS, as seen in the dashed
curve in Fig.~\ref{SPTdn2}. We now proceed to study the
spatio-temporal dynamics of these wave packets.
\section{\large Revival dynamics of coherent state}
\vskip .5cm
The time evolution of CS $\chi(\bar{x},\beta)$ can be
written as
\begin{equation}
\chi(\bar{x},t)=\sum_{n=0}^{\infty}d_{n}\psi_{n}(\bar{x})e^{-iE_nt}.
\label{timeevolution}
\end{equation}
As the energy expression contains terms up to $n^2$, the system
shows revival and fractional revival but no super-revival
phenomenon. All graphs are plotted in time, scaled by the revival
time $T_{rev}=4\pi/\alpha^2$.
In order to throw more light on the structure of the revival
pattern, we note that the eigen functions satisfy
\begin{equation}
\psi_n(-\bar{x})=(-1)^n\psi_n(\bar{x}).
\end{equation}
From Eq.~(\ref{timeevolution}), we can easily obtain the CS wave
packet at time $t=1/2\;T_{rev}$ as
\begin{equation}
|\chi(\bar{x},t=\frac{1}{2}T_{rev})|^2=|\chi(-\bar{x},t=0)|^2.
\end{equation}
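This mirror-image relation can be verified numerically by propagating the expansion of Eq.~(\ref{timeevolution}) directly. In the sketch below the units are chosen so that $E_n=(n+\rho)^2/2$ (i.e.\ $\hbar=m=\alpha=1$), for which $T_{rev}=4\pi$; the coefficients and eigenfunctions follow Eqs.~(\ref{SPTeigenfunction}) and (\ref{finalSPT}) with $\rho=10$ and $\beta=0.8$:

```python
import numpy as np
from scipy.special import eval_gegenbauer, gammaln

rho, beta = 10.0, 0.8
ns = np.arange(0, 121)

# log|d_n| from the DOCS coefficients; the (-1)^n sign is handled separately
log_dn = (ns * np.log(beta)
          + 0.5 * (2 * gammaln(rho + ns + 0.5) - gammaln(2 * rho + ns)
                   - gammaln(ns + 1) - np.log(ns + rho)))
dn = (-1.0) ** ns * np.exp(log_dn)

def chi(xbar, t):
    """chi(xbar,t) = sum_n d_n psi_n(xbar) exp(-i E_n t), E_n = (n+rho)^2/2."""
    total = np.zeros_like(xbar, dtype=complex)
    for n in ns:
        log_norm = 0.5 * (gammaln(n + 1) + np.log(n + rho) + gammaln(rho)
                          + gammaln(2 * rho) - 0.5 * np.log(np.pi)
                          - gammaln(rho + 0.5) - gammaln(n + 2 * rho))
        psi = (np.exp(log_norm) * (1 - xbar**2) ** (rho / 2)
               * eval_gegenbauer(int(n), rho, xbar))
        total += dn[n] * psi * np.exp(-1j * (n + rho) ** 2 * t / 2)
    return total

x = np.linspace(-0.95, 0.95, 201)          # symmetric grid in xbar
p0 = np.abs(chi(x, 0.0)) ** 2
ph = np.abs(chi(x, 2 * np.pi)) ** 2        # t = T_rev/2 with T_rev = 4*pi
print(np.max(np.abs(ph - p0[::-1])))       # mirror image: deviation ~ 0
```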
\begin{figure*}
\centering
\includegraphics[width=4. in]{DOCS.eps}
\caption{Probability density plot of DOCS of
symmetric-P\"{o}schl-Teller potential at different times for
$\beta=0.8$ and $\rho=10$.} \label{DOCS}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=4. in]{AOCS.eps}
\caption{Probability density plot for AOCS of
symmetric-P\"{o}schl-Teller potential at different times for
$\beta^{'}=30$ and $\rho=10$.} \label{AOCS}
\end{figure*}
Thus, at time $t=1/2\; T_{rev}$, a mirror image of the initial
wave packet is produced at the opposite end of the potential well
(Fig.~\ref{DOCS} and \ref{AOCS}). This can be observed as a bright
ray at time $t=1/2\; T_{rev}$, in the quantum carpet structure
(Fig.~\ref{SPTcarpet}). The auto-correlation function
\begin{equation}
A(t)\;=\;\langle\chi(\bar{x},t)|\chi(\bar{x},0)\rangle,
\end{equation}
yields
\begin{equation}
A(t=1/2)\;=\;\sum_{(A+n)\;even}|d_n|^2-\sum_{(A+n)\;odd}|d_n|^2\;.
\end{equation}
\begin{figure}
\centering
\includegraphics[width=4in]{carpet.eps}
\caption{(Colour Online) Quantum carpet of the displacement
operator coherent state of symmetric-P\"{o}schl-Teller potential
for $\beta=0.8$ and $\;\rho=10$, brightness signifies the
maximum.}\label{SPTcarpet}
\end{figure}
As $d_n$ oscillates rapidly, it will not contribute significantly
to $|A(t)|^2$ (Fig.~\ref{SPTauto}) at $t=1/2\; T_{rev}$. At
time $t=1/4\;T_{rev}$ the CS wave packet becomes
\begin{eqnarray}
\chi(\bar{x},\frac{1}{4} T_{rev})&=&
\frac{1}{\sqrt{2}}e^{-i\pi/4}\left[\chi(\bar{x},0)+e^{i\pi/2}\chi(-\bar{x},0)\right]\nonumber \\
|\chi(\bar{x},\frac{1}{4}T_{rev})|^2\;&=&
\frac{1}{2}\left[|\chi(\bar{x},0)|^2+|\chi(-\bar{x},0)|^2\right]
\end{eqnarray}
\begin{figure}
\centering
\includegraphics[width=3.5in]{auto.eps}
\caption{Plot of square modulus
$|_{SPT}\langle\chi(x,t)|\chi(x,0.)\rangle_{SPT}|^2$ of the auto
correlation function of DOCS as a function of time, for
$\beta=0.8$, $\rho=10$.}\label{SPTauto}
\end{figure}
In this case, the wave packet breaks up into two parts which are
situated at the two opposite corners of the potential well
(Fig.~\ref{DOCS} and \ref{AOCS}). This gives rise to two bright
spots at the two vertical ends of the quantum carpet at $t=0.25$.
At the same instant, the auto-correlation function shows only a
peak, as manifested in Fig.~\ref{SPTauto}. To explain the
probability density plot at $t=0.125$, we consider a fictitious
classical wave packet
\begin{equation}
\chi_{cl}(\bar{x},t)\;=\;\sum_{n=0}^{\infty}\;\;d_{n}\;\psi_{n}(\bar{x})e^{-2\pi
i\frac{n}{T_{cl}}t}\;, \label{classicalwave}
\end{equation}
which behaves like the initial wave packet at small time (order of
$T_{cl}$) where $T_{cl}=\frac{2\pi}{\alpha^2\rho}$. Using the
discrete Fourier transform (DFT), the original CS wave packet
(Eq.~(\ref{timeevolution})) at time $t=\frac{r}{s}T_{rev}$ can be
written as a linear combination of classical wave packet of
Eq.~(\ref{classicalwave}) as
\begin{equation}
\chi(\bar{x},\frac{r}{s}
T_{rev})\;=\;\sum_{p=0}^{l-1}\;\;a_{p}\;\chi_{cl}(\bar{x},\frac{r}{s}
T_{rev}+\frac{p}{l}T_{cl})\;, \label{rswave}
\end{equation}
where
\begin{equation}\label{DFT}
a_p=\frac{1}{l}\;\sum_{n=0}^{l-1}\;\;\exp\left[2\pi
i(n\frac{p}{l}-n^2\frac{r}{s})\right]\;.
\end{equation}
Here, $r$ and $s$ are two mutually prime integers and $l$
is the period of the quadratic phase term. In general,
$l=\frac{s}{2}$ if $s$ is an integral multiple of $4$, and $l=s$ in
all other cases. In this case $\frac{r}{s}=\frac{1}{8}$ and
$l=\frac{s}{2}=4$. Substituting in Eq.~(\ref{rswave}) and
Eq.~(\ref{DFT}), we get
\begin{equation}
\chi(\bar{x},\frac{1}{8} T_{rev})=\;
a_{0}\;\chi_{cl}^{(0)}\;+\;a_{1}
\chi_{cl}^{(1)}\;+\;a_{2}\chi_{cl}^{(2)}\;+\;a_{3}\chi_{cl}^{(3)}\;
\label{chit}
\end{equation}
where
\begin{equation}
\chi_{cl}^{(i)}=\chi_{cl}(\bar{x},\frac{1}{8}
T_{rev}+\frac{i}{4}T_{cl}),\quad i=0,1,2,3\;
\end{equation}
and $a_0=-a_2=\frac{1}{2\sqrt{2}}(1-i)\;$; $a_1=a_3=1/2$.
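The quoted values of $a_p$ follow directly from Eq.~(\ref{DFT}) with $r/s=1/8$ and $l=4$; a quick numerical check:

```python
import cmath

l, r, s = 4, 1, 8   # r/s = 1/8, so l = s/2 = 4
a = [sum(cmath.exp(2j * cmath.pi * (n * p / l - n * n * r / s))
         for n in range(l)) / l
     for p in range(l)]
print([complex(round(z.real, 6), round(z.imag, 6)) for z in a])
# a_0 = -a_2 = (1 - i)/(2*sqrt(2)); a_1 = a_3 = 1/2
```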
The above expression (\ref{chit}) signifies that the wave packet
has broken into four parts, each of them differing by a phase
$\pi/4$. In the probability density
\begin{eqnarray}
|\chi(t=\frac{1}{8})|^2\;&=&\;\frac{1}{4}\left[|\chi_{cl}^{(0)}|^2\;
+\;|\chi_{cl}^{(1)}|^2\;+\;|\chi_{cl}^{(2)}|^2\;+\;|\chi_{cl}^{(3)}|^2
\right]\\\nonumber
&\;&+\frac{1}{2}\textrm{Re}\left[{\chi_{cl}^{(0)}}^*\chi_{cl}^{(1)}e^{i\pi/4}-{\chi_{cl}^{(0)}}^*\chi_{cl}^{(2)}
+{\chi_{cl}^{(0)}}^*\chi_{cl}^{(3)}e^{i\pi/4}\right.\nonumber\\
&&\;-\left.{\chi_{cl}^{(1)}}^*\chi_{cl}^{(2)}e^{-i\pi/4}
\;+{\chi_{cl}^{(1)}}^*\chi_{cl}^{(3)}-{\chi_{cl}^{(2)}}^*\chi_{cl}^{(3)}e^{i\pi/4}\right],
\end{eqnarray}
the first term carries the contribution from the individual
subsidiary waves and the second term arises due to the
interference between them. We note that $\chi_{cl}^{(0)}$ and
$\chi_{cl}^{(2)}$ are spatially well separated, contributing very
little to the interference. The dominant interference term is
${\chi_{cl}^{(1)}}^*\chi_{cl}^{(3)}$, as $\chi_{cl}^{(1)}$ and $\chi_{cl}^{(3)}$
are not spatially separated. Thus, at time $t=0.125$, the wave
function splits into four parts, but their interference at the
middle gives a strong peak, rather than four distinct
similar waves. For comparison, the wave packet structure for the AOCS
of symmetric-P\"{o}schl-Teller potential is also shown in
Fig.~\ref{AOCS}. In this case, as the initial wave packet is
sharper, the interference terms are less dominant than those of
the DOCS.
We have observed that the initial wave packet remains near the left
corner of the potential well and oscillates due to the impulse
from the well; at later times it spreads as it moves away from the
boundary of the well. This is quite transparent from the
quantum carpet structure, which gives the space-time rays of the
probability density of the corresponding coherent states. We note
that the rays in the quantum carpet are not straight lines, unlike
the case of the infinite square-well.
In order to contrast the temporal evolution of the DOCS with the
classical motion, we note that for a particle of energy $E$,
\begin{equation}
x(t)\;=\;a\;\arccos\left[\frac{\alpha_1-\beta_1}{2}+\sqrt{\Delta}\cos{(\sqrt{\frac{2E_c}{m}}\frac{t}{a})}\right]
\end{equation}
where $\Delta=(1-\frac{1}{2}
(\sqrt{\alpha_1}+\sqrt{\beta_1})^{2})(1-\frac{1}{2}(\sqrt{\alpha_1}-\sqrt{\beta_1})^{2})$,
$ V_0=\frac{\alpha^2}{m}$ and
$\alpha_1=\frac{V_0}{E_c}\rho(\rho-1),\;\beta_1=\frac{V_0}{E_c}
k(k-1)$; with the condition
\begin{equation}
E_c>\frac{V_0}{2}(\sqrt{\rho(\rho-1)}+\sqrt{k(k-1)})^2.
\end{equation}
\begin{figure}
\centering
\includegraphics[width=5.5in]{xpect.eps}
\caption{Plots of expectation values of x for (a) the coherent
state of the P{\"o}schl-Teller potential, with different values of
$\beta$ where $\rho=5,k=5$ and $\alpha=2$.(b) classical solution
of P{\"o}schl-Teller potential, with $a=0.25,m=1,V_0=4,k=5,\rho=5$
and $E_c=$ average value of energy for $\beta=0.1$.}\label{xpect}
\end{figure}
This classical trajectory is shown in Fig.~\ref{xpect}(b). The
expectation value of the position with respect to the coherent
state of general trigonometric P\"{o}schl-Teller potential is
obtained as
\begin{equation}
\langle x(t) \rangle\;=\frac{1}{\alpha} \arcsin\sqrt{1/2\;(1-z)}
\end{equation}
where
\begin{equation*}
z=N\;\sum_{n=0}^{\infty}\;\left(2
A_n\;\cos{\left[2\;\alpha^{2}(2n+\rho+k+1)t\right]}-C_n\right)
\end{equation*}
having,
\begin{equation*}
\fl A_n=-(\beta)^{2n+1}\left[\frac{2
\Gamma{(\rho+n+3/2)}\Gamma{(k+n+3/2)}}
{\Gamma{(\rho+k+n)}\Gamma{(n+1)}(2n+\rho+k)(2n+\rho+k+1)(2n+\rho+k+2)}\right]
\end{equation*}
and
\begin{equation*}
\fl
C_n=(\beta)^{2n}\left[\frac{
\Gamma{(\rho+n+1/2)}\Gamma{(k+n+1/2)}(k+\rho-1)(k-\rho)}
{\Gamma{(\rho+k+n)}\Gamma{(n+1)}(2n+\rho+k)(2n+\rho+k-1)(2n+\rho+k+1)}\right]
\end{equation*}
$N$ being the normalization constant. This $\langle x(t) \rangle$ is
plotted in Fig.~\ref{xpect}(a), which nearly matches the
classical trajectory for very small values of $\beta$ (solid line
in Fig.~\ref{xpect}(a)). In this case only a few eigen states
contribute to the coherent state wave packet. Sudden changes in
the $\langle x(t) \rangle$ values are the signatures of revivals
and fractional revivals \cite{sudheesh}.
\section{\large Conclusions}
\vskip .5cm
In conclusion, the algebraic procedure used here
for constructing CSs for potentials based on confluent
hypergeometric and hypergeometric differential equations depends
on the fact that the solutions of the above differential
equations can be precisely connected with the space of monomials.
This leads to a straightforward identification of symmetry
generators, without taking recourse to additional angular
variables or multiple SUSY-related Hamiltonians. The nature
of the specific potential enters through the corresponding ground
states and by fixing the parameters and variables of the above
solutions. We have concentrated here on the CS of
P\"{o}schl-Teller potential, since various potentials can be
obtained from it through limiting of parameters and point
canonical transformation. The time evolution of the CS for this
potential, having a non-linear spectrum, produces cat-like states at
fractional revivals. We contrasted the properties of the two
different types of CSs possible here, as well as the temporal
evolution of the CS, with classical motion. As has been noted
earlier, this procedure easily extends to more complicated
non-linear coherent states arising from deformed algebras, a
subject we intend to take up in future. We also would like to
analyze the subject of mesoscopic superposition and sub-Planck
scale structure \cite{zurek}, possible in this type of quantum
systems.
\section{\large References}
\vskip .5cm
\section{Abstract}
The transiting planet HD~80606~b undergoes a 1000-fold increase in insolation during its 111 day orbit due to its high eccentricity ($e$=0.93). The planet's effective temperature increases from 400~K to over 1400~K in a few hours as it makes a rapid passage to within 0.03~AU of its host star during periapsis. Spectroscopic observations during the eclipse (which conveniently occurs a few hours before periapsis) of HD~80606~b with the James Webb Space Telescope (JWST) are poised to exploit this highly variable environment to study a wide variety of atmospheric properties, including composition, chemical and dynamical timescales, and large-scale atmospheric motions. Critical to planning and interpreting these observations is an accurate knowledge of the planet's orbit. We report on observations of two full-transit events: 7 February 2020 as observed by the TESS spacecraft and 7--8 December 2021 as observed with a world-wide network of small telescopes. We also report new radial velocity observations which, when analyzed with a model coupled to the transits, greatly improve the planet's orbital ephemeris. Our new orbit solution reduces the uncertainty in the transit and eclipse timing in the JWST era from tens of minutes to a few minutes. When combined with the planned JWST observations, this new precision may be adequate to look for non-Keplerian effects in the orbit of HD~80606~b.
\section{Introduction}
For many years HD~80606~b held the record for the most eccentric known planet. Discovered by the radial velocity (RV) technique in 2001 \citep{Naef2001}, HD~80606~b has a mass of 4.1~\ensuremath{\,M_{\rm J}}, an orbital period of 111.4~days, and an eccentricity of $e$=0.93. Its eccentricity is currently exceeded only by HD~20782~b with an eccentricity of $e$=0.95 \citep{Jones2006}.
HD~80606~b continues to be compelling for further study: its eclipse was detected by Spitzer in early 2009 \citep{Laughlin2009}. The transit was then discovered and announced near-simultaneously in late February 2009 by \cite{Fossey2009}, \cite{Garcia2009}, and \cite{Moutou2009}. HD~80606~b passes within 0.03~AU of its host G5V star; during its rapid periastron passage of a few tens of hours, the insolation and temperature of the planet increase dramatically, from 1$\times$ to almost 1000$\times$ the Earth-equivalent insolation and from 400~K to over 1400~K.
These rapid changes, coupled with the fact that HD~80606~b transits and also eclipses (passes behind the star), provide a unique opportunity to explore the dynamical response of an atmosphere under an extreme external forcing function. Spitzer's photometric observations of eclipses in 2009 and 2010, at 8.0 and 4.5~$\mu$m\ respectively, were used to infer timescales for radiative, dynamical, and chemical processes \citep{dewit2016, Lewis2017}. As noted by \citet{Lewis2017}, ``The time-variable forcing experienced by
exoplanets on eccentric orbits provides a unique and important window on radiative, dynamical, and
chemical processes in planetary atmospheres and an important link between exoplanet observations
and theory."
The James Webb Space Telescope (JWST) will expand these studies dramatically using spectroscopy. Kataria et al.\footnote{Approved Cycle 1 program \#2008. ``A Blast From the Past: A Spectroscopic look at the Flash Heating of HD~80606~b" https://www.stsci.edu/jwst/science-execution/program-information.html?id=2008} will use the MIRI Low Resolution Spectrometer (MIRI/LRS) to observe an eclipse of HD~80606~b from 5--14~$\mu$m\ at a spectral resolution of $\sim$100. Sikora et al.\footnote{Approved Cycle 1 program \#2488. ``Real Time Exoplanet Meteorology: Direct Measurement of Cloud Dynamics on the High-Eccentricity Hot Jupiter HD~80606 b" https://www.stsci.edu/jwst/science-execution/program-information.html?id=2488} will explore the formation and evolution of atmospheric clouds at shorter wavelengths using NIRSpec at 2.87--5.18~$\mu$m\ with a resolution of $\sim$2700 to observe the eclipse and periastron passage. These spectral regions contain a wealth of molecular features whose variation will reveal new insights into the chemistry and dynamics of the atmospheres of giant planets.
A challenge to transit and eclipse observations is the gradual erosion of our knowledge of a planet's orbital properties. Uncertainties in the timing of transits and eclipses lead to observing inefficiencies, as longer observing windows must be scheduled to avoid missing some or all of an event \citep[e.g.,][]{Dragomir2020, Zellem2020}. This problem is exacerbated in the case of HD~80606~b, where the relevant observations are over a decade old and uncertainties on the eclipse prediction grow with each orbit ($\sim$3 orbits per year). Of particular importance is the knowledge of the time of periastron passage relative to the eclipse, as this is needed to link the spectral observations to the insolation profile.
It was to remedy this growing uncertainty in our knowledge of the ephemeris of HD~80606~b that we undertook to analyze the TESS data and to obtain observations of the transit occurring on 7/8-Dec-2021 (Table~\ref{tab:nominal} and Figure~\ref{fig:Map}) from the ground. We also obtained new RV measurements around the time of periastron to continue to refine the RV solution. Section~\ref{sec:trans} describes the observations of the transit and $\S$\ref{sec:PRV} the RV observations. Section~\ref{sec:analysis} describes the analysis of the various datasets, while $\S$\ref{sec:params} uses the combined transit and RV measurements to refine the ephemeris of HD~80606~b and to predict the times of occurrence of future transits and eclipses.
\begin{figure*}
\centering
\includegraphics[scale=0.4]{Figures/joint_fit/joint_transit_fit.png}
\caption{ A transit light curve of HD~80606~b measured with the TESS spacecraft using data from Sector 21. The TESS light curve is contaminated with light from a neighboring star, causing the transit depth to appear smaller (by about $\sim$48$\%$) than it really is. TESS pixels subtend $\sim$21'' $\times$ 21'' on the sky, which is coincidentally also the separation between HD~80606 and its nearby stellar companion, HD~80607. Light from the roughly equal brightness companion was summed in the aperture used for the TESS photometry and contributes to a smaller measured depth than observations from platforms with higher imaging resolution, where the two sources can be treated separately. Despite the contamination shrinking the measured depth, we still detect the transit at $\sim$44$\sigma$, which is enough to constrain the time of mid-transit to within $\sim$3 minutes. The binned data are purely for visualization purposes and are shown at two cadences: 30 minutes in black and 60 minutes in white with a black outline; the transparent points are the original data.
\label{fig:joint_transit}}
\end{figure*}
\begin{deluxetable}{lcl}[t!]
\tablecaption{Orbital Prior for HD~80606~b\label{tab:nominal}}
\tablehead{
\colhead{Parameter} & \colhead{Value} & \colhead{Reference}}
\startdata
T$_{mid}$ (BJD)&2455210.6428$\pm$0.001 &\citet{Bonomo2017}\\
& 14-Jan-2010 0326 UTC&\\
E$_{mid}$ (BJD)&2454424.736 $\pm$0.003 &\citet{Laughlin2009}\\
Period (d)&111.43670$\pm$0.0004 &\citet{Bonomo2017}\\
Eccentricity ($e$)&0.93226$\pm$0.00066&\citet{Bonomo2017}\\
Arg. Periapsis ($\omega_{peri})$&58.97$\pm$0.2 (deg)&\citet{Bonomo2017}\\
&1.0293$\pm$0.0035 (rad)&\\
Transit Duration (hr)&11.64$\pm$0.25&\citet{Winn2009}\\
\multicolumn{3}{l}{\textit{Prediction for Dec. 2021}}\\
Accum. Unc. (hr)$^1$&0.4&\\
T$_{mid}$ (BJD)&2459556.674$\pm$0.016 d &\\
Observed event&08-Dec-2021 0411 UTC &\\
\enddata
\tablecomments{$^1$Accumulated uncertainty in the timing of the transit occurring $N_{per}=39$ periods after the reference time, $T_c$. $\sigma T=\sqrt{\sigma(T_c)^2+N_{per}^2\sigma(Period)^2}$ (Eqn.~3 in \citet{Zellem2020})}
\end{deluxetable}
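The accumulated timing uncertainty quoted in the footnote of Table~\ref{tab:nominal} follows directly from propagating the reference-epoch and period uncertainties; a minimal sketch (using Eqn.~3 of \citet{Zellem2020} with the table's values):

```python
import math

def transit_time_uncertainty(sigma_tc, sigma_period, n_periods):
    """Eqn. 3 of Zellem et al. (2020): 1-sigma uncertainty (days) on a
    mid-transit time occurring n_periods orbits after the reference epoch."""
    return math.sqrt(sigma_tc**2 + (n_periods * sigma_period)**2)

# December 2021 transit: 39 periods after the Bonomo et al. (2017) epoch
sigma_t_hr = 24 * transit_time_uncertainty(0.001, 0.0004, 39)
print(f"{sigma_t_hr:.2f} hr")  # ~0.4 hr, as quoted in the table
```

By December 2021 the period term dominates: 39 orbits amplify the 0.0004~d period uncertainty to $\sim$0.016~d, so improving the period is the main lever for tightening future predictions.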
\section{Observations \label{sec:obs}}
A majority of the transit observations of HD~80606~b date from almost a decade ago, when it was targeted by the Spitzer Space Telescope; no full transit had been observed in the $\sim$10 years since, although the star has been monitored by radial velocity surveys. In preparation for JWST observations we have combined observations of the 2020 transit taken by TESS with 2021 observations taken from the ground by the Exoplanet Watch program. Finally, the light curve measurements are combined with new and archival radial velocity measurements in order to constrain the orbital parameters and to improve our knowledge of transit and eclipse events over the next decade.
\subsection{2020 Transit With TESS}
The photometric data from TESS were processed using a custom pipeline leveraging optimal aperture selection, systematic detrending with a weighted spline, and outlier rejection in order to minimize the scatter in the light curve \citep{Pearson2019b}. The custom pipeline tries multiple aperture sizes during the photometric extraction in order to minimize the scatter in the residuals after fitting a light curve model. Detrending the time series and minimizing scatter in the residuals has been shown to improve light curve quality compared to the default products of the Science Processing Operations Center (SPOC) pipeline \citep{Jenkins2016}, which is based on the Kepler mission pipeline \citep{Jenkins2010}.
TESS is capable of high precision measurements for this system due to the host star being bright (V=9.0 mag). However, TESS's large pixel size (21\arcsec) is less than ideal for HD~80606 due to the presence of HD~80607, a nearby companion of similar spectral type and brightness (V=9.07 mag) separated by 20.5\arcsec. Stellar blends dilute the transit signal causing a larger planet to mistakenly appear smaller \citep[e.g.,][]{Ciardi15, Zellem2020}. In the reduction of TESS data, a wide aperture was used and includes light from both stars. Therefore, our estimate for the transit depth is underestimated. The estimated contamination is around $\sim$48$\%$ and translates to a corrected transit depth $\sim2\times$ greater than what we directly measure. Despite the contamination decreasing the transit depth, we still detect the transit at over 40 $\sigma$ which allows for a strong constraint on the time of mid-transit to within a few minutes (see Table~\ref{tab:newmid} and Figure \ref{fig:joint_transit}).
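The dilution correction described here can be sketched as a simple flux-ratio rescaling (the observed depth below is an illustrative number, not a measurement):

```python
def dilution_corrected_depth(observed_depth, contamination):
    """Correct a transit depth measured in a blended aperture.

    `contamination` is the fraction of the total aperture flux contributed
    by the neighboring star(s); the true depth is the observed depth divided
    by the fraction of light actually coming from the target.
    """
    return observed_depth / (1.0 - contamination)

# ~48% contamination from HD 80607 roughly doubles the measured TESS depth
corrected = dilution_corrected_depth(0.005, 0.48)  # 0.005 is a made-up depth
```

With 48\% contamination the correction factor is $1/0.52 \approx 1.9$, consistent with the $\sim2\times$ statement above.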
\begin{deluxetable*}{lllllll}
\centering
\tablecaption{Transit Observing Facilities\label{tab:facilities}}
\tablehead{
\colhead{Facility} & \colhead{Location (N,E)} & \colhead{Size (m)}& \colhead{UTC Start (Phase)} & \colhead{UTC Stop (Phase)}& \colhead{Precision \% $^{1}$ }& \colhead{N. Images} }
\startdata
Transiting Exoplanet & Space & 0.1 & 2020-02-07 20:32:00 (-0.0054) & 2020-02-09 07:06:00 (0.0054) & 0.06 & 1520 \\
Survey Satellite (TESS) & &&&&\\
\hline
Exoplanet Watch [HJEB] & (30.7, -104.2) & 0.4 & 2021-12-06 08:21:36 (-0.0166) &2021-12-06 09:40:50 (-0.0161) & 1.31 & 225 \\
Las Cumbres (LCO) & (30.7, -104.2) & 0.4 & 2021-12-07 06:48:56 (-0.0082) & 2021-12-07 07:39:54 (-0.0079) & 1.26 & 218 \\
Las Cumbres (LCO) & (30.7, -104.2) & 0.4 & 2021-12-07 09:46:56 (-0.0071) & 2021-12-07 10:38:05 (-0.0068) & 0.77 & 225 \\
Las Cumbres (LCO) & (30.7, -104.2) & 0.4 & 2021-12-07 11:35:45 (-0.0064) & 2021-12-07 12:26:18 (-0.0061) & 1.21 & 221 \\
Exoplanet Watch [NCC] & (23.5, 120.9) & 0.4 & 2021-12-07 17:34:11 (-0.0042) & 2021-12-07 20:13:20 (-0.0032) & 1.01 & 481 \\
GROWTH-India & (32.8, 79.0) & 0.7 & 2021-12-07 19:52:49 (-0.0033) & 2021-12-08 00:40:41 (-0.0015) & 0.53 & 609 \\
Unistellar eVscope 2 (2rz) & (49.2, -0.4) & 0.11 & 2021-12-07 20:49:47 (-0.0030) & 2021-12-08 01:38:22 (-0.0012) & 1.09 & 126 \\
Unistellar eVscope (etx) & (49.2, -0.4) & 0.11 & 2021-12-07 20:48:29 (-0.0030) & 2021-12-08 01:37:27 (-0.0012) & 0.63 & 131 \\
Unistellar eVscope (257) & (60.8, 24.4) & 0.11 & 2021-12-07 21:41:31 (-0.0027) & 2021-12-08 00:17:56 (-0.0017) & 0.36 & 79 \\
Unistellar eVscope (3mh) & (45.3, 11.1) & 0.11 & 2021-12-07 22:24:41 (-0.0024) & 2021-12-08 01:41:27 (-0.0012) & 0.67 & 55 \\
Exoplanet Watch [GDAI] & (39.0, -108.2) & 0.4 & 2021-12-08 03:37:37 (-0.0004) & 2021-12-08 11:46:49 (0.0026) & 3.11 & 503 \\% Daniel Gallego <[email protected]>, Dr. Joshua Tan
Unistellar eVscope (rev) & (30.4, 97.8) & 0.11 & 2021-12-08 04:26:52 (-0.0001) & 2021-12-08 08:09:55 (0.0013) & 0.50 & 101 \\
Unistellar eVscope (sdp) & (32.2, -111) & 0.11 & 2021-12-08 05:17:14 (0.0002) & 2021-12-08 12:18:15 (0.0028) & 0.78 & 155 \\
Exoplanet Watch [RJBA] & (34.1, -118.1) & 0.15 & 2021-12-08 06:09:47 (0.0005) & 2021-12-08 12:08:50 (0.0027) & 1.47 & 569 \\
Las Cumbres (LCO) & (30.7, -104.2) & 1 & 2021-12-08 06:41:20 (0.0007) & 2021-12-08 12:17:36 (0.0028) & 0.33 & 391 \\
Exoplanet Watch [HJEB] & (30.7, -104.2) & 0.4 & 2021-12-08 06:46:01 (0.0007) & 2021-12-08 07:36:43 (0.001) & 1.29 & 225 \\
Las Cumbres (LCO) & (30.7, -104.2) & 0.4 & 2021-12-08 11:35:50 (0.0025) & 2021-12-08 12:26:33 (0.0029) & 0.80 & 225 \\
Unistellar eVscope (8cm) & (35.1, 134.4) & 0.11 & 2021-12-08 13:19:08 (0.0032) & 2021-12-08 14:14:42 (0.0035) & 1.54 & 26 \\
Exoplanet Watch [NCC] & (23.5, 120.9) & 0.4 & 2021-12-08 16:04:28 (0.0042) & 2021-12-08 20:08:09 (0.0057) & 0.80 & 516 \\
Unistellar eVscope 2 (2rzB) & (49.2, -0.4) & 0.11 & 2021-12-08 21:47:08 (0.0063) & 2021-12-08 23:47:48 (0.0071) & 1.08 & 88 \\
Unistellar eVscope (etxB) & (49.2, -0.4) & 0.11 & 2021-12-08 21:48:00 (0.0064) & 2021-12-08 23:39:20 (0.007) & 1.25 & 152\\
Exoplanet Watch [BARO] & (32.6, -116.3) & 0.43 & 2021-12-09 01:26:11 (0.0077) & 2021-12-09 01:55:10 (0.0079) & 0.97 & 98 \\
Exoplanet Watch [LGEC] & (28.3, -16.6) & 0.4 & 2021-12-09 02:06:25 (0.008) & 2021-12-09 02:15:10 (0.008) & 0.80 & 29 \\
Exoplanet Watch [FMAA] & (31.7, -111.1) & 0.15 & 2021-12-09 04:41:25 (0.009) & 2021-12-09 12:06:02 (0.012) & 1.79 & 130 \\
\enddata
\tablecomments{$^1$Standard deviation of the residuals.\\The observations are split between the archival measurements (top) and those taken for the same transit (bottom).\\For the Exoplanet Watch observations, the letters in brackets are the AAVSO observer codes, so the datasets can be easily referenced in the future and searched in the AAVSO archive.}
\end{deluxetable*}
\begin{figure}[b!]
\centering
\includegraphics[width=0.5\textwidth]{Figures/TelescopeMapWorld.pdf} \\
\caption{A map of the facilities in the global network of small telescopes used to observe the transit on 2021 Dec 7--8.
\label{fig:Map}}
\end{figure}
\subsection{2021 Transit from the Ground}\label{sec:trans}
HD~80606~b's long transit duration, over 11.5~hr \citep{Pont2009,Winn2009}, and the accumulated uncertainty in its time of occurrence, make a world-wide program of coordinated observations essential. Fortunately, networks of small and modest sized telescopes (e.g., Exoplanet Watch\footnote{https://exoplanets.nasa.gov/exoplanet-watch/}, ExoClock\footnote{http://exoclock.space}, Unistellar\footnote{https://unistellaroptics.com/}) are now in place to support programs of this type. The global observational campaign to measure the 2021 December 7--8 transit of HD~80606~b presented here was coordinated by Exoplanet Watch. The various observatories that contributed a transit measurement in December are shown in Figure \ref{fig:Map}.
\subsubsection{Exoplanet Watch}
Exoplanet Watch is a citizen science project funded by NASA's Universe of Learning\footnote{https://www.universe-of-learning.org} for observing exoplanets with small ground-based telescopes in order to maintain ephemerides, ensure the efficient use of large telescopes, discover new exoplanets via transit timing variations, resolve blended pairs, monitor for stellar variability, and confirm exoplanet candidates \citep{Zellem2019, Zellem2020}. Anyone is able to contribute observations to a public data archive\footnote{https://app.aavso.org/exosite/}, hosted by the American Association of Variable Star Observers\footnote{http://aavso.org}, where they are analyzed on a regular basis and used to refine exoplanet ephemerides\footnote{https://exoplanets.nasa.gov/exoplanet-watch/results/}. The observations listed under Exoplanet Watch in Table~\ref{tab:facilities} are currently available online and are linked to their AAVSO observer code. A majority of the users contributed at least one hour of observations using telescopes smaller than 0.5 meters. A few notable contributors to the network include the Boyce-Astro Research Observatory (BARO), located at an observing site near Tierra Del Sol and Campo, California. BARO includes a 17-inch telescope and a ZWO ASI 1600 CMOS camera. The observing configuration provides an 8.3'$\times$6.3' field of view with a plate scale of 0.107'' per pixel. Additionally, an individual user was able to capture part of the transit egress from the top of the Cahill building on the campus of the California Institute of Technology using a 6~inch telescope and the ASI 224MC camera.
Another contributor is the MicroObservatory, which hosts a network of automated remote reflecting telescopes, each with a 6-inch mirror, 560-mm focal length, and a KAF1402ME CCD with 6.8-micron pixels. With 2$\times$2 pixel binning, the image size is 650$\times$500 pixels at a pixel scale of approximately 5''/px. MicroObservatory takes images of exoplanet systems daily and makes the images publicly available for educational use via their DIY Planet Search program\footnote{https://mo-www.cfa.harvard.edu/MicroObservatory/}.
\begin{figure*}[b!]%
\centering
\includegraphics[width=0.99\textwidth]{Figures/global_light_curve/bestfit.png} \\
\caption{\textit{Top:} The combined light curve showing the complete transit of HD~80606~b on 7--8 Dec 2021, along with a model fit to the observations (red line). The data are binned to a resolution of 30~minutes for each individual data set and 60~minutes for the combined data set (empty circles) for the purposes of visualization. Each observation is fit simultaneously with Equation~\ref{expam}, with a separate airmass model for detrending. A mosaic of individual light curves can be found in the appendix (see Figure~\ref{fig:mosaic}). \textit{Bottom:} Residuals for the light curve model are displayed at the native resolution, except for a binned version shown in white circles. The standard deviation of the residual scatter is reported in the legend of the top subplot.
\label{fig:BestFit}}
\end{figure*}
\subsubsection{LCO Network}
Las Cumbres Observatory (LCO) is a global telescope network consisting of multiple meter- and sub-meter-sized telescopes at various locations around the Earth. HD~80606 was observed over the course of three days from multiple locations in the LCO network. Unfortunately, weather clouded out most of the Northern Hemisphere sites, so only a few acquired data. A majority of the usable observations come from LCO's telescopes at McDonald Observatory in Texas and Teide Observatory in Tenerife. LCO's 0.4-meter telescopes contain SBIG CCD cameras with a field of view of $\sim$29' $\times$ 29', corresponding to a plate scale of 0.571''/pixel. The 1-meter telescopes in the LCO network contain a Sinistro imager with a 26' $\times$ 26' field of view and a plate scale of 0.39''/px. All of the LCO observations were acquired with the R filter, and some observatory-specific details are highlighted in Table~\ref{tab:facilities}.
\subsubsection{Unistellar Network}
The Unistellar Network is a global community of citizen scientist observers with Unistellar telescopes who have open access to observing campaigns organized by SETI Institute astronomers, including exoplanet transit observations. Seven different eVscopes (``Enhanced Vision Telescopes'') acquired nine observations of HD~80606~b from six different observing locations in North America, Europe, and Japan (Table~\ref{tab:facilities}). Of those observations, seven were collected using the Unistellar eVscope~1, which is a 4.5-inch reflecting telescope with a Sony IMX224LQR CMOS sensor at its prime focus. The camera's field of view is 37.0$\arcmin$ x 27.7$\arcmin$ with a plate scale of 1.7 $\arcsec$/pixel. Individual images had an exposure time of 3.970~s and sensor gain of 2 dB. The two remaining observations were collected using the Unistellar eVscope~2, which shares the design of the eVscope~1 but has a Sony IMX347LQR CMOS sensor. The camera's field of view is 45.3$\arcmin$ x 34.0$\arcmin$ with a plate scale of 1.3 $\arcsec$/pixel. Individual images had an exposure time of 3.970~s and sensor gain of 0~dB (no digital gain).
\subsubsection{ExoClock Project}
In addition to the TESS and 2021 December transits of HD~80606~b, we also report three archival transit measurements from the ExoClock project \citep{Kokori2021}. The ExoClock project is an open-access citizen science project aimed at conducting transit measurements of exoplanets targeted by the Ariel mission \citep{Tinetti16}. The three measurements were taken from ground-based observatories in Europe, with mid-transit times reported in Table~\ref{tab:oldmid}.
\subsubsection{GROWTH}
The Global Relay of Observatories Watching Transients Happen (GROWTH) network involves over a dozen institutions dedicated to the follow-up of transient events \citep{Kasliwal2019}. A number of Asian observatories within the GROWTH collaboration participated in the 2021 Dec 7/8 campaign, providing critical data during transit ingress. The GROWTH-India Telescope (GIT) is a 0.7m fully robotic telescope located at the Indian Astronomical Observatory (IAO), Hanle-Ladakh. The telescope is equipped with an Andor Ikon230XL CCD camera which provides a field of view of $\sim0.5~\rm{deg}^2$. GIT observed HD~80606~b for $\sim 5$~hrs on the night of 2021 Dec 7, obtaining a total of 609 images. The details of the observations are provided in Table~\ref{tab:facilities}. Data were reduced following standard procedures, and photometry was performed with EXOTIC as described in the transit data reduction section below.
\subsection{Transit Data Reduction}
Data reduction and calibration of the individual science images was done by each observer or their group. We encouraged all groups to acquire at least a bias and a flat-field frame in order to reduce noise and normalize pixel-to-pixel changes in sensitivity, respectively. We provided an open-source package for aperture photometry and light curve fitting in order to make extracting the time series easy and optimal with respect to minimizing sources of noise. The EXOplanet Transit Interpretation Code\footnote{https://github.com/rzellem/EXOTIC} (EXOTIC; \citealt{Zellem2020}; Fatahi et al. \textit{in prep.}) can calibrate images (i.e., bias, flat, and dark corrections), plate solve images for better centroiding, and conduct an optimization over comparison star selection and aperture size when extracting the photometric time series. After conducting aperture photometry, all of the time series files were combined in order to produce the global light curve shown in Figure \ref{fig:BestFit}. A mosaic of the individual observations is shown in the appendix (see Figure~\ref{fig:mosaic}).
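EXOTIC's optimization over comparison stars and apertures is more involved than can be shown here, but the core idea, choosing the reference star whose ratio light curve has the least scatter, can be sketched as follows (the function and star values are invented for illustration):

```python
import statistics

def best_comparison(target_flux, comparison_fluxes):
    """Pick the comparison star whose ratio light curve shows the least
    point-to-point scatter (a simplified stand-in for EXOTIC's optimization,
    which also varies the photometric aperture and fits a transit model)."""
    best_idx, best_scatter = None, float("inf")
    for idx, comp in enumerate(comparison_fluxes):
        ratio = [t / c for t, c in zip(target_flux, comp)]
        mean = statistics.fmean(ratio)
        norm = [r / mean for r in ratio]
        # scatter of consecutive differences is insensitive to the transit shape
        diffs = [abs(a - b) for a, b in zip(norm[1:], norm[:-1])]
        scatter = statistics.fmean(diffs)
        if scatter < best_scatter:
            best_idx, best_scatter = idx, scatter
    return best_idx

# synthetic example: the stable star tracks the target's systematics
target = [100.0, 101.0, 99.5, 100.5, 100.2]
stable = [50.0, 50.5, 49.7, 50.2, 50.1]
noisy = [50.0, 55.0, 46.0, 53.0, 48.0]   # intrinsically variable star
choice = best_comparison(target, [noisy, stable])
```

Dividing by a well-chosen comparison star removes the shared atmospheric trends before the airmass model in $\S$\ref{sec:analysis} absorbs what remains.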
\begin{deluxetable}{lll}
\centering
\tablecaption{Archival Ephemeris Times\label{tab:oldmid}}
\tablehead{
\colhead{BJD$_{TDB}$} & \colhead{Reference} & \colhead{Status}}
\startdata
2454424.736 $\pm$ 0.003 & \cite{Laughlin2009} & Full Eclipse \\
2454876.316 $\pm$0.023 & \cite{Pont2009} & Partial Transit \\%https://arxiv.org/pdf/0906.5605.pdf
2454876.338 $\pm$ 0.017 & \cite{Kokori2021} & Partial Transit\\
2454987.7842 $\pm$0.0049 & \cite{Winn2009} & Full Transit\\%https://arxiv.org/pdf/0907.5205.pdf
2455099.196 $\pm$ 0.026 & \cite{Shporer2010} & Partial Transit\\%https://arxiv.org/pdf/1008.4129.pdf
2455210.6420 $\pm$0.001 & \cite{Hebrard2010} & Full Transit \\%https://www.aanda.org/articles/aa/pdf/2010/08/aa14327-10.pdf
2455210.6502 $\pm$ 0.0064 & \cite{Shporer2010} & Full Transit\\%https://arxiv.org/pdf/1008.4129.pdf
2457439.401 $\pm$ 0.012 & \cite{Kokori2021} & Partial Transit\\
2459222.401 $\pm$ 0.016 & \cite{Kokori2021} & Partial Transit\\
\enddata
\end{deluxetable}
\begin{deluxetable}{ll}
\centering
\tablecaption{New Mid-transit Times\label{tab:newmid}}
\tablehead{
\colhead{Facility} & \colhead{BJD$_{TDB}$}}
\startdata
TESS & 2458888.07466 $\pm$ 0.00204 \\
Multiple (7--8 Dec. 2021) & 2459556.7007 $\pm$ 0.0035 \\
\enddata
\end{deluxetable}
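Folding the two new mid-transit times above into a refined linear ephemeris amounts to a two-parameter least-squares fit, with epoch numbers assigned from the prior ephemeris; a minimal sketch (the joint fit in $\S$\ref{sec:analysis} is of course far more elaborate):

```python
def refine_ephemeris(times, t0_guess, period_guess):
    """Least-squares linear ephemeris T(E) = T0 + E * P fitted to a list of
    mid-transit times (BJD); epochs E are assigned from an initial guess."""
    epochs = [round((t - t0_guess) / period_guess) for t in times]
    n = len(times)
    se, st = sum(epochs), sum(times)
    see = sum(e * e for e in epochs)
    set_ = sum(e * t for e, t in zip(epochs, times))
    period = (n * set_ - se * st) / (n * see - se * se)
    t0 = (st - period * se) / n
    return t0, period

# the two new mid-transit times, with the prior ephemeris as the guess
t0, p = refine_ephemeris([2458888.07466, 2459556.7007],
                         2455210.6428, 111.4367)
```

With only two times the fit reduces to the slope between epochs 33 and 39; the long transit-to-transit baseline is what pins the period.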
\begin{deluxetable}{lll}
\centering
\tablecaption{New Radial Velocity Observations\label{tab:NEW_RV}}
\tablehead{
\colhead{Instrument} & \colhead{BJD$_{TDB}$}& \colhead{Relative RV (m\,s$^{-1}$)}}
\startdata
HIRES &2459514.0886&-133.668$\pm$1.168\\
APF & 2459533.0674 & 37.779$\pm$2.332\\
APF & 2459535.9405 & 15.584$\pm$8.951\\
APF & 2459541.0692 & -13.924$\pm$2.248\\
APF & 2459541.8002 & -9.460$\pm$2.288\\
APF & 2459544.0027 & -28.552$\pm$2.413\\
\enddata
\end{deluxetable}
\begin{deluxetable}{lll}
\centering
\tablecaption{Archival Radial Velocity Observations\label{tab:OLD_RV}}
\tablehead{
\colhead{Instrument} & \colhead{BJD$_{TDB}$}& \colhead{Relative RV (m\,s$^{-1}$)}}
\startdata
ELODIE & 2452075.359 & -134.46$\pm$13\\
... \\
HIRES$_K$ &2452219.162 & -85.11$\pm$1.6\\
... \\
HRS &2453433.606&119.8$\pm$8.6\\
... \\
HIRES$_J$ &2453398.854&-171.57$\pm$0.89\\
... \\
SOPHIE &2454876.729&222.1$\pm$5\\
... \\
\enddata
\tablecomments{These measurements are available online in a machine readable format \footnote{\url{https://exofop.ipac.caltech.edu/tess/view_tag.php?tag=418623}}. }
\end{deluxetable}
\subsection{Radial Velocity Observations\label{sec:PRV}}
New radial velocity observations were obtained around periapsis in December 2021 using the Levy spectrometer on the 2.4m Automated Planet Finder telescope (APF) \citep{Vogt2014} and the High Resolution Echelle Spectrometer (HIRES) on the 10m Keck~I telescope. The new RV measurements are processed using the standard data reduction techniques described in \citet{Butler1996}. The APF and HIRES RVs are measured using an iodine cell to wavelength-calibrate the stellar spectrum, with the spectral region from 5000--6200~\AA\ used to measure the radial velocities. The new observations are listed in Table~\ref{tab:NEW_RV}. We used a total of 593 RV measurements spanning 22 years in the data analysis (see Figure~\ref{fig:joint_rv}); they are available online in a machine readable format (see Table~\ref{tab:OLD_RV}).
\section{Analysis\label{sec:analysis}}
The newly acquired data for HD~80606~b, along with the historical RV, transit, and eclipse measurements, are analyzed in a self-consistent manner in order to place constraints on the system parameters. The radial velocity observations help constrain the orbit and alignment of HD~80606~b, which is particularly important considering that the planet's high eccentricity can drastically change the transit duration depending on the argument of periastron \citep{Hebrard2010}. The transit observations help constrain the size of the planet once the orbit is reliably known and disentangled from degeneracies involving the stellar radius, inclination, and contamination by HD~80607. Additionally, using the measured times of mid-transit and mid-eclipse we can search for deviations from a Keplerian orbit, which would potentially be indicative of a companion in the system (\citealt{Holman2005}; \citealt{Nesvorney2008}).
\subsection{Global Light Curve Analysis}
Observations of the transit of HD~80606~b on the night of 2021 December 7--8 are combined and fitted simultaneously in order to derive the time of mid-transit and the radius ratio between the planet and star. Since each observation was acquired at a different location, each requires individual treatment of extinction from Earth's atmosphere. We adopt a parameterization \citep[e.g.,][]{Pearson2019a} which scales exponentially with airmass and resembles a solution of the radiative transfer equation when the source function is $I(\tau) = I(0)e^{-\tau}$. The following equation is used to maximize the likelihood of the transit model and airmass signal simultaneously:
\begin{equation} \label{expam}
F_{obs} = a_{0} e^{a_{1} \beta } F_{transit}.
\end{equation}
\noindent Here $F_{obs}$ is the flux recorded on the detector, $F_{transit}$ is the actual astrophysical signal (i.e., the transit light curve, given by pyLightcurve; \citealt{Tsiaras2016}), the $a_{i}$ are airmass correction coefficients, and $\beta$ is the airmass value. Since the underlying astrophysical signal is shared between all the observations, we leave $R_{p}/R_{s}$ and $T_{mid}$ as free parameters during the retrieval and share their values between each dataset.
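Equation~\ref{expam} can be sketched as follows (the box-shaped dip stands in for the pyLightcurve transit model, and all coefficient values are illustrative):

```python
import math

def detrended_model(times, airmass, transit_model, a0, a1):
    """Airmass-modulated transit model: F_obs = a0 * exp(a1 * beta) * F_transit,
    evaluated at each time with its corresponding airmass beta."""
    return [a0 * math.exp(a1 * b) * transit_model(t)
            for t, b in zip(times, airmass)]

# toy transit: a 1% box-shaped dip between t = 0.2 and 0.8
box = lambda t: 0.99 if 0.2 < t < 0.8 else 1.0
flux = detrended_model([0.0, 0.5, 1.0], [1.5, 1.2, 2.0], box, 1.0, -0.1)
```

In the fit, $a_0$ and $a_1$ are free for every dataset while the transit parameters inside \texttt{transit\_model} are shared, which is what ties the heterogeneous light curves together.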
The free parameters are optimized using the multimodal nested sampling algorithm UltraNest (\citealt{Feroz2008}; \citealt{Buchner2014}; \citealt{Buchner2017}). UltraNest is a Bayesian inference tool that uses the Monte Carlo strategy of nested sampling to calculate the Bayesian evidence, allowing simultaneous parameter estimation and model selection. A nested sampling algorithm is efficient at probing parameter spaces which could potentially contain multiple modes and pronounced degeneracies in high dimensions; a regime in which convergence for traditional Markov Chain Monte Carlo (MCMC; e.g., \citealt{ford05}) techniques becomes comparatively slow (\citealt{Skilling2004}; \citealt{Feroz2008}). Convergence for such a large retrieval can take a long time if the priors are very broad, and for such a large dataset the solution may not converge at all within a given budget of likelihood evaluations. Therefore, to aid convergence, each observation was fit individually before being fit simultaneously, with priors set to $\pm5\sigma$ around the individual fits. The nested sampling algorithm runs for 500,000 likelihood evaluations before terminating, with the resulting posterior distribution shown in Figure \ref{fig:posterior}. An open source version of the global retrieval is available through the EXOTIC repository on GitHub\footnote{https://github.com/rzellem/EXOTIC}. A non-linear four-parameter limb darkening model is used for both the ground-based measurements and TESS, computed for their respective filters \citep{Morello2020}.
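The $\pm5\sigma$ priors built from the individual fits correspond to a simple unit-cube transform in nested sampling; a sketch (the parameter centers and widths below are illustrative, not our fitted values):

```python
def make_prior_transform(centers, sigmas, width=5.0):
    """Map the unit hypercube to uniform priors of center +/- width*sigma,
    as a nested sampler (e.g. UltraNest) expects."""
    def prior_transform(cube):
        return [c - width * s + 2.0 * width * s * u
                for u, c, s in zip(cube, centers, sigmas)]
    return prior_transform

# e.g. Rp/Rs and Tmid centered on a hypothetical individual fit
transform = make_prior_transform([0.1, 2459556.70], [0.002, 0.003])
lo = transform([0.0, 0.0])   # lower prior edges
hi = transform([1.0, 1.0])   # upper prior edges
```

Narrowing the prior volume this way is what keeps the 500,000-evaluation budget tractable for the joint fit.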
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{Figures/hd80606_orbit.png}
\caption{Position vectors for the HD~80606 system showing the planet and star plotted over the course of one orbital period of the planet. The colored segments represent chunks of the orbit spanning $\sim$1 day. The main panel is viewed from 90 degrees above the line of sight; the small subplot shows the same top-down view of the star's orbit. The markers indicate where mid-transit, mid-eclipse, and periastron occur for the planet.
\label{fig:orbit}}
\end{figure}
\subsection{Radial Velocity Analysis}
The archival and new RV measurements (Table~\ref{tab:NEW_RV} and Table~\ref{tab:OLD_RV}) are analyzed using a joint simultaneous fit between a TESS light curve and historical measurements for mid-transit and mid-eclipse in order to constrain a consistent orbital solution across 10 years of heterogeneous data. The radial velocity model uses the same orbit equation and Keplerian solver as the transit light curve model (PyLightcurve; \citealt{Tsiaras2016}). The orbit equation used in the transit model is
\begin{equation}
r_t = \frac{a}{R_s}\frac{1-e^2}{1+e\cos\nu_t}
\end{equation} where $a$ is the semi-major axis, R$_s$ is the stellar radius, $e$ is the eccentricity, and $\nu_t$ is the true anomaly at time $t$. The true anomaly can be solved for using equations 1 and 2 of \citet{Fulton2018}: the root of Kepler's equation gives the eccentric anomaly, which is then converted to the true anomaly. The orbit equation is projected onto a Cartesian grid, which is necessary for the transit model and useful for taking the dot product along our line of sight, ensuring it matches the transit geometry (see Figure~\ref{fig:orbit}). The projection along the x-axis, our line of sight, is
\begin{equation}
x_t = r_t \sin(\nu_t + \omega) \sin i
\end{equation}
where $i$ is the inclination of the orbit and $\omega$ is the argument of periastron. The star's velocity is estimated by applying a scaling relation to the planet's orbit, assuming a two-body system. Coupling the orbit solutions ensures a self-consistent system where gravity balances the centripetal acceleration of the planet. The velocity vector of the planet is scaled to match the star's orbit and then projected along the line of sight in order to produce the RV signal. A velocity is estimated by evaluating the orbit equation twice in order to compute a numerical derivative using a time step of $\sim$8.5 seconds (0.0001 day):
\begin{equation}
v_{r,t} = \frac{M_p}{M_s} R_s \frac{x_{t+\Delta t}-x_t}{\Delta t}
\end{equation}
In addition to scaling the planet's orbit by a mass ratio to mimic the stellar position, it must also be scaled by the stellar radius in order to acquire units of meters. The stellar radius is given a Gaussian prior during the retrieval process in order to reflect uncertainties on that scale factor and because it is correlated with the planet's inclination: a given transit duration can be produced by a smaller star with an edge-on orbit or by a larger star with a more inclined orbit, and it is difficult to disentangle the two parameters without an additional constraint on the likelihood function (e.g., spectral modelling is needed to constrain the stellar properties). We do not have enough information to uniquely constrain the stellar radius and inclination simultaneously, which leads to a degeneracy in our retrieval if each parameter uses a uniform prior. Therefore, the stellar radius is given a Gaussian prior constructed to be consistent with past derivations in the literature (\citealt{Bonomo2017}; \citealt{Rosenthal2021}).
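The chain described above, solving Kepler's equation, projecting the orbit along the line of sight, and differencing numerically, can be sketched as follows (a self-contained stand-in for the PyLightcurve machinery; all names are ours):

```python
import math

def true_anomaly(t, t_peri, period, e, tol=1e-9):
    """Solve Kepler's equation M = E - e sin(E) by Newton iteration, then
    convert the eccentric anomaly E to the true anomaly nu."""
    M = 2.0 * math.pi * (((t - t_peri) / period) % 1.0)
    E = M if e < 0.8 else math.pi  # safer starting point for high e
    for _ in range(100):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return 2.0 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                            math.sqrt(1 - e) * math.cos(E / 2))

def los_position(t, t_peri, period, e, a_rs, inc, omega):
    """Line-of-sight projection x_t = r_t sin(nu + omega) sin(i),
    in units of stellar radii."""
    nu = true_anomaly(t, t_peri, period, e)
    r = a_rs * (1 - e**2) / (1 + e * math.cos(nu))
    return r * math.sin(nu + omega) * math.sin(inc)

def stellar_rv(t, t_peri, period, e, a_rs, inc, omega, mp_ms, rs_m, dt=1e-4):
    """Stellar reflex velocity from a numerical derivative of the scaled
    line-of-sight position (meters per day; divide by 86400 for m/s)."""
    dx = (los_position(t + dt, t_peri, period, e, a_rs, inc, omega)
          - los_position(t, t_peri, period, e, a_rs, inc, omega))
    return mp_ms * rs_m * dx / dt
```

The mass-ratio factor \texttt{mp\_ms} and the stellar radius \texttt{rs\_m} are exactly the two scale factors discussed above, which is why the stellar radius prior propagates directly into the RV amplitude.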
\begin{figure*}[b!]%
\centering
\includegraphics[width=0.9\textwidth]{Figures/joint_fit/joint_rv_fit.png}
\caption{Data from 2000 to 2022 show the extremely eccentric orbit of HD~80606~b. The time series RV measurements are plotted in the top panel, the best-fit model is in the middle panel, and the residuals are on the bottom panel. The standard deviation of the residuals is listed in the legend of the top subplot for each dataset.
\label{fig:joint_rv}}
\end{figure*}
\subsection{Joint simultaneous fit}
Fitting three different types of measurements in a joint analysis requires a likelihood function with contributions from each data set. The system parameters are used to generate a coupled physical model for the transit, RV, and ephemeris data in order to enforce consistency between the data sets. The likelihood function includes the sum of the chi-square values comparing each data set to its respective model. The TESS light curve is compared to a transit model in a manner similar to the global fit for the ground-based measurements, except that the airmass correction is left out. The historic mid-transit and mid-eclipse measurements are compared to a linear ephemeris and then folded into the total chi-square estimate. The radial velocity measurements are also folded into the total chi-square; however, their uncertainties are adjusted prior to the joint fit. The radial velocity likelihood ($\mathcal{L}$) adopts a parameterization similar to RADVEL \citep{Fulton2018} in order to account for underestimated uncertainties,
\begin{equation} \label{rv_likelihood}
\mathcal{L}_{RV} = -\frac{1}{2} \sum_{i} \sum_{t} \left(\frac{d_t - v_{r,t}}{\sigma_{i,t} + \sigma_i}\right)^2
\end{equation}
\noindent where $d_t$ is the velocity measurement at time $t$, $v_{r,t}$ is the Keplerian model prediction for each RV measurement, $\sigma_{i,t}$ is the original uncertainty on the radial velocity measurement, and $\sigma_i$ is an RV jitter term for each data set $i$. The jitter term is set from an individual fit to the radial velocity data, before the joint fit, and scales the uncertainties such that the average uncertainty is roughly equal to the standard deviation of the residuals of that individual fit. Additionally, the individual-fit solution $\pm 5\sigma$ is used to constrain the priors for the joint fit. Our uncertainty scaling is similar to RADVEL; however, we do not include the penalty term that is required when fitting for an error-scaling parameter, adopting a simpler correction for underestimated uncertainties while still leveraging the optimizations behind nested sampling. After inflating the uncertainties, our error estimate for the orbital period increased by a factor of $\sim$2, with similar increases for the other orbit parameters.
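As a concrete sketch, the jitter-inflated likelihood of Equation~\ref{rv_likelihood} can be written as follows (the data-structure layout is our illustrative choice, not taken from any released pipeline):

```python
def rv_log_likelihood(datasets):
    """Eq. 5: chi-square sum over data sets i and epochs t, with the
    per-data-set jitter sigma_i added to each measurement uncertainty."""
    total = 0.0
    for d in datasets:  # each d: dict with keys "rv", "model", "err", "jitter"
        for obs, mod, err in zip(d["rv"], d["model"], d["err"]):
            total += ((obs - mod) / (err + d["jitter"])) ** 2
    return -0.5 * total
```

Adding the jitter to each uncertainty (rather than fitting it with a penalty term) is the simplified error inflation described above.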
The likelihood function for the joint fit has contributions from the transit data, RV measurements, and historic ephemerides:
\begin{equation} \label{joint_likelihood}
\mathcal{L}_{Joint} = \mathcal{L}_{RV} + \mathcal{L}_{Transit} + \mathcal{L}_{Mid-transit} + \mathcal{L}_{Mid-eclipse}
\end{equation}
The likelihood functions for mid-transit and mid-eclipse represent the error of a linear ephemeris estimate compared to existing measurements, whereas the transit likelihood function uses the photometric time series. Nested sampling is used to efficiently explore the large parameter space defining the system and to build a posterior distribution from which to infer uncertainties \citep{Buchner2021}. The free parameters include the orbital period, time of mid-transit, inclination, argument of periastron, eccentricity, planet mass, and the planet-to-star radius ratio. Posteriors for the free parameters in the joint fit are shown in Figure 9. We also include a Gaussian prior on the stellar radius because it is needed to convert our radial velocity model into meters; the stellar radius is degenerate with inclination and difficult to constrain if left with a uniform prior. Another relationship in the posteriors is the perfect correlation between eccentricity and argument of periastron. We have seen similar correlations when fitting for $a_{0}$ and $\gamma$, which allowed us to simplify the retrieval and solve for them instead. It is theoretically possible to remove one of these parameters ($e$ or $\omega$) from the sampling process and solve for the other at run time without building it into the posteriors; that solution, however, requires solving a transcendental equation on top of the existing orbit solution and would increase the computation time of the likelihood function. We therefore include both $e$ and $\omega$ in the retrieval and let the sampler handle the correlation, which decreases its efficiency slightly.
\begin{figure*}
\centering
\includegraphics[width=0.49\textwidth]{Figures/joint_fit/joint_oc_transits.png}
\includegraphics[width=0.49\textwidth]{Figures/joint_fit/joint_oc_eclipses.png}
\caption{ left) A comparison of residuals between the measured mid-transit times and a calculated linear ephemeris (reported in the plot legend). The grey shaded region indicates the uncertainty in the ephemeris extending to $\pm$1 $\sigma$ using our best estimates in Table \ref{tab:final}. The pink shaded region indicates an uncertainty based on the prior listed in Table \ref{tab:nominal}. Some mid-transit measurements are not used in the joint analysis because they were measured from partial transits. right) An ephemeris estimate for mid-eclipse times. The pink shaded region shows the uncertainty in a linear solution if we use the Spitzer measurement \citep{Laughlin2009} as $E_{mid}$ along with the period from \citealt{Bonomo2017}. The grey shaded region indicates an uncertainty based on the orbital information listed in Table \ref{tab:final}.
\label{fig:period}}
\end{figure*}
\section{Results and Conclusions \label{sec:params}}
As part of an effort to refine the orbital ephemeris of HD~80606~b, we have obtained new radial velocity and transit measurements. The transit measurements were obtained with TESS in 2020 and a ground-based campaign in 2021; together the new data, coupled with archival RV and transit observations, provide a valuable constraint on the time of conjunction. We are able to refine the estimate of the orbital period of HD~80606~b by taking advantage of the 10-year baseline between the archival and the new observations. Using only the data from 2009--2010, the uncertainty on the orbital period was $\sigma(P)=4\times 10^{-4}$~days; combining the old data with the new observations, the new value of the period, 111.436971~days, has an improved uncertainty of $\sigma(P)=7.4\times 10^{-5}$~days (Figure~\ref{fig:period}). The period estimate is improved by a factor of $\sim$5 compared to \cite{Bonomo2017}, along with significant improvements in the system parameters, as summarized in Table~\ref{tab:final}. The immediate benefit of these new observations is to greatly reduce the uncertainty in the timing of future events (transits or eclipses; e.g., \citealt{Zellem2020}).
In the case of an eclipse in November 2022, e.g., in mid Cycle 1 for JWST, the uncertainty resulting from propagating the ephemeris in Table~\ref{tab:nominal} is $\sim$24 minutes, whereas with the new linear ephemeris the uncertainty is $\sim$5 minutes (see Figure~\ref{fig:period}). The linear ephemeris uses the eclipse mid-point from \citet{Laughlin2009} and our new period estimate. We also provide a more conservative error estimate based on the orbit solution, which yields an uncertainty of $\sim$30 minutes. The orbit solution has a larger uncertainty than the linear propagation because the uncertainties in $e$ and $\omega$ propagate into the estimated eclipse time. For example, the mid-eclipse time predicted from the prior is 2458882.207 $\pm$ 0.10 and from our posterior we get 2458882.214 $\pm$ 0.021, a difference in uncertainty of $\sim$2 hours. The errors on the predicted mid-eclipse are significantly larger because of the degeneracy between $e$ and $\omega$, and the effect is exacerbated at larger orbital periods. Removing the degeneracy may be possible by simultaneously fitting a transit and an eclipse. The uncertainties reported in Figure~\ref{fig:period} are smaller than the ones estimated above because they use a linear propagation of the average orbit solution. It is also important to note that the uncertainty on inclination in the prior does not always yield a transiting planet when conducting a Monte Carlo simulation. Simultaneously fitting the TESS light curve with the RV data allowed a strong constraint on the inclination, which helped measure the transit duration to within $\sim$7 minutes for an event lasting almost 12 hours.
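The growth of the timing uncertainty with epoch follows from propagating a linear ephemeris, $T_N = T_0 + N P$, so that $\sigma(T_N)=\sqrt{\sigma_{T_0}^2+N^2\sigma_P^2}$. A minimal sketch (the epoch counts and $\sigma_{T_0}$ values used below are illustrative assumptions, not measurements from this work):

```python
import math

def ephemeris_sigma_minutes(sigma_t0_days, sigma_p_days, n_epochs):
    """1-sigma timing uncertainty, in minutes, N periods after the reference epoch."""
    sigma_days = math.sqrt(sigma_t0_days**2 + (n_epochs * sigma_p_days)**2)
    return sigma_days * 24 * 60

# With sigma_P = 7.4e-5 d, a few dozen epochs of P ~ 111.44 d already make the
# period term the dominant contribution to the predicted event-time uncertainty.
```

This is why extending the baseline (reducing $\sigma_P$) matters more than the precision of any single reference time.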
For the analysis of the JWST phase curve it is important to know the offset between the eclipse, which will be well determined by the JWST observations, and time of periapsis, which will not be directly measured. The timing of eclipse relative to periapsis depends on three key variables: orbital period $P$, eccentricity $e$, and argument of periapsis $\omega$ in Eqn~(\ref{deltaT}) \citep{Huber2017, Alonso2018}:
\begin{equation}
T_{ecl}-T_{peri} = \frac{P}{2 \pi \sqrt{1 - e^2}} \int_{0}^{-\frac{\pi}{2}-\omega} \left( \frac{1 - e^2}{1 + e \cos x} \right)^2 \,dx \label{deltaT}
\end{equation}
A Monte Carlo simulation over the parameters and their associated uncertainties (Table~\ref{tab:final}) yields an offset in time between the eclipse and periapsis of $\Delta T =-3.104\pm0.011$ hr, i.e., with the eclipse occurring before periapsis. This is to be compared with $-3.069\pm0.049$ hr derived using the \citet{Bonomo2017} parameters in Table~\ref{tab:nominal}, a difference of $\sim$2 minutes. Table~\ref{tab:ephemeris} takes the times of periapsis, eclipse, and conjunction from our solution (Table~\ref{tab:final}) and propagates them forward in time from 2020 to 2031. The uncertainties include a constant term from the initial Monte Carlo estimates plus the growth in uncertainty occurring $N$ periods after the reference time.
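The eclipse-to-periapsis offset of Eq.~\ref{deltaT} can be evaluated directly. The sketch below integrates with a composite Simpson rule at the central parameter values from Table~\ref{tab:final}; it omits the Monte Carlo sampling over the uncertainties:

```python
import math

def ecl_minus_peri_hr(P, e, omega_deg, n=2000):
    """T_ecl - T_peri in hours from Eq. 6 (negative: eclipse precedes periapsis)."""
    a, b = 0.0, -math.pi / 2 - math.radians(omega_deg)
    f = lambda x: ((1 - e**2) / (1 + e * math.cos(x))) ** 2
    h = (b - a) / n  # composite Simpson rule; n must be even
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return 24 * P / (2 * math.pi * math.sqrt(1 - e**2)) * (s * h / 3)
```

With $P=111.436971$~d, $e=0.93183$, and $\omega=-58.887^\circ$, this central-value evaluation lands within the quoted Monte Carlo range of $-3.104\pm0.011$ hr to within a few thousandths of an hour.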
Finally, we note that the increased precision of the ephemeris, when combined with new JWST observations, may allow an exploration of non-Keplerian effects such as tidal dissipation \citep{Fabrycky2010} or General Relativistic effects similar to those seen in the precession of the periapsis of Mercury's orbit in our solar system, but greatly enhanced by the high eccentricity of HD~80606~b. \citet{Blanchet2019} calculate that offsets between transit and eclipse midpoints should grow as the number of orbits increases. While the precision and temporal baseline of the 2009--2010 measurements are inadequate to measure the predicted effects of 3--4 minutes, the high precision expected from JWST's great sensitivity makes such measurements possible over the next few years. Additionally, the measurements reported in this paper will be archived on ExoFOP, enabling future studies to search for long-term perturbations that may affect the ephemeris estimates.
\begin{deluxetable*}{llccc}
\centering
\tablecaption{System Parameters for HD~80606\label{tab:final}}
\tablehead{
\colhead{Parameter} & \colhead{Explanation} & \colhead{Our Study} & \cite{Rosenthal2021} & \cite{Bonomo2017} }
\startdata
M$_*$ [M$_\odot$] & Stellar Mass & 1.05 & 1.047$\pm$0.047 & 1.018$\pm$0.035 \\
R$_*$ [R$_\odot$] & Stellar Radius & 1.050 $\pm$ 0.01$^a$ & 1.066$\pm$0.024 & 1.037$\pm$0.032 \\
T$_*$ [K] & Stellar Temperature & 5565 & 5565 $\pm$ 92 & 5574$\pm$72 \\
Fe/H & Stellar Metallicity & 0.35 & 0.348$\pm$0.057 & 0.340$\pm$0.050 \\
$(R_{p}/R_*)_{contaminated}$ & Planet-Star Radius Ratio & 0.07268 $\pm$ 0.00085 \\
$(R_p/R_*)^2_{contaminated}$ & Radius Ratio Squared & 0.00528 $\pm$ 0.00012 \\
$(R_p/R_*)^2_{corrected}$ & Radius Ratio Squared & 0.01019 $\pm$ 0.00023$^b$ & & 0.00991$\pm$0.00076 \\
$R_p$ [R$_{Jupiter}$] & Planet Radius & 1.032$\pm$0.015 & & 1.003$\pm$0.023 \\
$M_{p}$ [M$_{Jupiter}$] & Planet Mass & 4.1641 $\pm$ 0.0047 & 4.16 $\pm$0.13$^c$ & 4.1$\pm$0.1 \\
K [m/s] & RV Semi-Amplitude & 469.22 $\pm$ 0.61 & 465.5$\pm$2.8 & 474.9$\pm$2.6 \\
Period [day] & Orbital period & 111.436765 $\pm$ 0.000074 & 111.43639$\pm$0.00032 & 111.4367$\pm$0.0004 \\
E$_{mid}$ [BJD] & Eclipse Midpoint & 2458882.214 $\pm$ 0.0021$^d$ & & \\
E$_{14}$ [day] & Eclipse Duration & 0.07169$\pm$0.00073 & & \\
T$_{peri}$ [BJD] & Epoch of periastron & 2458882.344 $\pm$ 0.0021 & & \\
T$_{mid}$ [BJD] & Transit Midpoint & 2458888.07466 $\pm$ 0.00204 & 2455099.39$\pm$0.13 & 2455210.6428$\pm$0.001 \\
T$_{14}$ [day] & Transit Duration & 0.4990 $\pm$ 0.0048 & & \\
$i$ [deg] & Inclination & 89.24 $\pm$ 0.01 & & 89.23$\pm$0.3 \\
a/R$_*$ & Scaled Semi-major axis & 94.452 $\pm$ 0.014 & 92.8$\pm$2.5 & 94.6$\pm$3.1 \\
a [au] & Semi-major axis & 0.4603$\pm$0.0021 & 0.4602$\pm$0.0071 & 0.4565$\pm$0.0053 \\
$e$ & Eccentricity & 0.93183 $\pm$ 0.00014 & 0.93043$\pm$0.00068 & 0.93226$\pm$0.00064 \\
$\omega$ [deg] & Arg. of periastron & -58.887 $\pm$ 0.043 & -58.95$\pm$0.25 & -58.97$\pm$0.2 \\
\enddata
\tablecomments{The values in parentheses are calculated using the respective column's orbit solution and a Monte Carlo simulation with 10,000 forward model evaluations. $^a$Gaussian Prior; $^b$Corrected for stellar contamination using brightness values for HD~80606: V-mag=9.00 and HD80607: V-mag=9.07; $^c$ M$_{p}$sin($i$); $^d$Uncertainty estimated with fixed $\omega$;}
\end{deluxetable*}
\begin{deluxetable*}{lllll}
\tabletypesize{\scriptsize}
\tablecaption{Predicted Transit, Eclipse and Periapsis Times \label{tab:ephemeris}}
\tablehead{
\colhead{Period} & \colhead{Periapsis Date}& \colhead{T$_{Peri}$ (BJD$_{TBD}$)}& \colhead{E$_{mid}$ (BJD$_{TBD}$)}
& \colhead{T$_{mid}$ (BJD$_{TBD}$)}}
\startdata
0 & 2020-02-02 20:15:10 & 2458882.344 $\pm$ 0.0021 & 2458882.214 $\pm$ 0.0021 & 2458888.0746 $\pm$ 0.0020 \\
1 & 2020-05-24 06:44:36 & 2458993.781 $\pm$ 0.0021 & 2458993.651 $\pm$ 0.0021 & 2458999.5116 $\pm$ 0.0020 \\
2 & 2020-09-12 17:14:03 & 2459105.218 $\pm$ 0.0021 & 2459105.089 $\pm$ 0.0021 & 2459110.9487 $\pm$ 0.0020 \\
3 & 2021-01-02 03:45:18 & 2459216.656 $\pm$ 0.0021 & 2459216.527 $\pm$ 0.0021 & 2459222.3855 $\pm$ 0.0021 \\
4 & 2021-04-23 14:12:08 & 2459328.092 $\pm$ 0.0021 & 2459327.962 $\pm$ 0.0021 & 2459333.8225 $\pm$ 0.0021 \\
5 & 2021-08-13 00:42:14 & 2459439.529 $\pm$ 0.0022 & 2459439.400 $\pm$ 0.0022 & 2459445.2595 $\pm$ 0.0022 \\
6 & 2021-12-02 11:10:00 & 2459550.965 $\pm$ 0.0022 & 2459550.836 $\pm$ 0.0022 & 2459556.6963 $\pm$ 0.0021 \\
7 & 2022-03-23 21:40:27 & 2459662.403 $\pm$ 0.0021 & 2459662.274 $\pm$ 0.0022 & 2459668.1333 $\pm$ 0.0021 \\
8 & 2022-07-13 08:09:58 & 2459773.840 $\pm$ 0.0022 & 2459773.711 $\pm$ 0.0021 & 2459779.5704 $\pm$ 0.0021 \\
9 & 2022-11-01 18:39:52 & 2459885.278 $\pm$ 0.0023 & 2459885.148 $\pm$ 0.0022 & 2459891.0073 $\pm$ 0.0022 \\
10 & 2023-02-21 05:08:03 & 2459996.714 $\pm$ 0.0022 & 2459996.584 $\pm$ 0.0023 & 2460002.4443 $\pm$ 0.0022 \\
11 & 2023-06-12 15:37:31 & 2460108.151 $\pm$ 0.0023 & 2460108.021 $\pm$ 0.0021 & 2460113.8814 $\pm$ 0.0022 \\
12 & 2023-10-02 02:07:17 & 2460219.588 $\pm$ 0.0023 & 2460219.459 $\pm$ 0.0022 & 2460225.3183 $\pm$ 0.0022 \\
13 & 2024-01-21 12:36:17 & 2460331.025 $\pm$ 0.0022 & 2460330.896 $\pm$ 0.0022 & 2460336.7554 $\pm$ 0.0023 \\
14 & 2024-05-11 23:06:49 & 2460442.463 $\pm$ 0.0024 & 2460442.334 $\pm$ 0.0023 & 2460448.1923 $\pm$ 0.0023 \\
15 & 2024-08-31 09:35:05 & 2460553.899 $\pm$ 0.0023 & 2460553.770 $\pm$ 0.0023 & 2460559.6291 $\pm$ 0.0023 \\
16 & 2024-12-20 20:01:54 & 2460665.335 $\pm$ 0.0023 & 2460665.205 $\pm$ 0.0024 & 2460671.0663 $\pm$ 0.0023 \\
17 & 2025-04-11 06:33:17 & 2460776.773 $\pm$ 0.0024 & 2460776.644 $\pm$ 0.0024 & 2460782.5030 $\pm$ 0.0023 \\
18 & 2025-07-31 17:03:20 & 2460888.211 $\pm$ 0.0024 & 2460888.081 $\pm$ 0.0025 & 2460893.9400 $\pm$ 0.0025 \\
19 & 2025-11-20 03:30:27 & 2460999.646 $\pm$ 0.0025 & 2460999.517 $\pm$ 0.0024 & 2461005.3771 $\pm$ 0.0024 \\
20 & 2026-03-11 14:00:39 & 2461111.084 $\pm$ 0.0024 & 2461110.954 $\pm$ 0.0024 & 2461116.8140 $\pm$ 0.0025 \\
21 & 2026-07-01 00:29:17 & 2461222.520 $\pm$ 0.0024 & 2461222.391 $\pm$ 0.0025 & 2461228.2509 $\pm$ 0.0025 \\
22 & 2026-10-20 10:59:11 & 2461333.958 $\pm$ 0.0026 & 2461333.828 $\pm$ 0.0025 & 2461339.6880 $\pm$ 0.0025 \\
23 & 2027-02-08 21:26:36 & 2461445.393 $\pm$ 0.0025 & 2461445.264 $\pm$ 0.0026 & 2461451.1249 $\pm$ 0.0026 \\
24 & 2027-05-31 07:56:26 & 2461556.831 $\pm$ 0.0025 & 2461556.701 $\pm$ 0.0026 & 2461562.5616 $\pm$ 0.0027 \\
25 & 2027-09-19 18:27:19 & 2461668.269 $\pm$ 0.0026 & 2461668.140 $\pm$ 0.0026 & 2461673.9988 $\pm$ 0.0026 \\
26 & 2028-01-09 04:54:59 & 2461779.705 $\pm$ 0.0027 & 2461779.575 $\pm$ 0.0027 & 2461785.4358 $\pm$ 0.0027 \\
27 & 2028-04-29 15:27:05 & 2461891.144 $\pm$ 0.0028 & 2461891.014 $\pm$ 0.0027 & 2461896.8728 $\pm$ 0.0028 \\
28 & 2028-08-19 01:54:49 & 2462002.580 $\pm$ 0.0028 & 2462002.450 $\pm$ 0.0028 & 2462008.3097 $\pm$ 0.0029 \\
29 & 2028-12-08 12:23:09 & 2462114.016 $\pm$ 0.0029 & 2462113.887 $\pm$ 0.0029 & 2462119.7467 $\pm$ 0.0030 \\
30 & 2029-03-29 22:52:37 & 2462225.453 $\pm$ 0.0030 & 2462225.324 $\pm$ 0.0029 & 2462231.1838 $\pm$ 0.0031 \\
31 & 2029-07-19 09:22:28 & 2462336.891 $\pm$ 0.0030 & 2462336.761 $\pm$ 0.0031 & 2462342.6207 $\pm$ 0.0031 \\
32 & 2029-11-07 19:51:46 & 2462448.328 $\pm$ 0.0032 & 2462448.198 $\pm$ 0.0030 & 2462454.0576 $\pm$ 0.0030 \\
33 & 2030-02-27 06:22:46 & 2462559.766 $\pm$ 0.0031 & 2462559.636 $\pm$ 0.0032 & 2462565.4946 $\pm$ 0.0031 \\
34 & 2030-06-18 16:50:58 & 2462671.202 $\pm$ 0.0032 & 2462671.072 $\pm$ 0.0032 & 2462676.9315 $\pm$ 0.0031 \\
35 & 2030-10-08 03:19:59 & 2462782.639 $\pm$ 0.0033 & 2462782.509 $\pm$ 0.0033 & 2462788.3685 $\pm$ 0.0032 \\
\enddata
\end{deluxetable*}
\section{Acknowledgements}
Some of the research described in this publication was carried out in part at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. This research has made use of the NASA Exoplanet Archive and ExoFOP, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program.
This publication makes use of data products from Exoplanet Watch, a citizen science project managed by NASA's Jet Propulsion Laboratory on behalf of NASA's Universe of Learning. This work is supported by NASA under award number NNX16AC65A to the Space Telescope Science Institute, in partnership with Caltech/IPAC, Center for Astrophysics|Harvard $\&$ Smithsonian, and NASA Jet Propulsion Laboratory.
We acknowledge with thanks the use of the AAVSO Exoplanet Database contributed by observers worldwide and used in this research.
This work makes use of observations from the Las Cumbres Observatory global telescope network. The authors thank Dr. Lisa Storie-Lombardi for the grant of Director's Discretionary Time with the Las Cumbres Observatory (LCO), which was critical to the execution of this program. Dr. Rachel Street helped to identify the appropriate telescopes and observing modes for LCO.
Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.
Some of the scientific data presented herein were obtained using the eVscope Network, which is managed jointly by Unistellar and the SETI Institute. The Unistellar Network and work by T.M.E. and A.A. are supported by grants from the Gordon and Betty Moore Foundation. The authors wish to thank Prof. S. Kulkarni for an introduction to members of the GROWTH consortium.
The results reported herein benefited from collaborations and/or information exchange within NASA's Nexus for Exoplanet System Science (NExSS) research coordination network sponsored by NASA's Science Mission Directorate.
K.W. acknowledges support from NASA through the NASA Hubble Fellowship grant HST-HF2-51472.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555.
The ExoClock project has received funding from the UKSA and STFC grants: ST/W00254X/1 and ST/W006960/1.
This work made use of data from the GROWTH-India Telescope (GIT) set up by the Indian Institute of Astrophysics (IIA) and the Indian Institute of Technology Bombay (IITB). It is located at the Indian Astronomical Observatory (Hanle), operated by IIA. We acknowledge funding by the IITB alumni batch of 1994, which partially supports operations of the telescope. Telescope technical details are available at \url{https://sites.google.com/view/growthindia/}.
This work uses funding from the Ministry of Science and Technology (Taiwan) under the contract 109-2112-M-008-014-MY3 and we are thankful for their support. The queue observations were done using the 0.4m SLT telescope located at the Lulin Observatory, with assistance from observatory staff C.-S. Lin, H.-Y. Hsiao, and W.-J. Hou.
\facility{Keck:I (HIRES), Lick:APF, LCO, TESS, Spitzer Space Telescope, Keck Observatory Archive (KOA)}
\section{Introduction}
Let $A$ be an $n\times n$ Hermitian matrix, and let $\Delta(A)$ and $\lambda(A)$ be the real $n$-vectors whose entries are the diagonal entries and the eigenvalues of $A,$ respectively. The celebrated Schur-Horn theorem gives a relationship between $\Delta(A)$ and $\lambda(A).$ This is described in terms of majorisation.\\\par
Let $x=(x_1,\ldots,x_n)$ be a vector in $\mathbb{R}^n.$ The co-ordinates of $x$ rearranged in decreasing order will be denoted by $x_{1}^{\downarrow} \geq \cdots \geq x_{n}^{\downarrow}$ and the same numbers listed in increasing order will be denoted as $x_{1}^{\uparrow} \leq \cdots \leq x_{n}^{\uparrow}.$ Let $x,y$ be two vectors in $\mathbb{R}^n.$ We say that $x$ is \emph{weakly submajorised} by $y,$ in symbols $x \prec_w y,$ if
\begin{equation*}\label{eq1}
\sum_{j=1}^{k} x_{j}^{\downarrow} \leq \sum_{j=1}^{k} y_{j}^{\downarrow}, \text{ for } 1\leq k \leq n.
\end{equation*}
In addition if, \begin{equation*}\label{eq2}
\sum_{j=1}^{n} x_{j}^{\downarrow} = \sum_{j=1}^{n} y_{j}^{\downarrow},
\end{equation*}
we say that $x$ is majorised by $y,$ in symbols $x\prec y.$ The book \cite{mo} provides an encyclopedic coverage of majorisation; a brief treatment is given in \cite{rbh}.\\\par
In 1923, I. Schur \cite{s} showed that for every Hermitian matrix $A,$ we have the majorisation $\Delta(A) \prec \lambda(A).$ In 1954, A. Horn \cite{h} proved the converse: if $x,y$ are two real $n$-vectors with $x\prec y,$ then there exists a real symmetric matrix $A$ such that $x=\Delta(A)$ and $y=\lambda(A).$ See \cite{mo}, Sect.\ 9.B.\\\par
One of the basic theorems on majorisation says that $x\prec y$ if and only if $x$ is in the convex polytope whose vertices are the vectors $y_{\sigma}$ obtained by permuting the coordinates of $y$ according to the permutation $\sigma.$ Using this, the Schur-Horn theorem may be reformulated as follows: Let $\lambda$ be a vector in $\mathbb{R}^n$ and $\Lambda$ the diagonal matrix with diagonal $\lambda.$ Let $U(n)$ be the group of unitary matrices and let $\mathcal{O}_{\lambda}=\{U\Lambda U^*:U \in U(n)\},$ the unitary orbit of $\Lambda.$ (This is the collection of all Hermitian matrices whose eigenvalues are $\lambda.$) Then the range of the map $\Delta: \mathcal{O}_{\lambda} \rightarrow \mathbb{R}^n$ coincides with the polytope whose vertices are the vectors $\lambda_\sigma.$ In this form, the theorem has been extended to the more general setting of Lie groups starting with B. Kostant \cite{k}, followed by \cite{a} and \cite{g}. \\\par
The goal of this paper is to formulate and prove a Schur-Horn like theorem for the action of the real symplectic group on real positive definite matrices. \\\par
Let $\mathbb{M}(2n)$ be the space of $2n\times 2n$ real matrices and let $\mathbb{P}(2n)$ be the subset consisting of real positive definite matrices. Let $J=\begin{bmatrix}
0 & I\\
-I & 0
\end{bmatrix}$
and let $Sp(2n)=\{M\in\mathbb{M}(2n):M^TJM=J\}.$ This is the Lie group of real symplectic matrices. By a theorem of Williamson \cite{dms, w} for every real positive definite matrix $A,$ there exists a symplectic matrix $M$ such that
\begin{equation*}\label{eq3}
M^TAM=\begin{bmatrix}
D & 0\\
0 & D
\end{bmatrix},
\end{equation*}
where $D$ is a positive diagonal matrix. The diagonal entries of $D$ are enumerated as $$d_1(A) \leq \cdots \leq d_n(A),$$ and are called the \emph{symplectic eigenvalues} of $A.$ They are uniquely determined by $A$ and are complete invariants for the orbits of $\mathbb{P}(2n)$ under the action of the group $Sp(2n).$ The matrix $A^{1/2}JA^{1/2}$ is skew-symmetric, and so its eigenvalues occur in conjugate imaginary pairs. The absolute values of these are the symplectic eigenvalues $d_j(A),$ $1\leq j\leq n.$ See \cite{bj}.\\\par
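The characterization of $d_j(A)$ via the spectrum of $A^{1/2}JA^{1/2}$ lends itself to a direct numerical check. The following sketch (ours, purely illustrative) uses NumPy:

```python
import numpy as np

def symplectic_eigenvalues(A):
    """d_1 <= ... <= d_n for a 2n x 2n real positive definite matrix A."""
    n = A.shape[0] // 2
    J = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])
    # A^{1/2} via the spectral decomposition of the symmetric matrix A
    w, V = np.linalg.eigh(A)
    root = V @ np.diag(np.sqrt(w)) @ V.T
    ev = np.linalg.eigvals(root @ J @ root)  # purely imaginary, conjugate pairs
    return np.sort(np.abs(ev.imag))[::2]     # keep one value from each +/- pair
```

For example (with $n=1$), the symplectic matrix $M=\mathrm{diag}(2,1/2)$ applied to $A_0=3I$ gives $A=MA_0M^T=\mathrm{diag}(12,0.75)$, whose single symplectic eigenvalue is again 3, illustrating the invariance under the $Sp(2n)$ action.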
We denote the $n$-vector of symplectic eigenvalues of $A$ by $d_s(A).$ Let $\Delta(A)$ be the diagonal of $A$ and let $\Delta_s(A)=d_s(\Delta(A)).$ If the matrix $A$ is partitioned into $n\times n$ blocks as $A=\begin{bmatrix}
A_{11} & A_{12}\\
A_{21} & A_{22}
\end{bmatrix},$ then $\Delta_s(A)=[\Delta(A_{11})\Delta(A_{22})]^{1/2}.$
(Here both the product of two vectors and the square root are taken entrywise.)
This can be seen from the relationship between symplectic eigenvalues of $A$ and the eigenvalues of $JA.$ In our version of the Schur-Horn theorem we compare $\Delta_s(A)$ and $d_s(A).$ One major difference from the case of the classical Schur-Horn theorem is that while the unitary group $U(n)$ is compact, the group $Sp(2n)$ is not. \\\par
Let $x,y \in \mathbb{R}^n.$
We say $x$ is \emph{weakly supermajorised} by $y,$ in symbols $x \prec^w y,$ if
\begin{equation*}\label{eq4}
\sum_{j=1}^{k} x_{j}^{\uparrow} \geq \sum_{j=1}^{k} y_{j}^{\uparrow}, \text{ for } 1\leq k \leq n.
\end{equation*}
It is easy to see that $x$ is majorised by $y$ if and only if it is both weakly submajorised and supermajorised by $y.$\\\\
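These definitions are easy to check mechanically on examples; the following sketch (illustrative only) tests weak supermajorisation:

```python
def weakly_supermajorised(x, y):
    """True iff x is weakly supermajorised by y: the increasing partial
    sums of x dominate the increasing partial sums of y."""
    sx = sy = 0.0
    for a, b in zip(sorted(x), sorted(y)):
        sx, sy = sx + a, sy + b
        if sx < sy:
            return False
    return True
```

For instance, $(1,2)$ is weakly supermajorised by $(0.5,2.5)$ but not by $(0.5,3),$ since in the latter case the second partial sums compare as $3 < 3.5.$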
Our symplectic version of the Schur-Horn theorem is the following:
\begin{thm}\label{thm1}
Let $A$ be any $2n\times 2n$ real positive definite matrix. Then we have the weak majorisation \begin{equation}\label{eq5}
\Delta_s(A)\prec^w d_s(A).
\end{equation}
Conversely, if $x,y$ are two $n$ vectors with positive entries such that $x\prec^w y,$ then there exists a $2n\times 2n$ real positive definite matrix $A$ such that $x=\Delta_s(A)$ and $y=d_s(A).$
\end{thm}
Just like the classical Schur-Horn theorem, this theorem can be reformulated as a convexity statement. The convex polytope of the classical theorem is replaced by an unbounded convex set. This is a manifestation of the noncompactness of the group $Sp(2n).$ From the theory of majorisation we know that $x\prec^w y$ if and only if there exists a vector $z$ such that $z\prec y$ and $z\leq x.$ See \cite{mo}, Ch.5, Sect. A.9.a. Thus
\begin{equation}\label{eq6}
\{x:x\prec^w y\} = \bigcup_{z\prec y}\{x:z\leq x\}.
\end{equation}
This is an unbounded convex set, and can be visualized as follows. Let $S$ be the convex polytope whose vertices are the permuted vectors $y_\sigma.$ At each point of $S$ attach a copy of the positive orthant $\mathbb{R}^{n}_{+}.$ The set in \eqref{eq6} is the union of all these. We denote this set by $\Sigma_y.$ Then, Theorem \ref{thm1} can be restated as:
\begin{thm}\label{thm2}
Let $d=(d_1,\ldots,d_n)$ be a positive $n$-vector and $D$ the diagonal matrix with $d_1,\ldots,d_n$ on its diagonal. Let $$\mathcal{O}_d=\{M\begin{bmatrix}
D & 0\\
0 & D
\end{bmatrix}M^T: M \in Sp(2n)\},$$ be the symplectic orbit of $\begin{bmatrix}
D & 0\\
0 & D
\end{bmatrix}.$ (This is the collection of all real positive definite matrices with symplectic eigenvalues $d_1,\ldots,d_n.$) Then the range of the map $\Delta_s: \mathcal{O}_d \rightarrow \mathbb{R}^{n}$ is the convex set $\Sigma_d.$
\end{thm}
\section{Proof of the Theorem}
The weak majorisation \eqref{eq5} is a consequence of results proved in \cite{sanders} and in our paper \cite{bj}. Let $A=\begin{bmatrix}
A_{11} & A_{12}\\
A_{12}^{T} & A_{22}
\end{bmatrix}$ and define the \emph{symplectic diagonal} of $A$ as $$\mathfrak{D}_s(A)=\begin{bmatrix}
\Delta(A_{11}) & \Delta(A_{12})\\
\Delta(A_{12}^{T}) & \Delta(A_{22})
\end{bmatrix}=\begin{bmatrix}
\Delta(A_{11}) & \Delta(A_{12})\\
\Delta(A_{12}) & \Delta(A_{22})
\end{bmatrix}.$$ In \cite{sanders} and \cite{bj} it has been shown that \begin{equation}\label{eq7}
d_s(\mathfrak{D}_s(A))\prec^w d_s(A).
\end{equation} If $\Delta(A_{11})=(\alpha_1,\ldots,\alpha_n), \Delta(A_{12})=(\beta_1,\ldots,\beta_n)$ and $\Delta(A_{22})=(\gamma_1,\ldots,\gamma_n),$ then one can see that $d_s(\mathfrak{D}_s(A))$ is the $n$-vector with entries $(\alpha_j\gamma_j-\beta_j^2)^{1/2},$ whereas $\Delta_s(A)$ is the $n$-vector with entries $(\alpha_j\gamma_j)^{1/2}, 1\leq j\leq n.$ This shows that \begin{equation}
d_s(\mathfrak{D}_s(A))\leq \Delta_s(A).
\end{equation} The relation \eqref{eq5} follows from \eqref{eq7} and (8).\\\par
Now let $x,y$ be two positive vectors with $x\prec^w y.$ We have to produce a $2n\times 2n$ real positive definite matrix $A$ such that $d_s(A)=y$ and $\Delta_s(A)=x.$ Because of \eqref{eq6} we can find a vector $z$ such that $z\prec y$ and $z\leq x.$ Denote by $Y$ the diagonal matrix with diagonal entries $y_1,\ldots,y_n.$ By Horn's Theorem, there exists a real orthogonal matrix $\Omega$ such that $\Delta(\Omega Y\Omega^T)=z.$ The matrix $\begin{bmatrix}
\Omega & 0\\
0 & \Omega
\end{bmatrix}$ is symplectic. Let $$B=\begin{bmatrix}
\Omega & 0\\
0 & \Omega
\end{bmatrix}\begin{bmatrix}
Y & 0\\
0 & Y
\end{bmatrix}\begin{bmatrix}
\Omega^T & 0\\
0 & \Omega^T
\end{bmatrix}$$
Then $B$ is positive definite and $d_s(B)=y.$ Let $Z$ be the diagonal matrix with diagonal entries $z_1,\ldots,z_n.$ Then $\Delta(B)=\begin{bmatrix}
Z & 0\\
0 & Z
\end{bmatrix},$ and therefore $\Delta_s(B)=z.$ \\\par
Let $P,Q,R,S$ be $n\times n$ diagonal matrices with \begin{equation}
PS-QR=I.
\end{equation} Then the matrix $M=\begin{bmatrix}
P & Q\\
R & S
\end{bmatrix}$ is symplectic. Let $A=MBM^T.$ Then $$\Delta(A)=\begin{bmatrix}
PZP+QZQ & 0\\
0 & RZR+SZS
\end{bmatrix}.$$The vector $\Delta_s(A)$ has entries
\begin{equation}
[(p_j^2+q_j^2)(r_j^2+s_j^2)]^{1/2}z_j , \text{ }1\leq j \leq n.
\end{equation}
We claim that we can choose $P,Q,R,S$ subject to the constraint (9) in such a way that the expressions in (10) assume all values $x_j\geq z_j, \ 1\leq j\leq n.$ For this, consider the group $SL(2,\mathbb{R})$ consisting of $2\times 2$ real matrices $\begin{bmatrix}
p & q\\
r & s
\end{bmatrix}$ with determinant 1. On this define the function $$f\left(\begin{bmatrix}
p & q\\
r & s
\end{bmatrix}\right)=[(p^2+q^2)(r^2+s^2)]^{1/2}.$$ This is a continuous function. Since $SL(2,\mathbb{R})$ is connected and unbounded, and $f(I)=1,$ the range of $f$ contains the interval $[1,\infty).$ This establishes our claim. \\\par
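In fact, the range of $f$ is exactly $[1,\infty).$ Applying the Cauchy--Schwarz inequality to the vectors $(p,q)$ and $(s,-r),$ together with the determinant constraint $ps-qr=1,$ gives

```latex
\[
(p^2+q^2)(r^2+s^2) \;\geq\; (ps-qr)^2 \;=\; 1,
\]
```

so $f\geq 1$ on all of $SL(2,\mathbb{R}),$ while along the one-parameter family $M_t=\begin{bmatrix} t & 0\\ t & 1/t\end{bmatrix}$ we have $f(M_t)=t\left(t^2+t^{-2}\right)^{1/2}\rightarrow\infty$ as $t\rightarrow\infty,$ which exhibits the unboundedness of $f$ used above.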
With $P,Q,R,S$ as above, and $M=\begin{bmatrix}
P & Q\\
R & S
\end{bmatrix},$ the matrix $A=MBM^T$ has the desired properties: $d_s(A)=y$ and $\Delta_s(A)=x.$ This completes the proof of Theorem \ref{thm1}.
\hfill \vrule height6pt width 6pt depth 0pt
\section{Remarks}
In spite of there being several differences between symplectic eigenvalues of positive definite matrices and ordinary eigenvalues, it is remarkable that, with appropriate interpretations, many known theorems for eigenvalues of Hermitian matrices have symplectic versions. See \cite{bj}, \cite{jm} and references therein. In the formulation of our Theorem 1 we have chosen one particular interpretation for the \q{diagonal} of $A.$ Another is the $n$-vector $d_s(\mathfrak{D}_s(A)).$ Given positive vectors $x,y,$ necessary and sufficient conditions for the existence of a positive definite matrix $A$ with $d_s(A)=y$ and $d_s(\mathfrak{D}_s(A))=x$ have been found in \cite{sanders}. These conditions consist of \eqref{eq7} and one additional inequality. Formally, this is an analogue of the Sing-Thompson theorem comparing the diagonal of a matrix and its singular values, which is quite different from the Schur-Horn theorem. \\\par
In Theorem \ref{thm1} we chose the geometric mean $[\Delta(A_{11})\Delta(A_{22})]^{1/2}$ of $\Delta(A_{11})$ and $\Delta(A_{22})$ for our \q{diagonal}.
We could have chosen the arithmetic mean
$$\Delta_c(A)=\frac{\Delta(A_{11})+\Delta(A_{22})}{2},$$
and there is a good reason for this alternative choice.
In many calculations with symplectic eigenvalues expressions like $\frac{1}{2}[\langle u,Au\rangle+\langle v,Av\rangle],$ where $u,v$ are vectors with $\langle u,Jv\rangle =1,$ play the same role as the \q{Rayleigh quotients} $\langle x,Ax\rangle$ with $\langle x,x\rangle=1$ in the ordinary eigenvalue problem. With this choice again we get a version of the Schur-Horn theorem:
\begin{thm}\label{thm3}
Let $A$ be any $2n\times 2n$ positive definite matrix. Then we have the weak majorisation
\begin{equation}
\Delta_c(A)\prec^w d_s(A).
\end{equation} Conversely, if $x,y$ are two positive $n$-vectors with $x\prec^w y,$ then there exists a $2n\times 2n$ positive definite matrix $A$ such that $x=\Delta_c(A)$ and $y=d_s(A).$
\end{thm}
\begin{proof}
Since $\Delta_s(A)\leq \Delta_c(A),$ the weak majorisation (11) is a consequence of \eqref{eq5}. To prove the converse, proceed as in the proof of Theorem 1. Define the matrix $B$ exactly as was done there. Let $\alpha=(\alpha_1,\ldots,\alpha_n)$ be an $n$-vector with positive entries, and let $M_\alpha$ be the $2n\times 2n$ diagonal matrix with diagonal entries $(\alpha_{1}^{1/2},\ldots,\alpha_{n}^{1/2},\alpha_{1}^{-1/2},\ldots,\alpha_{n}^{-1/2}).$ This is a symplectic matrix. Let $A_\alpha=M_{\alpha} BM_{\alpha}^T.$ Then $d_s(A_{\alpha})=y$ for all $\alpha,$ and $$\Delta(A_{\alpha})=(\alpha_1z_1,\ldots,\alpha_nz_n,\alpha_{1}^{-1}z_1,\ldots,\alpha_{n}^{-1}z_n).$$ Hence, $$\Delta_c(A_{\alpha})=\frac{1}{2}((\alpha_1+\alpha_{1}^{-1})z_1,\ldots,(\alpha_n+\alpha_{n}^{-1})z_n).$$
Now $z_j\leq x_j$ for all $j.$ The function $f(t)=\frac{1}{2}(t+t^{-1})$ takes the value $1$ at $t=1$ and increases monotonically in $[1,\infty).$ Hence for each $j,$ there exists $\alpha_j\geq 1$ such that $x_j=\frac{1}{2}(\alpha_j+\alpha_{j}^{-1})z_j.$ For this $\alpha,$ let $A_\alpha=A.$ Then $A$ is a $2n\times 2n$ positive definite matrix with $d_s(A)=y$ and $\Delta_c(A)=x.$ This completes the proof.
\end{proof}
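For $n=2$ the whole construction in this proof can be verified numerically, since the orthogonal matrix required by Horn's theorem is then a single rotation with a closed form. In the sketch below, the vectors $y,z,x$ are our own illustrative choices (with $z\prec y$ and $z\leq x$), and the symplectic diagonal scaling is chosen so that the diagonal of $A_\alpha$ is $(\alpha_jz_j,\alpha_j^{-1}z_j)$, as in the proof:

```python
import numpy as np

y = np.array([5.0, 1.0])   # target symplectic eigenvalues
z = np.array([4.0, 2.0])   # z is majorised by y
x = np.array([4.5, 3.0])   # z <= x entrywise

# Rotation with diag(R Y R^T) = z: cos^2(t) y1 + sin^2(t) y2 = z1.
c2 = (z[0] - y[1]) / (y[0] - y[1])
ct, st = np.sqrt(c2), np.sqrt(1.0 - c2)
R = np.array([[ct, -st], [st, ct]])
C = R @ np.diag(y) @ R.T
assert np.allclose(np.diag(C), z)

B = np.block([[C, np.zeros((2, 2))], [np.zeros((2, 2)), C]])

# alpha_j >= 1 solving x_j = (alpha_j + 1/alpha_j) z_j / 2.
t = x / z
alpha = t + np.sqrt(t**2 - 1.0)
# Symplectic diagonal scaling: diag of M B M^T is (alpha*z, z/alpha).
M = np.diag(np.concatenate([np.sqrt(alpha), 1.0 / np.sqrt(alpha)]))
A = M @ B @ M.T

Delta_c = 0.5 * (np.diag(A)[:2] + np.diag(A)[2:])
assert np.allclose(Delta_c, x)

def symplectic_eigs(P):
    m = P.shape[0] // 2
    J = np.block([[np.zeros((m, m)), np.eye(m)],
                  [-np.eye(m), np.zeros((m, m))]])
    ev = np.linalg.eigvals(J @ P)
    return np.sort(ev.imag[ev.imag > 0])[::-1]

# Symplectic congruence preserves the symplectic eigenvalues.
assert np.allclose(symplectic_eigs(A), y)
```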
\vskip.2in
\section{Introduction}\label{intro}
The spread of opinions, actions, memes, information, and misinformation in a population has received intense scrutiny in sociology, economics, computer science, physics, and many other fields \cite{porter2016,fowlerreview,yamir-jcn2013,granovetter78,valente-book,jackson2013,jackson2014,kkt2003,watts2002,loreto2009,Christakis07,centola2007,dodds2005,aral2009,ugander2012,goel-preprint,mollison1977}. Such phenomena --- including the spread of defaults of banks, norms in populations, and products or new practices in populations --- are often modeled as contagion processes that spread from node to node in a network \cite{elsinger_risk, Centola2005_norms, 10.1257/aer.99.5.1899}, in analogy with the spread of infectious diseases in a population.
There are some similarities between social and biological contagions \cite{yy-epidemic}, and phrases like ``going viral'' arise from such similarities. Compartmental models, which were first developed in the context of biological contagions \cite{rom-review2015,porter2016}, are often used for modeling social contagions, although there are substantial differences between the spread of information and the spread of diseases \cite{Centola1194, PhysRevE.88.012818}. For example, media broadcasts affect the masses differently in social versus biological contagions \cite{goel-preprint}.
In addition to modeling spreading processes themselves, it is important to consider the effect of network structure on contagions \cite{porter2016,rom-review2015}. For example, it can have an important effect on phenomena such as the peak size and temporal development of outbreaks \cite{rom-review2015, taylor2015, colizza-prx2015, gleeson2013PRX, gleeson_2008_PRE, centola2007, gleeson-watts-weighted, MelnikChaos13, PhysRevE.88.012818, Centola2005_norms}. Various approaches have been used to understand such effects, including coupled differential equations, discrete dynamical systems, stochastic processes, agent-based models, and game theory \cite{porter2016,rom-review2015}. Of course, these different approaches are not completely independent from each other, as many have important connections to each other. For example, it is possible to construe a dynamical system on a network as an agent-based model (although the choice of different terminology often belies substantial differences in perspective) \cite{porter2016,loreto2009}, and some threshold models of contagions can also be derived from a game-theoretic perspective \cite{NWS:8888780}.
In the study of social contagions, many studies suppose that some small fraction of the nodes are infected initially, and they ask when a meme or disease can spread widely in a network \cite{porter2016,gleeson2013PRX}. When many nodes have adopted the meme (or become infected, in the context of a disease), it is said that a \textit{cascade} has occurred \cite{watts2002,goel-preprint}. A cascade can be either good or bad: a game developer may dream about his or her app becoming viral, but cascading bank defaults due to systemic risk are a source of fear and dread in the financial sector. Seemingly viral spread of misinformation was also a prominent aspect of the 2016 U.S. presidential campaign and election.
As our discussion suggests, in applications ranging from finance \cite{elsinger_risk} to meme spreading on Twitter\cite{PhysRevX.6.021019}, researchers are very interested in trying to identify what causes cascading behavior on networks \cite{goel-preprint}. In one prominent family of models, known as \emph{threshold models}, nodes survey their neighborhoods and adopt a meme (i.e., change their state) if sufficiently many of their neighboring nodes have already adopted this meme \cite{porter2016,granovetter78,valente-book, watts2002,gleeson2013PRX}. In most such models (and in most compartmental models), nodes are influenced only by their immediate neighbors, but in many situations (e.g., including social media such as Facebook and LinkedIn), individuals are able to observe actions by individuals beyond those to whom they are connected directly by an edge.\footnote{In fact, the sizes of the observable neighborhoods are different in different media (e.g., Facebook versus LinkedIn), and this can have profound effects on user experience, company algorithms, and more\cite{borgs2012}.}
In such situations, \textit{synergistic} effects can occur, as a node can be influenced by multiple nodes at the same time, and the combined influence differs from the sum of the individual influences. Synergistic effects can either increase or decrease the chance that a node will adopt a meme.
Synergistic effects can contribute to the dynamics of spreading processes in a diverse variety of contexts. Examples include the spread of behavior\cite{Centola1194}, the transmission of pathogens\cite{ludlam2012applications}, and the spread of new opportunities for farm activities among vineyards that form a wine route together\cite{brunori2000synergy}. Other phenomena with synergistic effects, which should be interesting to examine in the context of synergistic dynamical processes on networks, include the classical psychological ``sidewalk experiment'' with people staring up at the sky \cite{sidewalk}, increased value from the merging of companies (see, e.g., \cite{sudi1996}), and ``learning'' of delinquent and criminal behavior \cite{ballester2010}.
A few years ago, P\'erez-Reche et al. \cite{perez2011synergy} introduced a simple model of synergistic spreading in the context of a compartmental model for a biological contagion, and they examined its dynamics on a square lattice in two dimensions (2D). Their model was based on the standard susceptible--infectious--removed (SIR) model \cite{rom-review2015,porter2016}, in which an \textit{infectious} (I) node infects a \textit{susceptible} (S) neighbor at a constant rate $r_{\mathrm{SI}}=\alpha$. In this SIR model, an infectious node is infectious for a time $\tau$ before it switches states to \textit{removed} (R) (or ``recovered'', if one is less fatalistic), and then it can never become susceptible or infectious again.
P\'erez-Reche et al. generalized this SIR model so that $r_{\mathrm{SI}}$ includes not only the parameter $\alpha$ but also a synergy term $r_{\mathrm{syn}}=\beta m_i$, where $m_i$ is the number of nodes that contribute to the synergy when updating node $i$.
They used a linear form of synergy: $r_{\mathrm{SI}}=\alpha+\beta m_i$. For $\beta<0$, the synergy is \textit{interfering}, as synergy lowers the chance that node $i$ becomes infectious; for $\beta>0$, the synergy is \textit{constructive}, as synergy increases the chance of node $i$ to become infectious. For $\beta=0$, the model in \cite{perez2011synergy} reduces to the standard SIR model, and there is no synergy.
Several studies have followed up on the work of P\'erez-Reche et al. in \cite{perez2011synergy}. We mention some examples in passing now, and we give some more details in Section \ref{model-syn}. In \cite{Perez2013}, Taraskin et al. extended the theoretical analysis of \cite{perez2011synergy} and performed numerical computations in multiple types of $2D$ lattices. Reference \cite{PhysRevE.89.052811} studied a so-called ``generalized epidemic process'' (GEP), with interfering or constructive synergistic effects depending on the value of a parameter that models the amount of memory in social interactions. Reference~\cite{Perez2015} considered the effect of edge rewiring and nearest-neighbor synergy (so-called ``r-synergy'') on the invasiveness of diseases in various 2D lattices. Finally, a very recent paper \cite{liu2016explosive} explored a model for reversible synergistic spreading. The model was based on the susceptible--infectious--susceptible (SIS) model rather than the SIR model, and infectious nodes become susceptible again at some rate $\mu$. They defined synergistic effects (so-called ``d-synergy'') using a synergy parameter that depends on the next-nearest neighbors of a susceptible node. They found a critical value of their synergy parameter, above which the infectious fraction of nodes increases abruptly and dramatically.
One thing that the above models have in common is that the update rules for node states include stochasticity. To facilitate analytical treatments of problems and to help isolate the effects of novel features in a model, it is often convenient to use deterministic update rules \cite{porter2016}. To better understand synergistic effects in spreading processes on networks, it is thus useful to examine such effects in models with deterministic update rules. By simplifying the framework in this way, we hope to improve understanding of synergistic effects in spreading processes. We will use a deterministic model in the form of a linear threshold model \cite{granovetter78,valente-book,watts2002}; in particular, we will consider a binary (i.e., two-state) model in which a node can be \textit{active} or \textit{inactive}. In the context of social contagions, ``inactive'' nodes are susceptible, and ``active'' nodes are infected. Upon becoming infected, a node remains infected forever. We also focus on nearest-neighbor interactions (and, in particular, on what P\'erez-Reche et al.\ call ``r-synergy''), although our approach is also amenable to models with next-nearest-neighbor interactions (what P\'erez-Reche et al.\ call ``d-synergy'').
In the present paper, we introduce two models for the synergistic spread of memes on networks using threshold models with deterministic update rules. We develop analytical approximations for the spread of memes on networks constructed using a configuration model. To test our analytical approximations, we consider degree distributions from both empirical data and standard synthetic network models. We also compare the synergistic spread of memes on two empirical networks to configuration-model networks that we construct using degree distributions derived from the degree sequences of these two networks. We thereby hope to learn whether synergistic effects can produce different dynamics on empirical versus synthetic networks.
The rest of our paper is organized as follows. In Section \ref{model-syn}, we provide additional discussion of existing attempts to model synergy in spreading processes on networks. In Sections \ref{sec:synergistic}, \ref{sec:initial}, and \ref{sec:analytical}, we introduce our models for synergistic spreading on networks, examine this model on two empirical networks, and develop an analytical approximation to describe the fraction of activated nodes with degree $k$ and threshold $\phi$ in a network as a function of time. We also demonstrate that we expect certain values of a synergy parameter in the models to lead to abrupt changes in the dynamics. In Section \ref{synth}, we study synergistic spreading processes on several families of random networks. In Sections \ref{sec:3reg} and \ref{sec:ER}, we simulate synergistic spreading on $3$-regular and Erd\H{o}s\--R\' enyi{} (ER) random networks and compare our analytical approximation to the simulated spreading processes. In Section \ref{sec:realistic}, we simulate synergistic spreading on networks that we construct using the configuration model with degree distributions from two empirical networks. In all of these networks, we observe that our analytical approximation indicates when it becomes possible for a spreading meme to activate a node with degree $k$ and threshold $\phi$. We conclude in Section \ref{conc}.
\section{Modeling Synergy in Spreading Processes} \label{model-syn}
We now give some additional details about the model of synergistic spreading that was introduced by P\'erez-Reche et al. in \cite{perez2011synergy}. They defined two types of synergistic dynamics: (1) \textit{r-synergy}, in which $m_i+1$ is the total number of infectious nearest neighbors that simultaneously attempt to infect a focal susceptible node $i$; and (2) \textit{d-synergy}, in which $m_i$ is the number of infectious nodes that are connected to the infectious nearest-neighbor that attempts to infect the susceptible node $i$.
In the simulations of P\'erez-Reche et al.~\cite{perez2011synergy}, only the node at the center of the square grid is infectious at time $t=0$; all other nodes start out in the susceptible state. P\'erez-Reche et al. called a disease ``invasive'' if it has a nonzero probability of reaching all four edges of the square grid before it is no longer possible to infect any other nodes. They illustrated that the value of a synergy parameter can affect whether a disease is invasive or noninvasive. They also illustrated that the value of a synergy parameter can affect whether an infectious host can infect more than one node.
Several papers have built on \cite{perez2011synergy} and produced additional insights on synergistic spreading dynamics on networks. In \cite{Perez2013}, Taraskin et al. extended the theoretical analysis from \cite{perez2011synergy} by taking into account that the neighborhood of a node might change during its infectious period. They also simulated spreading via r-synergy on several types of 2D lattices. (Reference~\cite{perez2011synergy} considered only square lattices.) Each node in their lattices has the same degree (i.e., coordination number). They suggested that the synergy effects are most prominent in lattices with high node degree because of the increased number of possible contributors to the synergy effects. They also reported that lattices with high coordination number can have invasive synergistic diseases even when the transmission rate $\alpha \to 0$.
Recently, reference \cite{Perez2015} considered the effect of r-synergy and rewiring of edges in various 2D lattices (square, triangular, and honeycomb) on the invasiveness of diseases. They examined a synergistic SIR model with three different expressions for the synergistic contribution to the infection rate. One of these expressions was the linear one introduced in \cite{perez2011synergy} and mentioned above; the other two were the exponential form $r_{\mathrm{SI}} = \alpha e^{\beta n_i}$ and its linear approximation for small $\beta n_i$.
They considered spatial small-world (SSW) networks, in which edges in a lattice are rewired from a neighbor to a ``nearby'' node (within a maximum distance) with a certain probability, and small-world (SW) networks, in which there is no maximum distance for the rewiring. They studied the invasiveness of these synergistic contagions on these networks as a function of the number $k$ of nearest neighbors (i.e., coordination number) of the nodes. They reported three findings: (1) if the coordination number is sufficiently small (e.g., $k=3$), rewiring always lowers the rate $\alpha$ at which a contagion becomes invasive, independent of the value of the synergy parameter $\beta$; (2) regardless of the coordination number, rewiring lowers the value of $\alpha$ at which contagions with interfering synergy or weakly constructive synergy $\beta \in (0,\beta_*)$ become invasive; and (3) in networks with sufficiently large coordination number (in particular, $k \ge 4$), rewiring increases the value of $\alpha$ at which contagions with a synergy parameter above a specific value (i.e., $\beta > \beta_* > 0$) become invasive.
Reference \cite{PhysRevE.89.052811} examined a so-called ``generalized epidemic process (GEP)'' with interfering or constructive synergistic effects depending on the value of a parameter that models the amount of memory in social interactions. In their GEP, the probability that a susceptible node is infected by an infectious neighbor depends on the number of previous unsuccessful attempts to infect that node. The first attempt to infect a node succeeds with a rate $\lambda$, and all subsequent attempts succeed with another rate $T$. Thus, the synergy is constructive for $T>\lambda$ but interfering for $T<\lambda$. Their updating rule differs slightly from those in the above studies: instead of infectious nodes attempting to infect their neighbors, susceptible nodes choose to ``adopt'' the state of a neighboring node with some probability. They interpreted constructive synergistic effects as social reinforcement, and they showed analytically that there is a continuous phase transition in the outbreak size for a family of modular networks when the social reinforcement is small. They constructed their modular networks by starting with $c$ complete subgraphs of equal size, and then rewiring each edge with independent probability $p$. Thus, lower values of $p$ correspond to more modular networks, and $p=1$ yields an Erd\H{o}s\--R\' enyi{} network.
Using the same family of networks, they also showed that their GEP undergoes a discontinuous phase transition in the contagion outbreak size when social reinforcement is high.
Finally, a recent paper \cite{liu2016explosive} explored a variant of d-synergy. They used an SIS model to study the effect of d-synergy in networks. In contrast to the aforementioned studies, the spreading dynamics of this model is reversible, as infectious nodes eventually become susceptible again. They defined the probability that a susceptible node is successfully infected by an infectious node as $1-(1-\alpha)^{1+\beta n}$, where $n$ is the number of infectious nodes that are adjacent to the infectious node that is attempting to infect the susceptible node, $\alpha$ is the base transmission rate, and $\beta$ is the synergy parameter. As in standard SIS models, an infectious node becomes susceptible again at a rate $\mu$. They found a critical value for the synergy parameter in their model. Below this value, the steady-state density of infectious nodes increases continuously with the base infection rate. Above this value, the steady-state density of infectious nodes in the network increases in an ``explosive'' manner (i.e., abruptly and drastically) as a function of the base infection rate.
The models that we discussed above all have stochastic update rules, which can make it difficult to study models analytically. In the present paper, we consider synergetic dynamics in models with deterministic update rules. This facilitates analytical treatments, which we will use to shed light on synergistic spreading processes on networks.
\section{Synergistic Threshold Models}\label{sec:synergistic}
Perhaps the most popular deterministic models of meme spreading are \emph{threshold models} of social influence \cite{porter2016,granovetter78,valente-book,watts2002,kkt2003,centola2007}. In the simplest type of threshold model, which is a generalization of bootstrap percolation \cite{miller-roof2015,chalupa1979}, one chooses a threshold $\phi_i$ for each node independently from a probability distribution $f(\phi)$ at time $t = 0$ (in traditional bootstrap percolation, all nodes have the same threshold), and a node becomes ``active'' (i.e., it adopts the meme) if the fraction of its neighbors (or, in some variants, the number of its neighbors) that are active is at least this threshold. Because of the simplicity of basic threshold models, one can derive analytical approximations for cascade conditions in a variety of settings and in various extensions of the model \cite{watts2002,kkt2003,holme2005,gleeson2013PRX,MelnikChaos13}.
We seek to develop a synergistic threshold model. We focus on r-synergy and hence on nearest-neighbor interactions. (It is also worth thinking about d-synergy models, but we leave this for future work.) We examine networks that consist of unweighted, undirected $N$-node graphs. At each point in time, a node can be in one of two states: \textit{inactive} ($S_0$) or \textit{active} ($S_1$). Inactive nodes exert no influence on their neighbors, and active nodes exert some amount of influence on their neighbors. The total amount of influence exerted by all neighbors of a node $i$ gives the \emph{peer pressure} experienced by node $i$. Each node $i$ has a threshold $\phi_i$ drawn from a distribution $f(\phi)$ at time $t = 0$. We also activate a seed set of nodes at $t = 0$. In all of our simulations, the seed consists of a single node chosen uniformly at random. Whenever we consider updating node $i$ (which we do in discrete time with synchronous updating), it becomes active if and only if the peer pressure on it is at least $\phi_i k_i$, where $k_i$ is the degree of node $i$.
We now construct a response function $F(n_i,k_i,\phi_i,\beta)$ that depends on the number $n_i$ of node $i$'s active neighbors, its degree $k_i$, its threshold $\phi_i$, and a synergy parameter $\beta$ that we will explain below. The response function is a non-decreasing function of $n_i$ and gives the probability that a node switches from the inactive state to the active one\cite{gleeson_2008_PRE}. One can use such a response function to describe numerous models of binary-state dynamics, such as bond and site percolation and the Watts threshold model (WTM) \cite{gleeson2013PRX}. We express the response function using a peer-pressure function $\Xi(n_i,\beta)$ by writing
\begin{equation} \label{eq:response_def}
F(n_i,k_i,\phi_i,\beta) =
\begin{cases}
0\,, & \text{if} \quad \Xi(n_i,\beta) < \phi_ik_i\,, \\
1\,, & \text{otherwise}\,.
\end{cases}
\end{equation}
We want to incorporate synergistic effects in $\Xi(n_i,\beta)$. P\'erez-Reche et al.~\cite{perez2011synergy} defined \textit{constructive synergy} and \textit{interfering synergy} by comparing their dynamics to those of a standard SIR model, which their synergistic model generalizes. They defined the rate with which a susceptible node becomes infected as
\begin{equation}
\lambda = \max\{0, \alpha + (n_i-1) \beta\}\,,
\end{equation}
where $\alpha$ is a base infection rate in an SIR model without synergy, $\beta\in \mathbb{R}$ is a synergy parameter, and $n_i$ is the number of infectious nodes that exert synergistic influence on susceptible node $i$.
Whenever $\beta \neq 0$ and $n_i > 1$, this system exhibits synergy. If $\beta<0$, the synergy effects lower the rate with which susceptible nodes become infected from the combined effort of multiple infectious nodes exerting influence (as compared to the corresponding SIR model without synergy). Smaller (i.e., more negative) values of $\beta$ correspond to more powerful interfering synergy. In contrast, if $\beta>0$, the infection rate is larger, and larger $\beta$ results in more powerful constructive synergy. If $\beta = 0$, the infection rate is the same, and there is no synergy. Additionally, in P\'erez-Reche et al.'s model\cite{perez2011synergy}, synergy exists only if the number of nodes (called ``hosts'') that exert influence on a target node is strictly larger than $1$.
If the number of hosts is $1$, the dynamics reduces to that of the corresponding standard SIR model.
In the present paper, we draw inspiration from \cite{perez2011synergy} in terms of how we define interfering synergy and constructive synergy, but instead of generalizing a compartmental model of biological contagions, we start from the Watts threshold model (WTM) of social influence from \cite{watts2002}.
In the WTM, $\Xi(n_i) = n_i$. We design two peer-pressure functions, which depend on the number $n_i$ of active neighbors and on a synergy parameter $\beta$. We require that
\begin{equation}
\Xi(n_i,\beta)
\begin{cases}
=0 \,, & \text{if } n_i= 0 \,, \\
>n_i \,, & \text{if } \beta >0 \text{ and } n_i >1\,, \\
= n_i \,, & \text{if } \beta = 0 \text{ or } n_i = 1\,, \\
< n_i \,, & \text{if } \beta < 0 \text{ and } n_i > 1\,.
\end{cases}
\label{eq:model_demands}
\end{equation}
The two peer-pressure functions that we consider are
\begin{align}
\Xi_{\text{multiplicative}} &= (1+\beta)^{n_i-1}n_i \label{eq:multi_func} \,, \\
\Xi_{\text{power}} &= n_i^{1+\beta} \label{eq:power_func} \,.
\end{align}
Naturally, these are not the only two functions that satisfy our demands in Eq.~\eqref{eq:model_demands}. In Section~\ref{sec:3reg}, we will argue that any peer-pressure function that is non-decreasing and continuous in the synergy parameter $\beta$ exhibits the same qualitative behavior as these two functions (in the sense of experiencing the same types of bifurcations).
If a node is \textit{vulnerable} (i.e., it can be activated by a single active neighbor), it remains vulnerable if one introduces synergy using Eq.~\eqref{eq:multi_func} or Eq.~\eqref{eq:power_func}. Moreover, no non-vulnerable node can become vulnerable as a result of the synergy introduced using Eq.~\eqref{eq:multi_func} or Eq.~\eqref{eq:power_func}. We seek to examine when synergy effects, as encapsulated by the parameter $\beta$, change the number of active neighbors that can activate a degree-$k$ node. That is, we seek to examine when synergy can assist or hinder the spread of a meme through a network. We can calculate when a specific change like this occurs. Suppose that a node $i$ with degree $k_i$ can be activated when there are at least $m_i$ active neighbors for $\beta = 0$. We wish to determine the $\beta$ values for which $l_i$ active neighbors are sufficient to activate node $i$. For the power synergy model \eqref{eq:power_func}, we calculate
\begin{align}
(l_i)^{1+\beta} &\ge \phi_ik_i\\
\Rightarrow \beta &\ge \frac{\log \phi_i k_i}{\log(l_i)}-1\,. \label{eq:betacrit_power}
\end{align}
For the multiplicative synergy model \eqref{eq:multi_func}, we obtain
\begin{equation}\label{eq:betacrit_multi}
\beta \ge \left(\frac{\phi_i k_i}{l_i} \right)^{1/(l_i-1)}-1\,.
\end{equation}
More generally, except for $m_i=1$ or $l_i = 1$ (by construction, nodes cannot become or stop being vulnerable from synergistic effects), we can solve for the value of $\beta$ at which any $l_i \in \mathbb{N}$ active neighbors suffice to activate a node with degree $k_i$ and threshold $\phi_i$. Hence, whether a node is vulnerable is determined exclusively by its threshold $\phi_i$ and degree $k_i$; if a node is not vulnerable, the synergy parameter can alter the number of active neighbors that are needed to activate it.
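These critical values of $\beta$ arise from solving $\Xi(l_i,\beta)=\phi_i k_i$ for $\beta$, and they are easy to verify numerically. The sketch below uses illustrative values of our own choosing ($k_i=10$, $\phi_i=0.35$, $l_i=2$; without synergy such a node needs $\lceil \phi_i k_i\rceil = 4$ active neighbors) and checks that activation by two neighbors switches on exactly at the computed critical values:

```python
import numpy as np

def xi_power(n, beta):
    # Power peer-pressure function: Xi = n^(1 + beta)
    return float(n) ** (1.0 + beta)

def xi_mult(n, beta):
    # Multiplicative peer-pressure function: Xi = (1 + beta)^(n - 1) * n
    return (1.0 + beta) ** (n - 1) * n if n > 0 else 0.0

def activates(xi, n, k, phi, beta):
    # Deterministic update rule: active iff Xi(n, beta) >= phi * k
    return xi(n, beta) >= phi * k

k, phi, l = 10, 0.35, 2
beta_pow = np.log(phi * k) / np.log(l) - 1.0        # power synergy
beta_mul = (phi * k / l) ** (1.0 / (l - 1)) - 1.0   # multiplicative synergy

eps = 1e-9
for xi, beta_c in [(xi_power, beta_pow), (xi_mult, beta_mul)]:
    assert not activates(xi, l, k, phi, beta_c - eps)
    assert activates(xi, l, k, phi, beta_c + eps)
```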
When we initiate our simulations with only a single node as a seed, there is a risk that this seed is surrounded --- or is part of a group of vulnerable nodes of insignificant size that are surrounded --- by non-vulnerable nodes. Because such situations arise from the choice of threshold distribution $f(\phi)$ rather than from synergistic effects, we discard such simulations throughout this paper.
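To make the update rule concrete, the following sketch simulates the synchronous dynamics with power synergy \eqref{eq:power_func} on a small Erd\H{o}s\--R\' enyi{} graph. The graph size, threshold, and synergy values here are illustrative choices of ours, not the networks or parameters used in our simulations:

```python
import numpy as np

def simulate(adj, phi, beta, seed_node, tmax=100):
    """Synchronous threshold dynamics with power synergy Xi(n) = n^(1+beta).
    Activation is irreversible; a node activates when Xi(n_i) >= phi * k_i."""
    N = adj.shape[0]
    k = adj.sum(axis=1)
    active = np.zeros(N, dtype=bool)
    active[seed_node] = True
    history = [active.copy()]
    for _ in range(tmax):
        n_active = adj @ active                       # active neighbours
        pressure = np.where(n_active > 0, n_active ** (1.0 + beta), 0.0)
        new = active | ((n_active > 0) & (pressure >= phi * k))
        if np.array_equal(new, active):               # fixed point reached
            break
        active = new
        history.append(active.copy())
    return history

# Small Erdos-Renyi graph built from a symmetrised adjacency matrix.
rng = np.random.default_rng(1)
N, p = 200, 0.03
upper = np.triu(rng.random((N, N)) < p, 1)
adj = (upper | upper.T).astype(float)

hist_con = simulate(adj, phi=0.3, beta=0.5, seed_node=0)    # constructive
hist_int = simulate(adj, phi=0.3, beta=-0.5, seed_node=0)   # interfering
# Constructive synergy can only help: its final active set dominates.
assert hist_con[-1].sum() >= hist_int[-1].sum()
```

Because both peer-pressure functions are non-decreasing in $\beta$ and the dynamics is monotone, the active set for the constructive run contains that of the interfering run at every time step, which the final assertion reflects.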
\section{Synergy in two Empirical Networks}\label{sec:initial}
We start by examining the synergistic threshold model \eqref{eq:power_func} on the network of condensed-matter physics paper coauthorships from \cite{leskovec_graph_2007}. (The network is available at \url{https://snap.stanford.edu/data/}.) In this network, a node represents an author, and there is an (undirected) edge between nodes $i$ and $j$ if the corresponding authors coauthored at least one paper. We suppose for simplicity that all nodes have a threshold of $\phi^* = 1/10$.
We show our results in Fig.~\ref{fig:condmat_actual}. We use power synergy \eqref{eq:power_func}, and we show interfering synergy ($\beta = -0.80$) in the left panel (a) and constructive synergy ($\beta = 0.15$) in the right panel (b). Data points correspond to the mean fraction of degree-$k$ nodes that are active at each time step in question. Among our simulations, we include only realizations in which the meme activates at least $0.5\%$ of the network. For each degree, the fraction of nodes that are activated for interfering synergy is at most that for constructive synergy. In panel (b), we show the $k=2$ curve from panel (a) for comparison. We see that it takes longer for the meme to spread in the network for interfering synergy than it does for constructive synergy.
\begin{figure*}[tb]
\includegraphics[width=.49\linewidth]{Pics/condmat_beta-080_94oo110-eps-converted-to.pdf}
\hfill
\includegraphics[width=.49\linewidth]{Pics/condmat_beta015_31oo110_interfering_included-eps-converted-to.pdf}
\caption{Example of the behavior of the synergistic threshold model defined with \eqref{eq:power_func} using (a) interfering synergy (with $\beta = -0.80$) and (b) constructive synergy (with $\beta = 0.15$). In panel (b), we show part of the curve for $k=2$ from the case of interfering synergy for comparison. Because we choose the seed active node uniformly at random, there is a chance that only the seed is activated. We do not take such runs into consideration. For the interfering synergy plot, only the seed was activated in $94$ of $110$ runs; for constructive synergy, this occurred in $31$ of $110$ runs. For the simulations in this figure, we ran the synergistic threshold model on the condensed-matter physics coauthorship network from \cite{leskovec_graph_2007}, and the threshold for each node is $\phi = \phi^* = 1/10$. For each degree, a smaller fraction of nodes becomes active for interfering synergy than for constructive synergy. We also see that it takes longer for the meme to spread in the network for interfering synergy than for constructive synergy.
}
\label{fig:condmat_actual}
\end{figure*}
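The precise form of the power peer-pressure function \eqref{eq:power_func} is defined earlier in the paper and is not restated in this section. As a hypothetical sketch, assuming the form $\Xi(n) = n^{1+\beta}$ (which is consistent with the activation constraints quoted later in the text) and activation when $\Xi(n_i) \ge \phi_i k_i$:

```python
# Sketch of the synergistic response function for "power" synergy.
# Assumption: the power peer-pressure function is Xi(n) = n**(1 + beta),
# inferred from the activation constraints quoted in the text (it is not
# restated in this section), and a degree-k node with threshold phi
# activates when Xi(n) >= phi * k for n active neighbors.
def xi_power(n, beta):
    """Peer pressure exerted by n active neighbors (assumed power form)."""
    return n ** (1.0 + beta)

def response(n, k, phi, beta, xi=xi_power):
    """True if n active neighbors suffice to activate a degree-k node
    with threshold phi under peer-pressure function xi."""
    return n >= 1 and xi(n, beta) >= phi * k
```

With $\phi^* = 1/10$ and $\beta = -0.85$ (the values used for the collaboration network in Section~\ref{sec:CMP}), this form reproduces the cutoff discussed there: a degree-$15$ node can still be activated by all of its neighbors, but a degree-$16$ node cannot.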
We now examine our synergistic threshold model on another empirical network, the {\sc Northwestern25} network from the {\sc Facebook100} data set \cite{traud_social_2012}. This data set contains the complete set of people and friendships of $100$ different U.S. universities from one day in autumn $2005$. {\sc Northwestern25} is the data from Northwestern University. We show our results in Fig.~\ref{fig:facebook_original}. We suppose that all nodes have a threshold of $\phi^* = 1/33$, and we again examine power synergy with interfering synergy (with $\beta = -0.80$) in panel (a) and constructive synergy (with $\beta = 0.15$) in panel (b). For comparison, we include in panel (b) the curve for $k=13$ from the case of interfering synergy. We again see that it takes longer for the meme to spread in the network for interfering synergy than it does for constructive synergy, and that the fraction of nodes that are active is smaller or of equal size for interfering synergy than it is for constructive synergy.
\begin{figure*}[tb]
\includegraphics[width=0.49\linewidth]{Pics/Facebook_original_interfering-eps-converted-to.pdf}
\hfill
\includegraphics[width=0.49\linewidth]{Pics/Facebook_original_constructive-eps-converted-to.pdf}
\caption{Example of the behavior of the synergistic threshold model defined with \eqref{eq:power_func} using (a) interfering synergy (with $\beta = -0.80$) and (b) constructive synergy (with $\beta = 0.15$). In panel (b), we show the curve for $k=13$ for the case of interfering synergy for comparison. Because we choose the seed active node uniformly at random, there is a chance that only the seed is activated. We do not take such runs into consideration. For the interfering synergy plot, only the seed was activated in $30$ of $110$ runs; for constructive synergy, this occurred in $24$ of $110$ runs. In the simulations in this figure, we ran the synergistic threshold model on the {\sc Northwestern25} network from the {\sc Facebook100} data set \cite{traud_social_2012}, and the threshold of each node is $\phi = \phi^* = 1/33$. For each degree, a smaller fraction of nodes becomes active for interfering synergy than for constructive synergy. We also see that it takes longer for the meme to spread in the network for interfering synergy than it does for constructive synergy.
}\label{fig:facebook_original}
\end{figure*}
\section{Analytical Approximation of Number of Active Nodes Versus Time}\label{sec:analytical}
We now develop an analytical approximation that describes the fraction of active nodes in a network as a function of time for any choice of peer-pressure function, degree distribution, and threshold distribution.
Recall that we employ synchronous updating in our simulations. Because our model is deterministic, this choice does not affect the final fraction of active nodes. We activate $1$ seed node of the $N$ total nodes at time $t=0$, and it is convenient for the theory to express it as a fraction $\psi_k^{(\phi)} = 1/N$ of the nodes with degree $k$ and threshold $\phi$. See \cite{gleeson_2007_PRE, gleeson_2008_PRE} for a discussion of the effects on cascade size of using a single active node as a seed for the WTM, and see \cite{fennell2016} for a recent discussion of issues with synchronous versus asynchronous updating (where asynchronous updating, such as through a Gillespie algorithm, is meant to model continuous-time dynamics) for dynamical processes on networks.
To calculate the fraction $\rho_k^{(\phi)}(n+1)$ of active nodes with degree $k$ and threshold $\phi$ at time step $n+1$, we write the recursive formula (as in, e.g., \cite{gleeson_2007_PRE,gleeson_2008_PRE,MelnikChaos13})
\begin{equation}\label{eq:rhok}
\rho_k^{(\phi)}(n+1) = \psi_k^{(\phi)} + (1-\psi_k^{(\phi)})\sum_{j=0}^{k}B^k_j(\bar{q}_k^{(\phi)}(n))F(j,k,\phi,\beta)\,,
\end{equation}
where $\bar{q}_k^{(\phi)}(n)$ is the probability that a neighbor of an inactive node with degree $k$ and threshold $\phi$ chosen uniformly at random is active at time step $n$, and
\begin{equation}
B^k_j(p) = \binom{k}{j}p^j(1-p)^{k-j}\,.
\end{equation}
We can write $\bar{q}_k^{(\phi)}(n)$ as a function of $q_{k'}^{(\phi')}(n)$, the probability that, for a given inactive node, a neighbor with degree $k'$ and threshold $\phi'$ is active at time step $n$. This probability is
\begin{equation}
\bar{q}_k^{(\phi)}(n) = \frac{\sum_{k',\phi'}P\left((k,\phi),(k',\phi')\right)q_{k'}^{(\phi')}(n)}{\sum_{k',\phi'}P\left((k,\phi),(k',\phi')\right)}\,,
\end{equation}
where $P\left((k,\phi),(k',\phi')\right)$ is the probability that a node with degree $k$ and threshold $\phi$ is adjacent to a node with degree $k'$ and threshold $\phi'$. For an inactive node, the probability that a neighboring node with degree $k$ and threshold $\phi$ is active is
\begin{equation}\label{eq:qk}
q_k^{(\phi)}(n+1) = \psi_k^{(\phi)}+(1-\psi_k^{(\phi)})\sum_{j=0}^{k-1}B^{k-1}_j(\bar{q}_k^{(\phi)}(n))F(j,k,\phi,\beta)\,.
\end{equation}
The only difference between Eq.~\eqref{eq:qk} and Eq.~\eqref{eq:rhok} stems from the fact that the degree-$k$ neighbor, which we consider in \eqref{eq:qk}, has a maximum of $k-1$ active neighbors if it is adjacent to at least one inactive node. In these equations, we have assumed that each neighbor of node $i$ is independent of the others, so we are assuming that the network is locally tree-like \cite{localtreeapprox_mason,porter2016}.
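For a $k$-regular random graph with a homogeneous threshold, $\bar{q}_k^{(\phi)} = q_k^{(\phi)}$, so the recursion collapses to a scalar iteration. The following is a minimal sketch, assuming the power peer-pressure form $\Xi(n) = n^{1+\beta}$ (inferred from constraints quoted elsewhere in the text) and $k-1$ binomial slots in the neighbor update, as in standard tree-based theory; the function names are illustrative:

```python
from math import comb

def xi_power(n, beta):
    # Assumed power peer-pressure function Xi(n) = n**(1 + beta).
    return n ** (1.0 + beta)

def F(j, k, phi, beta):
    # Response function: j active neighbors activate a degree-k,
    # threshold-phi node when Xi(j) >= phi * k.
    return 1.0 if j >= 1 and xi_power(j, beta) >= phi * k else 0.0

def binom_pmf(j, m, p):
    # B^m_j(p): probability of j successes in m independent trials.
    return comb(m, j) * p**j * (1.0 - p) ** (m - j)

def tree_approximation(k, phi, beta, psi, steps=200):
    """Iterate the scalar versions of Eqs. (rhok) and (qk) for a
    k-regular random graph with homogeneous threshold phi and seed
    fraction psi; return the time series of rho. On a regular graph
    with one threshold, q-bar equals q."""
    q = psi
    rho_series = []
    for _ in range(steps):
        rho = psi + (1.0 - psi) * sum(
            binom_pmf(j, k, q) * F(j, k, phi, beta) for j in range(k + 1)
        )
        rho_series.append(rho)
        q = psi + (1.0 - psi) * sum(
            binom_pmf(j, k - 1, q) * F(j, k, phi, beta) for j in range(k)
        )
    return rho_series
```

With $\beta = 0$, the power form reduces to the plain WTM, so for a $3$-regular graph with $\phi = 0.32 < 1/3$ the iteration cascades to full activation from a small seed fraction, whereas a homogeneous threshold of $\phi = 1$ does not spread.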
\section{Synergy in Synthetic Networks}\label{synth}
To illustrate our theoretical results, we consider synergistic spreading in several families of random graphs.
\subsection{Synergy in 3-Regular Networks}\label{sec:3reg}
We first examine 3-regular random networks, in which every node has degree 3 and stubs (i.e., ends of edges) are matched uniformly at random. That is, we consider configuration-model networks in which each node has degree 3. We study how synergy effects influence the spread of memes on these networks by examining several values of the parameter $\beta$ for both multiplicative and power synergy. In our numerical simulations, we suppose that a fraction $p_0 = 0.8$ of the nodes have threshold $\phi = 0.32 < 1/3$ and that a fraction $1 - p_0 = 0.2$ of the nodes have threshold $\phi =1$.
In all networks from this point onwards, we create a new network for each realization of a synergistic threshold model. For all networks except Erd\H{o}s\--R\' enyi (ER) networks, we specify a degree distribution $p(k)$. We use this to determine a degree for each of $10,000$ nodes, and we then connect these nodes to each other using a configuration model (connecting stubs to each other uniformly at random) \cite{Fosdick2016}.
We choose a single node uniformly at random as a seed and update nodes synchronously at each discrete time step. We stop the simulations only when we reach equilibrium (i.e., when no more nodes can eventually activate). In Fig.~\ref{fig:t_inf}, we plot the equilibrium active fractions of high-threshold and low-threshold nodes as a function of the synergy parameter $\beta$. Each data point is a mean over $10$ realizations of the spreading process.
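The simulation loop itself can be sketched on a plain adjacency-list representation. The peer-pressure function is passed in so that either variant can be used; the multiplicative form $\Xi(n) = n(1+\beta)^{n-1}$ below is inferred from the activation condition quoted in the following discussion, and the function names are illustrative:

```python
def simulate(adj, thresholds, seed_node, xi, beta, max_steps=10_000):
    """Synchronously update the synergistic threshold model until
    equilibrium (no more nodes can activate). `adj` maps node -> set of
    neighbors; `thresholds` maps node -> phi. Returns the active set."""
    active = {seed_node}
    for _ in range(max_steps):
        # Synchronous update: evaluate all inactive nodes against the
        # same snapshot of the active set.
        newly = {
            i
            for i in adj
            if i not in active
            and (n := len(adj[i] & active)) >= 1
            and xi(n, beta) >= thresholds[i] * len(adj[i])
        }
        if not newly:
            break
        active |= newly
    return active

def xi_multiplicative(n, beta):
    # Assumed multiplicative peer pressure Xi(n) = n * (1 + beta)**(n - 1),
    # inferred from the activation condition phi*k <= (1+beta)**(n-1) * n
    # quoted in the text.
    return n * (1.0 + beta) ** (n - 1)
```

On the complete graph $K_4$ (which is $3$-regular) with $\phi = 0.32$ everywhere and $\beta = 0$, a single seed activates the whole graph in one synchronous step, whereas with $\phi = 1$ everywhere and $\beta = 0.4$ the seed activates nobody.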
\begin{figure}[tb]
\centering
\includegraphics[width=\linewidth]{Pics/t_inf-eps-converted-to.pdf}
\caption{Final fraction of active nodes in $3$-regular random networks of $10,000$ nodes when using the multiplicative synergistic peer-pressure function \eqref{eq:multi_func}. A fraction $p_0=0.8$ of the nodes have threshold $\phi_0 = 0.32 < \frac{1}{3}$, and a fraction $1-p_0 = 0.2$ of the nodes have threshold $\phi = 1$. Each data point is a mean of $10$ realizations of the synergistic threshold model on $10$ different $3$-regular random networks, which we create using a configuration model. For each $\beta$ value, we create $10$ networks. (In doing these simulations, we discarded $2$ realizations due to the choice of seed node; the contagion did not spread enough in those cases.)
}
\label{fig:t_inf}
\end{figure}
When $\beta$ surpasses the values $0$ and $0.5$, the final fraction of active nodes with threshold $\phi = 1$ increases dramatically. We can see this from Eqs.~\eqref{eq:betacrit_multi} and \eqref{eq:response_def}. For $\beta <0$, it is not possible to satisfy $\phi_i k_i \le (1+\beta)^{n_i-1}n_i$, because $n_i\le k_i$. For $\beta \in [0,0.5)$, the relation $\phi_i k_i \le (1+\beta)^{n_i-1}n_i$ holds only for $n_i=k_i$.
In this case, nodes with $\phi=1$ can be activated, but they are never able to help activate a neighbor (unless they are part of the seed set of active nodes), as all of their neighbors are necessarily already active once they have been activated. For $\beta \ge 0.5$, the relation $\phi_i k_i \le (1+\beta)^{n_i-1}n_i $ holds for $n_i = k_i$ and $n_i=k_i-1$. In this case, nodes with $\phi=1$ can be activated even when they still have an inactive neighbor. Hence, nodes with $\phi=1$ can help spread the meme, resulting in an increase in active nodes with both $\phi=1$ and $\phi = 0.32$ compared to what occurs for $\beta < 0.5$. Rephrasing these observations, bifurcations occur at special values of $\beta$ (which are $\beta = 0$ and $\beta = 0.5$ in this example) for the peer-pressure function \eqref{eq:multi_func}, and we calculate the bifurcation points by solving $\Xi(n_i) = k_i\phi_i$ for $n_i \in \{2,\ldots, k_i \}$ (where we exclude $n_i = 1$ because it corresponds to a vulnerable node, which, by design, would be vulnerable for any value of $\beta$). Such $\beta$ values exist for any non-decreasing peer-pressure function $\Xi(n_i,\beta)$ that is continuous in $\beta$. For two different peer-pressure functions, the value of $\beta$ at which a specific number (e.g., $4$, to be concrete) of active neighbors becomes able to activate a specific node can differ, but such a value of $\beta$ exists for both peer-pressure functions. Hence, all continuous, non-decreasing synergistic peer-pressure functions behave in qualitatively the same way.
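Solving $\Xi(n_i,\beta) = k_i\phi_i$ for the multiplicative condition $\phi_i k_i \le (1+\beta)^{n_i-1}n_i$ gives the bifurcation values in closed form, $\beta_{\mathrm{crit}} = (k\phi/n)^{1/(n-1)} - 1$ for $n \ge 2$. A short sketch (the function name is illustrative):

```python
def beta_crit_multiplicative(n, k, phi):
    """Smallest beta at which n active neighbors (2 <= n <= k) can
    activate a degree-k node with threshold phi, for the multiplicative
    condition phi*k <= (1 + beta)**(n - 1) * n."""
    if not 2 <= n <= k:
        raise ValueError("need 2 <= n <= k")
    return (phi * k / n) ** (1.0 / (n - 1)) - 1.0
```

For $k=3$ and $\phi=1$, this reproduces the two bifurcation points discussed above: $\beta = 0$ (for $n=3$) and $\beta = 0.5$ (for $n=2$).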
In Figs.~\ref{fig:3reg_small}(a) and \ref{fig:3reg_small}(b), we show how the meme spreads for $\beta = 0.4999$ and $\beta = 0.5001$, respectively. Each data point is a mean over $100$ realizations of the spreading process. For each realization, we create a new $3$-regular random network using a configuration model (with stubs connected uniformly at random).
One can use any response function, such as ones that use the peer-pressure functions \eqref{eq:multi_func} or \eqref{eq:power_func}, to compute when $n_i\le k_i$ nodes can activate a node with threshold $\phi_i$ by solving the equation $\Xi(n_i) = \phi_ik_i$. Therefore, different response functions can have sudden increases in the final fraction of active nodes at critical values of $\beta$ for the same reason: at these values of $\beta$, it becomes possible for some nodes to be activated with fewer active neighbors than was the case for smaller values of $\beta$. Although these critical values of $\beta$ can differ for different response functions, the different synergistic response functions exhibit qualitatively similar behavior. Therefore, we henceforth use only the response function that is specified by the power peer-pressure function \eqref{eq:power_func}.
\begin{figure}[tb]
\centering
\includegraphics[width=\linewidth]{Pics/timeseries_constant_small-eps-converted-to.pdf}
\includegraphics[width=\linewidth]{Pics/timeseries_constant_high-eps-converted-to.pdf}
\caption{Active fraction of nodes as a function of time in $3$-regular random networks of $10,000$ nodes. A fraction $p_0=0.8$ of the nodes have threshold $\phi_0 = 0.32 < \frac{1}{3}$, and a fraction $1-p_0 = 0.2$ have threshold $\phi=1$. In panel (a), the synergy parameter is $\beta = 0.4999$; in panel (b), it is $\beta = 0.5001$. In each panel, each data point is a mean over $100$ realizations of memes that spread using the synergistic response function with peer-pressure function \eqref{eq:multi_func}. We observe excellent agreement between the analytical approximation \eqref{eq:rhok} and our simulations. (In these simulations, we did not need to discard any realizations due to the choice of seed node.) For each realization, we created a $3$-regular random network using a configuration model. The two panels show results for two different sets of networks.
}
\label{fig:3reg_small}
\end{figure}
\subsection{Synergy in Erd\H{o}s\--R\' enyi {} Networks}\label{sec:ER}
We now simulate the spread of memes with synergy on ER networks. First, we consider ER networks with mean degree $z=3$, and we then consider ER networks with mean degree $z=8$. In both cases, we use the threshold distribution $f(\phi) = \delta(\phi-\phi^*)$ with $\phi^* = 1/7$.
\subsubsection{Mean Degree $z=3$}
We use our analytical approximation \eqref{eq:rhok} to find the expected equilibrium active fraction of nodes as a function of their degree and the synergy parameter $\beta$ for the response function with power peer-pressure function \eqref{eq:power_func}.
We plot these quantities in Fig.~\ref{fig:ER_z3_t_inf}. In Fig.~\ref{fig:ER_z3}, we plot the time series of the fraction of active nodes when the synergy parameter is $\beta = -0.93$, for which our model predicts different equilibrium active fractions for nodes with degrees $1$, $2$, $3$, and $8$. We observe excellent agreement between our simulations and analytical approximation \eqref{eq:rhok} for these four node degrees.
\begin{figure}[tb]
\centering
\includegraphics[width=\linewidth]{Pics/power_phi17_ER_z3-eps-converted-to.pdf}
\caption{Final active fraction of degree-$k$ nodes as a function of the synergy parameter $\beta$ for a meme that spreads on ER networks with mean degree $z=3$ and a response function with peer-pressure function \eqref{eq:power_func}. Using \eqref{eq:betacrit_power}, our analytical approximation \eqref{eq:rhok} again predicts abrupt jumps that match the calculations.
}
\label{fig:ER_z3_t_inf}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=\linewidth]{Pics/timeseries_ER_power_z3_beta-093_phi17-eps-converted-to.pdf}
\caption{Active fraction of nodes of degrees $1$, $2$, $3$, and $8$ in ER networks as a function of time with synergy parameter $\beta=-0.93$, mean degree $z=3$, and homogeneous threshold $\phi = 1/7$. The memes spread using the synergistic response function with power peer-pressure function \eqref{eq:power_func}, and each data point is a mean over $31$ realizations of the spreading process. We observe that the analytical approximation \eqref{eq:rhok} of the temporal activation of nodes of degrees $1$, $2$, $3$, and $8$ matches the simulated data well. This is also true for the other node degrees. We created a new random ER network for each realization. (In doing these simulations, we discarded $9$ realizations due to the choice of seed node; the contagion did not spread enough in those cases.)
}
\label{fig:ER_z3}
\end{figure}
\subsubsection{Mean Degree $z=8$}
We now examine ER networks with mean degree $z=8$. We simulate the synergistic spreading processes with parameter $\beta = -0.835$ and a response function with power peer-pressure function \eqref{eq:power_func}.
We choose this value of $\beta$ so that the final fraction of active nodes is different for nodes with different degrees. In Fig.~\ref{fig:ER_z8}, we show the fraction of active nodes as a function of time, and we again observe a good match between our computations and our analytical approximation \eqref{eq:rhok}.
\begin{figure}[tb]
\centering
\includegraphics[width=\linewidth]{Pics/timeseries_ER_power_z8_beta-0835_phi17-eps-converted-to.pdf}
\caption{Fraction of active degree-$k$ nodes as a function of time for ER networks with mean degree $z = 8$. We average the numerical results over $31$ realizations of memes spreading over the networks using the synergistic response function with power peer-pressure function \eqref{eq:power_func} with $\beta = -0.835$. We observe a good match between our numerical results and our analytical approximation, although there is a slight discrepancy for nodes with $k=1$. (In doing these simulations, we discarded $9$ realizations due to the choice of seed node; the contagion did not spread enough in those cases.)
}
\label{fig:ER_z8}
\end{figure}
\subsection{Synergy on Networks with Degree Distributions from Empirical Data}\label{sec:realistic}
We now simulate the spread of synergistic memes on two networks with degree distributions from empirical data.
In Section \ref{sec:CMP}, we consider random networks created using a configuration model (in particular, by matching stubs uniformly at random), in which we use a degree distribution from the degree sequence of the network of coauthorships in condensed-matter physics papers \cite{leskovec_graph_2007} that we examined in Section \ref{sec:initial}. This network has a mean degree of $z\approx 8$. In Section \ref{sec:Facebook}, we simulate the spread of synergistic memes on networks created using a configuration model and a degree distribution from the degree sequence of the {\sc Northwestern25} network from the {\sc Facebook100} data set \cite{traud_social_2012,Traud2010}.
This Facebook network has a mean degree of $z \approx 92$. For each realization, we create a new $10,000$-node network using a configuration model and degree sequences drawn from the associated degree distribution.
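The stub-matching construction used throughout can be sketched in a few lines. As with plain stub matching in general, self-loops and multi-edges can occur; the function name is illustrative:

```python
import random

def configuration_model(degrees, rng=None):
    """Stub-matching configuration model: return a list of edges for
    the given degree sequence (degrees[i] is the degree of node i).
    Self-loops and multi-edges can occur, as is standard for plain
    stub matching; the degree sequence must have an even sum."""
    rng = rng or random.Random()
    # One stub (edge end) per unit of degree.
    stubs = [node for node, k in enumerate(degrees) for _ in range(k)]
    if len(stubs) % 2:
        raise ValueError("degree sequence must have an even sum")
    rng.shuffle(stubs)
    # Pair consecutive stubs uniformly at random.
    return [(stubs[i], stubs[i + 1]) for i in range(0, len(stubs), 2)]
```

By construction, the resulting multigraph preserves the degree sequence exactly (counting a self-loop twice toward its node's degree).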
\subsubsection{Condensed-Matter Physics Collaboration Network}\label{sec:CMP}
We draw the degree of each of the $10,000$ nodes from the degree distribution of the condensed-matter physics collaboration network \cite{leskovec_graph_2007}, and we create edges using a configuration model.
In Fig.~\ref{fig:CM}, we plot the fraction of active nodes of degree $k$ as a function of time. We average over $9$ simulations (we discarded $1$ simulation because there was insufficient spreading from the seed node) of the spreading of a meme according to the power synergy model \eqref{eq:power_func} on condensed-matter collaboration networks. For each of these realizations, we create a new random network using a configuration model (as described above).
As in Section \ref{sec:initial}, we choose the threshold $\phi^* = 1/10$ for our simulations. We first consider interfering synergy with parameter $\beta = -0.85$, which makes it impossible to activate any node whose degree is $16$ or more. Our analytical approximation describes the simulated data well. In Fig.~\ref{fig:CM_constr}, we examine the effect of constructive synergy with the model \eqref{eq:power_func}. In this case, we use $\beta = 0.20$ and $\phi^* = 1/7$. For all node degrees that we checked, the final infected fraction is indistinguishable in the analytical prediction and the actual simulations.
However, our analytical approximation predicts that the infected fraction increases earlier than it does in our simulations.
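The degree cutoff quoted above follows directly from the power activation condition $n^{1+\beta} \ge \phi k$ with $n \le k$; a quick check (assuming that form for \eqref{eq:power_func}, which is consistent with the quoted cutoff):

```python
def can_ever_activate(k, phi, beta):
    """True if some number n <= k of active neighbors satisfies the
    (assumed) power activation condition n**(1 + beta) >= phi * k."""
    return any(n ** (1.0 + beta) >= phi * k for n in range(1, k + 1))
```

With $\phi = 1/10$ and $\beta = -0.85$, degree $15$ is the largest degree for which activation is possible, matching the statement that no node with degree $16$ or more can be activated.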
\begin{figure}[tb]
\includegraphics[width=\linewidth]{Pics/timeseries_CM_phi01_beta-085-eps-converted-to.pdf}
\caption{Fraction of active nodes with degrees $1$, $2$, $3$, $8$, $13$, and $14$ as a function of time in configuration-model networks constructed from a degree distribution determined from the degree sequence of the condensed-matter theory collaboration network from \cite{leskovec_graph_2007}. We average the results over $9$ realizations of memes spreading using the synergistic response function with peer-pressure function \eqref{eq:power_func} and interfering synergy $\beta = -0.85$. Each node has a threshold of $\phi^*=1/10$. For each realization, we create a new configuration-model network.
Our analytical approximation describes the results well.
(In doing these simulations, we discarded $1$ realization due to the choice of seed node; the contagion did not spread enough in that case.)
}
\label{fig:CM}
\end{figure}
\begin{figure}[tb]
\includegraphics[width=\linewidth]{Pics/condmat_beta_020-eps-converted-to.pdf}
\caption{Fraction of active nodes with degrees $1$, $2$, $3$, $8$, $13$, and $14$ as a function of time in configuration-model networks constructed from a degree distribution determined from the degree sequence of the condensed-matter theory collaboration network from \cite{leskovec_graph_2007}. We average the results over $10$ realizations of memes spreading using the synergistic response function with peer-pressure function \eqref{eq:power_func} and constructive synergy $\beta = 0.20$.
Each node has a threshold of $\phi^*=1/7$. Our analytical approximation predicts that the fraction of active nodes increases slightly earlier than what we observe in our numerical simulations, but the final fractions of infected nodes are visually indistinguishable in our analytical approximation and simulations. (In these simulations, we did not need to discard any realizations due to the choice of seed node.)
}
\label{fig:CM_constr}
\end{figure}
\subsubsection{A Facebook Network}\label{sec:Facebook}
We simulate the spread of synergistic memes on the {\sc Northwestern25} network from the {\sc Facebook100} data set \cite{traud_social_2012}. The network has a mean degree of $z \approx 92$. The minimum degree is $d=1$, and the maximum degree is $d=2105$. We assign all nodes a degree from a degree distribution based on the degree sequence of the {\sc Northwestern25} network, and we create edges using a configuration model (in particular, by matching stubs uniformly at random, as we discussed previously). We suppose that each node has a threshold of $\phi^* = 1/33$. In Fig.~\ref{fig:Facebook}, we plot the fraction of active degree-$k$ nodes as a function of time. In panel (a), each data point is a mean over $51$ realizations of the spreading process on $10,000$-node configuration-model networks with the degree distribution described above. (We discarded $149$ simulations because there was insufficient spreading from the seed node.) In panel (b), each data point is a mean over $53$ realizations. (We discarded $147$ simulations because there was insufficient spreading from the seed node.) As in our other simulations, each realization is a different draw of one of these configuration-model networks.
We show results for both interfering synergy (with $\beta = -0.05$) and constructive synergy (with $\beta = 0.15$). For this family of networks, our analytical approximation departs from our numerical simulations for both the final fractions of active nodes and the times at which the fractions of the active degree-$k$ nodes saturate. We also note that our analytical approximation suggests that interfering synergy slows down the spreading process much more than is actually the case in the simulations.
Our analytical approximation assumes that we are considering dynamics on a locally tree-like network, although such approximations often have ``unreasonable'' effectiveness even in many situations in which the hypotheses used to derive them fail to hold \cite{localtreeapprox_mason}. The authors of \cite{localtreeapprox_mason} discussed various reasons why a local-tree approximation may not provide a good description of the actual dynamics on a network (for a given dynamical system, such as a particular type of spreading process). For the {\sc Facebook100} networks, they found for various spreading processes (including the WTM) that simulations with a threshold distribution of $f(\phi) = \delta(\phi-\phi^*)$ yielded different dynamics in numerical simulations than in a tree-based theory. Reference \cite{localtreeapprox_mason} found with a Gaussian distribution of thresholds that simulations with a seed consisting of all nodes with $\phi<0$ yield results that are well-described by their local-tree approximation. In our case, however, altering the threshold distribution in this way does not yield agreement between our analytical approximation and simulated results.
Two properties that may provide some indication of how effectively tree-based theories for dynamical processes work on a network are the mean geodesic path length between nodes and a mean local clustering coefficient of the network. Although this is not something that is required mathematically (as one can construct counterexamples, such as a star graph), it is reasonable to expect a ``typical'' tree-like network (in particular, consider an ensemble of networks drawn uniformly at random from the set of all trees with a given number of nodes) to have larger mean geodesic path lengths than ``typical'' networks of the same size that are not tree-like (e.g., an ensemble of configuration-model networks). One also expects a tree-like network to have a smaller mean local clustering coefficient than a network with the same number of nodes that is not tree-like. Averaging the mean geodesic path length between nodes in a set of $10$ randomizations (as described above) of the {\sc Northwestern25} network yields $2.510\pm 0.007$, which is much lower than for any other random network in this study (see Table \ref{tab:averaged_path_length}). Averaging the local clustering coefficient for the same $10$ networks yields $0.02828\pm0.00109$, which is much higher than for any other random network in this study. This suggests that the randomized {\sc Northwestern25} networks are less tree-like than the other random networks that we examine. Additionally, the mean local clustering coefficient and the mean geodesic path length in the original {\sc Northwestern25} and condensed-matter collaboration networks are larger than those of the randomized networks that we constructed from those networks. Unsurprisingly, randomization considerably decreases the value of the mean local clustering coefficients (especially for the condensed-matter collaboration network).
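Both diagnostics in Table \ref{tab:averaged_path_length} can be computed with breadth-first search and a direct count of links among neighbors. A minimal sketch for a graph stored as an adjacency list (nodes of degree less than $2$ are assigned local clustering $0$ here, which is one common convention):

```python
from collections import deque

def mean_geodesic_path_length(adj):
    """Mean shortest-path length over all node pairs (assumes one
    connected component), via a BFS from every node."""
    total, pairs = 0, 0
    for source in adj:
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(d for node, d in dist.items() if node != source)
        pairs += len(dist) - 1
    return total / pairs

def mean_local_clustering(adj):
    """Mean of the local clustering coefficients; nodes of degree < 2
    contribute 0 (one common convention)."""
    coeffs = []
    for u, neighbors in adj.items():
        k = len(neighbors)
        if k < 2:
            coeffs.append(0.0)
            continue
        nbrs = list(neighbors)
        # Count edges among the k neighbors of u.
        links = sum(
            1
            for i in range(k)
            for j in range(i + 1, k)
            if nbrs[j] in adj[nbrs[i]]
        )
        coeffs.append(2.0 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)
```

For example, on a triangle with one pendant node attached, the mean geodesic path length is $4/3$ and the mean local clustering coefficient is $7/12$.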
\begin{table*}[tb]
\begin{tabular}{|l|c|c|}
\hline
\textbf{Network} & \textbf{Mean geodesic path length} & \textbf{Mean local clustering coefficient}\\
\hline
$3$-regular & $6.359 \pm 0.001$ & $0.00033 \pm 0.00011$\\
Erd\H{o}s\--R\' enyi {}, $z = 3$ & $8.366 \pm 0.043$ & $0.00020 \pm 0.00014$\\
Erd\H{o}s\--R\' enyi {}, $z = 8$ & $4.664 \pm 0.009$ & $0.00079 \pm 0.00008$\\
Cond-mat collab. (original) & $5.352$ & $0.64173$\\
Cond-mat collab. (random) & $4.091 \pm 0.024$ & $0.00471 \pm 0.00049$\\
{\sc Northwestern25} (original) & $2.723$ & $0.23828$\\
{\sc Northwestern25} (random) & $2.509 \pm 0.007$ & $0.02828 \pm 0.00109$\\
\hline
\end{tabular}
\caption{Mean geodesic path length between nodes and mean local clustering coefficient in the four different random-network families that we examine. We construct the 3-regular random graphs using the configuration model (with stubs connected uniformly at random), and we also construct configuration-model networks using degree sequences (with associated degree distributions) from the condensed-matter collaboration network and {\sc Northwestern25} Facebook network.
In each case, we average our results over $10$ networks, and we indicate the mean value and the standard deviation of the mean. We also list the values for the original {\sc Northwestern25} and condensed-matter collaboration networks. Observe that the mean geodesic path length between nodes is much smaller in the {\sc Northwestern25} networks than in the other networks. Among the random networks, the mean local clustering coefficient is also by far the largest in the {\sc Northwestern25} network, although the condensed-matter collaboration network also has a much larger mean local clustering coefficient compared to the ER networks and the $3$-regular networks. The original empirical networks have values for both the mean geodesic path length and mean local clustering coefficients that are significantly larger than the values in the random networks with degrees drawn from the same degree distributions.
}
\label{tab:averaged_path_length}
\end{table*}
\begin{figure*}[tb]
\includegraphics[width=0.49\linewidth]{Pics/Facebook_phi133_beta-005-eps-converted-to.pdf}
\hfill
\includegraphics[width=0.49\linewidth]{Pics/Facebook_phi133_beta015-eps-converted-to.pdf}
\caption{Simulations of synergistic spreading processes on $10,000$-node networks with degree distribution determined from the {\sc Northwestern25} network from the {\sc Facebook100} data set \cite{traud_social_2012}. The nodes have a homogeneous threshold of $\phi^* = 1/33$. (a) We examine interfering synergy (with $\beta = -0.05$) and plot the fraction of active nodes with degrees $1$, $2$, $3$, and $4$ as a function of time. All nodes with degree $k\ge5$ exhibit behavior similar to those with the plotted degrees; the final fractions of active nodes are between $0.79$ and $0.88$. The time until the cascade occurs is very different between our analytical approximation \eqref{eq:rhok} and numerical simulations, and there are also discrepancies in the final fraction of active nodes between our analytics and numerics. We average our results over $51$ realizations. (In doing these simulations, we discarded $149$ realizations due to the choice of seed node; the contagion did not spread enough in those cases.)
(b) We examine constructive synergy $\beta = 0.15$ and plot the fraction of active nodes with degrees $1$, $2$, $3$, and $4$ as a function of time. All nodes with degree $k\ge5$ eventually have fractions of active nodes that are larger than $0.92$. For this case as well, the time until the cascade occurs is very different in our analytical approximation \eqref{eq:rhok} and our numerical simulations, and there are also discrepancies in the final fraction of active nodes between our analytics and numerics. We average our results over $53$ realizations. (In doing these simulations, we discarded $147$ realizations due to the choice of seed node; the contagion did not spread enough in those cases.)
}
\label{fig:Facebook}
\end{figure*}
\section{Conclusions}\label{conc}
It is important to study when diseases, information, a meme, or something else (e.g., misinformation or ``alternative facts'') spreads to a large number of nodes in a network \cite{fowlerreview,porter2016}. Prior studies have suggested that some organisms and tumors spread via synergistic effects \cite{ben1994generic, liotta2001microenvironment} and that synergistic effects can also be important for the spread of information on networks \cite{PhysRevE.88.012818}, the spread of behavior in online social networks \cite{Centola1194}, the transmission of pathogens \cite{ludlam2012applications}, and the spread of opportunities among vineyards on wine routes \cite{brunori2000synergy}.
In the present paper, we developed a threshold model with synergistic spreading, and we investigated both analytically and computationally the fraction of nodes, resolved by degree and as a function of a synergy parameter, that are activated for empirical networks and several families of random graphs. We illustrated that the synergistic models \eqref{eq:betacrit_multi} and \eqref{eq:betacrit_power} lead to critical synergy levels at which non-vulnerable nodes with a certain degree $k$ can be activated by $k$ active neighbors for all synergy parameter values of at least this level.
We used a local-tree approximation to approximate the fraction of active degree-$k$ nodes as a function of time. We illustrated that our analytical approximation \eqref{eq:rhok} matches well with numerical simulations for synergistic memes that spread on $3$-regular random networks, Erd\H{o}s\--R\' enyi {} networks, and configuration-model networks constructed from a condensed-matter physics collaboration data set. However, our analytical approximation does not do well for configuration-model networks that we construct using a degree distribution of the {\sc Northwestern25} data set. We pointed out that the random networks constructed from the {\sc Northwestern25} network differ from the other networks that we examined in that, on average, they have a much larger mean local clustering coefficient and a much shorter mean geodesic path length. In all cases, we observed that constructive synergy speeds up the spreading process and that interfering synergy slows down the spreading process.
The influence of synergistic effects on spreading processes in networks is a promising area of study. It is an important component of modeling the spread of information (and misinformation) on social networks \cite{PhysRevE.88.012818} and the behavior of certain biological organisms and social processes in which a willingness to adopt either saturates or increases with the number of individuals who are trying to influence other individuals in a network. It has interesting effects on spreading behavior in various types of networks, such as lattices versus other networks \cite{PhysRevE.88.012818} and in modular networks \cite{PhysRevE.89.052811}, and it can affect whether it is possible or impossible for certain nodes to adopt a certain meme or behavior.
In the future, it will be interesting to consider synergistic spreading processes on other types of networks, such as multilayer networks \cite{domen2016,salehi2015spreading,Kivela2014} and temporal networks \cite{Holme2012}.
\section*{Acknowledgements}
This work was carried out at the Mathematical Institute at University of Oxford. We thank James Fowler, James Gleeson, Matthew Jackson, Mikko Kivel\"a, and James Moody for helpful discussions. JSJ also thanks the Mathematical Institute, University of Oxford for their hospitality and Mogens H\o gh Jensen (Niels Bohr Institute, University of Copenhagen) for making this project possible.
\section{Introduction}
\label{sec1}
In the last few years quite a few papers were published in which
computer simulations were used to study the time dependence of the {\it
translational} degrees of freedom (TDOF) in supercooled liquids. On
the other hand, the {\it orientational} degrees of freedom (ODOF) were
so far investigated in much less detail since the simulation and data
analysis of systems in which the particles are molecules are quite a
bit more involved than the ones in which the particles have no
structure. However, since most real materials are of molecular nature
and since experimental methods such as light scattering or dielectric
measurements probe also the ODOF, it is important to understand how the
dynamics of the TDOF and the ODOF are related to each other. Only by
understanding this relationship will it be possible to make a correct
interpretation of the experimental measurements and to gain insight
into the nature of the glass transition, i.e. the dramatic slowing down
of the relaxation dynamics of supercooled liquids upon approaching the
glass transition temperature. A more thorough discussion of these
connections can be found, e.g., in Ref.~\cite{r1}, where we also review
some of the other work in this field.
Very recently we have carried out a molecular dynamics computer
simulation of a simple molecular system in order to make a detailed
comparison between the dynamics of the TDOF and the
ODOF~\cite{r1}. Each molecule in this system is dumb-bell
shaped and consists of two Lennard-Jones particles that are separated
by a fixed distance $d$. More details on the system and the simulation
can be found in Ref.~\cite{r1}. In that paper we studied the
time and temperature dependence of the orientational correlation
functions
\begin{equation}
C_l(t) = \frac{1}{N} \sum_{n,n'} \langle
P_{l}(\vec{u}_n(0) \cdot \vec{u}_{n'}(t)) \rangle
\qquad , \qquad l\geq 1 \qquad,
\label{eq2}
\end{equation}
and the self part $C_l^{(s)}(t)$. Here $\vec{u}_n(t)$ is the unit
vector pointing along the molecular symmetry axis of molecule $n$ and
$P_l(x)$ is the $l$-th Legendre polynomial. The relevance of this type
of correlation function is given by the fact that it can be measured
in experiments. The main results of that paper were that the
temperature dependence of the relaxation times of $C_l$, $C_l^{(s)}$
and the diffusion constant $D$ were given by a power law with the same
critical temperature $T_c$ but with critical exponents that
depend on the observable. In addition we showed that the so-called
time-temperature superposition principle works well for $C_l^{(s)}$ if
$l > 2$. Thus we concluded that many of the predictions of mode-coupling
theory (MCT)~\cite{mct,gotze91} hold for these correlation functions,
although certain discrepancies are present.
In the preceding paper, subsequently called KKSI, we have investigated
the time and temperature dependence of the {\it translational} degrees
of freedom by studying quantities like the van Hove correlation
function $G(r,t)$ and the intermediate scattering function
$F(q,t)$~\cite{r2}. The main conclusion of that paper was
that MCT is able to give also a good description for the time and
temperature dependence of these correlation functions.
As we will demonstrate below, the intermediate scattering function
$F(q,t)$ and the orientational correlation functions $C_l(t)$ are just
a special case of a more general type of correlation function, which
involves the translational as well as the orientational degrees of
freedom at finite wave-vector $\vec{q}$, i.e.
$\left| \vec{q} \right| > 0$. The goal of the {\it
present} paper is therefore to investigate the time and temperature
dependence of these more general correlation functions, since it is
these correlators which are needed for a more detailed description of the
dynamics of a molecular system. In addition these correlation functions
can also be calculated directly within the framework of MCT (although
such a calculation might in practice be quite involved), thus allowing
one to perform a more stringent test of whether MCT is able to give a
correct description of the dynamics of the system investigated.
Our paper is organized as follows: In the next section we will
introduce the mentioned generalized correlation functions and will
discuss some of their properties. Section~\ref{sec3} presents the
results and the MCT-analysis and the final section contains a summary
and our main conclusions.
\section{correlation functions}
\label{sec2}
We introduce a set of correlators which involves the one-particle
density (including the angular dependence) for a molecular liquid of
rigid, axially symmetric molecules:
\begin{equation}
\rho(\vec{x},\Omega,t) = \sum_{n=1}^{N} \delta(\vec{x}-\vec{x}_n(t))
\, \delta(\Omega,\Omega_n(t))
\label{eq5}
\end{equation}
where $\vec{x}_n(t)$ and $\Omega_n(t) \equiv (\theta_n(t),\phi_n(t))$
denote the center of mass position and the orientation of the
$n$-th molecule at time $t$, respectively. Due to the non-Euclidean metric
for the angles $\theta$ and $\phi$, one must use the invariant delta
function $\delta(\Omega,\Omega')$. For this
and other details of the theoretical description of molecular liquids,
the reader is referred to the textbook by Gray and Gubbins \cite{r4}.
Expansion of $\rho(\vec{x},\Omega,t)$ with respect to a product of
plane waves and spherical harmonics $Y_{lm}(\Omega)$ leads to the
tensorial density modes
\begin{equation}
\rho_{lm}(\vec{q},t) = i^l \sqrt{4\pi} \sum_{n=1}^{N}
e^{i \vec{q} \cdot \vec{x}_n(t)} \; Y_{lm}(\Omega_n(t)) \qquad ,
\label{eq6}
\end{equation}
where $l=0,1,2,\ldots$ and $-l \leq m \leq l$.
The factor $\sqrt{4\pi}$ is used so that $\rho_{00}(\vec{q},t)$
equals the definition of
$\rho(\vec{q},t)$ for simple liquids and $i^l$ is introduced
for convenience (see below). The corresponding correlators
\begin{equation}
S_{lm,l'm'}(\vec{q},t) = \frac{1}{N} \langle
\delta \rho^{\ast}_{lm}(\vec{q},t)
\: \delta \rho_{l'm'}(\vec{q},0) \rangle
\label{eq7}
\end{equation}
of the fluctuation $\delta \rho_{lm}(\vec{q},t) =
\rho_{lm}(\vec{q},t) - \langle \rho_{lm}(\vec{q},t) \rangle$ vanish
for $(q,l,m) = (0,0,0)$, and are otherwise given by:
\begin{equation}
S_{lm,l'm'}(\vec{q},t) = \frac{4 \pi}{N}
\; i^{l'-l} \sum_{n,n'} \left< \exp \left[-i \vec{q} \cdot
( \vec{x}_n(t) - \vec{x}_{n'}(0) ) \right]
\, Y_{lm}^{\ast}(\Omega_n(t) ) \, Y_{l'm'}(\Omega_{n'}(0) )
\right>
\label{eq8}
\end{equation}
which shows the explicit dependence on both the TDOF and the ODOF. The
corresponding self part $S_{lm,l'm'}^{(s)}(\vec{q},t)$ is defined analogously.
Taking into account that $Y_{00}=1/\sqrt{4\pi}$ one obtains from
Eq.~(\ref{eq8}):
\begin{equation}
\frac{S_{00,00}(\vec{q},t)}{S_{00,00}(\vec{q})} = F(q,t) \quad ,
\label{eq9}
\end{equation}
i.e. the normalized density correlator for the center of mass
positions, which was studied in KKSI. On the other hand, we find from
Eq.~(\ref{eq8}) for $\vec{q}=0$:
\begin{equation}
S_{lm,l'm'}(0,t) = C_l(t) \delta_{mm'} \delta_{ll'} \quad .
\label{eq10}
\end{equation}
Here the addition theorem for the spherical harmonics \cite{r4} and the
isotropy have been used. As already mentioned in the Introduction, this
special case was investigated in Ref.~\cite{r1}. Eqs.~(\ref{eq9}) and
(\ref{eq10}) hold for the corresponding self part, as well.
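To make the definitions above concrete, the tensorial density mode of Eq.~(\ref{eq6}) can be evaluated numerically with SciPy's spherical harmonics. The sketch below is illustrative only, with random positions and orientations rather than simulation data (note that \texttt{scipy.special.sph\_harm} takes the order $m$ and degree $l$ first, and the azimuthal angle before the polar one); it also checks that the $l=m=0$ mode reduces to the ordinary center-of-mass density mode:

```python
import numpy as np
from scipy.special import sph_harm

def rho_lm(q, x, theta, phi, l, m):
    """Tensorial density mode rho_lm(q) = i^l sqrt(4 pi)
    sum_n exp(i q.x_n) Y_lm(Omega_n); x has shape (N, 3),
    theta/phi are the polar/azimuthal angles of the N molecules."""
    phase = np.exp(1j * x @ q)
    Y = sph_harm(m, l, phi, theta)  # scipy order: (m, l, azimuth, polar)
    return 1j**l * np.sqrt(4.0 * np.pi) * np.sum(phase * Y)

rng = np.random.default_rng(1)
N = 100
x = rng.uniform(0.0, 10.0, size=(N, 3))
theta = np.arccos(rng.uniform(-1.0, 1.0, N))  # polar angles (isotropic)
phi = rng.uniform(0.0, 2.0 * np.pi, N)        # azimuthal angles
q = np.array([0.0, 0.0, 2.8])                 # a q-frame wave vector

# l = m = 0 reduces to the ordinary density mode sum_n exp(i q.x_n)
r00 = rho_lm(q, x, theta, phi, 0, 0)
plain = np.sum(np.exp(1j * x @ q))
```

Correlators such as Eq.~(\ref{eq8}) then follow by averaging products of such modes over time origins.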
Although it is not obvious how these correlators for $l,l' \neq 0$ can
be measured in real experiments for $\vec{q}\neq 0$, they are the basic
quantities which enter the MCT for a molecule in a simple liquid
\cite{r5} and for molecular liquids \cite{r6,r7,r8,kk,r9}. To our
knowledge, there exists only one computer simulation which considers
$q$-dependent orientational correlators \cite{r10}. But the
experimental relevance of these correlators considered in
Ref.~\cite{r10} is unclear. The correlators given in Eq.~(\ref{eq8})
simplify a bit, if one uses the $q$-frame \cite{r4}, i.e.
$\vec{q}=\vec{q_0}\equiv(0,0,q)$. In that case one obtains \cite{r7}:
\begin{equation}
S_{lm,l'm'}(\vec{q}_0,t) \equiv S_{ll'}^{m}(q,t) \; \delta_{mm'} \quad ,
\label{eq11}
\end{equation}
which differ from zero only for $0 \leq \left|m\right| \leq
\mbox{min}(l,l')$. Since $S_{ll'}^{m}(q,t) = S_{ll'}^{-m}(q,t)$, one
can restrict oneself to $m \geq 0$. The introduction of $i^l$ in
Eq.~(\ref{eq6}) makes $S_{ll'}^{m}(q,t)$ a real quantity. The same
properties hold for the self part as well. In the following we will
present all results in the $q$-frame.
Some of the equations that we will subsequently make use of have been
given in KKSI and are not reproduced here. We will refer to the $n$th
equation of that paper by (I-$n$).
\section{results}
\label{sec3}
This section is subdivided into two parts. The first part contains the
results for the static correlators $S_{ll'}^{m}(q)$,
and the second one presents the dynamical correlators $S_{ll'}^{m}(q,t)$
and $S_{ll'}^{(s)m}(q,t)$. In the following we restrict the values of
$l$ and $l'$ to 0, 1 and 2.
\subsection{Static properties}
The static correlators are shown in Figs.~\ref{fig2} -~\ref{fig4} for
the lowest investigated temperature $T = 0.477$. First of all, it becomes obvious
from these figures that $S_{ll'}^{m}(q=0)$ is $m$-independent and
diagonal in $l$ and $l'$, as it should be due to isotropy. A comparison
of the various diagonal correlators in Figs.~\ref{fig2} and \ref{fig3}
with each other shows that the correlators $S_{ll}^{0}(q)$ for $l=1$
and 2 possess a significant $q$-dependence similar to that of
$S(q) \equiv F(q,0)$,
in contrast to those for $m \neq 0$. The same behavior was found for a
system of dipolar hard spheres~\cite{r7}, although for that system the
most prominent peak occurs for $S_{ll}^{1}(q)$ at $q=0$. In contrast
to $S(q)$ and $S_{11}^{0}(q)$, the correlator $S_{22}^{0}(q)$ has a
rather broad maximum at $q=0$ with a height which is comparable to that
at $q'_{max}\approx 7.3$, the location of the main peak in
$S_{22}^0(q)$. In Fig.~\ref{fig4} we present the non-diagonal
correlators $S_{ll'}^{m}(q)$ with $l \neq l'$. First of all one
recognizes that $S_{02}^{0}(q)$ is much larger than $S_{01}^{0}(q)$ and
$S_{12}^{m}(q)$. This can easily be understood: if the molecules had
``head-tail'' symmetry, it could be shown that $S_{ll'}^{m}(q)
\equiv 0$ for $l,l'$ such that $l+l'$ is odd. Since for our molecules
this symmetry is only slightly broken, we expect $S_{ll'}^{m}(q)$ to be
much smaller for $l+l'$ odd than for $l+l'$ even.
The second point one recognizes from this figure is that the
non-diagonal correlators $S_{ll'}^{m}(q)$ can have the same magnitude
as the diagonal ones. Hence, there is no reason why the former should
be neglected in analytical calculations. For example, since the
solutions of the MCT-equations for the time-dependent correlators
$S_{ll'}^{m}(q,t)$ are determined by the static correlators
$S_{ll'}^{m}(q)$, it might not be a good approximation to consider $l=l'$,
only.
\subsection{Dynamical properties}
We have investigated both the self correlators for $l=l'=0,1,\ldots,6$ and
the collective correlators for $l=l'=0,1$ and 2. Let us start with the self part
$S_{ll}^{(s)m}(q,t)$. Often it is assumed (see e.g. \cite{r11}) that the
$q$- and $(l,m)$-dependence (where $l=l'$) factorizes, i.e.:
\begin{equation}
S_{ll}^{(s)m}(q,t) \cong C_l^{(s)}(t) \; \; F_s(q,t)
\label{eq12}
\end{equation}
with $C_l^{(s)}(t)$ the self part of Eq.~(\ref{eq2}) and $F_s(q,t)
\equiv S_{00}^{(s)0}(q,t)$, the self part of Eq.~(\ref{eq9}). The reader
should note that Eq.~(\ref{eq12}) is assumed to hold for all $m$, and
that the factorization is trivial for $q=0$. To check the validity of
Eq.~(\ref{eq12}) for $q>0$, we show $S_{ll}^{(s)m}(q,t)$ and
$C_l^{(s)}(t) \cdot F_s(q,t)$ in Fig.~\ref{fig6} ($l=1$) and Fig.~\ref{fig7}
($l=2$) for three different $q$-values and $T=0.477$. Although the
factorization becomes worse with increasing $q$, it is still a
reasonable approximation, even for $q=10.6$. Furthermore, the quality
of the factorization is better in the $\beta$-relaxation than in the
$\alpha$-relaxation regime (at least for $l=2$), and it also becomes
better with increasing temperature.
This approximate factorization does not necessarily mean that the
coupling between the TDOF and ODOF is very weak. The comparison of
$C_l^{(s)}(t)$ with $F_s(q,t)$ in Fig.~\ref{fig6} and Fig.~\ref{fig7}
reveals the reason why $S_{ll}^{(s)m}(q,t)$ can be approximately
factorized. For instance, $C_1^{(s)}(t)$ has decayed to 0.1 for
$t \cong 2 \cdot 10^4$, whereas at this time the value of $F_s(q=2.8,t)$
is still around 0.85, i.e. the
ODOF relax much faster than the TDOF. This is consistent with our
observation that in the time span of the orientational correlation
time, as deduced from $C_1^{(s)}(t)$ at the lowest temperature, the
average center of mass positions change only a fraction (about 30 $\%$)
of the mean distance between the molecular centers. We stress
that this is different from the MD-simulation of supercooled water.
There, $F_s(q,t)$ and $C_l(t)$ relax on approximately the same time
scale \cite{scio1}.
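The mechanism behind the approximate factorization can be checked in a toy setting in which the TDOF and ODOF are statistically independent by construction, so that Eq.~(\ref{eq12}) holds exactly in the ensemble average and any residual deviation is finite-$N$ noise. In the sketch below (synthetic random walks, not our MD data) the wave vector is taken along the $z$-axis, so that $4\pi\,Y_{10}^{\ast}Y_{10} = 3\cos\theta(t)\cos\theta(0)$:

```python
import numpy as np

rng = np.random.default_rng(3)
N, steps, q = 2000, 40, 2.8

# independent center-of-mass and orientational random walks
dz = 0.05 * rng.normal(size=(steps, N))  # z-displacement increments
dz[0] = 0.0
z = np.cumsum(dz, axis=0)                # x_z(t) - x_z(0), q along z

u = np.empty((steps, N, 3))
u[0] = rng.normal(size=(N, 3))
u[0] /= np.linalg.norm(u[0], axis=1, keepdims=True)
for t in range(1, steps):
    v = u[t - 1] + 0.05 * rng.normal(size=(N, 3))
    u[t] = v / np.linalg.norm(v, axis=1, keepdims=True)

F_s = np.cos(q * z).mean(axis=1)                     # self intermediate scattering
C1s = np.einsum('ni,tni->tn', u[0], u).mean(axis=1)  # C_1^(s)(t)
# S_11^(s)0(q, t) with 4*pi*Y_10*Y_10 = 3 cos(theta_t) cos(theta_0)
S11 = (np.cos(q * z) * 3.0 * u[:, :, 2] * u[0, :, 2]).mean(axis=1)
```

For independent degrees of freedom, $S^{(s)0}_{11}(q,t)$ and the product $C_1^{(s)}(t)\,F_s(q,t)$ agree to within statistical noise; in the real simulation the TDOF and ODOF are coupled, and the factorization is only approximate because the ODOF relax much faster than the TDOF.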
We now turn to the test of the various MCT-predictions (see KKSI). We
find~\cite{kammerer_phd} that $S_{11}^{(s)m}(q,t)$ do not obey the
second scaling law, i.e. the time-temperature superposition
principle.
This observation has already been made for the case $q=0$~\cite{r1},
which shows that this type of correlation function does not follow
the predictions of MCT. This situation is different for the
correlation function $S_{ll}^{(s)m}(q,t)$ with $l \geq 2$ for which
the second scaling law holds reasonably well. The critical exponents (which are
practically $q$-independent) for the divergence of the
relaxation times, $\gamma_1^{(s)}$ and $\gamma_2^{(s)}$, are 1.8 and 2.45,
respectively, where the latter value is fairly close
to the one found for the TDOF, $F_s(q,t)$, which was
2.56~\cite{r2}.
The exceptional role for the correlators with $l=1$ is due to the
existence of $180^{\circ}$-jumps of the molecular axis~\cite{r1},
since the Legendre polynomial $P_1(\cos\theta)$ is sensitive to
reorientations by $180^{\circ}$. The same is true for all $P_l(\cos\theta)$
with $l$ odd. But the weight of $P_l(\cos\theta)$ for $\theta \approx 0^{\circ}$
and $\theta \approx 180^{\circ}$ decreases with increasing $l$.
Since the second scaling law holds for $l \neq 1$, we can
restrict ourselves in the following to the analysis of the
correlation functions at the {\it lowest} temperature.
In Fig.~\ref{fig9} we investigate the validity of the
first scaling law [Eq.~(I-4)]. This is done for $q=0$ by fitting
$C_l^{(s)}(t)$ with the critical correlator $G(t)$.
We remind the reader that this fit is performed for {\it fixed} values
$\lambda= 0.76$ and $t_{\sigma} = 69$ as obtained from the similar fit of
$F(q_{max},t)$. More details on this analysis can be found in
section~\ref{sec4} of the preceding paper \cite{r2}.
For $l \ge 2$ (Fig.~\ref{fig9}) the critical correlator fits the data very
well over about two decades in time. This range, however, becomes
smaller with increasing $l$, which may indicate that corrections to the
asymptotic law become more important for large $l$. If one uses
$\lambda$ and $t_{\sigma}$ (cf. (I-4)) as free fit parameters, the
resulting fits follow the data for an additional one to two orders
of magnitude in time. (We note that even $C_1^{(s)}(t)$ can be fitted
reasonably well with $G(t)$. Since we have shown in Ref.~\cite{r1} that
for this correlation function the first scaling law does not hold, one
might argue that it does not make sense to analyze $C_1^{(s)}$ in the
way proposed by MCT. However, we find that the violation of the second
scaling law is only weak and therefore it is not unreasonable to make
such an analysis.) The
values of $\lambda$ obtained in this way increase towards one with increasing
$l$ and reach, e.g., 0.97 for $l=6$. We also mention that we do not
observe a critical law, Eq.~(I-6), the reason for which is likely the
strong influence of the microscopic dynamics on the early
$\beta$-relaxation regime.
We have found that these results do not change significantly for
$S_{ll}^{(s)m}(q,t)$ if $q>0$. From the fit with von Schweidler law
plus corrections, Eq.~(I-9), (not shown in Fig.~\ref{fig9}) one can
deduce the critical nonergodicity parameter $f_{ll}^{(s,c)m}(q)$, the
critical amplitude $\tilde{h}_{ll}^{(s)m}(q)$ and the correction
$\tilde{h}_{ll}^{(s,2)m}(q)$ which are shown in Fig.~\ref{fig11} for
$l$ = 1, 2 and 6, for the case $m=0$ (see KKSI for the difference
between ($h(q)$, $h^{(2)}(q)$) and ($\tilde{h}(q)$, $\tilde{h}^{(2)}(q)$)).
We note that the result for $l=1$ was obtained for $\lambda = 0.76$ and
a shift of the time scale to $t_{\sigma}' = 10$.
Due to the approximate
factorization property, the $q$-dependence of $f_{ll}^{(s,c)m}(q)$ is
given by that of $f^{(s,c)}(q) \equiv f_{00}^{(s,c)0}(q)$. The
functions $f_{ll}^{(s,c)m}(q)$ decrease with increasing $l$, as
expected from Fig.~\ref{fig9}. The variation of the critical amplitude
$\tilde{h}_{ll}^{(s)m}(q)$ and the correction
$\tilde{h}_{ll}^{(s,2)m}(q)$ with $q$ is similar to that for $l=l'=0$
(cf. Fig.~13 of KKSI) with the exception that these quantities do not
vanish for $q \rightarrow 0$.
The $\alpha$-, $\beta$- and the microscopic time scales can be better
visualized from the imaginary part $(\chi^{(s)''})_{ll}^m(q,\omega)$ of
the dynamical susceptibility as a function of $\omega$, which is shown
for $m=0$ in Fig.~\ref{fig12} for $q=q_{max}$, $l=0$ and $q=0$,
$l=1,2$. The microscopic peak is at about $\omega = 1$ for all these
values of $l$. Whereas the position of the $\alpha$-peak and the
location of the minimum (for low temperatures) are approximately the
same for $l=0$ and $l=2$, these positions are shifted to higher
frequencies by about one decade for $l=1$.
We believe that this shift relates to the $180^{\circ}$-jumps of the
molecules (see Ref.~\cite{r1}), because these jumps do not affect the
correlators with even $l$, but those with odd value of $l$, and
particularly those with $l=1$.
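For reference, susceptibility spectra of this kind are obtained from the time-domain correlators via the classical fluctuation-dissipation relation, $\chi''(\omega) \propto \omega \int_0^{\infty} \Phi(t)\cos(\omega t)\,dt$ for a normalized correlator $\Phi(t)$. The sketch below is illustrative only; a single Debye correlator stands in for the simulated $S_{ll}^{(s)m}(q,t)$:

```python
import numpy as np

def chi_double_prime(t, phi, omega):
    """chi''(omega) = omega * int_0^inf phi(t) cos(omega t) dt,
    evaluated by a simple rectangle rule on a uniform time grid."""
    dt = t[1] - t[0]
    return np.array([w * np.sum(phi * np.cos(w * t)) * dt for w in omega])

# Debye relaxation phi(t) = exp(-t/tau) has the exact spectrum
# chi''(omega) = omega*tau / (1 + (omega*tau)^2), peaked at omega = 1/tau
tau = 10.0
t = np.linspace(0.0, 200.0, 20001)
phi = np.exp(-t / tau)
omega = np.logspace(-3, 1, 200)
chi = chi_double_prime(t, phi, omega)
```

A stretched $\alpha$-relaxation produces a broader, asymmetric peak, and a minimum between the $\alpha$- and microscopic peaks emerges once a fast microscopic contribution is added to $\Phi(t)$.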
The rest of this section is devoted to the discussion of the collective
correlators $S_{ll}^{m}(q,t)$, which are presented in Fig.~\ref{fig13}
for $q=2.8$ (the position of the main peak of $S_{11}^{0}(q)$ (cf.
Fig.~\ref{fig2})) and in Fig.~\ref{fig14} for $q=6.5$ (the location of
the main peak of $S(q)=S_{00}^0(q)$ (cf. Fig.~\ref{fig2})). Note
that, due to symmetry (cf. section~\ref{sec2}), there are only two and
three independent correlators for $l=1$ and $l=2$, respectively. These
correlators exhibit a strong $m$-dependence, in contrast to
$S_{ll}^{(s)m}(q,t)$. The reader should also note that $S_{11}^{1}(q,t)
< S_{11}^{0}(q,t)$ for $q = 2.8$, whereas $S_{11}^{1}(q,t) >
S_{11}^{0}(q,t)$ for $q = 6.5$. These inequalities are related to the
fact that $S_{11}^{0}(q)$ has its main peak at $q \cong 2.8$ where
$S_{11}^{1}(q)$ does not have a maximum, whereas $S_{11}^{1}(q)$ has
its main peak at $q \cong 6.5$, where $S_{11}^{0}(q)$ is close to a
minimum. Similar considerations hold for the $m$- and $q$-dependence
of $S_{22}^{m}(q,t)$. These observations make it obvious that a
factorization [cf. Eq.~(\ref{eq12})] does not work for the collective
correlators.
The test of the second scaling law is shown in Fig.~\ref{fig15} for
$q=2.8$, $m=0$ and $l=1,2$. As already found for $C_l^{(s)}(t)$ and
$C_l(t)$, i.e. the correlation functions for $q=0$, this scaling law
holds for $l=2$ but not for $l=1$. We define the $\alpha$-relaxation time
$\tau_{lm,q}(T)$ as the time at which $S_{ll}^{m}(q,t)$ has
decayed to the value $1/e$. The temperature dependence of
$\tau_{lm,q}(T)$ is shown in Fig.~\ref{fig16}. Fixing $T_c=0.475$, the
$\alpha$-relaxation times obey a power law (I-10) over about 2--3
decades in time. For the corresponding exponent $\gamma$ one obtains
approximately 1.9 for $l=1$ and 2.5 for $l=2$ with no significant
$q$-dependence. Again the $\gamma$-values for $l=2$
(and the same remains true for $l=3,\ldots,6$)
fit with that for $l=0$, which was around 2.55 (see KKSI),
whereas the value of $\gamma$ for $l=1$ is quite different.
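The power-law analysis behind Fig.~\ref{fig16} amounts to fitting $\tau(T) = A\,(T-T_c)^{-\gamma}$ with $T_c$ held fixed. Fitting $\log\tau$ rather than $\tau$ keeps the large dynamic range of the relaxation times from dominating the least-squares objective. A sketch with synthetic data (illustrative values, not our simulation results):

```python
import numpy as np
from scipy.optimize import curve_fit

T_c = 0.475  # held fixed, as in the analysis of the relaxation times

def log_tau(T, logA, gamma):
    # logarithm of the MCT power law tau = A * (T - T_c)^(-gamma)
    return logA - gamma * np.log(T - T_c)

# synthetic relaxation times generated with gamma = 2.5 plus 5% noise
rng = np.random.default_rng(4)
T = np.array([0.477, 0.50, 0.52, 0.55, 0.60, 0.70])
tau = np.exp(log_tau(T, 0.0, 2.5)) * rng.normal(1.0, 0.05, T.size)

(logA_fit, gamma_fit), _ = curve_fit(log_tau, T, np.log(tau), p0=(0.0, 2.0))
```

With $T_c$ fixed, the model is linear in $\log A$ and $\gamma$, so the recovered exponent is robust; leaving $T_c$ free as well makes the fit considerably more ill-conditioned.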
The test of the first scaling law by fitting the time dependence of
$S_{ll}^m(q,t)$ with the critical correlator is done in
Fig.~\ref{fig18} for $l=2$, $m=0$. This fit (again with $\lambda=0.76$
and $t_{\sigma}=69$) works well for different values of $q$.
From the fit with the von Schweidler law plus
correction, Eq.~(I-9), (not shown in Fig.~\ref{fig18})
we compute the critical nonergodicity parameter $f_{ll}^{c,m}(q)$, the
critical amplitude $\tilde{h}_{ll}^{m}(q)$ and the correction
$\tilde{h}_{ll}^{(2)m}(q)$, shown in Figs.~\ref{fig19} and \ref{fig20}
for, respectively, $l=1$ and $l=2$. Although we have seen that $l=1$ is
rather special, we have analyzed the corresponding correlators at the
lowest temperature and have included its result.
For reference we also show in Figs.~\ref{fig19} and \ref{fig20}
the static correlator $S_{ll}^{m}(q)$ and
the $\alpha$-relaxation time $\tau_{lm,q}(T)$ for $T=0.477$. These
quantities possess the same characteristic $q$-dependence already found
for the corresponding quantities of the TDOF, i.e. for $l=l'=m=m'=0$
(cf. Figs. 18 and 19 of KKSI). This means that (i) $\tau_{lm,q}$ and
$f_{ll}^{c,m}(q)$ are in phase and $\tilde{h}_{ll}^{m}(q)$ and
$\tilde{h}_{ll}^{(2)m}(q)$ are in anti-phase with $S_{ll}^{m}(q)$ and (ii) the
correction $\tilde{h}_{ll}^{(2)m}(q)$ is smallest at that $q$ where
$S_{ll}^{m}(q)$ has its main peak. This latter fact is well pronounced
for $(l,m) = (1,0)$ and $(l,m) = (2,0)$ and less so for the others, because there
the $q$-dependence of $S_{ll}^{m}(q)$ is also less pronounced.
\section{Discussion and conclusions}
\label{sec4}
For a system of diatomic and rigid molecules interacting via
Lennard-Jones potentials we have investigated by means of a
MD-simulation the time and temperature dependence of a general class of
$\vec{q}$-, $(l,m)$- and $(l',m')$-dependent correlators. These
correlators $S_{lm,l'm'}(\vec{q},t)$ contain the TDOF and ODOF
explicitly.
The static correlators $S_{ll'}^{m}(q)$ in the $q$-frame are not
diagonal in $l$ and $l'$. Whereas those with $l+l'$ odd are smaller
than $S(q) \equiv S_{00}^{0}(q)$ by about one order of magnitude, this
is not true for $S_{02}^{0}(q)$, where $l+l'$ is even. This different
behavior results from a head-tail symmetry which is only slightly
broken for our molecules.
Our main concern has been the investigation of the time-dependent
correlators (collective and self part) and a test of the predictions of
mode coupling theory (MCT). This has been restricted to the diagonal
correlators ($l=l'$). As a by-product we have found that the $q$- and
$(l,m)$-dependence of the self-correlators $S_{ll}^{(s)m}(q,t)$
approximately factorizes, which was demonstrated for $l=1,2$ and for
$q$ up to 10.6. The reason for this factorization is the faster
relaxation of the ODOF compared to that of the TDOF.
Concerning the MCT predictions, we first studied the existence of a
single transition temperature $T_c$. For the $(q,l,m)$-dependent
$\alpha$-relaxation times $\tau_{lm,q}(T)$ we have found that they can
be fitted with a power law (I-10) with $T_c = 0.475 \pm 0.01$. Thus
from the numerous correlators we have investigated, one unique
temperature $T_c$ can be located, at which the dynamics of TDOF and
ODOF crosses over from an ergodic to a quasi-nonergodic
behavior. This temperature also agrees with that obtained from the
translational diffusion constant $D(T)$. This indicates that the TDOF
and the ODOF are strongly coupled. Values for $\gamma$ and the corresponding
exponent parameter $\lambda$ are given in Table I for the translational
diffusion constant and a selection of correlators. From this Table we
observe that $\gamma$ is non-universal.
Nevertheless there seems to be
some systematic behavior. The $\gamma$-values for all the correlators
with $l \neq 1$ correspond to $\lambda = 0.76 \pm 0.03$ and are
essentially independent of $q$ and
independent of whether the collective or self correlator is
considered. A deviation from this value occurs for $\gamma_D$, the
exponent for the diffusion constant, and even a stronger one for all
correlators with $l=1$. A similar discrepancy between $\gamma_D$ and
the exponent for the $l=0$ relaxation time has been reported
before~\cite{r18}, which shows that this prediction of MCT seems to be
problematic.
This exceptional role of the $(l=1)$-correlators is also observed for
the first and second scaling law of ideal MCT. A consistent picture
within ideal MCT emerges for all $q,l,m$ with $l\neq 1$. The situation
is illustrated in Fig.~\ref{fig21} for an exponent
parameter $\lambda=0.76$. There we plot $(S_{ll}^m(q,t)-f_{ll}^{c,m}(q))
/ \tilde{h}_{ll}^m(q)$ versus $t$, which should equal in the first
scaling regime the critical correlator $G(t)$. All the correlators
shown follow the ``universal'' time-dependence of the critical correlator
$G(t)$ for $\lambda=0.76$. Such a behavior was also found by
Wahnstr\"om and Lewis \cite{r13} for a simple model for
orthoterphenyl. The time range for which the correlators can be fitted
by $G(t)$ depends on $q$, $l$ and $m$ and varies between one and a half
decades (for $C_2^{(s)}(t) \equiv S_{22}^{(s)m}(0,t)$) and three decades
(for $F(q_{max},t) \equiv S_{00}^0(q_{max},t)$ ). Although this time
range increases significantly by taking $\lambda$ and the $\beta$-relaxation
time scale $t_\sigma$ as
free parameters, which seems to yield $\lambda \rightarrow 1$ for
$l \rightarrow \infty$, we believe that the different time ranges relate to the
$(q,l,m)$-dependence of the size of the asymptotic regime. This has
been demonstrated earlier for the TDOF of supercooled water
\cite{scio2} and for the TDOF for our molecular system in KKSI. That
the asymptotic regime depends on $q$ has recently been shown by the
analytical calculation of the next order corrections for a system of
hard spheres \cite{r5}. We also find that for the correlators with $l =
l' \ge 0$ (with exception of $l=l'=1$) the asymptotic regime is
largest for $q_{max}^{(l)}$, the main peak of the static correlator
$S_{ll}^{m}(q)$. This is at variance with the result for
water~\cite{scio_unp}. There it has been found that the corrections are
smallest for $q=q_{FSDP}$, where $q_{FSDP}$ is the position of the
first sharp diffraction peak and not that of the main peak of $S(q)$.
This difference probably relates to the different types of glass-forming
liquids: water is a network former due to its hydrogen-bonding mechanism,
which is absent in our model liquid. The role of this correction
to the asymptotic laws is
also supported by the fact that the $(q,l,m)$-dependence of the
critical nonergodicity parameters, shown in Fig.~\ref{fig20},
is only consistent with that of $f_{ll}^{c,m}(q)$ obtained
from the molecular MCT \cite{r14} for the present liquid of diatomic
molecules, if the next order correction to the von Schweidler law (cf.
Eq.~(I-9)) is taken into account.
The result shown in Fig.~\ref{fig21} also demonstrates the validity
of the factorization of $(q,l,m)$- and $t$-dependence of the various
correlators on the time scale of $t_\sigma$. For simple liquids, i.e.
for $l=m=0$, this is a prediction of MCT~\cite{mct,gotze91}.
There it has been shown that the vertices of the mode coupling
terms are positive for a simple, one-component liquid, which, however,
is not true anymore for molecular liquids \cite{r7}. Since the
factorization theorem only requires that the largest eigenvalue of
a certain stability matrix (see Ref.~\cite{gotze91}) is non-degenerate,
for which the positivity of the vertices is sufficient but not necessary,
we still believe that this non-degeneracy is generic and that
therefore the factorization theorem holds for molecular liquids as
well. In the case that a system exhibits a type-B
transition~\cite{gotze91}, this non-degeneracy and hence the
factorization is guaranteed.
The exceptional behavior for the correlators with $l=1$ has also been
observed in the susceptibility (cf. Fig.~\ref{fig12}). The position of
the minimum between $\alpha$- and microscopic peak of
$(\chi^{(s)''})_{ll}^m(q,\omega)$ is approximately the same for $l=0$
and $l=2$, but not for $l=1$. For the latter it is shifted to higher
frequencies by about one order of magnitude. It is interesting that this
result resembles the experimental results for some glass-forming
liquids. For instance, it has been stressed by Cummins {\it et al.}
\cite{r15} that light scattering data, which may include contributions
from both $l=0$ and $l=2$, are consistent with the spectra obtained
from neutron scattering (which is only $l=0$), but not with those from
dielectric measurements. This is nicely demonstrated for glycerol by
Lunkenheimer {\it et al.} \cite{r16,r17}. The situation illustrated in
Fig.~2 of \cite{r17} is exactly what we have found in Fig.~\ref{fig12}
for our system. The reader should also note that even the relative
weight between the intensity of $\alpha$- and microscopic peaks has the
same qualitative behavior in both cases, i.e. it is significantly
larger for $l=1$ than for $l=0$ and $l=2$. A similar result has been
recently found from a MD-simulation of CKN, where the orientational
dynamics (self part) of the NO$_3^-$ ion was studied for $l=1$ and
$l=2$~\cite{lebon97}. In that paper, and also for the collective
dynamics of dipolar hard
spheres~\cite{r6,r7}, it has been concluded that the different weights of
the $\alpha$-- and microscopic peaks relate to the different numerical
values for the critical nonergodicity parameters. For $q=0$ it
has been argued that $f_{l+1,l+1}^{(s,c)m} < f_{ll}^{(s,c)m}$ (due to $q=0$, no
$m$-dependence exists)~\cite{lebon97}. Since $f_{ll}^{(s,c)m}(q=0)$ is the
$\alpha$-relaxation strength of the corresponding susceptibility and
$(\chi^{(s)''})_{ll}^{m}(q=0)$ fulfills a sum rule (on a logarithmic frequency
scale), it becomes obvious that the ratio between the
$\alpha$-relaxation strength and the area under the microscopic peak is
larger for $l=1$ than for $l=2$. Whether this agreement between the
susceptibilities of glycerol and that for our diatomic molecular liquid
is merely accidental or not is, however, not obvious. One has to keep
in mind (i) that dielectric spectroscopy and light scattering measure
the collective dynamics and not its self part, and (ii) that
glycerol has a permanent dipole moment, in contrast to
our diatomic molecules. How far the dipolar interaction would change our
MD-results is not clear. In addition, we believe that the special role
of $l=1$ relates to the $180^\circ$-jumps of the molecules \cite{r1}.
Whether these jumps also exist for glycerol and whether they really
cause a shift of the minimum is uncertain.
To summarize, we may say that the results obtained in
Refs.~\cite{r1,r2} and in the present paper are consistent with MCT.
There is strong evidence for a single transition temperature, as
predicted by molecular MCT \cite{r7}, and for the validity of the two
scaling laws, with the exception of the correlators with $l=1$. Concerning
the second scaling regime we have found that the $\gamma$-exponent is
not universal in agreement with earlier work on binary liquids
\cite{r18}, but at variance with the MD-simulation for water
\cite{scio1,scio2}. It will be a challenge to clarify the discrepancy
for the $\gamma$-values. The critical law, which is part of the first
scaling regime, could not be observed due to a strong interference with
the microscopic dynamics.
Acknowledgements: We thank the DFG, through SFB 262, for financial
support. Part of this work was done on the computer facilities of the
Regionales Rechenzentrum Kaisers\-lautern.
\section*{\centering Abstract}
{\it Learning object models from views in 3D visual object recognition is
usually formulated either as a function approximation problem of a function
describing the view-manifold of an object, or as that of learning a
class-conditional density. This paper describes an alternative
framework for learning in visual object recognition, that of learning the
view-generalization function. Using the view-generalization function, an
observer can perform Bayes-optimal 3D object recognition given one or more
2D training views directly, without the need for a separate model
acquisition step. The paper shows that view generalization functions can be
computationally practical by restating two widely-used methods, the
eigenspace and linear combination of views approaches, in a view
generalization framework. The paper relates the approach to recent methods
for object recognition based on non-uniform blurring. The paper presents
results both on simulated 3D ``paperclip'' objects and real-world images from the
COIL-100 database showing that useful view-generalization functions can be
realistically be learned from a comparatively small number of training
examples.}
\section{Introduction}
Learning view-based or appearance-based models of objects has been a
major area of research in visual object recognition (see
{\cite{Edelman97}} for reviews). One direction of research has
focused on treating the problem of learning appearance based models as
an {\tmem{interpolation problem}} {\cite{UllBas91, PogEde90}}.
Another approach is to treat the problem of learning object models as
a \textit{classification problem}.
Both approaches have some limitations. For example, acquiring a novel
object may involve fairly complex computations or model building.
They also do not easily explain how an observer can transfer his skill
at recognizing existing objects to generalizing from single or
multiple views of novel objects; to explain such transfer, a variety
of additional methods have been explored in the literature, including
the use of object classes or categories, the acquisition and use of
object parts, or the adaptation and sharing of features or feature
hierarchies.
This paper describes an approach to learning appearance-based models
that addresses these issues in a unified framework: the visual
learning problem is reformulated as that of learning \tmem{view
generalization functions}. The paper shows that knowledge of the view
generalization function is equivalent to being able to carry out
Bayes-optimal 3D object recognition for an arbitrary
collection of objects, presented to the system as training views.
Model acquisition reduces to storing 2D views and does not involve
learning or model building.
This represents a significant paradigm shift relative to previous
approaches to learning in visual object recognition, which have
treated the problem of acquiring models as a separate learning
problem. While previous models of visual object recognition can be
reinterpreted in the framework in this paper (and we will do so for
two such methods), the formulation in terms of view generalization
functions makes it easy to apply any of a wide variety of standard
statistical models and classifiers to the problem of generalization to
novel objects.
In this paper, I will first express Bayes-optimal 3D object
recognition in terms of training and target views and prior
distributions on object models and viewpoints. Then, I will describe
the statistical basis of learning view generalization functions.
Finally, I will demonstrate, both on the standard ``paperclip'' model
and on the COIL-100 database, that learning view generalization
functions is feasible.
\section{Bayesian 3D Object Recognition}
This section will review 3D object recognition from a Bayesian
perspective and establish notation.
Let us look at the question of how an observer can recognize 3D
objects from their 2D views. Let $\omega$ identify an object and $B$
be an unknown 2D view (we will refer to $B$ also as the {\em target
view}). Then, classifying $B$ according to $\hat{\omega}(B) =
\tmop{arg} \max_{\omega} P ( \omega | B )$ is well known to result in
minimum error classification \cite{Duda01}. Using Bayes rule, we can
rewrite this as
\begin{eqnarray}\label{bayesrule}
\tmop{arg} \max_{\omega} P ( \omega | B ) & = & \tmop{arg} \max_{\omega}
\frac{P ( B | \omega ) P ( \omega )}{P ( B )}\\
& = & \tmop{arg} \max_{\omega} P ( B | \omega ) P ( \omega ) \nonumber
\end{eqnarray}
$P(\omega)$ is simply the frequency with which object $\omega$
occurs in the world. Let us try to express $P ( B | \omega )$
in terms of models and/or training views.
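As a concrete illustration (the numerical values below are hypothetical), the classification rule in Equation~\ref{bayesrule} amounts to a single argmax over per-object scores:

```python
import numpy as np

def bayes_classify(likelihoods, priors):
    """arg max over omega of P(B | omega) P(omega)."""
    scores = np.asarray(likelihoods, float) * np.asarray(priors, float)
    return int(np.argmax(scores))

# Hypothetical numbers: object 1 fits the view best even after
# weighting by the priors, so it is selected.
print(bayes_classify([0.2, 0.6, 0.1], [0.5, 0.3, 0.2]))  # -> 1
```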
Assume that we are given a 3D object model $M_{\omega}$. In the absence
of noise, the projection of this 3D model into a 2D image is determined by
some function $f$ of the viewing parameters $\phi \in \Phi$, $B = f (
M_{\omega}, \phi )$. The function $f$ is usually a rigid body transformation
followed by orthographic or perspective projection.
In the presence of additive noise, $B = f ( M_{\omega}, \phi ) + N$
for some amount of noise distributed according to some prior noise
distribution $P ( N )$. With this notation, we can now express $P ( B
| \omega )$ in terms of the 3D object model\footnote{$\delta$ is
the Dirac delta function.}
\begin{equation}\label{objviews}
P ( B | \omega ) = \int \delta ( B, f ( M_{\omega}, \phi ) + N ) P ( \phi )
P ( N ) \; d \phi \; d N
\end{equation}
To simplify notation below, we write
$P(B|M_\omega,\phi)=\int \delta ( B, f ( M_{\omega}, \phi ) + N )\,P(N)\,dN$
and obtain
\begin{eqnarray}\label{mbr}
P ( B | \omega ) & = & \int P ( B | M_{\omega}, \phi ) P ( \phi )\, d \phi
\end{eqnarray}
By construction, Equation~\ref{mbr} represents {\em Bayes-optimal 3D
model-based recognition}, assuming perfect knowledge of the 3D model
$M_\omega$ for a given object $\omega$.
\begin{figure}[t]
\centerline{\includegraphics[height=0.7in]{Figures/examples-noclass-1.png}}
\caption{Examples of paperclips used in the simulations.}
\label{figexclip}
\label{figexamples}
\end{figure}
In real-world recognition problems, the observer is rarely given a
correct 3D model $M_{\omega}$ prior to recognition. Instead, the
observer needs to infer the model from a set of training views\footnote{
For the rest of the paper, we limit ourselves to
the case where the training and test views are drawn in an identical
manner and independently of one another; the more general case in
which, say, the training views $\mathcal{T}_\omega$ come from a motion
sequence and hence have sequential correlations in their viewing parameters
can be treated analogously.
}
$\mathcal{T}_{\omega} = \{ T_{\omega,1}, \ldots, T_{\omega,r} \}$.
Therefore, an observer is faced with the problem of determining $P ( B |
\omega )$ as $P ( B | \mathcal{T}_{\omega} )$. In a model-based framework,
this means that the observer attempts to perform reconstruction of the object model $M$
given the training views $\mathcal{T}_{\omega}$ and then performs
recognition using the resulting distribution of probabilities over the
possible models for recognition. If we put this together with Equation~\ref{mbr},
we obtain for $P(B|\omega) = P(B|\mathcal{T}_\omega)$:
\begin{equation}\label{viewgenstat}
P ( B | \mathcal{T}_{\omega} ) = \int P ( B | M, \phi ) P ( M | \mathcal{T}_{\omega} ) P ( \phi ) d M d \phi
\end{equation}
By construction, $P(B|\mathcal{T}_\omega)$ represents the density of
target views $B$ given a set of training views $\mathcal{T}_\omega$.
Therefore, applying Equation~\ref{viewgenstat} together with
Equation~\ref{bayesrule} results in {\em Bayes-optimal 3D model-based
recognition from 2D training views}.
Now that we have derived the Bayes-optimal 3D object
recognition, let us look at some approaches that have been proposed in
the literature for solving the 3D object recognition problem and how
they relate to Bayes optimal recognition.
\begin{figure}[t]
\includegraphics[height=1in]{Figures/locshow.png}
\caption{
Illustration of $P(B|T_\omega)$. (a) The feature vector $T_\omega$,
represented as an image (vertices of the clip quantized to a grid),
(b) $\log \hat{P}(B|T_\omega) - \log \hat{P}(B)$ (darker=higher
probability).
\label{figpost}
}
\end{figure}
\paragraph{3D Model-Based Maximum Likelihood Methods.}
Traditional approaches to model-based 3D computer vision (e.g.,
\cite{Grimson90z}) generally divide recognition into two phases.
During a model acquisition phase, the recognition system attempts to
optimally reconstruct 3D models from 2D training data. During the
recognition phase, the system attempts to find the optimal match of
the reconstructed 3D model against image data.
This is often realized by estimating $M_{\omega}$ using a maximum
likelihood or maximum a posteriori (MAP) procedure (e.g., least square
methods, assuming Gaussian error), $\hat{M}_{\omega} = \tmop{arg}
\max_M P ( M | \mathcal{T}_{\omega} )$ and then performing 3D
model-based recognition in a maximum likelihood setting using
$\hat{M}_{\omega}$.
\begin{eqnarray}\label{ccv-mlv}
P ( B | \omega ) &=& P ( B | \mathcal{T}_{\omega} ) = \max_{\phi} P ( B |
\hat{M}, \phi ) \\
\hat{M} &=& \tmop{arg} \max_M P ( M |
\mathcal{T}_{\omega} ) \label{ccv-mlm}
\end{eqnarray}
It is important to remember that this approach is not Bayes optimal in
general--it is a good approximation only under certain conditions, for
example, when all the distributions $P ( B | M, \phi )$ are unimodal,
sharply peaked, and have comparable covariances. Furthermore,
computationally, the maximum likelihood estimations have proven to be
fairly difficult and costly optimization problems.
One reason that has made such approaches attractive is that, as the
amount of noise and variability become small, the reconstruction
and matching problems can be treated geometrically, and a wealth
of results has been derived in that limit (cf.\ algorithms like
\cite{higgins81}). But from a statistical point of view, such geometric
approaches can be unnecessarily restrictive. For example, in the case
in which the training set $\mathcal{T}_{\omega}$ consists of only a
single view $T_\omega$, 3D reconstruction is not possible for
arbitrary 3D objects. Yet, as we will see in the experimental results
below, $P ( M | T_\omega )$ still contains considerable amounts of
information.
\paragraph{View Interpolation Approaches.}
Because the imaging transformation $f ( M, \phi
)$ is smooth, the set of views $\mathcal{B}_M = \{ f ( M, \phi ) | \phi \in
\Phi \}$ of an object itself forms a smooth, low-dimensional surface in the
space of all possible views. In fact, $\mathcal{B}_M$ is embedded in a
low-dimensional linear subspace of the space of all possible views
{\cite{UllBas91}}. The smoothness of $\mathcal{B}_M$ suggests that it might
be learned from examples using a surface or function interpolation method.
This has given rise to one of the most influential approaches to learning in
3D object recognition, developed by Poggio and Edelman {\cite{PogEde90}}.
Methods that approximate the view manifold (e.g.,
\cite{PogEde90,UllBas91,nayar96realtime}) generally attempt to compute
some geometrically motivated distance of the target view from the view
manifold and then perform nearest neighbor classification in terms of
that distance. This approach would minimize recognition error rates
if the distribution of views over the view manifolds were uniform and
several other conditions were satisfied. However, most work on
geometric and interpolation methods does not demonstrate
Bayes-optimality of the classification error, but only proves results
about the quality of the approximation to the view manifold that they
achieve. In general, a good approximation to the view manifolds is
neither necessary nor sufficient for Bayes-optimal recognition
(although it does often seem to work reasonably well).
\paragraph{Classification Approaches.}
Many classification methods (multi-layer perceptron, logistic
regression, mixture discriminant analysis, etc.) are concerned with
estimating posterior distributions like $P ( \omega | B )$ or
corresponding discriminant functions directly. They share with the
methods described in this paper that they do not necessarily involve
the two-step maximization procedure used in traditional model-based
systems (Equations~\ref{ccv-mlv} and~\ref{ccv-mlm}). Classification
methods have not been all that popular for 3D object recognition in
the past, but there has been some recent work on it (e.g.,
\cite{pontil98support}).
\paragraph{Single-View Generalization.}
Based on geometric considerations alone,
if nothing else is known about a 3D object, multiple views of an
object are needed in order to reconstruct a 3D model of the object
from views (e.g., \cite{higgins81}). Generalization from a single
view is usually only considered possible when the object is known to
have special properties like symmetry or when the object is known to
be a member of some other kind of object class (e.g.,
\cite{vetter97linear}). Geometrically, of course, this is true.
Statistically, however, even if 3D model reconstruction is not
possible, $P(B|\mathcal{T}_\omega)$ may still contain information
permitting significant single view generalization, as the experiments
below will show.
\iffalse
Translated into the Bayesian framework of this
paper, postulating the need for knowledge of special object properties
like symmetry or class membership is similar to assuming that $P ( B |
\mathcal{T}_{\omega} )$ is a fairly uninformative distribution unless
the {\tmem{a priori}} distribution of models, $P ( M )$, is more
restrictive than the set of all possible 3D models. However, this is
an odd assumption. First of all, a limited degree of 3D
generalization from single 2D views is observed in psychophysical
experiments (noted, for example, in \cite{PogEde90}). Furthermore,
effects like the viewpoint peaking effect {\cite{BinLevMan89}}
suggest that the distributions we should expect for $P ( B |
\mathcal{T}_{\omega} )$ are far from uniform.
\fi
\section{View Generalization Functions}
We have seen that previous approaches to learning object models have
concentrated on learning $f_M ( \omega )$, $P ( \omega | B )$, or $P (
B | \omega )$. This paper proposes and examines a different learning
problem for 3D object recognition: the direct estimation of the view
generalization function, defined as follows:
\begin{definition}
We define the {\bf $r$-view generalization function}
as the conditional density
$P(B|\mathcal{T}_{\omega}) = P( B | T_{\omega,1}, \ldots,
T_{\omega,r})$ given by Equation~\ref{viewgenstat}.
\end{definition}
If the training set $\mathcal{T}_\omega$ consists of a single view
$T_\omega$, we call this a {\em single view generalization function}.
Notice that view generalization functions are functions of views only;
they do not involve any object models. In some sense, they tell us
how much an unknown view is similar to a set of training views.
If we have a good estimate of the view generalization function, we can
perform Bayes-optimal 3D object recognition by a generalized nearest
neighbor procedure with a variable metric, somewhat analogous to the
procedure in \cite{Lowe95}.
That is, the vision system initially builds a good approximation of
the view generalization function ${P}(B|\mathcal{T}_\omega)$ from
visual input. This might require a lot of training data,
corresponding perhaps to several years of visual input after birth in
human vision.
Once a vision system has acquired a fairly good approximation of
${P}(B|\mathcal{T}_\omega)$, the acquisition of new object models merely
requires storing the training views $\mathcal{T}_\omega$. Let us
assume that training views are unambiguous, $P(\omega|T_\omega)=1$
(otherwise, the procedure is still optimal $k$-nearest neighbor but
does not necessarily achieve Bayes-optimal classification rates
\cite{2003-breuel-icdar-2}). Given the view generalization function
and a collection of training views for each object, Bayes-optimal
recognition of an unknown view $B$ against the model base can then be
carried out by evaluating ${P}(B|\mathcal{T}_{\omega_i})\,P(\omega_i)$
for each object $\omega_i$ under consideration and classifying according
to Equation~\ref{bayesrule}. Furthermore, if the view generalization
function ${P}(B|\mathcal{T}_\omega)$ can be implemented in a low-depth
circuit, the visual system will be able to carry out Bayes-optimal
recognition of novel 3D objects from 2D training views quickly,
without the need for the optimizations implicit in traditional maximum
likelihood approaches used in computer vision (see Equations~\ref{ccv-mlv}
and~\ref{ccv-mlm}).
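The recognition procedure just described, storing views at acquisition time and scoring with a learned view generalization function, can be sketched as follows; the Gaussian-kernel $g$ below is a hypothetical stand-in for a learned function, not the model used later in the paper:

```python
import numpy as np

class ViewBasedRecognizer:
    """Model acquisition is just storing 2D training views; recognition
    evaluates a learned view generalization function g(B, views) per object."""

    def __init__(self, g):
        self.g = g            # learned approximation of P(B | T_1, ..., T_r)
        self.model_base = {}  # object id -> (stored views, prior P(omega))

    def acquire(self, obj, views, prior=1.0):
        self.model_base[obj] = (views, prior)

    def classify(self, B):
        scores = {obj: self.g(B, views) * prior
                  for obj, (views, prior) in self.model_base.items()}
        return max(scores, key=scores.get)

# Hypothetical stand-in for a learned g: a Gaussian kernel on view distance.
def g(B, views):
    return max(np.exp(-np.sum((np.asarray(B) - np.asarray(T)) ** 2))
               for T in views)

rec = ViewBasedRecognizer(g)
rec.acquire("a", [[0.0, 0.0]])
rec.acquire("b", [[3.0, 3.0]])
print(rec.classify([0.2, -0.1]))  # -> a
```

Note that no optimization over models or poses occurs at recognition time; all the work is in the (assumed given) function $g$.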
Of course, whether this approach works hinges crucially on whether it
is possible to learn an approximation to the view generalization
function that actually generalizes to novel objects and has the
desired properties. If every new object the system encounters
requires updating of the estimate of the view generalization function,
the approach effectively reduces to traditional one-by-one
learning of object models. If, on the other hand, after an initial
set of training examples, the estimate of ${P}(B|\mathcal{T}_\omega)$
generalizes reasonably well to previously unseen objects, then the
approach is successful.
The rest of this section will explore these issues further with
examples and some theoretical arguments. Subsequent sections will
provide some experimental evidence that learning view generalization
functions is feasible.
\paragraph{Smoothness of the View Generalization Function.}
Intuitively, we would expect that, for most objects and views, if the
set of training views $\mathcal{T}_\omega$ for two objects is similar,
the distributions $P(M|\mathcal{T}_\omega)$ of possible corresponding
object models are similar as well, and so are the distributions
$P(B|M)$ of other possible views. This corresponds to a statement
about the smoothness of the view generalization function. It can be
demonstrated formally for specific model distributions, camera and
noise models by differentiating Equation~\ref{viewgenstat} with
respect to $B$ and the $T_{\omega,i}$.
Such smoothness properties suggest that the view generalization
function may be learnable using techniques like radial basis function
(RBF) interpolation or multilayer perceptrons (MLPs) that take
advantage of smoothness; \cite{PogEde90} use a similar argument to
motivate the use of RBFs for learning individual view manifolds.
Note that, in contrast to the view generalization function, the
maximum likelihood solutions given by Equations~\ref{ccv-mlv}
and~\ref{ccv-mlm} and used in many computer vision systems, when
viewed as functions of the target and training views, are not
necessarily smooth and therefore probably not easily approximated
using models like RBFs.
\paragraph{Model Priors.}
One of the important properties of the view generalization function is
that it does not depend on the specific models the observer has
acquired in his model base. Rather, it depends on the prior
distribution of models from which the actual models encountered by the
system are drawn.
\begin{theorem}
The view generalization function is completely determined by the prior
distribution of 3D models $P(M)$, the distribution of viewing
parameters $P(\phi)$, the noise distribution $P(N)$, and the choice of
imaging model $f(M,\phi)$.
\end{theorem}
{\it Proof.} In analogy to Equation~\ref{objviews}, we have for a
training view $T_\omega$, $P(T_\omega|M) =
\int\delta(T_\omega, f(M,\phi)+N)\,P(\phi)\,P(N)\,d\phi\,dN$.
Since the training views are (by assumption) drawn independently,
$P(\mathcal{T}_\omega|M) = \prod_{T_\omega\in\mathcal{T}_\omega}P(T_\omega|M)$.
Using Bayes formula, we invert this to yield $P(M|\mathcal{T}_\omega)$.
Furthermore, $P(B|M,\phi) = \int \delta(B, f(M,\phi)+N)\,P(N)\,dN$.
With this, we have all the components to evaluate Equation~\ref{viewgenstat}.
$\Box$
\paragraph{Linear Combination of Views.}
Let us now turn to the question of whether fast, or even low-depth
arithmetic circuit, implementations of view generalization functions
are plausible. To do this, we will recast two commonly used
approaches to 3D object recognition, linear combination of views
\cite{UllBas91} and eigenspace methods (below), into a
view-generalization function form. The resulting view generalization
functions implement those models exactly and hence would perform
identically to those methods.
In a linear combination of views framework, we test whether a novel
target view $B$ can be expressed as a linear combination of training
views. Let us assume concretely that we want to generalize based on
three training views per object, $P ( B | T_1, T_2, T_3 ) = g ( B,
T_1, T_2, T_3 )$. The error $\epsilon$ by which we judge similarity is
the magnitude of the residual that remains after the linear
combination of training views has been subtracted. Performing nearest
neighbor classification using $\epsilon$ corresponds to assuming any
of a wide number of unimodal, symmetric distributions $U$ for
$\epsilon$; that is, nearest neighbor classification using linear
combination of views is the same as classifying using the conditional
density $P ( B | T_1, T_2, T_3 ) = U ( \epsilon^{} )$. If we write
$\rho_v ( x ) = x - \frac{v \cdot x}{\| v \|^2}\, v$ for the residual that
remains after subtracting the projection of $x$ onto $v$ from $x$,
then we can compute $\epsilon$ as $\epsilon = \| \rho_{T_3} (
\rho_{T_2} (
\rho_{T_1} ( B ))) \|$, and the linear combination of views (LCV) view
generalization function $g_{\tmop{LCV}} ( B, T_1, T_2, T_3 ) = U ( \epsilon )
= U ( \| \rho_{T_3} ( \rho_{T_2} ( \rho_{T_1} ( B ))) \| )$. Generalizing to
$r$ training views, we can clearly compute this with an arithmetic circuit of
depth proportional to $r$. Therefore, we have seen that if we use a linear
combination of view model of object similarity, then the view generalization
function can be expressed as a fairly simple function that can be implemented
as a circuit of depth proportional to the number of views $r$.
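A minimal numerical sketch of $g_{\tmop{LCV}}$, assuming a Gaussian choice for the unimodal distribution $U$ (all function and variable names below are illustrative):

```python
import numpy as np

def residual(v, x):
    """rho_v(x): the residual of x after removing its component along v."""
    v, x = np.asarray(v, float), np.asarray(x, float)
    return x - (np.dot(v, x) / np.dot(v, v)) * v

def g_lcv(B, training_views, sigma=1.0):
    """Sequentially project out each training view, as in the paper's
    formula, and score the remaining residual with a Gaussian U."""
    r = np.asarray(B, float)
    for T in training_views:
        r = residual(T, r)
    eps = np.linalg.norm(r)
    return np.exp(-eps ** 2 / (2 * sigma ** 2))

# A target view that is a linear combination of the training views
# leaves no residual and scores U(0) = 1.
T1, T2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
print(g_lcv(2 * T1 + 3 * T2, [T1, T2]))  # -> 1.0
```

Each call to `residual` corresponds to one layer of the arithmetic circuit, so the depth grows linearly with the number of training views $r$.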
\paragraph{Eigenspace Methods.}
Eigenspace methods and related techniques have been used extensively
in information retrieval (latent semantic analysis, LSA) and computer
vision {\cite{moghaddam_cvpr94,nene96columbia}}. In general, in
eigenspace methods, given a set of training views $T_i$ for multiple
objects, we compute a low-dimensional linear subspace $\mathcal{S}$
and evaluate similarity among a target view $B$ and a training view
$T_{\omega}$ within that low-dimensional subspace. That is,
eigenspace methods use an error $\epsilon = \| \hbox{\sf Pr}_{\mathcal{S}} ( B ) -
\hbox{\sf Pr}_{\mathcal{S}} ( T_{\omega} ) \|$ for nearest neighbor classification, where $\hbox{\sf Pr}_\mathcal{S}$ is
the linear projection operator onto $\mathcal{S}$. This
procedure can be justified, for example, when the training samples $T_i$
fall into a low-dimensional linear subspace in the error-free
case, but are corrupted with Gaussian noise whose magnitude is small
compared to the variability of the training samples. Then, if we
determine the covariance matrix of the $T_i$, its large eigenvalues
will correspond approximately to directions representing meaningful
object variability, while its small eigenvalues will correspond
approximately to directions representing only noise \cite{Duda01}.
As before, nearest neighbor classification using $\epsilon$ is
equivalent to choosing some unimodal error distribution $U ( \epsilon
)$ (e.g., Gaussian) and approximating
\begin{equation}
P ( B | \mathcal{T}_{\omega} ) \propto \max_{T \in \mathcal{T}_{\omega}} U (
\epsilon ) = \max_{T \in \mathcal{T}_{\omega}} U ( \| \hbox{\sf Pr}_{\mathcal{S}} ( B
) - \hbox{\sf Pr}_{\mathcal{S}} ( T ) \| ) \label{esvg}
\end{equation}
Therefore, we can view eigenspace methods as a very simple form of learning a
view generalization function; the function has the specific form given in
Equation \ref{esvg}, with only the projection operator $\hbox{\sf Pr}_{\mathcal{S}}$ being
learned by the observer.
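This eigenspace form of the view generalization function can be sketched as follows; the SVD-based subspace estimate and the Gaussian choice for $U$ are illustrative assumptions:

```python
import numpy as np

def learn_subspace(all_training_views, k):
    """Estimate an orthonormal basis of the top-k principal subspace S
    from views of many objects, via SVD of the centered view matrix."""
    X = np.asarray(all_training_views, float)
    X = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k]  # rows: basis vectors of S

def eigenspace_score(B, T, basis, sigma=1.0):
    """U(|| Pr_S(B) - Pr_S(T) ||) with a Gaussian choice of U."""
    d = basis @ np.asarray(B, float) - basis @ np.asarray(T, float)
    return np.exp(-np.dot(d, d) / (2 * sigma ** 2))

# An identical pair of views always scores 1.0 inside the subspace.
views = np.random.RandomState(0).randn(20, 5)
W = learn_subspace(views, k=2)
print(eigenspace_score(views[0], views[0], W))  # -> 1.0
```

Only the projection `W` is learned; scoring a new pair of views is a fixed, shallow computation.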
\section{First Order Single View Model}
In this section, we will look at a simple experimental evaluation of
single view generalization functions, applied to simulated 3D
paperclips. Simulated 3D paperclips are widely used in computational
vision, psychophysical experiments, and neurophysiological work (e.g.,
\cite{PogEde90,LiuKniKer95}). Let us briefly review the model here
and state the parameters used in this and the next section.
Random 3D models are generated by picking five unit vectors in
$\mathbb{R}^3$ with uniformly random directions and putting them
end-to-end. To obtain a 2D view of the object, the 3D model is rotated
by some amount and then projected orthographically along the $z$
axis. Views are centered so that the centroid falls at the origin.
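The generation and viewing procedure just described can be sketched as follows (normalizing Gaussian vectors yields the uniformly random directions; names are illustrative):

```python
import numpy as np

def random_clip(rng, segments=5):
    """Place five uniformly random unit vectors end-to-end: 6 vertices."""
    dirs = rng.randn(segments, 3)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # uniform directions
    return np.vstack([np.zeros(3), np.cumsum(dirs, axis=0)])

def project_view(clip, rx, ry):
    """Rotate about the x and y axes, project orthographically along z,
    and center so the centroid falls at the origin."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    pts = (clip @ Rx.T) @ Ry.T
    view = pts[:, :2]
    return view - view.mean(axis=0)

clip = random_clip(np.random.RandomState(0))
v = project_view(clip, rx=0.2, ry=-0.3)
print(v.shape)  # -> (6, 2)
```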
For all the experiments involving paperclips below, the training set
consisted of random views derived from a fixed set of 200 randomly
constructed 3D clip models. That is, all generalization to arbitrary,
previously unseen 3D clip models was derived from information learned
from this small, fixed sample of 200 clips.
For each test trial, novel previously unseen 3D clip models were
generated randomly and random views of those clips were generated by
random rotations in the range $[-40^{\circ},+40^{\circ} ]$ around the
$x$ and $y$ axes relative to the training view; this range of
rotations was chosen because it is comparable to what previous
authors have used and seems to be at the limit of human single view
generalization ability for these kinds of images (e.g.,
\cite{PogEde90}).
In order to be accessible to a learning algorithm, these views need to
be encoded as a feature vector. Three kinds of encodings have been
commonly used in the literature and are used in this paper. An angular
encoding uses the ordered sequence of angles around each vertex in the
projected image, giving rise to a four-dimensional feature vector. An
ordered location encoding uses the concatenation of $x$ and $y$
coordinates, in sequence, as its feature vector, resulting in a 10
dimensional feature vector. A feature map encoding projects the
vertices of the clip onto a bounded grid composed of $40\times40$
buckets, resulting in a binary feature vector of length $1600$.
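The feature map encoding can be sketched as follows; the spatial extent mapped onto the grid is an assumed parameter not specified above:

```python
import numpy as np

def feature_map(points2d, grid=40, extent=3.0):
    """Binary grid x grid feature map: mark the bucket holding each vertex.

    `extent` is an assumed coordinate bound ([-extent, extent] maps onto
    the grid); points outside are clipped to the border buckets."""
    f = np.zeros((grid, grid), dtype=np.uint8)
    idx = ((np.asarray(points2d, float) + extent) / (2 * extent) * grid)
    idx = np.clip(idx.astype(int), 0, grid - 1)
    f[idx[:, 1], idx[:, 0]] = 1
    return f.ravel()  # binary feature vector of length 1600

fm = feature_map([[0.0, 0.0], [1.0, -1.0]])
print(fm.shape, int(fm.sum()))  # -> (1600,) 2
```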
\paragraph{Single View Generalization.}
Let us now look at building an empirical distribution model of $P ( B
| \mathcal{T}_{\omega} )$. We will limit ourselves to
{\tmem{single-view generalization models}}; that is, we assume that
the set of training views for an object $\omega$ consists of a single
view $\mathcal{T}_{\omega} = \{ T_{\omega} \}$. Note that this
problem has not been studied much in computer vision; this is perhaps
because, based on geometry alone, a training set consisting of a
single view $T_{\omega}$ does not permit reconstruction of the 3D
structure of an arbitrary object even in the error-free case. However,
as several authors have observed (e.g., {\cite{PogEde90}}), human
observers are capable of a significant degree of 3D generalization, so
there is reason to believe that 3D recognition based on $P ( B |
T_{\omega} )$, that is, recognition based solely on a single training
view is possible, at least to some degree.
\paragraph{First Order Approximation.}
For concreteness, let us assume the feature map representation of
views discussed above. In that representation, a view $B$ is a binary
feature vector $B = ( B_1, \ldots, B_r )$, where each $B_i$ represents
a pixel or bucket in the image, and analogously for $T$. We can try
to model $P ( B | T )$ as an expansion {\cite{mccullagh87}}:
\begin{equation}
\log P ( B | T ) \approx
\frac{1}{Z} ( h^{( 0 )} + \sum_{i j} h_{i j}^{( 1)} ( B_i, T_j )
+ \sum_{i j k} h^{( 2 )}_{i j k} ( B_i, T_j, T_k ) + \ldots )
\end{equation}
Here, the $h^{( k )}$ are functions of their boolean-valued arguments. The
different $h^{( k )}$ correspond to taking into account increasingly higher-order
correlations among features.
Of particular interest is the ``first-order'' approximation, for which we take
into account only $h^{( 0 )}$ and $h^{( 1 )}$. Let us look at the probability
that pixel $B_i$ in the view $B$ is ``on'' given the training view $T$:
\[ \log P ( B_i = 1 | T ) = \tmop{const} + \sum_{j} h_{i j} ( 1, T_j )
\]
But this means that if we look at $\log P ( B_i | T )$, it is a
blurred version of the training view, with $h_{ij}$ as a
spatially varying blurring kernel.
Blurring, with or without spatially variable kernels, has been proposed as a
means of generalization in computer vision by a number of previous authors.
In a recent result, {\cite{berg01}} derives non-uniform blurring for 2D
geometric matching problems, the ``geometric blur'' of an object. The
results sketched in this section make the connection between non-uniform
geometric blurring and first order approximations to the single view
generalization function, $g ( B, T ) = P ( B | T )$. This connection lets us
determine more precisely how we should compute geometric blurring, what
approximations it involves compared to the Bayes-optimal solution, and how we
can improve those approximations to higher-order statistical models. Let us
note also that there is nothing special about the representation in terms of
feature maps; had we chosen to represent views as collections of feature
coordinates, a first order approximation would have turned into error
distributions on the location of each model feature.
\iffalse
\begin{figure}[t]
\hbox{\includegraphics[height=1.2in]{Figures/generalization.png}}
\caption{Graph showing the generalization achieved across viewpoints.
Shown is $P(S=1|B,B')$ as $B$ is held fixed and $B'$ is a view of the same
object rotated by different angles around the $y$ axis. The angle of
the rotation is shown on the horizontal axis; (a) estimates for a single clip,
(b) average of results from 100 clips.
\label{figgeneralization}
}
\end{figure}
\fi
\paragraph{Experimental Results.}
Using the paperclip models, we can estimate the
parameters of the first order model above by simulation: we repeatedly
generate different views of objects, compute their feature vectors, and
compute the frequency of co-occurrence of features in the training view $T$
and a target view $B$ (a kind of Hebbian learning). This allows us to
visualize the non-linear blurring that results in single-view generalization.
An example of this is shown in Figure \ref{figpost}.
Note that, similar to
\cite{berg01}, there is more blurring further away from the center of the
object. However, the two approaches differ in that geometric blur
does not take into account, among other things, the prior distribution
of models $P(M)$ and hence does not necessarily result in Bayes
optimal performance when applied to object recognition problems, while
the empirical statistical model of view similarity used here
approximates the true class conditional distribution.
In terms of error rates in a forced-choice experiment, view
similarity using these non-uniform blurs achieves an error rate of
7.2\%, compared to 32\% using simple 2D similarity, demonstrating
substantial improvements from the use of the view similarity approach.
Note also that because of the nature of the feature vector used--a 2D
feature map--the system did not have access to correspondence
information.
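The co-occurrence (Hebbian) estimate and the resulting first-order score can be sketched as follows; the particular monotone map from counts to $h_{ij}$ is an illustrative assumption:

```python
import numpy as np

def hebbian_counts(pairs, d):
    """Co-occurrence counts C[i, j] = #(B_i = 1 and T_j = 1) over pairs
    of (training view, target view) binary feature vectors."""
    C = np.zeros((d, d))
    for T, B in pairs:
        C += np.outer(np.asarray(B, float), np.asarray(T, float))
    return C  # column j, reshaped to the grid, is the blur kernel at j

def first_order_logp(B, T, C):
    """Score proportional to the first-order sum over active features,
    with h = log(1 + C) as a simple monotone choice."""
    h = np.log1p(C)
    return float(np.asarray(B, float) @ h @ np.asarray(T, float))

# Toy check: after training, feature 0 in T predicts feature 1 in B.
pairs = [([1, 0, 0], [0, 1, 0])] * 10 + [([0, 0, 1], [0, 0, 1])] * 10
C = hebbian_counts(pairs, 3)
print(first_order_logp([0, 1, 0], [1, 0, 0], C) >
      first_order_logp([0, 0, 1], [1, 0, 0], C))  # -> True
```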
\section{View Similarity Models}
Densities like the view generalization function $P ( B |
\mathcal{T}_{\omega} )$ can be difficult to estimate.
It would be more convenient if we could reformulate the learning
problem as that of modeling a class posterior density: there is a
wide variety of models available for class posterior densities
(logistic regression, radial basis functions, multilayer perceptrons,
etc.).
Fortunately, we can perform that transformation fairly easily. During
recognition from a model base, we compare the unknown view $B$ repeatedly
against collections of training views $\mathcal{T}_{\omega}$ for each object.
There are two conditions under which this takes place: either the view $B$
derives from the same object $\omega$ as the training views
$\mathcal{T}_{\omega}$, or the view derives from some other object. Let us
represent these two conditions by a boolean indicator variable $S$. For $B$
not derived from $\omega$, the conditional distribution $P ( B | S = 0,
\mathcal{T}_{\omega} )$ is simply the prior distribution of possible views $P
( B )$. When $B$ is derived from the same object as the training views, that
is $S = 1$, we have:
\begin{eqnarray*}
P ( B | S = 1, \mathcal{T}_{\omega} ) & = & P ( S = 1 | B,
\mathcal{T}_{\omega} ) \, \frac{P ( B )}{P ( S = 1 | \mathcal{T}_\omega )}
\end{eqnarray*}
Given an unknown view $B$ to recognize, $P ( B )$ does
not change with $\omega$, and $P(S=1|\mathcal{T}_\omega)=P(\omega)$.
Therefore,
\[ \hat{\omega} = \tmop{arg} \max_{\omega} P ( B | \mathcal{T}_{\omega} )
P(\omega) =
\tmop{arg} \max_{\omega} P ( S = 1 | B, \mathcal{T}_{\omega} )
\]
Let us call the distribution $P ( S = 1 | B, \mathcal{T}_{\omega} )$ the
{\tmem{view similarity function}}. If $\mathcal{T}_{\omega}$ consists of a
single view, we call this distribution the {\tmem{single view similarity
function}}. It acts like an adaptive similarity metric \cite{Lowe95} when
used for recognition from a model base using Equation~\ref{bayesrule}.
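The recognition rule $\hat{\omega} = \arg\max_\omega P(S=1|B,\mathcal{T}_\omega)$ derived above can be sketched as follows; the similarity function and the toy model base are placeholders, not the learned models used in the experiments.

```python
def recognize(b, model_base, similarity):
    """Recognition rule derived above: return the object omega whose
    training views maximize P(S = 1 | B, T_omega). `similarity` stands in
    for a learned estimate of that posterior (an assumption, not a model
    from the experiments)."""
    return max(model_base, key=lambda omega: similarity(b, model_base[omega]))

# toy model base with one scalar "view" per object; similarity decreases
# with the distance between views
base = {"cube": [0.0], "cone": [5.0]}
sim = lambda b, views: -min(abs(b - t) for t in views)
```

With an adaptive `similarity`, this is exactly recognition from a model base via the view similarity function.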
\begin{figure}[t]
\centerline{(a)~\includegraphics[height=0.7in]{Figures/coil-samples.png}
(b)\includegraphics[height=0.7in]{Figures/coil-deriv.png}}
\caption{(a) Sample images from the COIL-100 database.
(b) The feature map used as input to the recognition system.}
\label{figcoilex}
\end{figure}
\paragraph{Experiments.}
Let us look now at how view similarity functions can be learned in
the case of 3D paperclips. As in the previous section, we consider
the single view generalization problem and apply it to the problem of
paperclip recognition. During a training phase, the experiments used
a collection of 200 paperclips, generated according to the procedure
described in the previous section. The procedure used for generating
the paperclips implies the prior distribution $P ( B ) = P (
T_{\omega} )$, and the training set is a sample from this
distribution. For training, the system chooses one of those paperclips
$\omega$ at random and generates two different views, a training view
$T_{\omega}$, and a target view $B$. Then, it picks a second
paperclip $\omega' \neq \omega$ at random and generates a view $B'$.
The pair $( B, T_{\omega} )$ is then a training example for the
condition $S = 1$, and the pair $( B', T_{\omega} )$ is a training
example for the condition $S = 0$. Generating a number of these pairs,
we obtain a training set for a Bayesian classifier $\tilde{P} ( S | B,
T_{} )$.
For testing, the experiment was carried out using novel
paperclips--paperclips not found in the training set of 200
paperclips. We could test by generating a model base of some number
of objects and then performing nearest neighbor classification; we
will do that below on the COIL-100 database of real images. However, that
introduces another unnecessary parameter into the evaluation, the size
of the model base. Therefore, here, we reduce the recognition
problem to a forced-choice experiment. In such a forced-choice
experiment, we generate test samples analogous to training samples and
measure the error rate of the system on being able to distinguish $(
B, T_{\omega} )$ from $( B', T_{\omega} )$. This is also a common
paradigm used in psychophysical experiments. An example of such a
forced choice experiment can be seen in Figure~\ref{figexamples}; the
image at the left is the training view $T_\omega$, and the two images
on the right correspond to $B$ and $B'$ (not necessarily in that
order). Views were encoded using the three feature types described
in the previous section; for location features, rotations were
chosen from $\{\pm 45^\circ\}$.
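The forced-choice error measurement can be sketched as follows; `score` is a placeholder for the learned view similarity function, and the scalar "views" are illustrative only.

```python
def forced_choice_error(examples, score):
    """Forced-choice evaluation as described: for each (T, B, B') triple,
    the system should assign the higher score P(S=1 | ., T) to the view B
    that actually derives from the same object as T."""
    wrong = sum(1 for t, b, b_prime in examples
                if score(b, t) <= score(b_prime, t))
    return wrong / len(examples)

# toy check with scalar "views" and a distance-based score
score = lambda v, t: -abs(v - t)
err = forced_choice_error([(0.0, 0.1, 2.0), (1.0, 3.0, 1.2)], score)
```

This paradigm avoids introducing the model-base size as an extra evaluation parameter.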
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
& \multicolumn{3}{c|}{Error Rate} \\
\hline
& Angles & Locations & Feature Map\\
\hline
2D Similarity & 19.9\% & 8.4\% & 32\% \\
\hline
View Similarity & 10.9\% & 0.38\% & 7.9\% \\
\hline
\end{tabular}
\end{center}
These results show a substantial improvement of view-similarity
functions over 2D similarity on single view generalization to novel
objects. Note that many traditional recognition methods, like linear
combinations of views or model-based recognition, cannot even be
applied to this case because the observer is only given a single
training view for each novel object.
\section{Experiments with COIL-100}
The experiments in the previous sections were all carried out on
simulated 3D paperclip objects--a widely used test case in the
literature. However, real-world images might show considerably more
variation and hence make the learning of view generalization functions
hard or impossible from reasonable numbers of training images.
To test whether view similarity methods are applicable to real images,
experiments were carried out on the COIL-100 database
\cite{nene96columbia}. Furthermore, the eigenspace method used in
\cite{nayar96realtime} was implemented as a control.
The COIL-100 database contains color images representing views of
objects separated by $5^\circ$ rotation around the vertical axis.
Even simple nearest neighbor classification methods perform nearly
perfectly given that sampling and color input, so using the full
database as training examples is not a very hard test of the
ability to generalize to new views based on shape.
To test for the ability to generalize to viewpoints that differ
substantially from the training view based on shape alone, the
database was preprocessed to remove color and absolute intensity
information, and only a coarser sampling of viewpoints was used.
Images were converted to grayscale and gradient features were
extracted, as shown in Figure~\ref{figcoilex}. Training was carried
out on views from the first 70 objects in the database. The methods
were tested on views from the remaining 30 objects of the database.
For each test, only collections of views whose viewpoints were spaced
apart by multiples of $30^\circ$ (12 per object) were used.
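The preprocessing step can be sketched as follows; the exact feature extraction used in the experiments may differ in detail, and the test image here is a toy stand-in.

```python
import numpy as np

def gradient_feature_map(rgb):
    """Rough sketch of the preprocessing described above: drop color by
    averaging channels, then keep only the gradient magnitude so that
    absolute intensity is removed as well."""
    gray = rgb.mean(axis=2)        # grayscale image
    gy, gx = np.gradient(gray)     # finite-difference gradients
    return np.hypot(gx, gy)        # gradient magnitude map

# toy image with a vertical step edge
img = np.zeros((32, 32, 3))
img[:, 16:, :] = 1.0
fmap = gradient_feature_map(img)
```

By construction the map responds to edges but is invariant to a global intensity shift.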
The question addressed by these experiments on the COIL-100 database
is whether it is possible to learn view generalization functions that
are capable of any kind of generalization at all. Note that the view
similarity model had no prior knowledge incorporated into it at all,
not even Euclidean distance. Without effective training, the view
similarity function performs at chance level, an error rate of 96.7\%.
Any performance better than that means that the view similarity model
successfully generalized at least to some degree from the 70 training
objects to the 30 previously unseen test objects. Error rates for
this recognition problem are shown in the following table (measured
for 2160 test views):
\begin{center}
\begin{tabular}{|c|c|}
\hline
& Error Rate\\
\hline
Euclidean Distance & 40.0\% \\
\hline
Eigenspace & 26.1\% \\
\hline
View Similarity & 20.3\% \\
\hline
\end{tabular}
\end{center}
As expected, the eigenspace method results in strong improvements over
a Euclidean Distance classifier. The view similarity approach
with an MLP model of $P(S|B,T_\omega)$ and five hidden units, results
in an additional decrease of the error rate of nearly six percent, showing
not only that significant generalization has taken place between
different object models, but that even given a very small training set
of 70 objects, the method actually outperforms an established approach
to object recognition.\footnote{Of course, even better
performance can be achieved by hardcoding additional prior knowledge
about shape and object similarity into the recognition method (e.g.,
\cite{belongie01shape}). Achieving competitive performance with such
methods would either require encoding additional prior knowledge about
shape similarity in the numerical model of the view similarity function,
or simply using a much larger training set to allow the observer to
learn those regularities directly.}
\section{Discussion}
This paper has introduced the notions of view generalization and view
similarity functions. We have seen that knowledge of these functions
allows an observer to recognize novel objects from a set of training
views in a Bayes-optimal (minimum classification error) way.
By expressing eigenspace and linear combination of view methods in the
framework of view generalization functions, the paper has demonstrated
that fast and compact view generalization functions exist that are at
least as good as commonly used methods for object recognition.
Furthermore, the paper has given a procedure for constructing the
Bayes optimal blurring for matching, a Bayesian version of the
geometric blur method in \cite{berg01}, and shown such blurring
methods to be first order approximations to the view generalization
function.
The paper also reported experiments on the recognition of simulated 3D
paperclips, as well as the recognition of real objects from the
COIL-100 image database of real 3D objects. In the case of
paperclips, a set of 200 training objects sufficed to reduce the error
rate on single view generalization severalfold compared to 2D view
similarity. And in the case of the COIL-100 database, the use of view
similarity cut the recognition error rate in half compared to image
based similarity. This is also one of the first demonstrations of learning
single view 3D generalization for novel objects without requiring
membership in a special object class.
Both the theoretical arguments and the experiments presented in this
paper were only designed to show that view generalization approaches
are feasible. We would have expected learning of view generalization
functions to require a large number of training objects. But
experimental results surpassed expectations and show that view
generalization and view similarity functions that can show significant
amounts of generalization (and actually outperform eigenspace methods)
to arbitrary previously unseen objects are learnable from very modest
numbers of training examples (70 and 200).
Future work has to address a number of practical and engineering
issues.
The experiments in this paper demonstrated single-view generalization.
This was perhaps the more interesting case to address first since few
other methods for 3D object recognition are even capable of performing
meaningful 3D generalization from a single view of an unknown 3D
object. The extension of this to multi-view generalization requires
some additional tricks; in particular, instead of learning
$P(S=1|B,T_{\omega,1},\ldots,T_{\omega,r})$, it turns out to be
desirable instead to learn $P(S=1|B,f(T_{\omega,1},\ldots,T_{\omega,r}))$
for a function $f$ that ``summarizes'' the views in a way that makes
it easier to learn the view similarity function.
The statistical models used in the experiments in this paper
(empirical distributions and multilayer perceptrons) incorporated no
prior knowledge about objects or shape similarity. Work on
appearance-based 3D object recognition under 2D transformations (e.g.,
\cite{belongie01shape}, among many others) shows that
systems based on hardcoding knowledge about transformations and shape
similarity into view similarity measures can by themselves achieve a
significant ability to generalize across different 3D views. Such
techniques can be combined with the adaptive view generalization
approaches presented in this paper. If such hybrid systems are
constructed carefully, they will perform no worse than the underlying
systems using hardcoded similarity measures, but have the potential
to improve their performance adaptively. Demonstrating this
also remains for a future paper.
And while it is interesting that view similarity and view
generalization methods can already learn some generalization from as
few as 70 images, training on much larger datasets is clearly
desirable. After all, we are trying to approximate a similarity
measure that performs Bayes-optimal recognition over the entire
distribution of possible 3D shapes. Fortunately, it is easy to
generate large amounts of training data without manual labeling from
video sequences, by taking advantage of the fact that video is often
composed of scenes within which individual objects undergo motion
relative to the camera; frames from such scenes provide training
samples for $P(S=1|B,T_\omega)$, while frames from different scenes
can be used as training samples for $P(S=0|B,T_\omega)$.
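The proposed mining of training pairs from video can be sketched as follows; scene segmentation is assumed to be given, and the frame labels are illustrative only.

```python
def pairs_from_video(frames, scene_ids):
    """Sketch of the data-mining idea above: frame pairs from the same
    scene serve as S = 1 training samples, pairs from different scenes as
    S = 0 samples. Scene segmentation itself is assumed to be given."""
    positives, negatives = [], []
    for i in range(len(frames)):
        for j in range(i + 1, len(frames)):
            pair = (frames[i], frames[j])
            (positives if scene_ids[i] == scene_ids[j] else negatives).append(pair)
    return positives, negatives

# two scenes of two frames each
pos, neg = pairs_from_video(["f0", "f1", "f2", "f3"], [0, 0, 1, 1])
```

No manual labeling is needed: the scene structure of the video supplies the $S$ labels.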
\bibliographystyle{plain}
\section{Introduction}
Graphs and digraphs are without loops or parallel edges. Given a hypergraph $H=(V,E)$, where $E\subseteq\Pscr(V)$ and $\Pscr(V)$ is the power-set of $V$, we will call the elements of $V$ {\em vertices} and those of $E$ {\em hyperedges}; $n:=|V|$. A {\em cover} is a family $\Cscr\subseteq E$ such that $\displaystyle \cup \Cscr := \cup_{C\in \Cscr}C=V$. We suppose that $E$ is a cover. The minimum number of hyperedges in a cover is denoted by $\rho:=\rho(H)$. The {\em hereditary closure} of $H$ is $H^h=(V,E^h)$, where $E^h:=\{X\subseteq e: e\in E\}$, and $H$ is {\em hereditary} if $H^h=H$.
In this paper we study {\em hereditary} hypergraphs (sometimes also called independence systems or simplicial complexes in the literature). Hyperedges of cardinality $1$ will be called {\em singletons}, and hyperedges of cardinality $2$ are called {\em edges}. Deleting a vertex $v$ of the hypergraph $H$ results in the hypergraph $H-v=(V\setminus \{v\}, \{e\in E: v\notin e\} )$. For hereditary hypergraphs this is the same as deleting $v$ from all hyperedges. As for coloring, a minimum cover of a hereditary hypergraph can be supposed to be a {\em partition} of $V$, and {\em we will suppose this}! Indeed, a vertex contained in several hyperedges of a cover can be deleted from all but one of them. This assumption is of primary importance, since the edges and the singletons play a major role in such partitions.
If $H$ is a hereditary hypergraph, we have $\rho(H)-1\le \rho(H-v)\le \rho(H)$; if the first inequality is satisfied with equality for all $v\in V$, we say that $H$ is {\em critical}.
Given a hypergraph $H=(V,E)$, denote by $E_2$ the set of its edges and let $H_2:=(V,E_2)$. The {\em components} of $H$ are defined as those of $(H^h)_2$. These form a partition of $V$ and correspond to the usual hypergraph components: $H$ is {\em connected} if $(H^h)_2$ is connected. Abusing terminology, the vertex-set of a component will also be called a component.
The maximum size of a matching in a graph $G$ is denoted by $\nu(G)$.
We prove Gallai's ingenious, simple lemma \cite{gallai:factorCritical} for self-containedness:
\begin{lem*}
If $G=(V,E)$ is a connected graph, and $\nu(G-v)=\nu (G)$ for all $v\in V$, then $\nu(G)=\frac{n-1}2$.
\end{lem*}
\proof Suppose for a contradiction that $M$ is a maximum matching and $u\ne v\in V$ are not covered by $M$. Let the distance of $u$ and $v$ be minimum among all maximum matchings and two of their uncovered vertices. Let $P\subseteq E(G)$ be a shortest path between $u$ and $v$. Clearly, $|P|\ge 2$, otherwise the edge $uv$ could be added to $M$, contradicting the maximality of $M$.
Let $u'$ be the neighbor of $u$ on $P$, and $M'$ a maximum matching of $G-u'$. The symmetric difference $D$ of $M$ and $M'$ is the disjoint union of paths and circuits whose edges alternate between $M$ and $M'$. If $u$ and $u'$ are not in the same component of $D$, then interchanging the edges of $M$ and $M'$ in the component of $u'$, we obtain a maximum matching under which neither $u$ nor $u'$ is covered, leading to the same contradiction as before.
On the other hand, if they are in the same component, the same interchange of the edges leads to a maximum matching that leaves $u'$ and $v$ uncovered, contradicting the minimum choice of the distance between $u$ and $v$.
\qed
Gallai's theorem on color-critical graphs \cite{gallai:colorCritical} is a beautiful statement but its original proof was rather complicated, essentially more difficult than the above lemma on factor-critical graphs \cite{gallai:factorCritical}. Stehl\'\i k \cite{STCC} gave a simpler proof. We show here that the generalization to hereditary hypergraphs can be shortly reduced to Gallai's Lemma (Section~\ref{sec:main}), in addition giving rise to a wide range of known and new examples (Section~\ref{sec:a})\footnote{The Theorem below and its proof have been included in a more complex framework of an unpublished manuscript \cite{AM}. Several occurrences of old and recent, direct special cases of hereditary hypergraphs make it useful to provide an exclusive, short presentation of this general theorem, with some examples of hereditary hypergraphs of interest.}, to algorithms, and clarifications of their complexity issues.
\section{Theorem and Proof}\label{sec:main}
\begin{thm*
In a connected, hereditary, critical hypergraph $\rho\le \frac{n+1}{2}$. Furthermore, either the inequality is strict and there is a minimum cover without singleton, or the equality holds, and there are minimum covers with only edges and exactly one singleton that can be any vertex.
\end{thm*}
\proof For $n\le 1$ the statement is obvious, so suppose $H$ is critical, $n\ge 2$. Then for all $v\in V$ there exists a minimum cover containing $\{v\}$: indeed, adding $\{v\}$ to a minimum cover of $H-v$, we get a minimum cover of $H$. Consider a minimum cover of $H-v$ (partitioning $V\setminus \{v\}$), {\em maximizing $C_v:=\cup\,\Cscr_v$, where $\Cscr_v$ is the set of its non-singleton elements}. Clearly, $\rho= |\Cscr_v|+ |V\setminus C_v|$.
\medskip\noindent
{\bf Claim~1}. For all $u, v\in V$ and each component $C\subseteq V$ of $\Cscr_u\cup\Cscr_v:\, |C\cap C_u|=|C\cap C_v|.$
\smallskip
Indeed, if $k_u$ and $k_v$ are the number of hyperedges in this component of $\Cscr_u$ and $\Cscr_v$ respectively, then $k_u + |C\setminus C_u|= k_v + |C\setminus C_v|$ for if say $k_u + |C\setminus C_u| > k_v + |C\setminus C_v|$, then
in the minimum cover consisting of $\Cscr_u$ and $|V\setminus C_u|$ singletons,
replace the hyperedges in $C$, that is, $k_u$ hyperedges of $\Cscr_u$ and $|C\setminus C_u|$ singletons,
by the $k_v$ hyperedges of $\Cscr_v$ in $C$, and $|C\setminus C_v|$ singletons, leading to a cover of size
\[\rho+ k_v - k_u + |C\setminus C_v| - |C\setminus C_u| < \rho,\]
a contradiction. But then the proven equality implies that the same replacement -- in either direction -- of the hyperedges leads to a minimum cover, increasing the size of $C_u$ if say $|C\cap C_u|<|C\cap C_v|$, and this contradiction with the choice of $C_u$ proves the claim.
\medskip\noindent
{\bf Claim~2}. If each minimum cover of $H$ contains a singleton, then $\Cscr_v\subseteq E_2$ for all $v\in V$.
\smallskip
\smallskip
Let $u\in C_v$ be arbitrary, and let us prove that $|e_u|=2$ for the hyperedge $e_u$, $u\in e_u\in\Cscr_v$. Since $u\in C_v\setminus C_u$, by Claim~1, the component $C$ of $\Cscr_u\cup\Cscr_v$ containing $u$ also contains a vertex $v_0\in C_u\setminus C_v$. Let $P$ be a shortest path between $v_0$ and $u$ in $\Cscr_u\cup\Cscr_v$ (in the connected component $C$). Let $v_0, v_1,v_2\ldots$ be the vertices of $P\subseteq E$, in fact $P\subseteq E_2$, in this order, necessarily alternating between subsets of hyperedges in $\Cscr_u$ and subsets of hyperedges in $\Cscr_v$. We prove by induction on $|P|$ {\em the assertion that the latter subsets (of hyperedges in $\Cscr_v$) are in fact in $\Cscr_v$}:
Note first that $\{v_1,v_2\}\in\Cscr_v$, because if it was a proper subset of a hyperedge $e\in\Cscr_v$, $|e|\ge 3$, then replacing $\{v_0\}$ and $e$ in the minimum cover $\Cscr_v\cup\{\{v'\}: v'\in V\setminus C_v\}$ by $\{v_0,v_1\}$ and $e\setminus \{v_1\}$, we get a minimum cover, where the hyperedges of size at least two cover $C_v\cup \{v_0\}$, contradicting the definition of $C_v$, provided $v\ne v_0$. If $v=v_0$, $C_{v'}:=C_v\cup \{v_0\}$ can occur in Claim~2 choosing any $v'\in V\setminus (C_v\cup \{v_0\})$; $v'$ exists, since $V\setminus (C_v\cup \{v_0\})\ne\emptyset$ because of the condition of Claim~2.
This proves the assertion for $|P|=2$.
Let $\Cscr_{v_2}:=(\Cscr_v \setminus \{v_1,v_2\}) \cup \{v_0,v_1\}$, and $P':=P-\{v_1,v_2\}$; $\Cscr_{v_2}$ is a minimum cover of $H-v_2$ maximizing the union of non-singletons, and $|P'|<|P|$. Now the induction hypothesis finishes the proof of the assertion and of Claim~2.
\medskip
To finish the proof note first that a minimum cover without singleton implies $\rho\le\frac n2$ and we are done.
Otherwise, Claim~2 can be applied, and $\rho=\frac{|C_v|}2+ |V\setminus C_v|$ follows for all $v\in V$. This formula also shows that a larger matching $\Cscr'_v$ would provide a smaller cover. So $\Cscr_v$ is a maximum matching of $H_2$ and does not cover $v$, so $\nu(H_2-v)=\nu(H_2)$ for all $v\in V$. The connectivity of $H$ means by definition that $H_2$ is connected, so the conditions of Gallai's Lemma are satisfied for $H_2$: $H_2$ is factor-critical, and $\{v\}$ $(v\in V)$ with a perfect matching of $H_2-v$ provide a cover of size $1+\frac{n-1}{2}=\frac{n+1}{2}$.
\qed
Let us restate the inequality of the Theorem so that it directly contains the formulation of \cite{gallai:colorCritical}:
\begin{cor}\label{cor:Gallai}
A hereditary hypergraph with $n\le 2(\rho -1)$ is either not critical, or not connected.
\end{cor}
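The equality case of the Theorem is illustrated by a classical construction; the following example is added here for concreteness and is easily checked against the statement above.

```latex
\medskip\noindent
{\bf Example.} Let $H$ consist of the cliques of the odd cycle $C_{2k+1}$
($k\ge 1$): its vertices, its edges, and $\emptyset$. Then $H$ is
hereditary, and $H_2=C_{2k+1}$ is connected and factor-critical. A minimum
cover partitions $V$ into edges and singletons, so
$\rho(H)=k+1=\frac{n+1}{2}$; deleting any vertex leaves a path with a
perfect matching, so $\rho(H-v)=k$ for every $v\in V$, that is, $H$ is
critical. The Theorem thus holds with equality, and the unique singleton
of a minimum cover can indeed be any vertex.
```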
\section{Examples, Algorithms and Conclusions}\label{sec:a}
In this section we show some examples applying the results to particular hypergraphs. Any hereditary hypergraph is an example, so we cannot seek for completeness, but we try to show how the specialization works. An important surprise is that it turned out that the role of larger hyperedges is secondary, {\em $H_2$ plays the main role: the covers appearing in the Theorem consist only of edges; in the corollaries the components and connectivity depend only on $H_2$.}
\begin{cor}\label{cor:concrete}
Let $H=(V,E)$ be a hereditary hypergraph with $|V|\le 2(\rho(H) -1)$. Then
either there exists $v\in V$ so that $\rho(H-v)=\rho(H)$,
or $H$ is not connected, that is, there exists a partition $\{V_1, V_2\}$ of $V$ so that $\{v_1,v_2\}\notin E$ for all $v_1\in V_1$, $v_2\in V_2$. \qed
\end{cor}
\noindent
{\bf 3.1 Hereditary hypergraphs from graphs}
\smallskip
Let $G=(V(G),E(G))$ be either an undirected graph or a digraph, the context always determines the current meaning, and we define hereditary hypergraphs on $V(G)$. The more deeply the hypergraphs are related to $G$, the more interesting the results.
Fix a (not necessarily finite) set $\Gscr$ of (di)graphs and for each (di)graph $G$, let $H(G,\Gscr):=(V(G), F)$, where
\[F:=\{U\subseteq V(G): \hbox{$U$ induces in $G$ a graph without any induced subgraph in $\Gscr$}\}.\]
When are the Theorem or its corollaries meaningful or even interesting for $H(G,\Gscr)$?
If neither of the $2$-vertex graphs are in $\Gscr$, hypergraphs $H(G,\Gscr)$ are connected for every graph $G$, and our Theorem and its corollaries are trivial. On $2$ vertices there are two undirected graphs: one without, and one with an edge between the two vertices. If the only graph of $\Gscr$ on two vertices is the edge-less graph, $H(G,\Gscr)$ consists of cliques of $G$;
if it is the edge on two vertices, $H(G,\Gscr)$ consists of stable sets of $G$.
In turn, according to Corollary~\ref{cor:concrete}, in the former case the disconnectivity of $H(G,\Gscr)$ means the disconnectivity of $G$, and in the latter case it means the disconnectivity of the complement of $G$. In these cases, the Theorem specializes to Gallai's theorem.
It is easy to see that in these cases the only possibility to add to $\Gscr$ more graphs is to add a clique (or stable set) of given size. Then for some $k\in\mathbb{N}$, $H(G,\Gscr)$ is the family of cliques or stable sets of size at most $k$ on $V$. For $k\ge 2$ the Theorem applies without change and it is then about coloring with at most $k$ vertices of each color.
\medskip\noindent
{\bf 3.2 Hereditary hypergraphs from digraphs}
\smallskip
Similarly, for digraphs one of the subgraphs on two vertices has to be excluded: there are now three digraphs on two vertices: with or without an arc as in the undirected case, or with an arc in both directions ($2$-cycle). If $\Gscr$ contains only the latter, we also do not get anything new: keeping only arcs in both directions as an undirected edge we reduce the problem to Gallai's colorings in undirected graphs. However, if there are some other graphs in $\Gscr$ we have three interesting special cases: cliques, stable sets (Gallai), a third case we discuss below as also cases from multigraphs.
\begin{cor}\label{cor:graphs]}
Let $G$ be a graph, $\Gscr$ a set of graphs, $H:=H(G,\Gscr)$, and $|V(G)|\le 2(\rho(H) -1)$. Then
either there exists $v\in V$ so that $\rho(H-v)=\rho(H)$,
or $H$ is not connected, that is, there exists a partition $\{V_1, V_2\}$ of $V$ so that $\{v_1,v_2\}$ induces a graph in $\Gscr$ for all $v_1\in V_1$, $v_2\in V_2$.\qed
\end{cor}
As argued before the corollary, the interesting cases are when {\em the unique graph on two vertices of $\Gscr$ is an edge, a non-edge or a $2$-cycle}, and in the last case there are many possibilities to exclude further induced subgraphs. For instance, we can include in $\Gscr$ $3$-cycles and all graphs on $4$ vertices having $4$-cycles. Actually an arbitrary subset of graphs having directed cycles, or the set of all such graphs can be contained in $\Gscr$, and will not make any change in the relevant critical graphs (as compared to including only $3$- and $4$-cycles, no larger hyperedge plays a role). Corollary~\ref{cor:graphs]} holds, and partitioning into hyperedges of $H(G,\Gscr)$ means then partitioning into vertex-sets that induce acyclic digraphs: this is ``digraph coloring'',
for which Corollary~\ref{cor:graphs]} was asked in \cite{BS}. (The Theorem has then already been proved, see footnote~1. Stehl\'\i k \cite{M} missed its specialization to acyclic induced subgraphs, and answered \cite{BS} using the Edmonds-Gallai structure theorem.)
\medskip\noindent
{\bf 3.3 More Examples}
\smallskip
Clearly, common hyperedges of an arbitrary number of hereditary hypergraphs on the same set of vertices form a hereditary hypergraph. If all of them arise as stable sets of graphs, the intersection will be just the stable-set-hypergraph of the graph which is the union of the considered graphs. However, if the considered hypergraphs arise in different ways, the intersections may provide nontrivial new cases, if the role of the edges is kept in mind.
More generally, a {\em stable-set} in a (not necessarily hereditary) hypergraph $H=(V,E)$ is a set $S\subseteq V$ so that $S$ does not contain any $e\in E$. (Independent sets of matroids are those that do not contain a hyperedge from the circuit-hypergraph.) The family $\Sscr$ of all stable sets is obviously a hereditary family; $S\in \Sscr$, if and only if $V\setminus S$ is a {\em transversal} or blocker of the hyperedges; the family of transversals is an upper hereditary hypergraph, another source of examples:
In {\em upper hereditary} hypergraphs the supersets of hyperedges are also hyperedges.
The {\em dual} of $H=(V,E)$ is $H^d:=(V,E^d)$, where $E^d:=\{V\setminus e: e\in E\}$. The dual of a hereditary hypergraph is upper hereditary and vice versa, generating more examples; $(H^d)^d=H$. Each example of upper hereditary hypergraphs provides an example of hereditary hypergraphs, and vice versa. Upper hereditary hypergraphs arise for instance from vertex-sets of graphs that do contain one of a fixed set of graphs as induced subgraphs; being non-planar or non-bipartite is a special case.
In multi (di)graphs $G$ with, for instance, edge-multiplicities $z:E(G)\rightarrow \Rset$ and $\lambda\in\Rset$, we may consider the hereditary hypergraph $\{U\subseteq V(G): \hbox{sum of $z(e)$ on the edges induced by $U$}\le \lambda\}$, when Corollary~\ref{cor:graphs]} is again meaningful. The upper bound can be replaced by any monotone function of $z(e)$ and the graph, combined with vertex multiplicities or edge- and vertex-colored graphs, $\ldots$
\noindent
{\bf 3.4 Algorithms and Complexity}
\smallskip
The focus of the examples of the previous subsections was the Theorem. Algorithmic and complexity questions are less ``choosy'' and become meaningful and nontrivial for more examples.
Once in a while questions about particular, critical hereditary hypergraphs are raised anew, sometimes as open problems like in \cite{BS} about partitioning the vertex-set into acyclic digraphs. How can the NP-hard covering participate in well-characterizing minmax theorems? The discussion of this question is beyond the possibilities of this note.
This will be laid out in forthcoming papers; here we mention the key to the solution only briefly:
It is NP-hard to compute $\rho(H)$, and in hereditary hypergraphs $H$ it is not easier, since taking the hereditary closure does not affect $\rho$!
The covering problem for the hereditary closure of $3$-uniform hypergraphs contains the $3$-dimensional matching problem \cite{GJ}, and is therefore NP-hard even if the hyperedges of the hereditary hypergraph are given explicitly, and their number is polynomial in the input size. Indeed, $\rho = n/3$ if and only if there exists a partition into triangles.
However, the maximum $\mu$ of the number of vertices covered by non-singletons in a cover of a hereditary hypergraph can be computed in polynomial time, and the vertex-weighted generalization can also be solved! It can be seen that this maximum does not change if we write here ``minimum cover'' instead of ``cover''. This allows one to handle, with well-characterizing minmax theorems and in polynomial time, some aspects of minimum covers \cite{T}, for which results of Bouchet \cite{B}, Cornu\'ejols, Hartvigsen and Pulleyblank \cite{CP}, \cite{CHP} play an enlightening role.
\medskip\noindent
{\bf 3.5 Conclusion}:
We tried to show by the Theorem and multiple examples how results on graph colorings may be extended to covers in hypergraphs. We continue this work with minmax and structure theorems, develop algorithms at the general level of hereditary hypergraphs, and show more applications and connections between various problems \cite{T}, \cite{AM}.
We hope the reader will also have the reflex of using hereditary hypergraphs when a new special case is coming up!
\section{Introduction}
\label{intro}
One of the main goals of the study of relativistic heavy-ion collisions is to study the deconfined matter, known as Quark-Gluon Plasma (QGP), which is expected to form at large densities. It has been suggested that the transition from the hadronic to the QGP state can be treated by percolation theory \cite{celik}. The formulation of the percolation problem is concerned with elementary geometrical objects placed on a random d-dimensional lattice. The objects have a well defined connectivity radius $\lambda$, and two objects can communicate if the distance between them is less than $\lambda$. Several objects can form a cluster of communication. At a certain density of the objects an infinite cluster appears which spans the entire system. This is defined by the dimensionless percolation density parameter $\xi$ \cite{isich}. Percolation theory has been applied to several areas ranging from clustering in spin systems to the formation of galaxies. Figure 1 shows the transition from a disconnected to a connected system at high densities.
\begin{figure}
\centering
\resizebox{0.70\textwidth}{!}{
\includegraphics{Fig1.eps}}
\caption{Left: Disconnected system. Right: Connected system}
\label{perc1}
\end{figure}
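The onset of clustering sketched in Figure 1 can be simulated with a toy continuum-percolation model; this illustration (with arbitrary parameters) is ours and is not part of the analysis below.

```python
import random

def largest_cluster(n_discs, radius, seed=0):
    """Toy continuum-percolation sketch: scatter n_discs disc centres in a
    unit square and let two discs communicate when their centres are
    closer than the connectivity radius. Returns the size of the largest
    cluster, found with union-find."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n_discs)]
    parent = list(range(n_discs))

    def find(i):                      # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n_discs):
        for j in range(i + 1, n_discs):
            dx = pts[i][0] - pts[j][0]
            dy = pts[i][1] - pts[j][1]
            if dx * dx + dy * dy < radius * radius:
                parent[find(i)] = find(j)   # merge communicating discs

    sizes = {}
    for i in range(n_discs):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values())

sparse = largest_cluster(50, 0.02)   # low density: mostly isolated discs
dense = largest_cluster(50, 0.5)     # high density: one spanning cluster
```

Raising the connectivity radius (or the disc density) drives the sudden growth of the largest cluster that marks the percolation transition.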
In nuclear collisions there is indeed, as a function of parton density, a sudden onset of large scale color connection. There is a critical density at which the elemental objects form one large cluster, losing their independent existence. Percolation would correspond to the onset of color deconfinement, and it may be a prerequisite for any subsequent QGP formation.
The determination of the EOS of hot, strongly interacting matter is one of the main challenges of strong interaction physics. Recent lattice QCD (LQCD) calculations for the bulk thermodynamic observables, e.g. pressure, energy density, entropy density and for the sound velocity have been reported \cite{bazavov}.
Recently, attention has been focused on the shear viscosity to entropy density ratio $\eta/s$ as a measure of the fluidity \cite{teaney,teaney1,lacey,rom}. The observed temperature averaged $\eta/s$, based on viscous hydrodynamics analyses of RHIC data, are suggestive of a strongly coupled plasma \cite{gul1,larry}.
In this talk a percolation model coupled with thermodynamical relations has been utilized to obtain EOS and the $\eta/s$ from the experimental data.
\section{String Interactions and Percolation}
Multiparticle production at high energies is currently described in terms of color strings stretched between the projectile and target. Hadronizing, these strings produce the observed hadrons. The strings act as color sources of particles through the creation of $q \bar{q}$ pairs from the sea. At low energies only valence quarks of nucleons form strings that then hadronize. The number of strings grows with the energy and with the number of nucleons of the participating nuclei. Color strings may be viewed as small discs in the transverse space filled with the color field created by colliding partons. Particles are produced by the Schwinger mechanism \cite{swinger}. With growing energy and size of the colliding nuclei the number of strings grows, and the strings start to overlap, forming clusters \cite{pajares1,pajares2}. At a critical density a macroscopic cluster appears that marks the percolation phase transition. 2D percolation is a non-thermal second order phase transition,
but in CSPM the Schwinger barrier penetration mechanism for particle production and the fluctuations in the associated string tension due to the strong string interactions make it possible to define a temperature.
Consequently the particle spectrum is ``born'' with a thermal distribution \cite{bialas}.
With an increasing number of strings there is a progression from isolated individual strings to clusters and then to a large cluster which suddenly spans the area. In two dimensional percolation theory the relevant quantity is the dimensionless percolation density parameter given by \cite{pajares1,pajares2}
\begin{equation}
\xi = \frac {N S_{1}}{S_{N}}
\end{equation}
where N is the number of strings formed in the collision, $S_{1}$ is the transverse area of a single string, and $S_{N}$ is the transverse nuclear overlap area. The critical cluster which spans $S_{N}$ appears for
$\xi_{c} \ge$ 1.2 \cite{satz1}. As $\xi$ increases, the fraction of $S_{N}$ covered by this spanning cluster increases.
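The geometric content of $\xi$ can be illustrated numerically: for $N$ discs of area $S_{1}$ thrown at random positions in a region of area $S_{N}$, the expected fraction of the region covered by at least one disc is $1-e^{-\xi}$ (the relation used later for the effective number of sources). The following Python sketch (illustrative only; the disc count, grid size and seed are arbitrary choices, not values from the data) checks this with a brute-force raster on a unit square with periodic boundaries.

```python
import math
import random

def covered_fraction(n_discs, radius, grid=256, seed=7):
    """Fraction of the unit square (periodic boundaries) covered by
    n_discs randomly placed discs, estimated on a grid x grid raster."""
    rng = random.Random(seed)
    covered = [[False] * grid for _ in range(grid)]
    reach = int(radius * grid) + 1     # cells a disc can possibly touch
    r2 = radius * radius
    for _ in range(n_discs):
        cx, cy = rng.random(), rng.random()
        ci, cj = int(cx * grid), int(cy * grid)
        for di in range(-reach, reach + 1):
            for dj in range(-reach, reach + 1):
                i = (ci + di) % grid          # periodic wrap
                j = (cj + dj) % grid
                x, y = (i + 0.5) / grid, (j + 0.5) / grid
                dx = abs(x - cx); dx = min(dx, 1.0 - dx)
                dy = abs(y - cy); dy = min(dy, 1.0 - dy)
                if dx * dx + dy * dy < r2:
                    covered[i][j] = True
    return sum(row.count(True) for row in covered) / (grid * grid)

# Choose the disc radius so that xi = N * S1 / S_N equals the critical 1.2.
N, xi = 2000, 1.2
radius = math.sqrt(xi / (N * math.pi))    # S1 = pi r^2, S_N = 1
print(covered_fraction(N, radius), 1.0 - math.exp(-xi))
```

The two printed numbers should agree to within a few percent; the residual spread comes from the finite number of discs and the raster resolution.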
The percolation theory governs the geometrical pattern of string clustering. It requires some dynamics to describe the interaction of several overlapping strings. It is assumed that a cluster behaves as a single string with a higher color field corresponding to the vectorial sum of the color charge of each individual string. Knowing the color charge, one can calculate the multiplicity $\mu_{n}$ and the mean transverse momentum squared $\langle p_{t}^{2} \rangle$ of the particles produced by a cluster of strings. One finds \cite{pajares1,pajares2}
\begin{equation}
\mu_{n}= \sqrt {\frac {nS_{n}}{S_{1}}}\mu_{1}
\end{equation}
\begin{equation}
\langle p_{t}^{2} \rangle_{n}= \sqrt {\frac {nS_{1}}{S_{n}}}\,\langle p_{t}^{2} \rangle_{1}
\end{equation}
where $\mu_{1}$ and $\langle p_{t}^{2}\rangle_{1}$ are the mean multiplicity and average transverse momentum squared of particles produced by a single string. In the saturation limit, where all the strings overlap into a single cluster that approximately occupies the whole interaction area, one obtains the following universal scaling law
\begin{equation}
\langle p_{t}^{2}\rangle_{n} = \frac {S_{1}}{S_{n}}\frac {\mu_{n}}{\mu_{1}}\langle p_{t}^{2}\rangle_{1}
\end{equation}
In the limit of high density one obtains
\begin{equation}
\langle \frac {nS_{1}}{S_{n}} \rangle = 1/F^{2}(\xi)
\end{equation}
with
\begin{equation}
F(\xi) = \sqrt {\frac {1-e^{-\xi}}{\xi}}
\end{equation}
being the color suppression factor. It follows that
\begin{equation}
\mu = N F(\xi)\mu_{1}, \qquad \langle p_{t}^{2}\rangle = \frac {1}{F(\xi)}\langle p_{t}^{2}\rangle_{1}
\end{equation}
A similar scaling is found in the Color Glass Condensate (CGC) approach \cite{cgc,perx}. The saturation scale $Q_{s}$ in CGC corresponds to $ {\langle p_{t}^{2} \rangle_{1}}/F(\xi)$ in CSPM.
The net effect of $F(\xi)$ is a reduction in hadron multiplicity and an increase in the average transverse momentum of particles. The CSPM calculations for hadron multiplicities and momentum spectra were found to be in excellent agreement with experiment \cite{diasde,diasde2}.
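The color suppression factor and its effect on multiplicity and transverse momentum, Eqs. (6)-(7), are straightforward to evaluate; a minimal Python sketch (the string count $N$ and the single-string values $\mu_{1}$, $\langle p_{t}^{2}\rangle_{1}$ are placeholders, not fitted values):

```python
import math

def F(xi):
    """Color suppression factor F(xi) = sqrt((1 - exp(-xi)) / xi), Eq. (6)."""
    return math.sqrt((1.0 - math.exp(-xi)) / xi)

def multiplicity(N, xi, mu1):
    """Cluster multiplicity mu = N F(xi) mu1, suppressed for dense systems."""
    return N * F(xi) * mu1

def mean_pt2(xi, pt2_1):
    """Average transverse momentum squared <pt^2> = <pt^2>_1 / F(xi)."""
    return pt2_1 / F(xi)

# F -> 1 in the dilute (pp-like) limit and decreases with string density:
for xi in (0.01, 1.2, 2.88, 10.56):
    print(xi, F(xi))
```

The printed values make the trend explicit: $F \to 1$ for isolated strings, while at the $\xi$ values quoted later for RHIC and LHC it drops well below one, suppressing multiplicity and hardening the $p_t$ spectrum.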
\section{Color Suppression Factor $F(\xi)$}
The suppression factor is determined by comparing the $\it pp$ and A+A transverse momentum spectra.
To evaluate the initial value of $F(\xi)$ from data for Au+Au collisions, a parameterization of $\it pp$ events at 200 GeV is used to compute the $p_{t}$ distribution \cite{nucleo,levente,eos}
\begin{equation}
dN_{c}/dp_{t}^{2} = a/(p_{0}+p_{t})^{\alpha}
\end{equation}
where a is the normalization factor, and $p_{0}$ and $\alpha$ are parameters used to fit the data. This parameterization can also be used for nucleus-nucleus collisions to take into account the interactions of the strings \cite{pajares2}
\begin{equation}
dN_{c}/dp_{t}^{2} = \frac {a'}{{(p_{0}{\sqrt {F(\xi_{pp})/F(\xi)}}+p_{t})}^{\alpha}}
\end{equation}
In pp collisions $F(\xi) \sim$ 1 at these energies due to the low overlap probability.
Figure 2 shows a plot of $F(\xi)$ as a function of the charged particle multiplicity per unit transverse area, $\frac {dN_{c}}{d\eta}/S_{N}$, for Au+Au collisions at 200 GeV at various centralities for the STAR data \cite{nucleo,levente,eos}.
$F(\xi)$ decreases in going from peripheral to central collisions. The corresponding $\xi$ value, obtained from Eq. (6), increases with centrality. These experimental values of $F(\xi)$ are used to obtain the temperature, the EOS and $\eta/s$.
\begin{figure}[thbp]
\centering
\vspace*{-0.5cm}
\resizebox{0.55\textwidth}{!}{
\includegraphics{Fig2.eps}
}
\vspace*{-0.1cm}
\caption{ Color suppression factor $F(\xi)$ as a function of $\frac {dN_{c}}{d\eta}/S_{N}(fm^{-2})$.
The solid red circles are for Au+Au collisions at 200 GeV(STAR data) \cite{nucleo}. The error is smaller than the size of the symbol. The line is fit to the STAR data. The solid blue squares are for Pb+Pb at 2.76 TeV.}
\end{figure}
\section{Temperature}
The connection between the measured $\xi$ and the temperature $T(\xi)$ involves the Schwinger mechanism (SM) for particle production.
The Schwinger distribution for massless particles is expressed in terms of $p_{t}^{2}$ \cite{swinger,wong}
\begin{equation}
dn/d{p_{t}^{2}} \sim e^{-\pi p_{t}^{2}/x^{2}}
\end{equation}
where the average value of the string tension is $\langle x^{2} \rangle$. The tension of the macroscopic cluster fluctuates around its mean value because the chromo-electric field is not constant.
The origin of the string fluctuation is related to the stochastic picture of
the QCD vacuum. Since the average value of the color field strength must
vanish, it can not be constant but changes randomly from point to point \cite{bialas}. Such fluctuations lead to a Gaussian distribution of the string tension for the cluster, which gives rise to the thermal distribution \cite{bialas,eos}
\begin{equation}
dn/d{p_{t}^{2}} \sim e^{(-p_{t} \sqrt {\frac {2\pi}{\langle x^{2} \rangle}} )}
\end{equation}
with $\langle x^{2} \rangle$ = $\pi \langle p_{t}^{2} \rangle_{1}/F(\xi)$.
The temperature is expressed as
\begin{equation}
T(\xi) = {\sqrt {\frac {\langle p_{t}^{2}\rangle_{1}}{ 2 F(\xi)}}}
\end{equation}
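Equation (12) fixes the ratio of temperatures at two centralities entirely in terms of the measured $F(\xi)$ values. As a numerical cross-check, the short Python sketch below uses the $\xi$ values 2.88 and 10.56 and the LHC temperature of 262.2 MeV quoted later in the text as inputs; $\langle p_{t}^{2}\rangle_{1}$ itself is then inferred, not taken from the text.

```python
import math

def F(xi):
    """Color suppression factor, Eq. (6)."""
    return math.sqrt((1.0 - math.exp(-xi)) / xi)

def temperature(xi, pt2_1):
    """T(xi) = sqrt(<pt^2>_1 / (2 F(xi))), Eq. (12)."""
    return math.sqrt(pt2_1 / (2.0 * F(xi)))

# The ratio of temperatures is independent of <pt^2>_1:
xi_rhic, xi_lhc = 2.88, 10.56
ratio = math.sqrt(F(xi_rhic) / F(xi_lhc))
print(ratio)  # ~1.36, i.e. the LHC temperature is ~35% above RHIC

# Fixing <pt^2>_1 (in MeV^2) from the LHC point T = 262.2 MeV at xi = 10.56:
pt2_1 = 2.0 * F(xi_lhc) * 262.2**2
print(math.sqrt(pt2_1), temperature(xi_rhic, pt2_1))
```

The $\sim$35\% RHIC-to-LHC temperature increase quoted later in the text follows directly from the two measured $\xi$ values.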
\section{Equation of State}
Among the most important and fundamental problems in finite-temperature QCD are the calculation of the bulk properties of hot QCD matter and the characterization of the nature of the QCD phase transition.
The QGP according to CSPM is born in local thermal equilibrium because the temperature is determined at the string level. We use CSPM and thermodynamical relations to calculate the energy density, entropy density and sound velocity (EOS). After the initial temperature $ T > T_{c}$ is reached, the CSPM perfect fluid may expand according to Bjorken boost invariant 1D hydrodynamics \cite{bjorken}
\begin{eqnarray}
\frac {1}{T} \frac {dT}{d\tau} = - C_{s}^{2}/\tau \\
\frac {dT}{d\tau} = \frac {dT}{d\varepsilon} \frac {d\varepsilon}{d\tau} \\
\frac {d\varepsilon}{d\tau} = -T s/\tau \\
s =(1+C_{s}^{2})\frac{\varepsilon}{T}\\
\frac {dT}{d\varepsilon} s = C_{s}^{2}
\end{eqnarray}
where $\varepsilon$ is the energy density, s the entropy density, $\tau$ the proper time, and $C_{s}$ the sound velocity. Above the critical temperature only massless particles are present in CSPM.
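Equation (13) implies the familiar power-law cooling $T(\tau) = T_{i}(\tau_{i}/\tau)^{C_{s}^{2}}$; for an ideal massless gas with $C_{s}^{2} = 1/3$ the temperature halves when the proper time grows by a factor of 8. A small Python sketch integrating Eq. (13) numerically (the initial values are arbitrary illustrations, not fitted quantities):

```python
def bjorken_T(T_i, tau_i, tau_f, cs2, steps=20000):
    """Integrate dT/dtau = -cs2 * T / tau (Eq. 13) with simple Euler steps."""
    T, tau = T_i, tau_i
    dtau = (tau_f - tau_i) / steps
    for _ in range(steps):
        T += -cs2 * T / tau * dtau
        tau += dtau
    return T

T_i, tau_i = 200.0, 1.0   # MeV and fm/c; illustrative values only
T_num = bjorken_T(T_i, tau_i, 8.0 * tau_i, 1.0 / 3.0)
T_exact = T_i * (tau_i / (8.0 * tau_i)) ** (1.0 / 3.0)
print(T_num, T_exact)  # both close to 100 MeV
```

A constant $C_{s}^{2}$ is of course an idealization; in CSPM the sound velocity itself depends on $\xi$ through the thermodynamic relations above.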
The initial energy density $\varepsilon_{i}$ above $T_{c}$ is given by \cite{bjorken}
\begin{equation}
\varepsilon_{i}= \frac {3}{2}\frac { {\frac {dN_{c}}{dy}}\langle m_{t}\rangle}{S_{n} \tau_{pro}}
\end{equation}
To evaluate $\varepsilon_{i}$ we use the charged pion multiplicity $dN_{c}/{dy}$ at midrapidity and the $S_{n}$ values from STAR for 0-10\% central Au+Au collisions at $\sqrt{s_{NN}}=$200 GeV \cite{levente}. The factor 3/2 in Eq. (17) accounts for the neutral pions. The average transverse mass is $\langle m_{t}\rangle = \sqrt {\langle p_{t}\rangle^2 + m_{0}^2}$, where $\langle p_{t}\rangle$ is the average transverse momentum of the pions and $m_{0}$ is the pion mass.
The dynamics of massless particle production has been studied in two-dimensional quantum electrodynamics (QED2).
QED2 can be scaled from electrodynamics to quantum chromodynamics using the ratio of the coupling constants \cite{wong}. The production time $\tau_{pro}$ for a boson (gluon) is \cite{swinger}.
\begin{equation}
\tau_{pro} = \frac {2.405\hbar}{\langle m_{t}\rangle}
\end{equation}
From the measured values of $\xi$ and $\varepsilon$
it is found that $\varepsilon$ is proportional to $\xi$ over the range
$1.2 < \xi < 2.88$: $\varepsilon_{i}= 0.788\,\xi$ GeV/$fm^{3}$ \cite{nucleo,levente}. Figure 3 shows a plot of the energy density as a function of $\xi$.
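The ingredients of Eqs. (17)-(18) combine in a few lines. In the sketch below the multiplicity, overlap area and mean transverse mass are hypothetical placeholders (the text quotes only the resulting linear fit $\varepsilon_{i} = 0.788\,\xi$ GeV/fm$^{3}$, which is used here as a cross-check):

```python
import math

HBARC = 0.19733  # GeV * fm, converts hbar/<m_t> to fm/c

def tau_pro(m_t):
    """Gluon production time, Eq. (18): tau = 2.405 hbar / <m_t>, in fm/c."""
    return 2.405 * HBARC / m_t

def eps_initial(dNdy, m_t, S_n):
    """Initial energy density, Eq. (17), in GeV/fm^3.
    The factor 3/2 accounts for the neutral pions."""
    return 1.5 * dNdy * m_t / (S_n * tau_pro(m_t))

def eps_fit(xi):
    """Linear parameterization quoted in the text (valid for 1.2 < xi < 2.88)."""
    return 0.788 * xi

# Hypothetical central Au+Au inputs (NOT the STAR numbers):
print(eps_initial(dNdy=300.0, m_t=0.40, S_n=120.0))
print(eps_fit(2.88))  # energy density at the most central RHIC xi value
```

With the paper's fit, the most central RHIC bin ($\xi = 2.88$) corresponds to roughly 2.3 GeV/fm$^{3}$, well above typical deconfinement estimates.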
\begin{figure}[thbp]
\centering
\vspace*{-0.2cm}
\resizebox{0.55\textwidth}{!}{
\includegraphics{Fig3.eps}
}
\vspace*{-0.2cm}
\caption{Energy density $\epsilon$ as a function of the percolation density parameter $\xi$. The value for LHC energy is shown as blue square.}
\end{figure}
Figure 4 shows a plot of $\varepsilon/T^{4}$ as a function of T/$T_{c}$. The lattice QCD results are from HotQCD Collaboration \cite{bazavov}. CSPM is in excellent agreement with the LQCD calculations in the phase transition region for $T/T_{c} \leq $1.5. The sound velocity and entropy density results are also in good agreement with LQCD calculations \cite{eos}.
\begin{figure}[thbp]
\centering
\vspace*{-0.2cm}
\resizebox{0.55\textwidth}{!}{
\includegraphics{Fig4.eps}
}
\vspace*{-0.2cm}
\caption{ $\varepsilon/T^{4}$ as a function of T/$T_{c}$.The lattice QCD calculation is shown as dotted blue line \cite{bazavov}. }
\end{figure}
\section{Shear Viscosity}
The relativistic kinetic theory relation for the shear viscosity over entropy density ratio, $\eta/s$ is given by \cite{gul1,gul2}
\begin{equation}
\frac {\eta}{s} \simeq \frac {T \lambda_{mfp}}{5}
\end{equation}
where T is the temperature and $\lambda_{mfp}$ is the mean free path
$\lambda_{mfp} \sim \frac {1}{(n\sigma_{tr})}$.
$\it n $ is the number density of an ideal gas of quarks and gluons and $\sigma_{tr}$ the transport cross section for these constituents. In CSPM the number density is given by the effective number of sources per unit volume
\begin{equation}
n = \frac {N_{sources}}{S_{N}L}
\end{equation}
L is the longitudinal extension of the source, L = 1 $\it fm$. The effective number of sources is given by the total area occupied by the strings divided by the effective area of a string, $S_{1}F(\xi)$ \cite{eos2}.
\begin{equation}
N_{sources} = \frac {(1-e^{-\xi}) S_{N}}{S_{1} F(\xi)}
\end{equation}
$\eta/s$ is obtained from $\xi$ and the temperature
\begin{equation}
\frac {\eta}{s} ={\frac {TL}{5(1-e^{-\xi})}}
\end{equation}
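Equation (22) is written in natural units; to evaluate it with $T$ in MeV and $L$ in fm, a factor $\hbar c$ is needed to make the ratio dimensionless (an implementation detail not spelled out in the text). The Python sketch below reproduces the quoted ratios; the RHIC temperature of $\approx$193 MeV is inferred from the LHC value of 262.2 MeV quoted later and the $\sim$35\% difference, so treat it as an assumption:

```python
import math

HBARC = 197.33  # MeV * fm

def eta_over_s(T_mev, xi, L_fm=1.0):
    """eta/s = T L / (5 (1 - exp(-xi))), Eq. (22), made dimensionless."""
    return T_mev * L_fm / (5.0 * (1.0 - math.exp(-xi)) * HBARC)

print(eta_over_s(193.0, 2.88))    # close to the quoted 0.204 (RHIC)
print(eta_over_s(262.2, 10.56))   # close to the quoted 0.260 (LHC)
```

At fixed $L$, the density dependence saturates quickly ($1-e^{-\xi} \to 1$), so above the transition $\eta/s$ grows essentially linearly with $T$, which is the behavior extrapolated in Fig. 5.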
Figure 5 shows a plot of $\eta/s$ as a function of T/$T_{c}$.
The lower bound shown in Fig. 5 is given by AdS/CFT \cite{kss}.
The theoretical estimates of $\eta/s$ obtained for both the weakly (wQGP) and strongly (sQGP) coupled QCD plasma are shown in Fig. 5 \cite{gul1}. It is seen that at the RHIC top energy $\eta/s$ is close to the sQGP. Even at the LHC energy it follows the trend of the sQGP. By extrapolating the $\eta/s$ CSPM values to higher temperatures it is clear that $\eta/s$ could approach the weak coupling limit near $T/T_{c}$ $\sim$ 5.8.
\begin{figure}[thbp]
\centering
\vspace*{-0.2cm}
\resizebox{0.55\textwidth}{!}{
\includegraphics{Fig5.eps}
}
\vspace*{-0.1cm}
\caption{$\eta/s$ as a function of T/$T_{c}$. Au+Au at 200 GeV for 0-10$\%$ centrality is shown as solid black square. wQGP and sQGP values are shown as dotted blue and green lines respectively \cite{gul1}. The estimated value for Pb+Pb at 2.76 TeV for 0-5$\%$ centrality is shown as a solid blue square. The red dotted line represents the extrapolation to higher temperatures from the CSPM. The hadron gas value for $\eta/s$ $\sim$ 0.7 is shown as solid black circle at T/$T_{c} \sim $0.88 \cite{meson}.}
\end{figure}
\section{RHIC to LHC}
The STAR results for Au+Au collisions at $\sqrt{s_{NN}}$ = 200 GeV have been extrapolated to estimate the F$(\xi)$ values for Pb+Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV. The STAR points in Fig. 2 have the functional form
\begin{equation}
F(\xi) = exp [-0.165 -0.094 \frac {dN_{c}}{d\eta}/S_{N}]
\end{equation}
Recently, the ALICE experiment at the LHC published charged-particle multiplicity density data as a function of centrality in Pb+Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV \cite{alice1}. The ALICE data points are shown in Fig. 2. For central 0-5$\%$ Pb+Pb collisions $\xi$ = 10.56, as compared to $\xi$ = 2.88 for central Au+Au collisions at 200 GeV. The extrapolated value of $\varepsilon$ for central Pb+Pb collisions at 2.76 TeV is 8.32 $GeV/fm^3$, as shown in Fig. 3. For Pb+Pb collisions the temperature is $\sim$ 262.2 MeV for 0-5$\%$ centrality, which is $\sim$ 35 $\%$ higher than the temperature from Au+Au collisions \cite{eos}.
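Extracting $\xi$ from a measured multiplicity density proceeds in two steps: evaluate $F$ from the fit, Eq. (23), then invert $F(\xi) = \sqrt{(1-e^{-\xi})/\xi}$ numerically, since it has no closed-form inverse. A Python sketch (the bisection bracket and iteration count are implementation choices, not values from the text):

```python
import math

def F(xi):
    """Color suppression factor, Eq. (6)."""
    return math.sqrt((1.0 - math.exp(-xi)) / xi)

def F_fit(dndeta_per_area):
    """STAR-based parameterization, Eq. (23); argument in fm^-2."""
    return math.exp(-0.165 - 0.094 * dndeta_per_area)

def xi_from_F(f_target, lo=1e-6, hi=100.0, iters=200):
    """Invert F(xi) by bisection; F is monotonically decreasing in xi."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if F(mid) > f_target:
            lo = mid      # F too large -> xi must be larger
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round trip: recover the LHC value xi = 10.56 from its own F:
print(xi_from_F(F(10.56)))
```

The same inversion applied to the RHIC fit values yields the $\xi$ range $1.2 < \xi < 2.88$ used throughout the text.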
One way to verify the validity of extrapolation from RHIC to LHC energy is to compare the energy density expressed as $\varepsilon/T^4$ with the available lattice QCD results. The LHC point is shown in Fig. 4. It is observed that at LHC energy the CSPM results are in excellent agreement with the lattice QCD results. The lattice and CSPM results are available for T/$T_{c} < 2$.
The estimated value of $\eta/s$ for Pb+Pb is shown in Fig. 5 at T/$T_{c}$ = 1.57. These results from the STAR and ALICE data show that the $\eta/s$ values are 2.5 and 3.3 times the KSS bound \cite{kss}, respectively.
\section{Summary}
Two central objectives in the experimental characterization of the QGP are the Equation Of State (EOS) and the shear viscosity to entropy ratio $\eta/s$.
The percolation analysis of the color sources applied to STAR data at RHIC provides a compelling argument that the QGP is formed in central Au+Au collisions at $\sqrt{s_{NN}}=$ 200 GeV. It also suggests that the QGP is produced in all soft high energy, high multiplicity collisions when the string density exceeds the percolation transition. We found $\eta/s$ = 0.204 $\pm 0.020$ at $T/T_{c}$ = 1.15 (RHIC) and $\eta/s$ = 0.260 $\pm 0.020$ at $T/T_{c}$ = 1.57 (LHC).
In the phase transition region $\eta/s$ is 2-3 times the conjectured quantum limit from RHIC to LHC energies.
The whole picture is consistent with the formation of a fluid with a low shear viscosity to entropy density ratio.
Thus, clustering and percolation can provide a conceptual basis for the QCD phase diagram which is more general than symmetry breaking \cite{satzx}.
\section{Acknowledgement}
This research was supported by the Office of Nuclear Physics within the U.S. Department of Energy Office of Science under Grant No. DE-FG02-88ER40412.
\section{Introduction}
In recent years, artificial intelligence (AI) has made great progress thanks to a variety of deep learning (DL) algorithms and applications.
In the scientific research community, AI research has been extended from the algorithm/application level to the system level.
More and more attention is being paid to system design problems in AI computing.
From the point of view of the ACM Turing Lectures \cite{hennessy2019new,hinton19,lecun19}, the evolution of computer architecture is driven by AI's revolution. Moreover, many interdisciplinary research topics are being inspired by the combination of AI and computer architectures/systems, such as the recently established Conference on Machine Learning and Systems (MLSys) \cite{mlsys}.
Although AI technologies have been well developed in the scope of scientific research and industrial applications, more attention should be paid to education in order to satisfy the urgent requirements of industrial technology development.
Nowadays the curriculum system in most colleges and universities is relatively old and lacks novelty compared with the rapid development of technical stacks and social applications.
If most students are trained without sufficient system capabilities, they cannot meet the full-stack talent requirements of the AI industry.
Hence, personnel training for the AI community cannot omit the computing system hierarchy. Meanwhile, education in computer systems has to take AI-inspired architectures and systems into account.
In summary, the gap between education and industry has to be bridged by incorporating computing architectures/systems together with AI algorithms/applications.
\begin{table}
\caption{Related courses in ML architecture and system.}\label{tab:courses}
\begin{threeparttable}[t]
\begin{tabular}{|l|p{170pt}|}
\hline
\rowcolor{LightCyan}
Affiliation & Course Name \bigstrut \\ \hline
UIUC & Machine Learning in Silicon \footnotemark[1] \bigstrut \\ \hline
Beihang Univ. & Intelligent Computing Architectures \bigstrut \\ \hline
MIT & Hardware Architecture for Deep Learning \footnotemark[2] \bigstrut \\ \hline
Stanford & Hardware Accelerators for Machine Learning \footnotemark[3] \bigstrut \\ \hline
Univ. of CAS & Intelligent Computing Systems \footnotemark[4] \bigstrut \\ \hline
UC Berkeley & AI-Sys \footnotemark[5] \bigstrut \\ \hline
U. Washington & System for ML \footnotemark[6] \bigstrut \\ \hline
Georgia Tech. & Hardware Acceleration for Machine Learning \footnotemark[7] \bigstrut \\ \hline
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[1] \url{https://courses.grainger.illinois.edu/ece598ns/fa2017/}
\item[2] \url{https://www.eecs.mit.edu/academics-admissions/academic-information/subject-updates-spring-2019/6s0826888}
\item[3] \url{https://cs217.stanford.edu/}
\item[4] \url{http://novel.ict.ac.cn/aics/}
\item[5] \url{https://ucbrise.github.io/cs294-ai-sys-sp19/}
\item[6] \url{https://dlsys.cs.washington.edu/}
\item[7] \url{http://tusharkrishna.ece.gatech.edu/teaching/hml\_s19/}
\end{tablenotes}
\end{threeparttable}
\end{table}
There have been many courses on AI algorithms and applications, as well as on traditional computer architecture and systems.
The related resourceful course experiments have a very good training effect.
However, there has been insufficient and untimely exploration in teaching AI-inspired systems and architectures.
If AI education only focuses on the higher level topics, students will lack the vision and capability to understand system performance issues in AI computing.
Therefore, with the goal of cultivating talents across the full stack of AI, and starting from cultivating students' computer system capabilities, we introduce the demands of computing systems in the AI era into curriculum education. The main goals of this course exploration are listed as follows:
\begin{itemize}[leftmargin=8pt,topsep=4pt,itemsep=0pt,parsep=0pt]
\item \textit{Bridge the education gap between AI and computing architecture/system:} broaden the students' research vision and towards full-stack development skills in AI-related techniques.
\item \textit{Introduce emerging research topics on AI-inspired architecture/system into college courses:} motivate students to learn and study computer architecture/system instead of only focusing on the algorithms/applications in AI era.
\end{itemize}
\section{Background and Related Courses}
The Turing Lecture \cite{hennessy2019new} points out that domain-specific computing architecture is an important trend in future customized computing.
Especially in recent years, the design of high-performance and energy-efficient domain-specific architectures has become a very popular research direction due to the rapid growth of AI applications and their urgent requirements for computing \cite{cong2010customizable}.
As shown in Figure \ref{fig:platform}, different kinds of computing platforms are compared in terms of adaptivity (AD), performance (PE), power efficiency (PO), programmability (PR) and scalability (SC) \cite{mahajan2016tabla,park2017scale}.
General computing platforms, such as CPUs or GPUs, have good advantages in adaptivity, scalability and programmability, but their performance and power efficiency are greatly limited for specialized computing purposes.
Application-specific platforms, such as NPUs, can achieve very high performance and power efficiency, but are very poor in adaptivity and scalability.
Reconfigurable computing platforms, such as FPGAs, strike a good balance among performance, efficiency and adaptivity; they do not have an obvious advantage in any single dimension, but they are convenient for rapid design iteration.
In addition, there are some emerging NVM-based neuromorphic computing platforms, which have good advantages in scalability, performance and efficiency, but also have obvious limitations in adaptivity and programmability due to the lack of available development tools and methodologies.
As a result, it is difficult to find a computing platform that is excellent in all dimensions for domain-customized architecture and chip design.
In this course, the idea of domain-specific architecture design is introduced and some development methodologies are provided to improve students' design skills.
In terms of platform choice, FPGAs are adopted in this course for rapid development iteration.
\begin{figure}[t]
\centering
\includegraphics[width=0.42\textwidth]{figs/platform.pdf}
\caption{Comparison among different design platforms \cite{mahajan2016tabla,park2017scale}, where AD means Adaptivity, PE means Performance, PO means Power Efficiency, PR means Programmability, SC means Scalability.}
\label{fig:platform}
\end{figure}
Recently, some relevant courses have appeared in many universities as shown in Table \ref{tab:courses}. Most of these courses are focusing on AI-inspired architecture and system education, which have received great attention and good effects.
Prof. Naresh R. Shanbhag in UIUC offers a course named \textit{Machine Learning in Silicon} since Fall 2017. This course aims to teach students how to design machine learning algorithms in chips directly, which mainly focuses on circuit and system implementations \cite{kang2019energy}.
Prof. Vivienne Sze and Joel Emer in MIT offer a course named \textit{Hardware Architecture for Deep Learning} since Fall 2017. This course is more likely a tutorial to present their researches in DL architectures \cite{sze2017efficient}.
Prof. Ardavan Pedram and Kunle Olukotun in Stanford Univ. offer a course named \textit{Hardware Accelerators for Machine Learning} since Fall 2018. This course introduces many latest hardware accelerators for machine learning applications.
Dr. Yunji Chen in Univ. of CAS has offered a course named \textit{Intelligent Computing Systems} since Fall 2018. This course mainly promotes their research achievements on the DianNao architectures as well as their Cambricon chips and ecosystems \cite{cambrican}. They have published a relevant textbook and provided resourceful experimental materials. This course has been promoted to several universities in China and has a great influence on the education community.
Since 2019, there have been several courses focused on ML system level.
Prof. Ion Stoica and Joseph E. Gonzalez in UC Berkeley offered a course named \textit{AI-Sys}. It describes the latest trends in systems designs to better support the next generation of AI applications, and applications of AI to optimize the architecture and the performance of systems.
Prof. Tianqi Chen in Univ. of Washington offered a course named \textit{System for ML} since Spring 2019. This course is designed to fill the gap in how to build and optimize these deep learning specified systems.
Prof. Tushar Krishna in Georgia Tech. has offered a course named \textit{Hardware Acceleration for Machine Learning} since Spring 2019. This course presents various development resources that can enable researchers and practitioners to quickly get started on DNN accelerator design, and highlights important benchmarking metrics and design considerations that should be used for evaluating the rapidly growing number of DNN hardware platforms being proposed in academia and industry.
In summary, the courses listed above provide very useful early explorations by introducing circuit/chip/architecture/system level considerations on how to build and optimize deep learning computing systems.
Our course, named \textit{Intelligent Computing Architectures}, has mainly focused on architecture exploration for deep learning tasks since Fall 2017.
Most attention is paid to recent advances in architectures towards the goal of enabling efficient processing of DNNs.
This course requires students to have a wide scope of knowledge and strong practical capabilities, which makes it very challenging to finish the experiments and projects.
However, the students' system development capabilities are greatly improved once they have gone through the intensive training in this course.
\section{Course Design and Implementation}
\subsection{Course Prerequisites}
\begin{figure}[t]
\centering
\includegraphics[width=0.49\textwidth]{figs/requiredcourses.pdf}
\caption{Course prerequisites and following courses.}
\label{fig:prerequisites}
\end{figure}
This course presents how to design domain-specific hardware architectures, which requires students to learn and practice both hardware and software knowledge. The prerequisites include three parts: computer architecture and systems, hardware design, and algorithm-related background. For the system part, students are first required to learn computer architecture to understand how to build instruction sets and optimize hardware implementations. Some background in compilers is also required for mapping ML algorithms to instructions. For the hardware design part, students are required to learn digital circuits \& systems and the Verilog HDL language, as well as FPGA development techniques for hardware implementations. For the algorithm-related background, students are required to know some ML basics and to be proficient in programming techniques, including C, C++ and Python. With these backgrounds, students in this course can learn how to design ML architectures and implement them on an FPGA platform with the support of dataflow instruction compilation. Based on these achievements, students can then learn how to design and optimize ML systems with architectural level considerations.
\subsection{Teaching Objectives}
In this course, some emerging research topics are introduced into the classroom, so it involves much basic knowledge and many skills, especially software-hardware co-design methodologies. The course requires students to have very strong learning and practical capabilities.
Based on several typical applications in the field of intelligent computing, this course introduces the design and implementation methodologies for mapping software/algorithms to hardware architectures. From this course, students can understand the working principles of intelligent computing from the point of view of computer architecture. Through the course's experimental training, students can obtain the skills of designing and implementing domain-specific hardware for intelligent algorithms. The course requires students to learn how to independently design and prototype several typical intelligent computing accelerators. Their engineering perspective on heterogeneous system design can thus be built, extending their further research vision. The main teaching objectives of this course are listed as follows:
\begin{enumerate}[leftmargin=10pt,topsep=5pt]
\item Know some typical intelligent computing algorithms, including machine learning, data mining, etc. Know some basic problems and techniques in hardware acceleration area.
\item Master some design and optimization methodologies of digital systems. Know how to map intelligent algorithms into hardware architectures.
\item Familiar with development tools, such as Verilog, Modelsim, Vivado, etc. Familiar with deep learning frameworks, such as Caffe, PyTorch, TensorFlow, etc.
\item Know how to access and utilize technical documentations.
\item Master embedded system design and optimization techniques.
\item Obtain modelling, design and analysis skills of intelligent computing systems, as well as the innovation capabilities in solving practical problems through acquired knowledge and skills.
\end{enumerate}
\subsection{Teaching Contents and Practices}
\begin{table}[b]
\caption{Teaching contents and class hours in this course, while total teaching hours are 32 hours in 16 weeks (2 hours for each week, total 16 weeks).}\label{tab:coursecontents}
\begin{center}
\begin{tabular}{|p{180pt}|p{38pt}|}
\hline
\rowcolor{LightCyan}
Teaching Contents & Schedule \bigstrut \\ \hline
\textbf{Introduction to intelligent computing architectures:} the motivations to learn this course, the teaching scope and required learning capabilities. & \makecell{2 Hours \\ Week 1} \bigstrut \\ \hline
\textbf{Mainstream computing models and methods:} DNNs and graph computing methods, especially targeted for hardware design issues. & \makecell{4 Hours \\ Week 2-3} \bigstrut \\ \hline
\textbf{Domain-specified architecture design methodologies:} design principles and development flow from algorithms to hardware, targeted on FPGA or ASIC implementations. & \makecell{4 Hours \\ Week 4-5} \bigstrut \\ \hline
\textbf{Compiling or mapping methodologies:} system modeling, functionalities partitioning, dataflow mapping and scheduling, performance and efficiency optimization techniques. & \makecell{6 Hours \\ Week 6-8} \bigstrut \\ \hline
\textbf{DNN accelerators design:} algorithms evaluations, dataflow architectures, hardware design with Verilog/HLS implementations targeted on Xilinx Zynq FPGA platforms. & \makecell{10 Hours \\ Week 9-13} \bigstrut \\ \hline
\textbf{ASIC implementation flow:} a brief introduction including behavior description, logic synthesis, physical implantation, system verification, timing analysis and optimization, etc. & \makecell{4 Hours \\ Week 14-15} \bigstrut \\ \hline
\textbf{Application perspectives of hardware accelerators:} an outlook of practical applications in big data analytic, DL, CV, robotics, etc. & \makecell{2 Hours \\ Week 16} \bigstrut \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[b]
\caption{Labs and projects design in this course.}\label{tab:labsandprojects}
\begin{center}
\begin{tabular}{|c|p{7cm}|}
\hline
\rowcolor{LightCyan}
No. & Lab or Project Details (\ul{\textit{Week Schedule}}) \bigstrut \\ \hline
Lab 1 & \textbf{Deep neural networks:} learn and write C++/Python code for some deep neural networks to understand the learning principles and network structures. \ul{\textit{Week 3}} \bigstrut \\ \hline
Lab 2 & \textbf{Deep learning frameworks:} run deep neural networks with Caffe/TensorFlow/PyTorch frameworks, evaluate the model accuracy and performance. \ul{\textit{Week 4}} \bigstrut \\ \hline
Lab 3 & \textbf{Zynq FPGA development:} learn Zynq FPGA development flows, including Vivado design suite, IP usage, AXI bus protocol, PS and PL co-design, etc. \ul{\textit{Week 5-6}} \bigstrut \\ \hline
Lab 4 & \textbf{MAC module design with Verilog:} implement Verilog modules for matrix multiplication and accumulation, validate on PL part of Zynq FPGA. \ul{\textit{Week 7-8}} \bigstrut \\ \hline
Prj 1 & \textbf{MAC design on Zynq FPGA:} implement MAC modules on PL part and perform validation with the controlling of PS part in Zynq FPGA. \ul{\textit{Week 9-10}} \bigstrut \\ \hline
Prj 2 & \textbf{LeNet design on Zynq FPGA:} implement the data path and controller to run LeNet on Zynq FPGA, only on-chip BRAMs are exploited, where the PS part is in charge of data input/output and dataflow controlling, the PL part is in charge of computations. \ul{\textit{Week 11-13}} \bigstrut \\ \hline
Prj 3 & \textbf{VGGNet design on Zynq FPGA:} VGGNet implementation which is similar to LeNet, where the off-chip DRAM is also utilized for buffering the intermediate data as well as the on-chip BRAMs. \ul{\textit{Week 14-16}} \bigstrut \\ \hline
\end{tabular}
\end{center}
\end{table}
As listed in Table \ref{tab:coursecontents}, this course consists of 32 teaching hours and includes three parts: basic preparation, accelerator design, and further exploration.
In the basic preparation part (6 hours), intelligent computing architectures are introduced first, together with the mainstream computing models and methods. This motivates students to learn the course and acquaints them with the research and technical backgrounds of related communities.
Meanwhile, some DNNs are described in the class and students evaluate some DNN models to understand their computational dataflow.
In the accelerator design part (20 hours), system architectures and dataflow mappings are presented in detail.
The basic FPGA design flows are introduced for development in Zynq platforms. Some resourceful Zynq materials, including documentations, design examples, application notes, etc., are provided to students for learning and developing.
Then, some popular architectures of DNN accelerators are discussed, drawn from the latest research papers or chips of the recent six years, such as DianNao\cite{chen2014diannao}, Eyeriss\cite{chen2016eyeriss}, TPU\cite{jouppi2017datacenter}, etc.
Furthermore, a simple DNN architecture is provided as a reference for our labs and projects. Several typical operations are defined as instructions or pseudo-instructions so that our architecture can run whole DNN networks.
Meanwhile, dataflow compiling or mapping techniques are introduced. With the predefined ISA, DNN dataflow is translated as instructions which could run in the designed architectures. Some optimization techniques could be involved in this process to improve the accelerator's performance or energy efficiency.
In the further explorations part (6 hours), ASIC implementation flow is introduced and accelerators' applications are discussed.
Beyond the implementations on FPGA platforms, an overview of ASIC design flows can broaden the students' research and technical horizons.
Further perspectives on DNN accelerator applications can also motivate students to explore potential possibilities in this emerging research direction.
Besides the teaching contents in class, several labs and projects are provided, as shown in Table \ref{tab:labsandprojects}. Labs 1-4 help students understand DNN algorithms and frameworks as well as FPGA development skills. Based on these four labs, students are required to build a DNN accelerator step by step through Projects 1 to 3. To finish these tasks, students usually need to spend much more time than in class. Lab 1 makes students familiar with frequently used DNN algorithms. Lab 2 aims to evaluate DNNs with deep learning frameworks, which is important for balancing model accuracy and performance/efficiency in architectural explorations. Lab 3 provides a chance to learn Zynq FPGA development skills, with many design examples provided as references. Lab 4 requires students to design a multiply-and-accumulate (MAC) module in Verilog, which should be validated in the PL part of the Zynq FPGA. This should not be difficult for students who have previously taken Verilog and digital system design courses. These four labs aim to make students familiar with DNN fundamentals and hardware design platforms.
These labs and projects are arranged progressively towards a practical DNN accelerator design. The main goal of this course, designing a DNN accelerator on FPGA, is covered by the remaining three projects. In Project 1, the MAC module designed in Lab 4 is integrated with the ARM processor in the PS part as the main controller. Vivado SDK is utilized for software development, and the PS-PL communication mechanisms should be well exploited for software-hardware co-design. Project 2 implements LeNet on the Zynq FPGA, where all computing and data buffering tasks can be finished within on-chip resources. Since the LeNet model size is relatively small (less than 1 Mb), the model weights and intermediate data can be handled with on-chip block RAMs alone. Project 3 is the final project, which targets a VGGNet design on the Zynq FPGA. The VGGNet model size is very large, so most of the weights and intermediate data have to be stored in off-chip DDR memory.
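As a software golden model for the Lab 4 MAC datapath, the expected multiply-and-accumulate behavior can be sketched in Python; a Verilog implementation would be checked against such a reference. The function name and accumulate interface are illustrative assumptions, not part of the course materials:

```python
# Hypothetical golden model for a matrix multiply-and-accumulate (MAC) datapath.
# Partial products are accumulated into `acc`, as a MAC array does over passes.

def mac_matrix(a, b, acc=None):
    """Multiply matrix a (m x k) by b (k x n), accumulating into acc (m x n)."""
    m, k, n = len(a), len(b), len(b[0])
    assert all(len(row) == k for row in a), "inner dimensions must match"
    if acc is None:
        acc = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            s = acc[i][j]
            for p in range(k):
                s += a[i][p] * b[p][j]
            acc[i][j] = s
    return acc

# Accumulate the same partial product twice, as two MAC-array passes would.
a1 = [[1, 2], [3, 4]]
b1 = [[5, 6], [7, 8]]
acc = mac_matrix(a1, b1)       # first pass: plain product
acc = mac_matrix(a1, b1, acc)  # second pass: accumulate on top
print(acc)  # [[38, 44], [86, 100]]
```

A Verilog testbench for the Lab 4 module can compare its outputs against such a reference pass by pass.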
\begin{figure}[t]
\centering
\includegraphics[width=0.49\textwidth]{figs/architecture}
\caption{A typical architecture reference for DNN accelerator design, where three factors are mainly considered: data transfer bandwidth, MAC array scale, on-chip buffer size.}
\label{fig:architecture}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.49\textwidth]{figs/pl2ddr.png}
\caption{Data access from PL part to DDR via the PL Interface to PS Memory Subsystem \cite{ds190}.}
\label{fig:pl2ddr}
\end{figure}
A typical architecture is provided as shown in Figure \ref{fig:architecture}, which serves as a simple reference for students to design DNN accelerators. The original data, including network models and images, are pre-stored on an on-board SD card. The PS part in Zynq is in charge of system control, including data input/output, data caching, and computing. The memory system includes the CPU cache, OCM, block RAMs, off-chip DRAM, etc. Data access between PL and PS is maintained by a memory interconnect block as shown in Figure \ref{fig:pl2ddr}. In particular, when the PL part accesses DRAM, the PL interface to the PS memory subsystem must be used. Once system booting is finished, DNN models are loaded into DRAM and partially stored in the unified buffer (in BRAMs), while weights are stored in weight RAMs. The MAC array is the core computation part, performing multiplications and accumulations. After the MAC array computation, a post-processing part computes activation, normalization, pooling, etc.
DNN computation dataflow is compiled into instructions that control typical operations, including computing, data caching, data transfer, etc. Several typical instructions include:
\begin{itemize}[leftmargin=10pt,topsep=5pt]
\item \texttt{Read\_Host\_Memory}: read data from PS to unified buffer.
\item \texttt{Read\_Weights}: read weights from DRAM to the weight FIFO.
\item \texttt{MatrixMultiply/CONV}: convolutional computation.
\item \texttt{Activate}: activation or pooling computation.
\item \texttt{Write\_Host\_Memory}: write results from unified buffer to DRAM.
\end{itemize}
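A minimal behavioral sketch of these pseudo-instructions can clarify the intended dataflow. The operand format and buffer model below are our own illustrative assumptions, not the course's actual encoding:

```python
# Behavioral sketch of the pseudo-ISA: memories are plain Python containers
# standing in for BRAM/DRAM; the operand format is assumed for illustration.

def relu(x):
    return x if x > 0 else 0

def run(program, host_mem):
    unified_buffer, weight_fifo, dram = {}, [], {}
    for op, *args in program:
        if op == "Read_Host_Memory":        # host -> unified buffer
            dst, src = args
            unified_buffer[dst] = host_mem[src]
        elif op == "Read_Weights":          # model weights -> weight FIFO
            weight_fifo.append(args[0])
        elif op == "MatrixMultiply":        # one pass through the MAC array
            dst, src = args
            w, x = weight_fifo.pop(0), unified_buffer[src]
            unified_buffer[dst] = [sum(wi * xi for wi, xi in zip(row, x))
                                   for row in w]
        elif op == "Activate":              # post-processing (ReLU here)
            dst, src = args
            unified_buffer[dst] = [relu(v) for v in unified_buffer[src]]
        elif op == "Write_Host_Memory":     # unified buffer -> DRAM
            dst, src = args
            dram[dst] = unified_buffer[src]
    return dram

prog = [("Read_Host_Memory", "x", "img"),
        ("Read_Weights", [[1, -1], [2, 0]]),
        ("MatrixMultiply", "y", "x"),
        ("Activate", "z", "y"),
        ("Write_Host_Memory", "out", "z")]
print(run(prog, {"img": [3, 5]}))  # {'out': [0, 6]}
```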
\begin{figure}[t]
\centering
\includegraphics[width=0.49\textwidth]{figs/fsm.png}
\caption{Finite state machine description for a typical computing dataflow.}
\label{fig:fsm}
\end{figure}
\begin{table}[t]
\caption{Resources utilization comparison between the Google TPU and our reference Na\"iveTPU design.}\label{tab:naivetpu}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
\rowcolor{LightCyan}
Resources & Google TPU & Our Na\"iveTPU \bigstrut \\ \hline
Matrix Multiply Unit & 256 $\times$ 256 & 32 $\times$ 32 \bigstrut \\ \hline
Accumulators RAM & 4K $\times$ 256 $\times$ 32b & 4K $\times$ 32 $\times$ 32b \bigstrut \\ \hline
Unified Buffer & 96K $\times$ 256 $\times$ 8b & 16K $\times$ 32 $\times$ 8b \bigstrut \\ \hline
\end{tabular}
\end{center}
\end{table}
The DNN computation dataflow is processed in a row-by-row manner, and the finite state machine (FSM) is described in Figure \ref{fig:fsm}. It starts from data preparation, i.e., data have been pre-loaded into the unified buffer or weight RAM. Once the data required for each row's convolution are ready, they are sent to the MAC array for \texttt{CONV} computing. In our course, a typical systolic array is provided for \texttt{CONV} computing. As shown in Table \ref{tab:naivetpu}, our Na\"iveTPU is a reduced version of the Google TPU. The MAC array scale is reduced according to the available logic resources in our Zynq chip. The unified buffer, FIFOs, and RFs are also sized according to the available storage resources in our Zynq chip.
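Since the MAC array is much smaller than typical layer matrices, the dataflow has to be tiled. The loop nest can be sketched in Python (here \texttt{TILE} is reduced from the 32$\times$32 array scale for readability; all names are illustrative):

```python
# Tiled matrix multiply: a matrix larger than the MAC array is processed
# tile by tile, with partial sums kept in the accumulator RAM. TILE = 2
# stands in for the 32x32 array of the NaiveTPU design.

TILE = 2

def matmul_tiled(a, b):
    m, k, n = len(a), len(b), len(b[0])
    acc = [[0] * n for _ in range(m)]        # accumulator RAM
    for i0 in range(0, m, TILE):
        for j0 in range(0, n, TILE):
            for p0 in range(0, k, TILE):     # accumulate over k-tiles
                for i in range(i0, min(i0 + TILE, m)):
                    for j in range(j0, min(j0 + TILE, n)):
                        for p in range(p0, min(p0 + TILE, k)):
                            acc[i][j] += a[i][p] * b[p][j]
    return acc

a = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
b = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(matmul_tiled(a, b))  # [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
```

The tile loop order determines which operands stay resident in the buffers, which is the main performance knob students explore in the projects.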
For course labs and projects, we provide a Zynq FPGA board for each student, as shown in Figure \ref{fig:ax7020}. The board is an AX7020, with an XC7Z020-2CLG400I Zynq chip, including a dual-core Cortex-A9 ARM processor (PS), 4.9 Mb of BRAM, and 8 Gbit of on-board DDR3 SDRAM. For running LeNet, on-chip BRAM is enough for data caching. But for VGGNet, off-chip DRAM is required to store most of the model weights and intermediate data. Students are required to understand how to access DDR from PL via the interface to the PS memory subsystem. Some important demos and reference designs are provided to help students finish their projects, including an SD card access API, a DDR memory access API, an AXI controller, BRAM usage, UART usage, etc. All of these examples are included in the AX7020 suite with comprehensive code examples and documentation.
\begin{figure}[t]
\subfloat[AX7020 board]{\includegraphics[width=0.22\textwidth]{figs/ax7020board.jpg}}
\subfloat[AX7020 diagram]{\includegraphics[width=0.24\textwidth]{figs/ax7020diagram.jpg}}
\caption{The utilized AX7020 Zynq FPGA board \cite{ax7020} in this course.}
\label{fig:ax7020}
\end{figure}
\section{Teaching Practice and Discussion}
In this section, we discuss some key observations acquired from teaching practice in each Fall semester since 2017, which illustrate the efficiency and feasibility of our course in motivating students to study AI-inspired architectures.
This course requires students to spend much time reading documentation and developing systems. These efforts are necessary for a solid understanding of the architectures and FPGA development flows.
From our observations, most students are very interested in this material and willing to spend considerable effort to finish our labs and projects.
All students could finish Labs 1-4 and Project 1 well, since these tasks are fundamental requirements. About 80\% of students could finish the LeNet design project and demonstrate its functions to us.
The remaining 20\% of students usually have some difficulties in the timing optimization and software-hardware co-design stages.
For the VGGNet design, about 60\% of students could demonstrate full functionality. Some of them could even achieve a relatively high performance when evaluated on the ImageNet dataset.
The remaining 40\% of students usually could not handle the complicated FSMs, since there are many data transfer and data reuse strategies in VGGNet computations.
From our observations, the most excellent students (about 10\%) in this course could design a very robust DNN accelerator on the Zynq FPGA, with performance and efficiency very close to the results reported in some early research papers \cite{qiu2016going}.
Meanwhile, a survey of the students from the first two years of this course shows that the top students (about 10\%) chose deep learning accelerators as their research direction for postgraduate study.
This indicates that most students in this course are very interested in our teaching, even though it is very challenging and has many difficulties.
These students have learned a great deal about DNN architectures and acquired strong system development skills in this course.
Moreover, the interest and confidence they gained enable them to contribute to this research field in the future.
\section{Limitations and Future Work}
\textbf{Course Workloads.} From our course practice over the previous three years, we have noticed that students carry a very heavy workload, both in learning new knowledge and in mastering new development skills.
We plan to suggest that students further improve their FPGA development skills and practice more DNN algorithms in their prerequisite courses.
We could then introduce more architectural details of DNN accelerators, and students could try to design more complicated architectures and optimize their performance and efficiency.
These achievements could provide a stronger background for their future research.
\noindent
\textbf{System Capabilities.} This course focuses mostly on architectural design issues and lacks sufficient system-level implementation.
To equip students with full-stack techniques, we would like to offer a new course starting in Fall 2020, named \textit{Intelligence Computing Systems}, to provide more system-level techniques. This new course will focus mostly on compilers that map DNN algorithms onto specified architectures, as well as the related optimization methodologies.
\section{Conclusions}
In this paper, we present a new course focused on intelligent computing architectures. Based on recent research, this course introduces many domain-specific architectures and provides rich experimental materials. The training tasks are very challenging for students, but they are motivated to study and practice deeply.
We believe that such practice helps cultivate students' full-stack system development capabilities in the artificial intelligence era.
\bibliographystyle{unsrt}
\section{Introduction}
The motivation of this paper is to enrich the class of {\it non-elementary amenable} groups.
A group $G$ is amenable if there exists a finitely additive translation-invariant probability measure on all subsets of $G$. This definition was given by John von Neumann, \cite{von-neumann1}, in response to the Banach-Tarski and Hausdorff paradoxes. He singled out a property of groups which forbids paradoxical actions.
The class of {\it elementary amenable groups}, denoted by $EG$, was introduced by Mahlon Day in \cite{day:semigroups}, as the smallest class of groups that contains finite and abelian groups and is closed under taking subgroups, quotients, extensions and directed unions. The fact that the class of amenable groups is closed under these operations was already known to von Neumann, \cite{von-neumann1}, who noted that at that time there was no known amenable group which did not belong to $EG$.
No substantial progress in understanding this class was made until the 1980s, when Chou, \cite{C}, showed that all elementary amenable groups have either polynomial or exponential growth, and Rostislav Grigorchuk, \cite{grigorchuk:milnor_en}, gave an example of a group of intermediate growth. Grigorchuk's group served as a starting point for developing the theory of groups of intermediate growth, all of which are non-elementary amenable. In the same paper, Chou showed that every simple finitely generated infinite group is not elementary amenable. In \cite{JM} it was shown that the topological full group of a Cantor minimal system is amenable. By the results of Matui, \cite{Matui}, this group has a simple and finitely generated commutator subgroup; in particular, it is not elementary amenable. This was the first example of an infinite simple finitely generated amenable group.
Currently there are only two sources of non-elementary amenable groups: groups acting on rooted trees and topological full groups of Cantor minimal systems. In \cite{j-trees}, the author gives a unified approach to non-elementary amenability of groups acting on rooted trees. Here we give more examples of non-elementary amenable groups coming from topological full groups of Cantor minimal systems.
\begin{theorem}
\label{main}
Consider a minimal faithful action of $\mathbb{Z}^d$ on a Cantor set conjugate to the action on a closed $\mathbb{Z}^d$-invariant subset of $\mathsf{X}^{\mathbb{Z}^d}$ for some finite alphabet $\mathsf{X}$. Then the commutator subgroup of the topological full group $[[\mathbb{Z}^d]]$ is finitely generated.
\end{theorem}
We describe an explicit generating set and give a simple direct proof. The subsequent paper~\cite{nek:fullgr} gives a less direct proof of the finite generation of special subgroups $\mathsf{A}(G)$ of the topological full groups in the case of an expansive action. The groups $\mathsf{A}(G)$ are closely related to the derived subgroups of the full groups (in particular, they coincide in the case of actions of abelian groups), but the precise relation is still not well understood.
It was proved in \cite{matui2} that the commutator subgroup of $[[\mathbb{Z}^d]]$ is simple. The topological full groups that correspond to interval exchange transformation groups were studied in \cite{JMMS}. The authors prove that subgroups of {\it rank} equal to 2 are amenable. These groups can be realized as topological full groups of minimal actions of $\mathbb{Z}^2$ on the Cantor set. Therefore, Theorem~\ref{main} in combination with \cite{matui2}, \cite{JMMS} gives more examples of simple finitely generated infinite amenable groups, and thus, by the result of Chou, of non-elementary amenable groups. Matui, \cite{matui2}, showed that $[[T_1, \ldots, T_m]] = [[T_1, \ldots, T_n]]$ implies $m=n$. In particular, this implies that the groups obtained here are different from the ones previously obtained in \cite{JM}.
In the last section we associate a group $\mathcal{P}$ with the Penrose tiling. The main result is
\begin{theorem}
The derived subgroup of $\mathcal{P}$ is simple and finitely generated.
\end{theorem}
It is an open question to decide if the group $\mathcal{P}$ is amenable.
\section{A finite generating set of the derived subgroup of $[[\mathbb{Z}^d]]$}
\begin{lemma}
\label{lem:free}
Let $G$ be an abelian group. If the action of $G$ on a Cantor set $\mathbf{C}$ is minimal and faithful, then it is free.
\end{lemma}
\begin{proof}
Suppose that $g\in G$ is a non-zero element. By faithfulness of the action, there exists $x\in\mathbf{C}$ such that $g(x)\ne x$. Then there exists a neighborhood $U$ of $x$ such that $g(U)\cap U=\emptyset$. Let $y\in\mathbf{C}$ be an arbitrary point. By minimality, there exists $h\in G$ such that $h(y)\in U$. Since $U$ and $g(U)$ are disjoint, the points $h(y)$ and $gh(y)$ are different.
Then $g(y)=h^{-1}gh(y)\ne h^{-1}h(y)=y$. It follows that $g$ has no fixed points in $\mathbf{C}$.
\end{proof}
Let us fix a minimal action of the free abelian group $\mathbb{Z}^d$ on a closed shift-invariant subset $\mathbf{C}\subset\mathsf{X}^{\mathbb{Z}^d}$ of the full shift over a finite alphabet $\mathsf{X}$. Then $\mathbf{C}$ is homeomorphic to the Cantor set. We use the additive notation for the group $\mathbb{Z}^d$. If $w:\mathbb{Z}^d\longrightarrow\mathsf{X}$ is a point of $\mathsf{X}^{\mathbb{Z}^d}$, then its image under the action of $g\in\mathbb{Z}^d$ is defined by the rule
\[g(w)(h)=w(h-g).\]
In other words, elements of $\mathsf{X}^{\mathbb{Z}^d}$ are labelings of the points of $\mathbb{Z}^d$ by elements of $\mathsf{X}$, and the elements $g\in\mathbb{Z}^d$ act by shifting all the labels by $g$. Alternatively, we may imagine the action of $g$ as the shift of the ``origin of coordinates'' in a given sequence $w$ by $-g$.
A \emph{patch} $\pi=(f, P)$ is a finite subset $P\subset\mathbb{Z}^d$ together with a map $f:P\longrightarrow\mathsf{X}$. The set $P$ is called the \emph{support} of the patch. We say that an element $w\in\mathsf{X}^{\mathbb{Z}^d}$ (a \emph{$\mathbb{Z}^d$-sequence}) \emph{contains} the patch $(f, P)$ if $w|_P=f$. The set $\mathcal{W}_\pi$ of all sequences containing a given patch $\pi$ is a clopen subset of $\mathsf{X}^{\mathbb{Z}^d}$ called the \emph{cylindrical set} defined by the patch, and the set of all such clopen subsets forms a basis of topology on $\mathsf{X}^{\mathbb{Z}^d}$, by definition.
We say that two patches $\pi_1=(f_1, P_1)$ and $\pi_2=(f_2, P_2)$ are \emph{compatible} if there exists a sequence $w\in\mathbf{C}$ containing $\pi_1$ and $\pi_2$. In other words, the patches are compatible if the intersection of the associated cylindrical sets is non-empty. If the patches $\pi_1$ and $\pi_2$ are compatible, then their \emph{union} $\pi_1\cup\pi_2$ is the patch $(f, P_1\cup P_2)$, where $f|_{P_1}=f_1$ and $f|_{P_2}=f_2$.
Note that in terms of the cylindrical sets, we have $\mathcal{W}_{\pi_1\cup\pi_2}=\mathcal{W}_{\pi_1}\cap\mathcal{W}_{\pi_2}$.
If $\pi=(f, P)$ is a patch, and $Q$ is a finite set containing $P$, then $\mathcal{W}_\pi$ is equal to the disjoint union of the sets $\mathcal{W}_{(\tilde f, Q)}$, where $\tilde f$ runs through the set of all maps $\tilde f:Q\longrightarrow\mathsf{X}$ such that $\tilde f|_P=f$. Note that some of these sets may be empty (if the corresponding patch is not allowed for the elements of $\mathbf{C}$).
If $\pi=(f, P)$ is a patch and $g\in\mathbb{Z}^d$, then we have $g(\mathcal{W}_{\pi})=\mathcal{W}_{\pi+g}$, where $\pi+g=(f+g, P+g)$ and $(f+g):(P+g)\longrightarrow\mathsf{X}$ is given by $(f+g)(h)=f(h-g)$, in accordance with the definition of the action of $\mathbb{Z}^d$ on $\mathsf{X}^{\mathbb{Z}^d}$.
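The identity $g(\mathcal{W}_\pi)=\mathcal{W}_{\pi+g}$ can be checked mechanically on examples. The following Python sketch (for $d=2$, with $\mathbb{Z}^2$-sequences modeled as functions $\mathbb{Z}^2\to\mathsf{X}$; all helper names are ours) verifies that $w$ contains $\pi$ if and only if $g(w)$ contains $\pi+g$:

```python
# Check g(W_pi) = W_{pi+g} on examples: w contains the patch (f, P)
# iff g(w) contains the shifted patch (f+g, P+g). Here d = 2.

def shift_seq(w, g):
    # (g . w)(h) = w(h - g)
    return lambda h: w((h[0] - g[0], h[1] - g[1]))

def shift_patch(patch, g):
    f, P = patch
    return ({(p[0] + g[0], p[1] + g[1]): v for p, v in f.items()},
            {(p[0] + g[0], p[1] + g[1]) for p in P})

def contains(w, patch):
    f, P = patch
    return all(w(p) == f[p] for p in P)

# A concrete Z^2-sequence over X = {0, 1}: the checkerboard labeling.
w = lambda h: (h[0] + h[1]) % 2
P = {(0, 0), (1, 0), (0, 1)}
pi = ({p: w(p) for p in P}, P)          # a patch that w contains
for g in [(2, 3), (-1, 5), (0, 0)]:
    assert contains(shift_seq(w, g), shift_patch(pi, g))
print("g(w) contains pi+g for all tested g")
```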
\begin{lemma}
\label{lem:incompatible}
Let $A\subset\mathbb{Z}^d$ be a finite set not containing zero. Then there exists $B\subset\mathbb{Z}^d$ such that for every $w\in \mathbf{C}$ and every $g\in A$ the patches $(w|_B, B)$ and $((w+g)|_B, B)$ are not compatible.
\end{lemma}
\begin{proof}
Define the following metric on $\mathsf{X}^{\mathbb{Z}^d}$. The distance $|w_1-w_2|$ is equal to $2^{-R}$, where $R$ is the biggest number such that the restrictions of $w_1$ and $w_2$ to the ball of radius $R$ in $\mathbb{Z}^d$ with center in $0$ (for example, in the $\ell_\infty$ norm) coincide. Then it is enough to prove that there exists $\epsilon$ such that $|g(w)-w|>\epsilon$ for all $g\in A$ and $w\in\mathbf{C}$. Namely, for every $\epsilon$ there exists a finite set $B\subset\mathbb{Z}^d$ such that for every $w\in\mathsf{X}^{\mathbb{Z}^d}$ the set of all $u\in\mathsf{X}^{\mathbb{Z}^d}$ such that $(u|_B, B)=(w|_B, B)$ is contained in the $\epsilon$-neighborhood of $w$.
Suppose that this is not true, i.e., that for every $\epsilon>0$ there exist $w$ and $g\in A$ such that $|g(w)-w|\le \epsilon$. Since $A$ is finite, this implies that there exist $g\in A$ and a sequence of points $w_n\in\mathbf{C}$ such that $|g(w_n)-w_n|\to 0$ as $n\to \infty$. Since $\mathbf{C}$ is compact, passing to a convergent subsequence shows that $g$ has a fixed point, which contradicts Lemma~\ref{lem:free}.
\end{proof}
Let $U\subset\mathbf{C}$ be a clopen set, and $g_1, g_2, g_3\in\mathbb{Z}^d$ elements such that $g_1(U), g_2(U), g_3(U)$ are pairwise disjoint. Denote by $T_{U, (g_1, g_2, g_3)}$ the element of $[[\mathbb{Z}^d]]$ given by
\[T_{U, (g_1, g_2, g_3)}(w)=\left\{\begin{array}{rl} (g_2-g_1)(w) & \text{if $w\in g_1(U)$;}\\
(g_3-g_2)(w) & \text{if $w\in g_2(U)$;}\\
(g_1-g_3)(w) & \text{if $w\in g_3(U)$;}\\
w & \text{if $w\notin g_1(U)\cup g_2(U)\cup g_3(U)$.}\end{array}\right.\]
In other words, $T_{U, (g_1, g_2, g_3)}$ cyclically permutes $g_1(U)$, $g_2(U)$, and $g_3(U)$ in the natural way.
We will denote $T_{\pi, (g_1, g_2, g_3)}=T_{\mathcal{W}_\pi, (g_1, g_2, g_3)}$, for a patch $\pi$.
\begin{lemma}
Let $A_1, A_2, A_3, B_1, B_2, B_3$ be subsets of a set $X$ such that only $A_1$ and $B_1$ have non-empty intersection, while all the other pairs of subsets are disjoint. Let $a$ be a permutation of $X$ acting trivially on $X\setminus (A_1\cup A_2\cup A_3)$, and satisfying $a(A_1)=A_2$, $a(A_2)=A_3$, $a(A_3)=A_1$, and $a^3=1$. Similarly, let $b$ be a permutation acting trivially on $X\setminus (B_1\cup B_2\cup B_3)$ and satisfying $b(B_1)=B_2$, $b(B_2)=B_3$, $b(B_3)=B_1$, and $b^3=1$. Then $[[b^{-1}, a^{-1}], [b, a]]$ acts as $a$ on the set $(A_1\cap B_1)\cup a(A_1\cap B_1)\cup a^2(A_1\cap B_1)$ and identically outside of it.
\end{lemma}
Note that we use the left action here, but the usual commutator $[g, h]=g^{-1}h^{-1}gh$.
\begin{proof}
Let $C=A_1\cap B_1$. Then the sets $C, a(C), a^2(C), b(C), b^2(C)$ are pairwise disjoint. The permutations $a$ and $b$ act as cycles of length three permuting $C, a(C), a^2(C)$ and $C, b(C), b^2(C)$, respectively. The element $[[b^{-1}, a^{-1}], [b, a]]$ then acts on these five sets in the same way as the similar expression involving commutators of the permutations $a=(1, 2, 3)$ and $b=(1, 4, 5)$ acts on $\{1, 2, 3, 4, 5\}$. We have $[b, a]=b^{-1}a^{-1}ba=(1, 5, 3)$ and $[b^{-1}, a^{-1}]=bab^{-1}a^{-1}=(1, 4, 2)$, and $[[b^{-1}, a^{-1}], [b, a]]=(1, 4, 2)^{-1}(1, 5, 3)^{-1}(1, 4, 2)(1, 5, 3)=(1, 2, 3)=a$.
The set $A'=(A_1\setminus C)\cup a(A_1\setminus C)\cup a^2(A_1\setminus C)$ is $a$-invariant, and $b$ acts trivially on it. It follows that the restrictions of $[b^{-1}, a^{-1}]$ and $[b, a]$ to $A'$ are equal to the restrictions of $[1, a^{-1}]$ and $[1, a]$, which are trivial. It follows that $[[b^{-1}, a^{-1}], [b, a]]$ acts trivially on $A'$. The same argument shows that $[[b^{-1}, a^{-1}], [b, a]]$ acts trivially on $(B_1\setminus C)\cup b(B_1\setminus C)\cup b^2(B_1\setminus C)$.
\end{proof}
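The computation with the cycles $a=(1,2,3)$ and $b=(1,4,5)$ in the proof can be verified by machine. In the following Python sketch (our own helpers; products apply the rightmost factor first) the final assertion is exactly $[[b^{-1},a^{-1}],[b,a]]=a$:

```python
# Verify [[b^-1, a^-1], [b, a]] = a for a = (1, 2, 3) and b = (1, 4, 5),
# with permutations of {1, ..., 5} stored as dicts.

def cycle(*pts):
    p = {i: i for i in range(1, 6)}
    for x, y in zip(pts, pts[1:] + (pts[0],)):
        p[x] = y
    return p

def mul(p, q):
    return {x: p[q[x]] for x in p}     # (p * q)(x) = p(q(x)): q acts first

def inv(p):
    return {v: k for k, v in p.items()}

def comm(g, h):                        # [g, h] = g^-1 h^-1 g h
    return mul(inv(g), mul(inv(h), mul(g, h)))

a = cycle(1, 2, 3)
b = cycle(1, 4, 5)
assert comm(b, a) == cycle(1, 5, 3)
assert comm(inv(b), inv(a)) == cycle(1, 4, 2)
assert comm(comm(inv(b), inv(a)), comm(b, a)) == a
print("[[b^-1, a^-1], [b, a]] == a")
```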
As a corollary, we get the following relation between the elements of the form $T_{(f, P), (g_1, g_2, g_3)}$.
\begin{corollary}
\label{cor:patchesunion}
Let $\pi_1$, $\pi_2$ be patches, $g_1, g_2, h_1, h_2$ be elements of $\mathbb{Z}^d$ such that $\pi_1, \pi_1+g_1, \pi_1+g_2, \pi_2, \pi_2+h_1, \pi_2+h_2$ are pairwise incompatible except for the pair $\pi_1$ and $\pi_2$. Then
\[[[T_{\pi_2, (0, h_1, h_2)}^{-1}, T_{\pi_1, (0, g_1, g_2)}^{-1}], [T_{\pi_2, (0, h_1, h_2)}, T_{\pi_1, (0, g_1, g_2)}]]=T_{\pi_1\cup\pi_2, (0, g_1, g_2)}.\]
\end{corollary}
We will use the usual $\ell_1$ metric on $\mathbb{Z}^d$, i.e., the word metric associated with the standard generating set of $\mathbb{Z}^d$. Denote by $B(R)$ the ball of radius $R$ with the center in $0\in\mathbb{Z}^d$ for this metric.
By Lemma~\ref{lem:incompatible}, there exists $R_1$ such that for every $w\in\mathbf{C}$ the patches $\pi=(w|_{B(R_1)}, B(R_1))$ and $\pi+g$ are incompatible for every $g\in\mathbb{Z}^d$ of length $\le 3$.
Let $\{e_1, e_2, \ldots, e_d\}$ be the standard generating set of $\mathbb{Z}^d$.
Denote by $\mathcal{T}_R$ the set of elements of $[[\mathbb{Z}^d]]'$ of the form $T_{\pi, (0, e_i, -e_i)}$, where $\pi$ runs through the set of all patches of the form $(w|_{B(R)}, B(R))$ for $w\in\mathbf{C}$.
\begin{proposition}
If $R\ge R_1+2$, then the group generated by $\mathcal{T}_R$ contains $\mathcal{T}_{R+1}$.
\end{proposition}
\begin{proof}
Denote $S=\{\pm e_1, \pm e_2, \ldots, \pm e_d\}$.
Let $B\subset\mathbb{Z}^d$ be a finite subset containing $B(R_1+2)$.
Define the patches $\rho_0=(w|_B, B)$, $\rho_h=(w|_{B+h}, B+h)$, for $h\in S$. Note that the patch $\rho_h$ contains the patch $(w|_{B(R_1)}, B(R_1))$ and that $\rho_h-h=((w-h)|_B, B)$.
Let us apply Corollary~\ref{cor:patchesunion} for $\pi_1=\rho_0$, $\pi_2=\rho_h$,
$g_1=g$, $g_2=-g$, $h_1=h$, $h_2=2h$, where $g, h$ are different elements of $S$. Since $\rho_0$ and $\rho_h$ both contain the patch $(w|_{B(R_1)}, B(R_1))$, the patches $\pi_1, \pi_1+g_1$, $\pi_1+g_2$, $\pi_2$, $\pi_2+h_1$, and $\pi_2+h_2$ are pairwise incompatible, except for $\pi_1$ and $\pi_2$, which are both patches of $w$. It follows that we can apply Corollary~\ref{cor:patchesunion}, hence
\[[[T_{\rho_h, (0, h, 2h)}^{-1}, T_{\rho_0, (0, g, -g)}^{-1}], [T_{\rho_h, (0, h, 2h)}, T_{\rho_0, (0, g, -g)}]]=T_{\rho_0\cup \rho_h, (0, g, -g)}.\]
Note that $T_{\rho_h, (0, h, 2h)}=T_{\rho_h-h, (-h, 0, h)}=T_{\rho_h-h, (0, h, -h)}$.
Let us apply now Corollary~\ref{cor:patchesunion} to $\pi_1=\rho_0-g$, $\pi_2=\rho_g-g$, $g_1=-2g$, $g_2=-g$, $h_1=h$, $h_2=-h$. The patches $\pi_1$ and $\pi_2$ are patches of $w-g$, and contain $((w-g)|_{B(R_1)}, B(R_1))$. It follows that, in the same way as above, we can apply Corollary~\ref{cor:patchesunion}, and get
\[[[T_{\rho_g-g, (0, h, -h)}^{-1}, T^{-1}_{\rho_0-g, (0, -2g, -g)}],
[T_{\rho_g-g, (0, h, -h)}, T_{\rho_0-g, (0, -2g, -g)}]]=T_{(\rho_0-g)\cup(\rho_g-g), (0, -2g, -g)}.\]
Recall that $\rho_g-g=((w-g)|_B, B)$ and that we have $T_{\rho_0-g, (0, -2g, -g)}=T_{\rho_0, (g, -g, 0)}=T_{\rho_0, (0, g, -g)}$. We also have $T_{(\rho_0-g)\cup (\rho_g-g), (0, -2g, -g)}=T_{\rho_0\cup\rho_g, (0, g, -g)}$.
We have shown that the group generated by the set
\[\{T_{\pi, (0, g, -g)}\;:\;g\in S, \pi=(w|_B, B), w\in\mathbf{C}\}\]
contains the set
\[\{T_{\pi, (0, g, -g)}\;:\;g\in S, \pi=(w|_{B\cup B+h}, B\cup B+h), w\in\mathbf{C}, h\in S\}.\]
Since $B(R+1)=\bigcup_{h\in S}B(R)+h$, this finishes the proof of the proposition.
\end{proof}
It follows that the group generated by $\mathcal{T}_{R_1+2}$ contains $\mathcal{T}_R$ for every $R\ge R_1+2$. For every cylindrical set $U\subset\mathbf{C}$ there exists $R$ such that $U$ is equal to the disjoint union of cylindrical sets $\mathcal{W}_\pi$ such that $\pi$ is a patch with support $B(R)$. It follows that every element of the form $T_{\pi, (0, e_i, -e_i)}$ can be written as a product of elements of the same type such that $\pi$ is a patch with support $B(R)$ for some $R$ big enough. Consequently, the group generated by $\mathcal{T}_{R_1+2}$ contains all elements of the form $T_{\pi, (0, e_i, -e_i)}$.
The proof of Theorem~\ref{main} is completed by the following proposition, since the set $\mathcal{T}_{R_1+2}$ is finite.
\begin{proposition}
The derived subgroup of the full group of the action of $\mathbb{Z}^d$ on $\mathbf{C}$ is generated by the set of elements of the form $T_{\pi, (0, e_i, -e_i)}$.
\end{proposition}
\begin{proof}
It is known, see~\cite{matui1}, that the derived subgroup of $[[\mathbb{Z}^d]]$ is simple and is contained in every non-trivial normal subgroup of $[[\mathbb{Z}^d]]$.
Consider the set $\mathcal{T}\subset[[\mathbb{Z}^d]]$ of all elements of order three permuting cyclically three disjoint clopen subsets $U_1, U_2, U_3$ of $\mathbf{C}$ and acting identically outside their union. The set $\mathcal{T}$ is obviously invariant under conjugation by elements of $[[\mathbb{Z}^d]]$, hence the group generated by $\mathcal{T}$ is normal. On the other hand, we have $\mathcal{T}\subset [[\mathbb{Z}^d]]'$, as every element of $\mathcal{T}$ is equal to the commutator of two transformations: one permuting $U_1$ with $U_2$, and the other permuting $U_2$ with $U_3$. Consequently, $\mathcal{T}$ generates $[[\mathbb{Z}^d]]'$.
For every element $T\in\mathcal{T}$ permuting cyclically clopen sets $U_1, U_2, U_3$, there exist partitions of the $U_i$ into cylindrical sets such that $T$ maps a piece of the partition to a piece of the partition, and the restriction of $T$ to every piece of the partitions is equal to the restriction of an element of $\mathbb{Z}^d$. It follows that $T$ is a product of a finite set of elements of the form $T_{\pi, (g_1, g_2, g_3)}$. It remains to show that we can generate all elements of the form $T_{\pi, (g_1, g_2, g_3)}$ by elements of the form $T_{\pi, (0, e_i, -e_i)}$. It is well known that the alternating group $A_n$ is generated by the cycles $(k, k+1, k+2)$. It follows that the group generated by $T_{\pi, (0, e_i, -e_i)}$ contains the set of elements of the form $T_{\pi, (g_1, g_2, g_3)}$, where the $g_i$ belong to one direct factor of $\mathbb{Z}^d$.
Let us prove the following technical lemma.
\begin{lemma}\label{l1}
Let $X_d=\{x_1\ldots x_d|x_i \in \{a,b,c\}, \ 1\le i \le d \}$ be the $3^d$-element set of $d$-letter words over the alphabet $\{a,b,c\}$, and let $S_{X_d}$ be the symmetric group of permutations of $X_d$. Denote the alternating subgroup of even permutations of $X_d$ by $A_{X_d}$. Consider the set $B_d$ of all elements of the type $(XaY\ XbY\ XcY) \in S_{X_d}$, where $X$ and $Y$ are arbitrary (possibly, empty) words such that $|X|+|Y|=d-1$. Then $A_{X_d}$ is generated by the set $B_d$.
\end{lemma}
\begin{proof}
The lemma can be proved by induction on $d$.
For $d=2$, we use the well-known fact that $A_9$ is generated by 3-cycles $\{(1 2 3), (2 3 4), \ldots, (7 8 9)\}$. To apply this fact, we need to show that all 7 elements $(aa \ ab \ ac)$, $(ab \ ac \ ba)$, $(ac \ ba \ bb)$, $(ba \ bb \ bc)$, $(bb \ bc \ ca)$, $(bc \ ca \ cb)$, $(ca \ cb \ cc)$ are generated by $B_2$. This can be checked by hand:
\begin{itemize}
\item $(aa \ ab \ ac) \in B_2$;
\item $(ab \ ac \ ba) = (aa \ ba \ ca)(aa \ ab \ ac)(aa \ ca \ ba) $;
\item $(ac \ ba \ bb) = (ac \ cc \ bc)(ba \ bb \ bc)(ac \ bc \ cc) $;
\item $(ba \ bb \ bc) \in B_2$;
\item $(bb \ bc \ ca) = (aa \ ba \ ca)(ba \ bb \ bc)(aa \ ca \ ba) $;
\item $(bc \ ca \ cb) = (ac \ cc \ bc)(ca \ cb \ cc)(ac \ bc \ cc) $;
\item $(ca \ cb \ cc) \in B_2$.
\end{itemize}
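The four composite identities above are conjugations $\sigma\tau\sigma^{-1}$, and can be verified mechanically; the following sketch (our own check, composing permutations of the nine two-letter words with the leftmost factor applied last) confirms them:

```python
from itertools import product

words = [a + b for a, b in product("abc", repeat=2)]  # the 9 two-letter words

def cyc(u, v, w):
    """Permutation of `words` given by the 3-cycle (u v w)."""
    p = {x: x for x in words}
    p[u], p[v], p[w] = v, w, u
    return p

def mul(p, q):
    """Product pq, with the left factor applied last: (pq)(x) = p(q(x))."""
    return {x: p[q[x]] for x in words}

def mul3(p, q, r):
    return mul(p, mul(q, r))

# The four identities that are not simply memberships in B_2:
assert mul3(cyc("aa","ba","ca"), cyc("aa","ab","ac"), cyc("aa","ca","ba")) == cyc("ab","ac","ba")
assert mul3(cyc("ac","cc","bc"), cyc("ba","bb","bc"), cyc("ac","bc","cc")) == cyc("ac","ba","bb")
assert mul3(cyc("aa","ba","ca"), cyc("ba","bb","bc"), cyc("aa","ca","ba")) == cyc("bb","bc","ca")
assert mul3(cyc("ac","cc","bc"), cyc("ca","cb","cc"), cyc("ac","bc","cc")) == cyc("bc","ca","cb")
```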
Suppose the statement holds for $d=k$ and consider the case $d=k+1$. Since the alternating group is generated by 3-cycles, it is sufficient to show that every 3-cycle is generated by $B_{k+1}$. Assume we have a cycle $(Ax \ By \ Cz)$, where $A,B,C \in X_k$ are pairwise distinct, and $x,y,z \in \{a,b,c\}$, not necessarily distinct. We know the following:
\begin{itemize}
\item $(Ax \ Bx \ Cx)$, $(Ay \ By \ Cy)$, $(Az \ Bz \ Cz)$ are generated by $B_{k+1}$. Indeed, we can take the elements of $B_k$ generating $(A \ B \ C)$ and append the needed letter to each of them.
\item $(Ax \ Ay \ Az)$, $(Bx \ By \ Bz)$, $(Cx \ Cy \ Cz)$ are in $B_{k+1}$ by definition.
\end{itemize}
Then, applying the induction base for the set $\{A,B,C\} \times \{a,b,c\}$, we conclude that $(Ax \ By \ Cz)$ is also generated by $B_{k+1}$.
In case $A,B,C$ are not distinct, we can use a slightly modified version of the proof above. If, for example, $A=B$ (which automatically implies $x \neq y$), we can take an arbitrary word $D \in X_k$ distinct from $A$ and $C$ in order to apply the induction base to $\{A,C,D\} \times \{a,b,c\}$. Clearly, the 3-cycle $(Ax \ Ay \ Cz)$ will be generated by $B_{k+1}$.
The induction step is complete.
\end{proof}
Lemma~\ref{l1} implies that the group generated by all elements of the form $T_{\pi, (g_1, g_2, g_3)}$, where $g_i$ belong to one factor of $\mathbb{Z}^d$, contains all elements of the form $T_{\pi, (g_1, g_2, g_3)}$, where $g_i\in\mathbb{Z}^d$ are now arbitrary. This finishes the proof of the proposition and Theorem~\ref{main}.
\end{proof}
\section{Topological full group and interval exchange group}
Let $\alpha_1, \alpha_2, \ldots, \alpha_d$ be irrational numbers such that the additive group $H=\langle\alpha_1, \alpha_2, \ldots, \alpha_d\rangle/\mathbb{Z}$ generated by them modulo $\mathbb{Z}$ is isomorphic to $\mathbb{Z}^d$ (this implies that $\langle\alpha_1, \alpha_2, \ldots, \alpha_d\rangle$ is also isomorphic to $\mathbb{Z}^d$). The group $H$ is a subgroup of the circle $\mathbb{R}/\mathbb{Z}$, and hence acts on it in the natural way. By the classical Kronecker theorem, the action of each subgroup $\langle\alpha_i\rangle$ on $\mathbb{R}/\mathbb{Z}$ is minimal, hence the action of $H$ on $\mathbb{R}/\mathbb{Z}$ is also minimal.
Let us lift $H$ as a set to $[0, 1]$ by the natural quotient map $[0, 1]\to\mathbb{R}/\mathbb{Z}$, and let $W\subset[0, 1]$ be the obtained set.
Let us replace each number $q\in W\subset[0, 1]$ by two copies: $q_{-0}$ and $q_{+0}$. Here we identify $0_{-0}$ with $1$ and $0_{+0}$ with $0$, $1_{-0}$ with 1 and $1_{+0}$ with 0, according to the natural cyclic order on $\mathbb{R}/\mathbb{Z}$ (seen also as the quotient of the interval $[0, 1]$). Denote by $R_H$ the obtained set (equal to the disjoint union of $[0, 1]\setminus W$ and the set of doubled points $W$). The set $R_H$ is ordered in the natural way (we assume that $q_{-0}<q_{+0}$), and the order is linear (total).
Let us introduce the order topology on $R_H$. Recall, that it is the topology generated by the open intervals $(a, b)=\{x\in R_H\;:\;a<x<b\}$.
\begin{lemma}
The space $R_H$ is homeomorphic to the Cantor set.
\end{lemma}
\begin{proof}
We use the following formulation of Brouwer's theorem: A topological space is a Cantor space if and only if it is non-empty, compact, totally disconnected, metrizable and has no isolated points. Note that by classical metrization theorems, we can replace metrizability by Hausdorffness and second countability.
The space $R_H$ is obviously non-empty and has no isolated points. For any $a, b\in W$ such that $a<b$, we have $[a_{+0}, b_{-0}]=(a_{-0}, b_{+0})$, hence the intervals $(a_{-0}, b_{+0})$ are clopen. The set of such intervals is a basis of the topology, since the set $W$ is dense. We also see that the space $R_H$ is second countable and Hausdorff.
Let $A\subset R_H$ be an arbitrary subset. Let us show that $\sup A$ and $\inf A$ exist, which will imply compactness. Let $\hat A$ be the image of $A$ in $[0, 1]$. We know that $\sup\hat A, \inf\hat A\in [0, 1]$ exist. If $\sup\hat A\notin W$, then the corresponding element of $R_H$ is also a supremum of $A$. If $\sup\hat A\in W$, then $\sup A=\sup\hat A_{-0}$, unless $\sup\hat A_{+0}\in A$, in which case $\sup A=\sup\hat A_{+0}$. Infima are treated in the same way.
\end{proof}
The action of $H$ on $\mathbb{R}/\mathbb{Z}$ naturally lifts to an action on $R_H$: we just set $h(q_{+0})=h(q)_{+0}$ and $h(q_{-0})=h(q)_{-0}$.
Denote by $IET_H$ the topological full group of the action $(H, R_H)$. For every element $g\in IET_H$ there exists a finite partition of $R_H$ into clopen subsets such that the action of $g$ on each of the subsets coincides with a translation by an element of $H$. Clopen subsets of $R_H$ are finite unions of intervals of the form $[a_{+0}, b_{-0}]$ for $a, b\in W$. It follows that $g$ is an \emph{interval exchange transformation}: it splits the interval $[0, 1]$ into a finite number of intervals and then rearranges them. The endpoints of the intervals belong to $W$. Conversely, every interval exchange transformation such that the endpoints of the subintervals belong to $W$ lifts to an element of $IET_H$.
We have proved the following.
\begin{lemma}
The group $IET_H$ is naturally isomorphic to the group of all interval exchange transformations of $[0, 1]$ such that the endpoints of the intervals into which $[0, 1]$ is split belong to $H$.
\end{lemma}
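Such an interval exchange transformation can be sketched concretely. The following toy illustration (our own; the cut points $\sqrt2-1$ and $\sqrt3-1$ are an arbitrary choice standing in for elements of $W$) represents a map by its cut points and the order in which the pieces are reassembled:

```python
import bisect
import math

def make_iet(cuts, order):
    """Interval exchange of [0, 1): the pieces [cuts[i], cuts[i+1]) are
    rearranged so that piece order[k] becomes the k-th piece of the image.
    Returns the transformation as a function x -> T(x)."""
    lengths = [cuts[i + 1] - cuts[i] for i in range(len(cuts) - 1)]
    new_start = {}
    pos = 0.0
    for j in order:
        new_start[j] = pos
        pos += lengths[j]
    def T(x):
        i = bisect.bisect_right(cuts, x) - 1  # index of the piece containing x
        return new_start[i] + (x - cuts[i])   # translate that piece
    return T

cuts = [0.0, math.sqrt(2) - 1, math.sqrt(3) - 1, 1.0]
T = make_iet(cuts, [2, 0, 1])  # the last piece goes first
print(T(0.5))  # the middle piece is translated by the length of piece 2
```

On each piece the map is a translation, so the lift of such a map to the doubled space $R_H$ acts locally as an element of $H$.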
Theorem~\ref{main} now implies the following.
\begin{theorem}
The derived subgroup of $IET_H$ is simple and finitely generated.
\end{theorem}
A two-dimensional version of an interval exchange transformation group is considered in the next section.
\section{Penrose tiling group}
There are several versions of the Penrose
tiling~\cite{penrose}, let us describe one of them. The tiles
are two types of rhombi of equal side length 1. The angles of one
rhombus are $72^\circ$ and $108^\circ$. The angles of the other are
$36^\circ$ and $144^\circ$. We call these rhombi ``thick'' and
``thin'', respectively. Mark a vertex of angle $72^\circ$ in the thick
rhombus, and a vertex of angle $144^\circ$ of the thin rhombus. Mark
the sides adjacent to the marked vertex by single arrows pointing towards
the marked vertex. Mark the other edges by double arrows, so that in the
thick rhombus they point away from the unmarked vertex of angle
$72^\circ$ and in the thin rhombus they point towards the unmarked
vertex of angle $144^\circ$, see Figure~\ref{fig:penrosetiling}.
A \emph{Penrose tiling} is a tiling of
the whole plane by
such rhombi, where markings of the edges match (adjacent tiles must
have the same number of arrows pointing in the same direction). See
Figure~\ref{fig:tiling} for an example of a patch of a Penrose tiling.
\begin{figure}
\centering
\includegraphics{tiles.eps}
\caption{Tiles of the Penrose tilings}
\label{fig:penrosetiling}
\end{figure}
\begin{figure}
\centering
\includegraphics{tiling.eps}
\caption{Penrose tiling}
\label{fig:tiling}
\end{figure}
There are uncountably many different (up to translation and rotation)
Penrose tilings. Each of them is \emph{aperiodic}, i.e., does not
admit a translational symmetry.
Let us identify $\mathbb{R}^2$ with $\mathbb{C}$, and consider all
Penrose tilings by rhombi such that their sides are parallel to the
lines $e^{k\pi i/5}\mathbb{R}$, $k\in\mathbb{Z}$. A \emph{pointed}
Penrose tiling is a Penrose tiling with a marked vertex of a tile. Let
$\mathcal{T}$ be the set of all such pointed Penrose tilings, up to
translations (two pointed tilings correspond to the same element of
$\mathcal{T}$ if and only if there exists a translation mapping one
tiling to the other and the marked vertex of one tiling to the marked
vertex of the other). We sometimes identify a tiling with the set of vertices of
its tiles.
Let us introduce a topology on $\mathcal{T}$ in the following way. Let
$A\subset T$ be a finite set of vertices of a Penrose tiling (a \emph{patch}), and let
$v\in A$. The corresponding open set
$\mathcal{U}_{A, v}$ is the set of all pointed tilings $(T, u)$ such that
$A+u-v\subset T$. In other words, a pointed
tiling $(T, u)$ belongs to $\mathcal{U}_{A, v}$ if we can see the pointed
patch $(A, v)$ around $u$ as a part of $T$. Then the
natural topology on $\mathcal{T}$ is given by the basis of open sets
of the form $\mathcal{U}_{A, v}$ for all finite pointed patches $(A,
v)$ of Penrose tilings. It follows from the properties of Penrose
tilings that the space $\mathcal{T}$ is homeomorphic to the Cantor
set, and that for every Penrose tiling $T$ the set of pointed tilings
$(T, v)$ is dense in $\mathcal{T}$. In the literature, see..., the
space $\mathcal{T}$ is sometimes called the \emph{transversal}.
Consider a patch $A$ with two marked vertices $v_1, v_2\in A$. Then we have a
natural homeomorphism
$F_{A, v_1, v_2}:\mathcal{U}_{A, v_1}\longrightarrow\mathcal{U}_{A, v_2}$
mapping $(T, u)\in\mathcal{U}_{A, v_1}$ to $(T,
u+v_2-v_1)\in\mathcal{U}_{A, v_2}$. The homeomorphism $F_{A, v_1,
v_2}$ moves in every patch $A$ the marking from the vertex $v_1$ to
the vertex $v_2$. It is easy to see that $F_{A, v_1, v_2}$ is a
homeomorphism between clopen subsets of $\mathcal{T}$.
\begin{definition}
The \emph{topological full group of Penrose tilings} is the group
$\mathcal{P}$ of
homeomorphisms of $\mathcal{T}$ that are locally equal to the
homeomorphisms of the form $F_{A, v_1, v_2}$.
\end{definition}
The set of all pointed tilings $(T, v)$ obtained from a given
tiling $T$ is dense in $\mathcal{T}$ and invariant under the action of
the topological full group. It follows that every element of the full
group is uniquely determined by the permutation it induces on the set
of vertices of the tiling. In terms of permutations of $T$ the full
group can be defined in the following way.
We say that a map
$\alpha:T\longrightarrow T$ \emph{is defined by local rules} if there exists $R$
such that for every $x\in T$ the value of $x-\alpha(x)$ depends only
on the set $B_R\cap (T-x)$, where $B_R$ is the disc of radius $R$
around the origin $(0, 0)\in\mathbb{R}^2$.
The following is straightforward.
\begin{proposition}
A permutation $\alpha:T\longrightarrow T$ is induced by an element of the full
group if and only if $\alpha$ is defined by a local
rule. Consequently, the full group is isomorphic to the group of all
permutations of $T$ defined by local rules.
\end{proposition}
Let us describe a more explicit model of the space $\mathcal{T}$ and
the full group $\mathcal{P}$ using a description of the Penrose tilings given in
the papers~\cite{bruijn:pen1,bruijn:pen2}.
Denote $\zeta=e^{\frac{2\pi i}5}$, and let
\[P=\left\{\sum_{j=0}^4 n_j\zeta^j\;:\;n_j\in\mathbb{Z}, \sum_{j=0}^4
n_j=0\right\}=
(1-\zeta)\mathbb{Z}[\zeta]\]
be the group generated by the vectors on the sides of the regular
pentagon $S=\{1, \zeta, \zeta^2, \zeta^3, \zeta^4\}$. Note that
$5=4-\zeta-\zeta^2-\zeta^3-\zeta^4\in P$.
As an abelian group, $P$ is isomorphic to $\mathbb{Z}^4$.
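The claim $5\in P$ is easy to confirm numerically; a small sketch:

```python
import cmath

zeta = cmath.exp(2j * cmath.pi / 5)

# 5 = 4 - zeta - zeta^2 - zeta^3 - zeta^4, with coefficients summing to 0,
# so 5 satisfies the defining condition for membership in P.
coeffs = [4, -1, -1, -1, -1]
value = sum(n * zeta**j for j, n in enumerate(coeffs))
assert sum(coeffs) == 0
print(abs(value - 5))  # ~1e-15: the sum is indeed 5
```

(The identity holds because the five fifth roots of unity sum to zero.)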
Denote by $\mathcal{L}$ the set of lines of the form
$i\zeta^j\mathbb{R}+w$, for $j=0, 1, \ldots, 4$ and $w\in P$. It is
easy to see that for any two intersecting lines $l_1,
l_2\in\mathcal{L}$ and any generator $z\in\{1-\zeta, \zeta-\zeta^2,
\zeta^2-\zeta^3, \zeta^3-\zeta^4\}$ there exists $z'\in P$ such that
$z'$ is parallel to $l_2$ and $l_1+z=l_1+z'$. It follows that for any
pair of intersecting lines $l_1, l_2\in\mathcal{L}$ the intersection
point $l_1\cap l_2$ belongs to $P$. Consequently, a point $\xi\in\C$
belongs to either $0$, $1$, or $5$ lines from $\mathcal{L}$. If $\xi\in\C$
does not belong to any line $l\in\mathcal{L}$, then we call $\xi$
\emph{regular}.
Similarly to the case of interval exchange transformations, let us double each line $l\in\mathcal{L}$.
Let $\mathcal{C}$ be the obtained space and let
$Q:\mathcal{C}\longrightarrow\mathbb{C}$ be the corresponding quotient map. If
$\xi\in\mathbb{C}$ is regular, then $Q^{-1}(\xi)$ consists of a single
point. If $\xi\in\mathbb{C}\setminus P$ belongs to a line
$l\in\mathcal{L}$,
then $Q^{-1}(\xi)$ consists of two points associated with each of the
two half-planes into which $l$ separates $\mathbb{C}$. Every point
$\xi\in P$ has 10 preimages in $\mathcal{C}$ associated with each of
the ten sectors into which the lines from $\mathcal{L}$ passing
through $\xi$ separate the plane. A sequence $\xi_n$ of points of $\mathcal{C}$
converges to a point $\xi\in\mathcal{C}$ if and only if the sequence $Q(\xi_n)$
converges to $Q(\xi)$ and the sequence $Q(\xi_n)$ eventually belongs (if $Q(\xi)$ is not
regular) to the closed half-plane or sector associated with $\xi$.
The space $\mathcal{C}$ is locally compact and totally
disconnected. Polygons with sides belonging to lines
from $\mathcal{L}$ form a basis of topology of $\mathcal{C}$.
The group $P$ acts on $\mathcal{C}$ in the natural way, so that the
action is projected by $Q$ to the action of $P$ on $\mathbb{C}$ by
translations. Therefore, sums of the form $\tilde\xi+a$, for
$\tilde\xi\in\mathcal{C}$ and $a\in P$, are well defined.
Let us describe, following~\cite{bruijn:pen1,bruijn:pen2}, how a Penrose tiling is
associated with a point $\tilde\xi\in\mathcal{C}$. We will usually
denote $\xi=Q(\tilde\xi)$.
Suppose that $\xi$ is regular.
The vertices of the corresponding tiling $T_{\tilde\xi}$ will be the points of the
form $\sum_{j=0}^4k_j\zeta^j$, where $k_j\in\mathbb{Z}$ are such that
\[\left(\sum_{j=0}^4 k_j,\quad\sum_{j=0}^4 k_j\zeta^{2j}+\xi\right)\in
\bigcup_{s=1}^4(s, V_s),\] where $V_1$
is the pentagon with vertices $\zeta^j$, $V_2$ is the pentagon with
vertices $\zeta^j+\zeta^{j+1}$, $V_3=-V_2$, and $V_4=-V_1$. (Note that
we have changed $\xi$ to $-\xi$ comparing with~\cite{bruijn:pen1,bruijn:pen2}.)
If $\xi$ is singular, then we
can find a sequence $\tilde\xi_n$ of regular points converging in $\mathcal{C}$ to
$\tilde\xi$, and then the tiling $T_{\tilde\xi}$ is the limit of the
tilings $T_{\tilde\xi_n}$.
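This vertex condition can be prototyped directly. The sketch below (our own illustration; the point $\xi$ is an arbitrary choice assumed to be regular) enumerates small index vectors $(k_0,\dots,k_4)$ and keeps those landing in the appropriate pentagon:

```python
import cmath
from itertools import product

zeta = cmath.exp(2j * cmath.pi / 5)

def pentagon(s):
    """Vertices of V_s in cyclic order."""
    if s == 1:
        return [zeta**j for j in range(5)]
    if s == 2:
        return [zeta**j + zeta**(j + 1) for j in range(5)]
    if s == 3:
        return [-(zeta**j + zeta**(j + 1)) for j in range(5)]
    return [-zeta**j for j in range(5)]

def inside(z, verts):
    # Convex-polygon test: all edge cross products share one sign.
    n = len(verts)
    signs = []
    for i in range(n):
        a, b = verts[i], verts[(i + 1) % n]
        cross = (b.real - a.real) * (z.imag - a.imag) \
              - (b.imag - a.imag) * (z.real - a.real)
        signs.append(cross > 0)
    return all(signs) or not any(signs)

xi = 0.1 + 0.05j  # assumed regular
vertices = []
for k in product(range(-2, 3), repeat=5):
    s = sum(k)
    if 1 <= s <= 4:
        z = sum(kj * zeta**(2 * j) for j, kj in enumerate(k)) + xi
        if inside(z, pentagon(s)):
            vertices.append(sum(kj * zeta**j for j, kj in enumerate(k)))
print(len(vertices))  # a finite patch of tiling vertices
```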
Let $v=\sum_{j=0}^4n_j\zeta^{2j}\in P$
and $v'=\sum_{j=0}^4n_j\zeta^j$. Then $x\in T_{\tilde\xi}$ if and only if
$x-v'\in T_{\tilde\xi+v}$. It follows that the action of $P$ on
$\mathcal{C}$ preserves the associated tilings up to translations. In
fact, it is not hard to show that
two tilings $T_{\tilde\xi_1}$ and $T_{\tilde\xi_2}$ are
translations of each other if and only if $\tilde\xi_1$ and
$\tilde\xi_2$ belong to one $P$-orbit, see~\cite{bruijn:pen1,bruijn:pen2}.
Note that the sides of the pentagons $V_s'=V_s-s$ are contained in lines from the
collection $\mathcal{L}$, hence they are naturally identified with
compact open subsets of $\mathcal{C}$. See Figure~\ref{fig:vs} for the
pentagons $V_s'$. Denote $V'=\bigcup_{s=1}^4 (s, V_s')$.
\begin{figure}
\centering
\includegraphics{pentagonsBW.eps}
\caption{Pentagons $V_s'$}
\label{fig:vs}
\end{figure}
For every $(s, \tilde\xi)\in V'$ the point
$s=s+0\cdot\zeta+0\cdot\zeta^2+0\cdot\zeta^3+0\cdot\zeta^4$ belongs to
the tiling $T_{\tilde\xi}$. We say that the pointed tiling
$(T_{\tilde\xi}, s)$ \emph{corresponds} to the point $(s,
\tilde\xi)\in V'$.
Let $x=\sum_{j=0}^4k_j\zeta^j\in T_{\tilde\xi}$, and let
$s=\sum_{j=0}^4k_j$. Then the numbers $v'=x-s$ and
$v=\sum_{j=0}^4k_j\zeta^{2j}-s$ belong to $P$, and the map $y\mapsto
y-v'$ is a bijection $T_{\tilde\xi}\longrightarrow T_{\tilde\xi+v}$. This map
moves $x$ to the marked vertex $s=x-(x-s)$ of the tiling corresponding to $(s,
\tilde\xi+v)$. It follows that every pointed tiling, up to translation,
corresponds to a point of $V'$. It is easy to see that every
pointed Penrose tiling is represented by a unique point of $V'$, so
that we get a bijection between $V'$ and the space $\mathcal{T}$. It follows from the results
of~\cite{bruijn:pen1,bruijn:pen2} that this bijection is a homeomorphism.
\begin{proposition}
\label{pr:fullVprime}
The group $\mathcal{P}$ acts on $V'\cong\mathcal{T}$ locally by
translations by elements of $P$. In other words, for every
$\alpha\in\mathcal{P}$ there exists a partition of $V'$ into disjoint
clopen subsets $(s_i, U_i)$ such that $\alpha$ acts on each of them by
a translation $\alpha(s_i, x)=(s_i', x+\xi_i)$ for some $s_i'\in\{1,
2, 3, 4\}$ and $\xi_i\in P$.
\end{proposition}
Let us find elements $t_s\in P$ such that the sets $V_s'+t_s$ are pairwise
disjoint, and denote by $V''\subset\mathcal{C}$ the union of the sets
$V_s''=V_s'+t_s$. Then it follows from Proposition~\ref{pr:fullVprime}
that $\mathcal{P}$ is the group of all transformations $V''\longrightarrow V''$
that are locally equal to translations by elements of $P$.
Let us say that two clopen sets $U_1, U_2\subset\mathcal{C}$ are
\emph{equidecomposable} if there exists a homeomorphism $\phi:U_1\longrightarrow
U_2$ locally equal to translations by elements of $P$. If $U$ is any
clopen subset which is equidecomposable with $V''$, then $\mathcal{P}$
is equal to the group of all transformations of $U$ that are locally
translations by elements of $P$.
\begin{proposition}
The set $V''$ is equidecomposable with the parallelogram $F$ with vertices
$0, w_1=\zeta^2-\zeta^3, w_2=5(1-\zeta^2-\zeta^3+\zeta^4), w_1+w_2$.
\end{proposition}
\begin{proof}
Let us cut the pentagons $V_s''$ into triangles as shown in
Figure~\ref{fig:pentagonscut}.
\begin{figure}
\centering
\includegraphics{pentagonscut.eps}
\caption{Cutting pentagons $V_s$ into triangles}
\label{fig:pentagonscut}
\end{figure}
The obtained triangles can be grouped into pairs of triangles $T, T'$ such
that $T'$ is obtained from $T$ by a rotation by $\pi$ (and a
translation). Such pairs can be put together to form parallelograms,
as shown in Figure~\ref{fig:parallelograms}.
\begin{figure}
\centering
\includegraphics{parallelograms.eps}
\caption{Equidecomposability of parallelograms}
\label{fig:parallelograms}
\end{figure}
Figure~\ref{fig:parallelograms} also shows that each such
parallelogram is equidecomposable with its rotation by $\pi/5$. It
follows that each parallelogram is equidecomposable with its rotation
by any angle of the form $k\pi/5$. Consequently, every parallelogram
formed by the acute-angled triangles is equidecomposable with the
parallelogram with the set of vertices $\{0, \zeta^2-\zeta^3,
1-\zeta^2, 1-\zeta^3\}$,
and each parallelogram formed by the
obtuse-angled triangles is equidecomposable with the parallelogram
with the set of vertices $\{0, \zeta^2-\zeta^3, \zeta^4-\zeta^3,
\zeta^2-2\zeta^3+\zeta^4\}$. We get 5 parallelograms of each kind.
We can put all the obtained parallelograms
together to form the parallelogram $F$.
\end{proof}
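As a consistency check, equidecomposable sets must have equal area, and the total area of the four pentagons indeed matches that of $F$; a numerical sketch (our own verification):

```python
import cmath

zeta = cmath.exp(2j * cmath.pi / 5)

def polygon_area(verts):
    # Shoelace formula; vertices given in cyclic order as complex numbers.
    n = len(verts)
    s = sum(verts[i].real * verts[(i + 1) % n].imag
            - verts[(i + 1) % n].real * verts[i].imag for i in range(n))
    return abs(s) / 2

V1 = [zeta**j for j in range(5)]
V2 = [zeta**j + zeta**(j + 1) for j in range(5)]
pentagons = 2 * polygon_area(V1) + 2 * polygon_area(V2)  # V3, V4 are reflections

w1 = zeta**2 - zeta**3
w2 = 5 * (1 - zeta**2 - zeta**3 + zeta**4)
parallelogram = abs(w1.real * w2.imag - w1.imag * w2.real)

print(pentagons, parallelogram)  # equal up to rounding
```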
The parallelogram $F$,
seen as a subset of $\mathcal{C}$, is
the fundamental domain of the group $\langle w_1, w_2\rangle<P$. It is
easy to check that $P/\langle w_1, w_2\rangle$ is isomorphic to
$\mathbb{Z}^2\oplus\mathbb{Z}/5\mathbb{Z}$. The space of orbits
$\mathcal{C}/\langle w_1, w_2\rangle$ is naturally homeomorphic to the
parallelogram $F$ (and hence to the spaces $V'$ and $\mathcal{T}$).
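The isomorphism $P/\langle w_1, w_2\rangle\cong\mathbb{Z}^2\oplus\mathbb{Z}/5\mathbb{Z}$ can be double-checked via invariant factors. Writing elements of $P$ in the basis $\zeta^j-\zeta^{j+1}$, $j=0,\dots,3$ (an element $\sum n_j\zeta^j$ with $\sum n_j=0$ has coordinates given by the partial sums of the $n_j$), we get $w_1=(0,0,1,0)$ and $w_2=(5,5,0,-5)$; a sketch of the computation (our own verification):

```python
from functools import reduce
from itertools import combinations
from math import gcd

# w1 and w2 in the basis zeta^j - zeta^(j+1), j = 0..3, of P ~ Z^4
M = [[0, 0, 1, 0],
     [5, 5, 0, -5]]

# Invariant factors of a rank-2 integer matrix:
# d1 = gcd of all entries, d1*d2 = gcd of all 2x2 minors.
d1 = reduce(gcd, (abs(x) for row in M for x in row))
minors = [abs(M[0][i] * M[1][j] - M[0][j] * M[1][i])
          for i, j in combinations(range(4), 2)]
d2 = reduce(gcd, minors) // d1
print(d1, d2)  # 1 5  ->  quotient Z^4 / <w1, w2> = Z/5 + Z^2
```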
\begin{proposition}
The group $\mathcal{P}$ is isomorphic to the full topological group of
the action of $P/\langle w_1, w_2\rangle$ on the Cantor set
$\mathcal{C}/\langle w_1, w_2\rangle$.
\end{proposition}
\begin{corollary}
The derived subgroup of $\mathcal{P}$ is simple and finitely generated.
\end{corollary}
\section{Introduction}
\label{introduction}
The phenomenon of Gamma Ray Burst (GRB) has been puzzling
astrophysicists for many years since its discovery in the
1970s~\cite{KSO73,MGI74}. The recent identification of long duration
GRBs with supernovae (see Della Valle 2006, and Woosley \& Bloom 2006
for a full review) means that we are dealing with an enormous amount of
energy, $10^{51}-10^{52}\mbox{erg}$, released within a very short
time, 2-100 seconds, in the form of a highly relativistic collimated
outflow \cite{P05}. Most of the current GRB studies are focused on
the physics associated with production of gamma rays in such flows and
their interaction with the interstellar medium or the stellar wind of
the supernova progenitor. However, the central question in the problem
of GRBs is undoubtedly the nature of their central engines. These
powerful jets have to be produced as a result of stellar collapse,
most likely by the relativistic object, neutron star or black hole
(BH), formed in the center, and make their way through the massive
star unscathed, remaining well collimated and highly relativistic.
The most popular model of the central engine is based on the ``failed
supernova'' scenario of stellar collapse, or ``collapsar'', where the
iron core of progenitor star forms a BH \cite{W93}. If the progenitor
is non-rotating then its collapse is likely to continue in a
``silent'' manner until the whole star is swallowed by the BH. If,
however, the specific angular momentum in the equatorial part of
stellar envelope exceeds that of the last stable orbit of the BH then
the collapse becomes highly anisotropic. While in the polar region it
may proceed more or less uninhibited, for a while, the equatorial
layers form a dense and massive accretion disk. The gravitational energy
released in the disk can be very large, more than sufficient to stop
the collapse of outer layers and drive GRB outflows, presumably in the
polar direction where density is much lower \cite{MW99}. In addition,
there is plenty of rotational energy in the BH itself
\begin{equation}
E\sub{rot} = \frac{M\sub{bh}c^2}{2} \left\{ 2-\left[
\left(1+\sqrt{1-a^2}\right)^2+a^2 \right]^{1/2} \right\},
\end{equation}
where $M\sub{bh}$ is the BH mass and $a\in(-1,1)$ is its dimensionless
rotation parameter. For $M\sub{bh}=3M_{\sun}$ and $a=0.9$ this gives
the enormous value of $E\sub{rot} \simeq8\times10^{53}$erg.
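Plugging the quoted numbers into this formula reproduces the stated value; a quick sketch in CGS units (the physical constants are standard, not taken from the paper):

```python
from math import sqrt

c = 2.998e10      # speed of light, cm/s
Msun = 1.989e33   # solar mass, g

def e_rot(m_bh, a):
    """Rotational energy of a Kerr black hole (the formula above), in erg."""
    return m_bh * c**2 / 2 * (2 - sqrt((1 + sqrt(1 - a**2))**2 + a**2))

print(e_rot(3 * Msun, 0.9))  # ~8e53 erg
```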
The three currently actively discussed mechanisms of powering GRB jets
in the collapsar scenario are the heating via annihilation of
neutrinos produced in the disk \cite{MW99}, the magnetic braking of
the disk \cite{BP82,UM06}, and the magnetic braking of the BH
\cite{BZ77}. The potential role of the neutrino mechanism is rather
difficult to assess as this requires an accurate treatment of neutrino
transport in the complex dynamic environment of a collapsar. The long and
complicated history of numerical studies of neutrino-driven supernova
explosions teaches us to be cautious. Numerical simulations by
MacFadyen \& Woosley\shortcite{MW99} and Aloy et
al.\shortcite{AIMGM00} have demonstrated that sufficiently large
energy deposition in the polar region above the disk may indeed result
in fast collimated jets. However, the neutrino transport has not been
implemented in these simulations and the energy deposition was based
simply on expectations. When Nagataki et al.\shortcite{NTMT07}
utilized a simple prescription for neutrino transport in their code
they found that neutrino heating was insufficient to drive polar jets.
A number of groups have studied the collapsar scenario using Newtonian
MHD codes and implementing the Paczynski-Wiita potential in order to
approximate the gravitational field of the central BH
\cite{PMAB03,FKYHS06,NTMT07}. In this approach it is impossible to
capture the Blandford-Znajek effect and only the magnetic braking of
the accretion disk can be investigated. The general conclusion of
these studies is that the accretion disk can launch
magnetically-driven jets provided the magnetic field in the progenitor
core is sufficiently strong.
Unfortunately, the jet power has not been given in most of these
papers and is difficult to evaluate from the published numbers. In
the simulations of Proga et al.\shortcite{PMAB03} the jet power at
$t\simeq 0.25$s is $\simeq 10^{50}\mbox{erg}/$s. The initial magnetic
field in these simulations is monopole with $B\simeq 2\times 10^{14}$G
at $r=3r_g$, where $r_g=GM\sub{bh}/c^2$ (private communication).
\begin{figure*}
\includegraphics[width=57mm]{figures/exp2.png}
\includegraphics[width=57mm]{figures/exp4.png}
\includegraphics[width=57mm]{figures/exp3.png}
\caption{Solution immediately before the explosion ($t=0.24$s). Left
panel: the baryonic rest mass density, $\log_{10}\rho$, in g/cm$^3$, and
the magnetic field lines; Middle panel: the ratio of gas and magnetic
pressures, $\log_{10}P/P_m$, and velocity direction vectors; Right
panel: the ratio of azimuthal and poloidal magnetic field strengths,
$\log_{10}B^\phi/B_p$, and the magnetic field lines.}
\label{f0}
\end{figure*}
The study of collapsars in full GRMHD is still in its infancy.
Sekiguchi \& Shibata\shortcite{SS07} studied the collapse of rotating
stellar cores and formation of BH in the collapsar scenario. Their
results show powerful explosions soon after the accretion disk is
formed around the BH and the free falling plasma of polar regions
collides with this disk. These explosions are driven by the heat
generated as a result of such collision. However, the authors have not
accounted for the neutrino cooling and the energy losses due to
photo-dissociation of atomic nuclei, and the explosions could be
similar in nature to the ``successful''
prompt explosions of early supernova simulations \cite{bethe}. Mizuno
et al.\shortcite{MYKS04a,MYKS04b} carried out GRMHD simulations in the
time-independent space-time of a central BH. The computational domain
did not include the BH ergosphere and thus they could not study the
role of the Blandford-Znajek effect~\cite{K04a}. The energy losses
have not been included and the equation of state (EOS) was a simple
polytrope. These simulations were run for a rather short time, $\simeq
280 r_g/c$ where $r_g=GM/c^2$, and jets were formed almost immediately
due to the unrealistically strong initial magnetic field.
In this letter we describe the first results of axisymmetric GRMHD
simulations of collapsars where we use a realistic EOS~\cite{TS00},
include the energy losses due to neutrino emission (assuming the optically
thin regime) and photo-dissociation of nuclei (see the details of the
micro-physics in Komissarov \& Barkov 2007), use a computational
domain that includes the BH horizon and its ergosphere, and run
simulations for a relatively long physical time, up to 0.5s. The
neutrino heating is not included.
\begin{figure*}
\includegraphics[width=57mm]{figures/rho1.png}
\includegraphics[width=57mm]{figures/rho2.png}
\includegraphics[width=57mm]{figures/rho3.png}
\caption{Solution on different scales at $t=0.45$s. The colour images
show the baryonic rest mass density, $\log_{10}\rho$ in g/cm$^3$, the
contours show the magnetic field lines, and the arrows show the
velocity field.}
\label{f1}
\end{figure*}
\begin{figure*}
\includegraphics[width=57mm]{figures/b1.png}
\includegraphics[width=57mm]{figures/b2.png}
\includegraphics[width=57mm]{figures/b3.png}
\caption{The inner region at $t=0.45$s. Left panel: the magnetization
parameter, $\log_{10}P/P_m$, and the magnetic field lines; Middle
panel: the ratio of azimuthal and poloidal magnetic field strengths,
$\log_{10}B^\phi/B_p$, and the magnetic field lines; Right panel: the
magnetic field strength, $\log_{10}(B)$, and the magnetic field lines.}
\label{f2}
\end{figure*}
\section{Computer simulations}
\label{simulations}
The simulations were carried out with 2D axisymmetric GRMHD code
described in Komissarov\shortcite{K04b}. Since this code can deal
only with time-independent spacetimes we are forced to start from the
point where the central BH has already been formed inside the
collapsing star. In the presented model the BH mass
$M\sub{bh}=3M_{\sun}$ and its angular momentum parameter $a=0.9$. The
mass density and the radial velocity of the collapsing star are
described by the free-fall model of Bethe\shortcite{bethe}
corresponding to $t=1$s since the onset of collapse (see equations in
Komissarov \& Barkov, 2007). The parameter $C$ is set to 9
corresponding to most massive stars. This gives us the free-fall mass
rate $ \dot{M} \simeq 0.5 M_{\sun} \mbox{s}^{-1}$. On top of this we
endowed the free-falling plasma with angular momentum and a poloidal
magnetic field. The angular momentum distribution describes a solid
body rotation up to the cylindrical radius $\varpi=6300\,\mbox{km}$.
Further out the angular momentum is constant,
$l=10^{17}\mbox{cm}^2\mbox{s}^{-1}$. The magnetic field distribution
is that of a uniformly magnetized sphere in vacuum; the radius of this
sphere is $r_1=4500\,$km and inside the sphere $B=3\times10^{10}$G. These
distributions are intended to describe the progenitor at the onset of
collapse rather than the state developed one second later. We
utilize the Kerr-Schild coordinates of spacetime. The computational
grid is uniform in polar angle, $\theta$, where it has 180 cells. In
the radial direction it is uniform in $\log r$, and has 450 cells.
The inner boundary is located just inside the event horizon and adopts
the free-flow boundary conditions. The outer boundary is located at
$r=25000\,$km and at this boundary the flow is prescribed according to
Bethe's model.
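The radial grid described here can be sketched as follows (the inner radius is our assumption, placed just inside the Kerr horizon $r_+=r_g(1+\sqrt{1-a^2})$; the factor $0.98$ is purely illustrative):

```python
from math import sqrt

G, c, Msun = 6.674e-8, 2.998e10, 1.989e33   # CGS units
M, a = 3 * Msun, 0.9
rg = G * M / c**2                   # gravitational radius, ~4.4e5 cm
r_plus = rg * (1 + sqrt(1 - a**2))  # event horizon radius, ~6.4e5 cm

n_r = 450
r_in = 0.98 * r_plus                # "just inside the horizon" (assumed factor)
r_out = 25000e5                     # 25000 km in cm
# Cell interfaces uniform in log r:
r = [r_in * (r_out / r_in) ** (i / n_r) for i in range(n_r + 1)]
print(r[0] / 1e5, r[-1] / 1e5)      # in km: from inside the horizon to 25000
```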
At the beginning of simulations the angular momentum of accreting gas
is less than that of the last stable orbit, $l\sub{lso}$. It falls
straight into the BH, and no disk is formed. However, the magnetic
flux threading the BH gradually increases and so does the magnetic
pressure. When the outer layers with $l>l\sub{lso}$ reach the BH the
centrifugal force slows down their infall and an accretion disk begins
to form around the BH. At the same time the accretion shock
separates from its surface. The low angular momentum plasma of polar
regions keeps falling straight into the BH after passing the accretion
shock whereas the high angular momentum plasma fills the ``bubble''
above and below the disk (fig.\ref{f0}). Strong differential rotation
within this bubble leads to amplification of the azimuthal component
of magnetic field, the magnetic pressure grows and eventually
overwhelms the ram pressure of free-falling envelope -- the explosion
begins. The BH is a key player in the process, pumping electromagnetic
energy into the bubble and the disk at the rate of $\simeq 2\times
10^{51}\mbox{erg}\,\mbox{s}^{-1}$.
Figures \ref{f1} and \ref{f2} show the solution at $t=0.45$s, near the
end of the simulations. At this time, the solution exhibits two
well-defined polar jets surrounded by magnetic cocoons of high pressure
and low density. The magnetic pressure of these cocoons, which have
been inflated by the jets, exceeds by more than six orders of
magnitude the magnetic pressure in the collapsing star. These
over-pressured cocoons drive a strong bow shock (blast wave) into the
star (right panel of fig.\ref{f1}). The mean propagation speed of the
shock in the polar direction is $v\sub{s}\simeq 0.18c$. Near the equator
the stellar plasma compressed by the shock continues streaming
downward with supersonic speed. At the equator and well outside of
the accretion disk the stream coming from northern hemisphere collides
with the stream coming from the southern hemisphere and a pair of
oblique shocks develop at $r\simeq 200r_g$ (middle panel of
fig.\ref{f2}). These shocks are not strong enough to cause
photo-dissociation of nuclei and the high post-shock pressure drives
the reflected flows away from the equatorial plane. Plasma from the
skin layers of the reflected streams actually enters the bubbles and
interacts with the jets (We expect this effect to weaken later when
the blast wave moves further away.) The inner layers of the reflected
streams pass through another shock, at $r\simeq 50r_g$, and feed the
accretion disk. The left panel of fig.\ref{f1} shows the solution in
the immediate vicinity of the BH. Its structure is reminiscent to that
found in the previous studies of thick disks around BHs -- main disk,
its dynamic corona, and magnetically-dominated
funnel~\cite{DH03,MG04,SST07}. This funnel is the region there the
Pointing dominated jets are produced as well as the ``wind'' blowing
into the BH -- in this image one can clearly see the surface
separating these flows.
Figure \ref{f2} shows the magnetic properties of the central region.
Not only the funnel but also the disk corona are
magnetically-dominated. The magnetic field strength reaches a
few$\times10^{15}$G near the BH; it is weaker in the funnel compared
to the disk at the same spherical radius, but not by much. Within the
disk and corona the azimuthal magnetic field, $B^\phi$, exceeds the
poloidal one, $B_p$, by two or three orders of magnitude. In contrast,
in the funnel $B^\phi/B_p \le 1$, reaching unity only near the funnel
walls. In fact, the poloidal field in the funnel exceeds that in the
disk and corona by 1--2 orders of magnitude. This is in contrast to the
conclusion made by Ghosh \& Abramowicz\shortcite{GA97} and Livio et
al.\shortcite{LOP99}, namely that the poloidal field threading the BH
horizon should be of the same order as the poloidal field in the inner
parts of the disk. Their main argument -- that both fields are produced
by the same azimuthal current flowing in the disk -- misses the fact
that additional currents may flow in the magnetosphere and over the
disk/funnel surface and support the magnetic field inside the funnel,
in a manner similar to a solenoid. In our case, the poloidal field
threading the BH is the original field of the progenitor, accumulated
during the initial phase of free infall.
The left panel of fig.\ref{f3} shows the baryonic mass flux as a
function of spherical radius. One can see that it reduces from the
free-fall value $\dot{M}\simeq -0.5M_{\sun}\mbox{s}^{-1}$ down to
$\dot{M}\simeq -0.06M_{\sun}\mbox{s}^{-1}$ at the event
horizon. Between $r\simeq 60r_g$ and $r=2500r_g$ this reduction
reflects the effect of the bow shock driven into the star by the
jets. The sharp reduction at $r\simeq 60r_g$ corresponds to the
position of the accretion shock and marks the transition from
approximate free-fall to the centrifugally supported disk.
The middle panel of fig.\ref{f3} shows the integral energy fluxes of
the jets as functions of spherical radius. To be more precise, the
integration is carried out over the whole sphere, but the
contribution from areas with baryonic rest mass density
$\rho>10^8\mbox{g}\,\mbox{cm}^{-3}$ is excluded. We have verified that
the bulk contribution to the fluxes computed in this way comes from
the jets. The baryonic rest mass flux, $\rho u^r$, where $u^r$ is the
radial component of the 4-velocity, is excluded from the total and the
matter energy fluxes; that is, these fluxes are computed via
\begin{equation}
\dot{E}=-2\pi\int_S (T^r_t + \rho u^r)\sqrt{\gamma} d\theta,
\end{equation}
where $\gamma$ is the determinant of the metric tensor of space and
$\bmath{T}$ is either the total stress-energy-momentum tensor or its
hydrodynamic part. The most important conclusion suggested by this
figure is that at least $80\%$ of the jet energy is provided directly
by the BH and at a very high rate, $\dot{E}\simeq
2\times10^{51}\mbox{erg}\,\mbox{s}^{-1}$. The remaining $20\%$ seem to
be provided by the inner part of the disk -- this explains the rise of
jet power between the event horizon and $r\simeq10r_g$. Indeed,
careful examination of the solution shows that some magnetic field
lines enter the jet from the skin layers of the disk with
$\rho>10^8\mbox{g}\,\mbox{cm}^{-3}$. However, it remains to be shown
that this is not caused by the numerical diffusion of magnetic flux
from the funnel into the disk. The right panel of fig.\ref{f3} shows
the distributions of Poynting flux and hydrodynamic energy flux
(including the rest mass-energy) across the horizon and allows us to
determine whether it is the Blandford-Znajek or the MHD-Penrose
mechanism \cite{PC90,Pun01,KSKM02} or both of them that provide the
energy supply to the jets. Since the hydrodynamic flux is everywhere
negative, the MHD-Penrose mechanism can be ruled out with
certainty. This is confirmed by the fact that the hydrodynamic
energy-at-infinity is positive everywhere inside the ergosphere. Thus
the jet is powered by the Blandford-Znajek mechanism. For a force-free
monopole magnetosphere the Blandford-Znajek power is given by
\begin{figure*}
\includegraphics[width=57mm,angle=0]{figures/massf.png}
\includegraphics[width=57mm,angle=0]{figures/enf.png}
\includegraphics[width=57mm,angle=0]{figures/horf.png}
\caption{Left panel: the integral baryonic mass flux in units
$M_{\sun}\mbox{s}^{-1}$ as a function of spherical radius; Middle
panel: the integral fluxes of total energy (solid line),
electromagnetic energy (dashed line), and hydrodynamic energy (dotted
line); Right panel: the energy flux densities at the event horizon for
electromagnetic energy (solid line) and hydrodynamic (matter)
energy. Time $t=0.45$s.}
\label{f3}
\end{figure*}
\begin{equation}
\dot{E}\sub{BZ}=\frac{1}{6c}\left(\frac{\Omega_h\Psi}{4\pi}\right)^2,
\end{equation}
where $\Omega_h$ is the angular velocity of the BH and $\Psi$ is the
magnetic flux threading the BH. In the derivation we assumed that the
angular velocity of the magnetosphere is $\Omega =0.5\Omega_h$. This holds
well even for rapidly rotating BHs with monopole magnetospheres
\cite{K01} and corresponds to the mean value of $\Omega$ measured in
our simulations as well. Using the measured value of $\Psi$ we derive
$\dot{E}\sub{BZ}\simeq 2.6\times10^{51}\mbox{erg}\,\mbox{s}^{-1}$
which agrees quite well with the value of $\dot{E}$ provided
by fig.\ref{f3}. The total amount of free energy-at-infinity in the
bow shock and the bubble at time $t=0.45$s is $E\simeq 1.37\times
10^{51}\mbox{erg}$. Since the explosion only develops at $t=0.24$s, the
mean jet power over the active period is $\langle\dot{E}\rangle\simeq
6\times10^{51}\mbox{erg}\,\mbox{s}^{-1}$, indicating a higher jet
power at the early stages of the explosion.
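The mean-power figure quoted here follows from simple arithmetic; a minimal check (ours) using the values stated above:

```python
# Mean jet power over the active period, from the values quoted in the text.
E_free = 1.37e51       # free energy-at-infinity in the bow shock and bubble [erg]
t_snapshot = 0.45      # time of the snapshot [s]
t_onset = 0.24         # time at which the explosion develops [s]

mean_power = E_free / (t_snapshot - t_onset)
print(f"<E_dot> ~ {mean_power:.2e} erg/s")   # ~6.5e51, i.e. the quoted ~6e51 erg/s
```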
The middle panel of fig.\ref{f3} also shows that initially the jets
are Poynting-dominated but gradually the electromagnetic energy is
converted into the energy of matter. However, the accuracy of our
simulations is insufficient to fully capture the jet dynamics. First of
all, we are forced to keep the flow magnetization below the limit at
which the code crashes -- this is done via artificial injection of
plasma in the affected cells. This reduces the length scale for the
energy conversion via magnetic acceleration of plasma, as well as the
asymptotic Lorentz factor. The numerical mass diffusion into the jets
from the disk corona further exacerbates this problem. Finally,
numerical resistivity causes dissipation of the jet electric
current. Due to the mass diffusion and numerical viscosity the jets
never become ultrarelativistic -- their Lorentz factor rarely exceeds
$\Gamma=3$. On the other hand, since the total energy is conserved, we do
not expect these numerical problems to have a strong effect on the
dynamics of the bow shock and the bubble inflated by the jets.
\section{Conclusions}
Our results provide strong support to the idea that magnetic fields
can play a crucial role in driving powerful GRB jets and associated
stellar explosions not only in the magnetar model but also in the
collapsar model. The main energy source for the jets and explosions in
our simulations is the rotational energy of the black hole, and it is
released via the Blandford-Znajek mechanism. The measured rate of
energy release, $\dot{E} \ge 2\times10^{51}\mbox{erg}\,\mbox{s}^{-1}$,
can explain the energetics of even the shortest of long duration GRBs.
The fact that the rotational energy of the black hole,
$E\sub{bh}\simeq\mbox{few}\times10^{53}\mbox{erg}$, exceeds the
typical explosion values derived from observations, $E\simeq
10^{52}\mbox{erg}$, suggests a self-regulating process in which the
black hole activity ceases when the blast wave terminates further mass
supply to the accretion disk. The full details of the simulations
together with the results of parameter study will be presented
elsewhere.
\section*{Acknowledgments}
We thank G.S.Bisnovatyi-Kogan for helpful discussions of this problem.
This research was funded by PPARC under the rolling grant
``Theoretical Astrophysics in Leeds''.
\section{Introduction}
\label{sec:intro}
\label{sec:motivation}
Ergodic theory~\cite{Sinai1976, SinaiCornfield, HalmosErgodic} concerns itself with a study of the statistical properties of classical dynamical systems, centered around a mathematically precise classification of dynamics into different levels of randomness called the ergodic hierarchy~\cite{PlatoErgodic, Ott} (see Fig.~\ref{fig:ErgodicHierarchy}). These levels, such as ergodic, mixing, K-mixing and others~\cite{HalmosErgodic, Sinai1976, SinaiCornfield, PlatoErgodic, Ott} (in order of increasing randomness, discussed in more detail in Sec.~\ref{sec:cl_ErgodicHierarchy}), can be used to motivate different elements of classical statistical mechanics~\cite{PlatoErgodic, Ott}: ergodicity justifies the use of the microcanonical ensemble, and mixing the approach to thermal equilibrium, while K-mixing is responsible for chaotic dynamics.
\begin{figure}[!hbt]
\centering
\includegraphics[scale = 0.4]{ErgodicHierarchyFig/ErgHier3.pdf}
\caption{\small The Ergodic Hierarchy (according to e.g. Ref.~\cite{PlatoErgodic}). $\lambda$ indicates the maximal Lyapunov exponent, whose nonzero value is a defining signature of chaos~\cite{PlatoErgodic, Ott}. The Bernoulli level has more randomness than the K-mixing level, but isn't directly relevant for this work. (Reused from Ref.~\cite{EfimPolygon})}
\label{fig:ErgodicHierarchy}
\end{figure}
In contrast, our present understanding of quantum statistical mechanics is founded on a much less precise, but empirically successful, connection to the statistical properties of random matrices~\cite{Haake, Mehta}. Direct contact with the thermalization of observables is made through a comparison of the energy eigenstates (or eigenvectors) of a system with random eigenvectors, via the Eigenstate Thermalization Hypothesis (ETH)~\cite{deutsch1991eth, srednicki1994eth, srednicki1999eth, rigol2008eth, DAlessio2016, deutsch2018eth, subETH} and related approaches~\cite{CanonicalTypicalityPSW, tumulka_CT, NormalTypicality, ReimannRealistic, ShortEqb, WuErgMix}.
While ETH has some basic mathematical backing from canonical typicality~\cite{CanonicalTypicalityPSW, tumulka_CT}, and shows some resemblance to ergodicity and mixing for local observables~\cite{DAlessio2016}, much remains unclear about the physical mechanism through which ETH arises in individual systems, for precisely which observables in which kinds of systems, and whether it is the only mode of quantum thermalization~\cite{deutsch2018eth, DAlessio2016}.
Such observable-dependent ambiguities are avoided in the comparison of the statistics of energy eigenvalues (i.e. level statistics) of a system with those of random matrices, at the apparent cost of direct dynamical relevance to thermalization. This approach is based on the observation that on quantization, typical classically non-ergodic systems show highly fluctuating energy spectra with Poisson (locally uncorrelated) level statistics~\cite{BerryTabor}, while classically chaotic systems show rigid spectra with the local level statistics of Wigner-Dyson random matrices (after appropriately accounting for symmetries)~\cite{DKchaos, CGV, BerryStadium, BGS}. A semiclassical ``periodic orbit'' argument for Wigner-Dyson level statistics soon followed~\cite{HOdA, BerrySpectralRigidity} (with further developments in e.g. Refs.~\cite{SieberRichter, HaakePO, HaakePO2, RichterReview}), assuming the dominance of isolated periodic orbits in semiclassical contributions to level statistics, a certain uniform distribution of these periodic orbits, and a K-mixing classical system~\cite{Haake}. However, extremely recent numerical studies of systems without K-mixing show that Wigner-Dyson level statistics can emerge even on quantization of merely mixing~\cite{Lozej2021} or merely ergodic~\cite{CasatiWang} classical systems. At minimum, this merits a theoretical explanation of spectral rigidity that connects to the classical limit but does not rely on K-mixing.
Similar trends of spectral rigidity have been observed analytically and numerically in fully quantum many-body systems~\cite{BlackHoleRandomMatrix, ShenkerThouless, KosProsen2018, ChanScrambling, ChanExtended, BertiniProsen, SSS, ExpRamp3, ExpRamp4, LiaoCircuits, DAlessio2016} (along with a number of intermediate cases~\cite{ExpRamp1, ExpRamp2, WinerSpinGlass}) which do not necessarily have a classical limit, where in the absence of a precise classification, judgements of the chaoticity of a system have been largely based on intuition.
At the same time, correlation functions of \textit{local} observables have been rigorously characterized, in a manner similar to the ergodic hierarchy, in the specific case of dual-unitary quantum circuits~\cite{ProsenErgodic, ArulCircuit} --- but without any apparent link to level statistics. In all these cases, it remains unclear if there are direct observable consequences of level statistics in time evolution, with the exception of a small correction (the ``ramp''~\cite{BlackHoleRandomMatrix}) to some correlation functions at late times~\cite{SSS, ThoulessRelaxation}, as well as protocols designed specifically to measure spectral rigidity~\cite{SFFmeas, pSFF}.
In this work, we show that level statistics does have a precise role in determining a quantum version of ergodicity in the time domain, if one considers not local observables, but dynamical structures --- cyclic permutations --- in the Hilbert space of an individual system. This quantum notion is a natural ``quantization'' of a discrete version of classical ergodicity i.e. cyclic ergodicity, that can be rigorously defined in terms of cyclic permutations in classical ergodic theory, but does not rely on a classical limit. Conversely, our results strongly suggest that cyclic ergodicity underlies ergodicity in any classical dynamical system that can be quantized. We provide analytical and numerical evidence for the applicability of this picture to Wigner-Dyson level statistics which has been near-universally seen in quantum ``chaotic'' systems, as well as the spectral rigidity of Kolmogorov-Arnold-Moser (KAM) tori --- classically ergodic systems (depending on parameters) that possess neither periodic orbits nor K-mixing and do not show Wigner-Dyson level statistics on quantization.
The rest of this paper is organized as follows. Sec.~\ref{sec:ErgodicReview} reviews some necessary aspects of classical ergodic theory, including the use of cyclic permutations to ``discretize'' a dynamical system~\cite{KatokStepin1, KatokStepin2, SinaiCornfield, Sinai1976}, and defines cyclic ergodicity and cyclic aperiodicity as discrete, primitive forms of ergodicity and mixing that can be extended to quantum mechanics. Sec.~\ref{sec:quantumcyclic} defines their analogues in quantum mechanics, and proves that cyclic ergodicity and aperiodicity are directly determined by a specific measure of level statistics and spectral rigidity (namely, mode fluctuations~\cite{Lozej2021, Aurich1, Aurich2}).
While our derivations up to this point are rigorous, the subsequent sections discuss the application of these results to physical systems using a combination of analytical and numerical arguments. Sec.~\ref{sec:TypicalCyclic} considers the typical time evolution of cyclic permutations, and provides detailed evidence that quantum systems with Wigner-Dyson spectra are ergodic and aperiodic by this definition, while those with Poisson spectra are not. Sec.~\ref{sec:classical} argues that quantum cyclic permutations may be identified with classical cyclic permutations in phase space for systems with a classical limit, and provides primarily numerical evidence that the spectra of quantized KAM tori satisfy cyclic ergodicity, suggesting that the latter is a genuine quantum version of classical ergodicity even where random matrix theory is not applicable. Finally, Sec.~\ref{sec:discussion} discusses some insights about thermalization in quantum systems that may be gained from cyclic permutations, in a largely semi-qualitative manner that may motivate future rigorous work.
\section{A short review of classical ergodic theory}
\label{sec:ErgodicReview}
In classical ergodic theory~\cite{HalmosErgodic, Sinai1976, SinaiCornfield, PlatoErgodic}, one is concerned with dynamics on a phase space (or a smaller region of interest) $\mathcal{P}$, with an operator $\mathcal{T}^t:\mathcal{P}\to \mathcal{P}$ that evolves points in the space by time $t$ (which may be a continuous or discrete variable, corresponding to flows or maps). The main questions of interest are which regions of phase space are explored over time by an initial point, and how rapidly a typical point explores these regions.
These questions are conveniently posed when there is a measure $\mu(A) \geq 0$ defined for subsets $A \subseteq \mathcal{P}$ that is preserved by time evolution, $\mu(\mathcal{T}^t A) = \mu(A)$ (in Hamiltonian dynamics, this measure is given by the phase space volume $\int_A \diff^n q \diff^n p$). An important feature of such systems is guaranteed by the \textit{Poincar\'{e} recurrence theorem}~\cite{Sinai1976, HalmosErgodic, SinaiCornfield}: for any $A \subseteq \mathcal{P}$ such that $\mu(A) > 0$, almost every point in $A$ eventually returns to $A$, each within some (long) finite time (i.e. with the exceptions forming a set of measure zero).
Given such a measure, how well an initial point explores the phase space is generally expressed through correlation functions of various sets, the behavior of which is classified into the ergodic hierarchy~\cite{Ott, PlatoErgodic}. In what follows, we normalize the measure so that $\mu(\mathcal{P}) = 1$.
\subsection{The classical ergodic hierarchy}
\label{sec:cl_ErgodicHierarchy}
We first ask whether almost all initial points explore every region of nonzero measure in $\mathcal{P}$. If so, the dynamics is said to be \textit{ergodic} in $\mathcal{P}$. If not, $\mathcal{P}$ can be decomposed into (say) $M$ subsets that are invariant under $\mathcal{T}$, i.e. $\mathcal{P} = \bigcup_{j=1}^{M}\mathcal{P}_j$ (each with a measure induced by $\mu$), such that the dynamics is ergodic within each $\mathcal{P}_j$. In terms of correlation functions, ergodicity in $\mathcal{P}$ is expressed~\cite{HalmosErgodic,Sinai1976, SinaiCornfield, PlatoErgodic, Ott} as the following condition:
\begin{equation}
\lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T}\diff t\ \mu\left[(\mathcal{T}^t A)\cap B\right] = \mu(A)\mu(B),\ \forall\ A,B \subseteq \mathcal{P}.
\label{eq:cl_ergodic}
\end{equation}
Here, we use $\diff t$ either as a continuous integration measure or that corresponding to a discrete sum, depending on the domain of $t$.
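As an aside (not part of the original discussion), the ergodicity condition above can be checked numerically for the simplest ergodic system, the irrational rotation of the circle; the rotation angle and the sets $A$, $B$ below are purely illustrative choices:

```python
import numpy as np

# Irrational circle rotation T^t x = (x + t*alpha) mod 1, with Lebesgue measure.
# Ergodicity: the time average of mu[(T^t A) ∩ B] tends to mu(A)*mu(B).
alpha = (np.sqrt(5.0) - 1.0) / 2.0          # golden-ratio angle (irrational)
grid = (np.arange(20000) + 0.5) / 20000     # fine grid approximating the measure
in_A = grid < 0.5                           # A = [0, 0.5), mu(A) = 0.5
T_max = 5000

overlaps = []
for t in range(T_max):                      # a one-sided average suffices here
    image = (grid + t * alpha) % 1.0        # T^t applied to each grid point
    # mu[(T^t A) ∩ B] with B = [0, 0.3): fraction starting in A, landing in B
    overlaps.append(np.mean(in_A & (image < 0.3)))

time_avg = np.mean(overlaps)
print(time_avg)                             # -> close to mu(A)*mu(B) = 0.15
```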
\textit{Mixing} is a property of time evolution eventually becoming uncorrelated with initial conditions, and represents how rapidly typical points explore a phase space region $\mathcal{P}$ on which time evolution is ergodic. The simplest such criterion is expressed in terms of two element correlation functions~\cite{HalmosErgodic,Sinai1976, SinaiCornfield, PlatoErgodic, Ott},
\begin{equation}
\lim_{t\to\infty}\ \mu\left[(\mathcal{T}^t A)\cap B\right] = \mu(A)\mu(B),\ \forall\ A,B \subseteq \mathcal{P},
\label{eq:cl_mixing}
\end{equation}
and is conventionally merely called mixing (with two variants, weakly mixing and strongly mixing, depending on whether the limit converges with measure zero exceptions in $t$, or exactly~\cite{HalmosErgodic, PlatoErgodic, Ott}). This can be extended to higher order correlation functions~\cite{rokhlin1967}, and the dynamics is said to be \textit{K-mixing} when all higher order correlation functions become uncorrelated in the above sense. These criteria form a hierarchy in the sense that K-mixing implies mixing, which implies ergodicity~\cite{PlatoErgodic, Ott}. Additional levels of randomness may also be considered~\cite{PlatoErgodic, Ott}; see Fig.~\ref{fig:ErgodicHierarchy} for a depiction of the hierarchy of Ref.~\cite{PlatoErgodic}.
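Mixing, by contrast, requires the overlap itself (not just its time average) to converge; a quick illustration (ours) with the doubling map, a standard strongly mixing system, and illustrative sets:

```python
import numpy as np

# Doubling map T(x) = 2x mod 1 preserves Lebesgue measure and is mixing:
# mu[(T^t A) ∩ B] -> mu(A)*mu(B) as t grows, with no time averaging.
rng = np.random.default_rng(0)
N = 400000
x = 0.5 * rng.random(N)          # points sampled uniformly from A = [0, 0.5)
mu_A, mu_B = 0.5, 0.3

for t in range(25):              # modest t: doubles lose one mantissa bit per step
    x = (2.0 * x) % 1.0

# mu[(T^t A) ∩ B] = mu(A) * P(T^t x in B | x uniform in A), B = [0, 0.3)
overlap = mu_A * np.mean(x < mu_B)
print(overlap)                   # -> close to mu(A)*mu(B) = 0.15
```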
It is interesting to note that if one defines a unitary operator $U_{\mathcal{T}}$ induced by $\mathcal{T}$ on the space of functions $L_2(\mathcal{P})$ on the phase space (Koopman and von Neumann's Hilbert space representation of classical mechanics~\cite{HalmosErgodic, Sinai1976, SinaiCornfield, KatokStepin2, KatokSinaiStepin, Nadkarni}), some of these properties can be translated to those of the eigenvalues and eigenfunctions of $U_{\mathcal{T}}$, whose direct extensions to quantum mechanics have been previously considered~\cite{CastagninoErgodic1}. For instance, ergodicity translates to non-degenerate eigenvalues with eigenfunctions of uniform magnitude, and weak mixing to a continuous spectrum with no non-constant eigenfunction, of $U_{\mathcal{T}}$~\cite{HalmosErgodic}. For a discrete quantum spectrum corresponding to phase spaces or energy shells of finite measure by Weyl's law~\cite{Haake}, the eigenvalues are almost always non-degenerate (i.e. are non-degenerate or can be made so by infinitesimal perturbations) and the spectrum is necessarily discrete, prompting us to seek alternate avenues in which the above properties are at best emergent in the classical limit.
\subsection{Discretizing ergodicity with cyclic permutations}
\label{sec:cl_cyclic}
We eventually want to understand how quantum mechanics with its discrete set of energy levels can lead to ergodic and mixing behaviors, defined classically for continuous systems. A useful bridge between continuum and discrete descriptions is offered by the technique of discretizing an arbitrary dynamical system with cyclic permutations, which have been studied in Refs.~\cite{KatokStepin1, KatokStepin2} (see also Refs.~\cite{Sinai1976, SinaiCornfield, KatokSinaiStepin, Nadkarni} for reviews and related results). Here, we discuss and adapt the elements of this framework that are most relevant for our purposes, following Ref.~\cite{KatokStepin2}.
Let $\mathtt{C}= \lbrace C_k\rbrace_{k=0}^{n-1}$ be a decomposition of the phase space $\mathcal{P}$ into a large number of $\mu$-disjoint (i.e. with measure zero intersection) closed sets of identical measure, $\mu(C_k) = 1/n$, with a well-defined $n\to\infty$ limiting procedure. Introduce a time evolution operator $\mathcal{T}_C$ on $\mathcal{P}$, which cycles the elements of the decomposition, $\mathcal{T}_C C_k = C_{k+1}$ (with $(n-1)+1 \equiv 0$ i.e. the addition is modulo $n$). As a measure of how well $\mathcal{T}_C$ approximates $\mathcal{T}^{t_0}$ for some $t_0$, define the error of the permutation (differing from that in Ref.~\cite{KatokStepin2} by the factor of $1/2$):
\begin{equation}
\overline{\epsilon}_{C}(t_0) = \frac{1}{2}\sum_{k=0}^{n-1}\ \mu\left[(\mathcal{T}^{t_0}C_k) \symdiff C_{k+1}\right],
\label{eq:cl_cyc_err}
\end{equation}
where $A \symdiff B = (A \cup B) - (A\cap B)$.
We will often drop $\mathcal{T}_C$ and directly call $\mathtt{C}$ the cyclic permutation for brevity, as any $\mathcal{T}_C$ that cyclically permutes the elements of $\mathtt{C}$ has the same error. We also note that $t_0$ could implicitly depend on $n$, in particular for a flow with continuous time. A schematic is depicted in Fig.~\ref{fig:cycpermutation1}.
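To make the error concrete, here is an illustrative computation (our own, not from Ref.~\cite{KatokStepin2}) of $\overline{\epsilon}_{C}(t_0)$ for a circle rotation discretized into $n$ equal arcs; for a rotation by $1/n + \delta$ the error comes out as $n\lvert\delta\rvert$:

```python
import numpy as np

def cyclic_error(n, beta, grid_pts=200000):
    """Error eps_C(t0) of the cyclic permutation C_k = [k/n, (k+1)/n) against
    the rotation T x = (x + beta) mod 1, estimated on a fine grid.
    Since the measure is preserved, the sum of symmetric differences equals
    twice the fraction of points whose image misses the 'next' arc, and this
    factor of two cancels the overall 1/2 in the definition."""
    grid = (np.arange(grid_pts) + 0.5) / grid_pts
    labels = np.floor(grid * n).astype(int)               # arc index of each point
    image_labels = np.floor(((grid + beta) % 1.0) * n).astype(int)
    return np.mean(image_labels != (labels + 1) % n)

n = 100
print(cyclic_error(n, 1.0 / n))            # exact cycling: error 0
print(cyclic_error(n, 1.0 / n + 0.002))    # error = n*|delta| = 0.2
```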
\begin{figure}[!hbt]
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[scale=0.35]{CyclicPermutationFigs/PhaseSpacePartitionv3.pdf}
\caption{\small Partitioning of phase space into $\lbrace C_k\rbrace_{k=0}^{10-1}$.}
\label{fig:phasespacepartition}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[scale=0.35]{CyclicPermutationFigs/PhaseSpaceErrorv4.pdf}
\caption{\small Contribution to error from $(\mathcal{T}^{t_0} C_0) \symdiff C_1$ (shaded region).}
\label{fig:phasespaceerror}
\end{subfigure}
\caption{\small Schematic depiction of an $(n=10)$-element cyclic permutation for some phase space $\mathcal{P}$ (interior of ellipse).}
\label{fig:cycpermutation1}
\end{figure}
The error $\overline{\epsilon}_C(t_0)$ can serve as a probe of ergodicity. In particular, Ref.~\cite{KatokStepin2} shows that with an additional assumption, the error provides a bound on the number $M$ of subsets $\mathcal{P}_j \subseteq \mathcal{P}$ that are ergodic with respect to $\mathcal{T}$, in the $n\to\infty$ limit. In Appendix~\ref{app:cl_erg_errors}, we recount a version of this argument without the additional assumption, where the bound is on the number $M_C$ of the $\mathcal{P}_j$ that completely contain at least one of the $C_k$. The essence of the argument is that $\mathcal{T}_C$ is ergodic on $\mathtt{C}$, which is a coarse graining of $\mathcal{P}$; the only way $\mathcal{T}$ can avoid being ergodic on this coarse graining is if it is sufficiently different from $\mathcal{T}_C$. In fact, one obtains the precise bound $\overline{\epsilon}_C(t_0) \geq M_C/n$ for $M_C \geq 2$.
The bound is to be interpreted as follows: the existence of any $n$-element cyclic permutation $\mathtt{C}$ that approximates $\mathcal{T}^{t_0}$ with $\overline{\epsilon}_C(t_0) < 2/n$, implies that none of the $C_k$ are contained inside any invariant subset of $\mathcal{P}$ (other than $\mathcal{P}$ itself). Motivated by this property, we define the property of ``cyclic ergodicity'' of a cyclic permutation.
\begin{definition}[\textbf{Classical cyclic ergodicity}]
A cyclic permutation $\mathtt{C}$ shows cyclic ergodicity iff any element $C_j \in \mathtt{C}$ sequentially intersects a non-vanishing fraction of every other $C_k \in \mathtt{C}$ at least once under (future and past) time evolution:
\begin{equation}
n\mu\left[(\mathcal{T}^{p t_0} C_j)\cap C_{j+p}\right] > 0 \text{ as } n\to \infty,\text{ for all } j \text{ and }\lvert p\rvert \leq \frac{n}{2},
\label{eq:cl_cyc_erg_def}
\end{equation}
where $p$ represents the number of integer steps of time evolution in units of $t_0$.
\end{definition}
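As a sanity check (illustrative, not from the text), the condition Eq.~\eqref{eq:cl_cyc_erg_def} is satisfied trivially by a circle rotation with $t_0 = 1/n$, for which $\mathcal{T}^{p t_0}$ maps each $C_j$ exactly onto $C_{j+p}$:

```python
import numpy as np

# Rotation T x = (x + t0) mod 1 with t0 = 1/n cycles the arcs C_k = [k/n,(k+1)/n)
# exactly, so n*mu[(T^{p t0} C_j) ∩ C_{j+p}] = 1 for every step p.
n = 50
t0 = 1.0 / n
grid = (np.arange(100000) + 0.5) / 100000
j = 0
in_Cj = np.floor(grid * n).astype(int) == j

overlaps = []
for p in range(-n // 2, n // 2 + 1):
    image = (grid + p * t0) % 1.0
    in_target = np.floor(image * n).astype(int) == (j + p) % n
    overlaps.append(n * np.mean(in_Cj & in_target))  # n*mu[(T^{p t0} C_j) ∩ C_{j+p}]

print(min(overlaps))   # -> 1.0: every arc is visited in sequence
```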
Cyclic ergodicity implies ergodicity in the sense of Eq.~\eqref{eq:cl_ergodic} if we further require that all invariant subsets of $\mathcal{P}$ must contain at least one $C_k$, which is the additional assumption imposed in Ref.~\cite{KatokStepin2}. In such cases, the index $k$ of $C_k$ can be loosely thought of as an approximate time coordinate in $\mathcal{P}$ (when $n\to\infty$). Cyclic ergodicity and non-ergodicity are depicted in Fig.~\ref{fig:cycpermutation2}.
\begin{figure}[!hbt]
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[scale=0.35]{CyclicPermutationFigs/PhaseSpaceErgodicAperiodicv3.pdf}
\caption{\small Cyclic ergodicity.}
\label{fig:phasespaceergodicity}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[scale=0.35]{CyclicPermutationFigs/PhaseSpaceNonErgodicv3.pdf}
\caption{\small Non-ergodicity.}
\label{fig:phasespacenonergodicity}
\end{subfigure}
\caption{\small Schematic depiction of cyclic ergodicity and non-ergodicity for the cyclic permutation of Fig.~\ref{fig:cycpermutation1}. The trajectory may be thought of as the future and past history of the center of $C_0$, for $5$ steps of $t_0$ each (arrows indicate the forward flow of time).}
\label{fig:cycpermutation2}
\end{figure}
It is also useful to define the ``cyclic aperiodicity'' of a cyclic permutation.
\begin{definition}[\textbf{Classical cyclic aperiodicity}]
A cyclic permutation $\mathtt{C}$ shows cyclic aperiodicity iff $\mathcal{T}^t C_k$ never returns to intersect a non-vanishing fraction of $C_k$ for any $t$ satisfying $t_0 \ll t \lesssim O(n t_0)$:
\begin{equation}
n \mu\left[(\mathcal{T}^{t} C_j)\cap C_{j}\right] = 0\ \text{as}\ n\to \infty, \text{ for all } j \text{ and } t_0 \ll t \lesssim O(t_0 n).
\label{eq:cl_cyc_apd_def}
\end{equation}
\end{definition}
This can be shown along similar lines~\cite{KatokStepin2} (see Appendix~\ref{app:cl_erg_errors} for a review) to imply $\overline{\epsilon}_C(t_0) \geq 1/n$. Cyclic aperiodicity is a necessary condition for mixing in the $n\to\infty$ limit~\cite{KatokStepin2} (albeit with a subtlety in the order of limits, requiring $n\to\infty$ faster than $t\to\infty$ in Eq.~\eqref{eq:cl_mixing}). More generally, the existence of an $n$-element cyclic permutation with $\overline{\epsilon}_C < 1/n$ rules out mixing in the sense of Eq.~\eqref{eq:cl_mixing} at least until times $t > nt_0$.
The above statements connect properties of discretized classical dynamics to levels of the ergodic hierarchy. The next section aims to find parallels to these properties in quantum mechanics.
\section{Dynamical quantum ergodicity and cyclic permutations}
\label{sec:quantumcyclic}
Let $\hat{U}_H(t)$ be the unitary time evolution operator, with $D$ (possibly nonunique) eigenstates $\left(\lvert E_n\rangle\right)_{n=0}^{D-1}$ and $D$ (correspondingly, possibly degenerate) eigenvalues $e^{-iE_n t}$:
\begin{equation}
\hat{U}_H(t)\lvert E_n\rangle = e^{-i E_n t}\lvert E_n\rangle.
\end{equation}
The time variable $t$ can be chosen to be continuous or discrete, with $E_n$ respectively corresponding to the eigenvalues of a Hamiltonian or eigenphases of a Floquet map. Without loss of generality, we will use terminology associated with Hamiltonians in what follows.
The time evolution of an arbitrary (normalized) state $\lvert \psi(t)\rangle$ preserves the overlaps $\lvert\langle E_n\vert\psi(t)\rangle\rvert^2$. Thus, if the Hilbert space $\mathcal{H}$ of states is interpreted as a phase space in the sense of classical ergodic theory, it always decomposes~\cite{DAlessio2016, deutsch2018eth} into a continuum of ergodic sectors, which are subsets of the invariant regions $\mathcal{H}(r)$ in $\mathcal{H}$ with definite values of the tuple $(r_n) = \left( \lvert\langle E_n\vert\psi\rangle\rvert^2\right)_{n=1}^D$. However, none of the $\mathcal{H}(r)$ themselves form subspaces, and consequently, they do not offer sufficient structure to consider superpositions and projective measurements.
\subsection{Pure state cyclic permutations for quantum dynamics}
\label{sec:qm_cyclic}
More useful notions of ergodicity can be defined if one attempts to construct a primitive version of a classical dynamical system with suitable primitive properties within the Hilbert space. We will see that the construction of pure state cyclic permutations, in analogy with classical cyclic permutations, provides one way to achieve this goal; in particular, quantum versions of cyclic ergodicity and cyclic aperiodicity (Eqs.~\eqref{eq:cl_cyc_erg_def} and \eqref{eq:cl_cyc_apd_def}) can then be defined naturally.
We work in an invariant subspace $\Eshell{d} \subseteq \mathcal{H}$
(an `energy subspace') spanned by any subset of suitably relabeled eigenstates $\left(\lvert E_n\rangle\right)_{n=0}^{d-1}$. This subspace contains several of the invariant regions $\mathcal{H}(r) = \Eshell{d}(r) \subset \Eshell{d}$, which we will call subshells. Among these, the unbiased subshell $\Eshell{d}(\overline{\ptup})$ with $\overline{\ptup}_n = 1/d$ is unique in containing entire orthonormal bases for $\Eshell{d}$, while no other subshell contains even a single orthonormal basis.
We seek cyclic permutations that approximate $\hat{U}_H(t)$ within this energy subspace. To this end, let $\mathcal{C} = \lbrace\lvert C_k\rangle\rbrace_{k=0}^{d-1}$ be an orthonormal basis spanning $\Eshell{d}$ with cycling operator $\hat{U}_C\lvert C_k\rangle = \lvert C_{k+1}\rangle$. The eigenvalues of $\hat{U}_C$ are necessarily distinct $d$-th roots of unity, $\lbrace\exp(-2\pi i n/d)\rbrace_{n=0}^{d-1}$. It is convenient to introduce the $p$-step persistence amplitudes (relative to the action of $\hat{U}_C^p$),
\begin{equation}
z_k(p; t_0) = \left\lvert \langle C_{k+p}\rvert \hat{U}_H(p t_0)\lvert C_k\rangle\right\rvert,
\end{equation}
for some choice of $t_0$; these satisfy $z_k(p; t_0) \in [0,1]$.
Then, we say that $\hat{U}_C$ approximates $\hat{U}_H(t_0)$ with $p$-step error
\begin{equation}
\varepsilon_C(p; t_0) = 1-\left(\min_{k \in \mathbb{Z}_d} z_k(p; t_0)\right)^2.
\end{equation}
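As an illustrative aside, these definitions are easy to evaluate numerically: in the energy eigenbasis, $\hat{U}_H(t)$ acts diagonally through the phases $e^{-iE_n t}$, so the persistence amplitudes reduce to phase sums. The following minimal Python sketch is our own (the function names, the commensurate ``picket fence'' spectrum, and the choice $d=8$ are illustrative assumptions, not part of the formal development):

```python
# Illustrative sketch: persistence amplitudes z_k(p; t0) and the p-step
# error eps_C(p; t0) of a candidate cyclic basis, computed in the energy
# eigenbasis where U_H(t) acts diagonally with phases exp(-i * E_n * t).
import cmath
import math

def persistence_amplitudes(basis, energies, p, t0):
    """Return [z_k(p; t0)]_k with z_k = |<C_{k+p}| U_H(p*t0) |C_k>|.
    `basis[k][n]` is the component of |C_k> along the eigenstate |E_n>."""
    d = len(basis)
    zs = []
    for k in range(d):
        amp = sum(basis[(k + p) % d][n].conjugate()
                  * cmath.exp(-1j * energies[n] * p * t0)
                  * basis[k][n]
                  for n in range(d))
        zs.append(abs(amp))
    return zs

def p_step_error(basis, energies, p, t0):
    """eps_C(p; t0) = 1 - (min_k z_k(p; t0))**2."""
    return 1.0 - min(persistence_amplitudes(basis, energies, p, t0)) ** 2

# Example: a perfectly rigid "picket fence" spectrum E_n = 2*pi*n/(d*t0)
# admits a cyclic basis (a discrete Fourier transform of the eigenstates)
# that reproduces U_H(t0) exactly, so every p-step error vanishes.
d, t0 = 8, 1.0
energies = [2 * math.pi * n / (d * t0) for n in range(d)]
basis = [[cmath.exp(-2j * math.pi * n * k / d) / math.sqrt(d)
          for n in range(d)] for k in range(d)]
print(p_step_error(basis, energies, 1, t0))  # ~ 0 (up to rounding)
```

A generic, incommensurate spectrum instead yields nonzero errors for every candidate basis.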
A pure state approximation scheme for unitaries has been constructed in Ref.~\cite{Nadkarni}, in analogy to certain classical non-cyclic transformations (which are indirectly related to classical cyclic permutations~\cite{ChaconApproximation}), to formalize results on e.g. the degeneracy of $U_{\mathcal{T}}$ in classical ergodic theory~\cite{Sinai1976, HalmosErgodic, ChaconApproximation, KatokStepin2} (cf. Sec.~\ref{sec:cl_ErgodicHierarchy}). As we will see in Sec.~\ref{sec:DFTsAreOptimal}, the construction of pure state cyclic permutations as above allows us to go much further, and tackle non-trivial measures of the level statistics of $\hat{U}_H(t)$ that can e.g. meaningfully distinguish between Wigner-Dyson and Poisson statistics.
In analogy with the definitions for classical cyclic permutations (Eqs.~\eqref{eq:cl_cyc_erg_def} and \eqref{eq:cl_cyc_apd_def}), we can define cyclic ergodicity and cyclic aperiodicity for these pure state quantum cyclic permutations (see Fig.~\ref{fig:qcycpermutation} for a schematic depiction, and Fig.~\ref{fig:qcycpermutationExact} in Sec.~\ref{sec:modefluctuations} for examples with exact numerical data).
\begin{definition}[\textbf{Quantum cyclic ergodicity}]
A pure state quantum cyclic permutation $\mathcal{C}$ shows cyclic ergodicity iff
\begin{equation}
\left\lvert \langle C_{k+p}\rvert \hat{U}_H(p t_0)\lvert C_k\rangle\right\rvert^2 \gg O(d^{-1}) \text{ for all } k \text{ and } \lvert p\rvert \leq d/2,
\label{eq:q_cyclic_ergodicity}
\end{equation}
ensuring that any initial state $\lvert C_k\rangle \in \mathcal{C}$ ``visits'' all the other elements of $\mathcal{C}$ sequentially with greater-than-random overlap at least once (including its future and past evolution).
\end{definition}
Cyclic ergodicity can be expressed concisely in terms of the $p$-step error,
\begin{equation}
1-\varepsilon_C(p;t_0) \gg O(d^{-1}) \text{ for all } \lvert p\rvert \leq d/2.
\label{eq:q_err_ergodicity}
\end{equation}
\begin{definition}[\textbf{Quantum cyclic aperiodicity}]
A pure state quantum cyclic permutation $\mathcal{C}$ shows cyclic aperiodicity iff
\begin{equation}
\left\lvert\langle C_{k}\rvert \hat{U}_H(t)\lvert C_k\rangle\right\rvert^2 \lesssim O(d^{-1}), \text{ for all } k \text{ and } t_0 \ll \lvert t\rvert \lesssim O(t_0 d),
\label{eq:q_cyclic_aperiodicity}
\end{equation}
\end{definition}
The restriction $t \lesssim O(t_0 d)$ is particularly important here. The quantum recurrence theorem (Poincar\'{e} recurrence for the flow of phases of vectors in the energy eigenbasis~\cite{QuantumRecurrences}) on the subshell containing $\lvert C_k\rangle$, guarantees that aperiodicity will eventually be violated after some time (but possibly only at times exponentially large in $d$; see e.g.~\cite{BrownSusskind2} for a related discussion of recurrences). This requires the errors at nonzero integer multiples of $d$ to be large,
\begin{equation}
1-\varepsilon_C(nd;t_0) \lesssim O(d^{-1}), \text{ for all } n \in \mathbb{Z}\setminus\lbrace 0\rbrace \text{ with } \lvert n\rvert \sim O(1).
\label{eq:q_err_aperiodicity}
\end{equation}
\begin{figure}[!hbt]
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[scale=0.35]{CyclicPermutationFigs/CyclicPermutationTrajectoryErgodicAperiodicv6.pdf}
\caption{\small Cyclic ergodicity and aperiodicity.}
\label{fig:quantumergodicity}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[scale=0.35]{CyclicPermutationFigs/CyclicPermutationTrajectoryNonergodicv6.pdf}
\caption{\small Non-ergodicity.}
\label{fig:quantumnonergodicity}
\end{subfigure}
\caption{\small Schematic depiction of cyclic ergodicity and non-ergodicity for a $(d=10)$-element quantum cyclic permutation, in a polar representation $(r,\theta) \in [0,1]\times [0,2\pi)$ of the corresponding $(d=10)$-dimensional Hilbert space $\Sigma_d$; the angular direction is parametrized as $\theta = 2\pi p/d$, and the radial coordinate is $r = f\left(\left\lvert \langle C_0\rvert (\hat{U}_C^\dagger)^p \lvert \psi\rangle\right\rvert\right)$ for any vector $\lvert \psi\rangle \in \Sigma_d$, where $f:[0,1]\to [0,1]$ is some monotonic function and $\hat{U}_C^p$ is extrapolated to non-integer $p$ in some convenient manner (e.g. connecting the $\lvert C_k\rangle$ along some smooth path). The basis vectors $\lvert C_k\rangle$ are depicted by arrows representing the corresponding axes. Each of (\ref{fig:quantumergodicity}) and (\ref{fig:quantumnonergodicity}) may be loosely regarded as a ``quantization'' of the respective classical versions in Fig.~\ref{fig:cycpermutation2}. The trajectory indicates the persistence amplitude $z_0(p,t_0)$ of the initial state $\lvert C_0\rangle$ ($1$ at the outermost boundary, $0$ at the center) in the radial direction up to $\lvert p \rvert = 5$ (visible trajectory) and beyond (``Haar-random persistence''). The region of ``Haar-random persistence'' refers to $z_0(p, t_0) = O(d^{-1/2})$, which includes all Haar random states by canonical typicality~\cite{tumulka_CT, CanonicalTypicalityPSW}. Consequently, this region has by far the largest (Haar) volume in the (unbiased subshell of the) Hilbert space $\Sigma_d$, while the depicted outer regions of non-random overlap with the $\lvert C_k\rangle$ together form a relatively tiny fraction of the space.}
\label{fig:qcycpermutation}
\end{figure}
These definitions involve an explicit cutoff scale $O(d^{-1})$ for the overlaps of states, which is the order of magnitude of the overlap of a typical random state with any given state~\cite{CanonicalTypicalityPSW, CanonicalTypicalityPSW2, tumulka_CT}. This choice of the cutoff will prove most convenient for our approach, especially in the context of random matrix theory.
Additionally, we will often drop the ``cyclic'' qualifier for ergodicity and aperiodicity in the remainder of this paper, when there is no ambiguity. It is also convenient to talk of quantum systems being ergodic or aperiodic in an energy subspace, according to the following definition. We note that this definition pertains to a dynamical (i.e. time-domain) version of ergodicity, and is distinct from the use of ``quantum ergodicity'' in the mathematical literature to refer to the delocalization of energy eigenstates~\cite{Zelditch, Anantharaman}.
\begin{definition}[\textbf{Ergodicity and aperiodicity of a quantum system}]
We call a quantum system (dynamically) \textbf{ergodic} in the energy subspace $\Eshell{d}$ within a time $T > 0$, if it admits at least one cyclic permutation in the subspace satisfying cyclic ergodicity for some $t_0$ with $t_0 d < T$. Similarly, the system is \textbf{aperiodic} in $\Eshell{d}$ within $T$ if no ergodic cyclic permutation in $\Eshell{d}$ violates aperiodicity for any choice of $t_0$ with $t_0 d < T$ (and quasiperiodic otherwise).
\end{definition}
For example, every system is always ergodic and not aperiodic in any subspace $\Eshell{1}$ consisting of a single energy level. For typical quantum systems, we will implicitly assume a choice of $T$ that is as large as possible while being much less than the quantum recurrence time scale. In general, identifying which energy subspaces of the system satisfy these properties provides an observable-independent characterization of the ergodicity of a quantum system.
\subsection{Quantum error bounds on cyclic ergodicity and aperiodicity}
Now, we ask how the $1$-step error $\varepsilon_C(1; t_0)$ can be used to determine the overall ergodicity and aperiodicity of a cyclic permutation. For this, we need to determine the fastest possible decay of the persistence $z_k(p; t_0)$ with time $p$, taking into account the possibility of superposition of errors from successive times. As shown in Appendix~\ref{app:q_erg_errors}, the fastest decay of the persistence occurs when the action of the error unitary $\hat{U}_\Delta = \hat{U}_C^\dagger \hat{U}_H(t_0)$ corresponds to a rotation in a (complex) 2D plane. This is quantified by the nonlinear relation:
\begin{align}
\sgn(p_2) &\min\left\lbrace\arccos z_k(p_2; t_0), \frac{\pi}{2}\right\rbrace - \sgn(p_1)\min\left\lbrace\arccos z_k(p_1; t_0),\frac{\pi}{2}\right\rbrace \nonumber \\
&\leq\ (\sgn(p_2)-\sgn(p_1))\sum_{p=p_1}^{p_2}\arccos z_{k+p}(1;t_0).
\label{eq:q_persistence_bound}
\end{align}
For $\varepsilon_C(1; t_0) \ll 1$, as would be the case for any but the poorest possible approximation of $\hat{U}_H(t_0)$ by cyclic permutations, Eq.~\eqref{eq:q_persistence_bound} further implies
\begin{align}
\sgn(p_2) &\arccos z_k(p_2; t_0) - \sgn(p_1)\arccos z_k(p_1; t_0) \nonumber \\
&\leq\ (p_2-p_1)\varepsilon_C^{1/2}(1;t_0) + O[(p_2-p_1)\varepsilon_C^{3/2}(1;t_0)],
\label{eq:q_persistence_bound_approx}
\end{align}
when either term on the left hand side does not exceed $\pi/2$ in magnitude. Setting $p_2 = \pm d/2$, $p_1 = 0$ and imposing Eq.~\eqref{eq:q_err_ergodicity} at $p = p_2$ gives
\begin{equation}
\left[\varepsilon_C(1;t_0) \leq \frac{\pi^2}{d^2}+O(d^{-5/2}) \right] \implies\ \text{Cyclic ergodicity of } \mathcal{C}.
\label{eq:q_ergodicity_bound}
\end{equation}
Similarly, setting $p_2 = \pm d$, $p_1 = 0$ and imposing Eq.~\eqref{eq:q_err_aperiodicity} at $n = \pm 1$ (respectively) gives
\begin{equation}
\text{Cyclic aperiodicity of }\mathcal{C}\ \implies \left[\varepsilon_C(1;t_0) \geq \frac{\pi^2}{4d^2}+O(d^{-5/2}) \right].
\label{eq:q_aperiodicity_bound}
\end{equation}
The $1$-step persistence amplitudes $z_k(1; t_0)$ and error $\varepsilon_C(1;t_0)$ are accessible already at the early time $t_0$, much smaller than the period $t_0 d$ of a cyclic permutation. In contrast, the higher $p$-step persistence amplitudes of the cyclic permutation --- and consequently, ergodicity and aperiodicity --- are sensitive to the detailed structure (such as the precise energy levels and eigenstates) of $\hat{U}_H(t_0)$, particularly at late times $p \sim O(d)$. The existence of bounds such as Eqs.~\eqref{eq:q_ergodicity_bound} and \eqref{eq:q_aperiodicity_bound} (or more generally, Eq.~\eqref{eq:q_persistence_bound}) allows one to prove ergodicity and disprove aperiodicity for a system based entirely on information available at time $t=t_0$, without requiring refined knowledge of $\hat{U}_H(t_0)$.
\subsection{Optimizing cyclic permutations with energy level statistics}
\label{sec:DFTsAreOptimal}
The best possible determination of ergodicity and aperiodicity using the inequalities Eq.~\eqref{eq:q_ergodicity_bound} and Eq.~\eqref{eq:q_aperiodicity_bound} is when $\varepsilon_C(1;t_0)$ attains its minimum value over all possible choices of cyclic permutations $\mathcal{C}$. More generally, it is reasonable to expect the cyclic permutation with minimum $\varepsilon_C(1;t_0)$ to have the largest $p$-step persistence amplitudes for a range of $p$. The optimal (minimum) $p$-step errors (including for $p=1$) can be identified with the help of the following statement, proved in Appendix~\ref{app:q_cyc_dft}.
\begin{theorem}[\textbf{Optimal cyclic permutations}]
If the system (in some energy subspace $\Sigma_d$) admits some cyclic permutation $\mathcal{C}'$ with $p$-step error $\varepsilon_{C'}(p; t_0) \leq (2/d)$ for a given $p$ and $t_0$, then $\varepsilon_C(p; t_0)$ attains its minimum value among all cyclic permutations for a cyclic permutation $\mathcal{C}$ whose cycling operator $\hat{U}_C$ satisfies
\begin{equation}
\lim_{\delta \to 0}\left[\hat{U}_H(t_0)e^{i\delta\hat{Y}},\hat{U}_C\right] = 0.
\label{eq:dftoptimal}
\end{equation}
Here, $\hat{Y}$ is any fixed Hermitian operator (which effectively selects a unique eigenbasis of $\hat{U}_H(t_0)$ if the latter is degenerate). In particular, the global minimum of the error is achieved by one such $\hat{U}_C$ for every choice of $\hat{Y}$.
\end{theorem}
The rest of this paper discusses how the above statement can be utilized to reveal connections between ergodicity and energy level statistics. For now, we note a curious coincidence --- the cutoff value $(2/d)$ for the $p$-step quantum error, below which Eq.~\eqref{eq:dftoptimal} is satisfied by an optimal pure state cyclic permutation, is precisely the cutoff value of the error of a \textit{classical} $d$-element cyclic permutation that would guarantee ergodicity for $p=1$~\cite{KatokStepin2} (cf. Sec.~\ref{sec:cl_cyclic}).
\subsubsection{Discrete Fourier Transforms of energy eigenstates}
Given a complete orthonormal set of eigenvectors $\lbrace \lvert E_n\rangle \rbrace_{n=0}^{d-1}$ of $\hat{U}_H(t_0)$, it follows from Eq.~\eqref{eq:dftoptimal} that a sufficiently complete set of $\hat{U}_C$ that extremize the errors is given by
\begin{equation}
\hat{U}_C = \sum_{n=0}^{d-1}e^{-2\pi i n/d}\lvert E_{q(n)}\rangle\langle E_{q(n)}\rvert,
\end{equation}
where $q(n)$ represents an arbitrary permutation acting on $n \in \lbrace 0,...,d-1\rbrace$. The corresponding cyclic basis $\mathcal{C}$ is completely contained in the unbiased subshell; parametrizing the basis by $q$ and by arbitrary phases $\varphi_n$ that do not influence $\hat{U}_C$, the elements of $\mathcal{C}(q,\varphi_n)$ can be written as a discrete Fourier transform (DFT) of the energy eigenstates
\begin{equation}
\lvert C_k(q,\varphi_n)\rangle = \frac{1}{\sqrt{d}}\sum_{n=0}^{d-1}e^{-2\pi i n k / d}e^{-i\varphi_n}\lvert E_{q(n)}\rangle.
\label{eq:DFTbasis}
\end{equation}
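As a hedged numerical sketch (ours, not code from this work; the dimension $d=8$ and trivial phases $\varphi_n=0$ are illustrative), the DFT basis of Eq.~\eqref{eq:DFTbasis} can be constructed componentwise in the energy eigenbasis, where one can verify both its orthonormality and that the diagonal operator with eigenvalues $e^{-2\pi i n/d}$ cycles $\lvert C_k\rangle \to \lvert C_{k+1}\rangle$:

```python
# Sketch: build the DFT cyclic basis and check that the diagonal cycling
# operator U_C (phases exp(-2*pi*i*n/d) on the relabeled eigenstates
# |E_{q(n)}>) maps |C_k> to |C_{k+1}>. A permutation q merely relabels
# the eigenstates, so it does not change these components.
import cmath
import math

def dft_basis(d, phases=None):
    """basis[k][n]: component of |C_k> along |E_{q(n)}>."""
    phases = phases or [0.0] * d
    return [[cmath.exp(-2j * math.pi * n * k / d - 1j * phases[n])
             / math.sqrt(d) for n in range(d)] for k in range(d)]

def overlap(u, v):
    return sum(a.conjugate() * b for a, b in zip(u, v))

d = 8
basis = dft_basis(d)
# U_C acts diagonally in the (relabeled) energy eigenbasis:
cycled = [cmath.exp(-2j * math.pi * n / d) * basis[0][n] for n in range(d)]
print(abs(overlap(basis[1], cycled)))  # ~ 1.0: U_C|C_0> = |C_1>
```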
It is immediately seen that all the $p$-step persistence amplitudes that are relevant for ergodicity (Eq.~\eqref{eq:q_cyclic_ergodicity}) are equal, $z_k(p; t_0) = z(p; t_0)$ in such a basis, and can be concisely expressed as
\begin{equation}
z(p; t_0) = \left\lvert\frac{1}{d}\Tr\left[\hat{U}_H(p t_0)\hat{U}_C^{-p}\right]\right\rvert = \left\lvert\frac{1}{d}\sum_{n=0}^{d-1}\exp\left[ip \left(\frac{2\pi n}{d}-E_{q(n)}t_0\right)\right]\right\rvert.
\label{eq:q_dft_persistence}
\end{equation}
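A minimal sketch of Eq.~\eqref{eq:q_dft_persistence} (our own illustration; the seed, $d=512$ and $t_0=1$ are arbitrary choices): for a perfectly rigid spectrum the trace formula gives $z(p;t_0)=1$ at every $p$, while uncorrelated (Poisson) levels typically lose persistence well before $p \sim d/2$.

```python
# Sketch: the persistence z(p; t0) of a sorted DFT cyclic permutation
# from the trace formula, comparing a perfectly rigid spectrum with
# uncorrelated (Poisson) levels.
import cmath
import math
import random

def z_dft(energies, t0, p):
    """z(p; t0) = |(1/d) sum_n exp(i*p*(2*pi*n/d - E_n*t0))|, E sorted."""
    d = len(energies)
    return abs(sum(cmath.exp(1j * p * (2 * math.pi * n / d - e * t0))
                   for n, e in enumerate(sorted(energies)))) / d

random.seed(7)
d, t0 = 512, 1.0
rigid = [2 * math.pi * n / (d * t0) for n in range(d)]
poisson = sorted(random.uniform(0, 2 * math.pi / t0) for _ in range(d))

print(z_dft(rigid, t0, d // 2))    # exactly 1: persistent at all p
print(z_dft(poisson, t0, d // 2))  # typically O(d**-0.5): persistence lost
```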
The statement of aperiodicity (Eq.~\eqref{eq:q_cyclic_aperiodicity}) in such a basis can be expressed in terms of the spectral form factor (SFF)~\cite{Haake},
\begin{equation}
K(t) \equiv \left\lvert \frac{1}{d}\Tr\left[\hat{U}_H(t)\right]\right\rvert^2 \lesssim O(d^{-1}), \text{ for } t_0 \ll \lvert t\rvert \lesssim O(t_0 d).
\label{eq:q_dft_aperiodicity}
\end{equation}
From Eq.~\eqref{eq:q_dft_persistence}, we obtain a discrete set of $d!$ possible minima (corresponding to the number of possible permutations) of each $p$-step error,
\begin{equation}
\varepsilon_C(p, t_0,q) = 1-\left\lvert\frac{1}{d}\sum_{n=0}^{d-1}\exp\left[ip \left(\frac{2\pi n}{d}-E_{q(n)}t_0\right)\right]\right\rvert^2,
\label{eq:q_cyc_err}
\end{equation}
among which some $\varepsilon_C(p, t_0,q_{\min}(p))$ is a global minimum for each $p$ (if the error is less than $(2/d)$). The minimum of the error also corresponds to a maximum of the mean $p$-step persistence amplitude. Thus, the minimum $p$-step error (or maximum $p$-step mean persistence) among cyclic permutations is an \textit{invariant feature of the energy levels}, and can itself be considered a measure of energy level statistics. In such a DFT basis, the persistence probabilities $z^2(p; t_0)$ are given by the spectral form factor of the \textit{error} unitary $\hat{U}_{\Delta}$, satisfying the simple $1$-step error bound (from Eq.~\eqref{eq:q_persistence_bound_approx})
\begin{equation}
z^2(p,t_0) \geq
\begin{dcases}
\cos^2\left(p\sqrt{\varepsilon_C(1,t_0,q)}\right),\ &\text{for}\ \lvert p\rvert < \pi/\sqrt{4\varepsilon_C(1,t_0,q)}, \\
0,\ &\text{for}\ \lvert p\rvert \geq \pi/\sqrt{4\varepsilon_C(1,t_0,q)},
\end{dcases}
\label{eq:q_cyc_persistence_bound}
\end{equation}
neglecting $O[p\varepsilon_C^{3/2}(1,t_0,q)]$ contributions.
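The bound Eq.~\eqref{eq:q_cyc_persistence_bound} is the small-error form of the exact statement $\arccos z(p;t_0) \leq \lvert p\rvert \arccos z(1;t_0)$ (a triangle inequality for the Fubini--Study angle of vectorized unitaries, cf. Eq.~\eqref{eq:q_persistence_bound}). The sketch below (ours; the Poisson spectrum, seed and cutoff are illustrative) checks this exact form against the trace formula:

```python
# Sketch: numerical check of the 1-step persistence bound in its exact
# arccos form, z(p) >= cos(min(p*arccos z(1), pi/2)), for an arbitrary
# (here Poisson) spectrum and a sorted DFT cyclic permutation.
import cmath
import math
import random

def z_dft(energies, t0, p):
    d = len(energies)
    return abs(sum(cmath.exp(1j * p * (2 * math.pi * n / d - e * t0))
                   for n, e in enumerate(sorted(energies)))) / d

random.seed(1)
d, t0 = 256, 1.0
energies = [random.uniform(0, 2 * math.pi) for _ in range(d)]
theta1 = math.acos(min(z_dft(energies, t0, 1), 1.0))
for p in range(1, 50):
    lower = math.cos(min(p * theta1, math.pi / 2))
    assert z_dft(energies, t0, p) >= lower - 1e-9
print("1-step bound holds for p < 50")
```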
\subsubsection{Persistence amplitudes and spectral mode fluctuations}
\label{sec:modefluctuations}
The main measure of level statistics appearing in the errors $\varepsilon_C(p, t_0,q)$ (cf. Eq.~\eqref{eq:q_cyc_err}), as well as the persistence of cyclic permutations, is the deviation of energy levels from a regularly spaced spectrum. Namely, let
\begin{equation}
\Delta_n(t_0,q) = \left(\frac{t_0 d}{2\pi}E_{q(n)}\right)-n,
\label{eq:deltamodefluctuation}
\end{equation}
representing the deviation of the $q(n)$-th level in a rescaled spectrum from the integer $n$. The persistence as a function of time as given by
\begin{align}
z^2(p,t_0) &= \left\lvert\frac{1}{d}\sum_n e^{-i (2\pi p/d) \Delta_n(t_0,q)} \right\rvert^2 \nonumber \\
&\xrightarrow{d \to \infty} \left\lvert\int \diff\Delta\ f(\Delta; t_0,q)e^{-i(2\pi p/d)\Delta}\right\rvert^2,
\label{eq:persistence_modefluctuations}
\end{align}
where $f(\Delta; t_0,q)$ is the probability density function of the $\Delta_n(t_0,q)$.
Intuitively, the persistence at any time $p$ would be maximized when the $\Delta_n$ are minimized. A practically reasonable choice of $t_0$ and $q$ to estimate the global minimum of the $1$-step error, for uniform density of states $\Omega(\Sigma_d) = (d-1)/(E_{\max}-E_{\min})$ (uniform over large energy windows), is one in which the rescaled levels $t_0 d E_{q(n)}/2\pi$ are each close to the $n$-th integer. In other words, $t_0 \approx 2\pi \Omega(\Sigma_d)/d$, with $q$ being the permutation that sorts the energy levels in ascending order ($E_{q(n)} > E_{q(m)} \implies n>m$). For a given $t_0$, it is shown in Appendix~\ref{app:optimalsorting} that Eq.~\eqref{eq:persistence_modefluctuations} is indeed maximized at $p=1$ when $q(n)$ is the sorting permutation, among a certain class of ``small'' permutations when $\Delta_n \ll d$. In other words, the sorting permutation is a (discrete version of a) local minimum for the error.
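The identity underlying Eq.~\eqref{eq:persistence_modefluctuations} is exact at finite $d$: writing $E_{q(n)}t_0 = (2\pi/d)(n+\Delta_n)$ for the sorting permutation, the trace formula for $z(p;t_0)$ becomes the discrete Fourier transform of the empirical distribution of the $\Delta_n$. A short sketch (ours; the uniform spectrum, $d=400$ and the seed are illustrative choices) verifies this:

```python
# Sketch: mode fluctuations Delta_n (without unfolding) and the identity
# z(p) = |(1/d) sum_n exp(-i*(2*pi*p/d)*Delta_n)| for a sorted DFT
# cyclic permutation.
import cmath
import math
import random

random.seed(3)
d = 400
e_min, e_max = 0.0, 10.0
density = (d - 1) / (e_max - e_min)   # uniform density of states
t0 = 2 * math.pi * density / d        # rescaling choice from the text
energies = sorted(random.uniform(e_min, e_max) for _ in range(d))

# Delta_n = t0*d*E_{q(n)}/(2*pi) - n, with q the sorting permutation:
deltas = [t0 * d * e / (2 * math.pi) - n for n, e in enumerate(energies)]

def z_from_deltas(p):
    return abs(sum(cmath.exp(-1j * (2 * math.pi * p / d) * x)
                   for x in deltas)) / d

def z_trace(p):
    return abs(sum(cmath.exp(1j * p * (2 * math.pi * n / d - e * t0))
                   for n, e in enumerate(energies))) / d

p = 5
print(abs(z_from_deltas(p) - z_trace(p)))  # ~ 0: the two formulas agree
```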
In this case, the $\Delta_n$ are essentially what have been called mode fluctuations in the spectrum\footnote{The term ``mode fluctuations'' has been used with at least two different meanings in the literature~\cite{Aurich1,Aurich2, Lozej2021}. In Refs.~\cite{Aurich1, Aurich2} and related works cited there, it refers to the fluctuations of the \textit{spectral staircase} around a straight line. Our usage is in the sense of Ref.~\cite{Lozej2021}, referring to deviations of the levels themselves from a straight line. The two are different in general, but show close agreement in their statistical properties for Wigner-Dyson random matrix ensembles~\cite{DeltaStar1, DeltaStar2} (see also Sec.~\ref{sec:RMTergodicity}).}~\cite{Aurich1, Aurich2, Lozej2021}; the Gaussianity of their distribution has been conjectured to be a signature of chaos~\cite{Aurich1, Aurich2}. A minor, but important, technical distinction between $\Delta_n$ and conventional mode fluctuations, is that there is no unfolding~\cite{Haake, Mehta} of the energy levels to make $\Omega(\Sigma_d)$ appear uniform prior to calculating the $\Delta_n$. Such a procedure, while indispensable in the numerical comparison of level statistics with random matrix predictions, would not preserve the dynamics of the system in the time domain. Given this qualifier, we find in Eq.~\eqref{eq:persistence_modefluctuations} that the Fourier components of mode fluctuation distributions, \textit{without unfolding}, directly determine the optimal persistence of cyclic permutations.
By the reciprocal relation of Fourier variables, a slow decay of the persistence corresponds to a narrow distribution $f(\Delta; t_0, q)$; when $q$ is a sorting permutation, this essentially implies a high rigidity of the spectrum e.g. as measured by the variance $\sigma^2_\Delta$ of the distribution. While the precise connection between ergodicity and spectral rigidity depends on the functional form of this distribution, the following proposition acquires special importance for Wigner-Dyson level statistics:
\begin{proposition}[\textbf{Ergodicity, aperiodicity and Wigner-Dyson spectral rigidity}]
If $z(p,t_0) = e^{-\gamma p^2}+O(d^{-1/2})$ with $\gamma > 0$ when $\lvert p\rvert \lesssim O(t_0 d)$ for some DFT cyclic permutation $\mathcal{C}$, then
\begin{equation}
\text{Cyclic ergodicity and aperiodicity of } \mathcal{C} \iff \left[\sigma_\Delta^2 = \frac{\alpha^2}{4\pi^2} \ln d \text{ with } \alpha \in [1,2], \text{ as } d \to \infty\right].
\label{eq:WDproposition}
\end{equation}
\end{proposition}
This form of $\sigma^2_\Delta$ is precisely that of the Wigner-Dyson circular random matrix ensembles~\cite{Haake, Mehta, DeltaStar1, DeltaStar2} for $\alpha = 1, \sqrt{2}, 2$, while the near-Gaussianity of $z^2(p,t_0)$ for these ensembles is supported by the observed Gaussianity of $f(\Delta; t_0, q)$ for mode fluctuations. We will consider the applicability of this proposition to Wigner-Dyson random matrices in greater detail in the next section, using analytical arguments and numerical results. Overall, this proposition suggests that (circular) Wigner-Dyson random matrices are ergodic and aperiodic, which is anticipated by the numerical data in Fig.~\ref{fig:qcycpermutationExact}.
\begin{figure}[!hbt]
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[scale=0.6]{CyclicPermutationPolarData/CyclicErgodicCUEb.pdf}
\caption{\small Cyclic ergodicity and aperiodicity (staying closer to the boundary than $O(d^{-1/2})$ for more than one and less than two full rotations); data for a single Circular Unitary Ensemble (CUE~\cite{Haake, Mehta}) random matrix.}
\label{fig:quantumergodicityData}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[scale=0.6]{CyclicPermutationPolarData/CyclicNonergodicPoissonb.pdf}
\caption{\small Non-ergodicity (staying closer to the boundary than $O(d^{-1/2})$ for less than one full rotation); data for a single realization of Poisson/uncorrelated energy levels.}
\label{fig:quantumnonergodicityData}
\end{subfigure}
\vspace{0.1cm}
\caption{\small Exact numerical data for polar representation of cyclic ergodicity and aperiodicity, as well as non-ergodicity, via the trajectory of $\lvert C_0\rangle$ in a Hilbert space $\Sigma_d$ with $d=2048$. Essentially, the cyclic permutation basis elements $\lvert C_k\rangle$ are points on the boundary (with $p=k$) of the polar representation, and the permutation is ergodic if the actual trajectory of any such point remains close to the boundary for a full rotation of the angular coordinate (including future and past evolution). The angular coordinate is $\theta = 2\pi p/d$ (depicted here for $p \in \mathbb{Z}$), and the radial coordinate represents $z_0(p,t_0)$ via the map $r = g(z_0(p, t_0))/g(1)$ with $g(x) = \lbrace 1+\tanh[\log(x^2 d/2)/6]\rbrace$. The trajectories extend up to $\lvert p\rvert = 4d$. The chosen cyclic permutation in both cases is the sorted DFT cyclic permutation. This figure anticipates the ergodicity of Wigner-Dyson level statistics and non-ergodicity of Poisson level statistics (Sec.~\ref{sec:TypicalCyclic}). The central region $z_0(p,t_0) = O(d^{-1/2})$ of Haar-random persistence again corresponds to nearly all of the Haar volume of $\Sigma_d$ by canonical typicality~\cite{tumulka_CT, CanonicalTypicalityPSW}, and is where the trajectory typically remains for long times with the exception of occasional recurrences near the boundary. See Fig.~\ref{fig:RMT_ergodicity} for a different depiction of similar data.}
\label{fig:qcycpermutationExact}
\end{figure}
\section{Cyclic permutations for systems with typical rigid spectra}
\label{sec:TypicalCyclic}
Sec.~\ref{sec:modefluctuations} identified a connection between a Gaussian time-dependence of the persistence of DFT cyclic permutations, and the spectral rigidity of Wigner-Dyson ensembles. In this section, we expand on this connection by considering the development of states in the $\mathcal{C}$-basis over time, motivating a Gaussian estimate for the typical time dependence of persistence amplitudes. This Gaussian estimate is used to derive direct constraints on the error from level statistics, based on the SFF of the system. We also show in detail that ergodic and aperiodic systems with a nearly exact Gaussian persistence amplitude must have a spectral rigidity equal to that of a weighted average of the three circular ensembles: COE, CUE and CSE. This conclusion is supported with numerical results for individual realizations of these ensembles.
\subsection{Periodic and random parts of time evolution}
For a cyclic permutation $\mathcal{C}$ with cycling operator $\hat{U}_C$ that commutes with $\hat{U}_H$ (i.e. a cyclic permutation of a DFT basis), the $p$-step persistence probability is given by the normalized SFF of the error unitary $\hat{U}_\Delta = \hat{U}_C^{-1}\hat{U}_H(t_0)$ (as noted before Eq.~\eqref{eq:q_cyc_persistence_bound}):
\begin{equation}
\lvert z(p; t_0)\rvert^2 = \left\lvert\frac{1}{d} \Tr\left[\hat{U}_\Delta^p\right]\right\rvert^2.
\end{equation}
To study the development of the persistence over time, it is convenient to write a general expression for $\hat{U}_\Delta^p$ in terms of the $p$-step errors $\varepsilon_C(p; t_0)$. On account of $[\hat{U}_\Delta, \hat{U}_C] = 0$, we have
\begin{equation}
\hat{U}_\Delta^p = \left[\left(\sqrt{1-\varepsilon_C(p; t_0)}\right) \hat{\mathds{1}} + \left(\sqrt{\varepsilon_C(p; t_0)}\right)\sum_{m=1}^{d-1} \nu_m(p) \hat{U}_C^m\right] e^{i\phi_\Delta(p)},
\label{eq:q_periodic_random}
\end{equation}
for some phases $\phi_\Delta(p)$ and complex error coefficients $\nu_m(p)$. Unitarity $\hat{U}_\Delta^\dagger \hat{U}_\Delta = \hat{\mathds{1}}$ translates to nonlinear constraints on the $\nu_m(p)$:
\begin{equation}
\sum_{m=1}^{d-1}\lvert \nu_m(p)\rvert^2 = 1,
\label{eq:nu_constraint1}
\end{equation}
\begin{equation}
\nu_m(p) + \nu^\ast_{-m}(p) = -g_p\sum_{k=1}^{d-1}\nu_k^\ast(p)\nu_{k+m}(p), \text{ for } m \neq 0,
\label{eq:nu_constraint2}
\end{equation}
where $\nu_0(p) \equiv 0$, and $g_p \equiv \sqrt{\varepsilon_C(p, t_0)/[1-\varepsilon_C(p,t_0)]}$.
As a matter of nomenclature, we call the first term proportional to $\hat{\mathds{1}}$ in Eq.~\eqref{eq:q_periodic_random} the ``periodic part'', and the remaining terms involving $\hat{U}_C^m$ (orthogonal to the periodic part) the ``random part'', of time evolution. This is because the former becomes a term proportional to $\hat{U}_C^p$ in $\hat{U}_H(p t_0)$, while we expect the $\nu_m(p)$ to generally (but not necessarily) look ``random''. In fact, (a subset of) the $\nu_m(p)$ are directly related to the SFF of $\hat{U}_H(p t_0)$ within the subspace $\Sigma_d$, via:
\begin{equation}
K(p t_0) = \left\lvert \frac{1}{d}\Tr\left[\hat{U}_H(p t_0)\right]\right\rvert^2 = \varepsilon_C(p;t_0) \lvert \nu_{-p}(p)\rvert^2,
\label{eq:nu_sff}
\end{equation}
and the expectation of randomness in the $\nu_m(p)$ reflects the randomness in the SFF~\cite{PrangeSFF, SSS, KosProsen2018} (more precisely, in the phases of $\Tr[\hat{U}_H(t)]$). Additionally, $K(p t_0)$ serves as a (rather weak) lower bound for the $p$-step errors. In particular, $\varepsilon_C(p; t_0) = O(1)$ if $K(p t_0) = O(1)$, establishing the impossibility of finding cyclic permutations that are reasonably close to $\hat{U}_H(t_0)$ when $t_0$ is in the early time slope regime of the SFF. To refine this bound, we will need a generic expression for the time dependence of $\varepsilon_C(p; t_0)$, derived in the following subsection.
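Eq.~\eqref{eq:nu_sff} can be checked directly: since $\hat{U}_\Delta$ commutes with $\hat{U}_C$, the expansion coefficient of $\hat{U}_\Delta^p$ along $\hat{U}_C^{-p}$ (with respect to the normalized trace inner product) equals $(1/d)\Tr[\hat{U}_H(pt_0)]$, whose squared modulus is the SFF. A sketch (ours; $d$, $p$, the seed and the spectrum are arbitrary illustrative choices):

```python
# Sketch verifying the SFF relation: in the energy eigenbasis everything
# is diagonal, so the normalized-trace coefficient c_m(p) of U_Delta^p
# along U_C^m is a sum over eigenphases, and |c_{-p}(p)|^2 = K(p*t0).
import cmath
import math
import random

random.seed(11)
d, t0, p = 64, 1.0, 5
energies = sorted(random.uniform(0, 2 * math.pi) for _ in range(d))

def coeff(m):
    """c_m(p) = (1/d) Tr[(U_C^m)^dagger U_Delta^p]."""
    return sum(cmath.exp(2j * math.pi * n * m / d)
               * cmath.exp(1j * p * (2 * math.pi * n / d - energies[n] * t0))
               for n in range(d)) / d

sff = abs(sum(cmath.exp(-1j * e * p * t0) for e in energies) / d) ** 2
print(abs(abs(coeff(-p)) ** 2 - sff))  # ~ 0
```

By unitarity (Parseval over the orthonormal set $\lbrace \hat{U}_C^m\rbrace$), the coefficients also satisfy $\sum_m \lvert c_m(p)\rvert^2 = 1$, consistent with Eq.~\eqref{eq:nu_constraint1}.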
\subsection{Gaussian estimate for persistence amplitudes}
Using Eq.~\eqref{eq:q_periodic_random}, one can readily express the persistence at arbitrary time $p$ in terms of the $1$-step parameters $\varepsilon_C(1;t_0)$ and $\nu_m(1)$. The resulting expression involves a complicated multinomial expansion in the $\nu_m(1)$ (with $\binom{p}{s}$ representing binomial coefficients),
\begin{equation}
\hat{U}_\Delta^{p}e^{-ip \phi_\Delta(1)} = \left(1-\varepsilon_C(1,t_0)\right)^{p/2} \sum_{s=0}^{p} \binom{p}{s} g_1^s \sum_{m_1,\ldots, m_s} \nu_{m_1}(1)\ldots\nu_{m_s}(1) \hat{U}_C^{m_1+\ldots+m_s},
\label{eq:dpt0}
\end{equation}
from which it is difficult to extract general predictions. To simplify the expression, we invoke a heuristic argument relying on the expected randomness of the $\nu_m$.
Specifically, we assume that the $\nu_m(1)$ are well described by an ensemble of complex numbers with fixed magnitudes and random phases, subject to the constraints Eq.~\eqref{eq:nu_constraint1} and Eq.~\eqref{eq:nu_constraint2}. Further, if one neglects $O(\sqrt{\varepsilon_C(p;t_0)})$ corrections to the $\nu_m$, Eq.~\eqref{eq:nu_constraint2} essentially becomes
\begin{equation}
\nu_m(p) \approx -\nu_{-m}^\ast(p).
\label{eq:nu_symmetry}
\end{equation}
Thus, pairings of $\nu_{m}(1)$ and $\nu_{-m}(1)$ in Eq.~\eqref{eq:dpt0} have a definite phase and generate contributions that potentially interfere constructively, while the remaining random terms add out of phase. This suggests following a strategy similar to methods based on the pairing of closed Feynman paths in studies of generic semiclassical~\cite{BerrySpectralRigidity, Berry227, HaakePO, HaakePO2, Haake} and quantum~\cite{KosProsen2018, GarrattChalker} chaotic systems: we evaluate the contribution from terms dominated by pairings of $\nu_{m}(1)$ and $\nu_{-m}(1)$ with at most one free $\nu_{m_k}(1)$, assuming (with no proof beyond the above argument) that the remaining terms are negligible. As is common with these methods, other contributions would eventually dominate at large enough times, when $\varepsilon_C(p; t_0)$ is sufficiently large and $\nu_m(p)$ is sufficiently random, invalidating Eq.~\eqref{eq:nu_symmetry} for such $p$.
The assumed dominance of paired error coefficients can be used to derive a general form of $\hat{U}_\Delta^p$ for small $p$, and from there an estimate for $z(p; t_0)$ using a recurrence relation; this is detailed in Appendix~\ref{app:errorcoefficientpairing}, with numerical evidence for error coefficient pairing. For $\varepsilon_C(1,t_0) \ll 1$ and $p \ll 1/\sqrt{\varepsilon_C(1,t_0)}$, the general form is
\begin{equation}
\hat{U}_\Delta^{p} e^{-ip \phi_\Delta(1)} \approx \frac{z(p, t_0)}{\sqrt{1-\varepsilon_C(1,t_0)}}\left[\sqrt{1-\varepsilon_C(1,t_0)}\hat{\mathds{1}}+p \sqrt{\varepsilon_C(1,t_0)}\sum_{r=1}^{d-1}\nu_r(1)\hat{U}_C^r\right].
\label{eq:uerrT_approx}
\end{equation}
In other words, time evolution for small $p$ simply manifests as a relative growth of the random part in comparison to the periodic part, up to an overall phase. This gives a simple Gaussian expression for the persistence amplitude (in the same regime of small error and time):
\begin{equation}
z(p, t_0) \approx \exp\left[-\frac{1}{2}\frac{\varepsilon_C(1,t_0)}{1-\varepsilon_C(1,t_0)}p^2-\frac{1}{2}\varepsilon_C(1,t_0)\lvert p\rvert \right].
\label{eq:zGaussianEstimate}
\end{equation}
The second (linear) term in the exponent is negligible until $\lvert p\rvert \sim 1/\varepsilon_C(1,t_0)$, and we will simply drop it in further calculations. The Gaussian follows the sinusoidal lower bound in Eq.~\eqref{eq:q_cyc_persistence_bound} rather closely, suggesting that typical cyclic permutations are surprisingly close to saturating the lower bound. In other words, $\hat{U}_\Delta^p$ remains close to a 2D rotation in Hilbert space, until a time $p \sim 1/\sqrt{\varepsilon_C(1,t_0)}$ when the cyclic permutation develops a large ($\sim 1$) error.
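As a consistency check of Eq.~\eqref{eq:zGaussianEstimate} (our own numerical sketch; $d=2000$, the Gaussian mode-fluctuation model with $\sigma=3$, the seed, and the tolerance are illustrative assumptions), one can compare the Gaussian estimate against the exact persistence for a rigid spectrum carrying small random mode fluctuations:

```python
# Sketch: the Gaussian estimate for z(p, t0) against the exact
# persistence of a sorted DFT cyclic permutation, for a spectrum with
# small i.i.d. Gaussian mode fluctuations Delta_n.
import cmath
import math
import random

random.seed(5)
d = 2000
sigma = 3.0
deltas = [random.gauss(0.0, sigma) for _ in range(d)]

def z(p):
    """Exact persistence via the mode-fluctuation (DFT) formula."""
    return abs(sum(cmath.exp(-1j * (2 * math.pi * p / d) * x)
                   for x in deltas)) / d

eps1 = 1.0 - z(1) ** 2
for p in (10, 40, 80):
    gaussian = math.exp(-0.5 * eps1 / (1.0 - eps1) * p * p)
    assert abs(z(p) - gaussian) < 0.08  # agreement up to sampling noise
print("Gaussian estimate tracks z(p) at small error and early p")
```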
\subsection{Spectral rigidity for Gaussian persistence amplitudes}
\subsubsection{The spectral form factor determines the minimum error for a system}
Now we are in a position to quantitatively analyze the connection between familiar measures of spectral rigidity and the persistence of cyclic permutations. The $1$-step error coefficients $\nu_m(1)$ can be related to the SFF $K(p t_0)$ in the $p \ll 1/\sqrt{\varepsilon_C(1, t_0)}$ regime, using Eqs.~\eqref{eq:q_periodic_random}, \eqref{eq:nu_sff} and \eqref{eq:uerrT_approx}:
\begin{equation}
\lvert \nu_{-p}(1)\rvert^2 \approx \frac{1-\varepsilon_C(1,t_0)}{z^2(p, t_0)\varepsilon_C(1,t_0)}\frac{K(p t_0)}{p^2}.
\end{equation}
Summing over $p = -\overline{p}$ to $\overline{p}$ excluding $0$, the left hand side can be at most $1$ on account of the normalization constraint, Eq.~\eqref{eq:nu_constraint1}. Expanding $z^2(p,t_0) = 1-O(\varepsilon_C(1,t_0)p^2)$ and using $K(t) = K(-t)$, we get
\begin{equation}
\sum_{p = 1}^{\overline{p}} K(p t_0)\left\lbrace\frac{1}{p^2}+O[\varepsilon_C(1,t_0)]\right\rbrace \lessapprox \frac{\varepsilon_C(1,t_0)}{2(1-\varepsilon_C(1,t_0))}.
\label{eq:SFFerror_relation}
\end{equation}
Every term on the left hand side is positive. Considering only the first term and choosing the largest possible $\overline{p}$ for which the second term is negligible then gives a reasonably restrictive lower bound on $\varepsilon_C(1,t_0)$. Correspondingly, we take $\overline{p} = 1/(M\sqrt{\varepsilon_C(1,t_0)})$, where $M \geq 1$ is a (possibly large) $O(1)$ constant.
We can derive more explicit bounds from Eq.~\eqref{eq:SFFerror_relation} for specific cases. As sums of the SFF over time are self-averaging~\cite{KosProsen2018}, we replace $K(p t_0)$ with a smooth power law expression $K(t) = \lambda t^\gamma$ for $t_0 \leq t \ll t_0 d$, $\gamma \geq 0$, and with $\lambda \ll 1$, which accounts for the behavior of a wide variety of systems\footnote{In the following sense: $\gamma = 0$ and $\lambda = d^{-1}$ corresponds to generic integrable systems with Poisson statistics~\cite{Haake, BerryTabor}; $\gamma = 1$ and $\lambda \geq O(d^{-2})$ corresponds to generic chaotic systems when $\lambda=O(d^{-2})$~\cite{Mehta,Haake}, and those with macroscopic conserved quantities for larger magnitudes of $\lambda$~\cite{ChalkerSum, WinerHydro}; integer $\gamma > 1$ with $\lambda = O(d^{-2})$ corresponds to tensor products of $\gamma$ independent chaotic systems, as well as the $\gamma$-particle sectors of single-particle chaotic systems with $\lambda = O(\gamma! d^{-2})$ (for large $d$), in which the many-particle SFF shows an exponential ramp~\cite{ExpRamp1, ExpRamp2, ExpRamp3}.}. Evaluating the sum in Eq.~\eqref{eq:SFFerror_relation} for this power law (Appendix~\ref{app:sfferror}) gives the following constraints on the error:
\begin{equation}
\varepsilon_C(1,t_0) \gtrapprox \begin{dcases}
2\lambda t_0^\gamma\zeta(2-\gamma),\ &\text{for}\ 0 \leq \gamma < 1, \\
\lambda t_0^\gamma\ln \frac{1}{\lambda},\ &\text{for}\ \gamma = 1, \\
\left[2\lambda t_0^\gamma\frac{\gamma-1}{M^{\gamma-1}}\right]^{\frac{2}{\gamma+1}},\ &\text{for}\ \gamma > 1,
\end{dcases}
\end{equation}
where $\zeta(z)$ is the Riemann zeta function.
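For the $\gamma = 0$ (Poisson) case, the sum on the left-hand side of Eq.~\eqref{eq:SFFerror_relation} can be checked numerically in a few lines (a sketch; only the leading $1/p^2$ term is kept):

```python
import math

def bound_sum(lam, gamma, t0, pbar):
    # leading term of the left-hand side of Eq. (SFFerror_relation), K(t) = lam * t^gamma
    return sum(lam * (p * t0) ** gamma / p**2 for p in range(1, pbar + 1))

d = 2048
# Poisson statistics: gamma = 0, lam = 1/d; the sum converges to zeta(2)/d,
# so the bound reads eps_C(1, t0) >~ 2 * zeta(2) / d = pi^2 / (3 * d)
s = bound_sum(1.0 / d, 0, 1.0, 200_000)
assert abs(2 * s - math.pi**2 / (3 * d)) < 1e-7
```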
Now we consider the most important (i.e. typical) cases of practical interest. Poisson statistics~\cite{Haake} corresponds to $\lambda = d^{-1}$ and $\gamma = 0$, for which we obtain
\begin{equation}
\varepsilon_C(1,t_0)\rvert_{\text{Poisson}} \gtrapprox \frac{\pi^2}{3d}.
\label{eq:Poisson_minerror}
\end{equation}
Together with the conditions for Eq.~\eqref{eq:dftoptimal}, this implies that every (DFT and non-DFT) cyclic permutation for a system with Poisson level statistics has $\varepsilon_C(1,t_0) > (2/d)$. On the other hand, the circular Wigner-Dyson ensembles~\cite{Haake, Mehta} have $\gamma = 1$ and $\lambda = 2/(\beta d^2)$ with $\beta = 1,2,4$ for COE, CUE, CSE respectively. With $t_0 = 1$ ($=2\pi\Omega/d$), the error satisfies
\begin{equation}
\varepsilon_C(1,1)\rvert_{\text{Wigner-Dyson}} \gtrapprox \frac{4}{\beta d^2}\ln d, \text{ with } \beta \in \lbrace 1,2,4\rbrace.
\label{eq:WD_minerror}
\end{equation}
These relations encode the following property: any system admits cyclic permutations with large error, but only sufficiently rigid spectra can admit cyclic permutations with small error, quantifying the discussion in Sec.~\ref{sec:modefluctuations}. For instance, if a system is known to have a cyclic permutation with error smaller than $(2/d)$, we can rule out Poisson statistics for that system.
\subsubsection{Spectral rigidity for ergodic, aperiodic systems with exact Gaussianity}
From the viewpoint of the Gaussian estimate, an idealized situation is when the persistence amplitude $z(p;t_0)$ remains exactly Gaussian as it decays all the way through to the random state (order of magnitude) value $z(p; t_0) \sim O(d^{-1/2})$. Writing $g_1^2 = \varepsilon_C(1,t_0)/[1-\varepsilon_C(1,t_0)]$, we can solve for $g_1$ corresponding to ergodic or quasiperiodic evolution by imposing:
\begin{equation}
\exp\left[-\frac{1}{2\alpha^2}g_1^2 d^2\right] \geq c d^{-1/2},
\label{eq:exactGaussian}
\end{equation}
where $\alpha = 2$ for ergodicity and $\alpha = 1$ for quasiperiodicity (from Eqs.~\eqref{eq:q_cyclic_ergodicity}, \eqref{eq:q_cyclic_aperiodicity}), while $c$ is some $O(1)$ positive constant.
From Eq.~\eqref{eq:persistence_modefluctuations}, we also obtain a Gaussian distribution for mode fluctuations given some $g_1$ (assuming that the DFT cyclic permutation under discussion corresponds to a level permutation function $q$),
\begin{equation}
f(\Delta; t_0, q) = \frac{1}{\sqrt{2\pi \sigma_\Delta^2}}\exp\left[-\frac{1}{2\sigma_{\Delta}^2}\Delta^2\right],
\end{equation}
with variance $\sigma^2_\Delta = g_1^2 d^2 / (4\pi^2)$. Requiring ergodicity and aperiodicity therefore gives:
\begin{equation}
\sigma^2_{\Delta} = \frac{\alpha^2}{4\pi^2} \ln d + O(1), \text{ with } \alpha \in [1,2].
\label{eq:ergodic_aperiodic_rigidity}
\end{equation}
This amounts to a derivation of Eq.~\eqref{eq:WDproposition}.
The logarithmic growth of the variance of mode fluctuations with the dimension $d$ of the energy subspace is a direct consequence of the Gaussianity of the persistence. In less idealized situations, it is possible to have a non-Gaussian tail in Eq.~\eqref{eq:zGaussianEstimate}, for $p \gtrsim 1/\sqrt{\varepsilon_C(1,t_0)}$, even if the Gaussian estimate holds for smaller times. It is worth noting that non-Gaussian tails at long times would show up as non-Gaussianities near $\Delta \approx 0$ in the mode fluctuation distribution; such deviations from Gaussianity are largely determined by the complicated correlations between the errors $\nu_m$, partly encoded in the fluctuations of the SFF $K(p t_0)$. The main takeaway here is instead the extremely specific numerical range $\alpha \in [1,2]$, of the coefficient multiplying the logarithm, demanded by ergodicity and aperiodicity. For non-Gaussian tails, one would have a similarly specific range of some other parameter.
\subsection{Cyclic ergodicity and aperiodicity for Wigner-Dyson random matrices}
\label{sec:RMTergodicity}
For Wigner-Dyson random matrix ensembles as well as individual systems with Wigner-Dyson (local) level statistics, it is convenient to choose $\Sigma_d$ to be an energy shell spanned by $d$ consecutive energy levels. There is numerical evidence that the mode fluctuation distribution is Gaussian~\cite{Aurich1,Aurich2, Lozej2021} (as well as analytical evidence for a related measure, number fluctuations~\cite{GRMGaussian}) especially near $\Delta \approx 0$, suggestive of an idealized Gaussian persistence. In Refs.~\cite{DeltaStar1, DeltaStar2}, the leading behavior of the variance $\sigma^2_\Delta$ (there called $\Delta^\ast$) for these ensembles has been shown to be equal to that of the (spectrum or ensemble averaged) spectral rigidity parameter $\Delta_3(d)$~\cite{DysonMehtaSR, Mehta, BerrySpectralRigidity} --- measuring the variance of the ``spectral staircase'' around a best fit straight line --- when $t_0$ is chosen to be the slope of the straight line and $q$ the sorting permutation. Moreover, $\Delta_3(d)$ can be calculated exactly~\cite{Mehta, Berry227, BerrySpectralRigidity} from the ensemble-averaged SFF $\overline{K(t)}$ within the energy subspace.
In fact, the leading contribution for large $d$ comes only from the early time linear ramp region, given by $\overline{K(t)} \approx t/(\beta \pi \Omega d)$ with $\beta = 1,2,4$ respectively for COE, CUE and CSE (much like in the derivation of Eq.~\eqref{eq:WD_minerror}). The result is a logarithmic dependence of $\sigma^2_{\Delta}$ on $d$ to leading order (for $t_0 = 1 $ and $q$ being the sorting permutation),
\begin{equation}
\left.\sigma^2_{\Delta}\right\rvert_{\text{Wigner-Dyson}} = \frac{1}{\beta \pi^2} \ln d+O(1), \text{ with } \beta \in \lbrace 1,2,4\rbrace.
\label{eq:WignerDyson_rigidity}
\end{equation}
This precisely corresponds (via $\sigma^2_{\Delta} = g_1^2 d^2/(4\pi^2)$) to an error that saturates the lower bound in Eq.~\eqref{eq:WD_minerror}, providing an important sanity check. Comparing this with Eq.~\eqref{eq:ergodic_aperiodic_rigidity} (cf. Eq.~\eqref{eq:WDproposition}), we see that the Wigner-Dyson ensembles span exactly the range of allowed coefficients for ergodic, aperiodic systems with a Gaussian persistence. CUE is well within this range, whereas COE is at the upper bound and barely ergodic, while CSE is at the lower bound and barely aperiodic (here, it is worth noting that the CSE variance is for one non-degenerate half of the spectrum~\cite{Mehta, Haake}).
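The correspondence used here, obtained by equating the coefficient $1/(\beta\pi^2)$ of Eq.~\eqref{eq:WignerDyson_rigidity} with $\alpha^2/(4\pi^2)$ of Eq.~\eqref{eq:ergodic_aperiodic_rigidity}, is $\alpha = 2/\sqrt{\beta}$; a few-line sketch:

```python
import math

# Equate the Wigner-Dyson coefficient 1/(beta*pi^2) of Eq. (WignerDyson_rigidity)
# with alpha^2/(4*pi^2) of Eq. (ergodic_aperiodic_rigidity): alpha = 2/sqrt(beta)
alphas = {beta: 2.0 / math.sqrt(beta) for beta in (1, 2, 4)}  # COE, CUE, CSE
assert alphas[1] == 2.0                         # COE: upper edge, barely ergodic
assert math.isclose(alphas[2], math.sqrt(2.0))  # CUE: well inside the window
assert alphas[4] == 1.0                         # CSE: lower edge, barely aperiodic
assert all(1.0 <= a <= 2.0 for a in alphas.values())
```

These values also set the decay times $p = d/\alpha$ at which the persistence reaches the random state magnitude, as quoted in the caption of Fig.~\ref{fig:RMT_ergodicity}.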
In generic quantized classically chaotic systems, Eq.~\eqref{eq:WignerDyson_rigidity} only holds for an energy shell with $d$ small enough to avoid longer range nonuniversal correlations between far apart energies~\cite{BerrySpectralRigidity, Berry227}, such as a varying density of states (with no unfolding). In particular, the ergodicity and aperiodicity of systems that show Wigner-Dyson level statistics only applies in a regime that avoids non-universal spectral rigidity saturation effects~\cite{BerrySpectralRigidity, Aurich2, Lozej2021, Berry227}. From a dynamical standpoint, this is to be expected --- such systems are typically ergodic only in infinitely thin energy shells (in the classical limit) and not over phase space volumes covering a wide range of energies.
It remains to be verified that the mode fluctuation distribution in random matrix ensembles is indeed well approximated by a Gaussian all the way until the persistence decays to $O(d^{-1/2})$, so that the identification between Eq.~\eqref{eq:WignerDyson_rigidity} and Eq.~\eqref{eq:ergodic_aperiodic_rigidity} can be made with some confidence. We provide numerical support for this statement in Fig.~\ref{fig:RMT_ergodicity} for $d=2048$. While we do not treat small deviations from Gaussianity here, it seems reasonable to conclude that a similar identification would hold even for a range of such deviations that do not significantly affect the time when the persistence reaches $O(d^{-1/2})$.
\begin{figure}[!hbt]
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[scale=0.3]{RMTfigs/COE3/WignerSurmise3.pdf}
\caption{\small COE level spacings}
\label{fig:coeps}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[scale=0.3]{RMTfigs/COE3/GausLin3p.pdf}
\caption{\small COE persistence (linear); $pt_0 = \text{time}$.}
\label{fig:coepl}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[scale=0.3]{RMTfigs/COE3/GausLog3p.pdf}
\caption{\small COE persistence (log-linear)}
\label{fig:coepll}
\end{subfigure}
\vspace{0.1cm}
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[scale=0.3]{RMTfigs/CUE2/WignerSurmise2.pdf}
\caption{\small CUE level spacings}
\label{fig:cueps}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[scale=0.3]{RMTfigs/CUE2/GausLin2p.pdf}
\caption{\small CUE persistence (linear)}
\label{fig:cuepl}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[scale=0.3]{RMTfigs/CUE2/GausLog2p.pdf}
\caption{\small CUE persistence (log-linear)}
\label{fig:cuepll}
\end{subfigure}
\vspace{0.1cm}
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[scale=0.3]{RMTfigs/CSE4/WignerSurmise_4.pdf}
\caption{\small CSE level spacings}
\label{fig:cseps}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[scale=0.3]{RMTfigs/CSE4/GausLin4p.pdf}
\caption{\small CSE persistence (linear)}
\label{fig:csepl}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[scale=0.3]{RMTfigs/CSE4/GausLog4p.pdf}
\caption{\small CSE persistence (log-linear)}
\label{fig:csepll}
\end{subfigure}
\caption{\small Numerical support for ergodicity and aperiodicity of realizations of Wigner-Dyson random matrix ensembles, for $d=2048$, $t_0 = 1$ and $q$ being the sorting permutation. Level spacing~\cite{Haake} data depicts the closeness of the realization to an ideal Wigner-Dyson distribution. Persistence is plotted (in red) in terms of persistence probabilities $z^2(p,t_0)$. The lower bound (``Bound'', green) of Eq.~\eqref{eq:q_cyc_persistence_bound} is satisfied, and good agreement is seen with the Gaussian estimate (``Gaussian'', blue) of Eq.~\eqref{eq:zGaussianEstimate} including the tail at long times; both are calculated based on the numerical value of $\varepsilon_C(1,t_0)$ for the realization. The Poisson persistence probability fluctuations (for a sorted sample of uncorrelated points in the same range of energies/eigenphases; in gray) are included to provide a visual reference for the range of persistence probabilities that should be considered $O(d^{-1})$ for random states, while simultaneously confirming the non-ergodicity of Poisson statistics. The time scales when the random matrix persistence probabilities reach $O(d^{-1})$ are consistent with $p = d/2$ for COE, $p = d/\sqrt{2}$ for CUE and $p = d$ for CSE as predicted by Eqs.~\eqref{eq:exactGaussian}, \eqref{eq:ergodic_aperiodic_rigidity} and \eqref{eq:WignerDyson_rigidity}.}
\label{fig:RMT_ergodicity}
\end{figure}
\section{Mixed states and the classical limit}
\label{sec:classical}
Cyclic ergodicity and aperiodicity have been defined in Eqs.~\eqref{eq:q_cyclic_ergodicity} and \eqref{eq:q_cyclic_aperiodicity} for a general quantum system, irrespective of the existence of a reasonable classical limit. While these definitions are already very similar-looking to the corresponding classical ones (Eqs.~\eqref{eq:cl_cyc_erg_def} and \eqref{eq:cl_cyc_apd_def}), it is desirable to establish the connection at a somewhat more concrete level. This is the aim of the present section.
Without a clearly defined general procedure for the classical limit, some parts of our argument are necessarily heuristic (but can be motivated e.g. using Wigner quasiprobabilities~\cite{Berry227, srednicki1994eth, PolkovnikovPhaseSpace}). We state the heuristic parts in Sec.~\ref{sec:classicallimit} as a direct correspondence between phase space regions and mixed states. In addition, we motivate the existence of orthonormal bases that resemble coordinates in a phase space region, mixed states in which can presumably be identified with classical cyclic permutations.
Independent of such heuristic assumptions, we attempt to mathematically connect mixed state cyclic permutations to spectral rigidity by constructing a corresponding pure state cyclic permutation. We find that there is an unavoidable ambiguity in this procedure at the level considered here, which may require further information to resolve. Ignoring this ambiguity, for the time being, allows us to (again, heuristically) construct a pure state cyclic permutation corresponding to a classical cyclic permutation in phase space.
Finally, we consider the level statistics of a (quantized) irrational flow on a 2D Kolmogorov-Arnold-Moser (KAM) torus, a classically ergodic dynamical system that possesses no periodic orbits. We find numerically that it shows level repulsion inconsistent with random matrix predictions (echoing previous numerical results by Berry and Tabor~\cite{BerryTabor} for the closely related 2D harmonic oscillator); at the same time, it admits an ergodic pure state cyclic permutation. The error of this permutation is consistent with restrictions obtained from the arguments involving mixed states. This result, together with those of Sec.~\ref{sec:RMTergodicity}, suggests a wider applicability of cyclic permutations than random matrix theory in the study of quantum ergodicity.
\subsection{Summary of classical limit heuristics}
\label{sec:classicallimit}
\label{sec:mixedstatevolumes}
The assumed correspondence between mixed states and the classical limit is as follows:
\begin{enumerate}
\item Every classical phase space region $A$ of measure $\mu(A) \gg 1/d$ can be represented by a mixed state $\hat{\rho}_A$ with equal eigenvalues, with
\begin{equation}
\mu(A) \approx \frac{1}{d \Tr(\hat{\rho}^2_A)}.
\label{eq:mixedstatepurity}
\end{equation}
\item Given two regions $A$ and $B$, the measure of their intersection is proportional to the overlap of the corresponding mixed states:
\begin{equation}
\frac{\mu(A \cap B)}{\mu(A)\mu(B)} \approx d \Tr(\hat{\rho}_A \hat{\rho}_B).
\label{eq:mixedstateoverlap}
\end{equation}
\end{enumerate}
We emphasize that there is no reason to assume the converse: not every mixed state looks like a simple phase space region in the classical limit. The use of approximate equalities here represents the unavoidable ambiguity in quantizing a classical system (where any corrections that vanish as $d\to \infty$ are allowed). Loosely speaking, Eqs.~\eqref{eq:mixedstatepurity} and \eqref{eq:mixedstateoverlap} follow from considering each mixed state to be an (equal) ensemble of a subset of orthogonal pure states, where each pure state has a fixed phase space volume (in accordance with Weyl's law for the density of states~\cite{Haake}). In Appendix~\ref{app:wignerfunc}, we discuss an alternate argument based on Wigner quasiprobability functions.
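For mixed states that are uniform mixtures over subsets of a fixed orthonormal basis (the simplest realization of the ensemble picture above), Eqs.~\eqref{eq:mixedstatepurity} and \eqref{eq:mixedstateoverlap} hold exactly; a minimal check, with arbitrary subset choices:

```python
import numpy as np

d = 64                                 # dimension standing in for the energy subspace
cells_A = set(range(0, 16))            # region A: 16 of d "cells", mu(A) = 1/4
cells_B = set(range(8, 40))            # region B: 32 cells, intersecting A in 8

def rho(cells):
    # equal-eigenvalue mixed state: normalized projector onto the chosen cells
    P = np.zeros((d, d))
    for i in cells:
        P[i, i] = 1.0
    return P / len(cells)

rA, rB = rho(cells_A), rho(cells_B)
muA, muB = len(cells_A) / d, len(cells_B) / d
# Eq. (mixedstatepurity): mu(A) = 1 / (d * Tr[rho_A^2])
assert abs(muA - 1.0 / (d * np.trace(rA @ rA))) < 1e-12
# Eq. (mixedstateoverlap): mu(A ∩ B) / (mu(A) mu(B)) = d * Tr[rho_A rho_B]
mu_int = len(cells_A & cells_B) / d
assert abs(mu_int / (muA * muB) - d * np.trace(rA @ rB)) < 1e-12
```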
One can then identify the pure state cyclic permutations of Sec.~\ref{sec:qm_cyclic} as a limiting case of classical cyclic permutations, when $n=d$. The error $\varepsilon_C(t_0)$ (Eq.~\eqref{eq:q_cyc_err}), however, corresponds to the maximum of $n \mu(C_k \symdiff C_{k+1})$ rather than the average as for $\overline{\epsilon}_C(t_0)$ (Eq.~\eqref{eq:cl_cyc_err}). Assuming the two are approximately equal (as for a DFT basis), it follows that any classical system with $\overline{\epsilon}_C(t_0) < (2/n)$ (a sufficient condition for cyclic ergodicity), if quantized to make the na\"{i}ve $n\to d$ limit admissible, has a more rigid spectrum than Poisson statistics by Eq.~\eqref{eq:Poisson_minerror}.
An additional link to the classical limit that is useful to consider is a \textit{phase space basis}, which can be loosely thought of as orthogonal minimum uncertainty (i.e. low eccentricity) wavepackets on a thin classical energy shell. More precisely, let $\mathcal{B}(n) = \lbrace B_k(n)\rbrace_{k=0}^{n-1}$ be a partitioning of the classical phase space $\mathcal{P}$ into $\mu$-disjoint closed sets of nonzero measure. Define an $n \to \infty$ limiting procedure such that any $B_k(n+1)$ is completely contained in some $B_j(n)$, and $\mu(B_k(n\to \infty)) = 0$ (e.g. nested cyclic permutations). One can formally define ``phase space observables'' $A[B_k(n)]$ that take distinct values on each $B_k(n)$ and have a well defined $n\to \infty$ limit, such as a coarse graining of the set of coordinates on $\mathcal{P}$. The standard procedure of quantization (e.g. the postulates in Ref.~\cite{ShankarQM}) associates an orthonormal basis of eigenstates $\mathcal{C}_B = \lbrace \lvert B_k\rangle\rbrace_{k=0}^{d-1}$ (i.e. a phase space basis) spanning some $\mathcal{H}_B \subseteq \mathcal{H}$ with projective measurements of such an observable.
The importance of such observables is that their classical evolution is first order in time, given by Liouville's equation~\cite{Goldstein, LLStatMech} with the initial phase space distribution being a delta function. Correspondingly, one would expect the quantum time evolution in such a basis to be relatively non-dispersive, and one mixed state in this basis would evolve into another in the same basis (at least over short times~\cite{ChirikovEhrenfest, ShepelyanskyEhrenfest}; see also Sec.~\ref{sec:thermaltimescales}). This allows a potential mapping of classical cyclic permutations to mixed states (only) in such a basis; see Appendix~\ref{app:oscillator_ergodicity} for a simple example\footnote{As a simple example of the lack of such a mapping in other bases, classical motion in the coordinate variables (for Hamiltonians quadratic in the momenta) is second order and one cannot define classical cyclic permutations in these variables alone; correspondingly, diagonal mixed states in the coordinate variables would instantly spread~\cite{ShankarQM} over the entire range of positions, which is not close to any other diagonal mixed state in the basis.}.
For a purely quantum system without a known classical limit, the main content of these assumptions is that the purity and overlap of (degenerate) mixed states are quantities of reasonable interest, which allow natural extensions of pure state cyclic permutations to mixed states. Purities and overlaps also have a direct relevance to quantum thermalization e.g. from the viewpoint of the eigenstate thermalization hypothesis~\cite{deutsch2018eth, AbaninMBL, subETH, GarrattChalker, pSFF}.
\subsection{Mixed state cyclic permutations for quantum dynamics}
In this section, we attempt to formalize the study of $n$-element mixed state cyclic permutations, looking for connections to spectral rigidity, prior to the $n\to d$ pure state limit (where the connection can be readily made according to the results in Secs.~\ref{sec:quantumcyclic} and \ref{sec:TypicalCyclic}). We find that significant additional structure than what we have assumed here is necessary to investigate spectral rigidity directly for $n < d$.
To define an $n$-element mixed state cyclic permutation, we partition the energy subspace into $n$ orthogonal subspaces of roughly equal dimension:
\begin{equation}
\Sigma_d = \bigoplus_{k=0}^{n-1} \Sigma(k),\ \text{such that}\ \dim(\Sigma(k)) \in \left\lbrace \left\lceil \frac{d}{n}\right\rceil-1,\left\lceil \frac{d}{n}\right\rceil \right\rbrace,
\end{equation}
where $\lceil x\rceil$ denotes the smallest integer not less than $x$. The unequal dimensions of the $\Sigma(k)$ introduce complications in defining a cycling procedure. To avoid tedium, we expand the Hilbert space to $\overline{\Sigma}_{d_n} = \Sigma_d \oplus \Sigma_{\text{aux}}$ of dimension $d_n = n\lceil (d/n)\rceil$, with expanded subspaces $\overline{\Sigma}(k) \supseteq \Sigma(k)$ of equal dimension $\lceil (d/n)\rceil = d_n/n$ (i.e. each having at most one added dimension). Correspondingly, $\hat{U}_H(t)$ is also to be expanded so that it acts as before on $\Sigma_d$ while each expanded dimension is an eigenstate: $\hat{U}_H(t) (\overline{\Sigma}(k)\cap \Sigma_{\text{aux}}) = (\overline{\Sigma}(k) \cap \Sigma_{\text{aux}})$.
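The dimension bookkeeping of this expansion can be sketched as follows (the helper function is purely illustrative):

```python
import math

def expanded_dims(d, n):
    # partition d levels into n subspaces of dimension ceil(d/n) or ceil(d/n) - 1,
    # then pad each subspace to ceil(d/n), giving d_n = n * ceil(d/n)
    c = math.ceil(d / n)
    dims = [c if k < d - n * (c - 1) else c - 1 for k in range(n)]
    assert sum(dims) == d and all(dim in (c - 1, c) for dim in dims)
    return n * c  # dimension d_n of the expanded space

assert expanded_dims(2048, 5) == 2050     # each subspace padded to ceil(2048/5) = 410
assert expanded_dims(2048, 2048) == 2048  # the pure state limit n = d needs no padding
```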
Now, we can introduce mixed states $\mshell{k}$ that represent normalized (unit trace) projection operators onto $\overline{\Sigma}(k)$. We say that $\mathcal{S} = \lbrace \overline{\Sigma}(k)\rbrace_{k=0}^{n-1}$ forms a mixed state cyclic permutation that approximates $\hat{U}_H(t_m)$ at some time $t_m$, with $1$-step persistence probabilities
\begin{equation}
Z_k(1,t_m) = \frac{d_n}{n^2} \Tr\left\lbrace \vphantom{\sum}\hat{U}_H(t_m)\ \mshell{k}\ \hat{U}_H^\dagger(t_m)\ \mshell{k+1}\right\rbrace.
\label{eq:mixed_persistence_def}
\end{equation}
By the discussion in Sec.~\ref{sec:classicallimit} (specifically, Eq.~\eqref{eq:mixedstateoverlap}), these are mixed state analogues of the classical measure of the intersection $\mu(C_k \cap C_{k+1})$ for an $n$-element cyclic permutation. Consequently, the classical error associated with the cyclic permutation is
\begin{equation}
\overline{\epsilon}_C(t_m) = \sum_k (1-Z_k(1,t_m)).
\label{eq:m_error_def}
\end{equation}
Given this setup, the following statement is proved in Appendix~\ref{app:mixedapprox}.
\begin{theorem}[\textbf{Pure state cyclic permutations from mixed states}]
There exists a pure state cyclic permutation $\mathcal{C}(\mathcal{S})$, that approximates the modified time evolution operator $\hat{U}_H(t_m) \ushell$ with persistence amplitudes $z_k(1,t_m) = \langle C_{k+1}\rvert \hat{U}_H(t_m) \ushell \lvert C_k\rangle$ such that
\begin{equation}
\frac{1}{d}\sum_{k=0}^{d-1}z_k(1,t_m) \geq \sum_{j=0}^{n-1} \frac{1}{d}\left(\left\lfloor d_n Z_j(1,t_m)\right\rfloor + \sqrt{d_n Z_j(1,t_m) - \left\lfloor d_n Z_j(1,t_m)\right\rfloor}\right),
\label{eq:puremixedoverlap}
\end{equation}
for several choices of $\ushell$ each leaving the original partition invariant: $\ushell \Sigma(k) = \Sigma(k)$. Here $\lfloor x\rfloor$ represents the greatest integer not exceeding $x$.
\end{theorem}
The most significant takeaway from the above statement is that given just the persistence probabilities of Eq.~\eqref{eq:mixed_persistence_def}, there is no way to further restrict $\ushell$ beyond leaving the $\Sigma(k)$ invariant, preventing us from making direct statements about the spectral rigidity of $\hat{U}_H$. The one exception is the special case $n = d$ (when we already have a pure state cyclic permutation), for which any choice of $\ushell$ merely amounts to altering the phases of the $\lvert C_k\rangle$ --- practically equivalent to setting $\ushell = \hat{\mathds{1}}$.
In general, one would have much more information about a system than just a single mixed state cyclic permutation (e.g. a family of such permutations with different values of $n$). We anticipate that such additional information could help narrow down $\ushell \approx \hat{\mathds{1}}$ as an admissible choice (most likely with some error terms, as Eq.~\eqref{eq:puremixedoverlap} requires a rather fine-tuned specification of $\ushell$). An alternative is that one would actually need a mixed state cyclic permutation to explicitly go over into its pure state version with precisely $n=d$ (or with an appropriate accounting of symmetry sectors), for cyclic ergodicity to be reflected in spectral rigidity. This is something that merits further investigation, and is outside the scope of the present work.
For now, we merely note that if there is some reason to set $\ushell \approx \hat{\mathds{1}}$, then one can heuristically constrain spectral rigidity using Eq.~\eqref{eq:puremixedoverlap} together with a quadratic estimate $\varepsilon_C(p,t_0) \approx p^2 \varepsilon_C(1,t_0)$ (consistent with both the sinusoidal lower bound of Sec.~\ref{sec:quantumcyclic} and the Gaussian estimate of Sec.~\ref{sec:TypicalCyclic} when $\varepsilon_C(p,t_0) \ll 1$). This requires assuming that $\mathcal{C}(\mathcal{S})$ is the $[p = (t_m/t_0)]$-th power of a pure state cyclic permutation $\mathcal{C}(t_0)$ defined at $t_0 = 2\pi \Omega/d$ (the inverse width of some energy shell with density of states $\Omega$). Then, by Eqs.~\eqref{eq:puremixedoverlap} and \eqref{eq:m_error_def},
\begin{equation}
\varepsilon_C\left(1,t_0 = \frac{2\pi \Omega}{d}\right) \lesssim \frac{4\pi^2\Omega^2}{t_m^2 d^2}\overline{\epsilon}_C(t_m).
\label{eq:mixedspectralrigidity}
\end{equation}
As per Sec.~\ref{sec:modefluctuations}, this restricts the width of the mode fluctuation distribution, leading to more rigid spectra for smaller classical errors $\overline{\epsilon}_C(t_m)$.
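The prefactor in Eq.~\eqref{eq:mixedspectralrigidity} is simply $1/p^2$ rewritten via $p = t_m/t_0$ and $t_0 = 2\pi\Omega/d$; a one-line check with hypothetical parameter values:

```python
import math

Omega, d, t_m = 50.0, 2048, 0.7   # hypothetical values for illustration
t0 = 2 * math.pi * Omega / d      # t_0 = 2*pi*Omega/d
p = t_m / t0                      # number of t_0-steps in t_m
lhs = 1.0 / p**2                  # quadratic estimate: eps(1) ~ eps(p) / p^2
rhs = 4 * math.pi**2 * Omega**2 / (t_m**2 * d**2)  # prefactor in Eq. (mixedspectralrigidity)
assert abs(lhs - rhs) < 1e-12
```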
\subsection{Spectral rigidity of an ergodic flow on a KAM torus}
A simple system that can be tuned to show different behaviors is a KAM torus with a \textit{linear} flow; such tori occur as invariant subsets in the phase space of integrable systems~\cite{Haake, Goldstein, Ott}. The Hamiltonian of a $2D$ KAM torus is given by
\begin{equation}
H = \mathbf{J}\cdot\boldsymbol{\omega} = J_x \omega_x + J_y \omega_y,
\label{eq:torusHamiltonian}
\end{equation}
with angle variables $\boldsymbol{\theta} = (\theta_x,\theta_y) \in [0,2\pi)^2$ conjugate to the action variables $\mathbf{J} = (J_x,J_y)$. The equation of motion of the linear flow is $\diff \boldsymbol{\theta}(t)/\diff t = \boldsymbol{\omega}$.
The ergodicity of this system on the 2D phase space $\mathcal{P}_{\mathbf{J}} = \lbrace{(\theta_x,\theta_y)\rbrace}$ with fixed $\mathbf{J}$ is characterized~\cite{Sinai1976, SinaiCornfield} by the ratio $\alpha=\omega_y/\omega_x$. When $\alpha$ is irrational, the dynamics is ergodic on this phase space; but when $\alpha$ is rational, $\mathcal{P}_{\mathbf{J}}$ decomposes into an infinite number of invariant ergodic and periodic subsets, which share the same period. In both cases, there is no mixing.
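The dichotomy between irrational and rational $\alpha$ can be visualized by coarse-graining the sampled flow line into cells (a sketch; the sampling step and grid size are arbitrary choices):

```python
import math

def orbit_cells(alpha, steps, grid=8, dt=0.1):
    # cells of a (grid x grid) partition of the torus visited by the sampled
    # linear flow theta(t) = (t, alpha * t) mod 2*pi (units with omega_x = 1)
    cells = set()
    for k in range(steps):
        t = dt * k
        x = (t % (2 * math.pi)) / (2 * math.pi)
        y = ((alpha * t) % (2 * math.pi)) / (2 * math.pi)
        cells.add((int(x * grid) % grid, int(y * grid) % grid))
    return cells

# irrational ratio: the orbit is dense and equidistributes, hitting every cell
assert len(orbit_cells(math.sqrt(2), 20000)) == 64
# rational ratio: the orbit is a closed periodic curve, confined to few cells
assert len(orbit_cells(2.0, 20000)) < 64
```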
We call the system of Eq.~\eqref{eq:torusHamiltonian} a KAM torus to emphasize its difference from a free particle moving on a torus. The latter is never ergodic in its \textit{phase space}; at most (depending on initial conditions) it visits all points in its position coordinates while its momentum remains conserved. The Hamiltonian of the free particle is quadratic in $\mathbf{J}$ rather than linear, and its level statistics has been found to be Poissonian~\cite{ParticleTorus1, ParticleTorus2} (see also Ref.~\cite{Aurich2} for its mode fluctuations) in accordance with the Berry-Tabor conjecture~\cite{BerryTabor} for integrable systems.
The quantization of the KAM torus of Eq.~\eqref{eq:torusHamiltonian} entails the restriction $J_x,J_y \in \mathbb{Z}$. This also leads to an infinite density of energy levels, and we need an additional ultraviolet (UV) restriction of the domain at large $\mathbf{J}$ to obtain a finite number of levels. It is convenient to choose boundaries along lines parallel and perpendicular to $\boldsymbol{\omega}$, i.e.
\begin{equation}
\mathbf{J} \in \mathbb{Z}^2\cap \lbrace (J_x,J_y) : \lvert J_x \omega_x + J_y \omega_y\rvert < L_1, \lvert J_x \omega_y-J_y \omega_x\rvert < L_2\rbrace.
\label{eq:torus_UVcutoff1}
\end{equation}
This corresponds to an energy window of width $\approx 2L_1$, with mean density of levels $\Omega \approx 2 L_2$. If there are $d$ levels in the window and $L_1 = O(L_2)$, we have $\Omega = O(\sqrt{d})$.
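These counting estimates can be checked by enumerating the lattice points directly (a sketch, assuming the normalization $\lvert\boldsymbol{\omega}\rvert = 1$ under which the quoted density estimate holds):

```python
import math

wx, wy = math.cos(0.3), math.sin(0.3)   # omega with |omega| = 1 (assumed)
L1, L2 = 30.0, 40.0
levels = [jx * wx + jy * wy
          for jx in range(-200, 201) for jy in range(-200, 201)
          if abs(jx * wx + jy * wy) < L1 and abs(jx * wy - jy * wx) < L2]
d = len(levels)
# the window of Eq. (torus_UVcutoff1) is a rotated rectangle of area 4*L1*L2
assert abs(d - 4 * L1 * L2) < 0.1 * (4 * L1 * L2)
# mean density of levels: Omega = d / (2*L1), approximately 2*L2
assert abs(d / (2 * L1) - 2 * L2) < 0.1 * (2 * L2)
```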
\subsubsection{Cyclic permutations on the torus}
Cyclic permutations that approximate a large class of time evolution operators on the 2D torus, including nonlinear flows, have been studied in great detail~\cite{KatokStepin2, SinaiCornfield, Katok67}. For our purposes, we need a much simpler construction that can be explicitly achieved for the linear flow.
Based on a result in Ref.~\cite{IwanikRotation} pertaining to irrational rotations, a classical $n$-element cyclic permutation is constructed for the 2D torus in Appendix~\ref{app:toruscyclic}, with the following error for almost all irrational $\alpha$:
\begin{equation}
\overline{\epsilon}_C\left(t_m = O(\omega n^{-1/2})\right) < O(n^{-3/2}).
\label{eq:irrationalerror1}
\end{equation}
If we directly set $n = d$, we have $t_m = O(\Omega/d)$ and the right hand side becomes $O(d^{-3/2})$. Alternatively, we can identify the cyclic permutation with mixed states diagonal in the $\boldsymbol{\theta}$ basis\footnote{Noting that the linearity of the Hamiltonian Eq.~\eqref{eq:torusHamiltonian} in $\mathbf{J}$ is crucial to prevent the fast dispersion of narrow wavepackets in the $\boldsymbol{\theta}$ basis, for heuristically identifying the classical cyclic permutation with mixed states.} (for $n\ll d$) and take the $n\to d$ limit in accordance with the estimate of Eq.~\eqref{eq:mixedspectralrigidity}. In both cases, we obtain a pure state cyclic permutation for the torus with
\begin{equation}
\varepsilon_C(1, t_0) < O(d^{-3/2}),
\label{eq:irrationalerror2}
\end{equation}
where $t_0 = 2\pi \Omega/d$, corresponding to mode fluctuations. This suggests that the quantized linear flow on the KAM torus has significantly higher spectral rigidity than Poisson statistics (on account of Eq.~\eqref{eq:Poisson_minerror}), for almost all irrational values of $\alpha$.
\subsubsection{Numerical study of level statistics}
\label{sec:torusfigs}
To observe direct quantum signatures of possible cyclic ergodicity and aperiodicity, we should look at the persistence of a suitably chosen cyclic permutation (as per Eq.~\eqref{eq:q_cyclic_ergodicity}) and the SFF (Eq.~\eqref{eq:q_cyclic_aperiodicity}). Other indicators include the bound of Eq.~\eqref{eq:q_cyc_persistence_bound} on the persistence based on the error at $t_0$, and the distribution of mode fluctuations $f(\Delta)$ in Eq.~\eqref{eq:persistence_modefluctuations}. Comparison with random matrix level statistics can be done with the spacing probability distribution $P(S)$ of neighboring levels, normalized to unit mean level spacing~\cite{Haake, Mehta}. Numerical results for these quantities are presented in Figs.~\ref{fig:irr_torus} and \ref{fig:rat_torus} for an irrational ($\alpha = \sqrt{2}$) and rational ($\alpha = 2$) ratio respectively, with approximately 2000 levels in each energy subspace. The results are consistent with cyclic ergodicity and quasi-periodicity with $\varepsilon_C(1,t_0) \sim O(d^{-2})$ (up to small corrections that may only be visible for larger $d$) for the irrational case, and nonergodicity for the rational case.
In fact, Fig.~\ref{fig:irrlogp}, in particular, indicates that $\varepsilon_C \approx \pi^2/d^2$ --- which is the largest value of the error for which the bound of Eq.~\eqref{eq:q_ergodicity_bound} guarantees cyclic ergodicity. Additionally, a \textit{slightly} faster decay than the Gaussian estimate is seen at late times in Figs.~\ref{fig:irrlinp}, \ref{fig:irrlogp} compared to Fig.~\ref{fig:RMT_ergodicity} (perhaps most visible in Fig.~\ref{fig:irrlinp}). But one should be careful not to rule out e.g. slowly changing logarithmic factors, which may become significant only at much larger $d$. Some additional visualizations and discussion of the numerical data are presented in Appendix~\ref{app:torusdata}.
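The basic diagnostics are straightforward to compute from any spectrum. The sketch below evaluates the unit-mean spacings entering $P(S)$ and the SFF $\lvert\mathrm{tr}\,\hat{U}_H(t)\rvert^2/d$ at multiples of $t_0$, for a stand-in uniform random spectrum (substituting the torus levels gives the quantities plotted in the figures; no unfolding is needed here since the density of states is uniform on average):

```python
import numpy as np

rng = np.random.default_rng(0)
levels = np.sort(rng.uniform(0.0, 1000.0, 2000))   # stand-in spectrum
d = len(levels)

# Neighboring level spacings, normalized to unit mean for P(S)
S = np.diff(levels)
S = S / S.mean()

# SFF |tr U_H(t)|^2 / d at t = p*t0, with t0 = 2*pi*Omega/d
Omega = d / (levels[-1] - levels[0])
t0 = 2 * np.pi * Omega / d
p = np.arange(1, 201)
sff = np.abs(np.exp(1j * np.outer(p * t0, levels)).sum(axis=1)) ** 2 / d
```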
A couple of additional remarks are pertinent. The 2D harmonic oscillator is a related system with the restriction $J_x,J_y > 0$ that has a finite but nonuniform density of levels at any finite energy. Its level statistics has been studied in Ref.~\cite{BerryTabor}, which sees level repulsion after unfolding the nonuniform density, but seemingly not in any random matrix universality class --- paralleling what we see for the 2D KAM torus. The central conjecture of Ref.~\cite{BerryTabor} is, however, that typical integrable systems have Poisson level statistics. This holds in spite of the rigid spectrum of each invariant torus, as typical integrable systems are collections of several tori with uncorrelated frequencies\footnote{This is related to the nonzero curvature of constant energy surfaces assumed for typical integrable systems in Ref.~\cite{BerryTabor}; the normal vector at each point on the energy surfaces determines the ratio of frequencies of the corresponding torus.}, whose combination has enhanced spectral fluctuations and low spectral rigidity (analogous to the situation in Refs.~\cite{WinerHydro, WinerSpinGlass}).
These results suggest that criteria based on the cyclic ergodicity and aperiodicity of cyclic permutations may apply more generally as a quantum version of classical ergodicity than a comparison with random matrix level statistics, including in systems such as the 2D KAM torus that have been considered exceptions to the latter~\cite{BerryTabor}.
\begin{figure}[!hbt]
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[scale=0.23]{TorusFigs/rootTwo/root2points.pdf}
\caption{\small Energy levels in $(J_x, J_y)$ space}
\label{fig:irrpoints}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[scale=0.3]{TorusFigs/rootTwo/root2spacings.pdf}
\caption{\small Torus level spacings}
\label{fig:irrspace}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[scale=0.25]{TorusFigs/rootTwo/root2modes.pdf}
\caption{\small Torus mode fluctuations}
\label{fig:irrmode}
\end{subfigure}
\vspace{0.1cm}
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[scale=0.33]{TorusFigs/rootTwo/root2LinPersistence.pdf}
\caption{\small Torus persistence (linear)}
\label{fig:irrlinp}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[scale=0.33]{TorusFigs/rootTwo/root2LogLinPersistence.pdf}
\caption{\small Torus persistence (log-linear)}
\label{fig:irrlogp}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[scale=0.33]{TorusFigs/rootTwo/root2SFF.pdf}
\caption{\small Torus SFF (log-linear)}
\label{fig:irrsff}
\end{subfigure}
\caption{\small Plots for the 2D KAM torus with irrational $\alpha=\sqrt{2}$; $\omega_x = 1$, $\omega_y = \sqrt{2}$, $L_1 = L_2 = 40$; these correspond to $d=2133$. (b) The neighboring level spacing probability distribution is seen to fall outside the random matrix universality classes (cf. Fig.~\ref{fig:RMT_ergodicity}). (c) The width of $f(\Delta)$ remains close to $O(1)$ (up to possible corrections at larger $d$). (d,e) The persistence function shows cyclic ergodicity, and is close to the lower bound (Eq.~\eqref{eq:q_cyc_persistence_bound}) and the Gaussian estimate (Eq.~\eqref{eq:zGaussianEstimate}) at early times; it continues to follow the Gaussian estimate at late times, but appears to deviate a little more than for the Wigner-Dyson ensembles (Fig.~\ref{fig:RMT_ergodicity}). (f) The SFF indicates that the system is quasiperiodic (cf. Eq.~\eqref{eq:q_dft_aperiodicity}), ruling out mixing at $t \sim O(d)$. These are consistent with the classical system being ergodic and not mixing.}
\label{fig:irr_torus}
\end{figure}
\begin{figure}[!hbt]
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[scale=0.23]{TorusFigs/Two/Two_points.pdf}
\caption{\small Energy levels in $(J_x, J_y)$ space}
\label{fig:ratpoints}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[scale=0.3]{TorusFigs/Two/Two_spacings.pdf}
\caption{\small Torus level spacings}
\label{fig:ratspace}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[scale=0.25]{TorusFigs/Two/Two_modes.pdf}
\caption{\small Torus mode fluctuations}
\label{fig:ratmode}
\end{subfigure}
\vspace{0.1cm}
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[scale=0.33]{TorusFigs/Two/Two_LinPersistence.pdf}
\caption{\small Torus persistence (linear)}
\label{fig:ratlinp}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[scale=0.33]{TorusFigs/Two/Two_LogPersistence.pdf}
\caption{\small Torus persistence (log-linear)}
\label{fig:ratlogp}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[scale=0.33]{TorusFigs/Two/Two_SFF.pdf}
\caption{\small Torus SFF (log-linear)}
\label{fig:ratsff}
\end{subfigure}
\caption{\small Plots for the 2D KAM torus with rational $\alpha=2$; $\omega_x = 1$, $\omega_y = 2$, $L_1 = L_2 = 50$; these correspond to $d=1961$. (b) The neighboring level spacings are seen to be equal but with a high degree of degeneracy. (c) The width of $f(\Delta)$ is $\gg O(1)$ (as the degeneracy increases with $L_2$). (d,e) The persistence function remains close to the lower bound (Eq.~\eqref{eq:q_cyc_persistence_bound}) and Gaussian estimate (Eq.~\eqref{eq:zGaussianEstimate}) at early times, and is not ergodic; it also shows periodic revivals. (f) The SFF is periodic. These are consistent with the classical system being non-ergodic and periodic.}
\label{fig:rat_torus}
\end{figure}
\section{Discussion}
\label{sec:discussion}
Identifying ergodicity with the persistence of cyclic permutations in Hilbert space --- which is strongly suggested by the results of Secs.~\ref{sec:quantumcyclic}, \ref{sec:TypicalCyclic} and \ref{sec:classical} --- unavoidably leads us to the conclusion that every ergodic energy subspace in an arbitrary quantum system (with unitary time evolution) admits structures resembling motion in a classical phase space. In particular, an ergodic pure state cyclic permutation resembles a discretization of first-order time evolution in an ergodic region of phase space, over (at least) the long Heisenberg time scale $t_H = 2\pi \Omega(\Sigma_d)$ (we take $t_0 = 2\pi \Omega(\Sigma_d)/d$ with $\Sigma_d$ being a shell of consecutive, sorted energy levels throughout this section). Even after the persistence has decayed to the random value $O(d^{-1/2})$ in some basis, one can transform to a different basis (typically with the same cycling operator e.g. one among the family of DFT bases with different phases in Eq.~\eqref{eq:DFTbasis} for optimal cyclic permutations) in which the persistence is again large for a similarly long time.
To determine whether this resemblance between ergodic (quantum) cyclic permutations and ergodic phase space regions can be taken more seriously requires much stronger results on mixed states in the classical limit; however, we have argued in Sec.~\ref{sec:classical} that tentatively making this identification allows one to make heuristic spectral rigidity arguments that seem consistent with numerical results for linear flows on a KAM torus. Such arguments are made easier by the fact that these tori occur in integrable systems where action-angle variables associated with ergodic sectors are explicitly known, and are likely to be more challenging for genuinely chaotic systems. It also appears likely that it is typically a DFT cyclic permutation that corresponds to an ergodic phase space region; in addition to its optimality, all energy eigenstates in the subspace have a uniform magnitude (ignoring phases) of coefficients in a DFT basis, echoing classical results~\cite{HalmosErgodic} on the constant magnitude of eigenfunctions of $U_{\mathcal{T}}$ in an ergodic region (cf. Sec.~\ref{sec:cl_ErgodicHierarchy}).
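For a spectrum with sorted levels $E_k$, the persistence in a DFT basis reduces to a pure phase sum, independent of the initial basis state. The following sketch evaluates it for a stand-in random spectrum (the expression follows from inserting DFT coefficients $e^{-2\pi i jk/d}/\sqrt{d}$, for one particular choice of the phase convention):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 512
E = np.sort(rng.uniform(0.0, float(d), d))   # stand-in spectrum, unit mean spacing
Omega = d / (E[-1] - E[0])
t0 = 2 * np.pi * Omega / d
k = np.arange(d)

def persistence(p):
    # z(p; t0) = |<C_{j+p}| U_H(p t0) |C_j>| in a DFT basis (j-independent):
    # a phase sum over the detunings E_k*t0 - 2*pi*k/d of sorted levels
    return abs(np.exp(1j * p * (E * t0 - 2 * np.pi * k / d)).sum()) / d

z0, z1 = persistence(0), persistence(1)
```

For sorted levels the detunings are small, so the persistence starts near unity and decays at a rate set by the spectral fluctuations.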
Given the near-universality (with respect to the Haar measure on the space of time evolution operators) of Wigner-Dyson spectral rigidity~\cite{Mehta, Haake} and the results of Sec.~\ref{sec:TypicalCyclic}, it follows that almost any randomly chosen quantum system is guaranteed to have this cyclic permutation structure up to a time of at least $t_H/2$. However, it is not clear if this structure is easily accessible from physically observable or local bases in which one might traditionally write the Hamiltonian of the system, an ambiguity also present in the statement of ETH~\cite{DAlessio2016, deutsch2018eth, subETH, AnzaETH}.
Irrespective of this ambiguity, we can ask about the interplay of cyclic ergodicity and aperiodicity on the one hand, and quantum thermalization in the sense of random states (including ETH) on the other, when viewed in an ergodic cyclic permutation basis. These considerations are well-defined for any such basis, but acquire further physical relevance if we assume that one such basis represents the classical phase space (e.g. an ergodic cyclic permutation representing an ergodic subset of the classical phase space). In the remainder of this section, we explore some insights that may be gained from such considerations, which may offer clues to a more rigorous study of thermalization.
\subsection{Thermalization time scales, and a late-time ergodic hierarchy}
\label{sec:thermaltimescales}
In general, quantum thermalization depends on both the initial state and the choice of basis in reference to which the randomness of the state is identified~\cite{DAlessio2016}. When either is randomly chosen in the Hilbert space of an energy shell (of $d$ levels and density of states $\Omega$), quantum thermalization occurs extremely fast over the time scale $t_0 \sim 2\pi \Omega/d$, irrespective of whether the underlying system is considered chaotic or integrable~\cite{Reimann2016}. To see a difference between systems with different dynamical properties, one would have to choose one of the (relatively rare) physically meaningful bases.
One such basis is what is usually considered the ``local'' basis~\cite{Nandkishore}, in which the Hamiltonian (or Floquet unitary) of the system is easily expressed in terms of a few variables (presumably, with second-order time evolution in some classical or thermodynamic limit). It has been seen in several cases e.g.~\cite{ProsenErgodic, ShenkerThouless, ChanScrambling, GoogleScrambling} that correlation functions thermalize rapidly for chaotic systems in such a basis. However, local basis states usually involve superpositions of the entire energy spectrum, due to which all such scrambling~\cite{ShenkerThouless} phenomena occur prior to the Thouless time ($\approx t_0$ in our case), while it is only after this time that dynamics within energy shells with a uniform density of states comes into play~\cite{ShenkerThouless, ThoulessRelaxation, WinerHydro}. In particular, the effect of spectral rigidity (such as Wigner-Dyson level statistics) and cyclic ergodicity is negligible at these times.
To see the direct impact of spectral rigidity, we consider bases $\mathcal{C}$ corresponding to cyclic permutations of low error. A basis state for any ergodic cyclic permutation, by definition (Eq.~\eqref{eq:q_cyclic_ergodicity}), cannot evolve into a random state until a time $t_0 d/2 = t_H/2$ (as verified in Figs.~\ref{fig:RMT_ergodicity} and \ref{fig:irr_torus}). This rules out quantum thermalization in such a basis until the time $t_H /2$. Moreover, for non-ergodic permutations, the persistence decays to the random value much faster, allowing a significantly earlier onset of quantum thermalization. Thus, contrary to what one might expect from local basis dynamics prior to $t_0$, quantum thermalization in the $\mathcal{C}$ basis is slower for ergodic (including chaotic) systems and faster for non-ergodic (integrable) systems\footnote{An intuitive explanation for this is as follows. From the level statistics perspective, large spectral fluctuations imply a faster dephasing of the eigenphases of $\hat{U}_H(t)$. Alternatively, by appealing to the classical limit (where it exists) and the heuristic arguments of Sec.~\ref{sec:classical}, the ergodic subsets (i.e. tori with largely uncorrelated linear flows) of an integrable system correspond to different ergodic energy subspaces that make up the energy shell, such that a DFT basis state in the shell --- which is supported on all tori --- is quickly randomized due to the uncorrelated flows of the tori.}.
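This contrast is easy to see numerically. In the sketch below (with illustrative stand-in spectra, not the models studied above), the DFT-basis persistence at $p = d/4$ remains near unity for a maximally rigid picket-fence spectrum, but has already decayed to near the random value for a Poisson spectrum:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 400

def dft_persistence(E, p):
    # persistence z(p) in a DFT basis built on the sorted spectrum E
    E = np.sort(E)
    t0 = 2 * np.pi / (E[-1] - E[0])          # t0 = 2*pi*Omega/d with Omega = d/range
    k = np.arange(d)
    return abs(np.exp(1j * p * (E * t0 - 2 * np.pi * k / d)).sum()) / d

rigid = np.arange(d, dtype=float)             # maximally rigid "picket fence"
poisson = np.cumsum(rng.exponential(1.0, d))  # uncorrelated (Poisson) spacings

p = d // 4
z_rigid, z_poisson = dft_persistence(rigid, p), dft_persistence(poisson, p)
```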
It is also worth noting that over times $t$ with $t_0 < t \ll 1/\sqrt{\varepsilon_C(1,t_0)}$ (the latter being $\sim O(t_H/\sqrt{\ln (t_H/t_0)})$ for Wigner-Dyson statistics), we have $\hat{U}_H(t_0) \approx \hat{U}_C$. Consequently, there is negligible inherent randomness in the evolution of an initial basis state up to some long time. Yet, we can characterize a form of randomness over these time scales, with some arguments involving the classical limit. If a phase space region $A$ corresponds to a degenerate mixed state $\hat{\rho}(A)$ that is diagonal in this basis, in accordance with Sec.~\ref{sec:classicallimit}, we have
\begin{equation}
\hat{\rho}(A) \approx \frac{1}{\mu(A)d} \sum_{k \in C(A)} \lvert C_k\rangle \langle C_k\rvert,
\label{eq:phasespacemixedstates}
\end{equation}
where $C(A) \subseteq \mathbb{Z}_d$ is a set of approximately $(\mu(A) d)$ indices of the $\lvert C_k\rangle$. For some regions $A$ and $B$, using Eq.~\eqref{eq:mixedstateoverlap} and approximating $\hat{U}_H(t_0)$ by $\hat{U}_C$, we have
\begin{equation}
\mu(\mathcal{T}^{p t_0}A \cap B) \approx \frac{1}{d}\left\lvert C(\mathcal{T}^{p t_0}A) \cap C(B)\right\rvert,
\label{eq:q_ergodic_hierarchy}
\end{equation}
with $\lvert \cdot \rvert$ denoting the cardinality of a finite set. The left hand side can be interpreted in terms of the ergodic hierarchy on the phase space (cf. Sec.~\ref{sec:cl_ErgodicHierarchy}) if $A$ and $B$ are chosen from a collection of physical phase space regions (i.e. made up of classically connected regions of nonzero measure) and identified with a ``physical'' subset of all possible mixed states of the form in Eq.~\eqref{eq:phasespacemixedstates}; the right hand side is the correlation function of discrete point distributions on a circle ($\mathbb{Z}_d$), corresponding to these mixed states, under a relative shift (of $\ll d$ steps). This construction can be naturally extended to higher order correlation functions, suggesting that any set of physical point distributions may be assigned a place in the ergodic hierarchy depending on their correlation functions under relative shifts on the circle.
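The right hand side can be sketched directly as a shift correlation of index sets on $\mathbb{Z}_d$. In the minimal example below, random index sets stand in for $C(A)$ and $C(B)$; for such sets the shifted overlap concentrates at the mixing value $\mu(A)\mu(B)$, whereas physical sets would exhibit whatever place in the hierarchy the dynamics dictates:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 10_000
mu_A, mu_B = 0.2, 0.3
C_A = rng.random(d) < mu_A       # indicator of the index set C(A) on Z_d
C_B = rng.random(d) < mu_B       # indicator of C(B)

def shifted_overlap(p):
    # (1/d) |C(T^{p t0} A) intersect C(B)|, modeling T^{p t0} as a p-step shift
    return float(np.logical_and(np.roll(C_A, p), C_B).mean())

corr = shifted_overlap(7)
```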
One way to view these arguments is that thermal fluctuations measured by Eq.~\eqref{eq:q_ergodic_hierarchy} and (the essentially negligible) quantum fluctuations remain distinct at these times for phase space observables, in contrast to rapidly becoming equivalent for local observables satisfying ETH~\cite{srednicki1999eth}. Level statistics appears to play no role beyond determining the appropriate time scale for this description (which is directly related to cyclic ergodicity and aperiodicity). These correlation functions instead rely on the more complicated properties of eigenstates (i.e. the representation of energy eigenstates in terms of physical phase space observables). It would be interesting to see if this allows the direct discretization of classical properties beyond ergodicity and aperiodicity through eigenstates e.g. if a mixing behavior of correlation functions of the form of Eq.~\eqref{eq:q_ergodic_hierarchy} can be directly connected to the non-existence of non-constant eigenfunctions of the classical unitary $U_{\mathcal{T}}$ of a mixing system (cf. Sec.~\ref{sec:cl_ErgodicHierarchy}) in the continuum limit~\cite{HalmosErgodic}, via the apparent randomness of the phases of energy eigenstates over the pure states in $C(A)$, $C(B)$ (see Ref.~\cite{KatokSinaiStepin} for a related discussion of ``incongruity'' of discretized eigenfunctions in the continuum limit).
\subsection{Poincar\'{e} recurrences and eigenstate thermalization}
We attempt to identify an analogue of the Poincar\'{e} recurrence theorem for cyclic permutations within the toy construction of Eq.~\eqref{eq:phasespacemixedstates}, as an interesting exercise that reveals a surprising connection to ETH. We recall that quantum recurrences~\cite{QuantumRecurrences} of phases in a subshell occur over times exponentially large in $d$~\cite{BrownSusskind2}, and are not directly relevant at earlier times.
We introduce the following ad-hoc definition based on the classical statement (cf. Sec.~\ref{sec:ErgodicReview}) of the theorem: any subspace $\Sigma(A) \in \Sigma$ with projector $\hat{\Pi}(A)$ (and density matrix $\hat{\rho}(A) \propto \hat{\Pi}(A)$) is Poincar\'{e} recurrent if for any pure state $\lvert \psi\rangle \in \Sigma(A)$, there exists some time $t$, with $t_0 \ll \lvert t\rvert \lesssim O(t_0 d)$ such that the pure state returns to have a larger-than-random overlap with the subspace:
\begin{equation}
\Tr\left[\hat{\Pi}(A) \hat{U}_H(t) \lvert \psi\rangle \langle \psi\rvert \hat{U}_H^\dagger(t)\right] \gg O(d^{-1}).
\end{equation}
We note the similarity of the restriction on the range of $t$ to that in the definition of cyclic aperiodicity (Eq.~\eqref{eq:q_cyclic_aperiodicity}).
For simplicity, we assume that it is sufficient to consider DFT cyclic permutations as representing any regions in phase space. We also assume that the persistence amplitude is the only greater-than-random component of any state with respect to a DFT basis of interest (in other words, none of the terms $\varepsilon_C^{1/2}(p,t_0) \nu_m(p)$ exceed $O(d^{-1/2})$ in magnitude), as is typically the case for e.g. Wigner-Dyson or Poisson statistics (partly due to Eq.~\eqref{eq:nu_sff}). Let $t_R(\mathcal{C})$ then represent the randomization time of a DFT cyclic permutation $\mathcal{C}$ --- the smallest time for which $z(t_R/t_0, t_0) = O(d^{-1/2})$.
The time $t_R$ determines the minimum dimension of Poincar\'{e} recurrent subspaces that have the diagonal form in Eq.~\eqref{eq:phasespacemixedstates}. Specifically,
\begin{equation}
\dim \Sigma(A) \geq \frac{t_0 d}{t_R(\mathcal{C})},
\end{equation}
if $\Sigma(A)$ is a Poincar\'{e} recurrent subspace spanned by a subset of $\mathcal{C}$. Quasi-periodic cyclic permutations have $t_R(\mathcal{C}) > t_0 d$ and every subspace is recurrent; for ergodic ones, $t_R(\mathcal{C}) > t_0 d/2$, and any subspace with $\dim \Sigma(A) \geq 2$ (in other words, every subspace that is not a pure state) is recurrent.
It follows that the only DFT cyclic permutations that can serve as good candidates for defining phase space regions as in Eq.~\eqref{eq:phasespacemixedstates}, while remaining Poincar\'{e} recurrent for regions containing more than one pure state ($\mu(A) > 1/d$)\footnote{If we imagine pure states as corresponding to points with negligible ($= 1/d$) phase space measure, they are effectively measure zero sets which are not required to satisfy Poincar\'{e} recurrence even classically. More realistically, we should expect recurrence times to increase with decreasing measure.}, are the ergodic ones. In other DFT bases, Poincar\'{e} recurrence fails for regions $A$ with some small volume $1/d < \mu(A) \ll 1$. Thus, if we want to construct a fictitious phase space for some energy subspace $\Sigma_d$ that satisfies Poincar\'{e} recurrence for arbitrary regions (of volume greater than the smallest value $1/d$), there are two possibilities:
\begin{enumerate}
\item $\Sigma_d$ is itself ergodic, and the corresponding ergodic (DFT) cyclic permutation $\mathcal{C}$ allows the definition of mixed states corresponding to Poincar\'{e} recurrent phase space regions according to Eq.~\eqref{eq:phasespacemixedstates}.
\item $\Sigma_d$ must be decomposed into $M$ ergodic subspaces $\Sigma_{d_1}(1),\ldots,\Sigma_{d_M}(M)$, each spanned by $d_k$ energy levels (respectively) that add up to $d$. Poincar\'{e} recurrent phase space regions can then be defined according to Eq.~\eqref{eq:phasespacemixedstates} on the combination of their respective ergodic (DFT) cyclic permutations $\mathcal{C}(1),\ldots,\mathcal{C}(M)$. The previous case corresponds to $M=1$.
\end{enumerate}
In either case, the projectors $\hat{\Pi}(A)$ can be written as
\begin{equation}
\hat{\Pi}(A) = \sum_{m=1}^{M}\sum_{k\in C_m(A)}\lvert C_k(m)\rangle\langle C_k(m)\rvert
\end{equation}
where each $C_m(A)$ is a set of $n_m(A) = \lvert C_m(A)\rvert$ indices of elements of $\mathcal{C}(m)$, and $\sum_m n_m(A) = n(A) \approx \mu(A) d$.
The connection to ETH emerges if one asks for the matrix elements of these projectors in the energy eigenbasis. Let $\lbrace \lvert E_k(m)\rangle\rbrace_{k=0}^{d_m-1}$ be the energy eigenstates contained in $\Sigma_{d_m}(m)$, whose form is explicitly known as DFTs of the $\lvert C_k(m)\rangle$. Assuming that (in the generic case) each $C_m(A)$ is randomly distributed on $\mathcal{C}(m)$ (so the phases of the DFT can be taken to be random for $1 \ll n_m \ll d_m$), we have
\begin{equation}
\langle E_{k}(m)\rvert \hat{\Pi}(A) \lvert E_{j}(m)\rangle = \frac{n_m(A)}{d_m}\delta_{kj} + O\left(\frac{\sqrt{n_m(A)}}{d_m}\right) R_{kj}(m),
\label{eq:RecurrentProjectorMatrices}
\end{equation}
for some random $d_m \times d_m$ Hermitian matrix $R_{kj}(m)$ with $O(1)$ matrix elements (with weak correlations ensuring $\hat{\Pi}^2 = \hat{\Pi}$). On the other hand, the statement of ETH for an $n$-dimensional projector $\hat{\Pi}$ may be motivated by random matrix arguments (e.g. similar to Refs.~\cite{DAlessio2016, subETH}; see also Ref.~\cite{AnzaETH} for a discussion of the role of degenerate projectors), giving
\begin{equation}
\langle E_k\rvert \hat{\Pi}\lvert E_j\rangle = \frac{n}{d}\delta_{kj} + O\left(\frac{\sqrt{n}}{d}\right)R_{kj},
\label{eq:projectorETH}
\end{equation}
in the full energy subspace $\Sigma_d$. We see that Eq.~\eqref{eq:RecurrentProjectorMatrices} and Eq.~\eqref{eq:projectorETH} are guaranteed to agree for $M=1$. For $M>1$, the first (diagonal) terms of Eq.~\eqref{eq:RecurrentProjectorMatrices} and \eqref{eq:projectorETH} can only agree through a statistically unlikely coincidence $n_m/d_m = n/d$; even in the rare instance that this holds, the second (fluctuation) term of the former typically has matrix elements of size $O[(\sqrt{M n})/d]$ (or zero when $k,j$ correspond to different $\Sigma_{d_m}(m)$), and ETH can be satisfied only for $M=O(1)$.
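The scaling of Eq.~\eqref{eq:RecurrentProjectorMatrices} for $M=1$ can be checked numerically: projecting onto a random $n$-element subset of a DFT basis and transforming to the Fourier-conjugate (``energy'') basis gives exactly $n/d$ on the diagonal (since every DFT coefficient has magnitude $1/\sqrt{d}$) and off-diagonal fluctuations of size $O(\sqrt{n}/d)$ (a sketch with illustrative $d$ and $n$):

```python
import numpy as np

rng = np.random.default_rng(4)
d, n = 1024, 64
idx = rng.choice(d, size=n, replace=False)       # random index set C(A)

F = np.fft.fft(np.eye(d), axis=0) / np.sqrt(d)   # unitary DFT matrix
Pi_C = np.zeros((d, d))
Pi_C[idx, idx] = 1.0                             # projector, diagonal in the C basis
Pi_E = F @ Pi_C @ F.conj().T                     # same projector in the energy basis

diag = np.real(np.diag(Pi_E))                    # exactly n/d for a DFT basis
off = Pi_E - np.diag(np.diag(Pi_E))
off_rms = float(np.sqrt(np.mean(np.abs(off) ** 2)))   # ~ sqrt(n)/d
```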
We have essentially argued, under some simplifications i.e. focusing on DFT bases with typical random parts and diagonal projectors, that generic Poincar\'{e} recurrent projectors satisfy ETH only in an energy subspace that admits an ergodic cyclic permutation (and in somewhat less generic cases, up to a small number $M=O(1)$ of ergodic cyclic permutations).
The above argument could offer hints towards connecting spectral rigidity and ETH, generally seen as distinct manifestations of random matrix behavior~\cite{DAlessio2016}. To do so, it would be necessary to determine if there is any link between a Poincar\'{e} recurrence requirement for the phase space observables considered here (relevant for $t>t_0$ dynamics), and the \textit{local} or few-body observables (relevant for $t<t_0$) that are the subject of conventional ETH~\cite{deutsch1991eth,srednicki1994eth, srednicki1999eth, deutsch2018eth,DAlessio2016, subETH, Nandkishore}.
\section{Conclusions}
We have identified a fully quantum notion of ergodicity in the Hilbert space that can be loosely interpreted as a quantum version of the ``visiting (almost) every phase space point''~\cite{Sinai1976, HalmosErgodic} sense of ergodicity. We have shown that energy level statistics determines whether this form of ergodicity is satisfied by any individual system. Individual systems with Wigner-Dyson level statistics satisfy this property, but so do quantized classically ergodic systems without such statistics, as typified by irrational flows on a KAM torus. Random matrix behavior is therefore a sufficient but not necessary condition for this form of ergodicity. We also argued that spectral rigidity influences the thermalization of ``phase space'' observables, potentially admitting a late-time description of thermalization in terms of an ergodic hierarchy, while the connection to local observables is not yet obvious.
We recall that one of our motivations mentioned in Sec.~\ref{sec:motivation} was a semiclassical explanation of spectral rigidity that does not rely on a K-mixing classical limit. Sec.~\ref{sec:classical} demonstrates that an approach based on cyclic permutations appears to satisfy this criterion even for merely ergodic systems without periodic orbits, and could perhaps even be made mathematically rigorous in some systems if the classical limit is better understood (which has previously been possible only for eigenvectors in a local basis~\cite{Zelditch, Anantharaman}). The fact that it appears to differ significantly from traditional semiclassical periodic orbit arguments~\cite{HOdA, BerrySpectralRigidity, HaakePO, HaakePO2, Haake} warrants some discussion. Approximately periodic structures of any given period can indeed be constructed in an ergodic cyclic permutation basis (e.g. unbiased pure states with regularly spaced support, perhaps excluding a small number of energy levels for divisibility purposes), and may contribute to the calculation of the SFF like a coherent effect of closed Feynman paths in quantum chaotic systems~\cite{KosProsen2018, ChanScrambling, GarrattChalker} or Haar random unitaries~\cite{Weingarten1, Weingarten2}. However, these are not necessarily related to classical periodic \textit{orbits} or trajectories in phase space, and could involve superpositions of actual trajectories. Perhaps a closer connection to the periodic orbit approach would be through periodic structures in the Hilbert space representation of classical mechanics~\cite{HalmosErgodic, Sinai1976, SinaiCornfield}.
The fact that cyclic ergodicity appears to successfully characterize the spectral rigidity of KAM tori, where the conventional approach based on random matrices does not work, may also allow for a more precise study of the quantum analogue of KAM theory~\cite{Ott} --- the study of ergodicity under perturbations to integrable systems. At present, we are far from a detailed understanding of such perturbations~\cite{deutsch2018eth, KAM2} in quantum mechanics, particularly for many-particle systems. The development of ergodicity in integrable systems with a large number of particles under vanishingly small perturbations is believed to be essential for the applicability of statistical mechanics~\cite{deutsch2018eth}.
Cyclic permutations may offer other possibilities for transplanting precise ideas from ergodic theory into quantum mechanics. As one example, it would be interesting to explore whether a quantum Kolmogorov-Sinai (KS) entropy can be directly related to level statistics (see e.g. Ref.~\cite{AlickiFannes} for candidate definitions) and perhaps obtain a precise characterization of quantum \textit{chaos} as opposed to mere ergodicity beyond intuitive notions; classically, the KS entropy is closely related to Lyapunov exponents that characterize chaotic dynamics~\cite{PlatoErgodic, Ott}, and can be accessed through a certain type of cyclic permutations (and generalizations)~\cite{KatokStepin2, SinaiCornfield, KatokSinaiStepin}. In the other direction, it may also be interesting to see if classical cyclic permutations can be optimized in some sense for a given system, as done for quantum cyclic permutations in Sec.~\ref{sec:DFTsAreOptimal}.
\subsubsection*{Acknowledgments}
This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award No. DE-SC0001911. We thank Yunxiang Liao, Laura Shou and Michael Winer for useful discussions.
\printbibliography
\newpage
\begin{appendices}
\section{Classical cyclic permutations (review)}
\label{app:cl_erg_errors}
This proof essentially follows Ref.~\cite{KatokStepin2}. First, we discuss the bound for cyclic ergodicity. Assume that every element of $\lbrace\mathcal{P}_j\rbrace_{j=1}^{M_C}$ completely contains at least one element $C_{p(j)} \subseteq \mathcal{P}_j$ of the decomposition. As $\mathcal{T}^tC_{p(j)}\in \mathcal{P}_j$ for all $t$ by definition, we must have $\mu[(\mathcal{T}^{[p(j+1)-p(j)]t_0}C_{p(j)})\cap C_{p(j+1)}] = 0$. An important exception to this behavior is when $M_C=1$, where there is no reason to impose a vanishing intersection. Thus,
\begin{align}
\frac{1}{2}\sum_{j=1}^{M_C}\ \mu\left[(\mathcal{T}^{[p(j+1)-p(j)]t_0}C_{p(j)})\symdiff C_{p(j+1)}\right] = \frac{1}{n}M_C, \text{ for } M_C \geq 2.
\label{eq:cl_erg_bound}
\end{align}
Now, we need to know how the error in an $\ell$-step time evolution $(\mathcal{T}^{\ell t_0} C_k) \symdiff C_{k+\ell}$ is related to the error $(\mathcal{T}^{t_0}C_m)\symdiff C_{m+1}$ made in approximating each step.
For this, we note that
\begin{align}
(\mathcal{T}^{(m+1)t_0} A) - C_{m+1} &\subseteq \left[(\mathcal{T}^{t_0}C_m)-C_{m+1}\right]\cup\mathcal{T}^{t_0}\left[(\mathcal{T}^{mt_0}A)-C_m\right],\ \forall\ A \subseteq \mathcal{P} \label{eq:cl_err_recurrence}\\
(\mathcal{T}^{\ell t_0} C_k) \symdiff C_{k+\ell}] &\leq \sum_{m=1}^\ell\ \mu[(\mathcal{T}^{t_0}C_{k+m-1})\symdiff C_{k+m}],
\label{eq:cl_err_cascade}
\end{align}
where the second line follows from recursively applying the first line (with indices shifted by $k$) to $A = C_k$. Using this in Eq.~\eqref{eq:cl_erg_bound}, one obtains $\overline{\epsilon}_C \geq (M_C/n)$ for $M_C\geq 2$.
Cyclic aperiodicity is more straightforward. We have $\mu[(\mathcal{T}^{nt_0}C_k) \cap C_k] = 0$ (up to possible corrections that vanish as $n\to\infty$), which implies $\overline{\epsilon}_C \geq 1/n$ from Eq.~\eqref{eq:cl_err_cascade}. More generally, we can relax the requirement of aperiodicity to only after $r$ returns, $\mu[(\mathcal{T}^{rnt_0}C_k) \cap C_k] = 0$, which gives $\overline{\epsilon}_C \geq 1/(rn)$.
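The cascade bound can be made concrete for the simplest example, a circle rotation with the equal-arc partition (illustrative parameters; here the one-step error is the arc mismatch $2\epsilon$, and the $\ell$-step error saturates the bound until the shifted and target arcs become disjoint):

```python
n = 64
eps = 1e-3                        # per-step mismatch (illustrative)
rot = 1.0 / n + eps               # rotation of the circle per time step t0

def symdiff_err(steps):
    # mu[(T^steps C_0) symdiff C_steps] for arcs C_k = [k/n, (k+1)/n):
    # the two arcs of width 1/n are offset by steps*eps (mod 1),
    # and the symmetric difference saturates at 2/n once they are disjoint
    offset = (steps * rot - steps / n) % 1.0
    offset = min(offset, 1.0 - offset)
    return 2.0 * min(offset, 1.0 / n)

one_step = symdiff_err(1)
bounds_hold = all(symdiff_err(l) <= l * one_step + 1e-12 for l in range(1, 21))
```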
\section{Quantum cyclic permutations}
\subsection{Fastest decay of persistence}
\label{app:q_erg_errors}
Given a cyclic permutation basis $\mathcal{C} = \lbrace \lvert C_j\rangle\rbrace_{j=0}^{d-1}$, consider some initial state $\lvert C_k\rangle$. After $p$ steps of time evolution, it evolves into
\begin{equation}
\hat{U}_H(p t_0) \lvert C_k\rangle = z_k(p;t_0)e^{i\phi_k(p;t_0)}\lvert C_{k+p}\rangle+\sqrt{1- z_k^2(p; t_0)}\lvert \nu_{k+p}^{(k)}\rangle,
\end{equation}
where $\lvert\nu_{k+p}^{(k)}\rangle$ is some normalized vector orthogonal to $\lvert C_{k+p}\rangle$, and $\phi_k(p; t_0)$ is an unimportant phase. This leads to a recurrence relation for the persistence amplitudes,
\begin{align}
z_{k}(p+1; t_0)e^{i\phi_k(p+1;t_0)} &= \langle C_{k+p+1}\rvert \hat{U}_H(t_0)\hat{U}_H(p t_0)\lvert C_k\rangle \nonumber \\
&= z_k(p;t_0)e^{i\phi_k(p;t_0)} \langle C_{k+p+1}\rvert \hat{U}_H(t_0)\lvert C_{k+p}\rangle \nonumber \\
&\hphantom{z_k(p;t_0)}+\sqrt{1-z_k^2(p;t_0)}\langle C_{k+p+1}\rvert \hat{U}_H(t_0)\lvert \nu_{k+p}^{(k)}\rangle.
\end{align}
Using the triangle inequality for the magnitudes of these vectors gives
\begin{align}
&\left\lvert z_k(p;t_0) z_{k+p}(1;t_0) - \sqrt{1-z_k^2(p;t_0)}\sqrt{1-z_{k+p}^2(1;t_0)}\right\rvert \nonumber \\
&\leq z_k(p+1;t_0) \nonumber \\
&\leq \left\lbrace z_k(p;t_0)z_{k+p}(1;t_0) + \sqrt{1-z_k^2(p;t_0)}\sqrt{1-z_{k+p}^2(1;t_0)}\right\rbrace,
\label{eq:q_err_ineq_1}
\end{align}
on noting that $\hat{U}_H(t_0)\lvert \nu_{k+p}^{(k)}\rangle$ is orthogonal to $\hat{U}_H(t_0)\lvert C_{k+p}\rangle$, and consequently the inner product of the former with $\lvert C_{k+p+1}\rangle$ cannot exceed $\sqrt{1-z_{k+p}^2(1;t_0)}$ in magnitude.
The above inequalities can be simplified by defining $\theta_k(p) = \arccos z_k(p;t_0) \in [0,\pi/2]$. In terms of these variables, Eq.~\eqref{eq:q_err_ineq_1} becomes
\begin{equation}
\min\left\lbrace \theta_k(p)+\theta_{k+p}(1), \frac{\pi}{2}\right\rbrace \geq \theta_k(p+1) \geq \lvert \theta_k(p)-\theta_{k+p}(1) \rvert.
\end{equation}
Summing $\theta_k(p+1)-\theta_k(p)$ from $p = p_1$ to $p = p_2$ gives
\begin{align}
\sgn(p_2)&\min\left\lbrace\theta_k(p_2),\frac{\pi}{2}\right\rbrace-\sgn(p_1)\min\left\lbrace\theta_k(p_1), \frac{\pi}{2}\right\rbrace \nonumber \\
&\leq (\sgn(p_2)-\sgn(p_1))\sum_{p=p_1}^{p_2}\theta_{k+p}(1),
\end{align}
which becomes Eq.~\eqref{eq:q_persistence_bound} when expressed in terms of the $z_k(p;t_0)$. We see that the bound is saturated when $\hat{U}_H(t_0)$ at the $p$-th step acts like a 2D rotation by the angle $\theta_{k+p}(1)$ in the same direction as the previous steps.
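The two-sided bound on $\theta_k(p+1)$ holds for an arbitrary unitary and orthonormal basis, and can be verified numerically. In this sketch (the dimension, seed, and the Givens-rotation construction of a random unitary are incidental choices), the $\theta_k(p)$ are computed in the computational basis:

```python
import cmath, math, random

random.seed(0)
d = 5

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(d)) for j in range(d)]
            for i in range(d)]

def identity():
    return [[1.0 + 0j if i == j else 0j for j in range(d)] for i in range(d)]

def random_unitary():
    # product of random complex Givens rotations: unitary by construction
    U = identity()
    for a in range(d):
        for b in range(a + 1, d):
            t = random.uniform(0, 2 * math.pi)
            ph = cmath.exp(1j * random.uniform(0, 2 * math.pi))
            G = identity()
            G[a][a], G[a][b] = math.cos(t), -math.sin(t) * ph.conjugate()
            G[b][a], G[b][b] = math.sin(t) * ph, math.cos(t)
            U = matmul(G, U)
    return U

U = random_unitary()

def z(k, p):
    # z_k(p): overlap of U^p |C_k> with |C_{k+p}>, C_k = computational basis
    Up = identity()
    for _ in range(p):
        Up = matmul(U, Up)
    return abs(Up[(k + p) % d][k])

def theta(k, p):
    return math.acos(min(1.0, z(k, p)))

ok = True
for k in range(d):
    for p in range(1, 6):
        lower = abs(theta(k, p) - theta((k + p) % d, 1))
        upper = min(theta(k, p) + theta((k + p) % d, 1), math.pi / 2)
        ok = ok and (lower - 1e-9 <= theta(k, p + 1) <= upper + 1e-9)
```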
\subsection{Optimal errors for cyclic permutations}
\label{app:q_cyc_dft}
When $\hat{U}_C$ is a cycling operator, $\hat{U}_C^p$ is generally a permutation operator on $\mathcal{C} = \lbrace \lvert C_k\rangle\rbrace_{k=0}^{d-1}$ that can be decomposed into a direct sum of cycling operators, each acting on a separate $[d/\mathcal{N}(d,p)]$-sized subset of $\mathcal{C}$:
\begin{equation}
\hat{U}_C^p = \bigoplus_{j=1}^{\mathcal{N}(d,p)} \ucj{j}(p).
\end{equation}
The number of cycling operators $\mathcal{N}(d,p)$ is given by the greatest common divisor of $p$ and $d$; in particular, $\mathcal{N}(d,p) = 1$ when $p$ and $d$ are coprime, including $p = 1$. This is most easily seen in the eigenvalue structure of $\hat{U}_C^p$, which consists of $\mathcal{N}(d,p)$ identical (degenerate) sets of distinct $[d/\mathcal{N}(d,p)]$-th roots of unity. It is also convenient to consider \textit{twisted} versions of $\hat{U}_C^{p}$, in which each cycle acquires an additional phase $\alpha_j(w)$:
\begin{equation}
w\lbrace\hat{U}_C^{p}\rbrace \equiv \bigoplus_{j=1}^{\mathcal{N}(d,p)} e^{i\alpha_j(w)}\ucj{j}(p).
\end{equation}
It is worth noting that the twisting functional $w$ affects only the eigenvalues of $\hat{U}_C^p$, lifting the degeneracy for most values of the $\alpha_j(w)$, while preserving at least one complete orthonormal set of its eigenvectors. Also, the $p$-step persistence amplitudes $z_k(p; t_0)$ are invariant under the action of $w$.
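The counting $\mathcal{N}(d,p) = \gcd(d,p)$ is easy to confirm at the level of the basis labels, on which $\hat{U}_C^p$ acts as the permutation $k \to k+p \pmod{d}$. A minimal sketch (the choice $d = 12$ is arbitrary):

```python
from math import gcd

def cycles_of_power(d, p):
    """Cycle decomposition of the permutation k -> (k + p) mod d,
    i.e. of U_C^p acting on the basis labels."""
    seen, cycles = set(), []
    for start in range(d):
        if start in seen:
            continue
        cyc, k = [], start
        while k not in seen:
            seen.add(k)
            cyc.append(k)
            k = (k + p) % d
        cycles.append(cyc)
    return cycles

d = 12
# gcd(d, p) cycles, each of length d / gcd(d, p), for every p
ok = all(
    len(cycles_of_power(d, p)) == gcd(d, p)
    and all(len(c) == d // gcd(d, p) for c in cycles_of_power(d, p))
    for p in range(1, d)
)
```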
\subsubsection{Optimizing the error via the trace inner product}
The minimum persistence at a given $p$ cannot exceed the mean persistence at that time:
\begin{equation}
\min_{j \in \mathbb{Z}_d} z_j(p; t_0) \leq \frac{1}{d}\sum_{k=0}^{d-1}z_k(p; t_0).
\label{eq:mean_trace_bound}
\end{equation}
We also have the inequality,
\begin{equation}
\left\lvert \frac{1}{d}\Tr\left[w\lbrace\hat{U}_C^p\rbrace^\dagger \hat{U}_H(p t_0)\right]\right\rvert \leq \frac{1}{d}\sum_{k=0}^{d-1} \left\lvert \langle C_k\rvert (\hat{U}_C^{p})^\dagger\hat{U}_H(p t_0)\lvert C_k\rangle\right\rvert,
\label{eq:trace_vs_mean}
\end{equation}
for any $w$, where the right hand side is just the mean persistence at $p$, expanded out.
Let us assume that for every $\mathcal{C}$ and given a $p$, there exists a unitary $\hat{V}_C$ and a twisting functional $w$ such that
\begin{equation}
\frac{1}{d}\Tr\left[\hat{V}_C w\lbrace\hat{U}_C^p\rbrace^{\dagger}\hat{V}^\dagger_C \hat{U}_H(p t_0) \right] = \frac{1}{d}\sum_{k=0}^{d-1} \left\lvert \langle C_k\rvert w\lbrace\hat{U}_C^ p\rbrace^\dagger \hat{U}_H(p t_0)\lvert C_k\rangle\right\rvert.
\label{eq:mean_trace_assumption}
\end{equation}
If this holds, then on account of Eq.~\eqref{eq:trace_vs_mean} and the invariance of the $z_k(p; t_0)$ under the action of $w$,
\begin{equation}
\max_{\hat{V} \in \mathcal{U}(d)} \left\lvert \frac{1}{d}\Tr\left[\hat{V} w\lbrace\hat{U}_C^p\rbrace^{\dagger}\hat{V}^\dagger \hat{U}_H(p t_0) \right]\right\rvert = \max_{\text{all}\ \mathcal{C}} \frac{1}{d}\sum_{k=0}^{d-1}z_k(p; t_0),
\label{eq:b2maximizationconstraint}
\end{equation}
where $w$ is chosen so that Eq.~\eqref{eq:mean_trace_assumption} is satisfied for some $\mathcal{C}$ that maximizes the right hand side of Eq.~\eqref{eq:b2maximizationconstraint}. This follows as $\hat{V}_C$ becomes a special case of $\hat{V}$, and the only freedom to vary the orthonormal basis $\mathcal{C}$ is through its reorientations in Hilbert space --- precisely given by all possible unitary transformations $\hat{V} \in \mathcal{U}(d)$ acting on the energy subspace $\Sigma_d$.
Now, we need to establish that Eq.~\eqref{eq:mean_trace_assumption} is indeed valid, and identify $w$. It is convenient to consider the two cases of nondegenerate and degenerate $\hat{U}_C^p$ separately.
\begin{enumerate}
\item \textbf{Case 1: $\lvert p\rvert$ and $d$ are coprime.}
In this case, $\hat{U}_C^p$ is itself a cycling operator. We separate the persistence inner product into an amplitude and phase,
\begin{equation}
\langle C_k\rvert (\hat{U}_C^{p})^\dagger\hat{U}_H(p t_0)\lvert C_k\rangle = z_k(p;t_0)e^{i\phi_k(p;t_0)}.
\label{eq:amplitudephase}
\end{equation}
Let $\overline{\phi}(p;t_0) = \sum_{k=0}^{d-1}\phi_k(p; t_0)$. Define a new cyclic permutation $\mathcal{C}'$ with basis vectors
\begin{equation}
\lvert C'_k\rangle = e^{i\sum_{j=-1}^{(j+1)p = k} \left\lbrace\phi_{jp}(p;t_0) - [\overline{\phi}(p; t_0)/d]\right\rbrace}\lvert C_k\rangle,
\end{equation}
where $\sum_{j=-1}^{(j+1)p=k} \phi_{jp} = \phi_{-p} + \phi_{0} + \phi_{p}+\ldots+\phi_{k-p}$ is a sum over the index with steps of size $p$, and subtracting $\overline{\phi}(p; t_0)/d$ from each term ensures the single-valuedness of the phases in the new basis. This induces a unitary transformation $\hat{U}_C \to \hat{U}_{C'} = \hat{V}_C \hat{U}_C \hat{V}_C^\dagger$ (where $\hat{U}_{C'}$ is required to satisfy Eq.~\eqref{eq:amplitudephase} with the $\lvert C_k\rangle$ replaced by $\lvert C'_k\rangle$), such that
\begin{equation}
\langle C_k\rvert \hat{V}_C(\hat{U}_C^{p})^\dagger\hat{V}_C^\dagger \hat{U}_H(p t_0)\lvert C_k\rangle = z_k(p;t_0)e^{i\overline{\phi}(p;t_0)/d}.
\label{eq:coprime_phase_adjustment}
\end{equation}
We see that Eq.~\eqref{eq:mean_trace_assumption} is then satisfied for a twisting functional $w$ with $\alpha_1(w) = -\overline{\phi}(p; t_0)/d$ (however, this phase is inconsequential in this case, being absorbed by the absolute value in Eq.~\eqref{eq:b2maximizationconstraint}).
\item \textbf{Case 2: $\lvert p\rvert$ and $d$ have a nontrivial common factor.} For this case, we can ensure that the analogue of Eq.~\eqref{eq:mean_trace_assumption} for each $[d/\mathcal{N}(d,p)]$-element cycle is satisfied following the procedure leading up to Eq.~\eqref{eq:coprime_phase_adjustment}, with the total phase $\overline{\phi}(p; t_0)$ replaced by that corresponding to the respective cycle, $\overline{\phi}_j(p; t_0)$. Then, it follows that Eq.~\eqref{eq:mean_trace_assumption} is also satisfied overall for $\hat{U}_C^p$ with a twisting functional $w$ given by $\alpha_j(w) = -\overline{\phi}_j(p; t_0)/d$.
\end{enumerate}
Thus, from Eq.~\eqref{eq:b2maximizationconstraint}, we can maximize the mean persistence by maximizing the magnitude of the trace
\begin{equation}
f_p(\hat{U}_C) = \left\lvert \Tr\left[w\lbrace \hat{U}_C^p\rbrace^\dagger \hat{U}_H(p t_0)\right]\right\rvert
\label{eq:costfunc}
\end{equation}
with respect to reorientations $\hat{U}_C \to \hat{V} \hat{U}_C \hat{V}^\dagger$. In Sec.~\ref{sec:inflection}, this maximum is shown to occur for some $\hat{U}_C$ satisfying
\begin{equation}
\left[\hat{U}_H(p t_0), w\lbrace\hat{U}_C^p\rbrace^{\dagger} \right] = 0,
\label{eq:dftcommutator1}
\end{equation}
as long as $f_p(\hat{U}_C) \geq \sqrt{d(d-2)}$ at some such point.
If $\hat{U}_H(p t_0)$ and $w\lbrace \hat{U}_C^p\rbrace$ both have nondegenerate eigenvalues, each has a unique set of $d$ eigenvectors corresponding to the respective eigenvectors of $\hat{U}_H(t_0)$ and $\hat{U}_C$. Eq.~\eqref{eq:dftcommutator1} then implies that both sets of eigenvectors are identical, and $\hat{U}_C$ must commute with $\hat{U}_H(t_0)$ to achieve a local extremum of the mean persistence.
When there are degeneracies (in any of $\hat{U}_H(t_0)$, $\hat{U}_H(p t_0)$ or $w\lbrace \hat{U}_C^{p}\rbrace$), we can nevertheless reach a similar conclusion by infinitesimally breaking the degeneracies. We can define $\uhd{(\delta_u)} = \hat{U}_H(p t_0)e^{i\delta_u \hat{Y}}$ where $\delta_u \to 0$ and $\hat{Y}$ is any finite Hermitian operator (i.e. with finite matrix elements in any orthonormal basis), such that $\uhd{(\delta_u)}$ has nondegenerate eigenvalues when $\delta_u \neq 0$. Similarly, we define $w_{(\delta_w)}$ by $\alpha_j(w_{(\delta_w)}) = \alpha_j(w)+\delta_w \gamma_j$ with $\delta_w \to 0$, with the $\gamma_j$ chosen so as to ensure the nondegeneracy of the eigenvalues of $w\lbrace \hat{U}_C^p\rbrace$ (essentially, infinitesimally twisting any degenerate $e^{i \alpha_{j}(w)}\ucj{j}(p)$, $e^{i \alpha_{k}(w)}\ucj{k}(p)$, $\ldots$ relative to each other). Re-expressing Eq.~\eqref{eq:b2maximizationconstraint} in terms of these variables gives
\begin{equation}
\max_{\hat{V} \in \mathcal{U}(d)} \left\lvert \frac{1}{d}\Tr\left[\hat{V} w_{(\delta_w)}\lbrace\hat{U}_C^p\rbrace^{\dagger}\hat{V}^\dagger \uhd{(\delta_u)}(p t_0) \right]\right\rvert = \max_{\text{all}\ \mathcal{C}} \frac{1}{d}\sum_{k=0}^{d-1}z_k(p; t_0)+O(\delta_u,\delta_w),
\label{eq:b2degenerateconstraint}
\end{equation}
where $O(\delta_u,\delta_w)$ consists of terms of the form $(\delta_u)^a (\delta_w)^b y_{ab}$ with $a,b \geq 1$. As with Eq.~\eqref{eq:dftcommutator1}, the solution to the maximization on the left hand side must be among its local extrema, given by
\begin{equation}
\left[\uhd{(\delta_u)}(p t_0), w_{(\delta_w)}\lbrace\hat{U}_C^p\rbrace^{\dagger} \right] = 0.
\label{eq:dftcommutator2}
\end{equation}
Now, each nondegenerate operator $\uhd{(\delta_u)}(p t_0)$ and $w_{(\delta_w)}\lbrace\hat{U}_C^p\rbrace^{\dagger}$ has a unique set of $d$ eigenvectors, which the above equation asserts are identical. We can choose $\hat{Y}$ and $\gamma_j$ to break the degeneracy of $\hat{U}_H(p t_0)$ and $w\lbrace \hat{U}_C^p \rbrace$ in any desired way, i.e. to pick any complete orthonormal subset of each set of eigenvectors. By Eq.~\eqref{eq:b2degenerateconstraint}, any such choice is equally good for maximizing the mean persistence in the $\delta_u,\delta_w \to 0$ limit. In particular, we can pick $w_{(\delta_w)}\lbrace\hat{U}_C^p\rbrace^{\dagger}$ so that its eigenvectors are identical to those of $\hat{U}_C$; similarly, we can choose $\hat{Y}$ so that the eigenvectors of $\uhd{(\delta_u)}(p t_0)$ are identical to any complete orthonormal set of eigenvectors of $\hat{U}_H(t_0)$. In other words, any choice of degeneracy breaking in the neighborhood of degenerate operators only infinitesimally affects the local extrema of the left hand side of Eq.~\eqref{eq:b2degenerateconstraint}.
Thus, the right hand side of Eq.~\eqref{eq:mean_trace_bound} attains its global maximum when the eigenvectors of $\hat{U}_C$ are fixed to be any complete orthonormal set of eigenvectors of $\hat{U}_H(t_0)$, with the only freedom remaining in the assignment of the distinct eigenvalues of $\hat{U}_C$ to these eigenvectors. This can be concisely expressed as follows: the global maximum of the mean persistence occurs among the solutions to
\begin{equation}
\lim_{\delta \to 0}\left[\hat{U}_H(t_0)e^{i\delta\hat{Y}},\hat{U}_C\right] = 0,
\end{equation}
for any Hermitian $\hat{Y}$. For any $\hat{U}_C$ satisfying this property, all the $z_j(p; t_0)$ are equal at any given $p$. It follows that $\min_j z_j(p; t_0)$ is also maximized, and the $p$-step error minimized, by the same $\hat{U}_C$ that maximizes the mean persistence. From the requirement $f_p(\hat{U}_C) \geq \sqrt{d(d-2)}$, we get the condition $\varepsilon_C(p,t_0) \leq (2/d)$ on such a minimum of the error.
\subsubsection{Local extrema, and the global maximum for large persistence amplitudes}
\label{sec:inflection}
For simplicity, let $\hat{U}_1 = w\lbrace \hat{U}_C^p\rbrace$ and $\hat{U}_2 = \hat{U}_H(p t_0)$. We seek stationary points of the real valued function (from Eq.~\eqref{eq:costfunc})
\begin{equation}
\left\lvert \Tr\left(\hat{U}_1^\dagger \hat{U}_2\right)\right\rvert
\label{eq:costfunc2}
\end{equation}
with respect to small reorientations of $\hat{U}_1$ by $\hat{V}$, to first order. This would yield all the local maxima and minima (as well as saddle and inflection points) of the function except the global minima when the function attains the value $0$, where it is not differentiable. We write $\hat{V} = e^{i\hat{X}}$ with $\hat{X}$ near $0$, and require the $O(\hat{X})$ term in $\Tr[\hat{V}\hat{U}_1^\dagger \hat{V}^\dagger \hat{U}_2]$ to be orthogonal, in the complex plane, to the $O(1)$ term (so that the first variation corresponds only to a change in phase and not in magnitude; alternatively, one could directly extremize the square of Eq.~\eqref{eq:costfunc2}). This gives
\begin{equation}
\Tr\left(\hat{X}\left[ \hat{U}_1^\dagger, \hat{U}_2\right]\right) = c(\hat{X}) \Tr\left(\hat{U}_1^\dagger \hat{U}_2\right) \text{ for all Hermitian } \hat{X}
\end{equation}
with $c(\hat{X})$ required to be a real-valued function, for the stationary points. As can be verified by imposing this for each independent degree of freedom in the matrix elements of $\hat{X}$, this requires
\begin{equation}
\left[ e^{-i\alpha_{12}}\hat{U}_1^\dagger, \hat{U}_2\right] = \hat{F},
\label{eq:tracecommutator_Hermitian}
\end{equation}
where $\hat{F}$ is some traceless Hermitian operator, and $\alpha_{12}$ is the phase of $\Tr(\hat{U}_1^\dagger \hat{U}_2)$.
Up to this point, the unitarity of $\hat{U}_1$ and $\hat{U}_2$ played no role. Now, we use the fact that their products are unitary, and write
\begin{equation}
e^{-i\alpha_{12}} \hat{U}_1^\dagger \hat{U}_2 = e^{i\hat{A}_{12}}, \text{ and } \hat{U}_2 e^{-i\alpha_{12}}\hat{U}_1^\dagger = e^{i \hat{A}_{21}},
\end{equation}
for Hermitian $\hat{A}_{12}$ and $\hat{A}_{21}$. Formally defining sines and cosines of Hermitian operators through their Taylor series (which are also Hermitian), Eq.~\eqref{eq:tracecommutator_Hermitian} then gives
\begin{equation}
\cos \hat{A}_{12} - \cos \hat{A}_{21} + i\left[\sin \hat{A}_{12} - \sin \hat{A}_{21} \right] = \hat{F}.
\end{equation}
The Hermiticity of $\hat{F}$ demands that the anti-Hermitian part of the left hand side vanishes, giving
\begin{equation}
\sin \hat{A}_{12} = \sin \hat{A}_{21}.
\end{equation}
Let $\lbrace a(k)\rbrace_{k=0}^{d-1}$ be the eigenvalues of $\hat{A}_{12}$ and $\hat{A}_{21}$ (which must have identical eigenvalues up to irrelevant shifts of $2\pi$, as products of two unitaries have the same eigenvalues irrespective of the order~\cite{MatrixProductEigenvalues}). As long as it is known that $a(k) \in [-\pi/2,\pi/2]$, the sine is invertible\footnote{In fact, one gets $\hat{A}_{12} = \hat{A}_{21}$ for ``generic'' values of $a(k)$ such that the set $\lbrace a(k), \pi+a(k)\rbrace$ is non-degenerate. But it is not clear if this can be guaranteed for any desired $\hat{V}$ by imposing simple conditions on $\hat{U}_2$.} and $\hat{A}_{21} = \hat{A}_{12}$. Consequently,
\begin{equation}
\left\lbrace a(k) \in [-\pi/2,\pi/2],\ \forall\ k \vphantom{e^{i\alpha_{12}}}\right\rbrace \implies \left(\left[e^{-i\alpha_{12}} \hat{U}_1^\dagger, \hat{U}_2\right] = 0\right)
\label{eq:akimplication}
\end{equation}
at a stationary point. The vanishing commutator on the right side of the implication is precisely the condition of Eq.~\eqref{eq:dftcommutator1}.
The question of interest is now whether there is a simple way to guarantee the restriction on $a(k)$ in Eq.~\eqref{eq:akimplication}. To see that there is, we note that $[e^{-i\alpha_{12}}\Tr(e^{i\hat{A}_{12}})] \in \mathbb{R}$ by the definition of $\alpha_{12}$, which implies
\begin{align}
\sum_{k} \cos a(k) &= e^{-i\alpha_{12}}\Tr\left(e^{i\hat{A}_{12}}\right), \\
\sum_{k} \sin a(k) &= 0. \label{eq:sinereality}
\end{align}
Let us maximize the multivariable function $b[a(k)] = \sum_k \cos a(k)$ with \textit{fixed} $a(0)$ (and free $a(k \neq 0)$) subject to the constraint in Eq.~\eqref{eq:sinereality} (and implicitly, non-negativity) using e.g. the method of Lagrange multipliers. The stationary points of $b[a(k)]$ occur at
\begin{equation}
a(k \neq 0) = c + \pi \zeta_k, \text{ with } \zeta_k \in \lbrace{0,1}\rbrace,
\end{equation}
for some constant $c$. The global maximum of $b[a(k)]$ corresponds to $c \in [-\pi/2,\pi/2]$ and $\zeta_k = 0\ \forall k$. Imposing Eq.~\eqref{eq:sinereality} to fix $c$ in terms of $a(0)$, we get
\begin{equation}
\sum_k \cos a(k) \leq b_{\max}[a(0)] \equiv \cos a(0) + (d-1) \sqrt{1-\frac{\sin^2 a(0)}{(d-1)^2}}.
\end{equation}
This is a monotonically decreasing function of $\lvert a(0)\rvert$ in its full domain $[0,\pi]$ for $d\geq 2$. In particular, if $\lvert a(0)\rvert > \pi/2$, then it is guaranteed that $b[a(k)] < b_{\max}[\pi/2] = \sqrt{d(d-2)}$. Re-expressing $b[a(k)]$ in terms of the trace of the relevant unitaries, we then have
\begin{equation}
\left\lbrace\left\lvert \Tr\left(\hat{U}_1^\dagger \hat{U}_2\right)\right\rvert \geq \sqrt{d(d-2)}\right\rbrace \implies \left\lbrace a(k) \in [-\pi/2,\pi/2],\ \forall\ k \vphantom{e^{i\alpha_{12}}}\right\rbrace.
\end{equation}
Combined with the implication in Eq.~\eqref{eq:akimplication}, it follows that maxima for which the trace is no smaller than $\sqrt{d(d-2)}$ occur for cycling operators that commute with time evolution, i.e. when $[\hat{U}_1^\dagger, \hat{U}_2] = 0$. Such commuting operators remain local extrema of the trace in other cases, but it is unclear in the present analysis if the global maximum is among them.
For comparison with the following subsection, we note that $a(k) = -2\pi p\Delta_k/d$, where $\Delta_k$ are the mode fluctuations used elsewhere (see Eq.~\eqref{eq:persistence_modefluctuations}) in the main text.
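The constrained maximization behind $b_{\max}[a(0)]$ can also be checked by brute force. In this sketch for $d = 3$ (the value of $a(0)$ and the grid resolution are arbitrary), $\cos a(1)+\cos a(2)$ is maximized over the constraint $\sin a(1)+\sin a(2) = -\sin a(0)$ and compared against the closed form:

```python
import math

d, a0 = 3, 0.7
target = -math.sin(a0)  # constraint: sin a1 + sin a2 = -sin a0

best, n = -10.0, 200000
for i in range(n):
    a1 = -math.pi + 2 * math.pi * i / n
    s = target - math.sin(a1)
    if abs(s) > 1.0:
        continue  # constraint cannot be satisfied for this a1
    for a2 in (math.asin(s), math.pi - math.asin(s)):  # both branches
        best = max(best, math.cos(a1) + math.cos(a2))

b_grid = math.cos(a0) + best
b_max = math.cos(a0) + (d - 1) * math.sqrt(1 - math.sin(a0) ** 2 / (d - 1) ** 2)
```

The grid search lands on the Lagrange-multiplier optimum $a(1) = a(2) = \arcsin[-\sin a(0)/2]$, reproducing $b_{\max}[a(0)]$ to grid accuracy.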
\subsection{Decrease of persistence for small permutations of sorted energy levels}
\label{app:optimalsorting}
When $\Delta_n \ll d$, assuming that the energies $E_n$ have been shifted by some additive constant so that $\sum_k \Delta_k = 0$, we have (representing $d$ times the persistence amplitude as per Eq.~\eqref{eq:persistence_modefluctuations})
\begin{equation}
\sum_{k=0}^{d-1} e^{-2\pi i \Delta_k/d} = d-\frac{2\pi^2}{d^2}\sum_{k=0}^{d-1}\Delta_k^2+O(\Delta^3_k d^{-3}).
\label{eq:persistenceTaylor}
\end{equation}
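Numerically, with an arbitrary small random set of mode fluctuations (the size and spread below are illustrative assumptions), the sum of phases indeed approaches $d - (2\pi^2/d^2)\sum_k \Delta_k^2$:

```python
import cmath, math, random

random.seed(3)
d = 64
Delta = [random.gauss(0, 0.2) for _ in range(d)]
mean = sum(Delta) / d
Delta = [x - mean for x in Delta]  # enforce sum_k Delta_k = 0

exact = sum(cmath.exp(-2j * math.pi * x / d) for x in Delta)
approx = d - (2 * math.pi ** 2 / d ** 2) * sum(x * x for x in Delta)
err = abs(exact - approx)  # expected to be O(Delta^3) corrections only
```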
For simplicity, we assume that the levels are already sorted i.e. $E_n < E_m$ when $n<m$. This further implies
\begin{equation}
\Delta_n-\Delta_m > -\lvert n-m\rvert.
\label{eq:sortingcondition}
\end{equation}
Any permutation $q(n)$ can be broken up~\cite{Mehta} into a set of cyclic permutations $q_{r}(n)$, each involving a subset of $N_r$ levels $\mathcal{E}[q_r] = \lbrace E_{r(k)}\rbrace_{k=0}^{N_r-1}$. For the rest of the argument, we will require (where the subtraction of $r$ is on $\mathbb{Z}$ (linear), and not on $\mathbb{Z}_d$ (circular or modulo $d$))
\begin{equation}
\lvert r(k)-r(j)\rvert < d/2,\ \forall\ k,j \in \mathbb{Z}_{N_r},
\label{eq:smallpermutation}
\end{equation}
for each $q_r$; permutations $q$ satisfying this are what we refer to as ``small'' permutations. This will ensure that Eq.~\eqref{eq:persistenceTaylor} remains valid under these permutations without discrete shifts of some of the $\Delta_k$ by multiples of $(2\pi)$.
The new mode fluctuations after permutation are given by
\begin{equation}
\Delta'_{r(k)} = \Delta_{r(k+1)} + [r(k+1)-r(k)],
\end{equation}
for each cycle $q_r$. It follows that the mean is preserved i.e.
\begin{equation}
\sum_{k=0}^{N_r-1} \Delta'_{r(k)} = \sum_{k=0}^{N_r-1} \Delta_{r(k)}.
\end{equation}
Our goal is to show that the variance of the $\Delta'_k$ is larger than that of the $\Delta_k$, which would translate to a decreased persistence by Eq.~\eqref{eq:persistenceTaylor}. We have
\begin{equation}
\sum_{k=0}^{N_r-1} (\Delta'_{r(k)})^2 - \sum_{k=0}^{N_r-1}(\Delta_{r(k)})^2 = 2\sum_{k=0}^{N_r-1} \Delta_{r(k+1)}[r(k+1)-r(k)] + \sum_{k=0}^{N_r-1}[r(k+1)-r(k)]^2.
\label{eq:variancedifference}
\end{equation}
In general, the $r(k+1)$ are not in any simple (e.g. ascending or descending) order. We can split each difference $[r(k+1)-r(k)]$ in the first term on the right hand side into a sum of differences of the $r(\ell)$ lying between (and inclusive of) them:
\begin{equation}
\Delta_{r(k+1)}[r(k+1)-r(k)] = \sum_{r_j \in [r(k),r(k+1)]} \zeta_k\Delta_{r(k+1)} (r_{j+1}-r_j),
\end{equation}
where $\zeta_k= \sgn[r(k+1)-r(k)]$, and the $r_j$ are chosen to be sorted according to $j$. On including terms with different values of $k$, each interval $(r_{j+1}-r_j)$ occurs in an equal number of terms with positive $\zeta_k = +1$ ($r(k+1)\geq r_{j+1}$) and negative $\zeta_k = -1$ ($r(k+1) \leq r_{j}$). We can arbitrarily pair each positive term $r_{+}$ with a negative term $r_{-}$, and use Eq.~\eqref{eq:sortingcondition} for the difference $\Delta_{r_+}-\Delta_{r_-}$, noting that $r_{+} > r_{-}$. This amounts to replacing the equality with $\geq$, and each $\Delta_{r(k+1)}$ with $-r(k+1)$, in Eq.~\eqref{eq:variancedifference}. We therefore obtain
\begin{align}
\sum_{k=0}^{N_r-1} (\Delta'_{r(k)})^2 - \sum_{k=0}^{N_r-1}(\Delta_{r(k)})^2 &\geq \left\lbrace-2\sum_{k=0}^{N_r-1} r(k+1)[r(k+1)-r(k)]\right\rbrace + \sum_{k=0}^{N_r-1}[r(k+1)-r(k)]^2 \nonumber \\
\implies
\sum_{k=0}^{N_r-1} (\Delta'_{r(k)})^2 &\geq \sum_{k=0}^{N_r-1}(\Delta_{r(k)})^2.
\end{align}
The second line follows from simplifying the first. Adding all such equations from each $q_r$ together, we get
\begin{equation}
\sum_{k=0}^{d-1} e^{-2\pi i \Delta'_k/d} \leq \sum_{k=0}^{d-1} e^{-2\pi i \Delta_k/d}+O(\Delta^3 d^{-3}).
\end{equation}
This shows that sorting the energy levels corresponds to the maximum persistence at $p=1$ for a given $t_0$ and small $\Delta_k$, at least among other possibilities that can be obtained as small permutations of the sorted levels. This is more like a discrete version of a local extremum. It would be interesting to check if ``larger'' permutations not subject to Eq.~\eqref{eq:smallpermutation} would lead to significantly better maxima; this is unlikely to be the case without some non-intuitive conspiracy between distant energy levels.
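For the simplest case of a single adjacent transposition, the increase of $\sum_k \Delta_k^2$ can be confirmed directly (the toy spectrum below --- sorted uniform levels with unit mean spacing --- is an arbitrary choice):

```python
import random

random.seed(4)
d = 32
e = sorted(random.uniform(0, d) for _ in range(d))  # sorted unfolded levels
Delta = [e[k] - k for k in range(d)]

def sq_sum(D):
    return sum(x * x for x in D)

# transposition n <-> n+1: Delta'_{r(k)} = Delta_{r(k+1)} + [r(k+1) - r(k)]
increased = True
for n in range(d - 1):
    Dp = list(Delta)
    Dp[n], Dp[n + 1] = Delta[n + 1] + 1, Delta[n] - 1
    increased = increased and (sq_sum(Dp) >= sq_sum(Delta) - 1e-12)
```

Here the change in $\sum_k \Delta_k^2$ is $2(\Delta_{n+1}-\Delta_n)+2 = 2(e_{n+1}-e_n) > 0$ for sorted levels, so every adjacent swap strictly increases the variance.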
\section{Time dependence of persistence amplitudes}
\label{app:errorcoefficientpairing}
\subsection{Error coefficient pairing in discrete sum over paths}
\label{app:typicalpersistence}
We rewrite Eq.~\eqref{eq:q_periodic_random} for $p = 1$ as
\begin{equation}
\hat{U}_\Delta e^{-i\phi_\Delta(1)} = (1-\varepsilon_1)^{1/2}\left[\hat{\mathds{1}} + g_1 \sum_{m=1}^{d-1}\nu_m(1)\hat{U}_C^m\right],
\end{equation}
where $\varepsilon_p \equiv \varepsilon_C(p, t_0)$ and $g_1 = \sqrt{\varepsilon_1/(1-\varepsilon_1)}$. We note that $g_1$ is also the coefficient that occurs on the right hand side of Eq.~\eqref{eq:nu_constraint2}. The $p$-th power of the error unitary is
\begin{align}
\hat{U}_\Delta^{p} e^{-ip \phi_\Delta(1)} &= (1-\varepsilon_1)^{p/2} \sum_{s=0}^{p} \binom{p}{s} g_1^s \sum_{m_1,\ldots, m_s} \nu_{m_1}(1)\ldots\nu_{m_s}(1) \hat{U}_C^{m_1+\ldots+m_s},
\nonumber \\
&= (1-\varepsilon_1)^{p/2} \sum_{r=0}^{d-1}\left(\sum_{s=0}^{p} \dptG{s}{r}\right)\hat{U}_C^r \label{eq:dpt1b}
\end{align}
where $\binom{p}{s} = p!/(s!(p-s)!)$ is the binomial coefficient, and we recall that the sums are modulo $d$.
We have also defined
\begin{equation}
\dptG{s}{r} = \binom{p}{s} g_1^s \sum_{m_1,\ldots, m_s} \nu_{m_1}(1)\ldots\nu_{m_s}(1) \overline{\Theta}(m_1+\ldots+m_s = r),
\label{eq:dpt2}
\end{equation}
with $\overline{\Theta}(x) = 1$ if $x$ is true and $0$ otherwise.
Each term with fixed $r$ in \eqref{eq:dpt1b} represents a sum over paths for the transition amplitude from any $\lvert C_{k}\rangle$ to $\lvert C_{k+r}\rangle$.
Now, we apply the assumption of error coefficient pairing, by considering only terms in which the error coefficients occur in pairs of the form $\nu_m(1)\nu_{-m}(1)$. For even $s$ in Eq.~\eqref{eq:dpt2}, restricting to such pairings necessarily implies that $m_1+\ldots+m_s = 0$. For odd $s$, it is not possible to pair all error coefficients and a free error coefficient remains, whose index must necessarily be $r$ if the remaining coefficients are paired. Schematically (in the sense that we avoid explicitly enumerating the possible pairings), for non-negative integer $u$,
\begin{align}
\dptG{2u}{r} \approx \delta_{r0} &\left\lbrace\binom{p}{2u} g_1^{2u} \sum_{\text{pairings}} [\nu_{m_1}(1)\nu_{-m_1}(1)]\ldots[\nu_{m_u}(1)\nu_{-m_u}(1)]\right\rbrace, \label{eq:pairingsum1}\\
\dptG{2u+1}{r} \approx g_1\nu_r(1)(p-2u) &\left\lbrace\binom{p}{2u} g_1^{2u} \sum_{\text{pairings}} [\nu_{m_1}(1)\nu_{-m_1}(1)]\ldots[\nu_{m_u}(1)\nu_{-m_u}(1)]\right\rbrace. \label{eq:pairingsum2}
\end{align}
In the second line, we have accounted for $s=2u+1$ different ways of choosing the unpaired coefficient, and used $s \binom{p}{s} = (p+1-s)\binom{p}{s-1}$.
For a given $u$, the sum over pairings and coefficients within the braces in Eqs.~\eqref{eq:pairingsum1} and \eqref{eq:pairingsum2} are identical, irrespective of the value of $r$. Treating $g_1$ as a formally independent parameter that we can take partial derivatives with respect to, we can further replace $(p-2u)$ with $(p-g_1 \vec{\partial}/\partial g_1)$ acting on its right in Eq.~\eqref{eq:pairingsum2}, which moves all the $u$ dependence to inside the braces. For even $p$, this means that each sum over $s$ in Eq.~\eqref{eq:dpt1b} --- which is naturally restricted to even $s$ for $r=0$ and odd $s$ for $r\neq 0$ after pairing --- produces coefficients for all $r$ that are identical except for the operators outside the braces in Eqs.~\eqref{eq:pairingsum1} and \eqref{eq:pairingsum2}. If the time dependence is sufficiently slow, the result for odd $p$ can be extrapolated (to a good approximation) in any convenient way between those for $p\pm 1$. Thus, we have the approximate form
\begin{equation}
\hat{U}_\Delta^{p} e^{-ip \phi_\Delta(1)} \approx (1-\varepsilon_1)^{p/2}\left[\hat{\mathds{1}}+g_1\sum_{r=1}^{d-1}\nu_r(1)\hat{U}_C^r\left(p-g_1\frac{\vec{\partial}}{\partial g_1}\right)\right] h(p, g_1).
\label{eq:uerrT_form}
\end{equation}
The function $h(p, g_1)$ originates in the sum over pairings within the braces of Eqs.~\eqref{eq:pairingsum1} and \eqref{eq:pairingsum2}; from the above expression, it is formally related to the persistence amplitude at $p$ by
\begin{equation}
z(p, t_0) = (1-\varepsilon_1)^{p/2} h\left(p, \sqrt{\frac{\varepsilon_1}{1-\varepsilon_1}}\right).
\end{equation}
\subsection{Gaussian estimate}
The persistence amplitude at $p+1$ can be expressed in terms of the coefficients in $\hat{U}_\Delta^{p}$ and $\hat{U}_\Delta^1$ as follows:
\begin{align}
z(p+1, t_0) &= \left\lvert \frac{1}{d}\Tr\left[\hat{U}_\Delta^1 \hat{U}_\Delta^{p}\right]\right\rvert \nonumber \\
&= \left\lvert \sqrt{1-\varepsilon_1}\sqrt{1-\varepsilon_{p}}+\sqrt{\varepsilon_1\varepsilon_p} \sum_{r=1}^{d-1}\nu_{r}(1)\nu_{-r}(p)\right\rvert.
\end{align}
Substituting the appropriate expressions for $\varepsilon_p$ and $\nu_p$ from Eq.~\eqref{eq:uerrT_form}, we get
\begin{equation}
z(p+1, t_0) \approx (1-\varepsilon_1)^{1/2}\left\lvert z(p,t_0) + (1-\varepsilon_1)^{p/2}g_1^2 \sum_{r=1}^{d-1}\nu_{r}(1)\nu_{-r}(1)\left(p-g_1\frac{\vec{\partial}}{\partial g_1}\right) h(p, g_1)\right\rvert.
\label{eq:gestimate_intermediate}
\end{equation}
Now, we assume that the second term within the absolute value is smaller than the first, and $p h \gg g_1 \partial h/\partial g_1$; both will be justified retroactively. Further defining
\begin{equation}
\nu_C = -\sum_{r=1}^{d-1} \nu_r(1) \nu_{-r}(1),
\end{equation}
which happens to measure the goodness of the approximation in Eq.~\eqref{eq:nu_symmetry}, we are led to
\begin{equation}
z(p+1, t_0) \approx (1-\varepsilon_1)^{1/2}\left[1-g_1^2p \nu_C \right]z(p, t_0).
\end{equation}
It is now straightforward to multiply over values of $p$ from some given $\overline{p}$ through to $1$. For $\varepsilon_1 \ll 1$ and setting $\nu_C \approx 1$ as per Eq.~\eqref{eq:nu_symmetry}, we get
\begin{equation}
z(\overline{p}, t_0) \approx \exp\left[-\frac{\varepsilon_1}{2}\lvert \overline{p}\rvert-\frac{g_1^2}{2}\overline{p}^2\right].
\end{equation}
We see that the smallness of the second term in Eq.~\eqref{eq:gestimate_intermediate} and $p h \gg g_1 \partial h/\partial g_1$ are both satisfied when $p \ll 1/g_1$, i.e. when the persistence amplitude is still close to $1$.
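The quadratic (Gaussian) part of this decay can be observed in a toy commuting model, where $\hat{U}_C$ shares the eigenvectors of $\hat{U}_H(t_0)$ and $z(p, t_0)$ reduces to the magnitude of the empirical characteristic function of the mode fluctuations. All parameters below ($d$, the Gaussian spread $\sigma$, the seed) are illustrative assumptions, and the linear term $\varepsilon_1\lvert p\rvert/2$ is negligible in this regime:

```python
import cmath, math, random

random.seed(5)
d, sigma = 4096, 1.0
Delta = [random.gauss(0, sigma) for _ in range(d)]
mean = sum(Delta) / d
Delta = [x - mean for x in Delta]  # zero-mean mode fluctuations

def z(p):
    # persistence amplitude in the commuting (diagonal) toy model
    return abs(sum(cmath.exp(-2j * math.pi * p * x / d) for x in Delta)) / d

eps1 = 1 - z(1) ** 2  # one-step error

# compare against the quadratic part exp(-eps1 * p^2 / 2) of the estimate
ok = all(abs(z(p) - math.exp(-eps1 * p * p / 2)) < 0.1
         for p in range(1, d // 4, 64))
```

The agreement holds up to statistical fluctuations of order $d^{-1/2}$, consistent with the regime $p \ll 1/g_1$ discussed above.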
\subsection{Minimum error constraints from the SFF}
\label{app:sfferror}
Substituting the form $K(t) = \lambda t^\gamma$ in Eq.~\eqref{eq:SFFerror_relation} and dropping subleading terms in $\varepsilon_1 = \varepsilon_C(1,t_0)$ gives
\begin{equation}
2\lambda t_0^\gamma\sum_{p=1}^{1/(M\sqrt{\varepsilon_1})} p^{\gamma-2} \lessapprox \varepsilon_1.
\end{equation}
For $\gamma \in [0,1)$, the left hand side is dominated by small $p$ and is independent of $M$. Replacing $1/(M\sqrt{\varepsilon_1}) \to \infty$, we obtain
\begin{equation}
\varepsilon_1 \gtrapprox 2\lambda t_0^\gamma\zeta(2-\gamma),
\end{equation}
where $\zeta(x)$ is the Riemann zeta function. In particular, for $\gamma = 0$ and $\lambda = 1/d$ (Poisson statistics), we have $\varepsilon_1 \gtrapprox \pi^2/(3d) = O(1/d)$. For $\gamma > 1$, it is instead the terms with larger $p$ that dominate. Using the leading term in Faulhaber's formula for the sum (formula (0.121) in Ref.~\cite{mathGR}; equivalent to replacing the sum with an integral), we have
\begin{equation}
2\lambda t_0^\gamma\frac{[1/(M\sqrt{\varepsilon_1})]^{\gamma-1}}{\gamma-1} \lessapprox \varepsilon_1.
\label{eq:errsff_gamma>1}
\end{equation}
The presence of $M = O(1) \geq 1$ in this expression allows us to make only order of magnitude statements. We get
\begin{equation}
\varepsilon_1^{(1+\gamma)/2} \gtrapprox \frac{2\lambda t_0^\gamma}{(\gamma-1)M^{\gamma-1}},
\end{equation}
which implies $\varepsilon_1 \geq O(d^{-4/(\gamma+1)})$ when $\lambda = O(d^{-2})$ and $t_0 = O(1)$, for any $\gamma = O(1) > 1$. The most generic case (i.e. typical for Haar random~\cite{Mehta, Haake} systems), $\gamma = 1$, is a bit more subtle. Here, it is again the large-$p$ terms that dominate, so we take the $\gamma \to 1$ limit of Eq.~\eqref{eq:errsff_gamma>1}, which gives
\begin{equation}
\frac{\varepsilon_1}{\ln\left(\frac{1}{M\sqrt{\varepsilon_1}}\right)} \gtrapprox 2\lambda t_0.
\end{equation}
This is a transcendental equation for $\varepsilon_1$, but we can nevertheless invert it to leading order in $\lambda^{-1}$ (i.e. substituting $\varepsilon_1 = \mu(\lambda)\lambda$ and solving for $\mu$, neglecting $\ln(\ln \lambda)$), obtaining
\begin{equation}
\varepsilon_1 \gtrapprox \lambda t_0 \ln \frac{1}{\lambda}.
\end{equation}
For Wigner-Dyson statistics, $\lambda = O(d^{-2})$ and $t_0 = O(1)$ gives $\varepsilon \geq O(d^{-2}\ln d)$.
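These scalings are straightforward to check numerically. The sketch below (our own illustration; the value of $d$, the truncation point, and the tolerance are arbitrary choices) confirms that the truncated sum over $p$ for $\gamma \in [0,1)$ converges to the zeta value, reproducing the Poisson bound $\pi^2/(3d)$ for $\gamma = 0$, $\lambda = 1/d$:

```python
import math

# Hedged numerical check of the gamma in [0,1) bound: the truncated sum over p
# converges to zeta(2 - gamma), and gamma = 0, lam = 1/d reproduces pi^2/(3*d).
def truncated_sum(gamma, pmax):
    return sum(p ** (gamma - 2) for p in range(1, pmax + 1))

d = 1000
lam, t0, gamma = 1.0 / d, 1.0, 0.0
bound = 2 * lam * t0 ** gamma * truncated_sum(gamma, 10 ** 6)
poisson = math.pi ** 2 / (3 * d)
print(bound, poisson)  # both ~ 3.2899e-3
```

The $M$-independence of the small-$p$-dominated sum is what makes replacing the upper limit by infinity harmless in this regime.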
\subsection{Numerical evidence for error coefficient pairing}
To provide numerical evidence for the pairing of error coefficients, we test the prediction of Eq.~\eqref{eq:uerrT_form} when $g_1 \partial h/\partial g_1$ is negligible i.e. Eq.~\eqref{eq:uerrT_approx} in the main text. More directly, we define
\begin{equation}
\tilde{\nu}_m(p) = \frac{1}{p z(p, t_0)}\nu_m(p).
\end{equation}
Eqs.~\eqref{eq:uerrT_form}, \eqref{eq:uerrT_approx} then imply that $\tilde{\nu}_m(p) = \tilde{\nu}_m(1)$ for any $p \ll 1/\sqrt{\varepsilon_C(1,t_0)}$. This is verified in Fig.~\ref{fig:errorpairing} for the ($\beta=2$, $d=2048$) CUE dataset of Fig.~\ref{fig:RMT_ergodicity}, for which $1/\sqrt{\varepsilon_C(1,t_0)} \approx 525$.
\begin{figure}
\centering
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[scale=0.25]{RMTfigs/CUE2/ErrorPairing100_2e.pdf}
\caption{\small $p = 100$.}
\label{fig:errorparing100}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[scale=0.27]{RMTfigs/CUE2/ErrorPairing100Zoomed_2e.pdf}
\caption{\small $p = 100$, zoomed in.}
\label{fig:errorpairing100zoomed}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[scale=0.25]{RMTfigs/CUE2/ErrorPairing250_2e.pdf}
\caption{\small $p = 250$.}
\label{fig:errorpairing250}
\end{subfigure}
\caption{\small Comparison of $\tilde{\nu}_m(p)$ with $\tilde{\nu}_m(1)$, using magnitudes $\lvert \tilde{\nu}_m(p)\rvert^2$, $\lvert \tilde{\nu}_m(1)\rvert^2$ and residuals $\lvert \tilde{\nu}_m(1)-\tilde{\nu}_m(p)\rvert^2$ for $d=2048$. The residuals are predicted to be negligible compared to the magnitudes at the same $m$ for $p \ll 525$, which these plots are in good agreement with even when $p$ is a considerable fraction of $525$.}
\label{fig:errorpairing}
\end{figure}
\section{The classical limit}
\subsection{Wigner quasiprobabilities and mixed states}
\label{app:wignerfunc}
We consider the special problem of a particle with position coordinates $\mathbf{x} \in \mathbb{R}^N$, allowing the definition of the real-valued Wigner quasiprobability functions $W$ (following the conventions adopted in Ref.~\cite{PolkovnikovPhaseSpace}) in an effective Hamiltonian phase space $\mathbf{\mathcal{P}} = \lbrace(\mathbf{x},\mathbf{p})\rbrace = \mathbb{R}^{2N}$. For a general (i.e. mixed state) density matrix $\hat{\rho}$ (in units where the reduced Planck's constant $\hbar = 1$),
\begin{equation}
W(\mathbf{x},\mathbf{p}) = \int \diff\mathbf{y}\ \langle \mathbf{x}-\tfrac{1}{2}\mathbf{y}\rvert \hat{\rho}\lvert\mathbf{x}+\tfrac{1}{2}\mathbf{y}\rangle e^{i\mathbf{p}\cdot\mathbf{y}},
\end{equation}
where $W$ is normalized according to $\int \diff \mathbf{x}\diff \mathbf{p}\ W(\mathbf{x},\mathbf{p}) = (2\pi)^N$.
The overlap of the density matrices is directly given by the overlap of these quasiprobabilities,
\begin{equation}
\Tr(\hat{\rho}_1\hat{\rho}_2) = \frac{1}{(2\pi)^N}\int\diff\mathbf{x}\int\diff\mathbf{p}\ W_{1}(\mathbf{x},\mathbf{p})W_{2}(\mathbf{x},\mathbf{p}).
\label{eq:Wigneroverlap}
\end{equation}
When $W_{A}(\mathbf{x},\mathbf{p})$ is a uniform distribution over some region $A$ in the phase space (at least at some level of approximation; $W$ is in general not non-negative everywhere~\cite{WignerNonnegativity}), we have $W_{A}[(\mathbf{x},\mathbf{p}) \in A] = [(2\pi)^N/\tilde{\mu}(A)]$, with its value being (approximately) $0$ elsewhere. Here, $\tilde{\mu}(A) = \int_{A} \diff \mathbf{x}\diff \mathbf{p}$. Using this expression in Eq.~\eqref{eq:Wigneroverlap}, we get for any two regions $A$ and $B$ and the density matrices $\hat{\rho}_A$ and $\hat{\rho}_B$ corresponding to such uniform Wigner functions,
\begin{align}
\frac{1}{(2\pi)^N}\Tr(\hat{\rho}^2_A) &= \frac{1}{\tilde{\mu}(A)}, \\
\frac{1}{(2\pi)^N} \Tr(\hat{\rho}_A \hat{\rho}_B) &= \frac{\tilde{\mu}(A \cap B)}{\tilde{\mu}(A)\tilde{\mu}(B)}.
\end{align}
These are equivalent to Eqs.~\eqref{eq:mixedstatepurity} and \eqref{eq:mixedstateoverlap} in the main text, subject to the normalization of the measure $\tilde{\mu}$. The expressions in the main text assume that $\mu(\mathcal{P}) = 1$, but the system considered in this Appendix has an infinite phase space with $\tilde{\mu}(\mathcal{P}) = \infty$, as well as an infinite dimensional Hilbert space with $d = \infty$. By requiring that the maximally mixed state $\hat{\rho} = \hat{\mathds{1}}/d$ corresponds to a uniform distribution over the full phase space $\mathcal{P}$ (e.g. any projective measurement onto an orthonormal basis has equal probabilities for every outcome in the former, and the latter is equally distributed over any foliation of the phase space into surfaces of constant position coordinates e.g. $\mathbf{x}$), we heuristically obtain $\tilde{\mu}(\mathcal{P}) = (2\pi)^N d$, fixing the normalization of $\mu$ and directly giving Eqs.~\eqref{eq:mixedstatepurity} and \eqref{eq:mixedstateoverlap}. We expect this reasoning to go through without dealing with infinities, if one can suitably define analogues of the Wigner functions restricted to energy shells of finite measure.
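As a concrete sanity check of Eq.~\eqref{eq:Wigneroverlap} (our own numerical sketch, not part of the argument above), the $N=1$ harmonic-oscillator ground state has $W(x,p) = 2e^{-x^2-p^2}$ in these conventions, so the normalization integral gives $2\pi$ and the purity computed from the Wigner overlap is $1$:

```python
import numpy as np

# Ground-state Wigner function of a 1D harmonic oscillator (hbar = 1, in the
# conventions above): W(x, p) = 2 exp(-x^2 - p^2). We check the normalization
# int W dx dp = 2*pi and the purity Tr(rho^2) = (1/2pi) int W^2 dx dp = 1.
x = np.linspace(-6.0, 6.0, 601)
dx = x[1] - x[0]
X, P = np.meshgrid(x, x)
W = 2.0 * np.exp(-X ** 2 - P ** 2)

norm = W.sum() * dx * dx                         # ~ 2*pi
purity = (W ** 2).sum() * dx * dx / (2 * np.pi)  # ~ 1 for a pure state
print(norm, purity)
```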
\subsection{Cyclic permutations for a harmonic oscillator}
\label{app:oscillator_ergodicity}
Here, we consider the example of the 1D harmonic oscillator, mainly to illustrate the relationship between ergodicity and cyclic ergodicity. The classical Hamiltonian $H = p^2/(2m) + m\omega^2 x^2/2$ can be rewritten in terms of action-angle~\cite{Goldstein, Ott} variables $(J,\theta)$ as $H = J\omega$, with the equation of motion $\theta(t) = \theta(0) + \omega t$. The action variable $J>0$ is a conserved quantity, and the phase space $\mathcal{P}$ decomposes into subsets $\mathcal{P}_J$ with fixed $J=J_0$ and measure induced by $\diff \theta$ on $\theta \in [0,2\pi)$, each of which is ergodic and periodic.
We can construct $n$-element cyclic permutations of zero error in an energy window with arbitrary base $E_0 = J_0 \omega$ and arbitrary width $\delta E = \omega \delta J$, by choosing the sets
\begin{equation}
C_k = \left\lbrace (J,\theta) : J \in \left[J_0, J_0+\delta J\right], \theta \in \left[\frac{2\pi k}{n}, \frac{2\pi (k+1)}{n}\right] \right\rbrace
\end{equation}
and $t_0 = 2\pi/(\omega n)$. That the error is zero requires that none of the $C_k$ are completely contained in any $\mathcal{P}_J$, which is indeed the case. At the same time, if we take $\delta J \to 0$, the $C_k$ are all contained in $\mathcal{P}_{J_0}$; zero error now implies ergodicity and periodicity within $\mathcal{P}_{J_0}$.
The quantized oscillator has $J \in \mathbb{N}_0$ with energy eigenstates $\lvert J\rangle$ and eigenvalues $J\omega$, and the DFT basis states
\begin{equation}
\lvert \theta \in \mathbb{Z}_{\delta J}\rangle = \frac{1}{\sqrt{\delta J}}\sum_{k=0}^{\delta J-1} e^{-2\pi i k \theta/\delta J}\left\lvert J_0+k\right\rangle
\end{equation}
provide a zero error cyclic permutation for time evolution (with $t_0 = 2\pi/(\omega\delta J)$) in the energy subspace spanned by $\lbrace\lvert J_0\rangle,\ldots,\lvert J_0+\delta J-1\rangle\rbrace$ (implying quantum cyclic ergodicity and periodicity). In the classical limit ($J_0 \gg 1$, $\delta J \gg 1$), the $\lvert \theta \rangle$ have the same equations of motion as localized points in $\theta$, and identifying the two leads to the identification of the $C_k$ with diagonal mixed states supported on $\delta J/n$ contiguous $\lvert \theta\rangle$ states.
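The zero-error property of this construction is easy to verify numerically. The sketch below (toy window parameters of our own choosing) checks that $\hat{U}_H(t_0)$ maps each DFT state to the next one with unit persistence amplitude:

```python
import numpy as np

# Check that the DFT states |theta> are cycled exactly (up to a global phase)
# by the oscillator evolution over t0 = 2*pi/(omega*dJ), in the window J0..J0+dJ-1.
omega, J0, dJ = 1.0, 50, 16
t0 = 2 * np.pi / (omega * dJ)
k = np.arange(dJ)
# column theta of this matrix is the state |theta>, in the Fock basis of the window
theta_states = np.exp(-2j * np.pi * np.outer(k, np.arange(dJ)) / dJ) / np.sqrt(dJ)
U = np.diag(np.exp(-1j * (J0 + k) * omega * t0))  # evolution restricted to the window

amps = [np.vdot(theta_states[:, (th + 1) % dJ], U @ theta_states[:, th])
        for th in range(dJ)]
print(min(abs(a) for a in amps))  # 1.0: a zero-error cyclic permutation
```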
\section{Mixed state cyclic permutations}
\label{app:mixedapprox}
Let $\hat{\Pi}_k = (d_n/n)\hat{\rho}[\overline{\Sigma}(k)]$ be the projection operators onto the subspaces $\overline{\Sigma}(k)$. Focus on the two subspaces with projectors $\hat{\Pi}_k(t_m) \equiv \hat{U}_H(t_m) \hat{\Pi}_k \hat{U}_H^\dagger(t_m)$ and $\hat{\Pi}_{k+1}$ for a given $k$. The expanded energy subspace $\overline{\Sigma}_{d_n}$ can be expressed using Halmos' decomposition~\cite{Halmos2sub, Intro2sub} relative to the two projectors:
\begin{equation}
\overline{\Sigma} = \left(\mathcal{H}_{00}\oplus \mathcal{H}_{01} \oplus \mathcal{H}_{10} \oplus \mathcal{H}_{11}\right) \oplus \left[\mathcal{H}_{1} \oplus \mathcal{H}_{0}\right]
\end{equation}
where $\dim(\mathcal{H}_1) = \dim(\mathcal{H}_0)$ always, and $\dim(\mathcal{H}_{01}) = \dim(\mathcal{H}_{10})$ in our case as $\Tr\hat{\Pi}_k(t_m) = \Tr\hat{\Pi}_{k+1}$. These subspaces are such that $\hat{\Pi}_k(t_m)$ and $\hat{\Pi}_{k+1}$ take the following forms (in the same order of subspaces):
\begin{align}
\hat{\Pi}_k(t_m) &= \left(0 \oplus 0 \oplus \hat{\mathds{1}} \oplus \hat{\mathds{1}}\right) \oplus \begin{bmatrix}
\hat{\mathds{1}} & 0 \\
0 & 0 \\
\end{bmatrix}, \label{eq:halmos1}\\
\hat{\Pi}_{k+1} &= \left(0 \oplus \hat{\mathds{1}} \oplus 0 \oplus \hat{\mathds{1}}\right) \oplus \begin{bmatrix}
\hat{\Lambda}^2 & \hat{\Lambda} \hat{K} \hat{V}^\dagger \\
\hat{V} \hat{\Lambda} \hat{K} & \hat{V} \hat{K}^2 \hat{V}^\dagger
\end{bmatrix}, \label{eq:halmos2}
\end{align}
with Hermitian $0 < \hat{\Lambda} < \hat{\mathds{1}}$ and $0 < \hat{K} < \hat{\mathds{1}}$ satisfying $\hat{\Lambda}^2+\hat{K}^2 = \hat{\mathds{1}}$ (both acting on $\mathcal{H}_1$), and some unitary $\hat{V}: \mathcal{H}_1 \to \mathcal{H}_0$. The only subspace in which both projectors have nonzero matrix elements is $\mathcal{H}_{11} \oplus \mathcal{H}_1$. Further, $\mathcal{H}_{11} = \left[\hat{U}_H(t_m)\overline{\Sigma}(k)\right] \cap \overline{\Sigma}(k+1)$ is the intersection of the subspaces, while $\mathcal{H}_{11}\oplus \mathcal{H}_1$ is completely contained within the first subspace $\left[\hat{U}_H(t_m)\overline{\Sigma}(k)\right]$. Additionally, $\hat{\Lambda}$ and $\hat{K}$ necessarily commute, and have a shared eigenbasis (due to the non-negativity of their eigenvalues).
Let $\lambda_j \in (0,1)$ be the eigenvalues of $\hat{\Lambda}$, and $\kappa_j \in (0,1)$ those of $\hat{K}$. It is instructive to write the matrix in Eq.~\eqref{eq:halmos2} in terms of the shared eigenbasis $\lvert \xi_j\rangle$ of $\hat{\Lambda}, \hat{K}$:
\begin{equation}
\begin{bmatrix}
\hat{\Lambda}^2 & \hat{\Lambda} \hat{K} \hat{V}^\dagger \\
\hat{V} \hat{\Lambda} \hat{K} & \hat{V} \hat{K}^2 \hat{V}^\dagger
\end{bmatrix} = \sum_{j=0}^{\dim(\mathcal{H}_1)-1} \left(\lambda_j\lvert \xi_j\rangle + \kappa_j\hat{V}\lvert \xi_j\rangle\right)\left(\lambda_j\lvert \xi_j\rangle + \kappa_j\hat{V}\lvert \xi_j\rangle\right)^\dagger,
\end{equation}
from which we see that the orthonormal set of vectors
\begin{equation}
\lvert \eta_j\rangle \equiv \lambda_j\lvert \xi_j\rangle + \kappa_j\hat{V}\lvert \xi_j\rangle
\end{equation}
are completely contained in $\overline{\Sigma}(k+1)$. We can use this fact to identify a convenient orthonormal basis in each subspace. Namely, with $\mathcal{B}_{uv} = \lbrace \lvert \mathcal{B}_{uv}; j\rangle\rbrace_{j=0}^{\dim(\mathcal{H}_{uv})-1}$ representing some orthonormal basis in $\mathcal{H}_{uv}$, we define the following orthonormal bases in $\hat{\Pi}_k(t_m)$ and $\hat{\Pi}_{k+1}$:
\begin{align}
\mathcal{B}_k(t_m) &= \mathcal{B}_{10}\oplus \mathcal{B}_{11} \oplus \left\lbrace \lvert \xi_j\rangle\right\rbrace_{j=0}^{\dim(\mathcal{H}_1)-1}, \\
\mathcal{B}_{k+1} &= \mathcal{B}_{01}\oplus \mathcal{B}_{11} \oplus \left\lbrace \lvert \eta_j\rangle\right\rbrace_{j=0}^{\dim(\mathcal{H}_1)-1}.
\end{align}
It is important to note that the auxiliary directions in $\Sigma_{\text{aux}}$, introduced in the main text to make $d_n$ a multiple of $n$, are in $\mathcal{H}_{10}$ or $\mathcal{H}_{01}$ if present in $\overline{\Sigma}(k)$, $\overline{\Sigma}(k+1)$, as $\hat{U}_H(t_m) \Sigma_{\text{aux}} = \Sigma_{\text{aux}}$ by definition. We will require the auxiliary dimension $\overline{\Sigma}(k) \cap \Sigma_{\text{aux}}$ of each subspace to be an element of the corresponding $\mathcal{B}_{01}$ or $\mathcal{B}_{10}$. We will also require the indices $j$ in $\lvert \mathcal{B}_k(t_m); j\rangle$ and $\lvert \mathcal{B}_{k+1}; j\rangle$ to be such that elements in $\mathcal{B}_{11}$ have the same index, as do $\lvert \xi_\ell\rangle$ and $\lvert \eta_\ell\rangle$ for a given $\ell$.
The overlap of the projectors is
\begin{equation}
P_k(1,t_m) \equiv \Tr\left[\hat{\Pi}_k(t_m)\hat{\Pi}_{k+1}\right] = \dim(\mathcal{H}_{11})+\sum_{j}\lambda_j^2.
\label{eq:projectoroverlap}
\end{equation}
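As an aside, Eq.~\eqref{eq:projectoroverlap} is the familiar statement that the overlap of two projectors is the sum of squared cosines of the principal angles between the subspaces (cosines equal to $1$ on $\mathcal{H}_{11}$, and to the $\lambda_j$ on $\mathcal{H}_1$). A quick numerical illustration with random subspaces (our own sketch; the dimensions are arbitrary):

```python
import numpy as np

# Tr(P1 P2) for projectors onto two random r-dimensional subspaces equals the
# squared Frobenius norm of Q1^dag Q2, i.e. the sum of squared cosines of the
# principal angles (the singular values of Q1^dag Q2).
rng = np.random.default_rng(0)
d, r = 40, 12
Q1, _ = np.linalg.qr(rng.normal(size=(d, r)) + 1j * rng.normal(size=(d, r)))
Q2, _ = np.linalg.qr(rng.normal(size=(d, r)) + 1j * rng.normal(size=(d, r)))
P1, P2 = Q1 @ Q1.conj().T, Q2 @ Q2.conj().T

overlap_trace = np.trace(P1 @ P2).real
cosines = np.linalg.svd(Q1.conj().T @ Q2, compute_uv=False)
print(overlap_trace, (cosines ** 2).sum())  # the two agree
```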
Similarly, the total magnitude of overlap amplitudes between the corresponding elements of the two orthonormal bases is
\begin{align}
R_k(1,t_m) &\equiv \sum_{j=0}^{\dim(\mathcal{H}_{11})-1} \left\lvert \langle \mathcal{B}_{11}; j\vert \mathcal{B}_{11}; j\rangle\right\rvert + \sum_{j=0}^{\dim(\mathcal{H}_{1})-1} \left\lvert \langle \eta_j \vert \xi_j\rangle\right\rvert \nonumber \\
&= \dim(\mathcal{H}_{11})+\sum_j \lambda_j.
\end{align}
We want to find a lower bound for $R_k(1,t_m)$ from the overlap. The free variables at our disposal are $\dim(\mathcal{H}_{10})$, $\dim(\mathcal{H}_{11})$, $\dim(\mathcal{H}_1)$ and all of the $\lambda_j$, constrained by $\lambda_j \in (0,1)$, Eq.~\eqref{eq:projectoroverlap} and
\begin{equation}
\frac{d_n}{n} = \dim(\mathcal{H}_{10})+\dim(\mathcal{H}_{11}) + \dim(\mathcal{H}_1),
\end{equation}
from $\Tr\hat{\Pi}_k(t_m) = \Tr\hat{\Pi}_{k+1} = (d_n/n)$. This is conveniently done by introducing $\overline{\lambda}_j \in (0,1]$ representing the eigenvalues of $\hat{\Pi}_{k+1}$ in $\mathcal{H}_{11}\oplus \mathcal{H}_1$ (which reduce to $1$ in $\mathcal{H}_{11}$ and the $\lambda_j$ in $\mathcal{H}_1$), so that $R_k(1,t_m)$ is the sum of these eigenvalues and $P_k(1,t_m)$ the sum of their squares. Using the method of Lagrange multipliers immediately shows that the sum of a set of non-negative variables, with a fixed sum of squares, has no local minima with respect to first order variations (but a local maximum when they are equal). The true minimum is then to be found somewhere on the boundary of the (constrained) domain of the $\overline{\lambda}_j$ (i.e. setting as many $\overline{\lambda}_j$s to $1$ as possible to minimize the excess of the variables over their squares), which gives
\begin{equation}
R_k(1,t_m) \geq \lfloor P_k(1,t_m)\rfloor + \sqrt{P_k(1,t_m)-\lfloor P_k(1,t_m)\rfloor} \equiv \tilde{P}_k(1,t_m).
\label{eq:mixedstateZbound}
\end{equation}
Here $\lfloor x\rfloor$ denotes the greatest integer not exceeding $x$, so the right hand side is between $P_k(1,t_m)$ and $P_k(1,t_m)+1$. Thus, $\dim(\mathcal{H}_{11}) = \lfloor P_k(1,t_m)\rfloor$, and $\dim(\mathcal{H}_1) \in \lbrace 0,1\rbrace$ (depending on the fractional part) minimizes $R_k(1,t_m)$, which achieves a larger value in every other situation.
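The boundary minimization behind Eq.~\eqref{eq:mixedstateZbound} can also be probed numerically; in the sketch below (random sampling of our own design), no admissible configuration of the variables drops below the bound:

```python
import numpy as np

# Sample vectors with entries in (0, 1] and a fixed sum of squares P, and check
# that their sum never drops below floor(P) + sqrt(P - floor(P)).
rng = np.random.default_rng(1)
P = 3.4
bound = np.floor(P) + np.sqrt(P - np.floor(P))

violations = 0
for _ in range(2000):
    v = rng.uniform(0.05, 1.0, size=10)
    v *= np.sqrt(P / (v ** 2).sum())  # rescale to fix the sum of squares
    if v.max() > 1.0:                 # discard samples leaving the domain
        continue
    if v.sum() < bound - 1e-9:
        violations += 1
print(bound, violations)              # bound ~ 3.632, zero violations
```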
Now, let $\us$ be a unitary that satisfies
\begin{equation}
\us \lvert \mathcal{B}_k; j\rangle = \begin{dcases}
\hat{U}_H^\dagger(t_m)\lvert \mathcal{B}_k(t_m); j\rangle,& \text{ for } 0 \leq k < n-1, \\
\hat{U}_H^\dagger(t_m)\lvert \mathcal{B}_k(t_m); {j+1}\rangle,& \text{ for } k = n-1,
\end{dcases}
\label{eq:ushelldef}
\end{equation}
for any fixed labeling of the basis elements of the $\mathcal{B}_k$ with the addition $j+1$ being modulo $d_n$. As $\hat{U}_H(t_m)$ acts trivially (i.e. as identity) on $\Sigma_{\text{aux}}$, we have
\begin{align}
\us \Sigma(k) &= \Sigma(k), \\
\us \Sigma_{\text{aux}} &= \Sigma_{\text{aux}},
\end{align}
which follows from a complete orthonormal set of vectors in $\Sigma_{\text{aux}}$ being chosen to be elements of $\mathcal{B}_{10}$ and $\mathcal{B}_{01}$ for the respective subspaces.
For any such unitary, the cycling operator of the basis $\overline{\mathcal{C}} = \lbrace \lvert \overline{C}_\ell\rangle \rbrace_{\ell=0}^{(d_n)-1}$ formed by
\begin{equation}
\lvert \overline{C}_{j n + k}\rangle = \lvert \mathcal{B}_k; j\rangle,
\label{eq:mixedtopurecyclic}
\end{equation}
is a pure state cyclic permutation, which approximates $\hat{U}_H(t_m) \us$ with mean persistence
\begin{equation}
\frac{1}{d_n}\sum_{\ell=0}^{d_n-1} \left\lvert \langle \overline{C}_{\ell+1}\rvert \hat{U}_H(t_m) \us \lvert \overline{C}_{\ell}\rangle\right\rvert \geq \frac{1}{d_n}\sum_{k=0}^{n-1}\tilde{P}_k(1,t_m),
\end{equation}
from Eq.~\eqref{eq:mixedstateZbound}. However, there are $(d_n - d)$ invariant states in this basis under the action of $\hat{U}_H(t_m)$ --- a consequence of artificially expanding the Hilbert space by $\Sigma_{\text{aux}}$ to define mixed state cyclic permutations --- and any persistence amplitude involving these states is zero. We can then construct a restricted basis $\mathcal{C} \subseteq \overline{\mathcal{C}}$ of $d$ elements that inherits the ordering of $\overline{\mathcal{C}}$ but drops any members of the latter in $\Sigma_{\text{aux}}$:
\begin{equation}
\mathcal{C} = \left\lbrace \lvert C_k\rangle \in \overline{\mathcal{C}} \cap \Sigma : \left[\lvert C_k\rangle = \lvert \overline{C}_{j_{k}}\rangle \implies \left( \lvert \overline{C}_j\rangle \in \Sigma_{\text{aux}}\ \forall\ j \in (j_{k}, j_{k+1})\right)\ \forall k \in \mathbb{Z}_d \right]\right\rbrace.
\end{equation}
The condition in square brackets formally states the ordering requirement, that consecutive elements of $\mathcal{C}$ can only be separated by elements of $\Sigma_{\text{aux}}$ in $\overline{\mathcal{C}}$. The mean persistence of $\mathcal{C}$ is
\begin{equation}
\frac{1}{d}\sum_{\ell=0}^{d-1} \left\lvert \langle C_{\ell+1}\rvert \hat{U}_H(t_m) \us \lvert C_{\ell}\rangle\right\rvert \geq \frac{1}{d}\sum_{k=0}^{n-1}\tilde{P}_k(1,t_m)
\label{eq:puremixedpersistence1}
\end{equation}
Rewriting Eq.~\eqref{eq:puremixedpersistence1} using $P_k(1,t_m) = d_n Z_k(1,t_m)$ gives Eq.~\eqref{eq:puremixedoverlap} in the main text.
In summary, we have constructed a pure state cyclic permutation for $\hat{U}_H(t_m) \us$ with $\us$ leaving each subspace $\Sigma(k)$ of the mixed state cyclic permutation invariant, whose mean persistence is determined by the mixed state overlaps $P_k(1,t_m)$.
\section{Cyclic permutations for linear flows on a 2D torus}
\subsection{Classical cyclic permutations for the 2D torus}
\label{app:toruscyclic}
We are interested in the flow $\mathcal{T}^t$, defined by $\theta_x = \omega_x t$, $\theta_y = \omega_y t$ (modulo $2\pi$) with $\theta_x \in [0,2\pi)_x$, $\theta_y \in [0,2\pi)_y$ (subscripts introduced for convenience) for irrational $\alpha = \omega_y/\omega_x$. Singling out the $x$ direction, the period of the flow along $\theta_x$ is $T_x = 2\pi/\omega_x$.
While leaving $\theta_x$ invariant, $\mathcal{T}^{p T_x}$ acts as $p$ steps of an irrational rotation of $\theta_y$ by the angle $\vartheta = 2\pi\alpha$. In Ref.~\cite{IwanikRotation}, it is shown that if $\vartheta$ has a rational approximation of speed $f_{\vartheta}(q)$, i.e. if there exists a sequence of co-prime integers $p, q$ such that
\begin{equation}
\left\lvert \vartheta - \frac{p}{q}\right\rvert < f_{\vartheta}(q)
\end{equation}
as $q \to \infty$, then the irrational rotation of $\theta_y$ by $\vartheta$ can be approximated by an $n_y$-element cyclic permutation $\mathtt{C}(y) = \lbrace C_k(y)\rbrace_{k=0}^{n_y-1}$ in $[0,2\pi)_y$ with error
\begin{equation}
\overline{\epsilon}_C(T_x) < O(f_{\vartheta}(n_y)),
\end{equation}
as $n_y \to \infty$.
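For a concrete sense of the approximation speed $f_{\vartheta}(q)$, the continued-fraction convergents of an irrational number satisfy $\lvert \vartheta - p/q\rvert < 1/q^2$; a small sketch for $\alpha = \sqrt{2}$ (our own illustration, using the standard convergent recurrence):

```python
import math

# Continued-fraction convergents p/q of sqrt(2) = [1; 2, 2, 2, ...] all satisfy
# |sqrt(2) - p/q| < 1/q^2, illustrating a rational approximation of speed q^{-2}.
alpha = math.sqrt(2)

def convergents(cf_terms):
    p0, q0, p1, q1 = 0, 1, 1, 0
    for a in cf_terms:
        p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
        yield p1, q1

pairs = list(convergents([1] + [2] * 12))
for p, q in pairs:
    assert abs(alpha - p / q) < 1.0 / q ** 2
print(pairs[-1])  # (47321, 33461): a very accurate convergent
```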
Divide $[0,2\pi)_x$ into any $n_x$ equal segments $\mathcal{N}_x(r) = [2\pi r/n_x, 2\pi (r+1)/n_x]$ for $r \in \mathbb{Z}_{n_x}$. With $t_m = T_x / n_x$, we have
\begin{equation}
\mathcal{T}^{t_m} \mathcal{N}_x(r) = \mathcal{N}_x(r+1),\ \forall\ r \in \mathbb{Z}_{n_x}.
\end{equation}
Now, define the $(n= n_x n_y)$-element cyclic permutation $\mathtt{C} = \lbrace C_k\rbrace_{k=0}^{n-1}$
\begin{equation}
C_{k n_x + j} = \mathcal{T}^{jt_m} [\mathcal{N}_x(0) \times C_k(y)], \text{ for } j \in \mathbb{Z}_{n_x}, k \in \mathbb{Z}_{n_y}.
\label{eq:torusCk}
\end{equation}
To obtain the error for approximating $\mathcal{T}^{t_m}$ by $\mathcal{T}_C$ (the cycling operator for $\mathtt{C}$), we note that
\begin{equation}
\mu\left((\mathcal{T}^{t_m}C_{k n_x + j}) \cap C_{k n_x + j + 1}\right) = \begin{dcases}
1,& \text{ for } 0 \leq j < n_x-1, \\
\frac{1}{n_x} \mu\left((\mathcal{T}^{T_x}C_k(y)) \cap C_{k+1}(y)\right),& \text{ for } j = n_x - 1.
\end{dcases}
\label{eq:torusInter}
\end{equation}
This immediately gives
\begin{equation}
\overline{\epsilon}_C(t_m) < \frac{1}{n_x}O(f_{\vartheta}(n_y)).
\end{equation}
As an aside, we note the qualitative similarity of Eqs.~\eqref{eq:torusCk} and \eqref{eq:torusInter} to Eqs.~\eqref{eq:mixedtopurecyclic} and \eqref{eq:ushelldef}.
For almost all irrational $\vartheta$ (and therefore, almost all $\alpha = \vartheta/(2\pi)$), $f_{\vartheta}(q) < O(q^{-2})$, as discussed in Ref.~\cite{IwanikRotation}. Taking $n_x \sim O(n_y)$ for the $n\to\infty$ limit, it follows that $t_m \sim O(\omega^{-1} n^{-1/2})$ and $\overline{\epsilon}_C(t_m) < O(n^{-3/2})$, giving Eq.~\eqref{eq:irrationalerror1} in the main text. It is possible for cyclic permutations with lower error to exist (and perhaps likely, as suggested by the numerical results of Sec.~\ref{sec:torusfigs}), but their construction is not obvious using the present method.
\subsection{Additional data and discussion for quantized torus}
\label{app:torusdata}
Some additional insight into the nature of the spectrum for the irrational ($\alpha = \sqrt{2}$) and rational ($\alpha = 2$) torus is provided by a plot of the mode fluctuations themselves i.e. $\Delta_n$ vs. $n$, as in Fig.~\ref{fig:torusmodes}. In particular, the choice of $t_0 = 2\pi \Omega /d$ seems to be (close to) optimal for the irrational case, with e.g. the fluctuations appearing to be centered around $0$ (cf. Refs.~\cite{Aurich2, Lozej2021}). However, there is an additional linear trend for the rational case, likely due to the multiplicity of eigenvalues differing near the edge of the spectrum for the chosen $L_1$,$L_2$. This indicates a more optimal $t_0$ could have been chosen for the rational case to minimize the error further (without affecting the order of magnitude), but the present choice of $t_0$ clearly illustrates the periodicity (in Figs.~\ref{fig:ratlinp}, \ref{fig:ratlogp}) expected from the classical decomposition into periodic subsets.
To conclude this Appendix, we mention an alternate quantization of the torus, for which the choice of $t_0$ is more nontrivial. Instead of the UV cutoff in Eq.~\eqref{eq:torus_UVcutoff1}, one could have imposed $\lvert J_x\rvert < L_x$, $\lvert J_y\rvert < L_y$ to obtain an alternate quantization, where the energy levels in Fig.~\ref{fig:irrpoints} or \ref{fig:ratpoints} would be bounded along a rectangle whose sides are parallel to the axes. In that case, the density of states is not uniform but generally has 3 parts: a linear increase, a constant part, and a linear decrease that mirrors the increase. But one should not hastily conclude from the nonuniform density of states that the system is non-ergodic. When the eigenphases of $\hat{U}_H(t)$ corresponding to these energy eigenvalues are wrapped around a circle with increasing $t$, there will eventually come a time $t_0$ where the linear increase and decrease overlap precisely and produce an effective uniform density of eigenphases. The spectrum can in fact be rearranged in the $J_x, J_y$ plane to look like Figs.~\ref{fig:irrpoints}, \ref{fig:ratpoints} without affecting the ``wrapped'' eigenphases at $t_0$, resulting in the same \textit{eigenphase} statistics. In this special instance, ergodicity is preserved in spite of a varying density of states due to the wrapping of the spectrum, suggesting that this property is not significantly sensitive to the choice of UV cutoff.
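The wrapping argument above can be illustrated numerically. In the sketch below (a synthetic trapezoidal density of our own choosing, standing in for the linear increase, constant part, and mirrored decrease), wrapping the energies with the period matched to the ramps yields a nearly uniform phase density:

```python
import numpy as np

# A trapezoidal density of states: ramp up on [0,1], flat on [1,2], mirrored
# ramp down on [2,3]. Wrapping energies modulo 1 overlaps the two ramps exactly,
# so the wrapped ("eigenphase") density becomes uniform.
rng = np.random.default_rng(2)
n = 200_000
ramp_up = np.sqrt(rng.random(n // 4))          # density ~ E on [0, 1]
flat = 1.0 + rng.random(n // 2)                # uniform on [1, 2]
ramp_down = 3.0 - np.sqrt(rng.random(n // 4))  # density ~ (3 - E) on [2, 3]
E = np.concatenate([ramp_up, flat, ramp_down])

phases = E % 1.0                               # wrap with the matched period
hist, _ = np.histogram(phases, bins=20, range=(0.0, 1.0))
print(hist.std() / hist.mean())                # small: nearly uniform
```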
\begin{figure}[!hbt]
\centering
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[scale=0.3]{TorusFigs/rootTwo/root2modesDetail.pdf}
\caption{\small Irrational case: $\alpha = \sqrt{2}$, $d = 2133$.}
\label{fig:modesforirr}
\end{subfigure}
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[scale=0.3]{TorusFigs/Two/Two_modesDetails.pdf}
\caption{\small Rational case: $\alpha = 2$, $d=1961$.}
\label{fig:modesforrat}
\end{subfigure}
\caption{\small Mode fluctuations $\Delta_n$ plotted against $n$ for the rational and irrational datasets in Figs.~\ref{fig:irr_torus} and \ref{fig:rat_torus}.}
\label{fig:torusmodes}
\end{figure}
\end{appendices}
\end{document} |
\section{Introduction}
\label{intro}
Neural Machine Translation (NMT) has shown its effectiveness in translation tasks, with NMT systems performing best in recent machine translation campaigns~\cite{cettolo2015iwslt,bojar2016findings}. Compared to phrase-based Statistical Machine Translation (SMT), which is basically an ensemble of different features trained and tuned separately, NMT directly models the translation relationship between source and target sentences. Unlike SMT, NMT does not require much linguistic information or large monolingual data to achieve good performance.
An NMT consists of an encoder, which recursively reads and represents the whole source sentence in a context vector, and a recurrent decoder, which takes the context vector and its previous state to predict the next target word. It is then trained in an end-to-end fashion to learn parameters which maximize the likelihood between the outputs and the references. Recently, attention-based NMT has been featured in most state-of-the-art systems. First introduced by \cite{Bahdanau2014}, the attention mechanism is integrated on the decoder side as feedforward layers. It allows the NMT to decide which source words should take part in predicting the next target word, and it improves NMT significantly. Nevertheless, since the attention mechanism is specific to a particular source sentence and the target word under consideration, it is also specific to particular language pairs.
Some recent work has focused on extending the NMT framework to multilingual scenarios. By training such a network using parallel corpora in a number of different languages, NMT could benefit from additional information embedded in a common semantic space across languages. Basically, the proposed NMTs are required to employ multiple encoders or multiple decoders to deal with multilinguality. Furthermore, in order to avoid the tight dependency of the attention mechanism on specific language pairs, they also need to modify their architecture to combine either the encoders or the attention layers. These modifications are specific to the purpose of the tasks as well. Thus, those multilingual NMTs are more complicated, have many more free parameters to learn, and are more difficult to train in a standard fashion compared to the original NMT.
In this paper, we introduce a unified approach to seamlessly extend the original NMT to multilingual settings. Our approach allows us to integrate any language on either side of the encoder-decoder architecture with only one encoder and one decoder for all the languages involved. Moreover, it is not necessary to do any network modification to enable the attention mechanism in our NMT systems. We then apply our proposed framework in two demanding scenarios: under-resourced translation and zero-resourced translation. The results show that bringing multilinguality to NMT helps to improve individual translations. With some insightful analyses of the results, we set our goal toward a fully multilingual NMT framework.
The paper starts with a detailed introduction to attention-based NMT. In Section~\ref{related}, related work about multi-task NMT is reviewed. Section~\ref{proposed} describes our proposed approach and thorough comparisons to the related work. It is followed by a section of evaluating our systems in two aforementioned scenarios, in which different strategies have been employed under a unified approach (Section \ref{evaluation}). Finally, the paper ends with conclusion and future work.
\blfootnote{
%
%
%
%
\hspace{-0.65cm}
This work is licenced under a Creative Commons
Attribution 4.0 International License.
License details:
\url{http://creativecommons.org/licenses/by/4.0/}
}
\section{Neural Machine Translation: Background}
\label{NMT}
An NMT system consists of an encoder which automatically learns the characteristics of a source sentence into fixed-length context vectors
and a decoder that recursively combines the produced context vectors with the previous target word to
generate the most probable word from a target vocabulary.
More specifically, a bidirectional recurrent encoder reads every word $x_{i}$ of a source sentence $\bm{x}=\{x_1,...,x_n\}$
and encodes its embedded representation $\bm{s}$ into a fixed-length annotation vector $\bm{h}_i$,
concatenated from the hidden states of the forward and backward directions:
\[
\begin{aligned}
& \bm{h}_i=[\overrightarrow{\bm{h}}_i,\overleftarrow{\bm{h}}_i] \\
& \overrightarrow{\bm{h}}_i=f(\overrightarrow{\bm{h}}_{i-1},\bm{s}) \\
& \overleftarrow{\bm{h}}_i=f(\overleftarrow{\bm{h}}_{i+1},\bm{s}) \\
& \bm{s}=\bm{E}_s~\dotproduct~\bm{x}_i \\
\end{aligned}
\]
Here $\bm{x}_i$ is the one-hot vector of the word $x_i$ and $\bm{E}_s$ is the word embedding matrix which is shared across the source words.
$f$ is the recurrent unit computing the current hidden state of the encoder based on the previous hidden state. $\bm{h}_i$ is then called an \textit{annotation vector},
which encodes the source sentence up to the time $i$ from both forward and backward directions. Recurrent units in NMT can be a simple recurrent neural network unit (RNN), a Long Short-Term Memory unit (LSTM)~\cite{Hochreiter1997} or a Gated Recurrent Unit (GRU)~\cite{Cho2014}.
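To make the recursion concrete, here is a toy numpy sketch of this bidirectional encoder with a GRU as the unit $f$ (our own illustration: the dimensions, the random stand-in weights, and the simplified gate equations without bias terms are choices made for brevity, not the setup of any particular system):

```python
import numpy as np

# Bidirectional GRU encoder: embed each word, run a GRU in both directions,
# and concatenate the hidden states into annotation vectors h_i.
rng = np.random.default_rng(0)
V, emb, hid, n = 20, 8, 16, 5          # vocab, embedding, hidden sizes, length

E_s = rng.normal(0, 0.1, (V, emb))      # shared source embedding matrix
W = {g: rng.normal(0, 0.1, (hid, emb)) for g in "zrh"}
U = {g: rng.normal(0, 0.1, (hid, hid)) for g in "zrh"}

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(h_prev, s):                # one application of the unit f
    z = sigmoid(W["z"] @ s + U["z"] @ h_prev)      # update gate
    r = sigmoid(W["r"] @ s + U["r"] @ h_prev)      # reset gate
    h_tilde = np.tanh(W["h"] @ s + U["h"] @ (r * h_prev))
    return (1 - z) * h_prev + z * h_tilde

x = rng.integers(0, V, size=n)          # toy source sentence (word ids)
s = E_s[x]                              # embedded words s_i

fwd, bwd = [], [None] * n
h = np.zeros(hid)
for i in range(n):                      # forward direction
    h = gru_step(h, s[i])
    fwd.append(h)
h = np.zeros(hid)
for i in reversed(range(n)):            # backward direction
    h = gru_step(h, s[i])
    bwd[i] = h

annotations = [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]  # the h_i
print(len(annotations), annotations[0].shape)  # n vectors of size 2*hid
```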
Similar to the encoder, the recurrent decoder generates one target word $y_{j}$ at a time, eventually forming a translated target sentence $\bm{y}=\{y_1,...,y_m\}$.
At the time $j$, it takes the previous hidden state of the decoder $\bm{z}_{j-1}$, the previous embedded word representation $\bm{t}_{j-1}$
and a time-specific context vector $\bm{c}_j$ as inputs to calculate the current hidden state $\bm{z}_{j}$:
\[
\begin{aligned}
& \bm{z}_{j}=g(\bm{z}_{j-1}, \bm{t}_{j-1}, \bm{c}_j) \\
& \bm{t}_{j-1} = \bm{E}_t~\dotproduct~\bm{y}_{j-1}
\end{aligned}
\]
Again, $g$ is the recurrent activation function of the decoder and $\bm{E}_t$ is the shared word embedding matrix of the target sentences.
The context vector $\bm{c}_j$ is calculated based on the annotation vectors from the encoder.
Before feeding the annotation vectors into the decoder, an \textit{attention mechanism} is set up in between,
in order to choose which annotation vectors should contribute to the predicting decision of the next target word.
Intuitively, a relevance between the previous target word
and the annotation vectors can be used to form some attention scenario.
There exist several ways to calculate the relevance, as shown in~\cite{Luong2015b}, but what we describe here follows the method proposed in~\cite{Bahdanau2014}:
\[
\begin{aligned} \label{eq:1}
& rel\_sc(\bm{z}_{j-1},\bm{h}_i) = \bm{v}_a~\dotproduct~\tanh(\bm{W}_a~\dotproduct~\bm{z}_{j-1} + \bm{U}_a~\dotproduct~\bm{h}_i) \\
& \alpha_{ij} = \displaystyle \frac{\exp(rel\_sc(\bm{z}_{j-1},\bm{h}_i))}{\sum_{i'} \exp(rel\_sc(\bm{z}_{j-1},\bm{h}_{i'}))},~\bm{c}_j = \displaystyle \sum_{i}{\alpha_{ij}\bm{h}_i}\\
\end{aligned}
\]
In ~\cite{Bahdanau2014}, this attention mechanism, originally called \textit{alignment model},
has been employed as a simple feedforward network whose first layer is learnable via $\bm{v}_a$, $\bm{W}_a$ and $\bm{U}_a$.
The relevance scores $rel\_sc$ are then normalized into attention weights $\alpha_{ij}$
and the context vector $\bm{c}_j$ is calculated as the weighted sum of all annotation vectors $\bm{h}_i$.
Depending on how much attention the target word at time $j$ puts on the source states $\bm{h}_i$, a soft alignment is learned.
Employed this way, word alignment is not a latent variable but a parametrized function, making the alignment model differentiable. Thus,
it can be trained together with the whole architecture using backpropagation.
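A minimal numpy sketch of this attention layer (our own toy illustration; the sizes and the random stand-ins for $\bm{v}_a$, $\bm{W}_a$, $\bm{U}_a$, the annotation vectors and the decoder state are arbitrary):

```python
import numpy as np

# Bahdanau-style attention: score each annotation vector against the previous
# decoder state with a one-hidden-layer network, softmax-normalize into weights
# alpha_ij, and form the context vector c_j as the weighted sum.
rng = np.random.default_rng(1)
n, hid2, dec, att = 5, 32, 16, 24      # source length, 2*hid, decoder, attention sizes
H = rng.normal(0, 0.1, (n, hid2))      # annotation vectors h_i
z_prev = rng.normal(0, 0.1, dec)       # previous decoder state z_{j-1}

v_a = rng.normal(0, 0.1, att)
W_a = rng.normal(0, 0.1, (att, dec))
U_a = rng.normal(0, 0.1, (att, hid2))

scores = np.array([v_a @ np.tanh(W_a @ z_prev + U_a @ h) for h in H])
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                   # attention weights alpha_ij
context = alpha @ H                    # context vector c_j

print(alpha.sum(), context.shape)      # weights sum to 1; c_j has size 2*hid
```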
One of the most severe problems of NMT is the handling of rare words,
which either are not in the shortlists of the vocabularies, i.e. out-of-vocabulary (OOV) words, or do not appear in the training set at all.
In~\cite{Luong2015a}, the rare target words are copied from their aligned source words after the translation.
This heuristic works well with OOV words and named entities but is unable to translate unseen words.
In~\cite{Sennrich2016a}, the proposed NMT models have been shown not only to
be effective in reducing vocabulary sizes but also to be able to generate unseen words.
This is achieved by segmenting the rare words into subword units and translating them.
The state-of-the-art translation systems essentially employ subword NMT~\cite{Sennrich2016a}.
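A minimal sketch of the BPE merge-learning procedure of \cite{Sennrich2016a} on a toy vocabulary (the systems in this paper use tens of thousands of merge operations; the corpus and the number of merges here are purely illustrative):

```python
import re
from collections import Counter

def get_pair_stats(vocab):
    """Count frequencies of adjacent symbol pairs over a word vocabulary
    whose keys are space-separated symbol sequences."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for i in range(len(symbols) - 1):
            pairs[(symbols[i], symbols[i + 1])] += freq
    return pairs

def merge_pair(pair, vocab):
    """Apply one BPE merge: fuse every occurrence of the symbol pair."""
    bigram = re.escape(' '.join(pair))
    pattern = re.compile(r'(?<!\S)' + bigram + r'(?!\S)')
    return {pattern.sub(''.join(pair), w): f for w, f in vocab.items()}

# Toy corpus: words split into characters with an end-of-word marker.
vocab = {'l o w </w>': 5, 'l o w e r </w>': 2, 'n e w e s t </w>': 6}
for _ in range(3):                        # 3 merge operations (vs. 39,500 in our setup)
    pairs = get_pair_stats(vocab)
    best = max(pairs, key=pairs.get)
    vocab = merge_pair(best, vocab)
```

After a few merges, frequent character sequences become single subword symbols, so rare words decompose into known units at translation time.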
\section{Universal Encoder and Decoder for Multilingual Neural Machine Translation}
\label{multiling}
While the majority of previous research has focused on
improving the performance of NMT on individual language pairs with individual NMT systems, recent work has
started investigating potential ways to handle translation involving multiple languages with a single NMT system.
A possible reason for these efforts lies in the unique architecture of NMT.
Unlike SMT, NMT consists of separated neural networks for the source and target sides, or the encoder and decoder, respectively.
This allows these components to map a sentence in any language to a representation in an embedding space
which is believed to share common semantics
among the source languages involved\footnote{But not necessarily the same syntax, since the embeddings
are learned from parallel sentences which essentially share the same meaning although they might be very different in word order.}.
From that shared space, the decoder, with some implicit or explicit relevant constraints, could transform the representation into a concrete sentence
in any desired language. In this section, we review some related work on this matter.
We then describe a unified approach toward a universal attention-based NMT scheme. Our approach does not require any architecture modification, and it can be trained with a minimal number of parameters compared to the other work.
\subsection{Related Work}
\label{related}
By extending the solution of sequence-to-sequence modeling using encoder-decoder architectures to multi-task learning, \newcite{Luong2016} managed to achieve better performance on some \textit{many-to-many} tasks such as translation, parsing and image captioning compared to individual tasks. Specifically in translation, that work utilizes multiple encoders to translate from multiple languages, and multiple decoders to translate to multiple languages. In this view of multilingual translation, each language on the source or target side is modeled by one encoder or decoder, depending on the side of the translation. Due to the natural diversity between two tasks in that multi-task learning scenario, e.g. translation and parsing, it could not feature the attention mechanism, although the mechanism has proven its effectiveness in NMT.
Two directions have been proposed for multilingual translation scenarios that leverage the attention mechanism. The first is the work of \cite{Dong2015}, which introduces a \textit{one-to-many} multilingual NMT system that translates from one source language into multiple target languages. Having one source language, the attention mechanism is then handed over to the corresponding decoder. The objective function is changed to adapt to multilingual settings. At test time, the parameters specific to the desired language pair are used to perform the translation.
\newcite{Firat2016} proposed another approach which genuinely delivers attention-based NMT to multilingual translation. As in \cite{Luong2016}, their approach utilizes one encoder per source language and one decoder per target language for \textit{many-to-many} translation tasks. Instead of a quadratic number of independent attention layers, however, a single attention mechanism is integrated into their NMT, performing an affine transformation between the hidden layers of the $m$ source languages and those of the $n$ target languages. Their architecture has to be changed to accommodate such a complicated shared attention mechanism.
In a separate effort to achieve multilingual NMT, the work of \newcite{Zoph2016} leverages available parallel data from other language pairs to help reduce possible ambiguities in the translation process into a single target language\footnote{An example taken from the paper is when we want to translate the English word \textit{bank} into French: it might be easier if we have an additional German sentence containing the word \textit{Flussufer} (\textit{river bank}).}. They employ multi-source attention-based NMT in such a way that only one attention mechanism is required despite having multiple encoders. To achieve this, the outputs of the encoders are combined before being fed to the attention layer. They implement two types of encoder combination: one adds a non-linear layer on the concatenation of the encoders' hidden states; the other uses a variant of LSTM that takes the respective gate values from the individual LSTM units of the encoders. As a result, the combined hidden states contain information from both encoders, thus encoding the common semantics of the two source languages.
\subsection{Universal Encoder and Decoder}
\label{proposed}
Inspired by multi-source NMT, where additional parallel data in several languages is expected to benefit single translation directions, we aim to develop an NMT-based approach toward a universal framework for multilingual translation. Our solution features two treatments: 1) coding the words in different languages as different words in language-mixed vocabularies, and 2) forcing the NMT system to translate a representation of the source sentence into a sentence in the desired target language. \\
\textbf{Language-specific Coding.}~~When the encoder of an NMT system considers words across languages as different words, with a well-chosen architecture, it can be expected to learn a good representation of the source words in an embedding space in which words carrying similar meanings lie closer to each other than words that are semantically different. This should hold true when the words have the same or similar surface form, such as (\textit{\textbf{@de@}Obama}; \textit{\textbf{@en@}Obama})
or (\textit{\textbf{@de@}Projektion}; \textit{\textbf{@en@}projection})\footnote{\textit{\textbf{@lang\_code@}a\_word} is a simple way
that transforms the word \textit{a\_word} into a different surface form associated with its language \textit{\textbf{lang\_code}}.
For example, \textit{\textbf{@de@}Projektion} is referred to the word \textit{Projektion} appearing in a German (\textit{\textbf{de}}) sentence.}.
This should also hold true when the words have the same or similar meaning across languages, such as (\textit{\textbf{@en@}car}; \textit{\textbf{@en@}automobile})
or (\textit{\textbf{@de@}Flussufer}; \textit{\textbf{@en@}bank}). Our encoder then acts similarly to the one in the multi-source approach~\cite{Zoph2016}, collecting additional information from other sources for better translations, but with a much simpler embedding function. Unlike that work, we need only one encoder, which reduces the number of parameters to learn. Furthermore, we neither need to change the network architecture nor depend on which recurrent unit (GRU, LSTM or simple RNN) is used in the encoder.
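Language-specific coding itself is a one-line preprocessing transform; a minimal sketch (the function name is ours):

```python
def lang_code(sentence, lang):
    """Language-specific coding: prefix every token with its language code
    so that words from different languages become distinct vocabulary entries."""
    return ' '.join('@{0}@{1}'.format(lang, tok) for tok in sentence.split())

# lang_code('darum geht es', 'de') -> '@de@darum @de@geht @de@es'
```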
We can apply the same trick to the target sentences and thus enable the \textit{many-to-many} translation capability of our NMT system. Similarly to multi-target translation~\cite{Dong2015}, we further exploit the correlation in semantics of the target sentences across different languages. The main difference between our approach and the work of \cite{Dong2015} is that we need only one decoder for all target languages. Given one encoder for multiple source languages and one decoder for multiple target languages, it is trivial to incorporate the attention mechanism as in a regular single-pair NMT system. In training, the attention layers are directed to learn relevant alignments between the words of a specific language pair and to forward the produced context vector to the decoder.
Now we rely entirely on the network to learn good alignments between the source and target sides. In fact, given more information, our system is able to form good alignments.
In comparison to other research that performs complete multi-task learning, e.g. the work of \cite{Luong2016} or the approach proposed by \cite{Firat2016}, our method is able to accommodate the attention layers seamlessly and easily. It also draws a clear distinction from those works in terms of the complexity of the whole network: considerably fewer parameters to learn, which reduces overfitting, with a conventional attention mechanism and a standard training procedure. \\
\textbf{Target Forcing.}~~While language-specific coding allows us to implement a multilingual attention-based NMT,
there are two issues we have to consider before training the network.
The first is that the number of rare words increases in proportion to the number of languages involved.
This might be solved by applying a rare-word treatment method with appropriate awareness of the vocabularies' sizes.
The second is more problematic:
the level of ambiguity in the translation process increases due to the additional introduction of words having the same or similar meaning across languages
on both the source and target sides.
We deal with this problem by explicitly forcing the attention and translation in the direction that we prefer,
expecting this information to limit the ambiguity to the scope of one language instead of all target languages.
We realize this idea by adding, at the beginning and at the end of every source sentence, a special symbol indicating the language it should be translated into.
For example, in a multilingual NMT system, when a source sentence is German and the target language is English, the original sentence (already language-specific coded) is: \\
{\tt @de@darum @de@geht @de@es @de@in @de@meinem @de@Vortrag} \\
Now when we force it to be translated into English, the target-forced sentence becomes:\\
{\tt <E> @de@darum @de@geht @de@es @de@in @de@meinem @de@Vortrag <E>}
Due to the nature of the recurrent units used in the encoder and decoder, in training, those starting symbols\footnote{For a bidirectional encoder, they are actually the starting symbols of the source sentence from the two directions.} encourage the network to learn the translation of the following target words in the particular language pair. At test time, the information about the target language that we provide helps limit the translation candidates, hence forming the translation in the desired language.
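Target forcing is equally simple to sketch (again, the function name is ours; {\tt <E>} denotes the English target symbol as in the example above):

```python
def target_force(coded_sentence, target_symbol):
    """Target forcing: wrap an (already language-coded) source sentence
    with the symbol of the desired target language at both ends."""
    return '{0} {1} {0}'.format(target_symbol, coded_sentence)

# target_force('@de@darum @de@geht @de@es', '<E>')
# -> '<E> @de@darum @de@geht @de@es <E>'
```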
Figure~\ref{fig:steps} illustrates the essence of our approach. With two steps in the preprocessing phase, namely language-specific coding and target forcing, we are able to employ multilingual attention-based NMT without any special treatment in training such a standard architecture. Our encoder and attention-enabled decoder can be seen as a shared encoder and decoder across languages, or a \textit{universal} encoder and decoder. The flexibility of our approach allows us to integrate any language into the source or target side. As we will see in Section~\ref{evaluation}, it has proven to be extremely helpful not only in low-resourced scenarios but also in the translation of well-resourced language pairs, as it provides a novel way to make use of large monolingual corpora in NMT.
\begin{figure}[!htp]
\centering
\includegraphics[width=0.9\columnwidth]{img/steps.jpg}
\caption{\label{fig:steps} {\it Preprocessing steps to employ a multilingual attention-based NMT system}}
\end{figure}
\section{Evaluation}
\label{evaluation}
In this section, we describe the evaluation of our proposed approach in comparison with strong NMT baselines in two scenarios:
the translation of an under-resourced language pair and the translation of a language pair for which no parallel data exists at all.
\subsection{Experimental Settings}
~~~~\textbf{Training Data.}~~We choose the WIT3 TED corpus~\cite{cettolo2012} as the basis of our experiments, since it might be the only source of high-quality parallel data for many low-resourced language pairs.
TED is also multilingual in the sense that it includes a number of talks which are commonly translated into many languages.
In addition, we use a much larger corpus provided freely by WMT organizers\footnote{\url{http://www.statmt.org/wmt15/}}
when we evaluate the impact of our approach in a real machine translation campaign. It includes the parallel corpus extracted from the digital corpus of the European Parliament (EPPS),
the News Commentary corpus (NC) and web-crawled parallel data (CommonCrawl). While the number of sentences in popular TED corpora varies from 13 thousand to 17 thousand, the total number of sentences in those larger corpora is approximately 3 million. \\
\textbf{Neural Machine Translation Setup.}~~All experiments have been conducted using the NMT framework {\tt Nematus}\footnote{\url{https://github.com/rsennrich/nematus}}. Following the work of \newcite{Sennrich2016a},
subword segmentation is handled in the preprocessing phase using Byte-Pair Encoding (BPE).
Except where stated otherwise, we set the number of BPE merge operations to 39,500 on the joint source and target data.
When training all NMT systems, we remove sentence pairs exceeding 50 words in length and shuffle the remainder inside every minibatch.
Our short-list vocabularies contain the 40,000 most frequent words; the others are considered rare words and are handled by subword translation.
We use a 1024-cell GRU layer and 1000-dimensional embeddings, with dropout at every layer:
probability 0.2 in the embedding and hidden layers and 0.1 in the input and output layers.
We train our systems using gradient descent optimization with Adadelta~\cite{zeiler2012adadelta} on minibatches of size 80, and the gradient is rescaled whenever its norm exceeds 1.0.
Each training lasts approximately seven days if the early-stopping condition is not reached.
At regular intervals, an external evaluation script computing BLEU~\cite{papineni2002bleu} is run on a development set to decide the early-stopping condition.
This evaluation script is also used to choose the model achieving the best BLEU on the development set,
rather than the one with the maximal log-likelihood between the translations and the target sentences during training.
In translation, the framework performs a beam search with a beam size of 12 and we take the best candidate as the translation.
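The gradient rescaling mentioned above can be sketched as follows (operating on a flattened gradient vector for illustration; real toolkits apply this over all parameter tensors at once):

```python
def rescale_gradient(grad, max_norm=1.0):
    """Global-norm gradient rescaling: if the L2 norm of the (flattened)
    gradient exceeds max_norm, scale it back down to exactly max_norm."""
    norm = sum(g * g for g in grad) ** 0.5
    if norm > max_norm:
        scale = max_norm / norm
        grad = [g * scale for g in grad]
    return grad
```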
\subsection{Under-resourced Translation}
First, we consider translation for an under-resourced language pair.
Here a small portion of the large English-German parallel corpus is used to simulate the scenario where we do not have much parallel data for translating English texts into German.
We perform language-specific coding on both the source and target sides.
By accommodating the German monolingual data as an additional input (German$\rightarrow$German), an approach we call \textit{mix-source},
we can enrich the training data in a simple, natural way.
Given this under-resourced situation, it helps our NMT system obtain a better representation of the source side
and, hence, learn the translation relationship better.
Including monolingual data in this way might also improve the translation of some rare word types such as named entities.
Furthermore, as the ultimate goal of our work, we would like to investigate the advantages of multilinguality in NMT.
We incorporate a similar portion of a French-German parallel corpus into the English-German one. As discussed in Section~\ref{proposed}, this is expected to help reduce the ambiguity in translation of one language pair, since it utilizes the semantic context provided by the other source language. We name this \textit{mix-multi-source}.
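A sketch of how the \textit{mix-source} training set can be assembled (function and helper names are ours; the \textit{mix-multi-source} variant would instead add coded French-German pairs):

```python
def code(sentence, lang):
    # language-specific coding, as described in Section "Universal Encoder and Decoder"
    return ' '.join('@{0}@{1}'.format(lang, t) for t in sentence.split())

def build_mix_source(parallel_pairs, target_mono, src_lang, tgt_lang):
    """Mix-source training set: the original src->tgt pairs plus every
    target-side monolingual sentence paired with itself (tgt->tgt)."""
    data = [(code(s, src_lang), code(t, tgt_lang)) for s, t in parallel_pairs]
    data += [(code(m, tgt_lang), code(m, tgt_lang)) for m in target_mono]
    return data
```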
\begin{figure}
\centering
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=\textwidth]{img/mixs.jpg}
\caption{\textit{mix-source} system}
\label{fig:gull}
\end{subfigure}
\qquad \qquad ~~~
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=\textwidth]{img/mixms.jpg}
\caption{\textit{mix-multi-source} system}
\label{fig:tiger}
\end{subfigure}
\caption{Different strategies of multi-source NMT in under-resourced translation}\label{fig:animals}
\end{figure}
\begin{table} [h!]
\centerline{
\begin{tabular}{|l|c|c|c|c|}
\cline{1-5}
\multirow{2}{*}{\textbf{System}} & \multicolumn{2}{c|}{ \textbf{tst2013}} & \multicolumn{2}{c|}{ \textbf{tst2014}} \\
\cline{2-5}
&
BLEU & $\Delta$BLEU & BLEU & $\Delta$BLEU \\ \hline \hline
Baseline (En$\rightarrow$De) & 24.35 & -- & 20.62 & -- \\ \hline
\cline{1-5}
Mix-source (En,De$\rightarrow$De,De) & 26.99 & +2.64 & 22.71 & +2.09 \\
Mix-multi-source (En,Fr$\rightarrow$De,De) & 26.64 & +2.21 & 22.21 & +1.59 \\ \hline
\cline{1-5}
\end{tabular}}
\caption{\label{table:underresourced} {\textit{Results of the English$\rightarrow$German systems in an under-resourced scenario.}}}
\end{table}
Table~\ref{table:underresourced} summarizes the performance of our systems measured in BLEU\footnote{We used the script {\tt mteval-v13a.pl} of the Moses framework (\url{http://statmt.org/moses/}), the official way to calculate BLEU scores in major machine translation campaigns.} on two test sets, \textit{tst2013} and \textit{tst2014}. Compared to the baseline NMT system, which is trained solely on the TED English-German data, our \textit{mix-source} system achieves a considerable improvement of 2.6 BLEU points on \textit{tst2013} and 2.1 BLEU points on \textit{tst2014}. Adding French data to the source side and the corresponding German data to the target side in our \textit{mix-multi-source} system also helps gain 2.2 and 1.6 BLEU points on \textit{tst2013} and \textit{tst2014}, respectively. We observe a larger improvement from our \textit{mix-source} system than from our \textit{mix-multi-source} system. We speculate that the \textit{mix-source} encoder utilizes the same information shared in two languages, while the \textit{mix-multi-source} encoder receives and processes similar information in the other language, but not necessarily the same. We might validate this hypothesis by comparing two systems trained on a common English-German-French corpus of TED; we leave this to future work.
As we expected, Figure~\ref{fig:MWE} shows how different words in different languages can be close in the shared space after the system has learned to translate them into a common language.
We extract the word embeddings from the encoder of the \textit{mix-multi-source} (En,Fr$\rightarrow$De,De) after training,
remove the language-specific codes (\textit{\textbf{@en@}} and \textit{\textbf{@fr@}}) and project the word vectors into 2D space using t-SNE\footnote{To illustrate more clearly, only the word vectors of the words related to ``research'' are projected and visualized.
The blue words are the English words and the red ones are the French words.}~\cite{maaten2008visualizing}.
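Stripping the language codes before projection is straightforward; a small sketch (function name ours):

```python
def strip_lang_code(token):
    """Remove the @lang@ prefix added by language-specific coding,
    recovering the surface word for visualisation."""
    if token.startswith('@') and token.count('@') >= 2:
        return token.split('@', 2)[2]
    return token

# strip_lang_code('@fr@recherche') -> 'recherche'
```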
\begin{figure}[!htp]
\centering
\includegraphics[width=1\columnwidth]{img/research-eps-converted-to.pdf}
\caption{\label{fig:MWE} {\it The multilingual word embeddings from the shared representation space of the source.}}
\vspace{-0.5cm}
\end{figure}
\subsection{Using large monolingual data in NMT.}
A standard NMT system employs parallel data only. While good parallel corpora are limited in number, obtaining monolingual data for an arbitrary language is trivial. To make use of a German monolingual corpus in an English$\rightarrow$German NMT system, \newcite{sennrich2016b} built a separate German$\rightarrow$English NMT system using the same parallel corpus and then used that system to translate the German monolingual corpus back into English, forming a synthetic parallel corpus. \newcite{gulcehre2015} trained a separate RNN-based language model on the monolingual corpus and integrated it into the NMT system through shallow or deep fusion. Both methods require training separate systems, possibly with different hyperparameters for each. Conversely, by applying the \textit{mix-source} method to the big monolingual data, we need to train only one network. We mix the TED parallel corpus with the substantial monolingual corpus (EPPS+NC+CommonCrawl) and train a \textit{mix-source} NMT system on those data.
The first result is not encouraging: the system performs even worse than the baseline NMT trained on the small parallel data only. Not using the same information on the source side, as discussed in the case of the \textit{mix-multi-source} strategy, could explain the degradation in performance of such a system. But we believe that the magnitude and imbalance of the corpus are the main reasons: the data contains nearly four million sentences, but only around twenty thousand of them (0.5\%) are genuine parallel data. As a quick remedy, after obtaining the model trained on that big data, we continue training on the real parallel corpus for a few more epochs. When this adaptation is applied, our system brings an improvement of +1.52 BLEU on \textit{tst2013} and +1.06 BLEU on \textit{tst2014} (Table~\ref{table:monoNMT}).
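The adaptation scheme can be sketched as a two-stage training loop (names and the number of adaptation epochs are illustrative; {\tt update} stands for one optimizer step):

```python
def train_with_adaptation(model, mixed_big_corpus, real_parallel, update, adapt_epochs=3):
    """Two-stage training sketch: train on the large mixed corpus first,
    then continue ("adapt") on the genuine parallel corpus for a few epochs."""
    for pair in mixed_big_corpus:
        model = update(model, pair)
    for _ in range(adapt_epochs):
        for pair in real_parallel:
            model = update(model, pair)
    return model
```

The second stage shifts the model back toward the distribution of the genuine parallel data, countering the 0.5\% imbalance noted above.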
\begin{table} [h!]
\centerline{
\begin{tabular}{|l|c|c|c|c|}
\cline{1-5}
\multirow{2}{*}{\textbf{System}} & \multicolumn{2}{c|}{ \textbf{tst2013}} & \multicolumn{2}{c|}{ \textbf{tst2014}} \\
\cline{2-5}
&
BLEU & $\Delta$BLEU & BLEU & $\Delta$BLEU \\ \hline \hline
Baseline (En$\rightarrow$De) & 24.35 & -- & 20.62 & -- \\ \hline
\cline{1-5}
Mix-source big (En,De$\rightarrow$De,De) & 25.87 & +1.52 & 21.68 & +1.06 \\
\cline{1-5}
\end{tabular}}
\caption{\label{table:monoNMT} {\textit{Results of the English$\rightarrow$German system using large monolingual data.}}}
\end{table}
\subsection{Zero-resourced Translation}
Among low-resourced scenarios, the zero-resourced translation task is the most extreme. It is one of the most difficult situations: there exists no parallel data between the two languages of the translation pair. To the best of our knowledge, no work on using NMT for zero-resourced translation tasks has been published up to now. In this section, we extend our strategies using the proposed multilingual NMT approach as a first attempt at this extreme situation.
We employ language-specific coding and target forcing in a strategy called \textit{bridge}. Unlike the strategies used in the under-resourced translation task, \textit{bridge} is an entirely \textit{many-to-many} multilingual NMT system. Simulating a zero-resourced German$\rightarrow$French translation task given the available German-English and English-French parallel corpora, we apply language-specific coding and target forcing to each corpus and then mix those data with English-English data as a ``bridge'' creating a connection between German and French. We also propose a variant of this strategy that additionally incorporates French-French data, which we call \textit{universal}.
We evaluate the \textit{bridge} and \textit{universal} systems on two German$\rightarrow$French test sets. They are compared to a \textit{direct} system, an NMT system trained on German$\rightarrow$French data, and to a \textit{pivot} system, which consists of two separate NMT systems trained to translate from German to English and from English to French. The \textit{direct} system would not exist in a real zero-resourced situation; we refer to it as the perfect system for comparison purposes only. In the case of the \textit{pivot} system, to generate a French translation of a German sentence, we first translate it into English, and then the output sentence is fed to the English$\rightarrow$French NMT system to obtain the French translation. Since more than two languages are involved in these systems, we increase the number of BPE merge operations proportionally in order to reduce the number of rare words. We do not expect our proposed systems to perform well with this primitive way of building direct translation connections, since this is an essentially difficult task.
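The \textit{pivot} system is simply the composition of the two translation systems; a sketch with stand-in models:

```python
def pivot_translate(de_sentence, de_en, en_fr):
    """Pivot system: German -> English with one NMT system, then the English
    output is fed into a second, independent English -> French system."""
    return en_fr(de_en(de_sentence))
```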
We report the performance of those systems in Table~\ref{table:zeroresourced}.
\begin{table} [h!]
\centerline{
\begin{tabular}{|l|c|c|} \hline
System & BLEU & $\Delta$BLEU \\ \hline
Direct (De$\rightarrow$Fr) & 16.65 & +3.24 \\
Pivot (De$\rightarrow$En$\rightarrow$Fr) & 13.41 & -- \\
Bridge (De,En,En$\rightarrow$En,Fr,En) & 9.70 & -3.71 \\
Universal (De,En,En,Fr$\rightarrow$En,Fr,En,Fr) & 10.77 & -2.64 \\ \hline
\end{tabular}}
\caption{\label{table:zeroresourced} {\textit{Results of the German$\rightarrow$French systems in a zero-resourced scenario.}}}
\end{table}
Unsurprisingly, both the \textit{bridge} and \textit{universal} systems perform worse than the \textit{pivot} one. We consider two possible reasons:
\textbf{Our target forcing mechanism is rather primitive.}~~Since the process is applied after language-specific coding, the target forcing symbol is the same for all source sentences in every language. Thus, the forcing strength might not be enough to guide the decision on the next words. Once the very first word is translated into a word in the wrong language, the following words tend to be translated into that wrong language as well. Table~\ref{table:zrstats} shows statistics of the translated words and sentences in the wrong language.
\begin{table} [h!]
\centerline{
\begin{tabular}{|l|c|c|} \hline
System & \% Translated words in wrong language & \% Sentences in wrong language \\ \hline
Bridge & 21.27\% & 9.70\% \\
Universal & 17.57\% & 9.47\% \\ \hline
\end{tabular}}
\caption{\label{table:zrstats} {\textit{Percentages of language identification mistakes when applying our translation strategies.}}}
\end{table}
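A sketch of how such statistics can be computed, assuming a word-level language-identification predicate (the concrete identification method is not specified in the text, and the sentence-level criterion here is a majority-vote heuristic of ours):

```python
def wrong_language_stats(translations, in_target_lang):
    """Fraction of translated words, and of whole sentences, that fall
    outside the desired target language. `in_target_lang` is a word-level
    language-identification predicate -- an assumption, since the text
    does not specify how language membership was decided."""
    total_words = wrong_words = wrong_sents = 0
    for sent in translations:
        words = sent.split()
        bad = sum(1 for w in words if not in_target_lang(w))
        total_words += len(words)
        wrong_words += bad
        if bad > len(words) // 2:   # majority-vote heuristic for a whole sentence
            wrong_sents += 1
    return 100.0 * wrong_words / total_words, 100.0 * wrong_sents / len(translations)
```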
\textbf{Imbalance of the training corpus.}~~Although it is not as severe as in the case of the \textit{mix-source} system with large monolingual data, the limited number of sentences in the target language can affect training. The difference of 1.07 BLEU points between \textit{bridge} and \textit{universal} supports this assumption, as we added more target-side data (French) in the \textit{universal} strategy, thus reducing the imbalance in training.
These issues will be addressed in our future work toward multilingual attention-based NMT.
\section{Conclusion and Future Work}
In this paper, we present our first attempts at building a multilingual Neural Machine Translation framework. By treating words in different languages as different words and forcing the attention and translation toward the desired target language, we are able to extend attention-enabled NMT into a multilingual translation system. Our proposed approach alleviates the need for complicated architecture re-design when accommodating the attention mechanism. In addition, the number of free parameters to learn in our network does not go beyond the magnitude of a single NMT system. With its universality, our approach has shown its effectiveness in an under-resourced translation task with considerable improvements. In addition, the approach has achieved interesting and promising results when applied to a translation task for which there is no direct parallel corpus between the source and target languages.
Nevertheless, there are issues that we will continue to address in future work. More balanced data would be helpful for this framework. The mechanism of forcing the NMT system toward the right target language could be improved. We could also conduct more detailed analyses of the various strategies under the framework to show its universality.
\bibliographystyle{acl}
\noindent In an earlier paper \cite{FergussonLiguoriShellard2009}, we described a general approach to
the estimation of non-separable CMB bispectra using separable mode expansions.
Our aim here is to directly estimate the full CMB bispectrum from WMAP data, to survey
and constrain current non-Gaussian primordial theories, and to
discuss the prospects for reconstructing the bispectrum with forthcoming data, such as the Planck
experiment. Previous work by other groups has endeavoured to measure the bispectrum by
using specific estimators tailored to investigate particular separable models, such as the well-known
local and equilateral bispectra. This restriction to separable cases was for reasons of calculational
simplicity to make the data analysis tractable, that is, reducing it from
${\cal O}(l_\textrm{max}^5)$ to ${\cal O}(l_\textrm{max}^3)$ operations. We summarise constraints
that have been obtained to date using these methods later in section V, when we survey theoretical
models; it is sufficient at this point to note that the present WMAP7 constraint $-10<f_\textrm{NL}<74$ \cite{WMAP7}
(95\% confidence) does not provide any significant evidence for a primordial local bispectrum signal, and nor do
constraints on the few other models investigated to date (see the review \cite{LigSef2010}).
Two significant developments mean that we can move beyond these specific estimators
and consider a more general approach which includes the reconstruction of the whole bispectrum directly
from the observational data. First, explicit calculations of the reduced CMB bispectrum $b_{l_1l_2l_3}$ in a wide-ranging
survey of primordial theories \cite{Fergusson:2006pr, Fergusson:2008ra}, demonstrated that
the resulting coherent patterns of acoustic peaks could be represented
by rapidly convergent mode expansions with a limited number of terms (irrespective of
whether the primordial bispectrum was separable).
Secondly, these complete orthonormal mode expansions could be transformed
into a non-orthogonal frame with separable basis functions \cite{FergussonLiguoriShellard2009} in which the
same simplifications could be exploited to efficiently calculate the estimator (\ref{eq:optimalestimator})
in ${\cal O}(l_\textrm{max}^3)$ operations, again for arbitrary non-separable theoretical bispectra $b_{l_1l_2l_3}$.
We shall employ this mode expansion methodology in this paper, convolving observational
maps with the separable basis functions and then reconstructing the observed bispectrum $b_{l_1l_2l_3}$
in an expansion using the resulting mode coefficients.
Rather than looking in just a few specific directions within the large space of
possible bispectra, this general mode decomposition encompasses
all bispectra up to a given resolution. Our aim is to determine whether there is evidence for
{\it any bispectrum} causing a departure from Gaussianity in the WMAP data. Of course,
we can compare with previous constraints for the local and equilateral models, but an important byproduct
is a set of entirely new constraints on a wide range of non-separable models.
While we believe this work represents a significant step forward, we note that this analysis is far
from the last word on CMB non-Gaussianity, not least because much higher quality and higher
resolution data will soon be available from Planck. We also note that we have only used WMAP5 data out
to l=500, together with a pseudo-optimal analysis of the noise and masking contributions.
This paper should be considered primarily as a proof of concept implementation of these
methods, leaving up to an order of magnitude discovery potential available for bispectrum
signals with new CMB data, let alone future galaxy and other 3D surveys where this approach
can also be applied. We note that there are other recent methodologies in the literature which,
in principle, can be used to extract information from the bispectrum beyond simple separable cases,
including the bispectrum power approach of ref.~\cite{MunshiHeavens2009}, bispectrum
binning used in ref.~\cite{BuchervanTent} and wavelet approaches (see the review \cite{LigSef2010}).
In section \ref{sec:CMBbispest} we review general results regarding primordial and angular
bispectra and their optimal estimation. The eigenmode decomposition of the bispectrum that constitutes the
foundation of our methodology is summarized in section \ref{sec:modeexp}. We then show in section
\ref{sec:reconstruction} how this expansion can be used to reconstruct the full bispectrum from the data, and
proceed to validate this result in section \ref{sec:validation}, before directly extracting the bispectrum
from WMAP data in section \ref{sec:WMAP}. We finally turn our attention to estimates of $f_\textrm{NL}$ for a wide
variety of shapes, including both scale invariant bispectra (section \ref{sec:scaleinv}) and scale-dependent oscillatory
bispectra (section \ref{sec:feature}). Finally, before drawing our conclusions in section \ref{sec:conclusions}, we discuss in section
\ref{sec:totalbisp} a possible way to use our mode expansion technique to define
a model independent constraint on the total integrated bispectrum extracted from the data.
\section{CMB bispectrum estimation}\label{sec:CMBbispest}
\subsection{Primordial and CMB bispectrum}
Temperature anisotropies are represented using the $a_{lm}$ coefficients of a spherical harmonic decomposition of
the cosmic microwave sky,
\begin{eqnarray} \label{eq:alm}
\frac{\Delta T}{T}(\hat {\bf n}) = \sum_{lm} a_{lm} Y_{lm}(\hat {\bf n})\,,
\end{eqnarray}
with an (ideal) angular power spectrum $C_l = \frac{1}{2l+1}\sum _m a_{lm}\,a^*_{lm}$.
The CMB bispectrum is the three point correlator of the $a_{lm}$,
\begin{eqnarray}\label{eq:fullbispectrum}
B^{l_1 l_2 l_3}_{m_1 m_2 m_3} = a_{l_1 m_1} a_{l_2 m_2} a_{l_3 m_3}\,,
\end{eqnarray}
where we take the $B^{l_1 l_2 l_3}_{m_1 m_2 m_3}$ coefficients not as an ensemble average but as
directly calculated using the $a_{lm}$'s from a high-resolution map (or maps), that is, from an
experiment such as WMAP or Planck.
We shall assume for the moment that if there is a non-trivial bispectrum then it has arisen through a physical process which is statistically isotropic, so we can employ the angle-averaged bispectrum $B_{l_1l_2l_3}$ without loss of information, that is \cite{Luo1994},
\begin{eqnarray}
B_{l_1 l_2 l_3} &=& \sum_{m_i} \( \begin{array}{ccc} l_1 & l_2 & l_3 \\ m_1 & m_2 & m_3 \end{array} \)B^{l_1 l_2 l_3}_{m_1 m_2 m_3}\nonumber\\
&=& \sum_{m_i}h_{l_1l_2l_3}^{-1} \mathcal{G}^{l_1 l_2 l_3}_{m_1 m_2 m_3} B^{l_1 l_2 l_3}_{m_1 m_2 m_3}\,,
\end{eqnarray}
where $h_{l_1l_2l_3}$ is a geometrical factor,
\begin{eqnarray}
h_{l_1l_2l_3} = \sqrt{\frac{(2l_1+1)(2l_2+1)(2l_3+1)}{4\pi}} \( \begin{array}{ccc} l_1 & l_2 & l_3 \\ 0 & 0 & 0 \end{array} \)\,,
\end{eqnarray}
and $ \mathcal{G}^{\,\,l_1\; l_2\; l_3}_{m_1 m_2 m_3}$ is the Gaunt integral,
\begin{align}\label{eq:Gaunt}
\nonumber \mathcal{G}^{l_1 l_2 l_3}_{m_1 m_2 m_3} &\equiv \int d\Omega \, Y_{l_1 m_1}({\bf \hat{n}}) \, Y_{l_2 m_2}({\bf \hat{n}}) \, Y_{l_3 m_3}({\bf \hat{n}}) \\
&=h_{l_1l_2l_3} \( \begin{array}{ccc} l_1 & l_2 & l_3 \\ m_1 & m_2 & m_3 \end{array} \)\,,
\end{align}
with the usual Wigner-$3j$ symbol ${\scriptstyle \big(\stackrel{\scriptstyle l_1 }{\scriptstyle m_1 }\stackrel{\scriptstyle l_2 }{\scriptstyle m_2 }\stackrel{\scriptstyle l_3 }{\scriptstyle m_3} \big)}$.
It is more convenient to eliminate the geometrical factors entirely and to work with the reduced bispectrum which is defined as
\begin{eqnarray}
b_{l_1l_2l_3} = h_{l_1l_2l_3}^{-1} B_{l_1l_2l_3} \,.
\end{eqnarray}
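Since all three $m_i$ vanish in $h_{l_1l_2l_3}$, the Wigner-$3j$ symbol it contains has a well-known closed form. As a minimal illustration (the function names here are our own), the geometrical factor relating $B_{l_1l_2l_3}$ and $b_{l_1l_2l_3}$ can be evaluated as:

```python
import math

def threej_zero(l1, l2, l3):
    """Wigner-3j symbol (l1 l2 l3; 0 0 0) via the standard closed form.
    Vanishes unless the triangle inequality holds and l1+l2+l3 is even."""
    L = l1 + l2 + l3
    if L % 2 or l1 > l2 + l3 or l2 > l1 + l3 or l3 > l1 + l2:
        return 0.0
    g, f = L // 2, math.factorial
    return ((-1.0) ** g
            * math.sqrt(f(L - 2*l1) * f(L - 2*l2) * f(L - 2*l3) / f(L + 1))
            * f(g) / (f(g - l1) * f(g - l2) * f(g - l3)))

def h_factor(l1, l2, l3):
    """Geometrical factor h_{l1l2l3}, so that B_{l1l2l3} = h_{l1l2l3} b_{l1l2l3}."""
    return math.sqrt((2*l1 + 1) * (2*l2 + 1) * (2*l3 + 1) / (4 * math.pi)) \
        * threej_zero(l1, l2, l3)
```

Note that the parity condition built into `threej_zero` is exactly why only even $l_1+l_2+l_3$ triples contribute to the observable bispectrum.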
It is important to note the relationship between the late-time CMB bispectrum $b_{l_1l_2l_3}$ and the
primordial bispectrum $B_\Phi(k_1,k_2,k_3)$ from which it would arise in many models, notably inflation.
The convention has been to remove a $k^{-6}$ scaling by defining a shape function:
\begin{align} \label{eq:shapefn}
S(k_1,k_2,k_3) \equiv \frac{1}{N} (k_1 k_2 k_3)^2 B_\Phi(k_1,k_2,k_3)\,.
\end{align}
The shape function (\ref{eq:shapefn}) is particularly pertinent for scale-invariant models because their momentum dependence is restricted entirely to planes transverse to the diagonal $\tilde k = {\textstyle {\frac{1}{2}}} (k_1+k_2+k_3)$.
The CMB bispectrum induced by the primordial shape $S$ is obtained from the convolution \cite{KomatsuSpergel2001}:
\begin{align} \label{eq:redbispect}
\nonumber b_{l_1 l_2 l_3}= \(\frac{2}{\pi}\)^3 \int x^2dx \int & d k_1 d k_2 d k_3\, S(k_1,k_2,k_3)\\
&\times \Delta_{l_1}(k_1) \,\Delta_{l_2}(k_2)\, \Delta_{l_3}(k_3)\, j_{l_1}(k_1 x)\, j_{l_2}(k_2 x)\, j_{l_3}(k_3 x)\,,
\end{align}
where $ \Delta_{l}(k)$ is the transfer function.
The impact of the transfer functions in (\ref{eq:redbispect}) is to impose
a series of acoustic peaks on the underlying primordial shape, as illustrated for the CMB bispectrum of
the constant model $S(k_1,k_2,k_3)=1$ in fig.~\ref{fig:constant}. Here, we can observe a large primary
peak when all the $l_i\approx 220$. In principle, the CMB bispectrum is
difficult to evaluate since (\ref{eq:redbispect}) represents a four-dimensional integral over highly
oscillatory functions. However, the integral breaks down into a product of one-dimensional integrals
if the shape function is separable,
that is, if it can be represented in the form $S(k_1,k_2,k_3) = X(k_1) Y(k_2)Z(k_3)$. In the large-angle limit
with $\Delta_{l}(k) = j_l(...)$ ($l\ll 200$) it is possible in some separable models to obtain analytic solutions, such as that
for the constant model
\cite{Fergusson:2008ra}
\begin{eqnarray}\label{eq:constbispect}
b_{l_1l_2l_3}^{const(la)} = \frac{\Delta^2_\Phi}{27 N} \frac{1}{(2l_1+1)(2l_2+1)(2l_3+1)}\[\frac{1}{l_1+l_2+l_3+3} + \frac{1}{l_1+l_2+l_3}\]\,.
\end{eqnarray}
This particular regular solution is important because we divide by it when plotting the CMB bispectrum $b_{l_1l_2l_3}/ b_{l_1l_2l_3}^{const(la)}$
throughout this paper. Normalising with the constant model (\ref{eq:constbispect}) is analogous to multiplying the
power spectrum $C_l$'s by $l(l+1)$, because it serves to remove an overall $l^{-4}$ scaling for all scale-invariant bispectra, preserving the effects of the oscillating transfer functions without introducing
spurious transverse momentum dependence.
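The large-angle solution (\ref{eq:constbispect}) is simple to code directly; the sketch below (an illustrative function of our own, with the prefactor $\Delta_\Phi^2/27N$ absorbed into an overall amplitude) also verifies the $l^{-4}$ scaling that the normalisation removes:

```python
def b_const_la(l1, l2, l3, amp=1.0):
    """Large-angle (Sachs-Wolfe) constant-model reduced bispectrum,
    eq. (constbispect), with the prefactor Delta_Phi^2/(27N) set by `amp`."""
    L = l1 + l2 + l3
    return amp / ((2*l1 + 1) * (2*l2 + 1) * (2*l3 + 1)) * (1.0/(L + 3) + 1.0/L)

# Doubling all l_i suppresses b by ~2^4 = 16 at large l: the overall l^-4
# scaling shared by all scale-invariant bispectra.
ratio = b_const_la(100, 100, 100) / b_const_la(200, 200, 200)
```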
\begin{figure}[t]
\centering
\includegraphics[width=.48\linewidth]{figures/Const2000_sm.jpg}
\includegraphics[width=.48\linewidth]{figures/ConstSlice_sm.jpg}
\caption[Constant model]{\small The reduced CMB bispectrum for the constant model $b_{l_1l_2l_3}^{\rm const}$ arising from the convolution of the primordial shape function $S(k_1,k_2,k_3) =1$
with transfer functions (normalised relative to the large-angle constant solution $b_{l_1l_2l_3}^{const(la)}$ given in (\ref{eq:constbispect})).
On the left, the 3D bispectrum is plotted over the allowed tetrahedral region of multipole triples (see fig.~\ref{fig:tetrapyd}) using several density contours (light blue positive and magenta negative) out to
$l_i\le 2000$. On the right, a transverse triangular slice through the bispectrum is shown for $l_1+l_2+l_3= 4000$ (Planck
resolution).
Note the coherent pattern of acoustic peaks with a dominant primary peak in a broad diagonal region around $l_1=l_2=l_3=220$.
This constant model bispectrum is the analogue of the angular power spectrum $C_l$ for a purely scale-invariant model.
}
\label{fig:constant}
\end{figure}
\subsection{CMB bispectrum estimators}
Now it is usually presumed that the full bispectrum for a high resolution map cannot be evaluated explicitly
because of the sheer
number of operations involved, ${\cal O}(l_\textrm{max}^5)$, as well as the fact that the signal will be too weak to measure
individual multipoles with any significance. Instead, we essentially use a least squares fit to compare the bispectrum
of the observed $a_{lm}$'s (\ref{eq:alm}) with a particular (separable) theoretical bispectrum $b_{l_1l_2l_3}^{\rm th}$,
\begin{eqnarray}
\langle a^{\rm th}_{l_1 m_1} a^{\rm th}_{l_2 m_2} a^{\rm th}_{l_3 m_3}\rangle= \mathcal{G}^{\,\,l_1\; l_2\; l_3}_{m_1 m_2 m_3 }
b_{l_1l_2l_3}^{\rm th}\,.
\end{eqnarray}
Here, $b_{l_1l_2l_3}^{\rm th}$ will be recovered as the expectation value from an ensemble average over $a_{lm}^{\rm th}$ realisations
or simulations created with the given reduced bispectrum.
Formally, taking into account the fact that instrument noise and masking can break rotational invariance, the result is the general optimal estimator
\cite{KSW,CreminellietAl2006,SmithetAl2009}
\begin{eqnarray}\label{eq:optimalestimator}
{\mathcal{E}} = \frac{1}{{{N}^2}} \sum_{l_i,m_i}& &\left[ \mathcal{G}^{\,\,l_1\; l_2\; l_3}_{m_1 m_2 m_3 } b_{l_1l_2l_3} ^{\rm th}\(C^{-1}_{l_1 m_1, l_4 m_4} a_{l_1m_1}\)
\(C^{-1}_{l_2 m_2, l_5 m_5} a_{l_2m_2}\) \(C^{-1}_{l_3 m_3, l_6 m_6}a_{l_3m_3} \) \right.
\nonumber \\
& & ~~~ \left. -~3 \left \langle a_{l_1m_1} a_{l_2m_2} a_{l_3m_3} \right \rangle C^{-1}_{l_1 m_1, l_2 m_2}
C^{-1}_{l_3 m_3,l_4 m_4} a_{l_4m_4} \right]\,,
\end{eqnarray}
where $C^{-1}$ is the inverse of the covariance matrix $C_{l_1 m_1, l_2 m_2} = \langle a_{l_1 m_1}a_{l_2 m_2}\rangle$
and $N$ is a suitable normalisation (discussed further below). Here, we follow refs.~\cite{WMAP5,YadavWandelt2009} by assuming a
nearly diagonal covariance matrix ($C_{l_1 m_1, l_2 m_2} \approx C_{l_1} \,\delta_{l_1l_2}\,\delta_{m_1\,-m_2}$) and approximating
the estimator (\ref{eq:optimalestimator}) as
\begin{align}\label{eq:approxestimator}
\mathcal{E} = \frac{1}{\tilde{N}^2} \sum_{l_i m_i} \frac{\mathcal{G}^{l_1 l_2 l_3}_{m_1 m_2 m_3} \, \tilde{b}_{l_1 l_2 l_3} }{ \tilde{C}_{l_1}\tilde{C}_{l_2}\tilde{C}_{l_3} } \(a_{l_1 m_1} a_{l_2 m_2} a_{l_3 m_3} - 6\, C^{\rm sim}_{l_1 m_1 , l_2 m_2} a_{l_3 m_3}\)\,,
\end{align}
where the tilde denotes the modification of $C_l$ and $b_{l_1l_2l_3}$ to incorporate instrument beam and noise effects through
\begin{align}
\label{eq:noisebeam}
\tilde{C}_l = b_l^2 C_l + N_l \qquad \mbox{and}\qquad \tilde{b}_{l_1 l_2 l_3} = b_{l_1}b_{l_2}b_{l_3}\, b_{l_1 l_2 l_3}\,.
\end{align}
For a relatively small galactic mask (leaving a large fraction $f_{\rm sky}$ of the full sky), it has also been shown to be a good
approximation to renormalise using
\begin{eqnarray}
\label{eq:cutsky}
b_{l_1l_2l_3}^{\rm mask} = f_{\rm sky} b_{l_1l_2l_3} \qquad \mbox{and}\qquad C_l^{\rm mask} = f_{\rm sky} C_l\,.
\end{eqnarray}
(We shall assume noise, beam and mask inclusion henceforth and drop any special notation.)
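Concretely, folding the beam, noise and sky cut into the estimator inputs amounts to the replacements (\ref{eq:noisebeam}) and (\ref{eq:cutsky}); a minimal sketch (illustrative function names, with spectra as flat lists indexed by $l$):

```python
def adjusted_cl(cl, bl, nl, fsky):
    """C_l -> f_sky (b_l^2 C_l + N_l), combining eqs. (noisebeam) and (cutsky)."""
    return [fsky * (b * b * c + n) for c, b, n in zip(cl, bl, nl)]

def adjusted_b(b_l1l2l3, bl1, bl2, bl3, fsky):
    """b_{l1l2l3} -> f_sky b_{l1} b_{l2} b_{l3} b_{l1l2l3}."""
    return fsky * bl1 * bl2 * bl3 * b_l1l2l3
```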
Here, the second linear term in (\ref{eq:approxestimator}) ensures subtraction of spurious inhomogeneous noise
and masking contributions by using the covariance matrix $C^{\rm sim}_{l_1 m_1 , l_2 m_2}$ from an
ensemble average of Gaussian maps in which these effects are incorporated.
If the theoretical bispectrum
$b_{l_1l_2l_3}^{\rm th}$ has the property of primordial separability then it has been noted that the summation in (\ref{eq:approxestimator})
becomes much more tractable taking only ${\cal O}(l_\textrm{max}^3)$ operations \cite{KSW}. Essentially this
exploits the separability of the Gaunt integral (\ref{eq:Gaunt}), as well as primordial counterparts, to
reduce the dimensionality of the integrals and summations involved in evaluating (\ref{eq:approxestimator})
(see ref.~\cite{FergussonLiguoriShellard2009} for a more detailed discussion on this point). To date, such separability has been
a property of all the primordial theories constrained observationally with most attention given to the canonical local model.
\subsection{$ F_\textrm {NL}$ normalisation}
It remains to briefly discuss the normalisation factor $N$ in (\ref{eq:optimalestimator}). In the past this has been
taken on a case-by-case manner for a given theoretical bispectrum $b_{l_1l_2l_3}^{\rm th}$ to be
\begin{eqnarray}\label{eq:theorynorm}
{N_{\rm th}}^2 \equiv \sum_{l_i} \frac{h_{l_1l_2l_3}^2{b^{\rm th}_{l_1 l_2 l_3}}^2}{C_{l_1}C_{l_2}C_{l_3}}\,.
\end{eqnarray}
As we discuss below, this has yielded very model-dependent results for the measurement of the
nonlinearity parameter $f_\textrm{NL}^{\rm th} \equiv {\cal E}$. Instead, we have proposed the parameter
$ F_\textrm {NL}$ which is much easier to compare between models, because it measures the
integrated CMB bispectrum signal relative to that from the canonical local model with $f_\textrm{NL}=1$. In this case, we define \cite{FergussonLiguoriShellard2009}
\begin{eqnarray} \label{eq:newfnl}
F_\textrm {NL}^{\rm th} = {\cal E}, \quad\mbox{with}\quad N^2 \equiv N_{\rm loc}N_{\rm th} \,,
\end{eqnarray}
with $N_{\rm th}$ from (\ref{eq:theorynorm}) and where $N_{\rm loc}$ is defined for the $f_\textrm{NL}=1$ local model:
\begin{eqnarray}
\quad {N_{\rm loc}}^2 \equiv \sum_{l_i} \frac{h_{l_1l_2l_3}^2{b^{{\rm loc}(f_\textrm{NL}=1)}_{l_1 l_2 l_3}}^2}{C_{l_1}C_{l_2}C_{l_3}}\,.
\end{eqnarray}
Of course, for the local model the quantities are identical ($ F_\textrm {NL}^{\rm th} = f_\textrm{NL}^{\rm loc}$).
However, when we quote constraints on other models we will use $ F_\textrm {NL}$---making self-evident the comparable nature
of this quantity---while also noting the $f_\textrm{NL}^{\rm th}$ previously used in the literature.
The problem with $f_\textrm{NL}^{\rm th}$ is that it derives from a somewhat arbitrary normalisation
of the primordial bispectrum $B^{\rm th}_\Phi(k_1,k_2,k_3)$ which bears little relation to the
observable CMB bispectrum signal. The convention has been to assume
a nearly scale-invariant shape function $S(k_1,k_2,k_3)$ and then to normalise it such that $S^{\rm th}(k,k,k)=1$, that is,
at a single point; this becomes the $f_\textrm{NL}=1$ case for the model under study. This definition
ignores the behaviour away from the equilateral value $k=k_1=k_2=k_3$. For example, $S$ rises
from a central minimum in
the local model and falls from a maximum in the equilateral model; hence, the huge disparities between their $f_\textrm{NL}$
constraints, e.g. $\Delta f_\textrm{NL}^{\rm equil} \approx 7 \Delta f_\textrm {NL}^\textrm{loc}$. This definition also does not apply to non-scaling
models. The alternative, basing the non-Gaussianity measure on the actually observable CMB bispectrum $b_{l_1l_2l_3}^{\rm th}$ as above in (\ref{eq:newfnl}), does accommodate non-scale-invariant models, such as feature models. It also covers bispectra
induced by late-time processes like gravitational lensing and cosmic strings. For models which are
not scale-invariant it should be quoted with the observational cut-off $l_\textrm{max}$. The normalisation for a particular model $ F_\textrm {NL}^{\rm th}$ can be easily forecast using the primordial $B_\Phi^{\rm th}(k_1,k_2,k_3)$ without the need for accurate CMB calculations of $b_{l_1l_2l_3}^{\rm th}$ in (\ref{eq:redbispect}); primordial shape autocorrelators just need to be compared with the local shape as
demonstrated in ref.~\cite{Fergusson:2008ra}.
\section{Separable mode expansions}\label{sec:modeexp}
When analysing the CMB bispectrum $b_{l_1l_2l_3}$, we are restricted to a tetrahedral domain of
multipole triples $\{l_1l_2l_3\}$ satisfying both a triangle condition and a limit given by the maximum resolution
$l_\textrm{max}$ of the experiment. This three-dimensional domain ${{\cal V}_{\cal T}}$ of allowed multipoles is illustrated in
fig.~\ref{fig:tetrapyd} and it is explicitly defined by
\begin{eqnarray} \label{eq:tetrapydl}
\nonumber
&&\mbox{Resolution:} \qquad \qquad~ l_1,l_2,l_3 \leq l_\textrm{max}\,,\quad l_1,l_2,l_3 \in \mathbb{N}\,,\\
&&\mbox{Triangle condition:}\quad l_1 \leq l_2+l_3 ~~\hbox{for}~~ l_1 \geq l_2,\,l_3, ~~ +~\hbox{cyclic}~\hbox{perms.}\,,\\ \nonumber&&\mbox{Parity condition:} \qquad l_1+l_2+l_3 = 2n\, ,~~~n\in\mathbb{N}\,.
\end{eqnarray}
The multipole domain is denoted a `tetrapyd'
because it arises from the union of a regular tetrahedron from the origin out to the plane $l_1+l_2+l_3\le 2l_\textrm{max}$ and a triangular pyramid constructed from the corner of the cube taking in the remaining multipole values out to $l_i\le l_\textrm{max}$. Summed bispectrum expressions such as (\ref{eq:theorynorm}) indicate that we must define a weight function $w_{l_1 l_2 l_3}$ on the tetrapyd domain
in terms of the geometrical factor $h_{l_1l_2l_3}$, that is,
\begin{eqnarray}\label{eq:lweightdiscrete}
w_{l_1 l_2 l_3} =h_{l_1l_2l_3}^2\,.
\end{eqnarray}
This is a nearly constant function on cross sections defined by $l_1+l_2+l_3=\mbox{const}$, except very near the tetrahedral boundaries where it is still bounded, and a useful and accurate continuum limit $w(l_1,l_2,l_3)$ is given in \cite{FergussonLiguoriShellard2009}.
In order to eliminate an $l^{-1/2}$ scaling in the bispectrum estimator functions, we usually
exploit the freedom to divide by a separable function and to employ instead the weight
\begin{eqnarray} \label{eq:lweightsep}
w_s(l_1,l_2,l_3) = \frac{w_{l_1 l_2 l_3} }{v_{l_1}^2v_{l_2}^2v_{l_3}^2}\,,\quad \mbox{where} \quad v_l = (2l+1)^{1/6}\,.
\end{eqnarray}
We can then define an inner product of two functions $f(l_1,l_2,l_3),\,g(l_1,l_2,l_3)$ on the tetrapyd domain (\ref{eq:tetrapydl})
through
\begin{eqnarray}\label{eq:innerproduct}
\langle f,\,g\rangle ~\equiv~ \sum_{l_1,l_2,l_3\in{{\cal V}_{\cal T}} } w_s(l_1,l_2,l_3)\, f(l_1,l_2,l_3)\, g(l_1,l_2,l_3)\,.
\end{eqnarray}
Given that calculations generally deal with smooth functions $f,\,g,\,w,\, v$, we can use a variety of schemes to speed up
this summation (effectively an integration).
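As a concrete sketch of how this works in practice (function names are our own; a brute-force implementation with no speed-up scheme), the tetrapyd can be enumerated over ordered triples and the inner product (\ref{eq:innerproduct}) accumulated with the weight $w_s$:

```python
import math

def threej_zero(l1, l2, l3):
    """Wigner-3j symbol at m1=m2=m3=0 (closed form; zero for odd l1+l2+l3)."""
    L = l1 + l2 + l3
    if L % 2 or l1 > l2 + l3 or l2 > l1 + l3 or l3 > l1 + l2:
        return 0.0
    g, f = L // 2, math.factorial
    return ((-1.0) ** g * math.sqrt(f(L-2*l1) * f(L-2*l2) * f(L-2*l3) / f(L+1))
            * f(g) / (f(g-l1) * f(g-l2) * f(g-l3)))

def tetrapyd(lmax):
    """Ordered triples l1 <= l2 <= l3 satisfying the resolution, triangle
    and parity conditions of eq. (tetrapydl)."""
    for l1 in range(lmax + 1):
        for l2 in range(l1, lmax + 1):
            for l3 in range(l2, min(l1 + l2, lmax) + 1):
                if (l1 + l2 + l3) % 2 == 0:
                    yield l1, l2, l3

def inner_product(f, g, lmax):
    """<f,g> of eq. (innerproduct) with weight w_s = h^2/(v1 v2 v3)^2,
    v_l = (2l+1)^(1/6); ordered triples carry their permutation multiplicity."""
    total = 0.0
    for l1, l2, l3 in tetrapyd(lmax):
        d = (2*l1 + 1) * (2*l2 + 1) * (2*l3 + 1)
        h2 = d / (4 * math.pi) * threej_zero(l1, l2, l3) ** 2
        mult = 6 if l1 < l2 < l3 else (1 if l1 == l2 == l3 else 3)
        total += mult * h2 / d ** (1.0 / 3.0) * f(l1, l2, l3) * g(l1, l2, l3)
    return total
```

At realistic resolutions one would of course replace the explicit $3j$ evaluation with the accurate continuum weight $w(l_1,l_2,l_3)$ mentioned above.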
\begin{figure}[t]
\centering
\includegraphics[width=.5\linewidth]{figures/tetrapyd.jpg}
\caption[Tetrahedral domain]{\small Observational domain (\ref{eq:tetrapydl}) for the CMB bispectrum $b_{l_1l_2l_3}$. Allowed multipole values $(l_1,\,l_2,\,l_3)$ lie inside the shaded `tetrapyd' region, satisfying both the triangle condition and $l_i \le L \equiv l_\textrm{max}$.}
\label{fig:tetrapyd}
\end{figure}
Our goal is to represent the observed CMB bispectrum estimator functions, such as those in (\ref{eq:approxestimator}) and (\ref{eq:theorynorm}), on the multipole domain (\ref{eq:tetrapydl})
using a separable mode expansion,
\begin{eqnarray} \label{eq:cmbestmodes}
\frac{v_{l_1}v_{l_2}v_{l_3}}{\sqrt{C_{l_1}C_{l_2}C_{l_3}}} \, b_{l_1l_2l_3} = \sum_n \baQ_n \barQ_n(l_1,l_2,l_3)\,,
\end{eqnarray}
where the $\barQ_n$ are basis functions constructed from symmetrised polynomial products
\begin{eqnarray}
\barQ_n (l_1,l_2,l_3) &=& {\textstyle \frac{1}{6}}[\bar q_p(l_1)\, \bar q_r(l_2)\, \bar q_s(l_3) + \bar q_r(l_1)\, \bar q_p(l_2)\, \bar q_s(l_3) + \mbox{cyclic perms in $prs$}]\nonumber\\
&\equiv& \bar q_{\{p} q_{r}q_{s\}}\quad \mbox{with}\quad n\leftrightarrow \{prs\}\,,
\end{eqnarray}
with the $\bar q_p(l)$ defined below. Here, the six permutations of the polynomial products which we denote as $\{ prs\}$
reflect the underlying symmetries of the bispectrum $b_{l_1l_2l_3}$. For convenience, we define a one-to-one mapping $n\leftrightarrow \{prs\}$ ordering the permuted triple indices into a single list labelled by $n\in \mathbb{N}$. Alternative `slicing' and `distance' orderings were presented in ref.~\cite{FergussonLiguoriShellard2009}, but the results presented here are robust to this choice. However, we shall quote explicit coefficients $\bQ_n$ resulting from distance ordering (i.e.\ $n(l_1,l_2,l_3) < n'(l_1',l_2',l_3')$ implies $l_1^2+l_2^2+l_3^2\le{l_1'}^2+{l_2'}^2+{l_3'}^2$; when two modes are equidistant, the one with the most equal $l_i$ takes precedence).
We choose to define the tetrahedral $\bar q_p(l)$ polynomials analogously to
Legendre polynomials $P_n$ by requiring them to be self-orthogonal with respect to the
inner product (\ref{eq:innerproduct}),
\begin{eqnarray}
\langle\bar q_p(l_1),\,\bar q_r(l_1)\rangle = \delta_{pr}\,,
\end{eqnarray}
with the first few polynomials given by $\bar q_0=0.074$, $\bar q_1 = 0.30(-0.61+l)$, $\bar q_2 = 1.2(0.26 -1.1\,l +l^2)$ etc.
More precise expressions and generating functions are given in ref.~\cite{FergussonLiguoriShellard2009}.
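The construction of the $\bar q_p(l)$ is simply Gram-Schmidt orthogonalisation of the monomials $1, l, l^2,\dots$ under the relevant inner product. The toy sketch below (our own, using a generic 1D weight in place of the marginalised tetrapyd weight, so the coefficients will not reproduce the values quoted above) illustrates the mechanics:

```python
import math

def ip(weight, f, g):
    """Discrete 1D inner product <f,g> = sum_l w_l f(l) g(l); `weight` stands
    in for the tetrapyd weight marginalised over l2, l3."""
    return sum(w * f(l) * g(l) for l, w in enumerate(weight))

def orthonormal_polys(weight, pmax):
    """Gram-Schmidt on 1, l, l^2, ... under ip(weight, ., .), mirroring how
    the tetrahedral polynomials q_p(l) are defined to be self-orthogonal."""
    polys = []
    for p in range(pmax + 1):
        f = (lambda p: lambda l: float(l) ** p)(p)
        for q in polys:  # subtract projections onto earlier polynomials
            c = ip(weight, f, q)
            f = (lambda f, q, c: lambda l: f(l) - c * q(l))(f, q, c)
        norm = math.sqrt(ip(weight, f, f))
        polys.append((lambda f, n: lambda l: f(l) / n)(f, norm))
    return polys
```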
As products, the $q_p$ only confer partial orthogonality on the 3D basis functions $\barQ_n$,
but their use is vital for other reasons, given their bounded and near scale-invariant behaviour.
While the product basis functions $\barQ_n$ are independent and separable, they are not orthogonal in general
\begin{eqnarray}
\langle \barQ_n,\,\barQ_p\rangle \equiv \gamma_{np}\ne \delta_{np}\,,
\end{eqnarray}
so it is very useful to construct a related set of orthonormal mode functions $\barR_n$ using Gram-Schmidt orthogonalisation
such that
\begin{eqnarray}\label{eq:orthonormal}
\langle \barR_n,\,\barR_p\rangle = \delta_{np}\,.
\end{eqnarray}
Working up to a given order $N$, the two sets of mode functions are related through
\begin{align}\label{eq:RQinverse}
\mathcal{R}_n = \sum_{p=0}^n \lambda_{np} \mathcal{Q}_p \quad \hbox{for}~~ n,p\le N\,,
\end{align}
where $ \lambda_{np}$ is a lower triangular matrix with
\begin{eqnarray}\label{eq:gammalambda}
(\lambda^{-1})_{np} = \langle\curl{Q}_n,\,\curl{R}_p\rangle \qquad\mbox{and} \qquad
(\gamma^{-1})_{np} = \sum_{r}^N(\lambda^\top)_{nr}\lambda_{rp}\, .
\end{eqnarray}
Knowing $\lambda_{np}$ allows us to systematically evaluate
the expansion coefficients in (\ref{eq:cmbestmodes}) directly from the inner product
\begin{eqnarray} \label{eq:RQrelation}
\baR_n =\Big \langle \barR_n,\, \frac{v_{l_1}v_{l_2}v_{l_3}}{\sqrt{C_{l_1}C_{l_2}C_{l_3}}} \, b_{l_1l_2l_3}\Big\rangle\,,
~~~~\hbox{yielding}~~~~\baQ_n = \sum_{p=0}^{N}(\lambda^\top)_{np}\, \baR_p\,.
\end{eqnarray}
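Since (\ref{eq:gammalambda}) implies $\gamma = \lambda^{-1}\lambda^{-\top}$ with $\lambda$ lower triangular, one way to obtain $\lambda$ in practice is as the inverse of the Cholesky factor of the positive-definite Gram matrix $\gamma$. A minimal pure-Python sketch (illustrative function names; matrices as nested lists):

```python
import math

def cholesky(gamma):
    """Lower-triangular L with L L^T = gamma (gamma symmetric positive definite)."""
    n = len(gamma)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = gamma[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(s) if i == j else s / L[j][j]
    return L

def invert_lower(L):
    """Invert a lower-triangular matrix by forward substitution."""
    n = len(L)
    M = [[0.0] * n for _ in range(n)]
    for i in range(n):
        M[i][i] = 1.0 / L[i][i]
        for j in range(i):
            M[i][j] = -sum(L[i][k] * M[k][j] for k in range(j, i)) / L[i][i]
    return M

def lam_from_gamma(gamma):
    """lambda with lambda gamma lambda^T = I, so that <R_n, R_p> = delta_np."""
    return invert_lower(cholesky(gamma))
```

The coefficient transformation in (\ref{eq:RQrelation}) is then just a triangular matrix-vector product with $\lambda^\top$.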
Indeed, it is more convenient to present our final bispectrum results in the orthonormal $\barR_n$ basis,
\begin{eqnarray} \label{eq:cmborthmodes}
\frac{v_{l_1}v_{l_2}v_{l_3}}{\sqrt{C_{l_1}C_{l_2}C_{l_3}}} \, b_{l_1l_2l_3} = \sum_n \baR_n \barR_n\,
\end{eqnarray}
because their orthonormality
(\ref{eq:orthonormal}) implies a version of Parseval's theorem. Here, we note that the expansion (\ref{eq:cmborthmodes})
presumes a spectrum normalised as in (\ref{eq:newfnl}) to have $ F_\textrm {NL} =1$, that is, with $N$ such that $\sum_n\baR_n{}^2=N^2$
in the estimator (\ref{eq:approxestimator}).
To summarise, the $\barQ_n(l_1,l_2,l_3)$'s are independent separable
basis functions built out of the permutations of simple products of the polynomials $\bar q_p(l)$, which
are well-behaved and bounded over the tetrapyd. The $\barQ_n$'s in their easily separable form
are employed directly in the bispectrum estimator. However, it is more straightforward to present results and to
use the inner product (\ref{eq:innerproduct}) with the transformed $\barR_n$ eigenmodes because they are orthonormal; a simple
matrix expression (\ref{eq:RQrelation}) relates the expansion coefficients $\baQ_n$ and $\baR_n$ using the two sets of basis functions.
\section{Reconstructing the CMB bispectrum}\label{sec:reconstruction}
Now consider the implications of substituting the mode expansion (\ref{eq:cmbestmodes}) into the estimator (\ref{eq:approxestimator}), while exploiting the separability of the Gaunt integral (\ref{eq:Gaunt}),
\begin{eqnarray}\label{eq:cmbestsep}
{\cal E} &=& \frac{1}{N^2}\sum_{l_i,m_i}\sum _{n\leftrightarrow prs}\kern-6pt \baQ_n\bar q_{\{p}\bar q_r \bar q_{s\}} \int d^2\hat {\bf n} \frac{Y_{l_2m_2}(\hat {\bf n})Y_{l_1m_1}(\hat {\bf n})\, Y_{l_3m_3} (\hat {\bf n})}{{v_{l_1}v_{l_2}v_{l_3}}\sqrt{C_{l_1}C_{l_2}C_{l_3}}}
\left[a_{l_1m_1}a_{l_2m_2}a_{l_3m_3} - 6 \langle a_{l_1m_1}a_{l_2m_2}\rangle a_{l_3m_3}\right]\nonumber\\
&=& \frac{1}{N^2} \sum _{n\leftrightarrow prs}\kern-6pt \baQ_n \int d^2\hat {\bf n}\left[\(\sum_{l_1,m_1} \bar q_{\{p} \, \frac{a_{l_1m_1} Y_{l_1m_1}}{v_{l_1}
\sqrt{C_{l_1}}}\)\( \sum_{l_2,m_2}
\bar q_{r} \, \frac{a_{l_2m_2} Y_{l_2m_2}}{v_{l_2} \sqrt{C_{l_2}}}\)\(
\sum_{l_3,m_3} \bar q_{s\}} \, \frac{a_{l_3m_3} Y_{l_3m_3}}{v_{l_3}
\sqrt{C_{l_3}}}\)\right.\nonumber\\
&&~~~~~~-6\left.\left\langle\(\sum_{l_1,m_1} \bar q_{\{p} \, \frac{a_{l_1m_1} Y_{l_1m_1}}{v_{l_1}
\sqrt{C_{l_1}}}\)\( \sum_{l_2,m_2} \bar q_{r} \, \frac{a_{l_2m_2} Y_{l_2m_2}}{v_{l_2} \sqrt{C_{l_2}}}\)\right\rangle\(
\sum_{l_3,m_3} \bar q_{s\}} \, \frac{a_{l_3m_3} Y_{l_3m_3}}{v_{l_3}
\sqrt{C_{l_3}}}\)\right] \\
&=& \frac{1}{N^2} \sum _{n\leftrightarrow prs}\kern-6pt \baQ_n \int d^2\hat {\bf n}\,\left[\bar M_{\{p}({\bf \hat{n}})\bar M_r({\bf \hat{n}})\bar M_{s\}}({\bf \hat{n}})-
6\left\langle\bar M^{\rm G}_{\{p}({\bf \hat{n}})\bar M^{\rm G}_r({\bf \hat{n}})\right\rangle\bar M_{s\}}({\bf \hat{n}})\right]\,.
\end{eqnarray}
Here, the $\bar M_p({\bf \hat{n}})$ represent versions of the original CMB map filtered with
the polynomial $\bar q_p$ and the separated weight function $(v_l \sqrt{C_l})^{-1}$, that is,
\begin{align}\label{eq:barmapfilter}
\bar M_p({\bf \hat{n}}) = \sum_{lm} \bar q_p(l)\frac{a_{lm}}{v_l\sqrt{C_l}} Y_{lm}({\bf \hat{n}})\,.
\end{align}
The maps $\bar M^{\rm G}_p({\bf \hat{n}})$ incorporate the same mask
and a realistic model of the inhomogeneous instrument noise; a large ensemble of these maps, calculated from Gaussian simulations, is used in the averaged linear
term in the estimator (\ref{eq:cmbestsep}), allowing for the subtraction of these important effects. Defining the integral over
these convolved product maps as cubic and linear terms respectively
\begin{eqnarray}\label{eq:mapintegral}
\bbQ_n{}^{\rm cub} &=& \int d^2\hat {\bf n}\, \bar M_{\{p}({\bf \hat{n}})\bar M_r({\bf \hat{n}})\bar M_{s\}}({\bf \hat{n}})\,,\\
\bbQ_n{}^{\rm lin} &=& \int d^2\hat {\bf n}\, \left\langle\bar M^{\rm G}_{\{p}({\bf \hat{n}})\bar M^{\rm G}_r({\bf \hat{n}})\right\rangle\bar M_{s\}}({\bf \hat{n}})\nonumber\,,
\end{eqnarray}
the estimator (\ref{eq:approxestimator}) reduces to a simple sum over the mode coefficients
\begin{eqnarray}\label{eq:estimatorsum}
{\cal E} = \frac{1}{N^2} \sum_n \baQ_n \bbQ_n\,,
\end{eqnarray}
where $\bbQ_n \equiv \bbQ_n{}^{\rm cub} - \bbQ_n{}^{\rm lin}$.
The estimator sum (\ref{eq:estimatorsum}) is straightforward to evaluate, provided
the theoretical model coefficients $\baQ_n$ are known. It has been separated into
a product of three sums over the observational maps (\ref{eq:cmbestsep}), followed by a single integral over
all directions (\ref{eq:mapintegral}). The actual operations entailed in the estimator sum are only ${\cal{O}} (l^2)$,
so these late-time methods are extremely rapid for direct data analysis and for obtaining variances from map
simulations. However, we note that the preparatory `one-off' calculations setting up the orthonormal eigenmodes
and theoretical CMB bispectra are of order ${\cal{O}} (l^3)$. We emphasise that the
utility of this approach depends on a fairly rapidly convergent expansion for the theoretical bispectrum under
study (as indicated for almost all models studied to date \cite{Fergusson:2008ra}) and the fact that we have
constructed a {\it complete} set of orthonormal eigenmodes on the observed multipole domain (\ref{eq:tetrapydl}).
There is potentially much more information in the observed $\bbQ_n$ coefficients than just the estimator
sum (\ref{eq:estimatorsum}) which only yields $f_\textrm{NL}$ for a given theoretical model. Following the steps above in
(\ref{eq:cmbestsep}), it is easy to show (see Appendix)
that the expectation value for $\bbQ_n$ for an ensemble of maps with a given CMB bispectrum (expanded in modes $\baR_n$
with amplitude $ F_\textrm {NL}$) is
\begin{eqnarray}
\langle \bbQ_n\rangle = F_\textrm {NL} \sum_p \baQ_p \langle \barQ_n,\, \barQ_p\rangle = F_\textrm {NL} \sum_p\baQ_p\,\gamma_{np}\,,
\end{eqnarray}
so that the averaged estimator (\ref{eq:estimatorsum}) becomes
\begin{eqnarray}\label{eq:estimatorsum2}
\langle{\cal E}\rangle = \frac{1}{N^2} F_\textrm {NL}\sum_n\sum_p \baQ_n\, \gamma_{np}\, \baQ_p = \frac{1}{N^2} F_\textrm {NL}\sum_n \baR_n{}^2 = F_\textrm {NL}\,,
\end{eqnarray}
where we have used (\ref{eq:RQrelation}) in transforming to the $\barR_n$ basis. (Here we note again that in this basis
$N^2 = \sum_n \baR_n{}^2$.)
Equivalently, then, in the orthonormal frame we have the simple result
\begin{eqnarray}\label{eq:bestfitbeta}
\langle \bbR_n\rangle = F_\textrm {NL} \baR_n\,,
\end{eqnarray}
that is, we expect the best-fit $\bbR_n$ coefficients for a particular realisation to be the $\baR_n$'s themselves (given
a sufficiently large signal).
Assuming that we can extract the $\bbR_n$ coefficients with sufficient significance from a particular experiment,
this means that we can directly reconstruct the CMB bispectrum using the expansion (\ref{eq:cmborthmodes}).
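In the orthonormal basis the estimator thus reduces to a least-squares amplitude fit of the measured $\bbR_n$ against the theoretical $\baR_n$. An illustrative sketch (our own notation, with the coefficients as plain lists):

```python
def fnl_fit(alpha_R, beta_R):
    """Best-fit amplitude for <beta_R_n> = F_NL alpha_R_n, equivalent to the
    estimator sum with normalisation N^2 = sum_n alpha_R_n^2."""
    norm2 = sum(a * a for a in alpha_R)
    return sum(a * b for a, b in zip(alpha_R, beta_R)) / norm2
```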
\section{Validation using simulated maps}
\label{sec:validation}
In our previous paper \cite{FergussonLiguoriShellard2009}, we have validated this mode expansion approach with simulated maps in a
WMAP realistic context, accurately obtaining the input $f_\textrm{NL}$ as well as a fairly good recovery of the actual bispectrum
coefficients $\baR_n$.
In that case, we chose to use the equilateral model because the noise analysis requirements were not as stringent.
Here, however, we wish to undertake a comprehensive reconstruction of the bispectrum, so we have ensured an accurate
and robust implementation of the full noise and mask
subtraction, incorporated in the linear correction term to the bispectrum estimator (\ref{eq:approxestimator}).
As well as the equilateral model, the other canonical test model we have used
to validate our methods is the local model, characterised by the shape function (\ref{eq:shapefn}):
\begin{eqnarray}\label{eq:localS}
S^{\rm local}(k_1,k_2,k_3) = \frac{1}{3} \(\frac{k_1^2}{k_2k_3}+\frac{k_2^2}{k_1k_3}+\frac{k_3^2}{k_1k_2}\)\,.
\end{eqnarray}
The local model (\ref{eq:localS}) is dominated by signal from squeezed triangles,
e.g.\ $k_1\ll k_2,\,k_3$. This behaviour is strongly reflected in
the CMB bispectrum $b_{l_1l_2l_3}$ where the impact of the transfer functions $ \Delta_{l}(k)$ is to impose
a series of acoustic peaks on the
underlying local shape. This dominant signal along the edges of the tetrapyd for the local model provides
a rigorous test for subtracting the inhomogeneous noise and masking effects. This is because these effects also exhibit
an overall local shape (as we will discuss in a future publication \cite{inprep}).
\begin{figure}[th]
\centering
\includegraphics[width=.75\linewidth, height = 5cm]{figures/Local_Modes_Recon_sm.jpg}
\caption[]{\small Recovered spectral coefficients $\bbR_n$ (\ref{eq:cmbestmodes}) from a single
map simulation for a local model with $f_\textrm{NL} = 100$. The original $\baR_n$ decomposition coefficients for the
local model are shown for comparison (blue). The $\bbR_n$ coefficients were recovered in a WMAP-realistic
context using the KQ75 mask with inhomogeneous noise added. Error bars (1$\sigma$) are also shown
for each mode as estimated from 1000 Gaussian maps; note that this variance is roughly constant for all modes.}
\label{fig:reconalpha}
\end{figure}
\begin{figure}[th]
\centering
\includegraphics[width=.53\linewidth]{figures/Local3D_exact.jpg}
\includegraphics[width=.46\linewidth]{figures/Local3D_Recon.jpg}
\caption[Recovered bispectrum]{\small Recovered 3D bispectrum using the mode decomposition method (\ref{eq:cmborthmodes}) from the $f_\textrm{NL} = 100$ local model map simulation used in
fig.~\ref{fig:reconalpha}. The left panel represents the original theoretical
bispectrum used to construct map realisations. The right panel represents the recovery in a
WMAP-realistic context using the spectrum shown in fig.~\ref{fig:reconalpha}. The main features of the bispectrum are evident,
including the primary acoustic peak and the high signal in the squeezed states along the tetrahedron edges.}
\label{fig:3drecon}
\end{figure}
The $\baR_n$ coefficients for the local bispectrum $b_{l_1l_2l_3}^\textrm{local}$ are illustrated in fig.~\ref{fig:reconalpha},
having been found using the robust CMB bispectrum calculation methods described in
ref.~\cite{Fergusson:2008ra} (which achieve better
than 99\% correlation with the exact results). With only $n_\textrm{max} = 31$
$\barQ_n$ modes it was possible to achieve at least a 94\% correlation between the partial sum (\ref{eq:cmbestmodes})
and the original $b_{l_1l_2l_3}$ for all the models studied (usually greater than 98\%). These
coefficients were then used to create ensembles of local maps with $f_\textrm{NL}=100$ (resolution $l_\textrm{max}=500$)
using the separable mode map simulation
methods described in ref.~\cite{FergussonLiguoriShellard2009}. As a further test independent of modal map-making, the
local map simulations from ref.~\cite{LigMat} were also used to verify the main conclusions. In addition,
inhomogeneous noise obtained by coadding WMAP V and W channels was included together with the
use of a KQ75 sky mask, just as in the original non-Gaussian analysis of WMAP5 data.
The efficacy of the modal estimator (\ref{eq:estimatorsum}) in recovering the correct $f_\textrm{NL}=100$ from the local maps
is illustrated in fig.~\ref{fig:reconalpha}, with the ensemble average $f_\textrm{NL} = 93\pm 32.5$ found
to be in good agreement. We expect the central value to be slightly low since the mode expansion is only 94\% correlated with the exact local bispectrum (the maps were created using an exact method, rather than from the modes, to ensure robustness). This confirmed the results for the equilateral model studied
in ref.~\cite{FergussonLiguoriShellard2009}, where it was also shown that Gaussian maps give unbiased results around $f_\textrm{NL}=0$.
The recovery of the local bispectrum mode coefficients (\ref{eq:mapintegral}) also proved to be remarkably efficient as illustrated
in fig.~\ref{fig:reconalpha} for a typical spectrum obtained from a single map realization. The dominant
local modes are clearly identifiable above the noise (for a signal of this 3$\sigma$ significance), with the
typical variance obtained from Gaussian maps also shown. The three-dimensional reconstruction
for the local model bispectrum is illustrated on the tetrapyd domain in fig.~\ref{fig:3drecon} and is comparable
with the original bispectrum. The dominant features are recovered, including the primary acoustic peak at
$l_1\approx l_2\approx l_3\approx 200$ and the strong signal for the squeezed triangles
along the edges of the tetrahedron where one of $l_i\approx0$. Comparing with results for the
equilateral model in ref.~\cite{FergussonLiguoriShellard2009}, it is clear that for a measurement of $3\sigma$ significance or more,
we should be able
to distinguish between families of models which are weakly correlated, such as local and equilateral.
\section{The WMAP bispectrum}\label{sec:WMAP}
\begin{figure}[b]
\centering
\includegraphics[width=.85\linewidth, height = 6.5cm]{figures/WMAP_Modes_sm.jpg}
\caption[]{\small Recovered mode coefficients $\bbR_n$ (\ref{eq:cmbestmodes}) from
the WMAP5 coadded V and W maps. Error bars (1$\sigma$) are also shown
for each mode as estimated from 1000 Gaussian map simulations in WMAP-realistic context. }
\label{fig:reconalphaWMAP5}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=.825\linewidth]{figures/WMAP_Recon.jpg}
\caption[Recovered bispectrum]{\small Recovered 3D bispectrum from WMAP5 data showing the result
using the reconstructed mode coefficients $\bbR_n$ shown in fig.~\ref{fig:reconalphaWMAP5} with the
partial sum (\ref{eq:cmborthmodes}). Several density contours (light blue positive and magenta negative) are shown out to
$l_i\le 500$. }
\label{fig:3dreconWMAP5}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=.145\linewidth]{figures/WMAP_250_ang}
\includegraphics[width=.175\linewidth]{figures/WMAP_500_ang}
\includegraphics[width=.275\linewidth]{figures/WMAP_750_ang}
\includegraphics[width=.375\linewidth]{figures/WMAP_1000_ang}
\caption[Recovered bispectrum]{\small
Recovered 3D bispectrum from WMAP5 data showing slices through the data
at $\tilde l\equiv l_1+l_2+l_3= \hbox{const.}$. Slices shown are $\tilde l = 250, 500, 750, 1000$, using the
same colour scale as fig.~\ref{fig:3dreconWMAP5}.}
\label{fig:3dreconslices}
\end{figure}
We now move on to apply the mode decomposition techniques described and validated
in the previous sections to the analysis of WMAP5 data. Our aim, first, will be
to estimate $f_\textrm{NL}$ arising from different primordial shapes, some as yet unconstrained in the literature
(such as the feature models of section \ref{sec:nonscaling} and the flattened models of section \ref{sec:flattened}).
Secondly, we aim
to provide a full reconstruction of the bispectrum from the data, using the same pipeline shown to recover
local and equilateral bispectra from simulated data.
The main emphasis of this work is obtaining
fast and accurate convergence for many different shapes, rather than a fully optimised estimation.
The analysis presented here is intended as a proof-of-concept for
late-time modal estimators of non-Gaussianity, gleaning valuable new information from WMAP rather than achieving
a maximal extraction. For this reason our study has a number of limitations, which we enumerate here. We do not implement full inverse covariance weighting in the estimator
as in (\ref{eq:optimalestimator}) \cite{SmithetAl2009}, but we adopt the pseudo-optimal weighting scheme used by
the WMAP team for the WMAP 5-year analysis
\cite{WMAP5}; we
use multipoles up to $\ell_{\max} = 500$, rather than $1000$, since the pseudo-optimal $f_\textrm{NL}$ error bars tend to
saturate above that threshold; finally, we work with WMAP 5-year instead of WMAP 7-year data. The reason for not using the
latest available dataset is not only that this work started well before the 7-year WMAP data release, but is also due
to the fact that WMAP 5-year data was originally studied with a pseudo-optimal weighting approach, thus making a comparison
between our results more straightforward. The present work represents the initial implementation of this general approach to analysing non-Gaussianity, rather than its completion.
After coadding the V and W band data (with the same weights as in the WMAP5
analysis), our first step was to extract the $\bbQ_n$ mode coefficients from the data, following the procedure summarized in eqns (\ref{eq:barmapfilter}) and (\ref{eq:mapintegral}). In our analysis we chose to compute the first $n_\textrm{max}=31$ modes in (\ref{eq:estimatorsum}), because this proved sufficient to describe almost all theoretical CMB bispectra on the observational
domain $l_\textrm{max}=500$. The resulting estimates
will be shown in the following sections. As pointed out in (\ref{eq:bestfitbeta}), by rotating
our recovered $\bbQ_n$ into the orthonormal frame we obtain the best-fit estimate of the actual bispectrum coefficients $\baR_n$.
The mode coefficients $\bbR_n$ obtained from the WMAP5 data in this orthonormal frame are plotted in
fig.~\ref{fig:reconalphaWMAP5}. The variance is estimated from 1000 Gaussian map simulations, applying the pipeline
repeatedly in the same WMAP-realistic context.
The mode coefficient extraction from the WMAP5 data was straightforward with both the cubic and
linear terms contributing significantly to the final result. The late-time estimator (\ref{eq:approxestimator})
is sensitive to all forms of non-Gaussianity, in contrast to the two or three separable (and
oscillating) modes previously extracted from the data using primordial estimators. Although this increased sensitivity
in principle makes the method more susceptible to foreground contamination,
our results do not appear to have been significantly affected once the linear term is subtracted.
This has been investigated through extensive testing, including increasing mask size, and we will discuss these issues at much
greater length in a companion paper \cite{inprep}, characterising the mask, noise and other contributions. It is interesting
to note here, however, that the mode decompositions can also be used to characterise anisotropic contributions, such as the inhomogeneous noise (and possibly other contaminants). We will show quantitatively how the action of the linear term
essentially projects out these spurious bispectrum directions from the cubic term, with the local shape being the most affected
(as noted originally in ref.~\cite{CreminellietAl2006}).
The extracted mode coefficients $\bbR_n$ from
fig.~\ref{fig:reconalphaWMAP5} can be used to reconstruct the full 3D WMAP bispectrum using (\ref{eq:cmborthmodes}).
The result of this partial sum is shown in fig.~\ref{fig:3dreconWMAP5}, together with a series of transverse slices through
the bispectrum shown in fig.~\ref{fig:3dreconslices}. Visually the WMAP bispectrum bears a qualitative
similarity to the local CMB bispectrum already used for pipeline validation, illustrated in fig.~\ref{fig:3drecon}.
As we shall discuss, there appears to be some local signal emerging from the WMAP data, but the periodicity of the
other features does not match well with scale-invariant primordial models. Nevertheless,
the orthonormal mode coefficients $\bbR_n$ plotted in fig.~\ref{fig:reconalphaWMAP5} do not individually show significant deviations
away from Gaussianity, presuming the accuracy of the nearly constant mode variances which are also plotted. We note at the outset, therefore,
that the WMAP bispectrum shown in fig.~\ref{fig:3dreconWMAP5} is likely to be the result of cosmic variance
(perhaps with some residual local signal left over from the noise/mask subtraction and/or other contamination).
As well as constraining specific theoretical models, we shall test the assumption of Gaussianity more generally in
section \ref{sec:totalbisp} by considering a measure of the total integrated bispectrum
obtained from the squared coefficients $\bbR_n{}^2$. In the near future, using the full WMAP7 data set and smaller variances we
will expand the scope of our mode exploration, including principal component analysis and other statistical approaches \cite{inprep}.
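Schematically, this integrated measure is just the sum of squared orthonormal mode coefficients, with its significance calibrated against a Gaussian ensemble. The following toy sketch (illustrative values; not the pipeline implementation) shows the idea:

```python
import numpy as np

def total_bispectrum_sq(beta):
    """Integrated bispectrum measure: sum of squared orthonormal
    mode coefficients, sum_n beta_n^2 (a chi^2-like statistic)."""
    beta = np.asarray(beta, float)
    return float(np.sum(beta**2))

# calibrate against a toy Gaussian ensemble with unit mode variance:
# for n_max modes the statistic then has mean approximately n_max
rng = np.random.default_rng(0)
n_max, n_sims = 31, 2000
gaussian_stats = [total_bispectrum_sq(rng.standard_normal(n_max))
                  for _ in range(n_sims)]
assert abs(np.mean(gaussian_stats) - n_max) < 2.0
```

An observed value far out in the tail of the Gaussian ensemble would indicate a departure from Gaussianity without reference to any particular model.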
Before obtaining specific new constraints, we emphasise again that the extraction of the mode coefficients $\bbR_n$ provides a completely model-independent assessment of the three-point correlation function. The approach provides far more
information than that contained in a simple $f_\textrm{NL}$ amplitude parameter extraction for particular models.
Although obvious deviations from Gaussianity
are not apparent from this limited WMAP5 analysis (i.e.\ pseudo-optimal error bars and $l_\textrm{max}=500$), there remains considerable
potential with new data sets. For Planck, the sensitivity to primordial non-Gaussianity will improve by
up to an order of magnitude and so the error bars in fig.~\ref{fig:reconalphaWMAP5} will shrink dramatically. The prospects
for detection of a large NG signal remain completely open.
\section{Constraints on nearly scale-invariant models}\label{sec:scaleinv}
Constraints on the bispectrum to date have been for scale-invariant models of separable form,
primarily on the local and equilateral models, discussed
previously. There has been significant evolution over time for these constraints as both the CMB
data and the estimation methodology have improved. However, as table~\ref{tab:review}
illustrates (taken from ref.~\cite{LigSef2010}), there is no compelling and confirmed evidence for
a significant non-Gaussian signal at this stage. Our purpose in this section is to apply our
more general mode expansion estimator (\ref{eq:estimatorsum}) with our WMAP analysis to obtain constraints
on a much wider set of scale-invariant models. This method can be applied to any model for
which there is good convergence with the given $n_\textrm{max}$ modes.
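In practice, ``good convergence'' can be quantified by the mode-space cosine between two sets of orthonormal coefficients, the same measure used to quote shape correlations throughout. A schematic illustration (toy coefficient vectors, not pipeline output):

```python
import numpy as np

def mode_cosine(alpha, beta):
    """Shape correlation between two bispectra expressed in the same
    orthonormal mode frame: C = sum_n a_n b_n / (|a| |b|)."""
    alpha = np.asarray(alpha, float)
    beta = np.asarray(beta, float)
    return float(alpha @ beta) / (np.linalg.norm(alpha) * np.linalg.norm(beta))

# identical shapes are 100% correlated regardless of amplitude,
# while orthogonal coefficient vectors are uncorrelated
a = np.array([1.0, 0.5, 0.25])
assert abs(mode_cosine(a, 2.0 * a) - 1.0) < 1e-12
assert abs(mode_cosine([1, 0, 0], [0, 1, 0])) < 1e-12
```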
\begin{table}[h]
\begin{tabular}{c |cl | cl }
& \multicolumn{2}{c}{\bf Local} & \multicolumn{2}{c}{\bf Equilateral} \\
\hline
\multirow{2}{*}{\bf Pure cubic} & $-58 < f_\textrm{NL} < 134$ & W1, Komatsu et al 2003~\cite{WMAP1} & $-366 < f_\textrm{NL} < 238$ & W1, Creminelli et al 2006~\cite{CreminellietAl2006} \\ & $-54 < f_\textrm{NL} < 114$ & W3, Spergel et al 2007~\cite{WMAP3} & $-256 < f_\textrm{NL} < 332$ & W3, Creminelli et al 2006~\cite{CreminellietAl2006}
\\
\hline
\multirow{5}{*}{\bf Pseudo-optimal} & $-27 < f_\textrm{NL} < 121$ & W1, Creminelli et al 2006~\cite{CreminellietAl2006} & $-151 < f_\textrm{NL} < 253$ & W5, Komatsu et al 2009~\cite{WMAP5} \\ & $-36 < f_\textrm{NL} < 100$ & W3, Creminelli et al 2006~\cite{CreminellietAl2006} & \\ & $27 < f_\textrm{NL} < 147$ & W3, Yadav Wandelt 2008~\cite{YadavWandelt2009} & \\ & $9 < f_\textrm{NL} < 129$ & W3, Smith et al 2009~\cite{SmithetAl2009} &
\\ & $-9 < f_\textrm{NL} < 111$ & W5, Komatsu et al 2009~\cite{WMAP5} & \\
\hline
\multirow{3}{*}{\bf Optimal} & $12 < f_\textrm{NL} < 104$ & W3, Smith et al 2009~\cite{SmithetAl2009} & $-125 < f_\textrm{NL} < 435$ & W5, Smith et al 2009~\cite{SmithetAl2009} \\ & $-4 < f_\textrm{NL} < 80$ & W5, Smith et al 2009~\cite{SmithetAl2009} & $-254 < f_\textrm{NL} < 306$ & W7, Komatsu et al 2010~\cite{WMAP7} \\
& $-10 < f_\textrm{NL} < 74$ & W7, Komatsu et al 2010~\cite{WMAP7} & \\
\hline
\end{tabular}
\caption{Constraints on $f_\textrm{NL}^{local}$,$f_\textrm{NL}^{equil.}$, obtained by different groups on the one-year (W1), three-year (W3), five-year (W5), and seven-year (W7)
WMAP data releases. The estimators employed are the pseudo-optimal (\ref{eq:approxestimator}), the cubic (the same
without the linear noise term), and the optimal with full-covariance weighting (\ref{eq:optimalestimator}). All results were in
the context of a primordial estimator using separable functions to describe the specific model, unlike the general
late-time estimator employed here. For further details about the estimator methods employed and
the significant evolution of these results over time, please refer to the review \cite{LigSef2010}. }
\label{tab:review}
\end{table}
\begin{figure}[b]
\centering
\includegraphics[width=.65\linewidth]{figures/Const3D.jpg}
\caption[Constant model bispectrum]{\small
Predicted 3D bispectrum for the constant model up to $l_i\le 500$.
The same thresholds are employed as those shown in the
WMAP reconstructions in fig.~\ref{fig:3drecon}
(after an overall rescaling). }
\label{fig:const3Dslices}
\end{figure}
\subsection{Constant model}
The constant model $S(k_1,k_2,k_3)= 1$ is the simplest possible primordial shape with triangles of every configuration
contributing equally, resulting in a CMB bispectrum $b_{l_1l_2l_3}$ with features due entirely to the transfer functions (as we observed
for the acoustic peaks shown in fig.~\ref{fig:constant}). The constant model was motivated initially
by its simplicity with the large-angle analytic solution (\ref{eq:constbispect}) for the CMB bispectrum \cite{Fergusson:2008ra}.
However, the constant shape does have other more explicit physical motivation, such as
generation during a slowly turning trajectory in multifield inflation, denoted quasi-single field inflation \citep{ChenWang2009}.
For nearly scale-invariant models, the central values for the bispectrum, $b_{lll}$, all have roughly the same profile but with different normalisations. The oscillatory properties of the transfer functions create acoustic peaks located at triple combinations of the multipole values $l \approx 200, 500, 800, \dots$. To observe the key differences between scale-invariant models we must study the bispectrum in the plane orthogonal to the $(l,l,l)$-direction, that is, the directions reflecting changes in the primordial shape functions.
For the multipole
range $l_\textrm{max} <500$ relevant to the present analysis, we have plotted the 3D bispectrum in fig.~\ref{fig:const3Dslices}. Here, the dominant feature is the primary acoustic peak stretched along the diagonal
of the tetrapyd, peaking at $l = l_1=l_2=l_3=220$ and elongated like an extended balloon from $l\approx100$ to $l\approx450$.
Evidence for this primary peak would indicate the presence of a primordial and scale-invariant non-Gaussian signal, as
emphasised in ref.~\cite{Fergusson:2008ra} and investigated quantitatively for the local model in ref.~\cite{BuchervanTent}.
In the reconstructed WMAP bispectrum shown in fig.~\ref{fig:3dreconWMAP5} there is a central fluctuation at $l\approx 140$,
but it does not extend to larger $l$ as would be expected; see the $l_1+l_2+l_3 =750$ slice in fig.~\ref{fig:3dreconslices} (right),
corresponding to $l\approx 250$, where the (apparent) WMAP peak has disappeared. If this measured 3D WMAP bispectrum were considered
to have any statistical significance then it would militate against a scale-invariant model, motivating the discussion in section~\ref{sec:nonscaling}.
A comparison of the mode coefficients $ \baR_n{}^{\rm const} $ from the constant model CMB bispectrum shown in fig.~\ref{fig:constalpha} indicates little obvious correlation with the WMAP coefficients $ \bbR_n{}^{\rm wmap} $ (also plotted).
Note that the constant model mode coefficients are large for the constant offset $n=0$ and for $n=3,4,5$ reflecting the periodicity of the acoustic peak structure (for $l_\textrm{max}=500$), that is, corresponding to the $\bar q_p\bar q_r\bar q_s$ polynomial products
with $prs = \{000\}, \{002\}, \{111\}, \{012\}$
(also with related harmonics at lower amplitude with $n=9,10,11$).
The mode decomposition estimator (\ref{eq:estimatorsum}) yields the quantitative
constraint
\begin{eqnarray}
\label{eq:fnlconst}
F_\textrm {NL}^{\rm const} = \frac{1}{N}\sum_n \baR_n{}^{\rm const} \;\bbR_n{}^{\rm wmap} = 35.1 \pm 27.4 \,,\qquad \quad(f_\textrm{NL}^{\rm const}= 149.4\pm 116.8)\,,
\end{eqnarray}
where $ F_\textrm {NL}$ is the bispectrum parameter normalised relative to the local model (\ref{eq:newfnl}),
while the lower case $f_\textrm{NL}$ constraint
employs the more model-dependent normalisation using the primordial shape function $S(k,k,k)=1$.
The variance here was determined using 2000 Gaussian simulations in the same WMAP-realistic context.
It is clear from this result that there is no evidence---given
the present precision---for a significant constant primordial non-Gaussian signal.
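In outline, once the theoretical coefficients $\baR_n$ and the observed $\bbR_n$ are in hand, the estimator (\ref{eq:estimatorsum}) reduces to a normalised dot product in mode space. The following schematic Python fragment (with placeholder coefficient values, not the actual WMAP numbers) illustrates the arithmetic:

```python
import numpy as np

def modal_fnl(alpha_theory, beta_data, N):
    """F_NL = (1/N) * sum_n alpha_n beta_n: the mode-space dot product
    of the theoretical and observed orthonormal coefficients."""
    return float(np.dot(alpha_theory, beta_data)) / N

# toy consistency check: if the data coefficients are exactly F * alpha
# and the normalisation is N = sum_n alpha_n^2, the estimator returns F
alpha = np.array([2.0, -1.0, 0.5])   # placeholder theory coefficients
F_true = 35.0
beta = F_true * alpha                # idealised noise-free data
assert abs(modal_fnl(alpha, beta, N=float(np.sum(alpha**2))) - F_true) < 1e-9
```

In the real analysis the $\bbR_n$ carry noise, so the estimate scatters about the true value with the variance quoted from the Gaussian ensemble.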
\begin{figure}[t]
\centering
\includegraphics[width=.75\linewidth,height=5cm]{figures/ConstModeComp_sm.jpg}
\caption[]{\small Comparison between constant model and
recovered mode coefficients for the WMAP5 data. Note that the constant model incorporates features
entirely due to the transfer functions (the acoustic peaks seen in modes $n=3,4,5$), which are indicators of
its primordial origin. }
\label{fig:constalpha}
\end{figure}
\subsection{Local model}
The canonical local shape, which we have already introduced in (\ref{eq:localS}), covers a wide range of models where the non-Gaussianity is produced by local interactions. These models have their peak signal in ``squeezed" states where one $k_i$ is much smaller than the other two due to non-Gaussianity typically being produced on superhorizon scales. Single-field slow-roll inflation is dominated by the local shape, though $f_\textrm {NL}^\textrm{loc}$ is tiny \cite{Maldacena2003,AcquavivaetAl2003}. The production of large
non-Gaussianity during multiple field inflation \cite{RigopoulosShellardvanTent2006A,SeeryLidsey2005,VernizziWands2006} shows much greater promise of producing an observable signal through conversion of isocurvature into adiabatic perturbations.
Large $f_\textrm {NL}^\textrm{loc}$ can also be produced in curvaton models \citep{LindeMukhanov2006, LythUngarelliWands2003, BartoloMatarreseRiotto2004}, at the end of inflation from reheating mechanisms \cite{EnqvistetAl2005A} and also in more exotic scenarios such as (non-local) $p$-adic inflation \cite{BarnabyCline2008} and the ekpyrotic scenario \cite{LehnersRenauxPetel}.
For more comprehensive references and recent examples please refer to the review, ref.~\cite{Chen2010}.
The distinct mode decomposition of the local model has already been illustrated in fig.~\ref{fig:reconalpha}, together with the 3D CMB bispectrum in fig.~\ref{fig:3drecon} because they were used to validate this estimator methodology in section~\ref{sec:validation}. There are a number of differences from the
constant model expansion reflecting the dominant signal along the edges of the tetrahedron, thus favouring higher order
polynomials able to describe this localised signal. That is, as well as the periodic acoustic peak signal seen
in the constant model ($n=3,4,5$), the spectrum is otherwise
dominated by pure modes $n=9,\,15,\,26$ with $prs = \{003\}, \{004\}, \{005\}$. The expansion is not as rapidly convergent but
the eigenmode partial sum achieves a 94\% correlation by $n=31$.
\begin{figure}[th]
\centering
\includegraphics[width=.75\linewidth, height = 5cm]{figures/LocalModeComp_sm.jpg}
\caption[]{\small Comparison between local model expansion coefficients and
recovered modes for the WMAP5 data. Note the relatively slow convergence
of the local model and the apparent visual correlation of modes. }
\label{fig:localalpha}
\end{figure}
To aid comparison with the recovered WMAP bispectrum, we illustrate both in fig.~\ref{fig:localalpha}. There appears to be
some correlation between the two sets of data points which is reflected in the result from the mode estimator
\begin{eqnarray}
\label{eq:fnllocal}
F_\textrm {NL}^\textrm{loc} = 54.4\pm 29.4\qquad (f_\textrm {NL}^\textrm{loc}=54.4\pm 29.4)\,.
\end{eqnarray}
This nearly $2\sigma$ result is consistent with that obtained by other groups in Table~\ref{tab:review}. In particular,
it can be compared to the raw result of $f_\textrm {NL}^\textrm{loc}= 59\pm 21$ obtained by the WMAP7 analysis, before marginalising over
foregrounds. The similarities between the recovered WMAP bispectrum and the local bispectrum can be observed
by comparing the 3D bispectrum in fig.~\ref{fig:3dreconWMAP5} and fig.~\ref{fig:3drecon} respectively.
There are obvious similarities around the edges of the tetrahedron where much of the local signal resides. However, we note
an important precautionary point. Our analysis of the effects of the noise and mask before subtraction from simulations
in a WMAP-realistic context indicates that these also have a very nearly `local' shape. We believe the same is true
for likely foreground contaminants, which also contribute to the cubic term in the estimator (\ref{eq:approxestimator});
this is indicated also by the significant effect of marginalisation over foregrounds on $f_\textrm{NL}$ in the WMAP7 analysis. Our
late-time analysis is obviously susceptible to all sources of non-Gaussianity, unlike the filtered primordial estimator searching
for just a couple of modes. While such contaminants appear to have been largely removed from our analysis by the
linear term (given their anisotropic nature and local shape), the local constraint (\ref{eq:fnllocal}) is clearly more susceptible
to systematic effects. Further detailed investigations of these effects and the shape characterisation of noise, mask and contaminants
using our mode expansions is the subject of a follow-up publication \cite{inprep}.
\subsection{Equilateral models}
\begin{figure}[b]
\centering
\includegraphics[width=0.3\linewidth,height=0.25\linewidth]{figures/DBI.jpg}
\includegraphics[width=0.3\linewidth,height=0.25\linewidth]{figures/Ghost_sm.jpg}
\includegraphics[width=0.3\linewidth,height=0.25\linewidth]{figures/Sing1.jpg}
\caption[The shape functions of models in the equilateral class.]{\small The shape functions of models in the equilateral class:
from left to right, DBI inflation, ghost inflation and the remaining distinct single-field inflation model.}
\label{fig:equipics}
\end{figure}
\begin{figure}[b]
\centering
\includegraphics[width=.82\linewidth, height = 5cm]{figures/Equi_Modes.jpg}
\includegraphics[width=.75\linewidth, height = 5cm]{figures/EquiModeComp_sm.jpg}
\caption[]{\small Equilateral-family expansion coefficients $\baR_n$ compared between models (top panel) and
with the WMAP5 results (lower panel).}
\label{fig:equimodels}
\end{figure}
Bispectra dominated by contributions from nearly equilateral triangle configurations, $k_1\approx k_2\approx k_3$
are produced through the amplification of nonlinear effects around the time modes exit the horizon, which can be
achieved by modifying kinetic terms, as in the DBI model \citep{AlishahihaSilversteinTong2004}, or by explicitly adding higher derivative terms, such as in K-inflation \citep[see, for example,][]{ChenetAl2007}. For DBI inflation, this leads to non-Gaussianity being produced with a shape function of the form \citep{Creminelli2003, AlishahihaSilversteinTong2004}
\begin{eqnarray}\label{eq:dbiS}
S(k_1,k_2,k_3) = \frac{1}{k_1 k_2 k_3 (k_1+k_2+k_3)^2} \[\sum_i k_i^5 + \sum_{i \neq j}\(2 k_i^4 k_j - 3 k_i^3 k_j^2\)
+ \sum_{i \neq j \neq l}\(k_i^3 k_j k_l - 4 k_i^2 k_j^2 k_l\)\].
\end{eqnarray}
This shape is illustrated in fig.~\ref{fig:equipics}, together with ghost inflation
\cite{ArkaniHamedetAl2004} and a third distinct single field equilateral shape found in a general
analysis of such models \cite{ChenetAl2007}. Note that the generic equilateral shapes are not separable,
but have been approximated to date using a separable ansatz commonly called the `equilateral model' \cite{CreminellietAl2006}:
\begin{align} \label{eq:equi}
S^{equi}(k_1,k_2,k_3) = \frac{1}{N} \frac{(k_2+k_3 - k_1)(k_3+k_1-k_2)(k_1+k_2-k_3)}{k_1k_2k_3}\,.
\end{align}
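As a cross-check of the qualitative statements above, the shapes (\ref{eq:dbiS}) and (\ref{eq:equi}) are easy to evaluate directly. The sketch below is illustrative only: the normalisation factors are set to unity, and the index sums in (\ref{eq:dbiS}) are read as sums over ordered index tuples (an assumption about convention). It verifies that both shapes are suppressed in the squeezed limit, and that the separable ansatz vanishes on flattened configurations:

```python
from itertools import permutations

def S_dbi(k1, k2, k3):
    # DBI shape (normalisation set to 1; ordered-tuple index convention assumed)
    k = (k1, k2, k3)
    num = sum(ki**5 for ki in k)
    num += sum(2*k[i]**4*k[j] - 3*k[i]**3*k[j]**2
               for i in range(3) for j in range(3) if i != j)
    num += sum(k[i]**3*k[j]*k[l] - 4*k[i]**2*k[j]**2*k[l]
               for i, j, l in permutations(range(3)))
    return num / (k1*k2*k3*(k1 + k2 + k3)**2)

def S_equi(k1, k2, k3):
    # separable equilateral ansatz with N = 1
    return (k2+k3-k1)*(k3+k1-k2)*(k1+k2-k3) / (k1*k2*k3)

# squeezed limit: equilateral-type shapes are strongly suppressed
assert abs(S_dbi(1, 1, 1e-3)) < 1e-2 * abs(S_dbi(1, 1, 1))
assert abs(S_equi(1, 1, 1e-3)) < 1e-2 * abs(S_equi(1, 1, 1))
# flattened configuration k1 = k2 + k3: the ansatz vanishes exactly
assert S_equi(2, 1, 1) == 0.0
```

This squeezed-limit suppression is precisely what distinguishes the equilateral family from the local shape, whose signal peaks there.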
Despite the apparent visual differences between these primordial shapes, particularly near the edges of the tetrahedral domain, the resulting CMB bispectra share a 95\% or greater correlation \citep[see][]{Fergusson:2008ra}. The CMB mode decomposition for these models is illustrated in fig.~\ref{fig:equimodels}, showing very similar behaviour to the constant model
also dominated by the acoustic peak coefficients $n=3,4,5$. The resulting constraints from the modal estimator are:
\begin{eqnarray}
\label{eq:fnlequi}
&\hbox{Equilateral:}~~ & F_\textrm {NL} = 25.1\pm 26.4 \qquad (f_\textrm{NL}=143.5\pm 151.2)\,,\\
&\hbox{DBI:}~~~~~~~~~ & F_\textrm {NL} = 26.7 \pm 26.5\qquad (f_\textrm{NL}=146.0\pm 144.5)\,,\\
&\hbox{Ghost:}~~~~~~ & F_\textrm {NL} = 22.0 \pm26.3\qquad (f_\textrm{NL}=138.7\pm 165.4)\,,\\
&\hbox{Single:}~~~~~~ & F_\textrm {NL} = 28.8 \pm 26.6\qquad (f_\textrm{NL}=142.1\pm 131.3)\,.
\end{eqnarray}
Here, the local $ F_\textrm {NL}$ normalisation (\ref{eq:newfnl}) yields much more consistent variances between
models within the equilateral family than $f_\textrm{NL}$ (as well as values comparable to local and other models). Note
that there are up to 30\% variations between the central values of these $ F_\textrm {NL}$ constraints despite the strong correlations
between these bispectra; this is attributable to the different behaviour near the edges and faces where much of the apparent
WMAP signal is localised. These results are consistent with the evolving constraints obtained in the literature
to date, as shown in Table~\ref{tab:review}.
Finally, we consider a separable `orthogonal' shape $S^{\rm orthog}$ which is constructed from a linear combination of the constant and equilateral shape functions, $S^{\rm orthog} \propto S^{\rm equil} -2/3$ (see \cite{MeerburgVanDerSchaarCorasaniti2009, SmithetAl2009}). The constraint from the mode estimator (\ref{eq:estimatorsum}) then becomes
\begin{eqnarray}
\label{eq:fnlortho}
F_\textrm {NL}^{\rm ortho} = -16.3 \pm27.3 \,,\qquad (f_\textrm{NL}^{\rm ortho}=-79.4\pm 133.3)\,,
\end{eqnarray}
which is a less negative result than the latest WMAP7 limit $f_\textrm{NL}^{\rm ortho}= -199 \pm 104$, but remains consistent with it.
\subsection{Flattened model}\label{sec:flattened}
\begin{figure}[b]
\centering
\includegraphics[width=0.49\linewidth]{figures/Ini2_sm}
\includegraphics[width=.49\linewidth]{figures/Flat3D.jpg}
\caption[Flattened model]{\small Flattened model: smoothed primordial shape function (left) and the corresponding three-dimensional
CMB bispectrum (right). }
\label{fig:flat}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=.75\linewidth, height = 5cm]{figures/FlatModeComp_sm.jpg}
\caption[]{\small Flat model mode coefficients compared to WMAP5 mode coefficients. }
\label{fig:flatalpha}
\end{figure}
It is possible to consider inflationary vacuum states which are more general than the Bunch-Davies vacuum, such as an excited Gaussian (and Hadamard) state \citep[][see also discussions in \citealt{ChenetAl2007,MeerburgVanDerSchaarCorasaniti2009}]{HolmanTolley2008}. Observations of non-Gaussianity in this case might provide insight into trans-Planckian physics. The proposed non-separable shape for the bispectrum is
\begin{eqnarray}
\label{eq:flat}
S^{\rm flat}(k_1,k_2,k_3) \propto 6\frac{k_1^2+k_2^2 -k_3^2}{k_2k_3} +\mbox{2 perms}
+ 2\frac{k_1^2+k_2^2+k_3^2}{(k_1+k_2-k_3)^2(k_2+k_3-k_1)^2(k_3+k_1-k_2)^2}\,.
\end{eqnarray}
The bispectrum contribution from early times is dominated by flattened triangles, with e.g.\ $ k_3 \approx k_1+k_2$, and can be large for a small sound speed $c_s\ll 1$. Unfortunately, as the divergent analytic approximation breaks down at the boundary of the allowed tetrahedron, some form of cut-off must be imposed, as shown for the smoothed shape in fig.~\ref{fig:flat}, where an edge truncation has been imposed together with a mild Gaussian filter. This leads to a degree of predictive uncertainty, but
the regularisation scheme ensures the primary signal is well-localised on the tetrahedral faces and is quite distinct
from other separable shapes investigated to date (refer to ref.~\cite{Fergusson:2008ra} for the specific details).
The resulting CMB spectrum reflects this behaviour with the dominant signal residing near the tetrahedral faces as shown in fig.~\ref{fig:flat}.
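To make the edge divergence concrete, the following schematic evaluation of the shape (\ref{eq:flat}) is illustrative only: the overall normalisation is dropped, and the simple \texttt{eps} regulariser stands in for the edge truncation and Gaussian smoothing actually used; it is not part of the analysis pipeline.

```python
def S_flat(k1, k2, k3, eps=0.0):
    # flattened (non-Bunch-Davies) shape sketch; overall normalisation
    # dropped, and eps is an illustrative regulariser for the divergent
    # flattened boundary (the text uses edge truncation plus smoothing)
    perms = [(k1, k2, k3), (k2, k3, k1), (k3, k1, k2)]
    s = sum(6*(a**2 + b**2 - c**2)/(b*c) for a, b, c in perms)
    d = ((k1+k2-k3)**2 + eps)*((k2+k3-k1)**2 + eps)*((k3+k1-k2)**2 + eps)
    return s + 2*(k1**2 + k2**2 + k3**2)/d

# the signal grows without bound as the flattened boundary k3 -> k1 + k2
# is approached, so some cut-off is unavoidable
assert abs(S_flat(1, 1, 1.99)) > abs(S_flat(1, 1, 1.9)) > abs(S_flat(1, 1, 1.5))
```

The rapid growth towards $k_3 \to k_1 + k_2$ illustrates why the choice of cut-off introduces the predictive uncertainty noted above.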
Figure~\ref{fig:flatalpha} shows the flat model mode coefficients, which like the local model are only slowly convergent. Comparing
the flat model with the coefficients obtained from WMAP, the mode estimator
yields the new constraint:
\begin{eqnarray}
\label{eq:fnlflat}
F_\textrm {NL} = 35.4\pm 29.2\qquad (f_\textrm{NL}=18.1\pm 14.9)\,.
\end{eqnarray}
Despite the apparent visual similarity between the flat and WMAP CMB bispectra (figs~\ref{fig:flat} and \ref{fig:3dreconWMAP5}) the
present analysis does not reveal a significant correlation.
\subsection{Warm model}
Finally, we consider warm inflation scenarios, that is, nearly scale-invariant models in which dissipative effects play a dynamical role, because these also may produce significant non-Gaussianity \cite{MossXiong2007} (for a review see \cite{BereraMossRamos2009}). Contributions are again dominated by squeezed configurations, but with a different and more complex shape possessing a sign flip as the corner is approached, making the warm and local shapes essentially orthogonal with only a 33\% correlation (see ref.~\cite{Fergusson:2008ra} where the shape function and
CMB bispectra are discussed). As with the flat model, uncertainties remain as to the validity of the approximations made as the
corners and edges of the tetrapyd are approached. Comparison of the predicted warm bispectrum coefficients $\bbR_n$ with the
WMAP data through the modal estimator (\ref{eq:estimatorsum}) yields the constraint
\begin{eqnarray}
\label{eq:fnlwarm}
F_\textrm {NL}^{\rm warm} = 10.3\pm 27.2\qquad(f_\textrm{NL}^{\rm warm}=47.4\pm 125.4)\,.
\end{eqnarray}
A previous WMAP3 warm inflation analysis obtained a lower central value $f_\textrm{NL}^{\rm warm}=-169\pm 103$ \cite{MossGraham2007}
which is marginally consistent with (\ref{eq:fnlwarm}) at the 95\% confidence level. Probably the most significant difference is
that the previous analysis did not include a linear term in the estimator (\ref{eq:approxestimator}) to account for noise and masking effects; these corrections are significant here as for the edge-dominated local model.
\section{Implications for non-scaling feature models}\label{sec:feature}
\label{sec:nonscaling}
\begin{figure}[b]
\centering
\includegraphics[width=.75\linewidth, height = 5cm]{figures/Feat400_Modes_sm.jpg}
\includegraphics[width=.75\linewidth, height = 5cm]{figures/Feat_0_Modes_sm.jpg}
\caption[]{\small Feature model coefficients $\baR_n$ plotted in two-dimensions by mode number $n$ and as function of
phase $\phi$ with $l^*=400$ (top panel) and as a function of scale $l^*$ with $\phi=0$ (lower panel). Note how the
characteristic $n=3,4,5$ primordial acoustic peak signature is affected (compare with fig.~\ref{fig:constalpha}).}
\label{fig:featurephase}
\label{fig:featurescale}
\end{figure}
It is possible to produce non-Gaussian signals which are not scale-invariant, such as models with
a distinct feature in the inflaton potential. These usually take the form of either a step in the potential (models which have
a long history, see e.g.\ ref.~\citep{ChenEastherLim2008}) or those with a small oscillation superimposed onto
the potential (which have become more popular recently, see e.g.\ ref.~\citep{BeanetAl2008B}). Two analytic forms for the
resulting three-point functions have been presented in ref.~\cite{ChenEastherLim2008}, with the expression we will
analyse here taking the form
\begin{align}
\label{eq:feature}
S^{feat}(k_1,k_2,k_3) = \frac{1}{N} \sin\(2\pi\frac{k_1+k_2+k_3}{3k^*} + \Phi\)\,,
\end{align}
where $k^*$ is associated with the physical scale of the feature in question and $\Phi$ is an arbitrary phase factor.
The alternative form with a logarithmic momentum dependence in the $\sin$ argument can be shown to be closely
correlated with the simpler form (\ref{eq:feature}), certainly on the present domain of study $l_\textrm{max} =500$. Previously,
we studied the shape and CMB bispectrum for a particular feature model (with $k^* \approx l^*/\tau_0$ and $l^*\approx 400$),
showing that its non-scaling behaviour made it essentially independent of all the other shapes \cite{Fergusson:2008ra}.
Such models can have starkly contrasting CMB bispectra as illustrated in fig.~\ref{fig:featurefit}, disrupting
the usual pattern of acoustic peaks which switch from correlation to anticorrelation on multipole scales $l^*$.
Clearly, scale dependent feature models form a distinct category of bispectra beyond the equilateral, local, warm and flat
families, so searches within WMAP and future data sets are well-motivated.
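For concreteness, the oscillatory shape (\ref{eq:feature}) above is simple to evaluate numerically. The following sketch (not the pipeline code used in this paper, and with the normalisation $N$ set to unity) also makes explicit why models separated in phase by $\pi$ are merely anticorrelated:

```python
import numpy as np

def feature_shape(k1, k2, k3, k_star, phi, N=1.0):
    """Oscillatory feature shape (eq:feature):
    S(k1,k2,k3) = (1/N) sin(2*pi*(k1+k2+k3)/(3*k_star) + phi)."""
    return np.sin(2.0 * np.pi * (k1 + k2 + k3) / (3.0 * k_star) + phi) / N

# Along the equilateral slice k1 = k2 = k3 = k (illustrative wavenumber range):
k = np.linspace(1e-3, 0.1, 200)
s0 = feature_shape(k, k, k, k_star=0.02, phi=0.0)
s_pi = feature_shape(k, k, k, k_star=0.02, phi=np.pi)
# sin(x + pi) = -sin(x): the two shapes differ only by an overall sign
```

Since $\sin(x+\pi)=-\sin(x)$, phases separated by $\pi$ give bispectra of opposite sign, which is why the phase survey below need only distinguish models up to anticorrelation.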
For the present WMAP5 analysis, we have studied the primordial feature shape (\ref{eq:feature}) over a wide range of scales
for which the CMB bispectra that we obtained could be accurately described by our $n=31$ eigenmodes, that is,
for which we could obtain $>90\%$ convergence to $b_{l_1l_2l_3}^{\rm feat}$ for the partial sum (\ref{eq:cmbestmodes}). This
restricted the scale parameters in (\ref{eq:feature}) to the range $l^*\ge 150$, so we studied values $l^* = 150, \,200,\,
250,\,300,\,400,\,500,\,600,\,700$. For larger values $l^*>700$ the models became highly correlated with the constant
model given that $l_\textrm{max}=500$. No such restriction applied to the phase, which was studied for each $l^*$ over the full
domain $0\le \Phi <2\pi$ in $\pi/8$ steps (noting that models separated by $\pi$ are merely anticorrelated). This
entailed considerable computational effort calculating 64 distinct CMB bispectra at high accuracy using the robust methods
previously described elsewhere \cite{FergussonShellard2007}. The mode coefficients for the $l^*=400$ model are illustrated for the
different phases in fig.~\ref{fig:featurephase}, demonstrating how the characteristic acoustic peak signal in $n=3,4,5$ can
be modified (compare the constant model fig.~\ref{fig:constalpha}). The strong dependence of the mode coefficients on the
different multipole scales $l^*$ (at fixed phase $\Phi = 0$) is shown in fig.~\ref{fig:featurescale}.
\begin{table}[t]
\begin{tabular}{| l | c | c | c | c | c | c | c | c |}
\hline
\backslashbox{\bf Phase}{\bf Scale} & 150 & 200 & 250 & 300 & 400 & 500 & 600 & 700\\
\hline
$0$ & $ 57 \; (30)$ & $-52 \; (33)$ & $-25 \; (32)$ & $1 \; (30)$ & $1 \; (27)$ & $8 \; (26)$ & $18 \; (25)$ & $23 \; (25) $ \\
$\pi/8$ & $ 67 \; (36)$ & $-26 \; (27)$ & $-36 \; (30)$ & $-6 \; (25)$ & $-4 \; (26)$ & $-2 \; (27)$ & $12 \; (26)$ & $20 \; (25) $ \\
$\pi/4$ & $ 68 \; (42)$ & $-10 \; (29)$ & $-43 \; (30)$ & $-11 \; (21)$ & $-7 \; (25)$ & $-10 \; (27)$ & $-1 \; (28)$ & $13 \; (27) $ \\
$3\pi/8$ & $ 49 \; (46)$ & $7 \; (34)$ & $-42 \; (32)$ & $-18 \; (24)$ & $-9 \; (25)$ & $-14 \; (26)$ & $-13 \; (28)$ & $-2 \; (28) $ \\
$\pi/2$ & $ 15 \; (46)$ & $32 \; (41)$ & $-30 \; (35)$ & $-32 \; (34)$ & $-10 \; (25)$ & $-16 \; (25)$ & $-18 \; (27)$ & $-14 \; (28) $ \\
$5\pi/8$ & $ -19 \; (42)$ & $63 \; (46)$ & $-15 \; (35)$ & $-38 \; (43)$ & $-11 \; (25)$ & $-16 \; (25)$ & $-20 \; (26)$ & $-20 \; (27) $ \\
$3\pi/4$ & $ -39 \; (35)$ & $87 \; (48)$ & $0 \; (35)$ & $-25 \; (41)$ & $-11 \; (26)$ & $-15 \; (25)$ & $-21 \; (25)$ & $-23 \; (26) $ \\
$7\pi/8$ & $ -48 \; (30)$ & $81 \; (43)$ & $13 \; (34)$ & $-11 \; (35)$ & $-7 \; (27)$ & $-13 \; (25)$ & $-20 \; (25)$ & $-23 \; (25) $ \\
\hline
\end{tabular}
\caption{Limits for a selection of feature models in the form $ F_\textrm {NL}$ (StDev).}
\label{tab:fnllim2}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=.75\linewidth]{figures/Feature_sm.jpg}
\caption[]{\small Significance of feature model bispectra $ F_\textrm {NL}/\Delta F_\textrm {NL}$ using WMAP data with the modal
estimator (\ref{eq:estimatorsum}). This is plotted as a function of the multipole scale $l^*$ and the
phase of feature models given by (\ref{eq:feature}).}
\label{fig:featuremodels}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=.75\linewidth, height = 5cm]{figures/FeatModeComp_sm.jpg}
\caption[]{\small Best fit feature model coefficients ($l^*=150$, $\phi=0$) compared to WMAP5 mode coefficients. }
\label{fig:featalpha}
\end{figure}
\begin{figure}[th]
\centering
\includegraphics[width=.65\linewidth]{figures/Feat3D_2.jpg}
\caption[Recovered bispectrum]{\small Three-dimensional CMB bispectrum calculated for the
best-fit feature model ($l^*=150$, $\phi=0$).
Note how the scale-dependence of the
central peaks mimics at some level that observed in the WMAP data. }
\label{fig:featurefit}
\end{figure}
Results from the modal estimator for all the feature models investigated are provided in Table~\ref{tab:fnllim2}.
Note that the constraints are given in terms of the normalised quantity $ F_\textrm {NL}$ defined in (\ref{eq:newfnl}),
since there is no simple generalisation of the primordial normalisation used for $f_\textrm{NL}$ without scale-invariance.
As before, the variances (given in parentheses) are those obtained for the same set of models
from 1000 Gaussian simulations.
The results are illustrated graphically in fig.~\ref{fig:featuremodels} showing the relative significance
of the central $ F_\textrm {NL}$ values relative to the standard deviation.
The result with the highest significance is that for the feature model with $l^* = 150$ and zero phase
which achieves a $1.9 \sigma$ significance. The 3D bispectrum for this model is shown in fig.~\ref{fig:featurefit}
demonstrating how such models can reproduce the apparent scale-dependence observed in the WMAP
bispectrum (see fig.~\ref{fig:3dreconWMAP5}).
However, we note that this model is
close to the resolution limit set by the eigenmodes deployed (like the other cases of higher significance).
The results over the full domain of feature
models investigated remain consistent with the Gaussian hypothesis with no significant detection found on the
WMAP domain for $l\le 500$.
\section{Towards a measure of the total integrated bispectrum $ \bar F_\textrm {NL}$}\label{sec:totalbisp}
\label{sec:integratedfnl}
Our focus in this paper has been on recovering the observed bispectrum $b_{l_1l_2l_3}$ which contains more
information than $f_\textrm {NL}^\textrm{th}$ constraints for particular models. We can also consider squaring this quantity and
summing over all multipoles to obtain a total integrated nonlinearity parameter $\bar F_{\rm NL}$ defined by
\begin{eqnarray}\label{eq:totalbispectrum}
\bar F_{\rm NL}^2 ~\equiv~ \frac{1}{N_{\rm loc}^2} \sum_{l_i m_i}\frac{{B^{l_1 l_2 l_3}_{m_1 m_2 m_3}}^2}{C_{l_1} C_{l_2} C_{l_3}}
~ = ~\frac{1}{N_{\rm loc}^2} \sum_{l_i}\frac{h_{l_1l_2l_3}^2b_{l_1l_2l_3}^2 }{C_{l_1} C_{l_2} C_{l_3}}\,.
\end{eqnarray}
Substituting our mode decomposition
(\ref{eq:cmborthmodes}) into the expression for the integrated bispectrum we can find the leading order contribution
from the three-point correlator to be\footnote{As an unambiguous signature of a significant bispectrum we should compare
$ \bar F_\textrm {NL}$ with the skewness $\gamma_1$ which is
given by \cite{Regan:2010cn}
\begin{eqnarray}\label{eq:skewness}
\gamma_1 \equiv\left \langle \left(\frac{\Delta T}{T}(\hat {\bf n})\right)^3\right\rangle = \frac{1}{4\pi} \sum_{l_i} h_{l_1l_2l_3} ^2b_{l_1l_2l_3}\,.
\end{eqnarray}
In principle, the skewness can conspire to vanish even with a non-zero bispectrum $b_{l_1l_2l_3}$ because it is not
positive definite, in contrast to the bispectrum contribution to $ \bar F_\textrm {NL}$.}
\begin{eqnarray}\label{eq:totalbispectrumsum}
\left . \bar F_{\rm NL}^2\right |_{\rm 3pt} &\equiv& \frac{1}{N_{\rm loc}^2} \sum_{l_i}\frac{h_{l_1l_2l_3}^2b_{l_1l_2l_3}^2 }{C_{l_1} C_{l_2} C_{l_3}}
= \frac{1}{N_{\rm loc}^2} \sum_{l_i} w_s (l_1,l_2,l_3) \left(\sum_{n} \bbR_n\barR_n \right)\left(\sum_p \bbR_p\,\barR_p\right) \nonumber\\
&\approx& \frac{1}{N_{\rm loc}^2} \sum_{n}\sum_{p} \bbR_n \bbR_p \,\langle \barR_n,\,\barR_p\rangle ~\approx ~ \frac{\sum_{n} {\bbR_n}{}^2}{\sum_{n} {\baR_n}{}^2}\,.
\end{eqnarray}
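In practice the final ratio above is immediate once the mode coefficients are in hand. A minimal sketch, where the arrays \texttt{beta} (standing for the observed coefficients $\bbR_n$) and \texttt{alpha\_local} (the local-model coefficients fixing $N_{\rm loc}$) are hypothetical inputs:

```python
import numpy as np

def fnl_bar_3pt(beta, alpha_local):
    """Leading-order integrated bispectrum (eq:totalbispectrumsum):
    F_bar_NL^2 |_3pt ~ sum_n beta_n^2 / sum_n alpha_n^2,
    with the local-model coefficients alpha_local setting the normalisation."""
    beta = np.asarray(beta, dtype=float)
    alpha_local = np.asarray(alpha_local, dtype=float)
    return np.sum(beta ** 2) / np.sum(alpha_local ** 2)
```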
Unfortunately, however, the expectation value $\langle \bar F_{\rm NL}^2\rangle$ contains much more than just the
contributions from the three-point correlator. There is necessarily a contribution from products of two-point correlators, reflecting
both cosmic variance and instrument noise, and there are also higher-order contributions, which are derived in the Appendix.
The leading order terms are found to be
\begin{eqnarray}
\label{eq:fnlsummary}
\bar F_\textrm {NL}^2 \approx \frac{1}{N_{\rm loc}^2} \(6n_\textrm{max} + \sum_n^{n_\textrm{max}} \left [ F_\textrm {NL}^2\baR_n{}^2 + \langle\bbR_n{}^2\rangle_6 + ...\right]\)\,,
\end{eqnarray}
where the first term represents the underlying variance, the second is the integrated bispectrum we seek,
and the third term arises from contributions from the four-point correlator or trispectrum $T^{l_1\, l_2}_{l_3\,l_4}(L)$.
The non-Gaussian corrections from the bispectrum and trispectrum (and possibly higher correlators) to $ \bar F_\textrm {NL}^2$ are not easily distinguishable.
Nevertheless, we know the Gaussian expectation so it can still be used to measure any deviation from Gaussianity, by determining the amplitude and nature of any differences. To prove the utility of this statistic we have applied it to both Gaussian and local simulations. As we have shown in a previous paper \cite{Regan:2010cn} our local map simulations contain a negligible trispectrum and so we can use the approximation
\begin{eqnarray}
\label{eq:fnlshort}
\bar F_\textrm {NL}^2 \approx \<{ \bar F_\textrm {NL}^G}{}^2\> + \frac{1}{N_{\rm loc}^2} F_\textrm {NL}^2 \sum_{n} {\baR_n}{}^2\,,
\end{eqnarray}
where we have defined $\<{ \bar F_\textrm {NL}^G}{}^2\>$ to be the value recovered from Gaussian simulations. As $N^2=\sum_n^{n_\textrm{max}}\baR_n{}^2$ and we normalise all models to have $N=N^{local}$, we can then attempt to recover $ \bar F_\textrm {NL}$ without assuming any particular form for $\baR_n{}$. The recovered $ \bar F_\textrm {NL}$ is then given by the following formula
\begin{eqnarray}
\label{eq:fnlrec}
F_\textrm {NL}^{rec} = \sqrt{ \bar F_\textrm {NL}^2 - \<{ \bar F_\textrm {NL}^G}{}^2\>}\,.
\end{eqnarray}
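The subtraction in (\ref{eq:fnlrec}) can be sketched as follows; treating maps that fall below the Gaussian expectation (the `Null' entries of Table~\ref{tab:FnlRec}) as returning zero is our illustrative convention, not something prescribed by the estimator itself:

```python
import math

def fnl_recovered(fnl_bar_sq, fnl_bar_sq_gaussian):
    """Recovered F_NL (eq:fnlrec): subtract the Gaussian baseline
    <(F_NL^G)^2> (estimated from simulations) before the square root.
    A negative difference means the map is 'Null' (less non-Gaussian than
    a typical Gaussian realisation); we report 0 in that case."""
    diff = fnl_bar_sq - fnl_bar_sq_gaussian
    return math.sqrt(diff) if diff > 0.0 else 0.0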
We estimated $\<{ \bar F_\textrm {NL}^G}{}^2\>$ from 1000 Gaussian simulations and then calculated $ F_\textrm {NL}^{rec}$ from 100 simulations with various input $ F_\textrm {NL}$. The results are presented in table \ref{tab:FnlRec} and are plotted as cumulative sum of squared
mode coefficients in figure \ref{fig:localtotalFnl}.
The encouraging results show that the statistic is recovering the input $f_{NL}$ to a reasonable degree of accuracy, although with a slight tendency to under-estimation. The biggest surprise is that the error bars for this general bispectrum estimator
only increase by a factor of $~50\%$ from those when the particular local form is assumed explicitly. We note, however,
that (\ref{eq:fnlshort}) gives rise to a $\chi^2$-distribution, so we have to take care in assuming Gaussianity for small $n_\textrm{max}$.
Again, we will further explore the utility of such general modal statistics elsewhere \cite{inprep}.
\begin{figure}[t]
\centering
\includegraphics[width=.85\linewidth, height = 7.25cm]{figures/Fnl_Simulations2_sm.jpg}
\caption[]{\small
Cumulative sum of mode contributions to the total $ \bar F_\textrm {NL}^2$ (\ref{eq:totalbispectrumsum}) for the local $ F_\textrm {NL}=100$ (red) and $ F_\textrm {NL}=200$ (green)
map simulations compared with Gaussian maps (blue). The $1\sigma$ variance is shaded around the mean value
obtained from 100 simulations (1000 simulations for the Gaussian case).}
\label{fig:localtotalFnl}
\end{figure}
\begin{table}[h]
\begin{tabular}{| c | c | c | c |}
\hline
Input $f_{NL}$ & Mean & StDev & Null \\
\hline
50 & 55.86 & 48.53 & 36 \\
75 & 65.39 & 46.95 & 20 \\
100 & 95.28 & 41.61 & 8 \\
150 & 138.8 & 44.87 & 3 \\
200 & 187.39 & 41.55 & 0 \\
\hline
\end{tabular}
\caption{$ F_\textrm {NL}^{rec}$ as recovered from 100 simulated local maps. Null refers to the number of maps in which the recovered $ F_\textrm {NL}$ is less than the Gaussian expectation value.}
\label{tab:FnlRec}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=.85\linewidth, height = 7.25cm]{figures/WMAP_Fnl}
\caption[]{\small Cumulative sum of mode contributions to the total $ \bar F_\textrm {NL}$ (\ref{eq:totalbispectrumsum}) for the
WMAP data compared with Gaussian map simulations as in fig.~\ref{fig:localtotalFnl}.
}
\label{fig:WMAPtotalFnl}
\end{figure}
With the efficacy of the $ \bar F_\textrm {NL}$ statistic established we have also applied it to the WMAP5 data. This yields the unexpected result that $ F_\textrm {NL}$ obtained from the WMAP5 data is less than that which we we would expect from a typical Gaussian map by slightly over 1$\sigma$ (see the cumulative sum in fig. \ref{fig:WMAPtotalFnl}). This is somewhat surprising because one would expect a late-time
estimator to be susceptible to foregrounds or other contamination, but the deviation remains statistically insignificant.
In principle, this result could be due to a large negative trispectrum or higher order contribution (see Appendix). However,
neglecting this possibility, the result shown in fig.~\ref{fig:WMAPtotalFnl} indicates that there is no significant contribution
to the bispectrum from the first $31$ eigenmodes. This would constrain virtually all smooth scale invariant shapes, as well as
the feature models we have surveyed. The only remaining possibility for a bispectrum detection (at the present precision) would then be for oscillatory models with sufficiently high frequencies or bispectra with particularly sharp, or localised, features (i.e. those which
require $n>31$ for an accurate description). We have good evidence, therefore, for the null hypothesis that we live in
a Gaussian universe.
\section{Discussion and conclusions}\label{sec:conclusions}
We have implemented and validated separable mode expansions with a general late-time CMB bispectrum estimator, using it to
investigate a wide range of primordial models with WMAP 5-year data. Notable new constraints include those on non-scaling
feature models, trans-Planckian (flat) models and warm inflation. The results for nearly scale-invariant models are summarised
in Table \ref{tab:fnllim1}, demonstrating consistency with previous constraints on equilateral and local models. Note that
we adopt a nonlinearity parameter $ F_\textrm {NL}$ normalised to facilitate direct comparison between the local $f_\textrm{NL}$ and any other model.
We found no evidence for significant deviations from Gaussianity for any specific model (at 95\% confidence).
Feature models were surveyed over a wide range of parameters with periodicities above $l^*=150$ and over the
full domain of phase values. Again, no significant bispectrum detection was made, though given the nature of this
survey some models provide a better a posteriori fit to the data than others.
As we have emphasised throughout, more information can be extracted from the mode decomposition of the data than
a few $ F_\textrm {NL}$'s for specific models. Given that we have constructed a complete orthonormal basis $\barR_n$ we can use the
mode coefficients $\bbR_n$ to directly reconstruct the full CMB bispectrum using the partial sum (\ref{eq:cmborthmodes}).
We plotted the result for WMAP5 in fig.~\ref{fig:3dreconWMAP5} which, despite its low significance, revealed interesting qualitative features similar to the local model (\ref{fig:3drecon}), but without the periodicity expected from acoustic peaks. We discussed
a positive-definite measure for the total integrated bispectrum constructed from the mode coefficients $ \bar F_\textrm {NL}^2 = \sum_n \bbR_n{}^2$, which was used to recover $f_\textrm{NL}$ from map simulations in a model independent manner (though
with larger variance). For WMAP5 data the integrated $ \bar F_\textrm {NL}$ was found to be small and again consistent with
a Gaussian hypothesis.
\begin{table}[t]
\begin{tabular}{| l | c | c |}
\hline
{\bf Model} & $ F_\textrm {NL} $ & ($f_\textrm{NL}$)\\
\hline
{\bf Constant} & $35.1 \pm 27.4\; $ & $(149.4\pm 116.8)$\\
{\bf DBI} & $26.7 \pm 26.5\; $ & $(146.0\pm 144.5)$\\
{\bf Equilateral} & $25.1\pm 26.4 \; $ & $ (143.5\pm 151.2)$\\
{\bf Flat (Smoothed)} & $35.4\pm 29.2\; $ & $ (18.1\pm 14.9)$\\
{\bf Ghost} & $22.0 \pm26.3 \; $ & $ (138.7\pm 165.4)$\\
{\bf Local} & $54.4\pm 29.4$ & $(54.4\pm 29.4)$\\
{\bf Orthogonal} & $-16.3 \pm27.3 \; $ & $(-79.4\pm 133.3)$\\
{\bf Single} & $28.8 \pm 26.6\; $ & $ (142.1\pm 131.3)$\\
{\bf Warm} & $24.2 \pm 27.3\; $ & $ (94.7\pm 106.8)$\\
\hline
\end{tabular}
\caption{Limits for all known scale invariant models}
\label{tab:fnllim1}
\end{table}
Despite the absence of any convincing evidence for a statistically significant CMB bispectrum in the present analysis, many
avenues remain open for further investigation using the present methodology. The late-time modal estimator (\ref{eq:estimatorsum})
can identify any bispectrum whether generated at early times like inflation or sourced since decoupling by
cosmic strings, gravitational lensing, or second-order gravitational effects. Unlike the primordial estimator, the general mode expansion can also
be used to characterise noise and foregrounds, which need to be identified and subtracted through the linear term
in the estimator (\ref{eq:approxestimator}). The efficacy of this removal and other validation checks which may affect
a residual local signal will be published shortly \cite{inprep}.
Finally, we note again that these methods can be pressed much further with existing and future data, especially from Planck.
The anticipated Planck variance $\Delta f_\textrm{NL} \approx 5$ will substantially improve sensitivity to specific bispectrum shapes,
leaving significant discovery potential available in the near future.
We note also that these separable mode techniques have been adapted for general CMB trispectrum estimation, in principle,
making tractable the investigation of all planar primordial trispectra \cite{Regan:2010cn}. Analogous methods can also
be applied to modal bispectrum extraction for large-scale structure and in other contexts. For the time being, however,
this general bispectrum survey uncovers no significant evidence of non-Gaussianity which would undermine
the standard predictions of the simplest models of inflation.
\section{Acknowledgements}
We are very grateful for many informative discussions with Donough Regan, Xingang Chen, Anthony Challinor
and Alessandro Renzi. Simulations were performed on the COSMOS supercomputer (an Altix 4700) which is funded by
STFC, HEFCE and SGI. We are particularly grateful for computer support from Andrey Kaliazin.
JRF, ML and EPS were supported by STFC grant ST/F002998/1 and the
Centre for Theoretical Cosmology. EPS is grateful for the hospitality
of the Arnold Sommerfeld Centre and the Universe Excellence
Cluster in Munich.
\section{Appendix - Higher-order contributions to $ F_\textrm {NL}^2$}
Consider the expectation value of the quantity $\langle \bar F_\textrm {NL}^2\rangle=\sum_n\langle\bbR_n{}^2\rangle$ which we defined in (\ref{eq:totalbispectrum}) and discussed in section \ref{sec:integratedfnl}.
Our definition of the bispectrum mode coefficients $\bbR_n$ in (\ref{eq:mapintegral}) can be expanded using the
expressions in (\ref{eq:cmbestsep}) to the explicit form
\begin{eqnarray}
\label{eq:betadefn}
\bbR_n = \sum_{l_i}\frac {h_{l_1l_2l_3} \barR_n (l_1,l_2,l_3) }{v_{l_1} v_{l_2} v_{l_3}\sqrt{C_{l_1}C_{l_2}C_{l_3}} }\sum_{m_i} \( \begin{array}{ccc} l_1 & l_2 & l_3 \\ m_1 & m_2 & m_3 \end{array} \) a_{l_1 m_1}a_{l_2 m_2}a_{l_3 m_3}\,.
\end{eqnarray}
It is instructive at this point to repeat the derivation of the expectation value of the coefficient $\langle \bbR_n\rangle$ for an ensemble of
universes with both a given $ F_\textrm {NL}$ as in (\ref{eq:newfnl}) and a given theoretical bispectrum $b_{l_1l_2l_3}^{\rm th( F_\textrm {NL}=1)}$
(for further details see ref.~\cite{FergussonLiguoriShellard2009}). We describe the
theoretical bispectrum $b_{l_1l_2l_3}^{\rm th( F_\textrm {NL}=1)}$ by the
orthonormal mode expansion coefficients $\baR_n$ as in (\ref{eq:cmborthmodes}) or, equivalently, the separable modes $\baQ_n$ as in (\ref{eq:cmbestmodes}),
distinguishing it from the observed bispectrum $b_{l_1l_2l_3}$ described by $\bbR_n$ or $\bbQ_n$.
Substituting the separable mode expansion of the bispectrum (\ref{eq:cmbestmodes}) into the expression for the recovered coefficient
(\ref{eq:mapintegral}) we find
\begin{eqnarray}
\langle \bbQ_n\rangle &=& \int d^2{\bf \hat{n}} \, \langle\bar M_{\{p}({\bf \hat{n}})\bar M_r({\bf \hat{n}})\bar M_{s\}}({\bf \hat{n}})\rangle
~= ~\sum_{l_i,m_i} \frac{\bar q_{\{p}(l_1)\bar q_r(l_2) \bar q_{s\}}(l_3)}{{v_{l_1}v_{l_2}v_{l_3}}\sqrt{C_{l_1}C_{l_2}C_{l_3}}}\left( \mathcal{G}^{\,l_1 ~l_2~ l_3}_{m_1 m_2 m_3} \right)^2 \langle {b_{l_1l_2l_3}}\rangle \\
&=&\sum_{l_i,m_i} \frac{h_{l_1l_2l_3}^2 \barQ_n(l_1,l_2,l_3)}{{v_{l_1}v_{l_2}v_{l_3}}\sqrt{C_{l_1}C_{l_2}C_{l_3}}}
\sum _p F_\textrm {NL}\baQ_p \barQ_p(l_1,l_2,l_3) \frac{\sqrt{C_{l_1}C_{l_2}C_{l_3}}}{{v_{l_1}v_{l_2}v_{l_3}}}\\
&=& F_\textrm {NL} \sum _p \baQ_p \sum_{l_1l_2l_3}w_s\barQ_n\barQ_p ~= ~ F_\textrm {NL} \sum_p \bar\gamma_{np}\baQ_p\,.
\end{eqnarray}
>From (\ref{eq:RQrelation}) we can transform this into the orthonormal basis $\barR_n$ to find the simple result
\begin{eqnarray}
\langle \bbR_n\rangle = F_\textrm {NL} \baR_n\,.
\end{eqnarray}
Here we have ignored the linear term in (\ref{eq:mapintegral}) because its expectation value vanishes.
Now the expectation value of the square of this coefficient $\bbR_n$ necessarily involves the six-point function, because the
expression takes the form
\begin{eqnarray}
\label{eq:betasqr}
\langle\bbR_n{}^2\rangle &=& \sum \frac {h_{l_1l_2l_3}h_{l_4l_5l_6} \,\barR_n (l_1,l_2,l_3) \barR_n (l_4,l_5,l_6) }{v_{l_1} v_{l_2} v_{l_3}v_{l_4} v_{l_5} v_{l_6}\sqrt{C_{l_1}C_{l_2}C_{l_3}C_{l_4}C_{l_5}C_{l_6} } }~~ \times\cr
&& \qquad~\sum_{m_i} \( \begin{array}{ccc} l_1 & l_2 & l_3 \\ m_1 & m_2 & m_3 \end{array} \)
\( \begin{array}{ccc} l_4 & l_5 & l_6 \\ m_4 & m_5 & m_6 \end{array} \) \langle a_{l_1 m_1}a_{l_2 m_2}a_{l_3 m_3}
a_{l_4m_4}a_{l_5m_5}a_{l_6m_6}\rangle\,.
\end{eqnarray}
Here we note that we can include a cut-sky, as well as noise and beam effects, as we did previously in (\ref{eq:cutsky}) and (\ref{eq:noisebeam})
respectively.
The expectation value of the six-point function has a variety of non-vanishing contributions from combinations of lower order
correlators, which become (after summing over equivalent permutations):
\begin{align}
\langle a_{l_1 m_1}a_{l_2 m_2}a_{l_3 m_3} &a_{l_4m_4}a_{l_5m_5}a_{l_6m_6}\rangle = ~6\,
\delta_{l_1l_4} \delta_{l_2l_5} \delta_{l_3l_6} \delta_{m_1\,-m_4}\delta_{m_2\,-m_5} \delta_{m_3\,-m_6} \; C_{l_1} C_{l_2} C_{l_3}\cr
& +~9\,
\delta_{l_1l_2} \delta_{l_3l_4} \delta_{l_5l_6} \delta_{m_1\,-m_2}\delta_{m_3\,-m_4} \delta_{m_5\,-m_6}
\;C_{l_1} C_{l_3} C_{l_5}\cr
& +~\mathcal{G}^{\,\,l_1\; l_2\; l_3}_{m_1 m_2 m_3} \mathcal{G}^{\,\,l_4\; l_5\; l_6}_{m_4 m_5 m_6}\;b_{l_1l_2l_3}\, b_{l_4l_5l_6}\cr
& +
9 \mathcal{G}^{\,\,l_1\; l_2\; l_4}_{m_1 m_2 m_4} \mathcal{G}^{\,\,l_3\; l_5\; l_6}_{m_3 m_5 m_6}\;b_{l_1l_2l_4}\, b_{l_3l_5l_6}\cr
&~~ + 6 \delta_{l_1l_2}\delta_{m_1\,-m_2} \,C_{l_1}\, \sum_{LM}(-1)^M
\( \begin{array}{ccc} l_3 & l_4 & L\\ m_3 & m_4 & M \end{array} \)
\( \begin{array}{ccc} l_5 & l_6 & L \\ m_5 & m_6 &-M \end{array} \) \;\, T^{l_3\, l_4}_{l_5\,l_6}(L)\cr
&~~ + 9 \delta_{l_1l_4}\delta_{m_1\,-m_4} \,C_{l_1}\,\sum_{LM}(-1)^M
\( \begin{array}{ccc} l_2 & l_3 & L\\ m_2 & m_3 & M \end{array} \)
\( \begin{array}{ccc} l_5 & l_6 & L \\ m_5 & m_6 &-M \end{array} \) \; T^{l_2\, l_3}_{l_5\,l_6}(L)\cr
&~~ + \hbox{higher order terms}\,,
\label{eq:sixptexp}
\end{align}
where the CMB trispectrum $T^{l_1 \,l_2}_{l_3\,l_4}(L)$ is reviewed and discussed at some length in a recent companion
paper \cite{Regan:2010cn}.
Consecutively labelling the $i$th term in the six-point expression (\ref{eq:sixptexp}) above, we now evaluate each specific contribution to the expectation value of $ \bar F_\textrm {NL}^2$, denoting modes term-by-term as $\langle\bbR_n{}^2\rangle_i$: Recall that we have orthonormal basis eigenmodes $\barR_n$ (\ref{eq:orthonormal}) which satisfy
\begin{eqnarray}
\label{eq:orthoagain}
\sum_{l_i} \frac{h_{l_1l_2l_3}^2 \barR_n(l_1,l_2,l_3)\barR_p(l_1,l_2,l_3)}{v_{l_1}^2 v_{l_2}^2 v_{l_3}^2} = \sum_{l_i} w_s(l_1,l_2,l_3)\barR_n(l_1,l_2,l_3)\barR_p(l_1,l_2,l_3)
= \langle\barR_n,\,\barR_p\rangle = \delta_{np}\,.
\end{eqnarray}
The first term from (\ref{eq:sixptexp}) is a product of two-point correlators $C_l$ in the numerator which cancels
with the weighting in the denominator of (\ref{eq:betasqr}) to become simply
\begin{eqnarray}
\label{eq:fnlone}
\langle\bbR_n{}^2\rangle_1= 6\langle \barR_n,\,\barR_n\rangle = 6\,.
\end{eqnarray}
This is the primary Gaussian noise contribution which cumulatively dominates $ \bar F_\textrm {NL}^2$, as discussed in section \ref{sec:integratedfnl},
and which has been confirmed quantitatively in Gaussian simulations.
The second term from (\ref{eq:sixptexp}) also consists of products of two-point correlators which divide out, but it does not simply further:
\begin{eqnarray}
\label{eq:fnltwo}
\langle\bbR_n{}^2\rangle_2 = 9 \,\sum_{l_i} \frac{h_{l_1l_1l_3}h_{l_3l_5l_5}}
{v_{l_1}^2 v_{l_3}^2 v_{l_5}^2}\barR_n(l_1,l_1,l_3)\barR_n(l_3,l_5,l_5) \sum_{m_i} \( \begin{array}{ccc} l_1 & l_1 &l_3\\ m_1 & -m_1 & 0 \end{array} \) \( \begin{array}{ccc} l_3 & l_5 & l_5\\ 0 & m_5 & -m_5 \end{array} \) \,.
\end{eqnarray}
Despite the difficulty evaluating this expression explicity, it appears that the product of Wigner-3$j$ symbols will behave asymptotically
as $l^{-1}$, so we can expect the summed product of the $\barR_n$ eigenfunctions in (\ref{eq:fnltwo}) to be significantly suppressed
relative to the inner product in (\ref{eq:fnlone}). For $l_\textrm{max} \gg 1$, therefore, we expect $\langle\bbR_n{}^2\rangle_1\gg\langle\bbR_n{}^2\rangle_2$. For small non-Gaussianity,
it may be necessary to calculate this term explicitly, though it is more costly to evaluate than the usual inner product (\ref{eq:innerproduct}).
Alternatively, its effect can be determined from Gaussian simulations, which already confirm the clear dominance of the
first contribution (\ref{eq:fnlone}).
The third contribution from (\ref{eq:sixptexp}) is the straightforward product of the bispectra sought in \ref{sec:integratedfnl},
which from (\ref{eq:cmborthmodes}) simply collapses to
\begin{eqnarray}
\label{eq:fnlthree}
\langle\bbR_n{}^2\rangle_3&=& \left[\sum_{l_i} \frac{h_{l_1l_2l_3}^2}{v_{l_1}^2 v_{l_2}^2 v_{l_3}^2\sqrt{C_{l_1}C_{l_2}C_{l_3}}} \barR_n(l_1,l_2,l_3) \, b_{l_1l_2l_3}\right]^2 = \left[\sum_{l_i} w_s(l_1,l_2,l_3)\barR_n \, F_\textrm {NL} \sum_p \baR_n\barR_n \right]^2 \cr
&=& F_\textrm {NL}^2 \baR_n{}^2\,,
\end{eqnarray}
where here we have distinguished the recovered $b_{l_1l_2l_3}$ in the above from that normalised with $ F_\textrm {NL}=1$ which defines the $\baR_n$
coefficients in (\ref{eq:cmborthmodes}). The fourth contribution also arises from products of the bispectrum but in the form
\begin{eqnarray}
\label{eq:fnlfour}
\langle\bbR_n{}^2\rangle_4 = 9 F_\textrm {NL}^2 \sum_{L} \frac{1}{2L+1} \left[ \sum_p \baR_p \sum_{l_1l_2} w_s(l_1,l_2,L)\barR_n ((l_1,l_2,L) \barR_p((l_1,l_2,L) \right]^2\,.
\end{eqnarray}
However, as for the second term (\ref{eq:fnltwo}), it is clear that the additional weighting $(2L+1)^{-1}$ will generically suppress
this contribution, so that $\langle\bbR_n{}^2\rangle_4 \ll \langle\bbR_n{}^2\rangle_3$ for $l_\textrm{max}\gg1$.
Finally, we consider terms involving combinations of the two-point and four-point correlators which simplify considerably
because of the following identities for summed Wigner-3$j$ symbols:
\begin{eqnarray}
\label{eq:Wigneridentities}
\sum_{m_2,m_3} \( \begin{array}{ccc} l_1 & l_2 & l_3\\ m_1 & m_2 & m_3 \end{array} \) \( \begin{array}{ccc} l_2 & l_3 &l_4 \\ m_2 & m_3 &m_4 \end{array} \) = \frac{\delta_{l_1l_4}\delta_{m_1-m_4}}{2l_1+1} \,,\cr
\sum_{m_1} (-1)^{-l_1-m_1} \( \begin{array}{ccc} l_1 & l_1 & l_3\\ m_1 & -m_1 & 0 \end{array} \) = \sqrt{2l_1+1}\;\delta_{l_30}\,.
\end{eqnarray}
The fifth term from (\ref{eq:sixptexp}) reduces to
\begin{eqnarray}
\label{eq:fnlfive}
\langle\bbR_n{}^2\rangle_5 = 6\,\sum_{l_i} \frac{\sqrt{(2l_1+1)(2l_2+1)(2l_3+1)(2l_4+1)}}{4\pi v_{l_1}^2v_{l_2}v_{l_3}v_{l_4}\sqrt{C_0 C_{l_2} C_{l_3} C_{l_4}}}
\( \begin{array}{ccc} l_2 & l_3 & l_4\\ 0 &0 & 0 \end{array} \) \barR_n(l_1,l_1,0)\,\barR_n(l_2,l_3,l_4) \,T^{0\,l_2}_{l_3\,l_4}(l_2)\,,
\end{eqnarray}
which is a monopole term which can be ignored. The sixth term from (\ref{eq:sixptexp}) can be expressed in the form
\begin{eqnarray}
\langle\bbR_n{}^2\rangle_6 &=& 9\,\sum_{L} \frac{1}{v_L^2}\sum_{l_i}\frac{\sqrt{(2l_1+1)(2l_2+1)(2l_3+1)(2l_4+1)}}{4\pi v_{l_1} v_{l_2}v_{l_3}v_{l_4}\sqrt{C_{l_1}C_{l_2}C_{l_3}C_{l_4}}}
\( \begin{array}{ccc}L & l_1 & l_2\\ 0 &0 & 0 \end{array} \)
\( \begin{array}{ccc}L & l_3 & l_4\\ 0 &0 & 0 \end{array} \) \cr
&&\qquad\qquad\qquad\qquad ~\times~\barR_n(L,l_1,l_2)\,\barR_n(L,l_2,l_3) \;T^{l_1\, l_2}_{l_3\,l_4}(L)\,,
\end{eqnarray}
This term will contribute at a similar order to the third term with $\langle\bbR_n{}^2\rangle_3
\sim F_\textrm {NL}^2$ if there is a significant trispectrum, that is, if $g_{\rm NL}\,\tau_{\rm NL}\gg 1$ (see ref.~\cite{Regan:2010cn}).
Assuming that
correlators (unconnected parts) beyond fourth-order are negligible, then the primary contributions to $ \bar F_\textrm {NL}^2$ become
\begin{eqnarray}
\bar F_\textrm {NL}^2 = 6n_\textrm{max} + \sum_n^{n_\textrm{max}} \left [ F_\textrm {NL}^2\baR_n{}^2 + \langle\bbR_n{}^2\rangle_6 + ...\right]\,.
\end{eqnarray}
|
2,869,038,156,258 | arxiv | \section{Introduction}
\cite{Hosking-81} introduced long memory processes with quasi
periodic behaviour. This fact corresponds, for stationary processes, to spectral densities which exhibit
singularities at non zero frequencies. Many authors have contributed to the construction of
fractional models with singularities/poles outside the origin, see
for instance,
\cite{Gray:1994,Gray:1989,Hassler-94,viano95,leipus:viano:00, MR2532092}.
We can distinguish between two types of long memory: one regular and
the other cyclical according to whether the spectral density has a pole at
the origin or outside the origin. From a statistical point of view, the estimators of the long memory parameter
have been adapted to yield some estimates if cyclical effects are
assumed. In a parametric context, the $\sqrt{n}$-consistency of the maximum
likelihood estimate or the Whittle estimate has been proved (see
\cite{Hosoya-97,Giraitis:Hidalgo:Robinson-01} when the
pole is unknown). Semi-parametric estimates can be adapted more or less easily
to the \textit{cyclical} case (see \cite{1051.62075,0974.62079,arteche:robinson,MR2497555,MR2201232,MR2060018}).
When we consider empirical process related statistics, the situation is more delicate. The normalisation and the limit
distribution can be different according to whether the memory is regular
or cyclical. An extensive literature is devoted to the convergence of
the empirical process, see for instance \cite{0862.60026,MR1713796} in the regular case and \cite{MR1943152,ouldhaye:aphil:2003} in the cyclical case.
In this paper we give some convergence results on the kernel
estimator of the marginal density $f$.
Let $(X_1,\cdots,X_n)$ be an observed sample from $f$. The kernel
estimator of $f$ is defined by
\begin{equation}\label{noyau}
\tilde{f}_n(x)=\frac{1}{nm_n}\sum_{j=1}^nK\bigl(\frac{x-X_j}{m_n}\bigr),
\end{equation}
where $K$ is a kernel function and the bandwidth $m_n$ is a sequence
such that $m_n \to 0$
and $nm_n \to +\infty$ as $n\to\infty$.
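As a numerical illustration (not part of the original development), the estimator \eqref{noyau} can be sketched in a few lines of Python; the Epanechnikov kernel, the standard normal sample and the bandwidth exponent below are arbitrary choices made for this example.

```python
import numpy as np

def kde(x, sample, m):
    """Kernel density estimate: average of scaled kernels, as in (noyau)."""
    def K(u):
        # Epanechnikov kernel, compactly supported on [-1, 1]
        return np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)
    x = np.atleast_1d(x)
    u = (x[:, None] - sample[None, :]) / m   # (len(x), n) matrix of (x - X_j)/m
    return K(u).mean(axis=1) / m

rng = np.random.default_rng(0)
sample = rng.standard_normal(2000)            # i.i.d. stand-in for the data
m_n = 2000 ** (-0.2)                          # bandwidth of the form n^{-delta}
density_at_zero = kde(0.0, sample, m_n)[0]    # close to 1/sqrt(2*pi) here
```

The same code applies verbatim to a sample drawn from the dependent process studied below; only the asymptotics of the estimation error change.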
Consider the following infinite moving average process,
\begin{equation}\label{lineaires}
X_t=\sum_{j=-\infty}^{t}b_{t-j}\xi_j , \qquad t\ge 1
\end{equation}
where
\begin{itemize}
\item the sequence $(b_k)_k$ has the form
\begin{equation}\label{b(s)}
b_k=k^{-(\alpha+1)/2}\sum_{j\in J}a_j\bigl(\cos
k\lambda_j+o(1)\bigr), \qquad k\to\infty
\end{equation}
where $\alpha \in (0,1)$ and $\lambda_j \not= 0 $ for all $j\in J$, a
finite nonempty subset of $\BN$.
\item $(\xi_n)_n$ is a sequence of
independent and identically distributed random variables with zero mean
and finite variance $\BE \xi_0^2=\sigma^2< \infty$.
\end{itemize}
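For intuition about the model \eqref{lineaires}--\eqref{b(s)}, one can simulate it by truncating the moving average at some large lag $M$; the sketch below uses a single frequency $\lambda$, drops the $o(1)$ term and sets $b_0=1$, all of which are arbitrary choices for this illustration.

```python
import numpy as np

def simulate_cyclical(n, alpha=0.3, lam=np.pi / 4, M=5000, seed=0):
    """Truncated moving average X_t = sum_{k=0}^{M} b_k xi_{t-k} with
    b_k = k^{-(alpha+1)/2} cos(k*lam) for k >= 1 (and b_0 = 1)."""
    rng = np.random.default_rng(seed)
    k = np.arange(1, M + 1)
    b = np.concatenate(([1.0], k ** (-(alpha + 1) / 2) * np.cos(k * lam)))
    xi = rng.standard_normal(n + M)          # i.i.d. innovations
    # valid-mode convolution: out[t] = sum_k b[k] * xi[t + M - k]
    return np.convolve(xi, b, mode="valid")[:n]

X = simulate_cyclical(4000)
```

The sample autocovariance of such a path oscillates at frequency $\lambda$ while decaying slowly, in line with \eqref{r(n)}.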
From \cite{Giraitis:leipus:1995}, the covariance
function $r$ of $(X_t)$ defined by \eqref{lineaires} and \eqref{b(s)} has
the form
\begin{equation}\label{r(n)}
r(h)=h^{-\alpha}\sum_{j\in J}a_j\bigl(\cos h\lambda_j+o(1)\bigr),
\end{equation}
as $h$ tends to infinity.
A large class of linear processes satisfying these
conditions is obtained by filtering a white noise $(\xi_i)$
\begin{equation}\label{g1}
X_t = G(B)\xi_t \quad \mathrm{ with }\quad G(z)=g(z)\prod_{j=-m}^m\bigl(1-e^{i\lambda_j}z
\bigr)^{(\alpha_j-1)/2},\qquad m\ge1,
\end{equation}
where $B$ is the backshift operator and
where $g$ is an analytic function on $\{ |z|< 1 \}$,
continuous on $\{ |z|\le 1 \}$ and
$g(z)\not= 0$ if $\Abs{z}=1$,
and where
$$
0<\alpha_j\leq 1,\quad\alpha_j=\alpha_{-j},\quad\lambda_{-j}=-\lambda_j,
\quad j=0,\ldots,m,\textrm{ and}
$$
$$
0=\lambda_0<\lambda_1<\ldots<\lambda_m<\pi.
$$
Taking
\begin{equation*}
\alpha=\min\{\alpha_j,\,\,j=0,\ldots,m\},\quad \text{and } \quad
J=\{j \geq 0 \; : \; \alpha_j=\alpha\},
\end{equation*}
if $\alpha < \alpha_0/2$ then the condition \eqref{b(s)} is satisfied.
Note that the condition on the coefficient $\alpha$ ensures
that $\sum_{h=1}^\infty |r(h)|=\infty $, thus the process has a long-memory.
However this condition is not enough to characterize the cyclical long
memory.
\begin{enumerate}
\item When $\alpha<\alpha_0 /2$, we have $\left \vert \displaystyle\sum_{j=1}^h r(j)
\right\vert = o\left(\displaystyle\sum_{j=1}^h r(j)^2 \right)$ as $h\to\infty$.
Therefore the process $(X_t^2)$ also has long
memory, which is more persistent than that of $(X_t)$ (see Remark \ref{rem1}
for the exact expressions).
This fact characterizes cyclical long memory, and the asymptotic behavior of many statistics (see
below for the empirical process) can be drastically different.
We focus on this case in this paper.
\item When $\alpha > \alpha_0 /2$, the cyclical behavior is less
persistent than the regular long memory (singularity at frequency zero). The
presence of singularities outside zero does not modify the convergence
results obtained in the regular case.
\item When $\alpha = \alpha_0 /2$, both $(X_t)$ and $(X_t^2)$ will
contribute to the limiting distribution, which will be a
combination of the two
previous cases.
\end{enumerate}
We consider the empirical process associated with the
process $(X_n)_{n\ge1}$ defined by
$$
F_n(x)=\frac{1}{n}\sum_{j=1}^n\ind_{\{X_j\le x\}}.
$$
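In code, $F_n(x)$ is simply the proportion of observations at or below $x$; the vectorized sketch below only illustrates the definition.

```python
import numpy as np

def F_n(x, sample):
    """Empirical distribution function F_n(x) = (1/n) * #{j : X_j <= x}."""
    x = np.atleast_1d(x)
    return (sample[None, :] <= x[:, None]).mean(axis=1)

vals = F_n(np.array([-2.0, 0.0, 0.2, 1.0]),
           np.array([0.3, -1.2, 0.7, 0.1]))
# → [0.0, 0.25, 0.5, 1.0]
```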
\cite{ouldhaye:aphil:2003} proved the following results
for the linear process $(X_n)$
defined in (\ref{g1}). Assume that $\BE
\xi_0^4<\infty$ and that the cumulative distribution function of $\xi_0$
is five times differentiable with continuous, bounded and integrable
derivatives on $\mathbb{R}$. Denote
$$
d_n=n^{1-\alpha},\quad
\textrm{and} \quad D=\frac{\sqrt{(2-2\alpha)(1-2\alpha)}}{4\Gamma(\alpha)\cos(\alpha\pi/2)}.
$$
If $\alpha<\alpha_0/2$, then, as $n$ tends
to infinity, we have
\begin{equation}\label{process empirique}
d_n^{-1}[nt]\bigl(F_{[nt]}(x)-F(x)\bigr)\Longrightarrow
\frac{F''(x)}{2}R(t) ,
\end{equation}
where $R$ is a linear combination of independent Rosenblatt
processes with the same parameter $\alpha$
\begin{equation}\label{equa9}
R(t)=R_{\alpha,\Lambda}(t)=D^{-1}\sum_{j\in J}c_j\Bigl(R_j^{(1)}(t)+R_j^{(2)}(t)\Bigr),
\end{equation}
where $\Lambda=\{\lambda_j,\quad j\in J\}$, and where
\begin{itemize}
\item$c_0=h_0/2$, $c_j=h_j$ if $j\neq0$ and
\begin{equation*}
h_j=g(e^{i\lambda_j})\prod_{k\neq
j}\bigl(1-e^{i(\lambda_k-\lambda_j)}\bigr)^{(\alpha-1)/2},
\end{equation*}
\item $R_j^{(i)}(t),\,i=1,2\textrm{ and }j\in J$
are Rosenblatt processes with parameter $1-\alpha$, independent
except for $j=0$, $R_0^{(1)}(t)=R_0^{(2)}(t)$.
\end{itemize}
The paper is organized as follows. In Section 2, we establish
a limit theorem for the kernel estimate. This extends one of
\cite{0862.60026}'s results, in particular we show the
contribution and the effect of the singularities of the spectral
density outside the origin to the convergence rate and the limiting
distribution. Then we apply our limit theorem to construct
confidence bands for the density function.
Similarly to \cite{0695.60043,MR1457496}, we provide in Section 3
the asymptotic behavior of the mean integrated squared error, and we
show that the equivalence one has under regular long memory can be modified when the singularities of the
spectral density are not limited to the origin.
\section{Asymptotic distribution of the kernel estimator}\label{Sec:kernel}
Hereafter, we assume that the kernel $K$
is a continuous function with
compact support and $\int K(x) dx =1$. Concerning the bandwidth $m_n$, we assume that
$m_n\to 0$ and $nm_n\to\infty$, as $n$ tends to infinity.\\
The equality
\begin{equation}\label{noyaux}
\tilde{f}_n(x)-\mathbb{E}\tilde{f}_n(x)=\frac{1}{m_n}\int_\mathbb{R}K\bigl(\frac{x-u}{m_n}\bigr)d\bigl(F_n(u)-F(u)\bigr)
\end{equation}
clearly shows the relationship between the estimate
$\tilde{f}_n(x)$ and the empirical process $F_n(x)$. The process
$\tilde{f}_n(x)$ is sometimes called the empirical density process.\\
For every integer $n \ge1$, we define the following statistics
\begin{equation}\label{Y_{n,p}}
Y_{n,1}=\sum_{k=1}^n X_k, \; \qquad
Y_{n,2}=\sum_{k=1}^n \sum_{ s< r } b_r b_{s}\xi_{k-s}
\xi_{k-r},
\end{equation}
and
\begin{equation}\label{S_{n,2}}
S_{n,2}(x)=n\bigl(F_n(x)-F(x)\bigr)+F'(x)Y_{n,1}-\frac{1}{2}F''(x)Y_{n,2}.
\end{equation}
\begin{Rem}\label{rem1} For linear processes defined in \eqref{g1}, the following
equivalences as $n$ tends to infinity, have been proved by \cite{ouldhaye:aphil:2003}
\begin{equation}\label{Y_{N,2}}
\mathop{\rm Var}\nolimits(Y_{n,2})\sim\frac 14 \mathop{\rm Var}\nolimits\Bigl(\sum_{j=1}^n(X_j^2-\mathbb{E}(X_1^2))\Bigr)\sim
Cn^{2-2\alpha}.
\end{equation}
and
\begin{equation}\label{var1}
\mathop{\rm Var}\nolimits(Y_{n,1})=\mathop{\rm Var}\nolimits\Bigl(\sum_{j=1}^nX_j\Bigr) \sim
Cn^{2-\alpha_0}.
\end{equation}
Therefore (\ref{var1}) and (\ref{Y_{N,2}}) imply that the convergence rate
obtained in Proposition \ref{estimation} is slower than the convergence rate
of $\bar{X}_n$.
\end{Rem}
Let us define the class of Parzen kernels of order $s$.
\begin{Def}
A kernel function $K$ is said to be a Parzen kernel of order $s\ge2$
if it satisfies the following conditions
\begin{enumerate}
\item $ \int_{\mathbb{R}}K(u)du=1, $
\item for every $1\le j\le s-1$, $\int_{\mathbb{R}}u^jK(u)du=0, $
\item $\int_{\mathbb{R}}\vert u^s\vert\vert K(u)\vert du<\infty.$
\end{enumerate}
\end{Def}
\cite{Bretagnolle} proved the existence of such kernels, for
which an explicit construction can be found in \cite{MR564251}.
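As a small numerical check (an illustration, not the construction of \cite{MR564251}), the classical fourth-order kernel $K(u)=\tfrac12(3-u^2)\varphi(u)$, with $\varphi$ the standard Gaussian density, satisfies the moment conditions for $s=4$; note that it is not compactly supported, so it does not meet the support assumption used below and serves only to illustrate the moment conditions.

```python
import numpy as np

def K4(u):
    """Fourth-order kernel built from the Gaussian density:
    K(u) = (3 - u^2)/2 * phi(u)."""
    phi = np.exp(-u**2 / 2) / np.sqrt(2 * np.pi)
    return 0.5 * (3 - u**2) * phi

u = np.linspace(-10, 10, 20001)
du = u[1] - u[0]
# moments j = 0..3: should be approximately [1, 0, 0, 0]
moments = [(u**j * K4(u)).sum() * du for j in range(4)]
```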
\begin{Prop} \label{estimation}
Consider a process $(X_n)$
defined in (\ref{g1}). Assume that $\alpha<\alpha_0/2$, that $\BE
\xi_0^4<\infty$, and that the cumulative distribution function of $\xi_0$
is five times differentiable with continuous, bounded and integrable
derivatives on $\mathbb{R}$.
Let $K$ be a Parzen kernel of order $4$ having bounded total
variation. Assume that the bandwidth has the form
$$
m_n=n^{-\delta},\quad\textrm{where
}\frac{\alpha}{4}<\delta<\frac{\alpha}{2}.
$$
Then, as $n$ tends to infinity
\begin{equation}\label{HoSup}
n^\alpha\underset{x\in\mathbb{R}}{\sup}\vert
\tilde{f}_n(x)-f(x)\vert
\overset{d}{\longrightarrow}\underset{x\in\mathbb{R}}{\sup}
\Bigl\vert\frac{f''(x)}{2}\Bigr\vert\vert R_{\alpha,\Lambda}\vert.
\end{equation}
where $R_{\alpha,\Lambda}=R_{\alpha,\Lambda}(1)$.
Moreover,
\begin{equation}\label{vectorielle}
n^{\alpha}(\tilde{f}_n(x)-f(x))\overset{C_b(\mathbb{R})}{\Longrightarrow}
-\frac{f''(x)}{2}R_{\alpha,\Lambda},
\end{equation}
where $\overset{C_b(\mathbb{R})}{\Longrightarrow}$ denotes the
convergence in $C_b(\mathbb{R})$, the space of continuous bounded
functions.
\end{Prop}
\noindent \textbf{Proof : } \\
The difference between $\tilde{f}_n$ and $f$ can be expressed as
\begin{align*}
\tilde{f}_n(x)-f(x)& =\tilde{f}_n(x)-\mathbb{E}\tilde{f}_n(x)+\mathbb{E}\tilde{f}_n(x)-f(x)
\\
&=\frac{1}{m_n}\int K(u)d\bigl(F_n(x-m_nu)-F(x-m_nu)\bigr)
+\int\bigl(f(x-m_nu)-f(x)\bigr)K(u)du.
\end{align*}
We first replace $F_n-F$ by its expression in (\ref{S_{n,2}}).
Then we
apply the integration by parts formula on the first integral. For the
second, we apply the Taylor-Lagrange formula. There exists
a real number $u^*$ such that $\vert u^*-x\vert<\vert m_nu\vert$ and
\begin{align*}
\tilde{f}_n(x)-f(x)=& \frac{-1}{nm_n}\int
S_{n,2}(x-m_nu)dK(u)+\frac{Y_{n,1}}{n}\int
f'(x-m_nu)K(u)du\\
&-\frac{Y_{n,2}}{n}f''(x)\int K(u)du+\frac{Y_{n,2}}{n}m_n\int
f^{(3)}(u^*)uK(u)du+\\
&+\int
\bigl(-m_nuf'(x)+\frac{m_n^2u^2}{2}f''(x)-\frac{m_n^3u^3}{6}f^{(3)}(x)+
\frac{m_n^4u^4}{24}f^{(4)}(u^*)\bigr)K(u)du\\
=:&a_n(x)+b_n(x)+c_n(x)+d_n(x)+e_{n}(x).
\end{align*}
Now, a proof similar to that of Theorem 2.2 in \cite{0862.60026}
allows us to write for $2\delta<\alpha$
\begin{equation}\label{ho}
n^{\alpha+\delta-1}\underset{x\in\mathbb{R}}{\sup}\vert
S_{n,2}(x)\vert\overset{a.s.}\longrightarrow 0, \qquad \textrm{as} \;
n\to \infty.
\end{equation}
Thus we have
\begin{equation}\label{probability}
n^\alpha\underset{x\in\mathbb{R}}{\sup}\vert a_n(x)\vert
\overset{P}\longrightarrow0, \qquad \textrm{as} \;
n\to \infty
\end{equation}
where $\overset{P}\longrightarrow$ denotes the convergence in probability.\\
For the sequences $b_n(x)$, $d_n(x)$, $e_n(x)$, we get the
same convergence in probability as in (\ref{probability}) by bounding the
variances. To obtain the bounds, we start from the variances of $Y_{n,1}$
and $Y_{n,2}$ defined in (\ref{var1}) and (\ref{Y_{N,2}}), and we
use the fact that $K$ is a Parzen kernel and that $f$ is four times
differentiable with bounded derivatives. We get, as $n$ tends to
infinity,
\begin{eqnarray*}
\mathop{\rm Var}\nolimits(n^\alpha\underset{x\in\mathbb{R}}{\sup}\vert
b_n(x)\vert)&\le&
n^{2\alpha-2}\mathop{\rm Var}\nolimits\Bigl(Y_{n,1}\underset{x\in\mathbb{R}}{\sup}\vert
f'(x)\vert\int\vert K(u)\vert du\Bigr)\\
&=&Cn^{2\alpha-2}n^{2-\alpha_0}=Cn^{2\alpha-\alpha_0}\longrightarrow 0,
\end{eqnarray*}
\begin{eqnarray*}
\mathop{\rm Var}\nolimits(n^\alpha\underset{x\in\mathbb{R}}{\sup}\vert
d_n(x)\vert)&\le&
n^{2\alpha-2}\mathop{\rm Var}\nolimits\Bigl(Y_{n,2}m_n\underset{x\in\mathbb{R}}{\sup}\vert
f^{(3)}(x)\vert\int\vert u K(u)\vert du\Bigr)\\
&=&Cn^{2\alpha-2}n^{2-2\alpha}n^{-2\delta}\longrightarrow 0,
\end{eqnarray*}
\begin{equation*}
n^\alpha\underset{x\in\mathbb{R}}{\sup}\vert
e_n(x)\vert\le\underset{x\in\mathbb{R}}{\sup}\vert
f^{(4)}(x)\vert\frac{n^{\alpha-4\delta}}{24}\int u^4\vert
K(u)\vert du=O(n^{\alpha-4\delta})\longrightarrow 0.
\end{equation*}
These four convergences in probability imply that both sequences
$$
n^\alpha\underset{x\in\mathbb{R}}{\sup}\vert\tilde{f}_n(x)-f(x)\vert\quad\textrm
{and}\quad n^\alpha\underset{x\in\mathbb{R}}{\sup}\vert f''(x)\vert
\big\vert\frac{Y_{n,2}}{n}\big\vert=n^\alpha\underset{x\in\mathbb{R}}{\sup}\vert
c_n(x)\vert
$$
have the same limit as $n$ tends to infinity. According to Lemma~2.1
in \cite{ouldhaye:aphil:2003}, this common limit is equal to
$$ \underset{x\in\mathbb{R}}{\sup}
\Bigl\vert\frac{f''(x)}{2}\Bigr\vert\vert R_{\alpha,\Lambda}\vert.
$$
Hence (\ref{HoSup}) is proved. According to (\ref{Y_{N,2}}), we notice that the rate
$n^{-\alpha}$ given in (\ref{HoSup}) is the convergence rate of
$n^{-1}\sum_{j=1}^n(X_j^2-\mathbb{E}(X_1^2))$.\\
Similarly, as $n$ tends to infinity, the finite-dimensional
distributions of
$$
n^{\alpha}(\tilde{f}_n(x)-f(x))\quad\textrm {and}\quad-n^\alpha
f''(x)\frac{Y_{n,2}}{n}=n^\alpha c_n(x)
$$
converge simultaneously to the finite-dimensional distributions of
$ -(f''(x)/2)R_{\alpha,\Lambda}. $ This concludes the proof of
(\ref{vectorielle}) because (\ref{HoSup})
implies the tightness of $n^{\alpha}(\tilde{f}_n(x)-f(x))$.
\begin{Rem} We clearly see that the choice of the class
of Parzen kernels allows the bias
$\mathbb{E}\tilde{f}_n(x)-f(x)$ to become negligible. If $K$ is not
a Parzen kernel, the contribution of the bias $e_n(x)$ is
not negligible with respect to $b_n(x)$. Therefore, (\ref{HoSup})
is false for a standard kernel unless we replace
$\tilde{f}_n(x)-f(x)$ by $\tilde{f}_n(x)-\mathbb{E}\tilde{f}_n(x)$
in (\ref{HoSup}).
\end{Rem}
\begin{Rem}
The result (\ref{HoSup}) in Proposition
\ref{estimation} can be applied to obtain a goodness of fit test on
the marginal density.
\end{Rem}
\begin{Rem} The result (\ref{HoSup}) in Proposition
\ref{estimation} provides confidence bands for $f$ which
depend on the derivative $f''$. In general, $f''$ is not
available, and thus the confidence band cannot be calculated. Then
$f''$ can be replaced by its kernel estimate given by
$$
\tilde{f}_n''(x)=\frac{1}{nm_n^3}\sum_{j=1}^nK''\bigl(\frac{x-X_j}{m_n}\bigr).
$$
(Note that it is necessary to assume that the kernel function $K$
is twice differentiable.)
\end{Rem}
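To make the plug-in concrete, here is an illustrative sketch of $\tilde{f}_n''$ using the Gaussian kernel, whose second derivative has the closed form $K''(u)=(u^2-1)\varphi(u)$; the sample, bandwidth and kernel are arbitrary choices for the example (in particular the Gaussian kernel is not the compactly supported fourth-order kernel assumed in Proposition \ref{estimation}).

```python
import numpy as np

def kde_second_deriv(x, sample, m):
    """Plug-in estimate of f'' from the displayed formula."""
    def Kpp(u):
        # second derivative of the Gaussian kernel
        return (u**2 - 1) * np.exp(-u**2 / 2) / np.sqrt(2 * np.pi)
    x = np.atleast_1d(x)
    u = (x[:, None] - sample[None, :]) / m
    return Kpp(u).mean(axis=1) / m**3

rng = np.random.default_rng(1)
sample = rng.standard_normal(50_000)
est = kde_second_deriv(0.0, sample, m=0.35)
# for N(0,1) data, f''(0) = -1/sqrt(2*pi), so est[0] should be negative
```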
\begin{Prop}\label{confidence}
Under the same hypotheses as in Proposition \ref{estimation} and if
the kernel function $K$ is twice differentiable and its
derivative $K''$ is continuous, then for each interval
$[a,b]$ on which $f''$ is positive, we have
\begin{equation}\label{confiance}
2n^\alpha\underset{x\in[a,b]}{\sup}\Big\vert\frac{\tilde{f}_n(x)-f(x)}{\tilde{f}_n''(x)}
\Big\vert\overset{d}{\longrightarrow}\vert R_{\alpha,\Lambda}\vert.
\end{equation}
\end{Prop}
In other words, as $n$ tends to infinity, for every $t>0$, we have
\begin{equation}
P\big\{\tilde{f}_n(x)-\frac{t\tilde{f}_n''(x)}{2n^\alpha}\le
f(x)\le\tilde{f}_n(x)+\frac{t\tilde{f}_n''(x)}{2n^\alpha},\,\,a\le
x\le b\big\}\to P\big\{\vert R_{\alpha,\Lambda}\vert<t\big\}.\label{eq:15}
\end{equation}
In Proposition \ref{prop-quantil}, we give a consistent estimate of
the quantiles of process $R_{\alpha,\Lambda} $. Using
(\ref{eq:15}), this allows us
to obtain an asymptotic confidence band for the density
$f(x)$ which is valid for every $x\in[a,b]$.\\
\noindent \textbf{Proof : } \\
Let $\phi$ be the function defined on $C_b(\mathbb{R})$ by
$$\phi(g)=\underset{x\in[a,b]}{\sup}\Big\vert\frac{g(x)}{f''(x)}\Big\vert.$$
Since $\phi$ is continuous, (\ref{vectorielle}) ensures the
following convergence:
\begin{equation}\label{continue}
2n^\alpha\underset{x\in[a,b]}{\sup}\Big\vert
\frac{\tilde{f}_n(x)-f(x)}{f''(x)}\Big\vert
\overset{d}{\longrightarrow}\vert R_{\alpha,\Lambda}\vert, \qquad \textrm{as}\; n\to\infty.
\end{equation}
Now, we prove that the difference
$$
Y_n(x):=n^\alpha\Bigl(\frac{\tilde{f}_n(x)-f(x)}{f''(x)}
-\frac{\tilde{f}_n(x)-f(x)}{\tilde{f}_n''(x)}\Bigr)
$$
satisfies
$$
\underset{x\in[a,b]}{\sup}\vert
Y_n(x)\vert\overset{P}{\longrightarrow}0, \qquad \textrm{as}\; n\to\infty.
$$
This convergence is obtained as follows. We rewrite $Y_n(x)$ as
$$
\vert
Y_n(x)\vert=n^\alpha\Big\vert\frac{\tilde{f}_n(x)-f(x)}{f''(x)}\Big\vert
\Big\vert\frac{\tilde{f}_n''(x)-f''(x)}{\tilde{f}_n''(x)}\Big\vert.
$$
and by (\ref{continue}), it is enough to prove that
\begin{equation}\label{continue1}
\underset{x\in\mathbb{R}}{\sup}\Big\vert\frac{\tilde{f}_n''(x)-f''(x)}{\tilde{f}_n''(x)}\Big\vert
\overset{P}{\longrightarrow}0, \qquad \textrm{as}\; n\to\infty.
\end{equation}
The difference between
$\tilde{f}_n''$ and $f''$ can be written as
\begin{align*}
\tilde{f}_n''(x)-f''(x) = & \frac{-1}{nm_n^3}\int
S_{n,2}(x-m_nu)dK''(u)+\frac{Y_{n,1}}{n}
\int f^{(3)}(x-m_nu)K(u)du - \\
&-\frac{Y_{n,2}}{n}\int f^{(4)}(x-m_nu)K(u)du
+\int\bigl(f''(x-m_nu)-f''(x)\bigr)K(u)du,
\end{align*}
by replacing $f$ with $f''$ and $\tilde{f}_n$ with $\tilde{f}_n''$,
and following the same lines as in the proof of Proposition \ref{estimation}. Then, we get
$$
\underset{x\in\mathbb{R}}{\sup}\vert\tilde{f}_n''(x)-f''(x)\vert
=O\bigl(n^{-(2\delta\wedge(1-3\delta))}\bigr).
$$
Since $0<\delta<1/4$, we have
$$
\underset{x\in\mathbb{R}}{\sup}\vert\tilde{f}_n''(x)-f''(x)\vert\overset{P}{\longrightarrow}0.
$$
Moreover, the derivative $f''$ satisfies
$$
\underset{x\in[a,b]}{\inf}\vert f''(x)\vert>0.
$$
Thus, we get (\ref{continue1}). This concludes the proof.\\
\begin{Prop}\label{prop-quantil}
Fix $\beta\in(0,1)$. Let $c(\alpha,\Lambda,\beta)$ be the quantile
of order $\beta$ of the process $R_{\alpha,\Lambda}$ defined in
\eqref{equa9}.
If $(\alpha_n,\Lambda_n)$ are consistent (in probability) estimators of
$(\alpha,\Lambda)$,
then
\begin{equation}
c(\alpha_n,\Lambda_n,\beta)\overset{P}{\to}c(\alpha,\Lambda,\beta)\label{eq:1}
\end{equation}
\end{Prop}
\begin{Rem}
In the references given in the introduction, the parametric and semi
parametric methods provide estimators of $(\alpha,\Lambda)$ which satisfy the
condition required in Proposition \ref{prop-quantil}.
\end{Rem}
\noindent \textbf{Proof : } \\
We want to show \eqref{eq:1}, which will follow if we show that the map
$(\gamma,\theta) \mapsto c(\gamma,\theta,\beta)$ is continuous, since $(\alpha_n,\Lambda_n)\overset{P}{\to}(\alpha,\Lambda)$. To prove this continuity we prove that the mappings $g,h$ below are continuous,
$$
((0,1)\times [0,\pi]^{|J|}, \;
\vert.\vert)\overset{g}{\to}(C_b(\mathbb{R}),\|.\|)\overset{h}{\to}((0,1),\vert.\vert),$$
where
$\|.\|$ is the uniform metric, and where, in the following decomposition,
$F_{\gamma,\theta}$ denotes the distribution function of
$R_{\gamma,\theta}$:
$$
(\gamma,\theta) \mapsto [g(\gamma,\theta)=F_{\gamma,\theta}]\mapsto
[h(F_{\gamma,\theta})=c(\gamma,\theta,\beta)].$$
Continuity of $g$ can be proved as follows. Consider a deterministic
sequence $(\gamma_n,\theta_n)$ such that
$(\gamma_n,\theta_n)\to(\gamma,\theta)$ as $n\to\infty$. Then to prove that
$F_{\gamma_n,\theta_n}\to F_{\gamma,\theta}$ uniformly it will be
enough to show that $R_{\gamma_n,\theta_n}\Longrightarrow
R_{\gamma,\theta}$. To obtain the latter weak convergence it will
suffice to show that every sequence of Rosenblatt variables
$(R_{\gamma_n})$ with parameter $\gamma_n$ converges weakly to a
Rosenblatt variable $R_{\gamma}$ with parameter $\gamma$, as
$R_{\gamma_n,\theta_n}$ is a linear combination of independent
Rosenblatt variables $R_{\gamma_n}$ with the coefficients $c_j/D$ that
are continuous functions of $\gamma_n,\theta_n$. We have from
\cite{major} $$
R_{\gamma_n}=\int\int_{\mathbb{R}^2}\frac{e^{i(x+y)}-1}{i(x+y)}W_n(dx,dy)
$$
where
$$W_n(dx,dy)= \vert x\vert^{(\gamma_n-1)/2}\vert y\vert^{(\gamma_n-1)/2}W(dx,dy)
$$
with $W(dx,dy)$ being the standard Gaussian random measure, and since
$$\vert x\vert^{(\gamma_n-1)/2}\vert y\vert^{(\gamma_n-1)/2}\to\vert x\vert^{(\gamma-1)/2}\vert y\vert^{(\gamma-1)/2}
$$ we have the required convergence. \\
Now to prove the continuity of $h$ it is enough to note that the
quantile function is continuous (with respect to the uniform metric)
over the class of monotonic continuous distribution functions, i.e. if
$\|F_n-F\|\to0$ then $h(F_n,\beta)\to h(F,\beta)$. Of course here we
do have
$\|F_{\gamma_n,\theta_n}-F_{\gamma,\theta}\|\to0$, as we
just established that $R_{\gamma_n,\theta_n}\Longrightarrow
R_{\gamma,\theta}$.\hfill $\square$
\section{Asymptotic mean integrated squared error (MISE)}
The mean integrated squared error (MISE) of the estimate
$\tilde{f}_n$ is defined by
$$
\int_\mathbb{R}\mathbb{E}\bigl(\tilde{f}_n(x)-f(x)\bigr)^2dx.
$$
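A Monte Carlo approximation of this quantity is straightforward; the sketch below uses i.i.d. standard normal data and a Gaussian kernel as an illustrative stand-in for the dependent processes studied here, and the sample size, bandwidth and number of replications are arbitrary.

```python
import numpy as np

def mise_monte_carlo(n=500, m=0.3, reps=100, seed=0):
    """Average the integrated squared error of the kernel estimate
    over independent replications to approximate the MISE."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(-5, 5, 501)
    dx = grid[1] - grid[0]
    f = np.exp(-grid**2 / 2) / np.sqrt(2 * np.pi)    # true density
    ise = np.empty(reps)
    for r in range(reps):
        sample = rng.standard_normal(n)
        u = (grid[:, None] - sample[None, :]) / m
        fhat = (np.exp(-u**2 / 2) / np.sqrt(2 * np.pi)).mean(axis=1) / m
        ise[r] = ((fhat - f) ** 2).sum() * dx        # integrated squared error
    return ise.mean()

mise = mise_monte_carlo()
```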
For a wide class of linear processes, including processes
with short memory and \textit{regular} long memory,
\cite{0695.60043} and \cite{MR1457496} studied the asymptotic
behavior of the MISE. In particular, they established the following equivalence, when $n$ tends to infinity,
\begin{equation}\label{Hall1}
\int_\mathbb{R}\mathbb{E}\bigl(\tilde{f}_n(x)-f(x)\bigr)^2dx\sim
\int_\mathbb{R}\mathbb{E}_0\bigl(\tilde{f}_n(x)-f(x)\bigr)^2dx
+\mathop{\rm Var}\nolimits(\overline{X}_n)\int_{\mathbb{R}}f'(x)^2dx
\end{equation}
where $\BE_0$ denotes the expectation with respect to the distribution of
$n$ independent random variables distributed from the density $f$.
In particular, the equivalence (\ref{Hall1}) shows that the
convergence rate of the MISE cannot be faster than
the convergence of $\mathop{\rm Var}\nolimits(\overline{X}_n)$. In
other words, the convergence rate of the kernel density estimates is
bounded from above by the convergence rate of the empirical mean.
This is the optimal rate.
Hereafter, we assume that the distribution of the innovation $(\xi_k)$ satisfies
\begin{itemize}
\item[\textbf{[Z]}] There exist $\delta > 0$ and $C<\infty$ such that
the characteristic
function of $\xi_0$ satisfies
\begin{equation}
|E e^{iu\xi_0}| \leq C (1+|u|)^{-\delta}\label{eq:z2}
\end{equation}
\end{itemize}
\begin{Theo}\label{sec:asympt-mean-integr}
Let $(X_n)$ be a linear process
defined in (\ref{lineaires}) and (\ref{g1}) such that the
distribution of $\xi_0$ satisfies $[Z]$ and
$\BE
\xi_0^4<\infty$.
Assume that $\alpha<\frac 13 \wedge\frac{\alpha_0}{2}$ and the kernel $K$ is a
bounded symmetric density function.
Then the MISE satisfies, as $n$ tends to infinity,
\begin{align}
MISE(\tilde f_n) \sim &
\int_\mathbb{R}\mathbb{E}_0\bigl(\tilde{f}_n(x)-f(x)\bigr)^2dx
+\frac 14 \mathop{\rm Var}\nolimits \Bigl(\frac 1n \sum_{j=1}^n(X_j^2-\mathbb{E}(X_1^2))\Bigr) \int_{\mathbb{R}}f''(x)^2dx
\label{eq:2}
\end{align}
where $\BE_0$ denotes the expectation with respect to the distribution of
$n$ independent random variables distributed from the density $f$.
\end{Theo}
\begin{Rem}
The variance $ \mathop{\rm Var}\nolimits \Bigl(\frac 1n \sum_{j=1}^n(X_j^2-\mathbb{E}(X_1^2))\Bigr) $ is also equivalent to
$4\mathop{\rm Var}\nolimits(\frac 1n {Y}_{n,2}) $ (see \cite{ouldhaye:aphil:2003}).
Equation \eqref{eq:2} shows that this term caps the convergence rate of
the MISE, independently of the choice of the kernel and of the
bandwidth.
\end{Rem}
\noindent{\bf Proof. }~ \\
\textit{Notation :} for an arbitrary function $g$, we denote by $\hat g $ its Fourier
transform.
The proof consists in adapting the proof of \cite{MR1457496} to the cyclical
case.
Using the decomposition of the MISE of \cite{0695.60043}, we have
\begin{align}
\mathrm{MISE}(\tilde f_n) = &
\int_\mathbb{R}\mathbb{E}_0\bigl(\tilde{f}_n(x)-f(x)\bigr)^2\textrm{\,d} x + \nonumber\\
&+\frac1{n\pi} \sum_{j=1}^{n-1} (1-j/n) \int |\hat K (m_nt)|^2 \left\{ \textrm{Re\,}(
\mathbb{E} (e^{i t (X_1-X_{j+1})} ) - |\hat f (t)|^2\right\} \textrm{\,d} t \label{eq:5}\\
& :=
\mathrm{MISE}_0 + W_n \nonumber
\end{align}
Let $f_j$ be the joint density of $(X_1,X_{j+1})$.
We extend the expansion of $f_j$, obtained by
\cite{MR1409327} for the first order, to the second order as follows: there exists a function
$\ell_j \; : \mathbb{R}^2 \mapsto \mathbb{R} $ such
that
\begin{equation}
f_j(x,y) = f(x) f(y) +r(j) f'(x) f'(y) + \frac 12 r(j)^2 f''(x)
f'' (y) +\ell_j(x,y) \quad \forall (x,y)\in \mathbb{R}^2\label{eq:3}
\end{equation}
where $r$ is given in \eqref{r(n)}.
We have
\begin{align}
\label{eq:4}
\mathbb{E} (e^{i t (X_1-X_{j+1})} ) & = \int e^{it(x-y) } f(x) f(y)
\textrm{\,d} x \textrm{\,d} y +r(j) \int e^{it(x-y) } f'(x) f'(y)\textrm{\,d} x \textrm{\,d} y +\nonumber \\ & +\frac 12 r(j)^2\int
e^{it(x-y) } f''(x)
f'' (y) \textrm{\,d} x \textrm{\,d} y + \int e^{it(x-y) } \ell_j(x,y)\textrm{\,d} x \textrm{\,d} y \nonumber \\
&= |\hat f (t)|^2 +r(j) |\widehat{f'}(t)|^2 +\frac 12 r(j)^2
|\widehat{f''}(t)|^2 +\widehat{ \ell_j}(t,-t).
\end{align}
Similarly to \cite{MR1457496}, $W_n$ in \eqref{eq:5} can be written as
\begin{multline*}
W_n = \frac{2}{n} \sum_{j=1}^{n-1} (1-j/n) r(j) \int |K_{m_n}\star
f'|^2(t) \textrm{\,d} t + \frac{1}{n} \sum_{j=1}^{n-1} (1-j/n) r(j)^2 \int
|K_{m_n}\star f''|^2(t) \textrm{\,d} t +\\+\frac{1}{n\pi} \sum_{j=1}^{n-1}
(1-j/n) \int |\hat K(m_n t)|^2 \textrm{Re\,} \widehat{\ell_j}(t,-t) \textrm{\,d} t
\end{multline*}
where $K_{m_n}(x) = m_n^{-1}K(x m_n^{-1})$, and where $f\star g$ is
the convolution of $f$ and $g$. Moreover we have, for $k = 1,2$,
$$\int |K_{m_n}\star f^{(k)}|^2(t) \textrm{\,d} t = \int f^{(k)}(t)^2 \textrm{\,d} t +
o(1), \quad n\to\infty.$$
We obtain
\begin{multline}\label{eq:10}
W_n = \frac2 {n} \sum_{j=1}^{n-1} (1-\frac jn) r(j) \left( \int f'(t)^2 \textrm{\,d}
t +o(1) \right) + \frac1 {n} \sum_{j=1}^{n-1} (1-\frac jn)
r(j)^2\left( \int f''(t)^2 \textrm{\,d} t +o(1) \right) + \\ + \frac{1}{n\pi}
\sum_{j=1}^{n-1} (1-\frac jn) \int |\hat K(m_n t)|^2 \textrm{Re\,} \widehat{\ell_j}(t,-t)
\textrm{\,d} t.
\end{multline}
According to \cite{MR1027992}, we have
\begin{align}
\mathop{\rm Var}\nolimits \Bigl(\frac 1n
\sum_{j=1}^n(X_j^2-\mathbb{E}(X_1^2))\Bigr) & = \frac{2}{n^2}
\sum_{1\leq i,j\leq n } r^2(i-j) +O(n^{-1}), \nonumber \\
&=\frac2 {n^2} (nr(0)^2 + 2 \sum_{j=1}^{n-1} (n-j) r(j)^2 )
+O(n^{-1}) \nonumber \\ & = \frac 4 {n} \sum_{j=1}^{n-1} (1-j/n)
r(j)^2 +O(n^{-1}) :=\gamma(n) \label{eq:11}
\end{align}
Moreover, using the form of $r$ given in \eqref{r(n)} and the fact
that $\alpha < 1/3$, we get
\begin{align}
\gamma(n) &= \frac 4 {n} \sum_{j=1}^{n-1} (1-j/n) j^{-2\alpha}
(\sum_{k\in J}a_k \bigl(\cos j \lambda_k+o(1)\bigr))^2
+O(n^{-1}) \nonumber \\
&= \frac 2 {n} \sum_{j=1}^{n-1} (1-j/n)
j^{-2\alpha} \sum_{k\in J} a_k^2
+O(n^{-1}) \nonumber \\
&= \frac 2 {n} n^{1-2\alpha} \left( \frac{1}{1-2\alpha} -\frac{1}{2-2\alpha}\right) \sum_{k\in J} a_k^2
+O(n^{-1}) \nonumber \\
&= n^{-2\alpha} \frac{1}{(1-2\alpha)(1-\alpha)} \sum_{k\in J} a_k^2
+O(n^{-1}) \sim C n^{-2\alpha} \label{eq:7}
\end{align}
As $\alpha < \frac 13 \wedge \frac{\alpha_0}{2}$ and using \eqref{var1}, we get
\begin{equation}
\frac2{n} \sum_{j=1}^{n-1} (1-j/n) r(j) =\frac 1{n^2}\mathop{\rm Var}\nolimits(Y_{n,1})
-r(0) n^{-1}= O(n^{-\alpha_0}) + O(n^{-1})=
o(n^{-2\alpha}).\label{eq:6}
\end{equation}
From \eqref{eq:10}, \eqref{eq:11}, \eqref{eq:7} and \eqref{eq:6}, we get
\begin{align*}
W_n =\frac 14 \mathop{\rm Var}\nolimits \Bigl(\frac 1n
\sum_{j=1}^n(X_j^2-\mathbb{E}(X_1^2))\Bigr) &\int f''(t)^2 \textrm{\,d} t
+o(n^{-2\alpha}) + \\ &+ \frac{1}{n\pi}
\sum_{j=1}^{n-1} (1-j/n) \int |\hat K(m_n t)|^2 \textrm{Re\,} \widehat{\ell_j}(t,-t)
\textrm{\,d} t.
\end{align*}
Since $r(j)^2$ behaves asymptotically as $j^{-2\alpha}$, and
\begin{equation}
\frac{1}{n\pi}
\sum_{j=1}^{n-1} (1-j/n) \int |\hat K(m_n t)|^2 \textrm{Re\,} \widehat{\ell_j}(t,-t)
\textrm{\,d} t \leq \frac{1}{n\pi}
\sum_{j=1}^{n-1} (1-j/n) \int | \widehat{\ell_j}(t,-t)|
\textrm{\,d} t \end{equation}
the proof is completed using the following lemma proven below.
\begin{Lem}\label{sec:asympt-mean-integr-1}
Under the same assumptions as in Theorem \ref{sec:asympt-mean-integr},
\begin{equation}
\int |\widehat{\ell_j}(t,-t)|
\textrm{\,d} t = O(j^{-2\alpha - \epsilon} ). \label{eq:9}
\end{equation}
for $\epsilon$ an arbitrary positive number smaller than
$ \dfrac{1-3\alpha}{10}$.
\end{Lem}
\null \hfill $\square$
\noindent \textbf{Proof of Lemma \ref{sec:asympt-mean-integr-1}}
By definition of $\ell_j$ in \eqref{eq:3}, we have
$$
\widehat{\ell_j}(x,y) = \widehat{f_j}(x,y) - \widehat f(x) \widehat f(y)(
1-xy r(j) +\frac 12 x^2y^2 r(j)^2) $$
We split the integral
\begin{equation}\label{eq:8}
\int_{\mathbb{R}} |\widehat{\ell_j}(t,-t)|
\textrm{\,d} t = \int_{|t|> j^\epsilon}
|\widehat{\ell_j}(t,-t)| \textrm{\,d} t + \int_{|t|<j^\epsilon} |\widehat{\ell_j}(t,-t)| \textrm{\,d} t
\end{equation}
where $\epsilon$ is an arbitrary positive number smaller than $ \dfrac{1-3\alpha}{10}$.
Under assumption \eqref{eq:z2},
\cite{MR1409327} proved for the regular long memory that for arbitrary
$k$
$$|\widehat{f_j}(x_1,x_2) | \leq c(k) (1+|x|)^{-k}$$ for all
$x=(x_1,x_2)\in\mathbb{R}^2$ and
$$|\widehat{f}(x) | \leq c(k) (1+|x|)^{-k}$$ for all $x\in\mathbb{R}$.
Their proof can be adapted to the cyclical case, i.e. when the coefficients
$(b_j)_{j\in\mathbb{N}}$ satisfy \eqref{b(s)}. Using their
notation, it suffices to construct a finite set $J_1$ such that for
all $j\in J_1$ : $|b_{-j} |> 2| b_{t-j}| +c_1$ where $c_1$ does not
depend on $t$.
Since $(|b_j|)_{j\in\mathbb{Z}}$ is not summable, there exists a subsequence
$({j_u})_{u\in \mathbb{Z}}$ such that $b_{-j_u} \not = 0$. We can take
$J_1 $ a subset of $\{ j_u : u\in\mathbb{Z} \}$ with $[\delta
|J_1|] =k+3$.
Indeed, for $j\in J_1$, we have
$|b_{-j}| > C(J_1) |j^{-(\alpha+1)/2}| $, and for $t$ large enough
there exists $\tilde c_1$
$$ |j|^{-(1+\alpha)/2} > 2 /C(J_1)
|t-j|^{-(1+\alpha)/2} +\tilde c_1.$$
Therefore, there exists $c_1$ such that for all $j\in J_1$,
$$ |b_{-j} | > 2| b_{t-j}| + c_1.$$
For all $k'$, the first integral in (\ref{eq:8}) satisfies
$$ \int_{|t|> j^\epsilon}
|\widehat{\ell_j}(t,-t)| \textrm{\,d} t \leq j^{-\epsilon k'} \int_{|t|> j^\epsilon} |t|^{k'}
|\widehat{\ell_j}(t,-t)| \textrm{\,d} t = O(j^{-\epsilon k'}).$$
Therefore we can take any arbitrary $k'$ such that $k'> (2\alpha+\epsilon)/\epsilon$.
For the second integral in \eqref{eq:8}, it is enough to show that
\begin{equation}
\sup_{|u|< j^\epsilon} |\widehat{\ell_j}(u,-u) | =
O(j^{-2\alpha-2\epsilon }).\label{eq:12}
\end{equation}
The proof is quite similar to that of equation $(2.20)$ in Giraitis \textit{et al.}
\cite{MR1409327}, adding the terms of order two in the
expansion.
We write the difference $\widehat{f_j}(x,y) - \widehat f(x) \widehat f(y)$
from products of the characteristic function $\phi$ of $\xi_1$.
\begin{align*}
\widehat{f_j}(x,y) - \widehat f(x) \widehat f(y) & = \prod_{I_1}
\prod_{I_2} \prod_{I_3} \phi(xb_{-i} +y b_{j-i}) - \prod_{I_1}
\prod_{I_2} \prod_{I_3} \phi(xb_{-i} )\phi(y b_{j-i}) := a_1a_2a_3
-a'_1a'_2a'_3\\
&= (a_1-a'_1) a_2a_3 + (a_2-a'_2) a'_1a_3+ (a_3-a'_3)a'_1a'_2 \end{align*}
where $I_1= \{ |i| < j^{2\epsilon}\}$, $I_2= \{ |j-i| < j^{2\epsilon}\}$
and $I_3= \mathbb{Z}-(I_1\cup I_2)$.
We will deduce \eqref{eq:12} from $|a_i|<1$, $|a'_i|<1$ and the following
facts, valid for $|x|< j^\epsilon$ and $|y|< j^\epsilon$
\begin{eqnarray}
\label{eq:13}
a_i-a'_i &=& O(j^{-2\alpha-2\epsilon}), \quad i=1,2\\
a_3-a'_3 &= & a'_3(-xy r(j) +\frac 12 x^2y^2 r(j)^2) + O(j^{-2\alpha-2\epsilon}).\label{eq:14}
\end{eqnarray}
Similarly to \cite{MR1409327}, we prove \eqref{eq:13} with $i=1$ (or
similarly for $i=2$) as follows
\begin{align*}
| a_1-a'_1| &\leq \sum_{|i|\leq j^{2\epsilon}} |\phi(xb_{-i} +y
b_{j-i}) - \phi(xb_{-i} )\phi(y b_{j-i}) |\\
&\leq \sum_{|i|\leq j^{2\epsilon}} |xb_{j-i} |
\end{align*}
As $|i|\leq j^{2\epsilon}$ and $|x|\leq j^\epsilon$, we have
$$|xb_{j-i}|\leq C j^\epsilon j^{-(1+\alpha)/2} =
j^{-2\alpha-4\epsilon} O(1) $$
since $\epsilon< \dfrac{1-3\alpha}{10}$. Therefore $|a_1 - a'_1| =
j^{-2\alpha-2\epsilon } O(1) .$
To prove \eqref{eq:14}, we follow the same calculations as in \cite{MR1409327}, page
325. Since $|xb_{-i} | + |yb_{j-i}| =o(1)$, we write $a_3-a'_3$ in the
form $$ a_3-a'_3 = a'_3 (e^{Q_j(x,y)}-1) = a'_3 \left(Q_j(x,y) + \frac 12
Q_j(x,y)^2 +o(Q_j(x,y)^2) \right), $$
where
\begin{align*}
Q_j(x,y) &=\sum_{i\in I_3} \Psi(xb_{-i},yb_{j-i}) = -xy \sum_{i\in
I_3} b_{-i} b_{j-i} +O(\sum_{i\in I_3} (x b_{-i})^2 |y b_{j-i}|
+|x b_{-i}| |yb_{j-i}|^2 ) \\ &:= -xy \sum_{i\in I_3} b_{-i} b_{j-i}
+R_n
\end{align*}
where $$ \Psi(x,y) = \log(\phi(x+y) ) -\log(\phi(x) )-\log(\phi(y) ).$$ We then show that
$$Q_j(x,y) =-xyr(j) + O\Big(\sum_{I_1\cup I_2} |x||y||b_{-i}||b_{j-i}|
+\sum_i x^2 |y| |b_{-i}|^2|b_{j-i}| \Big) $$
and
\begin{multline*}
Q_j(x,y)^2 = x^2 y^2 r(j)^2+ x^2y^2 \Big(\sum_{i\in I_1\cup I_2} b_{-i}
b_{j-i} \Big)^2 - 2x^2y^2 \sum_{i\in \mathbb{Z}} b_{-i}
b_{j-i}\sum_{i\in I_1\cup I_2} b_{-i} b_{j-i} \\ + R_n^2 -2R_n xy
\sum_{i\in I_3} b_{-i} b_{j-i}.
\end{multline*}
For $|x|< j^\epsilon$ and $|y|< j^\epsilon$ we have
$$\sum_{I_1\cup I_2} |x||y||b_{-i}||b_{j-i}| =
j^{2\epsilon-(1+\alpha)/2 } O(1) = j^{-2\alpha-2\epsilon} O(1) $$
since $\epsilon < (1-3\alpha )/8 $
and
$$\sum_i x^2 |y| |b_{-i}|^2|b_{j-i}| = j^{-\alpha/2 -1/2 +3\epsilon}
O(1) = j^{-2\alpha-2\epsilon} O(1) $$
since $\epsilon < (1-3\alpha )/10 $.
These asymptotic bounds ensure that for $|x|< j^\epsilon$ and $|y|< j^\epsilon$ we have
$$ a_3-a'_3 = a'_3 \left( -xy r(j) +\frac 12 x^2 y^2 r(j)^2 \right)
+O(j^{-2\alpha-2\epsilon}), $$
which is \eqref{eq:14}.
\begin{center}
\textbf{Acknowledgement}
\end{center}
The authors would like to thank the anonymous referee for their
helpful comments and suggestions, which improved the presentation
of the paper.
\section{Introduction and Motivation}
The starting point of this paper is the following well-known double
inequality:
\begin{equation}
\frac{e}{2n+2}<e-\left( 1+\frac{1}{n}\right) ^{n}<\frac{e}{2n+1},\ \ \ n\geq
1. \label{s}
\end{equation}
This inequality was widely discussed and extended in the recent past, since
it was used to improve inequalities of Hardy-Carleman type. See, for example,
\cite{yp}, \cite{sz}, \cite{y}, \cite{x}, \cite{yu}.
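As a quick numerical sanity check (ours, not part of the published argument), the double inequality (\ref{s}) can be verified directly in floating point over a range of $n$; the margins are far larger than rounding error in this range:

```python
import math

# Check e/(2n+2) < e - (1+1/n)^n < e/(2n+1) for n = 1..n_max.
def check_double_inequality(n_max=1000):
    for n in range(1, n_max + 1):
        gap = math.e - (1.0 + 1.0 / n) ** n
        assert math.e / (2 * n + 2) < gap < math.e / (2 * n + 1), n
    return True
```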
As (\ref{s}) is equivalent to
\[
\frac{2n}{2n+1}<\frac{1}{e}\left( 1+\frac{1}{n}\right) ^{n}<\frac{2n+1}{2n+2},
\]
we prove that the best approximation of the form
\begin{equation}
\frac{1}{e}\left( 1+\frac{1}{n}\right) ^{n}\approx \frac{n+a}{n+b},\text{ \ \ \ as }n\rightarrow \infty \label{na}
\end{equation}
is obtained for $a=5/12$ and $b=11/12.$ Then we prove the following\bigskip
\textbf{Theorem 1. }\emph{For every real number }$x\in \lbrack 1,\infty ),$
\emph{the following inequalities hold:}
\begin{eqnarray*}
&&\frac{x+\frac{5}{12}}{x+\frac{11}{12}}-\frac{5}{288x^{3}}+\frac{343}{8640x^{4}}-\frac{2621}{41\,472x^{5}} \\
&<&\frac{1}{e}\left( 1+\frac{1}{x}\right) ^{x} \\
&<&\frac{x+\frac{5}{12}}{x+\frac{11}{12}}-\frac{5}{288x^{3}}+\frac{343}{8640x^{4}}-\frac{2621}{41\,472x^{5}}+\frac{300\,901}{3483\,648x^{6}}.
\end{eqnarray*}
As an application, we give a new proof of the limit
\begin{equation}
\lim_{n\rightarrow \infty }\left( \frac{\left( n+1\right) ^{n+1}}{n^{n}}-\frac{n^{n}}{\left( n-1\right) ^{n-1}}\right) =e. \label{z}
\end{equation}
This limit is also known as Keller's limit. See e.g. \cite{j}, where a
different proof of (\ref{z}) is presented.
Moreover, the estimates from Theorem 1 are strong enough to prove
\begin{equation}
\lim_{n\rightarrow \infty }n^{2}\left( \left( \frac{\left( n+1\right) ^{n+1}}{n^{n}}-\frac{n^{n}}{\left( n-1\right) ^{n-1}}\right) -e\right) =\frac{e}{24}, \label{v}
\end{equation}
which is, to the best of our knowledge, a new result.
Finally, improvements of Carleman's inequality are given.
\section{The Proofs}
In order to find the best approximation (\ref{na}), we associate to it the
relative error sequence $w_{n}$ defined by the relation
\[
\frac{1}{e}\left( 1+\frac{1}{n}\right) ^{n}=\frac{n+a}{n+b}\cdot \exp
w_{n},\ \ \ n\geq 1,
\]
and we consider an approximation (\ref{na}) to be better when $w_{n}$
converges faster to zero. We have
\[
w_{n}=n\ln \left( 1+\frac{1}{n}\right) -1-\ln \frac{n+a}{n+b},
\]
and using mathematical software such as Maple, we get
\[
w_{n}=\left( -a+b-\frac{1}{2}\right) \frac{1}{n}+\left( \frac{1}{2}a^{2}-\frac{1}{2}b^{2}+\frac{1}{3}\right) \frac{1}{n^{2}}+\left( \frac{1}{3}b^{3}-\frac{1}{3}a^{3}-\frac{1}{4}\right) \frac{1}{n^{3}}+O\left( \frac{1}{n^{4}}\right) .
\]
This form can be also obtained by direct computation.
Evidently, the fastest convergence of $w_{n}$ is obtained when the first two
coefficients in this expansion vanish, that is, for $a=5/12$ and $b=11/12.$ Our
first aim is now attained.\bigskip
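The optimality of $a=5/12$, $b=11/12$ is easy to confirm numerically: with this choice $n^{2}w_{n}\rightarrow 0$ while $n^{3}w_{n}$ tends to the third coefficient $-5/288$ (which matches the leading correction $-5/(288x^{3})$ in Theorem 1), whereas a non-optimal pair such as $a=1/2$, $b=1$ only gives $n^{2}w_{n}\rightarrow -1/24$. A minimal sketch (the function name is ours):

```python
import math

# w_n = n*ln(1+1/n) - 1 - ln((n+a)/(n+b)); log1p keeps the
# near-cancellation between the two logarithms accurate.
def w(n, a=5.0 / 12.0, b=11.0 / 12.0):
    return n * math.log1p(1.0 / n) - 1.0 - math.log((n + a) / (n + b))
```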
\emph{Proof of Theorem 1. }The requested inequalities can be written as $f>0$
and $g<0,$ where
\[
f\left( x\right) =x\ln \left( 1+\frac{1}{x}\right) -1-\ln \left( \frac{x+\frac{5}{12}}{x+\frac{11}{12}}-\frac{5}{288x^{3}}+\frac{343}{8640x^{4}}-\frac{2621}{41\,472x^{5}}\right)
\]
and
\[
g\left( x\right) =x\ln \left( 1+\frac{1}{x}\right) -1-\ln \left( \frac{x+\frac{5}{12}}{x+\frac{11}{12}}-\frac{5}{288x^{3}}+\frac{343}{8640x^{4}}-\frac{2621}{41\,472x^{5}}+\frac{300\,901}{3483\,648x^{6}}\right) .
\]
We have
\[
f^{\prime \prime }\left( x\right) =\frac{A\left( x-1\right) }{x^{2}\left(
x+1\right) ^{2}\left( 12x+11\right) ^{2}P^{2}\left( x\right) }>0
\]
and
\[
g^{\prime \prime }\left( x\right) =-\frac{B\left( x-1\right) }{x^{2}\left(
x+1\right) ^{2}\left( 12x+11\right) ^{2}Q^{2}\left( x\right) }<0,
\]
where
\[
P\left( x\right)
=59\,184x^{2}-66\,708x-43\,200x^{3}+1036\,800x^{5}+2488\,320x^{6}-144\,155
\]
\begin{eqnarray*}
Q\left( x\right) &=&5945\,040x-5603\,472x^{2}+4971\,456x^{3}-3628\,800x^{4}
\\
&&+87\,091\,200x^{6}+209\,018\,880x^{7}+16\,549\,555
\end{eqnarray*}
\begin{eqnarray*}
A\left( x\right) &=&387\,888\,768\,\allowbreak
643\,091\,163x+1374\,068\,561\,\allowbreak 183\,884\,363\allowbreak x^{2} \\
&&+2856\,411\,438\,\allowbreak
418\,498\,368x^{3}+3861\,333\,058\,\allowbreak 156\,847\,712\allowbreak x^{4}
\\
&&+3547\,125\,026\,\allowbreak
642\,062\,080x^{5}+2242\,448\,726\,\allowbreak 942\,859\,264\allowbreak x^{6}
\\
&&+963\,345\,615\,\allowbreak 805\,707\,264x^{7}+269\,162\,452\,\allowbreak
894\,408\,704\allowbreak x^{8} \\
&&+44\,174\,729\,\allowbreak 709\,158\,400x^{9}+3234\,548\,\allowbreak
057\,702\,400\allowbreak x^{10} \\
&&+48\,685\,659\,\allowbreak 681\,079\,707
\end{eqnarray*}
\begin{eqnarray*}
B\left( x\right) &=&5495\,\allowbreak 336\,279\,092\,\allowbreak
271\,136\,793x+22\,015\,\allowbreak 820\,845\,590\,\allowbreak
210\,733\,374\allowbreak x^{2} \\
&&+52\,587\,\allowbreak 526\,363\,654\,\allowbreak
958\,754\,048x^{3}+83\,107\,\allowbreak 983\,906\,845\,\allowbreak
638\,539\,984\allowbreak x^{4} \\
&&+91\,197\,\allowbreak 790\,053\,279\,\allowbreak
643\,410\,048x^{5}+70\,886\,\allowbreak 916\,929\,730\,\allowbreak
329\,339\,904\allowbreak x^{6} \\
&&+39\,022\,\allowbreak 307\,420\,738\,\allowbreak
572\,320\,768x^{7}+14\,907\,\allowbreak 444\,982\,230\,\allowbreak
536\,515\,584\allowbreak x^{8} \\
&&+3763\,\allowbreak 807\,019\,677\,\allowbreak
591\,584\,768x^{9}+565\,\allowbreak 244\,311\,814\,\allowbreak
774\,194\,176\allowbreak x^{10} \\
&&+38\,\allowbreak 255\,330\,631\,\allowbreak
116\,390\,400x^{11}+621\,\allowbreak 810\,333\,384\,\allowbreak
191\,039\,953\allowbreak .
\end{eqnarray*}
Evidently, $g$ is strictly concave, $f$ is strictly convex, with $f\left(
\infty \right) =g\left( \infty \right) =0,$ so $g<0$ and $f>0$ on $[1,\infty
).$ The proof is completed.$\square $
\section{Keller's limit}
Let us rewrite Theorem 1 in the form
\[
u\left( n\right) <\frac{1}{e}\left( 1+\frac{1}{n}\right) ^{n}<v\left(
n\right) ,
\]
where
\[
u\left( x\right) =\frac{x+\frac{5}{12}}{x+\frac{11}{12}}-\frac{5}{288x^{3}}+\frac{343}{8640x^{4}}-\frac{2621}{41\,472x^{5}}
\]
and
\[
v\left( x\right) =\frac{x+\frac{5}{12}}{x+\frac{11}{12}}-\frac{5}{288x^{3}}+\frac{343}{8640x^{4}}-\frac{2621}{41\,472x^{5}}+\frac{300\,901}{3483\,648x^{6}}.
\]
We prove (\ref{z}) using the squeeze (two policemen) lemma. As the sequence
\[
x_{n}=\frac{1}{e}\left( \frac{\left( n+1\right) ^{n+1}}{n^{n}}-\frac{n^{n}}{\left( n-1\right) ^{n-1}}\right)
\]
can be written as
\[
x_{n}=\left( n+1\right) \frac{1}{e}\left( 1+\frac{1}{n}\right) ^{n}-n\frac{1}{e}\left( 1+\frac{1}{n-1}\right) ^{n-1},
\]
we use Theorem 1 to obtain
\begin{equation}
\left( n+1\right) u\left( n\right) -nv\left( n-1\right) <x_{n}<\left(
n+1\right) v\left( n\right) -nu\left( n-1\right) . \label{t}
\end{equation}
The extreme-side sequences are rational functions of $n$ and they tend
together to $1$ as $n$ approaches infinity. Indeed,
\[
\left( n+1\right) u\left( n\right) -nv\left( n-1\right) =\frac{2508\,226\,560n^{13}-12\,\allowbreak 959\,170\,560n^{12}+\cdots }{17\,418\,240n^{5}\left( n-1\right) ^{6}\left( 12n-1\right) \left( 12n+11\right) }
\]
and
\[
\left( n+1\right) v\left( n\right) -nu\left( n-1\right) =\frac{2508\,226\,560n^{13}-10\,\allowbreak 450\,944\,000n^{12}+\cdots }{17\,418\,240n^{6}\left( n-1\right) ^{5}\left( 12n-1\right) \left( 12n+11\right) }.
\]
It follows that $x_{n}$ tends to $1$ as $n$ approaches infinity, so (\ref{z}) is proved.
Further, by (\ref{t}), we get
\begin{eqnarray*}
&&n^{2}\left( \left( \left( n+1\right) u\left( n\right) -nv\left( n-1\right)
\right) -1\right) \\
&<&n^{2}\left( x_{n}-1\right) \\
&<&n^{2}\left( \left( \left( n+1\right) v\left( n\right) -nu\left(
n-1\right) \right) -1\right)
\end{eqnarray*}
and again the extreme-side sequences are rational functions of $n$ which tend
together to $1/24$ as $n$ approaches infinity. Indeed,
\[
n^{2}\left( \left( \left( n+1\right) u\left( n\right) -nv\left( n-1\right)
\right) -1\right) =\frac{104\,509\,440n^{11}-539\,965\,440n^{10}+\cdots }{17\,418\,240n^{3}\left( n-1\right) ^{6}\left( 12n-1\right) \left( 12n+11\right) }
\]
and
\[
n^{2}\left( \left( \left( n+1\right) v\left( n\right) -nu\left( n-1\right)
\right) -1\right) =\frac{104\,509\,440n^{11}-435\,456\,000n^{10}+\cdots }{17\,418\,240n^{4}\left( n-1\right) ^{5}\left( 12n-1\right) \left( 12n+11\right) }.
\]
In consequence, $n^{2}\left( x_{n}-1\right) $ tends to $1/24,$ which proves (\ref{v}).
\section{Improvements of Carleman's inequality}
While the Swedish mathematician Torsten Carleman was studying quasi-analytical
functions, he discovered an important inequality, now known as Carleman's
inequality. If $\sum a_{n}$ is a convergent series of nonnegative reals, then
\begin{equation}
\sum_{n=1}^{\infty }\left( a_{1}a_{2}\cdots a_{n}\right) ^{1/n}\leq
e\sum_{n=1}^{\infty }a_{n}. \label{e}
\end{equation}
This inequality was proven to be of great independent interest, since many
authors improved it in the recent past.
The main tool for studying and improving (\ref{e}) was the proof of P\'{o}lya
(see \cite{p1}-\cite{p2}), who started from the AM--GM inequality in the form
\begin{equation}
\left( a_{1}a_{2}\cdots a_{n}\right) ^{1/n}\leq \frac{c_{1}a_{1}+c_{2}a_{2}
\cdots +c_{n}a_{n}}{n\left( c_{1}c_{2}\cdots c_{n}\right) ^{1/n}}, \label{c}
\end{equation}
where $c_{1},$ $c_{2},$ $...,$ $c_{n}>0.$ The proof of the following result
is based on P\'{o}lya's\ idea.\bigskip
\textbf{Theorem 2. }\emph{Let }$a_{n}>0$ \emph{be such that }$\sum a_{n}$
\emph{is convergent and }$c_{n}>0$ \emph{be such that}
\[
\sum_{k=1}^{\infty }\frac{1}{k\left( c_{1}c_{2}\cdots c_{k}\right) ^{1/k}}=l\in \mathbb{R}.
\]
\emph{Denote}
\[
x_{n}=\sum_{k=n}^{\infty }\frac{1}{k\left( c_{1}c_{2}\cdots c_{k}\right)
^{1/k}}.
\]
\emph{Then}
\begin{equation}
\sum_{n=1}^{\infty }\left( a_{1}...a_{n}\right) ^{1/n}\leq
\sum_{n=1}^{\infty }c_{n}x_{n}a_{n}. \label{cn}
\end{equation}
\emph{Proof. }Using (\ref{c}), we have
\begin{eqnarray*}
\sum_{n=1}^{\infty }\left( a_{1}a_{2}\cdots a_{n}\right) ^{1/n} &\leq
&\sum_{n=1}^{\infty }\left( \frac{1}{n\left( c_{1}c_{2}\cdots c_{n}\right)
^{1/n}}\sum_{m=1}^{n}c_{m}a_{m}\right) \\
&=&\sum_{n=1}^{\infty }\sum_{m=1}^{n}\frac{c_{m}a_{m}}{n\left(
c_{1}c_{2}\cdots c_{n}\right) ^{1/n}} \\
&=&\sum_{m=1}^{\infty }\sum_{n=m}^{\infty }\frac{c_{m}a_{m}}{n\left(
c_{1}c_{2}\cdots c_{n}\right) ^{1/n}} \\
&=&\sum_{m=1}^{\infty }c_{m}a_{m}\sum_{n=m}^{\infty }\frac{1}{n\left(
c_{1}c_{2}\cdots c_{n}\right) ^{1/n}} \\
&=&\sum_{m=1}^{\infty }c_{m}a_{m}\sum_{n=m}^{\infty }\left(
x_{n}-x_{n+1}\right) \\
&=&\sum_{m=1}^{\infty }c_{m}x_{m}a_{m}.\square
\end{eqnarray*}
P\'{o}lya took $c_{n}=\left( n+1\right) ^{n}/n^{n-1}$ and (\ref{cn}) becomes
\[
\sum_{n=1}^{\infty }\left( a_{1}...a_{n}\right) ^{1/n}\leq
\sum_{n=1}^{\infty }\left( 1+\frac{1}{n}\right) ^{n}a_{n}.
\]
Now (\ref{e}) follows from $\left( 1+1/n\right) ^{n}<e.$
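P\'{o}lya's choice telescopes: $c_{1}c_{2}\cdots c_{n}=(n+1)^{n}$, so $n(c_{1}\cdots c_{n})^{1/n}=n(n+1)$ and $x_{n}=\sum_{k\geq n}1/(k(k+1))=1/n$, whence $c_{n}x_{n}=(1+1/n)^{n}$. A short numerical confirmation of both telescoping identities (the function names are ours):

```python
import math

# log of c_1...c_n for c_k = (k+1)^k / k^(k-1); the sum telescopes
# exactly to n*log(n+1).
def polya_log_product(n):
    return sum(k * math.log(k + 1) - (k - 1) * math.log(k)
               for k in range(1, n + 1))

# Truncated tail x_n = sum_{k>=n} 1/(k(k+1)); the exact value is 1/n.
def x_tail(n, kmax=10**5):
    return sum(1.0 / (k * (k + 1)) for k in range(n, kmax))
```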
Almost all improvements stated in the recent past use upper bounds for
$\left( 1+1/n\right) ^{n}$, stronger than $\left( 1+1/n\right) ^{n}<e.$ See
\cite{yp}, \cite{j}-\cite{yu}.
We use Theorem 1 to establish the following improvement of Carleman's
inequality.\bigskip
\textbf{Theorem 3. }\emph{Let }$a_{n}>0$ \emph{be such that }$\sum a_{n}<\infty
.$ \emph{Then}
\[
\sum_{n=1}^{\infty }\left( a_{1}...a_{n}\right) ^{1/n}\leq
e\sum_{n=1}^{\infty }\frac{12n+5}{12n+11}a_{n}.
\]
\emph{It also holds that}
\[
\sum_{n=1}^{\infty }\left( a_{1}...a_{n}\right) ^{1/n}\leq
e\sum_{n=1}^{\infty }\left( \frac{12n+5}{12n+11}-\varepsilon _{n}\right)
a_{n},
\]
\emph{where}
\[
\varepsilon _{n}=\frac{5}{288n^{3}}-\frac{343}{8640n^{4}}+\frac{2621}{41\,472n^{5}}-\frac{300\,901}{3483\,648n^{6}}.
\]
As they are very accurate, we are convinced that the inequalities presented in
Theorem 1 can be successfully used to obtain other new results.
\begin{acknowledgement}
Computations in this paper were made using Maple software, but they can
also be made (or verified) by a direct approach.
\end{acknowledgement}
\begin{acknowledgement}
The work of the second author was supported by a grant of the Romanian
National Authority for Scientific Research, CNCS-UEFISCDI project number
PN-II-ID-PCE-2011-3-0087.
\end{acknowledgement}
\section{Introduction}
After the success in fabricating monolayer and few-layer graphene\cite{fabrication}, there has been a great deal of activity in researching the
various properties of graphene\cite{review}. This is mainly due to the fact that the low-energy electrons in graphene have unusual electronic properties.
Long ago it was predicted by Wallace\cite{wallace} that the electron located near the hexagonal vertices of the Brillouin zone exhibits a
linear dispersion
relation and $40$ years later Semenoff\cite{semenoff} showed that the low-energy dynamics of the corresponding electron is governed by massless Dirac equation even in the non-relativistic regime. Thus, the fabrication of the monolayer graphene opens a possibility to test various predictions of
quantum electrodynamics (QED) by means of condensed-matter experiments. However, this does not mean that all phenomena predicted by QED can
be realized in graphene-based experiments, because the light velocity $c$ in QED should be replaced by the Fermi velocity $v_F \sim c / 300$. This
results in a large fine-structure constant $\alpha \sim 2$, which implies that only the non-perturbative features of planar QED can
be realized in graphene experiments. Recently, there have been many efforts in this direction\cite{connection}.
Among the many phenomena arising in planar QED, the most interesting issue, at least for us, is the spin-1/2 Aharonov-Bohm (AB)\cite{ab1959} or
Aharonov-Bohm-Coulomb (ABC) problem, which was extensively discussed about two decades ago\cite{spin-ab} because the same problem appeared
in the context of anyonic and cosmic-string theories\cite{cosmic}. The most important issue in this problem is how to treat the
$\delta$-like singular potential generated by an interaction between particle's spin and thin magnetic flux tube. Recently, similar AB and related problems were discussed theoretically\cite{ab-graphene-th} and experimentally\cite{ab-graphene-exp} in the branch of graphene physics. Another closely
related issue in the graphene is Coulomb impurity problem\cite{coulomb}. The interesting fact in this case is that depending on the charge of
impurity there are two regions, subcritical and supercritical, in which the effects of impurity are completely different. Similar phenomenon in
QED was discussed long ago in Ref.\cite{zeldo}.
Other interesting, as yet unobserved phenomena that QED predicts are the Klein paradox and zitterbewegung.
The Klein paradox\cite{klein}, the counterintuitive barrier penetration in the relativistic setting, was re-examined in Ref.\cite{paradox1}.
The authors of Ref.\cite{paradox1} argued that the Klein paradox can be realized using electrostatic barriers in single- and bi-layer graphene. A few years
later it was reported that Klein tunneling was observed by measuring the quantum conductance oscillation and phase-shift pattern in extremely
narrow graphene\cite{paradox2}. The zitterbewegung (ZB)\cite{zitter1}, the trembling motion arising from the interference between positive- and negative-energy states, was also investigated recently in graphene without\cite{zitter2} and with\cite{zitter3} an external magnetic field. The
effect of zitterbewegung in other models has also been discussed recently\cite{zitter4}.
Besides the connection between graphene and QED, much attention has been paid to graphene as a new material for future technology. The most
important application of graphene, at least for us, is the possibility of realizing a quantum computer. Recently, many techniques have been
used, independently or cooperatively, toward realizing a quantum computer. The typical techniques are optical ones, ion traps, NMR, quantum dots,
and superconductors. The current status of this effort is summarized in detail in Ref. \cite{qcs}. The graphene-based quantum
computer is also explored in Ref. \cite{qc1}.
In this paper we will examine the position-momentum and position-velocity uncertainties of the low-energy electrons in
monolayer gapped graphene when the initial wave packet is chosen as a general Gaussian wave packet. Since a Gaussian wave packet, in general,
contains both positive-energy and negative-energy spectra, the expectation values of physical quantities receive
contributions from both the spreading and the zitterbewegung of the packet. Thus, it is of interest to examine the effect of the gap parameter on the
expectation values of various quantities and uncertainties. We will show in this paper that the position-momentum and position-velocity
uncertainties can be kept under control, within the quantum-mechanical laws, if the gap parameter can be chosen freely.
Although this controllability of the uncertainties is interesting on purely theoretical grounds, it is also important
from the viewpoint of quantum computation. A quantum computer\cite{qcs} is a machine which performs quantum computational
processes by making use of quantum-mechanical laws. So far many quantum information processes have been developed, such as quantum
teleportation\cite{teleport}, the factoring algorithm\cite{factoring}, and the search algorithm\cite{search}. All quantum information processes
consist of three stages: preparation of initial states at the initial stage, time evolution of quantum states via various unitary gates at the
intermediate stage, and quantum measurements at the final stage. If the uncertainties, therefore, are large at the final stage, the quantum
measurement can generate fatal errors in the computing processes. For this reason it is important to reduce the uncertainties
as much as possible at the final stage.
This paper is organized as follows. In section II we examine the position-momentum uncertainties in gapped graphene. It is shown that the
uncertainties receive contributions from the spreading and ZB effects of the given wave packet. The uncertainties in gapped graphene are
compared with the corresponding quantities of the $2d$ free Hamiltonian system. In section III we discuss the position-velocity uncertainties
in gapped graphene. Unlike the position uncertainties, the velocity uncertainties are shown to be contributed solely by the ZB effect of the wave
packet. This fact implies that the $t \rightarrow \infty$ limit of the velocity uncertainties coincides with the Fermi velocity $v_F$ regardless of
the choice of the packet. In section IV a brief conclusion is given.
\section{position-momentum uncertainty}
In this section we examine the position-momentum uncertainty in the gapped graphene.
The appropriate Hamiltonian for the low-energy electron near the Dirac point is given by
\begin{eqnarray}
\label{hamil-2-1}
\hat{H}_M = v_F \left( \begin{array}{cc}
M v_F & p_1 - i p_2 \\
p_1 + i p_2 & -M v_F
\end{array} \right)
\end{eqnarray}
where $v_F \sim c / 300$ is the Fermi velocity and $M$ is a gap parameter generated for dynamical
or technical reasons. Theoretically, the most popular mechanism that generates the gap is chiral symmetry breaking\cite{dynamical3}. This
mechanism is similar to dynamical symmetry breaking\cite{DGSB}, which was studied deeply in gauge theories. The bandgap can also be generated
by breaking the sublattice symmetry; this case was experimentally realized by choosing the substrate appropriately\cite{dynamical1}.
In addition, a gap is also generated in graphene nanoribbons\cite{dynamical2}. Both cases are taken into account in the Hamiltonian (\ref{hamil-2-1}).
Although monolayer graphene itself does not have a gap, a bandgap is naturally generated in bilayer graphene\cite{bilayer}. However, we
cannot use the Hamiltonian (\ref{hamil-2-1}) to explore the effect of the gap in bilayer graphene, due to the non-trivial structure of the gap
in the bilayer system. In the terminology of relativistic field theory, the gap parameter $M$ is the mass of the Dirac fermion.
The position operator $\hat{x} (t)$ in the Heisenberg picture can be expressed by $2 \times 2$ matrix from
$\hat{x} (t) = \exp(i \hat{H}_M t / \hbar) \hat{x} (0) \exp(-i \hat{H}_M t / \hbar)$. Explicit calculation shows
\begin{eqnarray}
\label{position1}
\hat{x} (t) = \hat{x} (0) + \left( \begin{array}{cc}
\hat{\Sigma} (p) & \hat{\sigma}_1 (p) + i \hat{\sigma}_2 (p) \\
\hat{\sigma}_1 (p) - i \hat{\sigma}_2 (p) & -\hat{\Sigma} (p)
\end{array} \right),
\end{eqnarray}
where
\begin{eqnarray}
\label{position2}
& &\hat{\Sigma} (p) = \frac{\hbar}{{\bm p}^2 + (M v_F)^2}
\left[ p_2 \sin^2 \theta_M + \frac{(M v_F) p_1}{\sqrt{{\bm p}^2 + (M v_F)^2}} \left(\theta_M - \sin \theta_M \cos \theta_M \right) \right]
\\ \nonumber
& &\hat{\sigma}_1 (p) = \frac{\hbar}{[{\bm p}^2 + (M v_F)^2]^{3/2}}
\left[\theta_M p_1^2 + \sin \theta_M \cos \theta_M \left\{p_2^2 + (M v_F)^2 \right\} \right] \\ \nonumber
& &\hat{\sigma}_2 (p) = \frac{\hbar}{[{\bm p}^2 + (M v_F)^2]^{3/2}}
\left[ p_1 p_2 \left(\sin \theta_M \cos \theta_M - \theta_M \right) + (M v_F) \sqrt{{\bm p}^2 + (M v_F)^2} \sin^2 \theta_M \right]
\end{eqnarray}
and $\theta_M = (v_F t / \hbar) \sqrt{{\bm p}^2 + (M v_F)^2}$. Each operator in Eq.(\ref{position2}) consists of two kinds, one of which is
responsible for ZB phenomena and the other is responsible for the spreading of wave packet.
In order to examine the uncertainty relations we should introduce a wave packet. In this paper we introduce a usual two-dimensional
Gaussian wave packet
\begin{eqnarray}
\label{packet}
|\psi (x, y: 0)\rangle = \frac{d}{2 \pi \sqrt{\pi}} \int d^2 {\bm k} \exp \left[-\frac{d^2}{2} (k_x - \alpha)^2 - \frac{d^2}{2} (k_y - \beta)^2\right]
e^{i {\bm k} \cdot {\bm r}}
\left( \begin{array}{c}
a \\
b
\end{array} \right),
\end{eqnarray}
where real parameters $a$ and $b$ satisfy $a^2 + b^2 = 1$. It is easy to show that $|\psi (x, y: 0)\rangle$ can be decomposed as
\begin{equation}
\label{decompose1}
|\psi (x, y: 0)\rangle = |\psi^p (x, y: 0)\rangle + |\psi^n (x, y: 0)\rangle,
\end{equation}
where $|\psi^p (x, y: 0)\rangle$ and $|\psi^n (x, y: 0)\rangle$ are the positive-energy and negative-energy components of
$|\psi (x, y: 0)\rangle$, respectively. Using Hamiltonian $\hat{H}_M$ it is easy to derive these components and the explicit
expressions are given by
\begin{eqnarray}
\label{decompose2}
& &|\psi^p (x, y: 0)\rangle = \frac{d}{4 \pi \sqrt{\pi}} \int d^2 {\bm k} \exp \left[-\frac{d^2}{2} (k_x - \alpha)^2 - \frac{d^2}{2} (k_y - \beta)^2\right]
e^{i {\bm k} \cdot {\bm r}} \\ \nonumber
& & \hspace{3.0cm} \times
\frac{a k_+ + b (\sqrt{{\bm k}^2 + \lambda_c^{-2}} - \lambda_c^{-1})}{k_+ \sqrt{{\bm k}^2 + \lambda_c^{-2}}}
\left( \begin{array}{c}
\sqrt{{\bm k}^2 + \lambda_c^{-2}} + \lambda_c^{-1} \\
k_+
\end{array} \right) \\ \nonumber
& &|\psi^n (x, y: 0)\rangle = \frac{d}{4 \pi \sqrt{\pi}} \int d^2 {\bm k} \exp \left[-\frac{d^2}{2} (k_x - \alpha)^2 - \frac{d^2}{2} (k_y - \beta)^2\right]
e^{i {\bm k} \cdot {\bm r}} \\ \nonumber
& & \hspace{3.0cm} \times
\frac{a k_+ - b (\sqrt{{\bm k}^2 + \lambda_c^{-2}} + \lambda_c^{-1})}{k_+ \sqrt{{\bm k}^2 + \lambda_c^{-2}}}
\left( \begin{array}{c}
\sqrt{{\bm k}^2 + \lambda_c^{-2}} - \lambda_c^{-1} \\
-k_+
\end{array} \right).
\end{eqnarray}
In Eq. (\ref{decompose2}) $k_{\pm} = k_x \pm i k_y$ and $\lambda_c = \hbar / (M v_F)$. The parameter $\lambda_c$ is a familiar quantity:
it is the Compton wavelength if the Fermi velocity $v_F$ is replaced by the light velocity $c$. In this paper we will refer to
$\lambda_c$ as the Compton wavelength. Thus, the intensities of the positive-energy and negative-energy components are
\begin{eqnarray}
\label{decompose3}
& &P_+ \equiv \langle \psi^p (x, y: 0) | \psi^p (x, y: 0) \rangle = \frac{1}{2} + \Delta P \\ \nonumber
& &P_- \equiv \langle \psi^n (x, y: 0) | \psi^n (x, y: 0) \rangle = \frac{1}{2} - \Delta P = 1 - P_+,
\end{eqnarray}
where
\begin{equation}
\label{decompose4}
\Delta P = \frac{d^2}{2 \pi} \int d^2 {\bm k} \exp \left[-d^2 (k_x - \alpha)^2 - d^2 (k_y - \beta)^2\right]
\frac{\lambda_c^{-1} (a^2 - b^2) + 2 a b k_x}{\sqrt{{\bm k}^2 + \lambda_c^{-2}}}.
\end{equation}
If, therefore, $\alpha = 0$ with $a = b = 1 / \sqrt{2}$, we get $P_+ = P_- = 1/2$. In this case the expectation values of various
operators are summarized in Appendix A. For arbitrary $\alpha$ and $\beta$, however,
$P_{\pm}$ should be computed numerically. Since $|\psi (x, y: 0)\rangle$ has both positive-energy and negative-energy components, the expectation
value of various physical quantities should exhibit the trembling behavior due to the interference of these components as discussed in
Ref.\cite{zitter1,zitter2,zitter3,zitter4}.
Using Eq.(\ref{position1}) and Eq.(\ref{packet}) it is straightforward to show
\begin{equation}
\label{position3}
\langle x \rangle (t) \equiv \bra{\psi (x, y: 0)} \hat{x} (t) \ket{\psi (x, y: 0)}
= \frac{d^2}{\pi} \int d^2 {\bm k} \exp \left[-d^2 (k_x - \alpha)^2 - d^2 (k_y - \beta)^2\right] \left(X_S + X_{ZB} \right),
\end{equation}
where
\begin{eqnarray}
\label{position4}
& &X_S = \frac{(v_F t)}{{\bm k}^2 + \lambda_c^{-2}} \left[(a^2 - b^2) \lambda_c^{-1} k_x + 2 a b k_x^2 \right] \\ \nonumber
& &X_{ZB} = \frac{a^2 - b^2}{{\bm k}^2 + \lambda_c^{-2}} \left[ k_y \sin^2 \theta - \frac{\lambda_c^{-1} k_x}{\sqrt{{\bm k}^2 + \lambda_c^{-2}}}
\sin \theta \cos \theta \right] + \frac{2 a b}{({\bm k}^2 + \lambda_c^{-2})^{3/2}} \sin \theta \cos \theta (k_y^2 + \lambda_c^{-2})
\end{eqnarray}
and $\theta = (v_F t) \sqrt{{\bm k}^2 + \lambda_c^{-2}}$.
As remarked before $X_S$ and $X_{ZB}$ are responsible for the spreading and trembling motion in the
time evolution of the packet, respectively. It is worthwhile noting that the ${\bm k}$-integration in Eq.(\ref{position3}) can be performed
explicitly by making use of the binomial expansion. Finally, then, $\langle x \rangle (t)$ is represented in terms of the Hermite polynomials.
Instead of integral representation, however, $\langle x \rangle (t)$ has triple summations. The explicit expressions in terms of the Hermite
polynomials for various expectation values derived in this paper are summarized in Appendix B.
Similar calculation procedure derives $\langle y \rangle (t)$ as
\begin{equation}
\label{position5}
\langle y \rangle (t) \equiv \bra{\psi (x, y: 0)} \hat{y} (t) \ket{\psi (x, y: 0)}
= \frac{d^2}{\pi} \int d^2 {\bm k} \exp \left[-d^2 (k_x - \alpha)^2 - d^2 (k_y - \beta)^2\right] \left(Y_S + Y_{ZB} \right),
\end{equation}
where
\begin{eqnarray}
\label{position6}
& &Y_S = \frac{(v_F t)}{{\bm k}^2 + \lambda_c^{-2}} \left[(a^2 - b^2) \lambda_c^{-1} k_y + 2 a b k_x k_y \right] \\ \nonumber
& &Y_{ZB} = \frac{\sin^2 \theta}{{\bm k}^2 + \lambda_c^{-2}} \left[ -(a^2 - b^2) k_x + 2 a b \lambda_c^{-1} \right]
- \frac{\sin \theta \cos \theta}{({\bm k}^2 + \lambda_c^{-2})^{3/2}} \left[ (a^2 - b^2) \lambda_c^{-1} k_y + 2 a b k_x k_y \right].
\end{eqnarray}
Of course, $Y_S$ and $Y_{ZB}$ represent the spreading and ZB motion of the wave packet in $y$-direction.
In order to confirm the validity of our calculation we consider the case of zero gap ($\lambda_c^{-1} \rightarrow 0$), which was considered in
Ref.\cite{zitter2}. For simplicity, we choose $\alpha = 0$, $a=1$, and $b=0$. Then, $Y_S = 0$ and
$Y_{ZB} = -\sin^2 \theta k_x / {\bm k}^2$, which makes $\langle y \rangle (t) = 0$ due to $k_x$-integration. In this case we also get
$X_S = 0$ and $X_{ZB} = \sin^2 \theta k_y / {\bm k}^2$. Using $\int_0^{2\pi} d\theta \sin \theta e^{a \sin \theta} = 2\pi I_1 (a)$,
where $I_{\nu} (z)$ is a modified Bessel function, one can show directly
\begin{equation}
\label{gapless1}
\langle x \rangle (t) = \frac{1}{2\beta} \left(1 - e^{-\beta^2 d^2} \right) - d e^{-\beta^2 d^2}
\int_{0}^{\infty} dq e^{-q^2} \cos \left(\frac{2 v_F t}{d} q \right) I_1 (2 \beta d q),
\end{equation}
which exactly coincides with the second reference of Ref.\cite{zitter2}.
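The Bessel identity invoked above, $\int_0^{2\pi} \sin \theta\, e^{a \sin \theta} d\theta = 2\pi I_1 (a)$, is easy to verify independently: the trapezoid rule over a full period is spectrally accurate, and $I_1$ can be summed from its power series. A minimal check (function names are ours):

```python
import math

# Left-hand side by the trapezoid rule on the full period [0, 2*pi].
def lhs_integral(a, N=4096):
    h = 2.0 * math.pi / N
    return h * sum(math.sin(k * h) * math.exp(a * math.sin(k * h))
                   for k in range(N))

# I_1(a) from its power series: sum_m (a/2)^{2m+1} / (m! (m+1)!)
def bessel_i1(a, terms=40):
    return sum((a / 2.0) ** (2 * m + 1) /
               (math.factorial(m) * math.factorial(m + 1))
               for m in range(terms))
```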
Before we explore the uncertainty properties it is interesting to examine the limiting behaviors of $\langle x \rangle (t)$
and $\langle y \rangle (t)$. In the $t \rightarrow 0$ limit some combinations of the spreading and trembling motion become dominant
and the limiting behaviors reduce to
\begin{eqnarray}
\label{limit1}
& &\lim_{t \rightarrow 0} \langle x \rangle (t) = 2 a b (v_F t) + O \left((v_F t)^2 \right) \\ \nonumber
& &\lim_{t \rightarrow 0} \langle y \rangle (t) = (v_F t)^2 \left[-(a^2 - b^2) \alpha + 2 a b \lambda_c^{-1} \right] +
O \left((v_F t)^3 \right).
\end{eqnarray}
It is interesting to note that the $t \rightarrow 0$ limiting behaviors of $\langle x \rangle (t)$ and $\langle y \rangle (t)$ are
completely different because their orders of $v_F t$ are different from each other. Furthermore, the dominant terms of
$\langle x \rangle (t)$ come from the off-diagonal components of $\hat{x} (t)$ while those of $\langle y \rangle (t)$ are
contributed from all components of $\hat{y} (t)$. In the $t \rightarrow \infty$ limit the dominant terms in
$\langle x \rangle (t)$ and $\langle y \rangle (t)$ are contributed from spreading terms and their expressions are
\begin{eqnarray}
\label{limit2}
& &\lim_{t \rightarrow \infty} \langle x \rangle (t) = \frac{d^2 (v_F t)}{\pi} \left[ (a^2 - b^2) \lambda_c^{-1} J_{1,0}
+ 2 a b J_{2,0} \right] \\ \nonumber
& &\lim_{t \rightarrow \infty} \langle y \rangle (t) = \frac{d^2 (v_F t)}{\pi} \left[ (a^2 - b^2) \lambda_c^{-1} J_{0,1}
+ 2 a b J_{1,1} \right],
\end{eqnarray}
where
\begin{equation}
\label{limit3}
J_{m,n} \equiv \int d^2 {\bm k} \exp \left[-d^2 (k_x - \alpha)^2 - d^2 (k_y - \beta)^2\right]
\frac{k_x^m k_y^n}{{\bm k}^2 + \lambda_c^{-2}}.
\end{equation}
In order to examine the position uncertainty $\Delta x (t)$ we should derive $\hat{x}^2 (t)$, which reduces to
\begin{equation}
\label{psquare1}
\hat{x}^2 (t) = \left[ \hat{x}^2 (0) + \hat{\Sigma}^2 (p) + \hat{\sigma}_1^2 (p) + \hat{\sigma}_2^2 (p) \right] \openone +
\left\{ \hat{x} (0), \hat{x} (t) - \hat{x} (0) \right\},
\end{equation}
where $\left\{ A, B \right\} \equiv A B + B A$. Since it is straightforward to show
$\bra{\psi (x, y: 0)} \left\{ \hat{x} (0), \hat{Z} (p) \right\} \ket{\psi (x, y: 0)} = 0$ with $\hat{Z} = \hat{\Sigma}$, $\hat{\sigma}_1$, or
$\hat{\sigma}_2$, one can show directly
\begin{equation}
\label{psquare2}
\langle x^2 \rangle (t) = \frac{d^2}{2} + \frac{d^2}{\pi}
\int d^2 {\bm k} \exp \left[-d^2 (k_x - \alpha)^2 - d^2 (k_y - \beta)^2\right] \left(\tilde{X}_S + \tilde{X}_{ZB} \right),
\end{equation}
where
\begin{equation}
\label{psquare3}
\tilde{X}_S = (v_F t)^2 \frac{k_x^2}{{\bm k}^2 + \lambda_c^{-2}} \hspace{2.0cm}
\tilde{X}_{ZB} = \sin^2 \theta \frac{k_y^2 + \lambda_c^{-2}}{({\bm k}^2 + \lambda_c^{-2})^2}.
\end{equation}
Similar calculation shows
\begin{equation}
\label{psquare4}
\langle y^2 \rangle (t) = \frac{d^2}{2} + \frac{d^2}{\pi}
\int d^2 {\bm k} \exp \left[-d^2 (k_x - \alpha)^2 - d^2 (k_y - \beta)^2\right] \left(\tilde{Y}_S + \tilde{Y}_{ZB} \right),
\end{equation}
where $\tilde{Y}_S$ and $\tilde{Y}_{ZB}$ are obtained from $\tilde{X}_S$ and $\tilde{X}_{ZB}$ by interchanging $k_x$ with $k_y$.
For the case of zero gap ($\lambda_c^{-1} \rightarrow 0$) with $\alpha = 0$, $a = 1$, and $b=0$ one can show straightforwardly
\begin{eqnarray}
\label{gapless2}
& &\langle x^2 \rangle(t) = \frac{d^2}{2} + \frac{(v_F t)^2}{2 \beta^2 d^2} \left(1 - e^{-\beta^2 d^2} \right) \\ \nonumber
& & \hspace{1.5cm}
+ d^2 e^{-\beta^2 d^2} \int_0^{\infty} \frac{dq}{q^2} e^{-q^2}
\left[ 1 - \cos \left( \frac{2 v_F t}{d} q \right) \right]
\left[ q I_0 (2 \beta d q) - \frac{1}{2 \beta d} I_1 (2 \beta d q) \right] \\ \nonumber
& &\langle y^2 \rangle(t) = \frac{d^2}{2} + (v_F t)^2
\left[ e^{-\beta^2 d^2 / 2} \left( \sin \frac{\beta^2 d^2}{2} + \cos \frac{\beta^2 d^2}{2} \right) - \frac{1}{2 \beta^2 d^2}
\left(1 - e^{-\beta^2 d^2} \right) \right] \\ \nonumber
& & \hspace{1.5cm}
+ \frac{d}{2 \beta} e^{-\beta^2 d^2} \int_0^{\infty} \frac{dq}{q^2} e^{-q^2} \left[ 1 - \cos \left( \frac{2 v_F t}{d} q \right) \right]
I_1 (2 \beta d q).
\end{eqnarray}
Eq. (\ref{gapless1}) and Eq. (\ref{gapless2}) can be used to compute the uncertainties $\Delta x$ and $\Delta y$ for the case of zero gap.
In the $t \rightarrow 0$ limit $\langle x^2 \rangle (t)$ and $\langle y^2 \rangle (t)$ exhibit the same behavior,
\begin{equation}
\label{limit4}
\lim_{t \rightarrow 0} \langle x^2 \rangle (t) = \lim_{t \rightarrow 0} \langle y^2 \rangle (t) = \frac{d^2}{2} + (v_F t)^2 +
O \left( (v_F t)^3 \right),
\end{equation}
while the $t \rightarrow \infty$ limits of $\langle x^2 \rangle (t)$ and $\langle y^2 \rangle (t)$ reduce to
\begin{equation}
\label{limit5}
\lim_{t \rightarrow \infty} \langle x^2 \rangle (t) = \frac{d^2}{2} + \frac{d^2}{\pi} (v_F t)^2 J_{2,0}
\hspace{1.0cm}
\lim_{t \rightarrow \infty} \langle y^2 \rangle (t) = \frac{d^2}{2} + \frac{d^2}{\pi} (v_F t)^2 J_{0,2}.
\end{equation}
\begin{figure}[ht!]
\begin{center}
\includegraphics[height=6.2cm]{fig1a.eps}
\includegraphics[height=6.2cm]{fig1b.eps}
\includegraphics[height=6.2cm]{fig1c.eps}
\includegraphics[height=6.2cm]{fig1d.eps}
\caption[fig1]{(Color online) The time-dependence of $\Delta x \Delta p_x / \hbar$ for
$\lambda_c^{-1} = 6 (1 / nm)$ (a), $\lambda_c^{-1} = 2 (1 / nm)$ (b),
and $\lambda_c^{-1} = 0.14 (1 / nm)$ (c). The black solid line in each figure is the corresponding value $(\Delta x \Delta p_x / \hbar)_{free}$ for
the usual two-dimensional free Hamiltonian $\hat{H}_{free}$. As panels (a), (b), and (c) show, the uncertainty
$\Delta x \Delta p_x$ in graphene is larger (or smaller) than $(\Delta x \Delta p_x)_{free}$ in the entire range of time when
$\lambda_c^{-1} > \mu_2$ (or $\lambda_c^{-1} < \mu_1$). When $\mu_1 < \lambda_c^{-1} < \mu_2$, $\Delta x \Delta p_x$ is larger and smaller than
$(\Delta x \Delta p_x)_{free}$ in the $t \rightarrow 0$ and $t \rightarrow \infty$ limits, respectively. (d) The critical value
$\mu_2$ increases with decreasing $\alpha$, and eventually goes to $\infty$ at $\alpha = 0$.}
\end{center}
\end{figure}
Since it is easy to show $\Delta p_x = \Delta p_y = \hbar / (\sqrt{2} d)$, we plot the time-dependence of the dimensionless quantity
$\Delta x \Delta p_x / \hbar$ in Fig. 1. In the figure we choose $a=0.9$, $d=8 (nm)$, $\alpha = 0.04 (1 / nm)$, and
$\beta = 1.2 (1 / nm)$. We also choose the inverse Compton wavelength as $6 (1/nm)$ (Fig. 1a), $2 (1 / nm)$ (Fig. 1b), and
$0.14 (1/ nm)$ (Fig. 1c). The black solid line in (a), (b), and (c) is $(\Delta x \Delta p_x / \hbar)_{free} = \sqrt{(1/2)^2 + (\lambda_c v_F t / 2 d^2)^2}$,
which is the corresponding value for the usual non-relativistic free Hamiltonian $\hat{H}_{free} = (p_1^2 + p_2^2) / 2 M$. The time axis is in units of femtoseconds.
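For reference, the free-particle benchmark quoted above is just the textbook spreading of a Gaussian packet of width $d$ ($\Delta x_0^2 = d^2/2$, $\Delta p_x = \hbar / (\sqrt{2} d)$), with the mass expressed through the Compton wavelength, $M = \hbar / (\lambda_c v_F)$. A small sketch in units $\hbar = v_F = 1$ (an illustrative convention) checks that the explicit packet-spreading formula reproduces the closed form:

```python
import math

def free_uncertainty(t, d, lam, hbar=1.0, v_f=1.0):
    """(Delta x Delta p_x / hbar)_free for H_free = p^2/2M with M = hbar/(lam*v_f).

    Built from the standard Gaussian-packet results
      Delta x(t)^2 = d^2/2 + (hbar*t)^2 / (2 M^2 d^2),  Delta p_x = hbar/(sqrt(2)*d).
    """
    m = hbar / (lam * v_f)
    dx = math.sqrt(d * d / 2.0 + (hbar * t) ** 2 / (2.0 * m * m * d * d))
    dp = hbar / (math.sqrt(2.0) * d)
    return dx * dp / hbar

def free_uncertainty_closed(t, d, lam, v_f=1.0):
    # closed form quoted in the text: sqrt((1/2)^2 + (lam*v_f*t / (2 d^2))^2)
    return math.sqrt(0.25 + (lam * v_f * t / (2.0 * d * d)) ** 2)
```

At $t = 0$ both forms give the minimum-uncertainty value $1/2$, and they stay equal at all later times.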
As Fig. 1 shows, the uncertainty $\Delta x \Delta p_x$ has several distinct properties. First, it receives contributions from both the
spreading and the ZB motion of the wave packet. The spreading motion dominates at large time scales. With increasing inverse
Compton wavelength, the overall growth rate of $\Delta x \Delta p_x$ arising from the spreading of the packet decreases drastically.
This can be understood by analogy with relativistic field theories: with increasing $M$, relativistic theories approach
non-relativistic Galilean theories, where the uncertainty is minimized. At short time scales $\Delta x \Delta p_x$
oscillates rapidly due to the ZB effect. The amplitude of the oscillation increases with decreasing $\lambda_c^{-1}$. This is mainly due to
the fact that the ZB effect dominates when the energy gap $\Delta E$ between the positive and negative energy spectra decreases.
The frequency, however, increases rapidly with increasing $\lambda_c^{-1}$ because of the relation $\omega = \Delta E / \hbar$.
When $\lambda_c^{-1}$ is larger than a critical value $\mu_2$,
$\Delta x \Delta p_x$ becomes larger than $(\Delta x \Delta p_x)_{free}$, as Fig. 1a indicates. When, however, $\lambda_c^{-1}$ is smaller than
a different critical value $\mu_1$, it is smaller than $(\Delta x \Delta p_x)_{free}$, as Fig. 1c shows.
In the intermediate range $\mu_1 < \lambda_c^{-1} < \mu_2$, $\Delta x \Delta p_x$ is larger
and smaller than $(\Delta x \Delta p_x)_{free}$ in the $t \rightarrow 0$ and $t \rightarrow \infty$ limits, respectively, as Fig. 1b shows.
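The short-time oscillation can be made quantitative. For the gapped spectrum $E_\pm = \pm \hbar v_F \sqrt{{\bm k}^2 + \lambda_c^{-2}}$, the ZB frequency at the packet center ${\bm k} = (\alpha, \beta)$ is $\omega = \Delta E / \hbar = 2 v_F \sqrt{\alpha^2 + \beta^2 + \lambda_c^{-2}}$. A rough estimate, taking $v_F \approx 10^6\, m/s$ as a typical graphene value (an assumption, since the text leaves $v_F$ unspecified) and the parameters of Fig. 1b:

```python
import math

V_F = 1.0e6            # Fermi velocity in m/s (typical graphene value, an assumption)
NM = 1.0e-9            # one nanometre in metres

alpha, beta, lam_inv = 0.04, 1.2, 2.0   # wave numbers in 1/nm, as in Fig. 1b

# ZB frequency omega = Delta E / hbar = 2 v_F sqrt(k^2 + lam_inv^2)
k2 = alpha ** 2 + beta ** 2                              # (1/nm)^2
omega = 2.0 * V_F * math.sqrt(k2 + lam_inv ** 2) / NM    # rad/s
period_fs = 2.0 * math.pi / omega * 1.0e15               # femtoseconds
```

The resulting period of about $1.3$ fs is consistent with the femtosecond-scale oscillations visible in Fig. 1.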
Using Eqs.~(\ref{limit1}) and (\ref{limit2}) and several other limiting values, one can derive the critical value $\mu_1$ explicitly, and
$\mu_2$ implicitly, as
\begin{equation}
\label{critical1}
\mu_1 = \frac{1}{\sqrt{2 d^2 (1 - 4 a^2 b^2)}}, \hspace{1.5cm} \gamma (\lambda_c^{-1}) \bigg|_{\lambda_c^{-1} = \mu_2} = 1,
\end{equation}
where
\begin{equation}
\label{critical2}
\gamma (\lambda_c^{-1}) = \frac{2 \lambda_c^{-2} d^4}{\pi} \left[J_{2,0} - \frac{d^2}{\pi}\left\{ (a^2 - b^2) \lambda_c^{-1} J_{1,0} + 2 a b J_{2,0} \right\}^2 \right].
\end{equation}
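Since $\gamma$ depends on $\lambda_c^{-1}$ both through its prefactor and through the integrals $J_{m,n}$, the condition $\gamma(\mu_2) = 1$ has to be solved numerically. A minimal Python sketch follows; the grid integrator, the bracket $[0.2, 6]$, and the parameter values $a = 0.9$, $d = 8 (nm)$, $\alpha = 0.12 (1/nm)$ (i.e. $n = 10$), $\beta = 1.2 (1/nm)$ are illustrative assumptions, with $b = \sqrt{1 - a^2}$ from the normalization of the packet weights.

```python
import math

def J(m, n, d, alpha, beta, lam_inv, half_width=0.6, steps=120):
    # midpoint-rule evaluation of the Gaussian integrals J_{m,n} of Eq. (limit3)
    h = 2.0 * half_width / steps
    kxs = [alpha - half_width + (i + 0.5) * h for i in range(steps)]
    kys = [beta - half_width + (j + 0.5) * h for j in range(steps)]
    gys = [math.exp(-d * d * (ky - beta) ** 2) for ky in kys]
    total = 0.0
    for kx in kxs:
        gx = math.exp(-d * d * (kx - alpha) ** 2)
        for ky, gy in zip(kys, gys):
            total += gx * gy * kx ** m * ky ** n / (kx * kx + ky * ky + lam_inv ** 2)
    return total * h * h

def gamma(lam_inv, a=0.9, d=8.0, alpha=0.12, beta=1.2):
    # Eq. (critical2); b = sqrt(1 - a^2) by normalization of the spinor weights
    b = math.sqrt(1.0 - a * a)
    j10 = J(1, 0, d, alpha, beta, lam_inv)
    j20 = J(2, 0, d, alpha, beta, lam_inv)
    bracket = j20 - (d * d / math.pi) * ((a * a - b * b) * lam_inv * j10
                                         + 2.0 * a * b * j20) ** 2
    return 2.0 * lam_inv ** 2 * d ** 4 / math.pi * bracket

def mu2(lo=0.2, hi=6.0, iters=30):
    # bisection on gamma(mu_2) = 1, assuming a single crossing inside [lo, hi]
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if gamma(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For these parameters the root should lie close to the $n = 10$ value of $\mu_2$ quoted in Table I.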
The $\lambda_c^{-1}$-dependence of $\gamma (\lambda_c^{-1})$ is plotted in Fig. 1d when $a = 0.9$, $d = 8 (nm)$, $\alpha = 1.2 / n (1/nm)$,
and $\beta = 1.2 (1/nm)$ for various $n$. As this figure indicates, the critical value $\mu_2$ increases with increasing $n$, and eventually
$\mu_2 = \infty$ when $\alpha = 0$.
\begin{figure}[ht!]
\begin{center}
\includegraphics[height=6.2cm]{fig2a.eps}
\includegraphics[height=6.2cm]{fig2b.eps}
\includegraphics[height=6.2cm]{fig2c.eps}
\includegraphics[height=6.2cm]{fig2d.eps}
\caption[fig2]{(Color online) The time-dependence of $\Delta y \Delta p_y / \hbar$ for
$\lambda_c^{-1} =8 (1 / nm)$ (a), $\lambda_c^{-1} = 2 (1 / nm)$ (b),
and $\lambda_c^{-1} = 0.04 (1 / nm)$ (c). The black solid line in each figure is the corresponding value $(\Delta y \Delta p_y / \hbar)_{free}$.
As panels (a), (b), and (c) show, the uncertainty
$\Delta y \Delta p_y$ in graphene exhibits a similar behavior to $\Delta x \Delta p_x$. However, the critical values $\mu_1$ and $\mu_2$ are
changed into $\nu_1$ and $\nu_2$. (d) The critical value
$\nu_2$ increases with decreasing $\beta$, and eventually goes to $\infty$ at $\beta = 0$.}
\end{center}
\end{figure}
The dimensionless uncertainty $\Delta y \Delta p_y / \hbar$ is plotted in Fig. 2 when $a=0.9$, $d = 8 (nm)$, $\alpha = 1.2 (1/nm)$ and
$\beta = 0.04 (1/nm)$. We also choose $\lambda_c^{-1}$ as $8 (1/nm)$ (Fig. 2a), $2 (1 / nm)$ (Fig. 2b), and $0.08 (1/nm)$ (Fig. 2c).
We plot $(\Delta y \Delta p_y / \hbar)_{free}$ together for comparison. As Fig. 2 shows, $\Delta y \Delta p_y$ exhibits a similar behavior
to $\Delta x \Delta p_x$. However, the critical values $\mu_1$ and $\mu_2$ are changed into $\nu_1$ and $\nu_2$, which reduce to
\begin{equation}
\label{critical3}
\nu_1 = \frac{1}{\sqrt{2} d}, \hspace{1.5cm} \delta (\lambda_c^{-1}) \bigg|_{\lambda_c^{-1} = \nu_2} = 1,
\end{equation}
where
\begin{equation}
\label{critical4}
\delta (\lambda_c^{-1}) = \frac{2 \lambda_c^{-2} d^4}{\pi} \left[J_{0,2} - \frac{d^2}{\pi}\left\{ (a^2 - b^2) \lambda_c^{-1} J_{0,1} + 2 a b J_{1,1} \right\}^2 \right].
\end{equation}
The $\lambda_c^{-1}$-dependence of $\delta (\lambda_c^{-1})$ is plotted in Fig. 2d when $a = 0.9$, $d = 8 (nm)$, $\alpha = 1.2 (1/nm)$,
and $\beta = 1.2 / n (1/nm)$ for various $n$. As this figure indicates, the critical value $\nu_2$ increases with increasing $n$, and eventually
goes to $\infty$ when $\beta = 0$.
\section{Position-velocity uncertainty}
In this section we discuss the position-velocity uncertainties \cite{po-vel}, which are completely different from the position-momentum
uncertainties because ${\bm p} \neq M {\bm v}$. The velocity operator
$\hat{v}_x (t)$ is defined as $\exp \left(i \hat{H}_M t / \hbar \right) \hat{v}_x (0) \exp \left(-i \hat{H}_M t / \hbar \right)$, where
$\hat{v}_x (0) = \partial \hat{H}_M / \partial p_1$. This operator is easily constructed from $\hat{x} (t)$ by making use of
the Ehrenfest theorem \cite{schiff} $d \hat{x} (t) / d t = (i / \hbar) \exp \left(i \hat{H}_M t / \hbar \right) [\hat{H}_M, \hat{x} (0)] \exp \left(-i \hat{H}_M t / \hbar \right) = \hat{v}_x (t)$. Then, the final expression of $\hat{v}_x (t)$ is
\begin{eqnarray}
\label{velocity1}
\hat{v}_x (t) = \left( \begin{array}{cc}
\hat{U} (p) & \hat{u}_1 (p) + i \hat{u}_2 (p) \\
\hat{u}_1 (p) - i \hat{u}_2 (p) & -\hat{U} (p)
\end{array} \right),
\end{eqnarray}
where
\begin{eqnarray}
\label{velocity2}
& &\hat{U} (p) = v_F \left[ \frac{2 p_2}{\sqrt{{\bm p}^2 + (M v_F)^2}} \sin \theta_M \cos \theta_M +
\frac{2 (M v_F) p_1}{{\bm p}^2 + (M v_F)^2} \sin^2 \theta_M \right] \\ \nonumber
& &\hat{u}_1 (p) = v_F \left[ \cos^2 \theta_M + \frac{p_1^2 - p_2^2 - (M v_F)^2}{{\bm p}^2 + (M v_F)^2} \sin^2 \theta_M \right]
\\ \nonumber
& &\hat{u}_2 (p) = v_F \left[- \frac{2 p_1 p_2}{{\bm p}^2 + (M v_F)^2} \sin^2 \theta_M + \frac{2 (M v_F)}{\sqrt{{\bm p}^2 + (M v_F)^2}}
\sin \theta_M \cos \theta_M \right].
\end{eqnarray}
Unlike the position operators $\hat{x} (t)$ and $\hat{y} (t)$, the velocity operator $\hat{v}_x (t)$ does not have a spreading
term. This is due to the fact that the spreading term in the position operators is linear in time. Another remarkable property of
$\hat{v}_x (t)$ is that $\hat{v}_x^2 (t)$ is simply $v_F^2$ times the identity operator $\openone$. Combining these two properties one can easily
conjecture $\lim_{t \rightarrow \infty} \Delta v_x = v_F$ regardless of the choice of the wave packet, because the ZB term in
$\hat{v}_x (t)$ oscillates with very high frequency in this limit and is therefore canceled out in the time average.
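The property $\hat{v}_x^2 (t) = v_F^2 \openone$ can be verified directly from Eq. (\ref{velocity1}): a Hermitian matrix of this form squares to $(\hat{U}^2 + \hat{u}_1^2 + \hat{u}_2^2) \openone$, and this combination collapses to $v_F^2$ for arbitrary momenta, mass, and $\theta_M$. A quick numerical check in Python (arbitrary illustrative values, $v_F = 1$):

```python
import math

def vx_matrix(p1, p2, mvf, theta, v_f=1.0):
    """2x2 velocity matrix of Eq. (velocity1); mvf stands for M*v_F, theta for theta_M."""
    P2 = p1 * p1 + p2 * p2 + mvf * mvf
    P = math.sqrt(P2)
    s, c = math.sin(theta), math.cos(theta)
    U = v_f * (2.0 * p2 / P * s * c + 2.0 * mvf * p1 / P2 * s * s)
    u1 = v_f * (c * c + (p1 * p1 - p2 * p2 - mvf * mvf) / P2 * s * s)
    u2 = v_f * (-2.0 * p1 * p2 / P2 * s * s + 2.0 * mvf / P * s * c)
    return [[U, u1 + 1j * u2], [u1 - 1j * u2, -U]]

def matmul2(A, B):
    # plain 2x2 complex matrix multiplication
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

V = vx_matrix(0.37, -1.21, 0.83, 2.4)
V2 = matmul2(V, V)   # should equal v_F^2 times the identity
```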
\begin{figure}[ht!]
\begin{center}
\includegraphics[height=6.0cm]{fig3a.eps}
\includegraphics[height=6.0cm]{fig3b.eps}
\includegraphics[height=6.0cm]{fig3c.eps}
\caption[fig3]{(Color online) The time-dependence of $\Delta x \Delta v_x / d v_F$ for $\lambda_c^{-1} = 0.09 (1 / nm)$ (a),
$\lambda_c^{-1} = 0.14 (1 / nm)$ (b),
and $\lambda_c^{-1} = 0.5 (1 / nm)$ (c). The black dotted line in each figure is the corresponding value $(\Delta x \Delta v_x / d v_F)_{free}$.
As panels (a), (b), and (c) show, the uncertainty
$\Delta x \Delta v_x$ in graphene is larger (or smaller) than $(\Delta x \Delta v_x)_{free}$ depending on the gap parameter $\lambda_c^{-1}$.
One can show explicitly that $\lim_{t \rightarrow 0} \Delta x \Delta v_x < (\Delta x \Delta v_x)_{free}$ if $\lambda_c^{-1} < \mu_1$ and $\lim_{t \rightarrow \infty} \Delta x \Delta v_x > (\Delta x \Delta v_x)_{free}$ if
$\lambda_c^{-1} > \mu_{2*}$, where $\mu_{2*}$ is defined by $\gamma (\lambda_c^{-1} = \mu_{2*}) = 1 / \left(2 (\mu_{2*} d)^2\right)$.}
\end{center}
\end{figure}
The expectation values $\langle v_x \rangle (t)$ and $\langle v_x^2 \rangle (t)$ with the wave packet (\ref{packet}) can be straightforwardly
computed by making use of Eq. (\ref{velocity1}). As expected, the resulting $\Delta v_x (t)$ has only a trembling motion and approaches
$v_F$ in the $t \rightarrow \infty$ limit. The dimensionless position-velocity uncertainty $\Delta x \Delta v_x / d v_F$ is plotted in Fig. 3 for
$\lambda_c^{-1} = 0.09 (1 / nm)$ (Fig. 3a), $\lambda_c^{-1} = 0.14 (1 / nm)$ (Fig. 3b), and $\lambda_c^{-1} = 0.5 (1 / nm)$ (Fig. 3c) when
$a = 0.9$, $d = 8 (nm)$, $\alpha = 0.04 (1 / nm)$ and $\beta = 1.2 (1 / nm)$. The horizontal axis is time in femtoseconds. The black dotted
line is the corresponding value $(\Delta x \Delta v_x)_{free} / d v_F$, where
$(\Delta x \Delta v_x)_{free} = \sqrt{\lambda_c^2 v_F^2 / 4 + \lambda_c^4 v_F^4 t^2 / (4 d^4)}$ is the position-velocity uncertainty for $\hat{H}_{free}$. The overall increasing behavior of $\Delta x \Delta v_x$ is solely due to $\Delta x$
because $\Delta v_x$ does not have its own spreading term. As Fig. 3 shows, $\Delta x \Delta v_x$ can be smaller or larger than
$(\Delta x \Delta v_x)_{free}$ depending on the gap parameter $\lambda_c$. In order to compare $\Delta x \Delta v_x$ with
$(\Delta x \Delta v_x)_{free}$ more accurately we compute its limiting values at $t \rightarrow 0$ and $t \rightarrow \infty$. Then,
it is easy to show $\lim_{t \rightarrow 0} \Delta x \Delta v_x < (\Delta x \Delta v_x)_{free}$ if $\lambda_c^{-1} < \mu_1$, where $\mu_1$ is
defined at Eq.(\ref{critical1}), and $\lim_{t \rightarrow \infty} \Delta x \Delta v_x > (\Delta x \Delta v_x)_{free}$ if
$\lambda_c^{-1} > \mu_{2*}$, where $\mu_{2*}$ is defined by $\gamma (\lambda_c^{-1} = \mu_{2*}) = 1 / \left(2 (\mu_{2*} d)^2\right)$.
The critical values $\mu_1$, $\mu_2$, and $\mu_{2*}$ are given in Table I when $d = 8 (nm)$, $\alpha = 1.2 / n (1 / nm)$, $\beta = 1.2 (1 / nm)$,
and $a = 0.9$ or $0.7$. The reason for this choice of $a$ is that while the diagonal components of the various operators contribute
dominantly to the uncertainty relations at $a = 0.9 \approx 1$, the off-diagonal components become more important at $a = 0.7 \approx 1 / \sqrt{2}$.
As expected from Fig. 1d, $\mu_2$ increases with increasing $n$, and eventually goes to $\infty$ at $\alpha = 0$. The other critical value
$\mu_{2*}$ also increases with increasing $n$, but its rate of increase is very small compared to that of $\mu_2$, and it
converges to $0.332$ in the $n \rightarrow \infty$ limit.
\begin{center}
{\large{Table I}}: Critical values for $\Delta x \Delta p_x$ and $\Delta x \Delta v_x$ when $ d = 8 (nm)$, $\alpha = 1.2 / n (1 / nm)$ and
$\beta = 1.2 (1 / nm)$.
\end{center}
\begin{center}
\begin{tabular}{c|c|cccccc} \hline \hline
& $a$ & $n=10$ & $n=20$ & $n=30$ & $n=40$ & $n=50$ & $n=\infty$ \\ \hline
$\mu_1\ (1/nm)$ & $0.9$ & $0.143$ & $0.143$ & $0.143$ & $0.143$ & $0.143$ & $0.143$ \\ \cline{2-8}
& $0.7$ & $4.42$ & $4.42$ & $4.42$ & $4.42$ & $4.42$ & $4.42$ \\ \hline
$\mu_2\ (1/nm)$ & $0.9$ & $1.03$ & $2.24$ & $3.47$ & $4.69$ & $5.90$ & $\infty$ \\ \cline{2-8}
& $0.7$ & $0.90$ & $1.79$ & $2.68$ & $3.58$ & $4.47$ & $\infty$ \\ \hline
$\mu_{2*}\ (1/nm)$ & $0.9$ & $0.257$ & $0.303$ & $0.318$ & $0.324$ & $0.327$ & $0.332$ \\ \cline{2-8}
& $0.7$ & $0.256$ & $0.302$ & $0.317$ & $0.323$ & $0.326$ & $0.332$ \\ \hline \hline
\end{tabular}
\\
\end{center}
\vspace{0.5cm}
Following a similar calculation procedure one can plot the time-dependence of the dimensionless quantity $\Delta y \Delta v_y / (d v_F)$.
Although the time-dependence of the uncertainties is not plotted in this paper, $\Delta y \Delta v_y$ exhibits a similar behavior to
$\Delta x \Delta v_x$. However, the critical values $\mu_1$, $\mu_2$, and $\mu_{2*}$ are changed into $\nu_1$, $\nu_2$, and $\nu_{2*}$, whose explicit values
are given in Table II.
\begin{center}
{\large{Table II}}: Critical values for $\Delta y \Delta p_y$ and $\Delta y \Delta v_y$ when $ d = 8 (nm)$, $\alpha = 1.2 (1 / nm)$ and
$\beta = 1.2 / n (1 / nm)$.
\end{center}
\begin{center}
\begin{tabular}{c|c|cccccc} \hline \hline
& $a$ & $n=10$ & $n=20$ & $n=30$ & $n=40$ & $n=50$ & $n=\infty$ \\ \hline
$\nu_1\ (1/nm)$ & $0.9$ & $0.088$ & $0.088$ & $0.088$ & $0.088$ & $0.088$ & $0.088$ \\ \cline{2-8}
& $0.7$ & $0.088$ & $0.088$ & $0.088$ & $0.088$ & $0.088$ & $0.088$ \\ \hline
$\nu_2\ (1/nm)$ & $0.9$ & $2.23$ & $3.36$ & $4.48$ & $5.60$ & $6.73$ & $\infty$ \\ \cline{2-8}
& $0.7$ & $1.22$ & $2.05$ & $2.88$ & $3.73$ & $4.59$ & $\infty$ \\ \hline
$\nu_{2*}\ (1/nm)$ & $0.9$ & $0.309$ & $0.326$ & $0.329$ & $0.330$ & $0.331$ & $0.332$ \\ \cline{2-8}
& $0.7$ & $0.319$ & $0.328$ & $0.330$ & $0.331$ & $0.331$ & $0.332$ \\ \hline \hline
\end{tabular}
\\
\end{center}
\vspace{0.5cm}
\section{Concluding remarks}
In this paper we have examined the position-momentum and position-velocity uncertainties for monolayer gapped graphene. We have shown that the uncertainties are governed by the spreading of the wave packet at long times and by the ZB at short times. By choosing the gap parameter $\lambda_c$ appropriately, one can control the uncertainties within the limits set by quantum mechanics.
The uncertainties can be tested experimentally because all figures in this paper show a significant difference between the free and graphene cases. The uncertainties in graphene might be measured via the following one-slit experiment (see Figure $4$). We discuss $\Delta x$ only, because the other quantities can be measured similarly. The slit width $d$ should be of the order of Angstroms to ensure the occurrence of diffraction at the slit. The distance $L$ should be of the order of nanometers, because the effect of the zitterbewegung is important only within the first few femtoseconds. The electrons emitted by the emitter arrive at the detector through the slit. Then, one can build a probability distribution with respect to $x$, which should have a smooth Gaussian form. Measuring the width of this Gaussian distribution, one can deduce $\Delta x$ at $t \sim L / v_F$, where $v_F$ is the Fermi velocity. Repeating the same experiment while changing $L$, one can measure the time-dependence of $\Delta x$. If the prediction presented in this paper is correct, $\Delta x$ should exhibit an oscillating behavior at short times due to the zitterbewegung, but a globally increasing behavior at long times due to the spreading of the wave packet.
\begin{figure}[ht!]
\begin{center}
\includegraphics[height=6cm]{fig4.eps}
\caption[fig4]{(Color online) Schematic diagram for measuring the uncertainties.}
\end{center}
\end{figure}
It would be interesting to extend this work to bilayer graphene. Another interesting issue is to examine the uncertainty relations when
an external magnetic field is applied. We expect that an external magnetic field drastically reduces the uncertainties in graphene. If so,
a graphene-based quantum computer could be more useful for large-scale calculations. We would like to explore this issue in the near future.
{\bf Acknowledgement}:
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2011-0011971 and National Scientist Program 2010-0020414).
\section{Introduction}
The Cosmological principle, according to which space is considered as homogeneous and isotropic at a global scale, is an assumption that provides an enormous simplification in the investigation of the dynamics of the universe. A consequence of this assumption is that space is a maximally symmetric manifold, leading to the well-known Friedmann-Lema\^\i tre-Robertson-Walker (FLRW) metric. This is one of the simplest approaches that can be followed to model a dynamic universe, since it allows one to study the universe with a single evolving parameter, namely the scale factor $a$.
On the basis of this model, results obtained from several cosmological probes have led to the conclusion that the expansion of the universe has been accelerating for some time. Such results were first obtained by \cite{Riess} and \cite{Perlmutter} from the observations of distant Type Ia supernovae (SNIa), and were supported by additional measurements performed over the years, see \cite{Scolnic}. Similar results were also obtained using other independent cosmological probes, based on cosmic microwave background measurements (see in particular the WMAP \cite{Bennett} and Planck \cite{Aghanim} projects) or on baryon acoustic oscillation (BAO) measurements (see \cite{Eisenstein}).
In section $\ref{S1}$ we develop a two-regions model, assuming that space is made of overdense and underdense regions, both being characterized by their own average metric and stress-energy tensors. We examine in particular the dynamics of overdense regions, and show that it significantly diverges from the one of the global universe, characterized by the FLRW metric. In section $\ref{S2}$, we then use this model to investigate the effect of the inhomogeneity on the measurements performed on SNIa, in particular on luminosity distance and redshift measurements. In section $\ref{S3}$, we finally present and discuss the results of the model. We show that the bias introduced by the inhomogeneity in space leads to observe an apparent accelerated expansion, and that the predicted distance modulus versus redshift relation that would be measured taking into account the bias is in excellent agreement with the one reported by \cite{Riess}, \cite{Perlmutter} or \cite{Scolnic}.
\section{The two-regions model}\label{S1}
From now on, we will admit that the cosmological constant vanishes, so the Einstein equation of General Relativity reads
\begin{equation}\label{GR}
G_{\mu\nu} = 8\pi G T_{\mu\nu}\,.
\end{equation}
Using this equation together with the FLRW metric, and assuming a flat space, we find that the scale factor $a$ obeys the Friedman equation:
\begin{equation}\label{FLRW}
3\frac{\dot{a}^2}{a^2} = 8\pi G \rho\,,
\end{equation}
where $\rho$ is the average density over space, and where $a$ has been normalized to be dimensionless, such that $a = 1$ at the current time.
This approach, however, does not allow us to investigate the effect of the bias identified in \cite{Deledicque}. Indeed, while initially matter was distributed in an almost homogeneous way, small perturbations developed, progressively sharpening the local inhomogeneous character of space. Due to the gravitational attraction, matter grouped together in regions having larger densities than on average, consequently leaving void regions that expanded over time. SNIa occur preferentially in regions where matter is present, hence in overdense regions. Since they do not occur randomly over space, but only in specific regions, which probably cannot be considered as representative of the universe, this has to be considered as a bias if they are used as a cosmological probe.
We develop in this section the simplified model that will be used in our investigation of the effect of the bias on SNIa measurements. The simplest approach that can be followed is to consider space to be made of two different kinds of regions: overdense regions and underdense regions. To fix ideas, let us consider a volume $V$ of space sufficiently large so that it can be assumed to be representative of the universe. The average density of matter in $V$ is written as $\rho$, and corresponds to the density used in the Friedman equation. In this volume $V$, overdense regions occupy a volume $V_o$ and have an average density $\rho_o$, while underdense regions occupy a volume $V_u$ and have an average density $\rho_u$. We will consider here the limiting case for which $\rho_u = 0$, hence underdense regions do not contain any matter. The total mass in $V$ is $M = \rho V$, but since all matter is assumed to be located in overdense regions, we also have $M = \rho_o V_o$. We thus deduce that
\begin{equation}\label{ak}
\frac{V}{V_o} = \frac{\rho_o}{\rho}\,.
\end{equation}
Overdense and underdense regions can have extremely complicated characteristics, but we do not want to examine them in all their complexity. In order to simplify the analysis as much as possible, we will consider that overdense (resp. underdense) regions can be characterized by a single typical region, assumed to be representative of the average behaviour of all existing overdense (resp. underdense) regions. Moreover, since we are not interested in a detailed knowledge of the spatial variation of the metric or of the stress-energy tensor through this typical region, we will admit that typical regions can be described by average tensors. Such an approach is also followed when using the FLRW metric, but here we apply it at a smaller scale. So, for our model we have two typical regions, one for the overdense regions and one for the underdense regions, both having their own average metric and average stress-energy tensors.
Let us thus consider such a typical region supposed to be representative of the average of all overdense regions. To describe the metric of this region, we will use a specific frame of reference based on the co-moving coordinates, for which $t$ is the cosmological time coordinate, and where $(x,y,z)$ are the spatial Cartesian coordinates.
Since space is assumed to be globally isotropic, we will admit that this is also the case for this typical region. The region supposed to be representative of the average of all existing overdense regions should indeed not present any directional preference. In the considered frame of reference, the metric tensor for a typical region can then be written as
\begin{equation}\label{az}
g_{\mu\nu} = \left(
\begin{array}{c c c c}
-f^2 & 0 & 0 & 0\\
0 & b^2 & 0 & 0\\
0 & 0 & b^2 & 0\\
0 & 0 & 0 & b^2
\end{array} \right)\,.
\end{equation}
Here, $b$ is the scale factor inside the typical region, equivalent to the scale factor $a$ for the global universe. Since the metric is considered to be the spatial average over the typical overdense region, $b$ does not depend on spatial coordinates. It however may depend on $t$. We notice also that the first diagonal component is not necessarily equal to $-1$, as for the FLRW metric. Indeed, the rate at which time evolves in a typical overdense regions will in general differ from the one in the FLRW metric, and hence it is characterized by a function $f$ depending on $t$ at most.
For this metric, the Christoffel symbols are such that $\Gamma^{0}_{\ 00} = \dot{f}/f$, $\Gamma^{0}_{\ ii} = b\dot{b}/f^2$, $\Gamma^{i}_{\ i0} = \Gamma^{i}_{\ 0i} = \dot{b}/b$ and all other components are zero. In these relations, we use the dot notation to represent a derivative with respect to the cosmological time. Then, the Ricci tensor components are
\begin{eqnarray}
R_{00} &=& 3\left(\frac{\dot{f}\dot{b}}{fb} - \frac{\ddot{b}}{b}\right)\,,
\\
R_{ii} &=& \frac{2\dot{b}^2 + b\ddot{b}}{f^2} - \frac{b\dot{b}\dot{f}}{f^3}\,,
\end{eqnarray}
and all other components are zero. So, the Ricci scalar is
\begin{equation}
R = 6\left(\frac{\ddot{b}}{bf^2} + \frac{\dot{b}^2}{b^2f^2} - \frac{\dot{b}\dot{f}}{bf^3}\right)\,.
\end{equation}
Finally, the components of the Einstein tensor are
\begin{eqnarray}
G_{00} &=& 3\frac{\dot{b}^2}{b^2}\,,
\\
G_{ii} &=& 2\frac{b\dot{b}\dot{f}}{f^3} - 2\frac{b\ddot{b}}{f^2} - \frac{\dot{b}^2}{f^2}\,,
\end{eqnarray}
and all other components are zero.
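As a cross-check of these expressions, the component $G_{00} = R_{00} - \frac{1}{2} g_{00} R = R_{00} + \frac{f^2}{2} R$ collapses to $3\dot{b}^2/b^2$ identically: the terms involving $\dot{f}$ and $\ddot{b}$ cancel for any values of $f$, $b$ and their derivatives. A short numerical sketch (with arbitrary illustrative numbers standing in for $f$, $b$, $\dot{f}$, $\dot{b}$, $\ddot{b}$, treated as independent):

```python
def G00_from_ricci(f, b, fd, bd, bdd):
    """Assemble G_00 = R_00 + (f^2 / 2) R from the expressions quoted above.

    fd, bd, bdd stand for df/dt, db/dt, d^2b/dt^2.
    """
    R00 = 3.0 * (fd * bd / (f * b) - bdd / b)
    R = 6.0 * (bdd / (b * f * f) + bd * bd / (b * b * f * f)
               - bd * fd / (b * f ** 3))
    return R00 + 0.5 * f * f * R

# arbitrary values: the fd and bdd contributions must cancel, leaving 3 bd^2 / b^2
f, b, fd, bd, bdd = 1.7, 0.9, 0.31, -0.42, 0.11
```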
The typical overdense region is assumed to be globally at rest in the comoving coordinates. As a consequence, its average four-velocity is
\begin{equation}
U^\mu = \frac{dx^\mu}{d\tau} = \left(\frac{1}{f}, 0, 0, 0\right)\,.
\end{equation}
Matter being supposed to behave as a perfect fluid, the stress-energy tensor in a typical overdense region, defined as
\begin{equation}
T_{\mu\nu} = \left(\rho_o + p_o\right)U_\mu U_\nu + p_o g_{\mu\nu}\,,
\end{equation}
becomes
\begin{equation}\label{aq}
T_{\mu\nu} = \left(
\begin{array}{c c c c}
\rho_o f^2 & 0 & 0 & 0\\
0 & p_ob^2 & 0 & 0\\
0 & 0 & p_ob^2 & 0\\
0 & 0 & 0 & p_ob^2
\end{array} \right)\,,
\end{equation}
where $p_o$ is the pressure in the typical overdense region. We then notice that the conservation law $\nabla_\mu T^\mu_{\ 0} = 0$ leads to
\begin{equation}\label{ee}
\dot{\rho_o} = -3\frac{\dot{b}}{b}\left(\rho_o + p_o\right)\,.
\end{equation}
In a typical overdense region, we will admit that matter has quite rapidly reached a state in which it is gravitationally bound. A region made of gravitationally bound matter has a constant volume over time. Normally, it should grow due to the expansion of the universe, but this growth is compensated by the gravitational attraction. The volume being constant, and the region containing a given mass, this means that the density in overdense regions remains constant over time. Hence, according to Eq.\ $(\ref{ee})$, overdense regions have a pressure $p_{o} = -\rho_{o}$.
Applying the equation of General Relativity Eq.\ $(\ref{GR})$ to our typical overdense region, using the relations obtained above, we find for the first diagonal component
\begin{equation}\label{E1}
3\frac{\dot{b}^2}{b^2} = 8\pi G\rho_{o} f^2\,,
\end{equation}
while for the three other diagonal components we have
\begin{equation}\label{E2}
2\frac{b\dot{b}\dot{f}}{f^3} - 2\frac{b\ddot{b}}{f^2} - \frac{\dot{b}^2}{f^2} = 8\pi G p_{o}b^2\,.
\end{equation}
As for the Friedman equations, we can show that Eq.\ $(\ref{E2})$ can be obtained from Eq.\ $(\ref{E1})$, its first temporal derivative and the fact that $p_{o} = -\rho_{o}$. We will thus only examine Eq.\ $(\ref{E1})$.
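The reduction can be checked explicitly: differentiating Eq.\ $(\ref{E1})$ with $\rho_o$ constant gives $\ddot{b} = \dot{b}^2/b + \dot{b}\dot{f}/f$, and substituting this together with $8\pi G \rho_o = 3\dot{b}^2/(b^2 f^2)$ and $p_o = -\rho_o$ turns Eq.\ $(\ref{E2})$ into an identity. In code (with arbitrary illustrative values for $b$, $\dot{b}$, $f$, $\dot{f}$):

```python
b, bd, f, fd = 1.3, 0.27, 2.1, 0.55   # arbitrary positive illustrative values

# d/dt of Eq. (E1) with rho_o constant gives  b'' = b'^2 / b + b' f' / f
bdd = bd * bd / b + bd * fd / f

# Eq. (E1) fixes 8*pi*G*rho_o, and the bound-matter argument gives p_o = -rho_o
eight_pi_G_rho_o = 3.0 * bd * bd / (b * b * f * f)
rhs = -eight_pi_G_rho_o * b * b          # right hand side of Eq. (E2): 8*pi*G*p_o*b^2

# left hand side of Eq. (E2)
lhs = 2.0 * b * bd * fd / f ** 3 - 2.0 * b * bdd / f ** 2 - bd * bd / f ** 2
```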
Eq.\ $(\ref{E1})$ expresses the dynamics of the scale factor $b$ inside the typical overdense region, as a function of its density $\rho_o$ and the function $f$. Since the right hand side of $(\ref{E1})$ is not zero, we expect that $b$ will not remain constant over time. In a globally expanding universe, $b$ will grow over time, as does $a$, but at a different rate. If $b$ increases, this could suggest that the volume of the considered typical region increases as well. On the other hand, we have considered that matter is gravitationally bound in overdense regions, so the volume of overdense regions should not change. In fact, while $b$ increases and implies a swelling of the overdense region, matter simultaneously moves back so as to keep constant the volume it occupies. It is this phenomenon that is responsible for the negative pressure in this region.
The approach that has been followed so far for a typical overdense region can be followed in a similar way for a typical underdense region. On average, such a region should also be isotropic, and the average scale factor for that region will verify a relation similar to Eq.\ $(\ref{E1})$, in which however the right hand side is zero, because of the absence of matter in underdense regions. This means that Eq.\ $(\ref{E1})$ predicts a constant scale factor for a typical underdense region. This does not mean, however, that underdense regions will not grow over time. Indeed, as explained above, even if typical overdense regions keep a constant volume, their scale factor increases. The related volume increase is compensated by a backward displacement of matter that keeps the volume of the overdense region constant. In moving back, matter hence leaves some void volume that initially belonged to an overdense region, but that will contribute to enlarge the underdense region. So, underdense regions grow by a continuous volume transfer coming from the overdense regions.
Let us quantify this volume transfer. At a given time, the volume of overdense regions is proportional to $b^3$. If matter was unbound, the rate at which this volume would grow is proportional to $3b^2\dot{b}$, with the same constant of proportionality. Per unit volume, this rate is thus equal to $3\dot{b}/b$. So, in the considered volume $V$ of the universe, the total rate of growth of overdense regions if matter was unbound is equal to $3V_o\dot{b}/b$. But since matter is assumed to be gravitationally bound, overdense regions are not expected to grow, and the increase of volume just calculated is supposed to be the one that will be transferred to underdense regions. In fact, as this volume increase is the only one that occurs in the volume $V$ (due to a null density, underdense regions do not grow by themselves), it should also correspond to the volume increase of the whole volume $V$. In other words, the rate at which overdense regions increase (if matter was not gravitationally bound) corresponds to the rate at which the universe expands. Given that the volume $V$ is expected to obey on average the Friedman equation, its rate of growth should be equal to $3V\dot{a}/a$. Hence, we have:
\begin{equation}\label{fq}
3V_o\frac{\dot{b}}{b} = 3V\frac{\dot{a}}{a}\,.
\end{equation}
Combining this latter equation with Eq.\ $(\ref{FLRW})$ and $(\ref{E1})$, we find that
\begin{equation}
f = \frac{V}{V_o}\sqrt{\frac{\rho}{\rho_o}}\,.
\end{equation}
Using Eq.\ $(\ref{ak})$, this can also be written as
\begin{equation}\label{a1}
f = \sqrt{\frac{\rho_o}{\rho}} = \sqrt{\frac{V}{V_o}} > 1\,.
\end{equation}
Since $V$ is proportional to $a^3$ and $V_o$ is assumed to be constant, we deduce that $f$ is proportional to $a^{3/2}$. Clearly, $f$ may significantly differ from $1$, meaning that the first diagonal component of the metric tensor in overdense regions diverges from that of the FLRW metric, which is representative of the global universe on average. In overdense regions, time progresses at a larger rate than on average through space.
It is interesting to notice that in a flat matter-dominated universe, $a$ is proportional to $t^{2/3}$, implying that $f$ is directly proportional to $t$. In other words, $f$ increases linearly with the cosmological time:
\begin{equation}\label{fqw}
f = f_0t\,,
\end{equation}
where $f_0$ is a constant to determine. Knowing the temporal dependence of $f$, we can integrate Eq.\ $(\ref{E1})$. We find
\begin{equation}
b = b_0\exp\left(\sqrt{\frac{2\pi}{3}G\rho_{o}}f_0 t^2\right)\,,
\end{equation}
where $b_0$ is another constant to determine.
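As a quick symbolic consistency check (an illustration, not part of the original derivation), one can verify that this solution for $b$ has the Friedmann-like growth rate $\dot{b}/b = f\sqrt{8\pi G\rho_o/3}$ with $f=f_0t$; the explicit form of Eq.\ $(\ref{E1})$ used here is an assumption inferred from the printed solution, to be compared with the equation given earlier in the article.

```python
# Sketch: check that b(t) = b0*exp(sqrt(2*pi*G*rho_o/3)*f0*t**2) has the
# Friedmann-like growth rate db/dt / b = f*sqrt(8*pi*G*rho_o/3) with f = f0*t.
# The right-hand side used here is an assumption inferred from the solution.
import sympy as sp

t, f0, b0, G, rho_o = sp.symbols('t f_0 b_0 G rho_o', positive=True)

f = f0*t                                              # Eq. (fqw)
b = b0*sp.exp(sp.sqrt(2*sp.pi*G*rho_o/3)*f0*t**2)     # printed solution for b

residual = sp.simplify(sp.diff(b, t)/b - f*sp.sqrt(8*sp.pi*G*rho_o/3))
```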
To fix $f_0$ and $b_0$, it is first important to highlight the limitations of the model. Eq.\ $(\ref{fqw})$ seems to show that $f$ will tend to zero for smaller times, whereas we would expect that $f$ should tend to $1$. The metric of overdense regions should indeed tend to the FLRW metric when going back in time, because perturbations were much smaller and space looked more and more homogeneous. The explanation is as follows. We have considered that the volume $V_o$ of overdense regions is constant over time. This assumption makes sense as long as the total volume $V$ is larger than $V_{o}$. Obviously the volume of overdense regions must be included in the total volume. But $V$ being proportional to $a^3$, going back in time, at some point, $V$ will become smaller than $V_o$. So at times at which $V < V_{o}$, our model does not hold anymore. In fact, we notice from Eq.\ $(\ref{a1})$ that when $V = V_{o}$, thus when the overdense region occupies the whole space, $f = 1$. This is the starting point of our model. At that point, the overdense region coincides with the whole space, and we expect that its metric is equivalent to the FLRW metric, thus we should also have $b=a$. At earlier times, we cannot assume that $V_o$ remains constant; it should on the contrary shrink at the rate of $a^3$, and it will continuously coincide with the whole existing volume $V$. So, the constants of proportionality $f_0$ and $b_0$ are such that at the time when $V = V_o$, we have $f = 1$ and $b=a$.
Let us calculate the value $a^*$ of the scale factor that can be considered as the starting point of our model. As just said, it should be such that $f = 1$. Knowing that $V_o$ is constant and that $V$ is proportional to $a^3$, we write
\begin{equation}
\frac{V}{V_o} = Aa^3\,,
\end{equation}
where $A$ is a constant equal to the current value of $V/V_o$, given that by convention $a=1$ at the current time. As we will see throughout this article, $A$ is an important parameter, since it is the single parameter that completely fixes the whole model. So, from Eq.\ $(\ref{a1})$, we deduce that
\begin{equation}\label{ff}
f = \sqrt{A}a^{3/2}\,,
\end{equation}
and hence that $a^* = A^{-1/3}$.
Eq.\ $(\ref{ff})$ expresses $f$ as a function of $a$. For practical reasons, it will also be useful to express $b$ as a function of $a$. From Eq.\ $(\ref{fq})$ we deduce that
\begin{equation}
\frac{db}{da} = \frac{V}{V_o}\frac{b}{a} = Aa^2b\,.
\end{equation}
Integrating this last relation, we get
\begin{equation}
b = B\exp\left(\frac{A}{3}a^3\right)\,.
\end{equation}
The constant $B$ can be found by imposing that $b=a$ when $a=a^*$. We find
\begin{equation}
B = A^{-1/3}\exp\left(-\frac{1}{3}\right)\,.
\end{equation}
We have established the temporal dependence of $f$ and $b$, so the metric of a typical overdense region is completely determined.
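The closed-form model functions can be summarized in a short numerical sketch (an illustration, not part of the article), which also checks the starting-point conditions $f(a^*)=1$ and $b(a^*)=a^*$ at $a^* = A^{-1/3}$.

```python
# Sketch (illustration): the model functions f(a) and b(a) for a typical
# overdense region, with checks of the starting-point conditions
# f(a*) = 1 and b(a*) = a* at a* = A**(-1/3).
import math

def f_over(a, A):
    """Eq. (ff): f = sqrt(A) * a**(3/2)."""
    return math.sqrt(A)*a**1.5

def b_over(a, A):
    """b = B*exp(A*a**3/3) with B = A**(-1/3)*exp(-1/3) fixed by b(a*) = a*."""
    B = A**(-1.0/3.0)*math.exp(-1.0/3.0)
    return B*math.exp(A*a**3/3)

A = 5.13                   # current value of V/V_o (the value fitted later on)
a_star = A**(-1.0/3.0)     # starting point of the model, where f = 1 and b = a
```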
\section{Effect of the bias on the SNIa measurements}\label{S2}
Having established the metric tensor in a typical overdense region, we will now investigate the effect of the bias in SNIa measurements. This requires understanding how the inhomogeneity of space affects redshift and luminosity distance measurements.
\subsection{Effect on redshift measurements}
Redshift measurements allow us to deduce the scale factor at the point where the SNIa occurred, simply by performing some specific time span measurement at our current epoch. To fix ideas, let us consider a source (typically a SNIa) emitting light with a known temporal characteristic. A first signal is emitted at some time $t$ by such a source located at a coordinate $x$ and reaches at time $t_0$ an observer located along the $x$ direction at a coordinate $x_0$. A second signal is emitted from the same source at time $t + \Delta t$ and reaches the observer at time $t_0+\Delta t_0$. It can then be shown that, in a perfectly homogeneous and isotropic space, we have
\begin{equation}\label{pep}
a(t) = a(t_0)\frac{\Delta t}{\Delta t_0}\,.
\end{equation}
Here, $a(t)$ is the value of the scale factor at the location of the SNIa that we want to determine. By convention, the value of the current scale factor $a(t_0)$ is set at $1$. Since $\Delta t$ is a characteristic time span supposed to be known, a measure of $\Delta t_0$ allows to deduce $a(t)$ from Eq.\ $(\ref{pep})$.
In reality, space is not perfectly homogeneous and isotropic, and contains perturbations. As we will see, this can significantly affect the result of the redshift measurement. To show this, let us first write the interval as
\begin{equation}\label{79}
ds^2 = g_{\mu\nu}dx^\mu dx^\nu = \left(\overline{g}_{\mu\nu} + \Delta g_{\mu\nu}\right)dx^\mu dx^\nu\,,
\end{equation}
where $g_{\mu\nu}$ is the local metric tensor, $\overline{g}_{\mu\nu}$ is the FLRW metric tensor, and $\Delta g_{\mu\nu}$ is the perturbation, i.e., the difference between the real local and the FLRW metric tensors.
Light follows a null geodesic, so if we consider a light ray travelling along the $x$ direction, we have
\begin{equation}
\left(\overline{g}_{tt} + \Delta g_{tt}\right)dt^2 + \left(\overline{g}_{xx}+\Delta g_{xx}\right)dx^2 + 2\Delta g_{xt}dxdt = 0\,.
\end{equation}
Solving for $dx/dt$, we find
\begin{equation}
\frac{dx}{dt} = \frac{-2\Delta g_{xt} \pm \sqrt{\delta}}{2(\overline{g}_{xx}+\Delta g_{xx})}\,,
\end{equation}
where
\begin{equation}
\delta = 4(\Delta g_{xt})^2 - 4(\overline{g}_{xx} + \Delta g_{xx})\left(\overline{g}_{tt} + \Delta g_{tt}\right)\,.
\end{equation}
We have one solution for a signal travelling in the positive direction, and one solution for a signal travelling in the negative direction. Since space is expected to be isotropic, this equation should on average provide the same magnitude for both signals, with only a change of sign. This is only possible if on average $\Delta g_{xt}$ vanishes. Then, replacing $\overline{g}_{tt} = -1$ and $\overline{g}_{xx} = a^2$, we get
\begin{equation}\label{pp1}
\sqrt{\frac{1-\Delta g_{tt}}{a^2+\Delta g_{xx}}}dt = \pm dx\,.
\end{equation}
We now integrate Eq.\ $(\ref{pp1})$ along the $x$ direction. For the signal emitted at $t$, we get
\begin{equation}
\int_{t}^{t_0} \sqrt{\frac{1-\Delta g_{tt}}{a^2+\Delta g_{xx}}}dt = \pm\int_{x}^{x_0}dx\,.
\end{equation}
Considering the equivalent relation for the second signal, we show that
\begin{equation}\label{ggqq}
\int_{t}^{t+\Delta t}\sqrt{\frac{1-\Delta g_{tt}}{a^2+\Delta g_{xx}}}dt = \int_{t_0}^{t_0+\Delta t_0}\sqrt{\frac{1-\Delta g_{tt}}{a^2+\Delta g_{xx}}}dt\,.
\end{equation}
For a small variation in time, the components of the metric may be considered constant, and we deduce that
\begin{eqnarray}\label{ivc}
\frac{\sqrt{a^2(t)+\Delta g_{xx}(x,t)}}{\Delta t\sqrt{1-\Delta g_{tt}(x,t)}} = \frac{\sqrt{a^2(t_0)+\Delta g_{xx}(x_0,t_0)}}{\Delta t_0\sqrt{1-\Delta g_{tt}(x_0,t_0)}}\,.
\end{eqnarray}
In our simplified model, SNIa occur in an overdense region, and we are ourselves located in an overdense region. This means that, using the functions $f$ and $b$ defined above, we have in a typical overdense region
\begin{eqnarray}
a^2(t)+\Delta g_{xx}(x,t) &=& b^2(t)\,,
\\
1-\Delta g_{tt}(x,t) &=& f^2(t)\,,
\end{eqnarray}
at any $t$ and $x$, so Eq.\ $(\ref{ivc})$ can be written as
\begin{equation}\label{dq}
\frac{b(t)}{f(t) \Delta t} = \frac{b(t_0)}{f(t_0) \Delta t_0}\,.
\end{equation}
When performing redshift measurements, we use Eq.\ $(\ref{pep})$ to determine the scale factor at some time $t$. This thus requires measuring $\Delta t/\Delta t_0$. In practice, however, this is not what we measure. Indeed, the time span measurement performed at our location is carried out in our proper time frame, meaning that we do not measure $\Delta t_0$ but instead $f(t_0)\Delta t_0$. Similarly, the characteristic time span at the source is known in its proper time frame, hence it is equal to $f(t)\Delta t$ instead of $\Delta t$. So, what we measure in practice is not $\Delta t/\Delta t_0$, but instead
\begin{equation}
\frac{f(t)\Delta t}{f(t_0)\Delta t_0}\,.
\end{equation}
This is the measured value $a_{(meas)}(t)$ of the scale factor at the SNIa, given that $a(t_0)$ is conventionally set to $1$. From Eq.\ $(\ref{dq})$, we deduce that
\begin{equation}\label{jj}
a_{(meas)}(t) = \frac{b(t)}{b(t_0)}\,.
\end{equation}
In general, $a_{(meas)}(t)$ will differ from the real value of $a(t)$.
It is important to notice that this has also consequences on the Hubble constant. This parameter is defined as
\begin{equation}
H_0 = \frac{\dot{a}(t_0)}{a(t_0)}
\end{equation}
where $a$ and $\dot{a}$ are measured at the current epoch $t_0$, and it will thus also be affected by the bias. First of all, we need to take into account the fact that we do not measure the real scale factor $a$, but instead a biased factor whose value is given by Eq.\ $(\ref{jj})$. Secondly, we also need to consider that the temporal evolution is determined according to our own proper time, and not to the cosmological time. As a consequence, the value of the Hubble constant, as measured in practice, corresponds to
\begin{equation}
H_{0(meas)} = \frac{1}{f(t_0)}\frac{\dot{a}_{(meas)}(t_0)}{a_{(meas)}(t_0)}\,,
\end{equation}
where all variables are evaluated at the current epoch. According to Eq.\ $(\ref{jj})$, we have
\begin{equation}
H_{0(meas)} = \frac{1}{f(t_0)}\frac{\dot{b}(t_0)}{b(t_0)}\,.
\end{equation}
Using then Eq.\ $(\ref{fq})$, we find
\begin{equation}
H_{0(meas)} = \frac{1}{f(t_0)}\frac{V(t_0)}{V_o}\frac{\dot{a}(t_0)}{a(t_0)} = \sqrt{A}H_0\,.
\end{equation}
So, the real value of the Hubble constant is related to the measured one by the following relation:
\begin{equation}\label{H0}
H_0 = \frac{H_{0(meas)}}{\sqrt{A}}\,.
\end{equation}
Since $A>1$, we have $H_0 < H_{0(meas)}$: the real Hubble constant is smaller than the measured one, hence also implying a larger age of the universe.
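Numerically, Eq.\ $(\ref{H0})$ gives the following (a sketch; $A=5$ corresponds to the round $80\%$ void-fraction estimate, $A=5.13$ to the value fitted later in the article).

```python
# Numeric illustration of Eq. (H0): the real Hubble constant implied by the
# measured value of 70 km/s/Mpc, for A = 5 (round 80% void-fraction estimate)
# and for the fitted value A = 5.13.
import math

def H0_real(H0_meas, A):
    """Eq. (H0): H0 = H0_meas / sqrt(A)."""
    return H0_meas / math.sqrt(A)

print(round(H0_real(70.0, 5.0), 1))    # 31.3 km/s/Mpc
print(round(H0_real(70.0, 5.13), 1))   # 30.9 km/s/Mpc
```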
\subsection{Effect on luminosity distance measurements}
The luminosity distance $d_L$ is defined as
\begin{equation}\label{qk}
d_L^2 = \frac{L}{4\pi F}\,,
\end{equation}
where $L$ is the absolute luminosity (supposed to be known) emitted by the source and $F$ is the flux measured by the observer. So, a measure of the flux $F$ completely determines the luminosity distance. Let us therefore examine how the bias affects this measurement. The flux measured by the observer is an amount of energy per unit time and per unit area. This quantity can thus be affected in three ways:
\begin{enumerate}
\item The energy that has been emitted by the source has been diluted during its propagation. In a perfectly homogeneous and isotropic space, this dilution has occurred in two ways. Firstly, photons undergo a redshift due to the expansion of the universe, and secondly, photons hit the measurement apparatus less frequently, since two photons emitted a time $\delta t$ apart by the source will be measured with a larger time span by the observer. In the theoretical situation, this double dilution corresponds to a dilution factor of $(a_0/a)^2$, where $a$ is the scale factor at the location of the source, and $a_0$ is the scale factor at the location of the observer. In the real situation, due to the inhomogeneity, a triple dilution has occurred. First, a redshift has occurred due to the spatial expansion, corresponding to a dilution factor of $b_0/b$. Secondly, since the first diagonal component of the metric tensor is not constant, a temporal expansion has also occurred, responsible for a loss of energy corresponding to $f/f_0$. And thirdly, as shown by Eq.\ $(\ref{dq})$, in the cosmological time frame, the ratio between the time span measured by the observer and the one at the source is $b_0f/bf_0$. So, compared with the theoretical situation, in the real situation we expect for the energy that will be measured by the observer a correction factor equal to $(a_0f_0b/afb_0)^2$.
\item The source emits energy at a rate which is known in its proper time frame, and similarly, the observer measures the flux in its own proper time frame. Therefore, we expect a correction factor of $f/f_0$. Since $f>1$, time progresses at a rate larger than on average through space. This means that during a cosmological unit time, the source will have more time to emit photons, and will thus emit a larger energy. On the contrary, at the location of the observer, the flux will be smaller, since the same amount of photons will be measured during a larger time span.
\item Distances are also affected by the metric. The flux is measured per unit area, and since $b>a$, proper distances (and surfaces) in overdense regions are larger than those in a perfectly homogeneous and isotropic space. This means that, in overdense regions, the flux that will be observed will be diluted over a larger surface, and will hence be smaller. We thus expect a correction factor of $a_0^2/b_0^2$.
\end{enumerate}
Considering all these correction factors, the ratio between the measured flux $F_{meas}$ and the one we should have in a theoretical, perfectly homogeneous and isotropic space is
\begin{eqnarray}
\frac{F_{meas}}{F} &=& \left(\frac{a_0f_0b}{afb_0}\right)^2\left(\frac{f}{f_0}\right)\left(\frac{a_0}{b_0}\right)^2\nonumber
\\
&=& \left(\frac{a_0}{b_0}\right)^4\left(\frac{b}{a}\right)^2\frac{f_0}{f}\,.
\end{eqnarray}
Consequently, the ratio between the measured luminosity distance $d_{L(meas)}$ and the one we should have in a perfectly homogeneous and isotropic space is such that
\begin{equation}
\frac{d_L^2}{d_{L(meas)}^2} = \frac{F_{meas}}{F}\,.
\end{equation}
Hence:
\begin{equation}\label{kk}
d_{L(meas)} = \left(\frac{b_0}{a_0}\right)^2\left(\frac{a}{b}\right)\sqrt{\frac{f}{f_0}}d_L\,.
\end{equation}
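As an illustration (not from the article), the two bias relations can be combined into a small numerical sketch mapping a true $(z, d_L)$ pair to its measured counterpart, assuming a flat matter-dominated background with $d_L = (2c/H_0)(1+z)\big(1-1/\sqrt{1+z}\big)$, the convention $a_0=1$, and the model functions $f(a)$ and $b(a)$ established above.

```python
# Sketch (illustration): applying Eqs. (jj) and (kk) to map a true (z, d_L)
# pair to its measured counterpart, assuming a flat Omega_m = 1 background,
# a0 = 1, and the model functions f(a) and b(a) established above.
import math

A = 5.13
c_km_s = 299792.458              # speed of light in km/s
H0 = 70.0/math.sqrt(A)           # real Hubble constant from Eq. (H0)

def f(a):
    return math.sqrt(A)*a**1.5

def b(a):
    return A**(-1.0/3.0)*math.exp(-1.0/3.0)*math.exp(A*a**3/3)

def d_L_flat(z):
    """Luminosity distance (Mpc) in a flat matter-dominated universe."""
    return (2*c_km_s/H0)*(1 + z)*(1 - 1/math.sqrt(1 + z))

def biased(z_true):
    """Return the measured (z, d_L) for a SNIa at true redshift z_true."""
    a = 1/(1 + z_true)
    a_meas = b(a)/b(1.0)                     # Eq. (jj), with b0 = b(t0)
    z_meas = 1/a_meas - 1
    dL_meas = b(1.0)**2*(a/b(a))*math.sqrt(f(a)/f(1.0))*d_L_flat(z_true)  # Eq. (kk)
    return z_meas, dL_meas
```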
\section{Results and discussion}\label{S3}
The whole model relies on one single parameter, i.e., $A$, which has therefore to be quantitatively estimated. This parameter represents the current value of $V/V_o$. According to several references, void regions now occupy about $80\%$ of space. For example, \cite{Cautun} estimates that void regions currently represent $77\%$ of space, while \cite{Falck} obtains values slightly above $80\%$. According to \cite{Tavasoli}, void regions occupy more than $80\%$ of the volume of the observable universe. Even if it should be stressed that those values depend on how void regions were defined, this value of $80\%$ has to be considered as an order of magnitude of the volume occupied by void regions. As a consequence, overdense regions currently occupy about $20\%$ of space, implying that $A \simeq 5$.
A parametric study has been performed with the proposed model to select the value of $A$ that best fits the measurements. This led to fixing $A = 5.13$, corresponding to a volume of overdense regions occupying about $19.5\%$ of space. This is in agreement with the order of magnitude mentioned above, and is the value that has thus been used to establish the results that follow.
We start from the Hubble constant deduced from SNIa measurements, which corresponds to a value of about $70\ km/s/Mpc$, see for example \cite{Riess2}. In Figure $\ref{Fig1}$, the dashed line represents the distance modulus $\mu$ versus redshift relation for a universe with $\Omega_{m,0} = 1$ and $\Omega_{\Lambda,0} = 0$, with such a Hubble constant. This relation is however not the one that is observed. Measurements indeed show an accelerated expansion, for which $\Omega_{m,0} = 0.3$ and $\Omega_{\Lambda,0} = 0.7$. The corresponding distance modulus versus redshift relation is plotted in Figure $\ref{Fig1}$ with a solid line.
As explained above, the Hubble constant has been measured in our proper time, which differs from the cosmological time. To find the Hubble constant as it should be determined in a perfectly homogeneous and isotropic space, we use Eq.\ $(\ref{H0})$, and we find that the real value is $H_0 = 31.3\ km/s/Mpc$, hence considerably smaller than the measured one. In Figure $\ref{Fig1}$, the dash-dot line represents the distance modulus versus redshift relation for a universe with $\Omega_m = 1$ and $\Omega_\Lambda = 0$, with this latter Hubble constant. This is the relation that should be obtained if there was no bias in the measurements. However, due to the bias, a different relation is observed. To determine this one, we use Eqs.\ $(\ref{jj})$ and $(\ref{kk})$ to deduce how luminosity distance and redshift measurements have been affected by the bias. The resulting distance modulus versus redshift relation is plotted in Figure $\ref{Fig1}$ with diamonds.
\begin{figure}
\centering\includegraphics[width=12cm]{Fig1.eps}
\caption{Distance modulus versus redshift. Dashed line: $\Omega_{m,0} = 1$, $\Omega_{\Lambda,0} = 0$ and $H_0 = 70\ km/s/Mpc$; solid line: $\Omega_{m,0} = 0.3$, $\Omega_{\Lambda,0} = 0.7$ and $H_0 = 70\ km/s/Mpc$; dash-dot line: $\Omega_{m,0} = 1$, $\Omega_{\Lambda,0} = 0$ and $H_0 = 31.3\ km/s/Mpc$; diamonds: theoretical prediction of the measured relation.}\label{Fig1}
\end{figure}
An excellent correspondence between the measured relation and the predicted one can be observed. To illustrate the effect of the bias on a specific measurement, we plot in Figure $\ref{Fig2}$ two circles: the one on the upper curve is the one that should be measured in a perfectly homogeneous and isotropic space. But due to the perturbations in the metric at the source as well as at the location of the observer, redshift and luminosity distance measurements are strongly affected, and send the result to the circle located on the lower curve, far away from the real situation.
\begin{figure}[h]
\centering\includegraphics[width=12cm]{Fig2.eps}
\caption{Illustration of the effect of the bias for one particular measurement on the distance modulus versus redshift relation.}\label{Fig2}
\end{figure}
In order to grasp the effect of the metric, we plot in Figure $\ref{Fig3}$ the evolution of $f$. In a perfectly homogeneous and isotropic space, $f$ is constant over time and equal to $1$. In overdense regions, however, $f$ continuously increases and so strongly diverges from what it should be on average over space. At our current epoch, it has already reached a value of about $2.25$, meaning that time progresses at a rate $2.25$ times larger than on average through space.
\begin{figure}
\centering\includegraphics[width=12cm]{Fig3.eps}
\caption{Evolution of $f$ as a function of $z$.}\label{Fig3}
\end{figure}
Also, Figure $\ref{Fig4}$ represents the evolution of $b$ as a function of the redshift $z$ (dashed line), and compares it with the evolution of the scale factor $a$ of the average space (solid line). Whereas, by convention, we have fixed $a=1$ at our current epoch, we observe that $b$ has reached a value of about $2.3$, meaning that space in overdense regions has expanded considerably faster.
\begin{figure}
\centering\includegraphics[width=12cm]{Fig4.eps}
\caption{Evolution of $a$ (solid line) and $b$ (dashed line) as a function of $z$.}\label{Fig4}
\end{figure}
We should finally stress that in order to develop the model we have made a strong assumption on the volume of overdense regions. Considering that matter has immediately reached a gravitationally bound state and hence that overdense regions have a constant volume is an extreme situation, and this could quantitatively affect the way $f$ and $b$ evolve over time. The advantage of this assumption is that the model could be developed completely analytically. The drawback is that, even if the model reproduces the measured distance modulus versus redshift relation satisfactorily, it is not the most realistic one. In particular, we could expect that some values determined from the model (such as the real Hubble constant for example) could be affected in some way. A more realistic situation would require considering a given evolution of $V/V_o$, starting from a value of $1$ at the origin of time, and tending progressively to be proportional to $a^3$. This would however require additional parameters to be included into the model, leading to a more complex approach. Moreover, establishing a more realistic evolution of $V/V_o$ can also be quite challenging.
Nevertheless, even with the strong simplifying assumption, the results obtained with the model developed in this article are promising, but they should be considered more qualitatively than quantitatively. The model provides insight into the phenomena that could explain the apparent accelerated expansion of the universe as evidenced by SNIa measurements, without needing to assume the existence of dark energy. It is finally important to highlight that the consequences of the perturbation in the metric in overdense regions extend beyond the case of SNIa measurements. All kinds of measurements performed from our specific location use apparatuses and theories that generally rely on temporal and spatial concepts, and could thus be biased in a similar way. Results concerning astrophysical phenomena should thus be interpreted cautiously. This is in particular true for BAO and cosmic microwave background measurements.
\section{Conclusion}
Admitting that SNIa measurements are affected by a bias, related to the fact that SNIa occur only in overdense regions, we have developed a model to investigate the effect of this bias on the measurement results. This model relies on one single parameter, i.e., the part of space that is occupied by overdense regions. The model considers two distinct regions, namely overdense and underdense regions, and assumes that those regions can be described by average metric and stress-energy tensors. We then focussed on establishing the average metric tensor in overdense regions. This metric tensor presents a scale factor differing from that of the average space, but also a function that describes the rate at which time progresses in overdense regions. This rate may indeed differ from the one of the average space. Considering the metric tensor in overdense regions, the effect of the bias on the results of redshift and luminosity measurements has then been examined. Such measurements indeed involve temporal and spatial concepts, and are thus affected by the perturbation existing in the local metric tensor at the source as well as at the location of the observer. Due to the different scale factor and the different rate at which time progresses in overdense regions, it has been shown that the results of those measurements are strongly affected. Assuming a void fraction of space of about $80\%$, similar to the values found in the literature, it was shown that the model predicts a distance modulus versus redshift relation in perfect agreement with the one established with the SNIa probe (corresponding to $\Omega_{m,0} = 0.3$ and $\Omega_{\Lambda,0} = 0.7$). According to the proposed model, the apparent accelerated expansion of the universe can be explained as a measurement artefact, which thus does not require assuming the existence of some kind of dark energy.
\section{Introduction}
This article is a continuation of~\cite{HS}. Recall that in the work of Lassalle~\cite{L}, Bressoud's matrix inversion formula \cite{B} is extensively used to describe the transition matrices associated with the Macdonald polynomials of types $B_n$, $C_n$ and $D_n$ with one column diagrams. One of our motivations in the present paper is to establish a generalization of Lassalle's principle to the case of the Koornwinder polynomials~\cite{Ko} $P_{(1^r)}(x|a,b,c,d|q,t)$ with the full six parameters $a$, $b$, $c$, $d$, $q$ and $t$. (As for the definition of~$P_{(1^r)}(x|a,b,c,d|q,t)$, see Section~\ref{F-Koornwinder}.)
Our starting point is the new version (Theorem~\ref{HSnew}) of our previous fourfold summation formula obtained in~\cite{HS} (Theorem~\ref{HS}). We show that the new fourfold formula can be understood as a product of four Bressoud matrices (Theorem~\ref{dmat}), thereby giving us the corresponding inversion formulas automatically (Theorem~\ref{dmat-dual}).
Another motivation comes from the transition matrix
$\mathcal{C}^{(n)}$ (Definition~\ref{C^n}) from the monomial polynomials $m_{(1^r)}(x)$ to the Koornwinder polynomials $P_{(1^r)}(x|a,b,c,d|q,t)$.
(As for
$m_{(1^r)}(x)$, see Section~\ref{F-Koornwinder}.)
One may
find a reasonable property of the transition matrix $\mathcal{C}^{(n)}$,
as stated in Proposition~\ref{propC^n} below.
Our proof of Proposition~\ref{propC^n} (see Sections~\ref{Ftr} and~\ref{ProofMAIN}) is based on the new fourfold summation formula in Theorem~\ref{HSnew}. It seems, however, that we still lack a~fundamental
grasp of the phenomenon, since the proof remains
technically quite involved.
We hope that a better understanding will emerge in the future.
Noting that Proposition \ref{propC^n} implies
that $\mathcal{C}^{(n)}$ is essentially independent of $n$,
we summarize our main result in Theorem~\ref{main2}.
For simplicity, write $P^{BC_n}_{(1^r)}=P_{(1^r)}(x|a,b,c,d|q,t)$
and $m_{(1^r)}=m_{(1^r)}(x)$.
Let $n \in \mathbb{Z}_{>0}$. Let ${\bf{P}}^{(n)}$ and ${\bf{m}}^{(n)}$ be the infinite column
vectors defined by
${\bf{P}}^{(n)} = {}^t\big(P_{(1^n)}^{BC_n}, P_{(1^{n-1})}^{BC_n}, \ldots, P_{(1)}^{BC_n},
P_{\varnothing}^{BC_n}, 0, \ldots\big)$,
${\bf{m}}^{(n)} = {}^t\big(m_{(1^n)}, m_{(1^{n-1})}, \ldots, m_{(1)},
m_{\varnothing}, 0, \ldots\big)$.
Here $\varnothing$ denotes the empty diagram,
and hence $P_{\varnothing}^{BC_n}=m_{\varnothing}=1$.
\begin{dfn} \label{DefOffg} Set
\begin{subequations}
\begin{gather}
f(s|a,b,c,d) \nonumber\\
=
{ (1-abcds/t)(1-ts)(1-abs)(1-acs)(1-ads)(1-bcs)(1-bds)(1-cds)
\over
\big(1-abcds^2/t\big)\big(1-abcds^2\big)^2 \big(1-abcdts^2\big)
}, \\
g_1(s|a,b,c,d) ={ a+b+c+d-(abc+abd+acd+bcd)s/t
\over
1-abcds^2/t^2
}
{ 1-s
\over
1-t
}. \label{g1}
\end{gather}
Write $g(s|a,b,c,d)= g_1(s|a,b,c,d) - g_1(s t|a,b,c,d)$ for simplicity.
\end{subequations}
\end{dfn}
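For reference, the coefficient functions above can be transcribed directly (a sketch, not part of the text), with two immediate sanity checks: $g_1(1|a,b,c,d)=0$, so that $g(1|a,b,c,d)=-g_1(t|a,b,c,d)$, and $f(s|a,b,c,d)$ is symmetric in $a$, $b$, $c$, $d$.

```python
# Sketch (illustration): the coefficient functions f and g1 in sympy, with
# sanity checks g1(1) = 0 (hence g(1) = -g1(t)) and symmetry of f in a,b,c,d.
import sympy as sp

a, b, c, d, t, s = sp.symbols('a b c d t s')

def f_coef(s):
    num = (1 - a*b*c*d*s/t)*(1 - t*s)*(1 - a*b*s)*(1 - a*c*s)*(1 - a*d*s) \
          *(1 - b*c*s)*(1 - b*d*s)*(1 - c*d*s)
    den = (1 - a*b*c*d*s**2/t)*(1 - a*b*c*d*s**2)**2*(1 - a*b*c*d*t*s**2)
    return num/den

def g1_coef(s):
    return (a + b + c + d - (a*b*c + a*b*d + a*c*d + b*c*d)*s/t) \
           /(1 - a*b*c*d*s**2/t**2)*(1 - s)/(1 - t)

def g_coef(s):
    return g1_coef(s) - g1_coef(s*t)
```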
\begin{thm}\label{main2}
There exists a unique infinite transition matrix $\mathcal{C}=(\mathcal{C}_{i,j})_{i,j \in \mathbb{Z}_{\geq 0}}$
satisfying the conditions
\begin{subequations}
\begin{gather}
\mathcal{C} \text{ is upper triangular,} \\
\mathcal{C}_{i,i} = 1 \quad (i \geq 0), \label{rec2} \\
\mathcal{C}_{i,j} = \mathcal{C}_{i-1, j-1} + g\big(t^i\big) \mathcal{C}_{i, j-1} + f\big(t^i\big)
\mathcal{C}_{i+1, j-1},\label{rec3}
\end{gather}
\end{subequations}
and we have ${\bf{P}}^{(n)} = \mathcal{C} {\bf{m}}^{(n)}$ for all $n \geq 1$ $($stability$)$.
\end{thm}
\begin{rmk}By stability we mean
that the entries $\mathcal{C}_{i,j}$ of the transition matrix
$\mathcal{C}$ do not depend on the rank parameter~$n$
of the type $BC_n$ Koornwinder polynomials.
The first few terms of the transition matrix $\mathcal{C}$ read
\begin{gather*}
\left(
\begin{matrix}
1& -g_1(t) & g_1(t)^2 + f(1) & -g_1(t)^3 -g_1(t)f(1) -g_1\big(t^2\big)f(1) & & & & \cdots\\
&1& -g_1\big(t^2\big) & g_1(t)^2 + f(1) -g_1\big(t^2\big)g(t) + f(t) & & & & \cdots\\
&&1& -g_1\big(t^3\big) & & & & \cdots\\
&&&\ddots&&\ddots &
\end{matrix}
\right).
\end{gather*}
\end{rmk}
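The recursion \eqref{rec2}--\eqref{rec3} is straightforward to implement; the following sketch (an illustration) fills $\mathcal{C}$ with opaque symbols for $f$ and $g_1$ (using only $g_1(1)=0$), taking entries outside the matrix to be $0$, and reproduces the entries displayed above.

```python
# Sketch (illustration): filling the matrix C from the recursion with opaque
# symbols for f and g1, using only g1(1) = 0; entries outside the matrix are
# taken to be 0, as the displayed entries suggest.
import sympy as sp

t = sp.Symbol('t')
F = sp.Function('f')
G1 = sp.Function('g1')

def g1(s):
    return sp.Integer(0) if s == 1 else G1(s)   # g1(1) = 0, cf. the definition

def g(s):
    return g1(s) - g1(s*t)

N = 5
C = [[sp.Integer(0)]*N for _ in range(N)]
for i in range(N):
    C[i][i] = sp.Integer(1)                     # ones on the diagonal
for j in range(1, N):
    for i in range(j - 1, -1, -1):
        above = C[i-1][j-1] if i > 0 else sp.Integer(0)
        C[i][j] = sp.expand(above + g(t**i)*C[i][j-1] + F(t**i)*C[i+1][j-1])
```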
A proof of Theorem \ref{main2} is presented in Section~\ref{ProofMAIN2}.
As a consequence of Theorem \ref{main2}, we establish the following branching rule.
\begin{thm}\label{main1}
Set $P^{BC_{n-1}}_{(1^n)}(x|a,b,c,d|q,t) = 0$ and $P^{BC_{n-1}}_{(1^{-1})}(x|a,b,c,d|q,t) = 0$
for simplicity. We have
\begin{gather*}
P^{BC_{n}}_{(1^r)}(x_1, x_2, \ldots, x_n|a,b,c,d|q,t)
= P^{BC_{n-1}}_{(1^r)}(x_1, x_2, \ldots, x_{n-1}|a,b,c,d|q,t) \notag \\
\qquad{} + \bigl(x_n + 1/x_n + g\big(t^{n-r}|a,b,c,d\big) \bigr) P^{BC_{n-1}}_{(1^{r-1})}(x_1, x_2, \ldots, x_{n-1}|a,b,c,d|q,t) \notag \\
\qquad{} +f\big(t^{n-r}|a,b,c,d\big) P^{BC_{n-1}}_{(1^{r-2})}(x_1, x_2, \ldots, x_{n-1}|a,b,c,d|q,t).
\end{gather*}
\end{thm}
A proof of Theorem \ref{main1} is presented in Section~\ref{ProofMAIN1}.
An explanation is in order
concerning our plan of the proof of Theorem \ref{main2}.
\begin{dfn} \label{E_r} Define the symmetric Laurent polynomial $E_r(x)$'s by expanding the generating
function $E(x|y)$ as
\begin{gather*}
E(x|y)=\prod_{i=1}^{n} (1-y x_i)(1-y/x_i) = \sum_{r \geq 0} (-1)^r E_r(x) y^r.
\end{gather*}
\end{dfn}
Then the ordered collection $(E_r):=(E_n(x),\ldots,E_1(x),E_0(x))$
provides us with another basis of the space of
polynomials spanned by the bases
$(m_{(1^r)}):=(m_{(1^n)},\ldots,m_{(1)},m_\varnothing)$ or
$\big(P^{BC_n}_{(1^r)}\big):=\big(P^{BC_n}_{(1^n)},\ldots,P^{BC_n}_{(1)},P^{BC_n}_{\varnothing}\big)$.
Firstly, the simplest example of the transition matrix is the
one from~$(m_{(1^r)})$ to~$(E_r)$.
\begin{lem}[{\cite[Lemma~3.3]{HS}}]\label{Lem-Em} We have
\begin{gather*}
E_r(x) = \sum_{k=0}^{\lfloor{r \over 2}\rfloor} \binom{n-r+2k}{k} m_{(1^{r-2k})}(x),
\end{gather*}
where $\binom{m}{j}$ denotes the ordinary binomial coefficient.
\end{lem}
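The lemma is easy to check by brute force for small rank; the following sketch (an illustration) verifies it for $n=2$.

```python
# Sketch (illustration): brute-force check of the lemma for n = 2.
import sympy as sp

x1, x2, y = sp.symbols('x1 x2 y')
n = 2

# Generating function E(x|y) = prod_i (1 - y*x_i)(1 - y/x_i)
E = sp.expand((1 - y*x1)*(1 - y/x1)*(1 - y*x2)*(1 - y/x2))
E_r = [sp.expand((-1)**r*E.coeff(y, r)) for r in range(2*n + 1)]

# BC-type monomial symmetric Laurent polynomials with one-column diagrams
m = {0: sp.Integer(1),
     1: x1 + 1/x1 + x2 + 1/x2,
     2: sp.expand((x1 + 1/x1)*(x2 + 1/x2))}

# Lemma: E_r = sum_k binom(n-r+2k, k) * m_{(1^{r-2k})}
for r in range(n + 1):
    rhs = sum(sp.binomial(n - r + 2*k, k)*m[r - 2*k] for k in range(r//2 + 1))
    assert sp.simplify(E_r[r] - rhs) == 0
```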
Next, the transition matrix from $(E_r)$ to $\big(P^{BC_n}_{(1^r)}\big)$ has also been studied in~\cite{HS},
presented as a certain fourfold summation.
\begin{thm}[{\cite[Theorem 2.2]{HS}}]\label{HS}
We have the following fourfold summation formula for the $BC_n$
Koornwinder polynomial $P_{(1^r)}(x|a,b,c,d|q,t)$ with one column diagram.
\begin{gather*}
P_{(1^r)}(x|a,b,c,d|q,t)
=
\sum_{k,l,i,j\geq 0} (-1)^{i+j}
E_{r-2k-2l-i-j} (x)
\widehat{c}\,'_e(k,l;t^{n-r+1+i+j}) \widehat{c}_o\big(i,j; t^{n-r+1}\big),
\end{gather*}
where
\begin{subequations}\label{5}
\begin{gather}
\widehat{c}\,'_e(k,l;s) = { \big(tc^2/a^2 ; t^2\big)_k \big(sc^2t ; t^2\big)_k \big(s^2c^4/t^2 ; t^2\big)_k
\over
\big(t^2 ; t^2\big)_k \big(sc^2/t ; t^2\big)_k \big(s^2a^2c^2/t ; t^2\big)_k }{ (1/c^2 ; t)_l (s/t ; t)_{2k+l}
\over
(t ; t)_l \big(sc^2 ; t\big)_{2k+l} }\nonumber\\
\hphantom{\widehat{c}\,'_e(k,l;s) =}{}\times
{ 1-st^{2k+2l-1}
\over
1-st^{-1} } a^{2k}c^{2l}, \\
\widehat{c}_o(i,j; s) = { (-a/b ; t)_i (scd/t ; t)_i
\over
(t ; t)_i (-sac/t ; t)_i }
{ (s ; t)_{i+j} (-sac/t ; t)_{i+j} \big(s^2a^2c^2/t^3 ; t\big)_{i+j}
\over
\big(s^2abcd/t^2 ; t\big)_{i+j} \big(sac/t^{3/2} ; t\big)_{i+j} \big({-}sac/t^{3/2} ; t\big)_{i+j} } \nonumber\\
\hphantom{\widehat{c}_o(i,j; s) =}{} \times
{ (-c/d ; t)_j (sab/t ; t)_j
\over
(t ; t)_j (-sac/t ; t)_j } b^id^j.
\end{gather}
\end{subequations}
In \eqref{5} we have used the standard notation explained at the end of this section.
\end{thm}
Hence, the properties of the transition matrix from $(m_{(1^r)})$ to $\big(P^{BC_n}_{(1^r)}\big)$ can be extracted simply by combining Lemma~\ref{Lem-Em} and Theorem~\ref{HS}. One finds, however, that a slightly improved version of the fourfold summation formula
better fits our investigation of the transition matrices, explaining each degeneration step in (\ref{dscheme}) below from the point of view of the matrix inversion formula of Bressoud~\cite{B}.
\begin{thm}\label{HSnew} We have
\begin{gather*}
P_{(1^r)}(x|a,b,c,d|q,t)
=
\sum_{k,l,i,j\geq 0}\!\! (-1)^{i+j}
E_{r-2k-2l-i-j} (x)
\widehat{c}\,'_e(k,l;t^{n-r+1+i+j}) \widehat{c_o}^{\rm new}\big(i,j; t^{n-r+1}\big),
\end{gather*}
where
\begin{gather*}
\widehat{c_o}^{\rm new} (i,j;s)=
{
(-a/b;t)_i (s;t)_i (sac/t;t)_i (sad/t;t)_i (scd/t;t)_i \big({-}s^2a^2cd/t^3; t\big)_i
\over
(t;t)_i \big(s^2abcd/t^2; t\big)_i \big({-}s^2a^2cd/t^3;t^2\big)_i \big({-}s^2a^2cd/t^2; t^2\big)_i
}b^i \\
\hphantom{\widehat{c_o}^{\rm new} (i,j;s)=}{} \times {
(-c/d;t)_j \big(t^i s;t\big)_j \big({-}t^i s a^2/t;t\big)_j \big(t^{2i}s^2 a^2c^2/t^3; t\big)_j
\over
(t;t)_j \big({-}t^{2i}s^2a^2cd/t^2; t\big)_j \big(t^{2i} s^2 a^2c^2/t^3; t^2\big)_j}d^j.
\end{gather*}
\end{thm}
A proof of Theorem \ref{HSnew} is given in Section \ref{F-Koornwinder},
using Watson's transformation formula for the basic hypergeometric series
of ${}_8W_7$ type \cite[p.~43, equation~(2.5.1)]{GR}.
An interpretation of Theorem \ref{HSnew} is presented in
Section \ref{MatrixInversion}, based on Bressoud's matrix inversion. In
Section \ref{Ftr}, we find a certain five term recursion relation,
which enables one to relate
Theorem \ref{HSnew} with Theorem~\ref{main2}.
An application of Theorem \ref{HSnew} to a particular case reads as follows.
Consider the Macdonald polynomials of type $(B_n,B_n)$ \cite{RW, Mac, St}
\begin{gather*}
P^{(B_n,B_n)}_{(1^r)}(x|a;q,t) = P_{(1^r)}\big(x|q^{1/2},-q^{1/2},-1,a|q,t\big).
\end{gather*}
Note that in this specialization of the parameters
$(a,b,c,d)\rightarrow \big(q^{1/2},-q^{1/2},-1,a\big)$,
we have $\widehat{c}\,'_e(k,l;s)=0$ when $l>0$,
and $\widehat{c_o}^{\rm new} (i,j;s)=0$ when $i>0$. Hence the fourfold
summation in Theorem~\ref{HSnew} degenerates to the
following twofold one.
\begin{cor}We have
\begin{gather*}
P^{(B_n,B_n)}_{(1^r)}(x|a;q,t)
=
\sum_{k,j\geq 0} (-1)^{j}
E_{r-2k-j} (x)
\widehat{c}'_e(k,0;t^{n-r+1+j}) \widehat{c_o}^{\rm new}\big(0,j; t^{n-r+1}\big),
\end{gather*}
where
\begin{gather*}
\widehat{c}\,'_e(k,0;s) = { \big(t/q ; t^2\big)_k \big(st ; t^2\big)_k \big(s^2/t^2 ; t^2\big)_k
\over
\big(t^2 ; t^2\big)_k \big(s/t ; t^2\big)_k \big(s^2q/t ; t^2\big)_k }
{ (s/t ; t)_{2k}
\over
(s ; t)_{2k} }
{ 1-st^{2k-1}
\over
1-st^{-1} } q^{k}, \\
\widehat{c_o}^{\rm new} (0,j;s)
=
{
(1/a;t)_j (s;t)_j (-s q/t;t)_j \big(s^2 q/t^3; t\big)_j
\over
(t;t)_j \big(s^2qa/t^2; t\big)_j \big(s^2 q/t^3; t^2\big)_j
}
a^j.
\end{gather*}
\end{cor}
\begin{cor}
When $a=t=q$, the Macdonald polynomials of type $(B_n,B_n)$
become the Schur polynomials
$s_{\lambda}(x)=s^{B_n}_{\lambda}(x)$ of type $B_n$.
It holds that
\begin{align}
s^{B_n}_{(1^r)}(x)&=P^{(B_n,B_n)}_{(1^r)}(x|q;q,q)=E_r(x)+E_{r-1}(x)\nonumber\\
&=\sum_{j=0}^{\lfloor{r \over 2}\rfloor}\binom{n-r+2j}{j} m_{(1^{r-2j})}(x)
+\sum_{j=0}^{\lfloor{r-1 \over 2}\rfloor}\binom{n-r+2j+1}{j} m_{(1^{r-2j-1})}(x), \label{s-m}
\end{align}
where $\binom{m}{j}= {m(m-1)\cdots(m-j+1) \over j!}$ denotes the ordinary binomial coefficient.
\end{cor}
\begin{rmk}
The first few terms of (\ref{s-m}) read
\begin{gather*}
\left(
\begin{matrix}
s_{(1^{n})}^{B_n}\vspace{1mm}\\
s_{(1^{n-1})}^{B_n}\vspace{1mm}\\
s_{(1^{n-2})}^{B_n}\vspace{1mm}\\
s_{(1^{n-3})}^{B_n}\vspace{1mm}\\
\vdots
\end{matrix}
\right)=
\left(
\begin{array}{@{}ccccccccccc@{}}
1&1 &2 &3 & 6 &10 & 20 &35 & 70 &126&\cdots\\
&1&1 & 3 &4 & 10 &15 &35 &56& 126& \\
&&1& 1& 4 &5 & 15 &21 &56 &84& \cdots\\
&&&1&1 & 5 &6 & 21 &28 &84 & \\
&&&&\ddots& &\ddots& & \ddots
\end{array}
\right)
\left(
\begin{matrix}
m_{(1^{n})}\\
m_{(1^{n-1})}\\
m_{(1^{n-2})}\\
m_{(1^{n-3})}\\
\vdots
\end{matrix}
\right).
\end{gather*}
\end{rmk}
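The rows of this matrix can be reproduced directly from~(\ref{s-m}): the coefficient of $m_{(1^{r-l})}$ in $s^{B_n}_{(1^r)}$ equals $\binom{n-r+l}{l/2}$ for $l$ even and $\binom{n-r+l}{(l-1)/2}$ for $l$ odd. A short Python sketch (the helper name is ours) recomputing the displayed rows:

```python
from math import comb

def schur_to_monomial_row(n, r, ncols):
    """Coefficient of m_{(1^{r-l})} in s^{B_n}_{(1^r)} for l = 0, 1, ...,
    read off from equation (s-m)."""
    row = []
    for l in range(ncols):
        if l > r:
            row.append(0)
        elif l % 2 == 0:
            row.append(comb(n - r + l, l // 2))
        else:
            row.append(comb(n - r + l, (l - 1) // 2))
    return row

# the first two rows displayed in the remark (r = n and r = n - 1)
n = 20
assert schur_to_monomial_row(n, n, 10) == [1, 1, 2, 3, 6, 10, 20, 35, 70, 126]
assert schur_to_monomial_row(n, n - 1, 9) == [1, 1, 3, 4, 10, 15, 35, 56, 126]
```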
As another application of our results obtained in this paper,
we calculate the transition matrix from the Schur polynomials to the Hall--Littlewood polynomials,
namely the Kostka polynomials of type $B_n$, associated with one column diagrams.
As for the Kostka polynomials of types
$C_n$ and $D_n$ associated with one column diagrams, see~\cite{HS}.
\begin{dfn}
Let $K^{B_n}_{(1^r)(1^{r-l})}(t)$ be the transition coefficients defined by
\begin{gather*}
s^{B_n}_{(1^r)}(x)=\sum_{l=0}^{r}K^{B_n}_{(1^r)(1^{r-l})}(t)P^{(B_n,B_n)}_{(1^{r-l})}(x|t;0,t).
\end{gather*}
\end{dfn}
\begin{thm}\label{KOSTKAthm}
The coefficient $K^{B_n}_{(1^r)(1^{r-l})}(t)$
is a polynomial in $t$ with nonnegative integral coefficients.
Explicitly, we have
\begin{gather*}
K^{B_n}_{(1^r) (1^{r-l})}(t)
=
\begin{cases} \displaystyle
t^L
\left[ n-r+2L \atop L \right]_{t^2},
& l=2L, \vspace{1mm}\\
\displaystyle
t^{L+n-r+1}
\left[ n-r+2L+1 \atop L \right]_{t^2},
& l=2L+1.
\end{cases}
\end{gather*}
Here we have used the notation for
the $q$-integer $[n]_q$, the $q$-factorial $ [n]_q!$ and the $q$-binomial coefficient $\left[m\atop j\right]_q$ as
\begin{gather*}
[n]_q={1-q^n \over 1-q}, \qquad [n]_q!=[1]_q[2]_q\cdots [n]_q,
\qquad \left[m\atop j\right]_q=\prod_{k=1}^j {[m-k+1]_q\over [k]_q}={[m]_q!\over [j]_q![m-j]_q!}.
\end{gather*}
\end{thm}
We prove this in Section~\ref{Kostka}.
The present article is organized as follows.
In Section~\ref{F-Koornwinder}, we derive a
slightly improved version of the fourfold
summation formula for the Koornwinder polynomial with one column diagram.
See~\cite{HS} for the original version.
In Section~\ref{MatrixInversion}, we give the transition matrices from the Koornwinder polynomials
$P_{(1^r)}(x|a,b,c,d|q,t)$
with one column diagrams,
to certain degenerations of the
Koornwinder polynomials $P_{(1^r)}(x|a,-a,c,d|q,t)$, $P_{(1^r)}(x|a,-a,c,-c|q,t)$,
$P_{(1^r)}\big(x|t^{1/2}c,-t^{1/2}c,c,-c|q,t\big)$ and
$P_{(1^r)}\big(x|t^{1/2},-t^{1/2},1,-1|q,t\big)$.
We show that these
transition matrices are described by the matrix inversion formula of Bressoud.
In Section~\ref{Ftr}, we present some technical preparations for our proof of Theorem~\ref{main2}.
Namely, we give a certain set of five term relations for ${}_{4}\phi_3$ series of the basic hypergeometric
series associated with the transition matrices.
In Section~\ref{ProofMAIN}, we prove Theorems~\ref{main2} and~\ref{main1}.
In Section~\ref{KostkaB}, we study some degenerate cases, including the calculation of
the Kostka polynomials of type~$B_n$.
In Section~\ref{SolOfRec}, we give a~solution to the recursion relation of the
transition matrix in Theorem~\ref{main2}.
In the appendix, we recall briefly the asymptotically free eigenfunctions of the
Askey--Wilson polynomials and discuss the
relation of the transition matrix. In addition,
we present a conjecture for the asymptotically free eigenfunctions of
the $B_n$ $q$-Toda operator.
Throughout the paper,
we use the standard notation (see~\cite{GR})
\begin{gather*}
(z;q)_\infty =\prod_{k=0}^{\infty}\big(1-q^k z\big),
\qquad
(z;q)_k=\frac{(z;q)_{\infty}}{\big(q^kz;q\big)_{\infty}},\qquad k\in\mathbb{Z}, \\
(a_1, a_2, \ldots, a_r;q)_k = (a_1;q)_k (a_2;q)_k \cdots (a_r;q)_k,
\qquad k\in\mathbb{Z}, \\
{}_{r+1}\phi_r\left[ {a_1,a_2,\ldots,a_{r+1}\atop b_1,\dots,b_{r}};q,z\right]=
\sum_{n=0}^\infty {(a_1, a_2, \ldots, a_{r+1};q)_n \over
(q, b_1, b_2, \ldots, b_{r};q)_n} z^n , \\
{}_{r+1}W_r(a_1;a_4,a_5,\ldots,a_{r+1};q,z)=
{}_{r+1}\phi_r\left[ {a_1,q a_1^{1/2},-q a_1^{1/2},a_4,\ldots,a_{r+1}\atop
a_1^{1/2},-a_1^{1/2},q a_1/a_4,\dots,qa_1/a_{r+1}};q,z\right].
\end{gather*}
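The finite $q$-shifted factorials and terminating basic hypergeometric series above translate directly into code. As a sanity check, the sketch below verifies the $q$-Chu--Vandermonde summation ${}_2\phi_1\big(a,q^{-n};c;q,q\big)=a^n(c/a;q)_n/(c;q)_n$ (a standard identity from~\cite{GR}, not stated in the text) with exact rational arithmetic; the helper names are ours:

```python
from fractions import Fraction

def qpoch(z, q, k):
    """Finite q-Pochhammer symbol (z; q)_k = prod_{j=0}^{k-1} (1 - z q^j)."""
    out = Fraction(1)
    for j in range(k):
        out *= 1 - z * q ** j
    return out

def phi(num, den, q, z, nterms):
    """Truncated {}_{r+1}phi_r with upper parameters `num` and lower
    parameters `den`; the (q; q)_n factor is supplied automatically."""
    total = Fraction(0)
    for n in range(nterms):
        term = z ** n
        for a in num:
            term *= qpoch(a, q, n)
        term /= qpoch(q, q, n)
        for b in den:
            term /= qpoch(b, q, n)
        total += term
    return total

# q-Chu--Vandermonde: 2phi1(a, q^{-n}; c; q, q) = a^n (c/a; q)_n / (c; q)_n
q, a, c, n = Fraction(1, 2), Fraction(1, 3), Fraction(1, 5), 3
lhs = phi([a, q ** (-n)], [c], q, q, n + 1)   # (q^{-n}; q)_k = 0 for k > n
rhs = a ** n * qpoch(c / a, q, n) / qpoch(c, q, n)
assert lhs == rhs
```

Since $(q^{-n};q)_k$ vanishes for $k>n$, summing $n+1$ terms evaluates the terminating series exactly.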
\section[Fourfold summation formula for Koornwinder polynomials with one column diagram]{Fourfold summation formula for Koornwinder polynomials\\ with one column diagram}\label{F-Koornwinder}
Recall the definition of the Koornwinder polynomials.
Let $a$, $b$, $c$, $d$, $q$, $t$ be complex parameters. We assume that $|q|<1$.
Set $\alpha=(abcd/q)^{1/2}$ for simplicity.
Let $x=(x_1,\ldots,x_n)$ be a sequence of independent indeterminates.
The hyperoctahedral group of rank $n$ is denoted by $W_n = \mathbb{Z}_2^n \rtimes \mathfrak{S}_n$.
Let $\mathbb{C}\big[x_1^{\pm}, x_2^{\pm}, \ldots, x_n^{\pm}\big]^{W_n}$ be
the ring of $W_n$-invariant Laurent polynomials in~$x$.
For a partition $\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_n)$ of length $n$,
i.e., $\lambda_i\in \mathbb{Z}_{\geq 0}$ and $\lambda_1\geq \cdots\geq \lambda_n$, we denote
by $m_{\lambda}=m_{\lambda}(x)$ the monomial symmetric polynomial, defined as the orbit sum of monomials
\begin{gather*}
m_{\lambda} = {1 \over |\text{Stab}(\lambda)|} \sum_{\mu \in W_n \cdot \lambda} \prod_{i} x_i^{\mu_i},
\end{gather*}
where $\text{Stab}(\lambda)=\{ s \in W_n \,|\, s \lambda = \lambda \}$.
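In this orbit sum the index $\mu$ runs over the group translates $w\lambda$, $w\in W_n$, counted with multiplicity; dividing by $|\text{Stab}(\lambda)|$ then makes $m_\lambda$ monic. A small Python sketch illustrating the definition for $n=2$ (so that, e.g., $m_{(1,0)}=x_1+x_1^{-1}+x_2+x_2^{-1}$):

```python
from fractions import Fraction
from itertools import permutations, product

def m_lambda(lam, x):
    """(1/|Stab(lam)|) * sum over w in W_n = Z_2^n x S_n of the monomial
    x^{w(lam)}; the stabilizer order is counted along the way."""
    n = len(lam)
    total, stab = Fraction(0), 0
    for perm in permutations(range(n)):
        for signs in product((1, -1), repeat=n):
            mu = tuple(signs[i] * lam[perm[i]] for i in range(n))
            if mu == tuple(lam):
                stab += 1
            term = Fraction(1)
            for xi, e in zip(x, mu):
                term *= xi ** e
            total += term
    return total / stab

x = [Fraction(2), Fraction(3)]
assert m_lambda((1, 0), x) == sum(xi + 1 / xi for xi in x)
assert m_lambda((1, 1), x) == (x[0] + 1 / x[0]) * (x[1] + 1 / x[1])
```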
Koornwinder's $q$-difference operator ${\mathcal D}_x={\mathcal D}_x(a,b,c,d|q,t)$ \cite{Ko} reads
\begin{gather*}
{\mathcal D}_x=
\sum_{i=1}^n {(1-ax_i)(1-bx_i)(1-cx_i)(1-dx_i)\over
\alpha t^{n-1}\big(1-x_i^2\big)(1-qx_i^2)}
\prod_{j\neq i} {(1-t x_ix_j)(1-t x_i/x_j)\over (1-x_ix_j)(1-x_i/x_j)}
\big(T_{q,x_i}^{+1}-1\big) \\
\hphantom{{\mathcal D}_x=}{} +
\sum_{i=1}^n {(1-a/x_i)(1-b/x_i)(1-c/x_i)(1-d/x_i)\over
\alpha t^{n-1}\big(1-1/x_i^2\big)\big(1-q/x_i^2\big)}\\
\hphantom{{\mathcal D}_x=}{}\times
\prod_{j\neq i} {(1-t x_j/x_i)(1-t /x_ix_j)\over (1-x_j/x_i)(1-1/x_ix_j)}
\big(T_{q,x_i}^{-1}-1\big),
\end{gather*}
where we have used the notation $T_{q,x_i}^{\pm1}f(x_1,\ldots,x_i,\ldots ,x_n)=f\big(x_1,\ldots,q^{\pm 1}x_i,\ldots ,x_n\big)$.
The Koornwinder polynomial
$P_\lambda(x)=P_\lambda(x|a,b,c,d|q,t)\in \mathbb{C}\big[x_1^{\pm 1},\ldots,x_n^{\pm 1}\big]^{W_n}$
is uniquely characterized by the conditions
\begin{gather*}
(a) \quad \mbox{ $P_\lambda(x)=m_\lambda(x)+\mbox{lower terms}$ w.r.t.\ the dominance ordering}, \\
(b) \quad \mbox{ ${\mathcal D}_x P_\lambda=d_\lambda P_\lambda$.}
\end{gather*}
The eigenvalue $d_\lambda$ is explicitly written as
\begin{gather*}
d_\lambda=\sum_{j=1}^n \big\langle abcdq^{-1}t^{2n-2j}q^{\lambda_j}\big\rangle
\big\langle q^{\lambda_j}\big\rangle
=
\sum_{j=1}^n \big\langle \alpha t^{n-j}q^{\lambda_j}; \alpha t^{n-j}\big\rangle,
\end{gather*}
where we have used the notations
$\langle x\rangle=x^{1/2}-x^{-1/2}$ and
$\langle x;y\rangle=\langle xy\rangle\langle x/y\rangle=x+x^{-1}-y-y^{-1}$
for simplicity of display.
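The bracket notation can be illustrated on perfect squares, where the square roots stay rational; the following sketch (helper names ours) checks $\langle x;y\rangle=\langle xy\rangle\langle x/y\rangle=x+x^{-1}-y-y^{-1}$ exactly:

```python
from fractions import Fraction as F
from math import isqrt

def fsqrt(x):
    """Exact square root of a Fraction that is a perfect square."""
    num, den = x.numerator, x.denominator
    rn, rd = isqrt(num), isqrt(den)
    assert rn * rn == num and rd * rd == den
    return F(rn, rd)

def bra(x):          # <x> = x^{1/2} - x^{-1/2}
    r = fsqrt(x)
    return r - 1 / r

def bra2(x, y):      # <x; y> = <xy><x/y>
    return bra(x * y) * bra(x / y)

x, y = F(4), F(9, 4)
assert bra2(x, y) == x + 1 / x - y - 1 / y
```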
The aim of this paper is to study the Koornwinder polynomials
$P_{(1^r)}(x|a,b,c,d|q,t)$ \mbox{($0\!\leq\! r\!\leq\! n$)} associated with the one column diagrams,
and establish some explicit formulas for them. Note that we will treat the
most general six parameter case with arbitrary $a$, $b$, $c$, $d$, $q$ and~$t$ for
any number~$n$ of variables.
No attempt, however, is made to
investigate the cases with two columns or more complicated partitions in this work.
The only exception to this rule is the Appendix where we present a conjecture
about the asymptotically free solution to the $q$-difference Toda equation of type~$B_n$.
Recall the symmetric Laurent polynomials $E_r(x)$ in Definition~\ref{E_r}.
Our starting point in this paper is the fourfold summation formula
in Theorem~\ref{HS} (Theorem~2.2 in~\cite{HS}), namely
\begin{gather*}
P_{(1^r)}(x|a,b,c,d|q,t)=
\sum_{k,l,i,j\geq 0} (-1)^{i+j}
E_{r-2k-2l-i-j} (x)
\widehat{c}\,'_e\big(k,l;t^{n-r+1+i+j}\big) \widehat{c}_o\big(i,j; t^{n-r+1}\big).
\end{gather*}
We first need to derive a slightly modified version of this fourfold summation
formula as stated in Theorem~\ref{HSnew},
obtaining a better description of the transition matrices
associated with the following degeneration scheme:
\begin{gather}
P_{(1^r)}(x|a,b,c,d|q,t)\longleftrightarrow P_{(1^r)}(x|a,-a,c,d|q,t)
\longleftrightarrow P_{(1^r)}(x|a,-a,c,-c|q,t) \notag\\
\qquad{} \longleftrightarrow P_{(1^r)}\big(x|t^{1/2}c,-t^{1/2}c,c,-c|q,t\big)
\longleftrightarrow P_{(1^r)}\big(x|t^{1/2},-t^{1/2},1,-1|q,t\big). \label{dscheme}
\end{gather}
We prove that the transition matrices associated with
each of these degeneration steps are given in terms of the matrix inversion formula of
Bressoud.
In order to prove Theorem~\ref{HSnew} we require the following proposition.
\begin{prp} \label{prop1}
$\sum\limits_{i = 0}^m \widehat{c_o}^{\rm new} (i,m-i;s) =\sum\limits_{i = 0}^m \widehat{c_o} (i,m-i;s)$.
\end{prp}
\begin{proof} We have
\begin{gather}
\sum_{i = 0}^m \widehat{c_o}^{\rm new} (i,m-i;s)
= {(-c/d;t)_m (s;t)_m \big({-}sa^2/t;t\big)_m \big(s^2a^2c^2/t^3; t\big)_m
\over
(t;t)_m \big(sac/t^{3/2};t\big)_m \big({-}sac/t^{3/2};t\big)_m \big({-}s^2a^2cd/t^2; t\big)_m}
d^m\label{8phi7} \\
\hphantom{\sum_{i = 0}^m \widehat{c_o}^{\rm new} (i,m-i;s)=}{} \times
{}_8W_7\big({-}s^2a^2cd/t^3; t^m s^2a^2c^2/t^3, sad/t, -a/b, scd/t, t^{-m}; t, -tb/c\big).\nonumber
\end{gather}
By Watson's transformation formula \cite[p.~43, equation~(2.5.1)]{GR},
the ${}_8W_7$ series in~$(\ref{8phi7})$ equals
\begin{gather*}
{
\big({-}s^2a^2cd/t^2, sab/t; t\big)_m
\over
\big(s^2abcd/t^2, -sa^2/t; t\big)_m
}
{}_4\phi_3
\left[
{
t^{-m}, -a/b, scd/t, -t^{-m+2}/sac
\atop
-t^{-m+1}d/c, -sac/t, t^{-m+2}/sab
}
; t, t
\right].
\end{gather*}
On the other hand, we have
\begin{gather}
\sum_{i = 0}^m \widehat{c_o} (i,m-i;s) =
{
\big(s, s^2a^2c^2/t^3, -c/d, sab/t ; t\big)_m
\over
\big(s^2abcd/t^2, sac/t^{3/2}, -sac/t^{3/2}, t ; t\big)_m
}
d^m\nonumber\\
\hphantom{\sum_{i = 0}^m \widehat{c_o} (i,m-i;s) =}{}\times
{}_4\phi_3
\left[
{
-a/b, scd/t, t^{-m}, -t^{-m+2}/sac
\atop
-sac/t, -t^{-m+1}d/c, t^{-m+2}/sab
}
; t, t
\right]. \label{co_4phi3}
\end{gather}
This shows that $\sum\limits_{i = 0}^m \widehat{c_o}^{\rm new} (i,m-i;s) =\sum\limits_{i = 0}^m \widehat{c_o} (i,m-i;s)$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{HSnew}.] By Theorem \ref{HS} we have
\begin{gather*}
P_{(1^r)}(x|a,b,c,d|q,t) =
\sum_{k,l,i,j\geq 0} (-1)^{i+j}
E_{r-2k-2l-i-j} (x)
\widehat{c}\,'_e\big(k,l;t^{n-r+1+i+j}\big) \widehat{c}_o\big(i,j; t^{n-r+1}\big) \\
\qquad{} =
\sum_{k,l \geq 0} \sum_{m \geq 0} (-1)^m
\widehat{c}\,'_e\big(k,l;t^{n-r+1+m}\big) E_{r-2k-2l-m} (x)
\sum_{i = 0}^m
\widehat{c_o}\big(i,m-i; t^{n-r+1}\big) \\
\qquad{} =
\sum_{k,l \geq 0} \sum_{m \geq 0} (-1)^m
\widehat{c}\,'_e\big(k,l;t^{n-r+1+m}\big) E_{r-2k-2l-m} (x)
\sum_{i = 0}^m
\widehat{c_o}^{\rm new}\big(i,m-i; t^{n-r+1}\big).
\end{gather*}
This completes the proof of Theorem \ref{HSnew}.
\end{proof}
Now we turn to an explanation of the meaning of the new fourfold summation formula
in Theorem~\ref{HSnew}, from the point of view of the degeneration scheme~(\ref{dscheme}).
\begin{lem}\label{lem-1}
We have
\begin{gather*}
\widehat{c}\,'_e(k,l;s) =
{ \big(tc^2/a^2 ; t^2\big)_k \big(sc^2t ; t^2\big)_k \big(s^2c^4/t^2 ; t^2\big)_k
\over
\big(t^2 ; t^2\big)_k \big(sc^2/t ; t^2\big)_k \big(s^2a^2c^2/t ; t^2\big)_k }
{ \big(1/c^2 ; t\big)_l (s/t ; t)_{2k+l}
\over
(t ; t)_l \big(sc^2 ; t\big)_{2k+l} }
{ 1-st^{2k+2l-1}
\over
1-st^{-1} } a^{2k}c^{2l}\\
\hphantom{\widehat{c}\,'_e(k,l;s)}{} =
{ \big(tc^2/a^2 ; t^2\big)_k \big(sc^2t ; t^2\big)_k \big(s^2c^4/t^2 ; t^2\big)_k
\over
\big(t^2 ; t^2\big)_k \big(sc^2/t ; t^2\big)_k \big(s^2a^2c^2/t ; t^2\big)_k }
{ (s/t ; t)_{2k}
\over
\big(sc^2 ; t\big)_{2k} }
{ 1-st^{2k-1}
\over
1-st^{-1} } a^{2k}\\
\hphantom{\widehat{c}\,'_e(k,l;s)=}{} \times
{ \big(1/c^2 ; t\big)_l \big(t^{2k}s/t ; t\big)_{l}
\over
(t ; t)_l \big(t^{2k}sc^2 ; t\big)_{l} }
{ 1-st^{2k}t^{2l-1}
\over
1-st^{2k}t^{-1} } c^{2l}\\
\hphantom{\widehat{c}\,'_e(k,l;s)}{} = \widehat{c}\,'_e(k,0;s) \widehat{c}\,'_e\big(0,l;t^{2k}s\big) ,
\end{gather*}
and
\begin{gather*}
\widehat{c_o}^{\rm new} (i,j;s) =
{
(-a/b;t)_i (s;t)_i (sac/t;t)_i (sad/t;t)_i (scd/t;t)_i \big({-}s^2a^2cd/t^3; t\big)_i
\over
(t;t)_i \big(s^2abcd/t^2; t\big)_i \big({-}s^2a^2cd/t^3;t^2\big)_i \big({-}s^2a^2cd/t^2; t^2\big)_i
}
b^i \\
\hphantom{\widehat{c_o}^{\rm new} (i,j;s)=}{}\times {
(-c/d;t)_j \big(t^i s;t\big)_j \big({-}t^i s a^2/t;t\big)_j \big(t^{2i}s^2 a^2c^2/t^3; t\big)_j
\over
(t;t)_j \big({-}t^{2i}s^2a^2cd/t^2; t\big)_j \big(t^{2i} s^2 a^2c^2/t^3; t^2\big)_j
}
d^j \notag \\
\hphantom{\widehat{c_o}^{\rm new} (i,j;s)}{} = \widehat{c_o}^{\rm new} (i,0;s)\widehat{c_o}^{\rm new} \big(0,j;t^i s\big).
\end{gather*}
Hence we have the following recursive structure:
\begin{gather*}
P_{(1^r)}(x|a,b,c,d|q,t) =
\sum_{k,l,i,j\geq 0} (-1)^{i+j}
E_{r-2k-2l-i-j} (x)
\widehat{c}\,'_e\big(k,l;t^{n-r+1+i+j}\big) \widehat{c}_o\big(i,j; t^{n-r+1}\big) \\
\hphantom{P_{(1^r)}(x|a,b,c,d|q,t)}{} =
\sum_{i\geq 0}(-1)^i \widehat{c_o}^{\rm new} (i,0;s)
\Biggl(
\sum_{j\geq 0} (-1)^j \widehat{c_o}^{\rm new} \big(0,j;t^i s\big)\\
\hphantom{P_{(1^r)}(x|a,b,c,d|q,t)=}{} \times \Biggl(
\sum_{k\geq 0} \widehat{c}\,'_e\big(k,0;t^{i+j}s\big) \Biggl(
\sum_{l\geq 0} \widehat{c}\,'_e\big(0,l;t^{i+j+2k}s\big)E_{r-2k-2l-i-j} (x)\Biggr)\Biggr)\Biggr) ,
\end{gather*}
where $s=t^{n-r+1}$.
\end{lem}
\begin{dfn} \label{Pdefshort} We denote the specialization of the parameters in the degeneration scheme~(\ref{dscheme}) by the Roman numerals {\rm IV, III, II, I} as follows:
\begin{align*}
\begin{array}{llllll}
&\Bigl|& (a,b,c,d) &\Bigl|&P^{BC_n}_{(1^r)}(x|a,b,c,d|q,t)\\\hline
{\rm IV}&\Bigl|& (a,b,c,d)&\Bigl|&P^{BC_n,{\rm IV}}_{(1^r)}(x)=P^{BC_n}_{(1^r)}(x|a,b,c,d|q,t)\\
{\rm III}&\Bigl|&(a,-a,c,d)&\Bigl|&
P^{BC_n,{\rm III}}_{(1^r)}(x)=P^{BC_n}_{(1^r)}(x|a,-a,c,d|q,t)\\
{\rm II}&\Bigl|&(a,-a,c,-c)&\Bigl|&
P^{BC_n,{\rm II}}_{(1^r)}(x)=P^{BC_n}_{(1^r)}(x|a,-a,c,-c|q,t)\\
{\rm I}&\Bigl|&\big(t^{1/2}c,-t^{1/2}c,c,-c\big)&\Bigl|&
P^{BC_n,{\rm I}}_{(1^r)}(x)=P^{BC_n}_{(1^r)}\big(x|t^{1/2}c,-t^{1/2}c,c,-c|q,t\big)\\\hline
&\Bigl|&\big(t^{1/2},-t^{1/2},1,-1\big)&\Bigl|&
E_r(x)=P^{BC_n}_{(1^r)}\big(x|t^{1/2},-t^{1/2},1,-1|q,t\big)
\end{array}
\end{align*}
\end{dfn}
\begin{lem}\label{lem-2}
When
the parameters are in stratum {\rm III},
we have $\widehat{c_o}^{\rm new} (i,0;s)=0$ for $i>0$.
When
the parameters are in stratum {\rm II},
we have $\widehat{c_o}^{\rm new} (i,0;s)=0$ for $i>0$ and
$\widehat{c_o}^{\rm new} (0,j;s)=0$ for $j>0$.
When
the parameters are in stratum {\rm I},
we have $\widehat{c_o}^{\rm new} (i,0;s)=0$ for $i>0$,
$\widehat{c_o}^{\rm new} (0,j;s)=0$ for $j>0$ and
$\widehat{c}\,'_e(k,0;s)=0$ for $k>0$.
When $(a,b,c,d)=\big(t^{1/2},-t^{1/2},1,-1\big)$, we have
$P^{BC_n}_{(1^r)}\big(x|t^{1/2},-t^{1/2},1,-1|q,t\big)=E_r(x)$.
\end{lem}
\begin{proof} These immediately follow from the definitions of $\widehat{c_o}^{\rm new} (i,j;s)$ and $\widehat{c}\,'_e(k,l;s)$ and
Theorem~\ref{HSnew}.
\end{proof}
\begin{thm} \label{dmat} We have
\begin{subequations}\label{4degs}
\begin{gather}
P^{BC_n,{\rm IV}}_{(1^r)}(x)=
\sum_{i \geq 0}
(-1)^i \widehat{c_o}^{\rm new}\big(i,0 ;t^{n-r+1}\big)
P^{BC_n,{\rm III}}_{(1^{r-i})}(x), \label{deg1} \\
P^{BC_n,{\rm III}}_{(1^r)}(x) =
\sum_{j \geq 0} (-1)^j \widehat{c_o}^{\rm new}\big(0,j ;t^{n-r+1}\big)
P^{BC_n,{\rm II}}_{(1^{r-j})}(x), \label{deg2} \\
P^{BC_n,{\rm II}}_{(1^{r})}(x) =
\sum_{k \geq 0} \widehat{c_e'}\big(k,0 ;t^{n-r+1}\big)
P^{BC_n,{\rm I}}_{(1^{r-2k})}(x), \label{deg3} \\
P^{BC_n,{\rm I}}_{(1^{r})}(x)=\sum_{l \geq 0} \widehat{c_e'}\big(0,l ;t^{n-r+1}\big)E_{r-2l}(x). \label{deg4}
\end{gather}
\end{subequations}
Here we have used the shorthand notation in Definition~{\rm \ref{Pdefshort}}
\begin{alignat*}{3}
&P^{BC_n,{\rm IV}}_{(1^r)}(x)=P^{BC_n}_{(1^r)}(x|a,b,c,d|q,t), \qquad&&
P^{BC_n,{\rm III}}_{(1^r)}(x)=P^{BC_n}_{(1^r)}(x|a,-a,c,d|q,t),& \\
&P^{BC_n,{\rm II}}_{(1^r)}(x)=P^{BC_n}_{(1^r)}(x|a,-a,c,-c|q,t), \qquad&&
P^{BC_n,{\rm I}}_{(1^r)}(x)=P^{BC_n}_{(1^r)}\big(x|t^{1/2}c,-t^{1/2}c,c,-c|q,t\big).&
\end{alignat*}
\end{thm}
\begin{proof}
First, set the parameters $(a,b,c,d)$ as in stratum I. Then
in view of Lemmas~\ref{lem-1} and~\ref{lem-2}, we have~(\ref{deg4}).
Next,
when we go up by one step to stratum~II,
we have~(\ref{deg3}) from~(\ref{deg4}) and
Lemmas~\ref{lem-1} and~\ref{lem-2}.
In the same way,
when we go up to stratum~III,
we have~(\ref{deg2}) from~(\ref{deg3}) and
Lemmas~\ref{lem-1} and~\ref{lem-2}.
Going up one more time to the top stratum IV,
we have~(\ref{deg1}) from~(\ref{deg2}),
Lemmas~\ref{lem-1} and~\ref{lem-2}.
This completes the proof of Theorem~\ref{dmat}.
\end{proof}
\section[Matrix inversions for degeneration scheme of Koornwinder polynomials]{Matrix inversions for degeneration scheme\\ of Koornwinder polynomials} \label{MatrixInversion}
In this section we investigate the transition matrices
appearing in Theorem \ref{dmat} and their inverse matrices,
in terms of the matrix inversion formula of Bressoud \cite{B}.
\begin{thm}[{\cite[p.~1, Theorem]{B}, \cite[p.~5, Corollary]{L}}] \label{Bressoud-1} Define the infinite lower triangular
matrix
$\mathcal{M}(u,v;x,y;q)=(\mathcal{M}_{i,j}(u,v;x,y;q))_{0\leq i,j<\infty}$ with entries given by
\begin{gather*}
\mathcal{M}_{r,r-2i}(u,v;x,y;q)
=
y^i v^i { (x/y ; q)_i \over (q ; q)_i}
{ \big(u q^{r-2i} ; q\big)_{2i} \over \big(uxq^{r-i} ; q\big)_i \big(uyq^{r-2i+1} ; q\big)_i }, \qquad
r, i \in \mathbb{Z}_{\geq 0}, \ i \leq \lfloor r/2 \rfloor ,
\end{gather*}
and zero otherwise. Then we have
$\mathcal{M}(u,v;x,y;q) \mathcal{M}(u,v;y,z;q)=\mathcal{M}(u,v;x,z;q)$.
In particular, $\mathcal{M}(u,v;x,y;q)$ and $\mathcal{M}(u,v;y,x;q)$ are mutually inverse.
\end{thm}
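Theorem~\ref{Bressoud-1} can be checked numerically for small matrix sizes. The sketch below (helper names ours) builds the top-left corner of $\mathcal{M}(u,v;x,y;q)$ at generic rational parameters and verifies that $\mathcal{M}(u,v;x,y;q)\mathcal{M}(u,v;y,x;q)$ is the identity, using exact rational arithmetic:

```python
from fractions import Fraction as F

def qpoch(z, q, k):
    """Finite q-Pochhammer symbol (z; q)_k."""
    out = F(1)
    for j in range(k):
        out *= 1 - z * q ** j
    return out

def bressoud_M(u, v, x, y, q, N):
    """The N x N top-left corner of Bressoud's matrix M(u,v;x,y;q);
    entries vanish unless the row and column indices have equal parity."""
    M = [[F(0)] * N for _ in range(N)]
    for r in range(N):
        for i in range(r // 2 + 1):
            M[r][r - 2 * i] = (y ** i * v ** i
                * qpoch(x / y, q, i) / qpoch(q, q, i)
                * qpoch(u * q ** (r - 2 * i), q, 2 * i)
                / (qpoch(u * x * q ** (r - i), q, i)
                   * qpoch(u * y * q ** (r - 2 * i + 1), q, i)))
    return M

def matmul(A, B):
    N = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

u, v, x, y, q = F(1, 7), F(1, 3), F(2, 5), F(3, 11), F(1, 2)
N = 7
P = matmul(bressoud_M(u, v, x, y, q, N), bressoud_M(u, v, y, x, q, N))
assert all(P[i][j] == (1 if i == j else 0) for i in range(N) for j in range(N))
```

This is only a finite-size spot check at one generic parameter point, not a proof, but it exercises every Pochhammer factor in the stated entries.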
\begin{dfn}\label{Bressoud-2}
Set
\[ d_{\mathcal{M}}(u,v)_r = { \big(t^2 v^{1/2} ; t\big)_r \over \big(u^{1/2} ; t\big)_r } \big(u^{1/4}/v^{3/4}\big)^r.
\]
Let $\widetilde{\mathcal{M}}(u,v;x,y;t)$ denote the conjugation of the matrix $\mathcal{M}\big(u,v;x,y;t^2\big)$
by the $d_{\mathcal{M}}(u,v)_r$ with entries
\begin{gather*}
\widetilde{\mathcal{M}}_{r,r-2i}(u,v;x,y;t)=
\mathcal{M}_{r,r-2i}\big(u,v;x,y;t^2\big) d_{\mathcal{M}}(u,v)_r/d_{\mathcal{M}}(u,v)_{r-2i} \nonumber \\
\qquad{} =
{ (x/y ; t^2)_i \over \big(t^2 ; t^2\big)_i}
{ \big(v^{1/2} t^{r-2i+2} ; t\big)_{2i} \over \big(u^{1/2} t^{r-2i} ; t\big)_{2i} }
{ \big(u t^{2r-4i} ; t^2\big)_{2i} \over \big(uxt^{2r-2i} ; t^2\big)_i \big(uyt^{2r-4i+2} ; t^2\big)_i }
(y u^{1/2}/v^{1/2})^{i}.
\end{gather*}
Note that $\widetilde{\mathcal{M}}(u,v;x,y;t)$ and $\widetilde{\mathcal{M}}(u,v;y,x;t)$ are mutually inverse.
\end{dfn}
\begin{thm}\label{Bressoud-3}
Define the matrix $\mathcal{K}(x,y;q)$ with entries
\begin{gather*}
\mathcal{K}_{i,j}(x,y;q)=y^{i-j}
{ (x/y;q)_{i-j} \over (q;q)_{i-j} }
{1 \over \big(xq^{i+j};q\big)_{i-j} \big(yq^{2j+1};q\big)_{i-j} }.
\end{gather*}
Then
$\mathcal{K}(x,y;q)$ and $\mathcal{K}(y,x;q)$ are mutually inverse.
\end{thm}
\begin{proof}From the matrix
$\mathcal{M}(u,v;x,y;q)$, we obtain the even-reduced lower triangular matrix
$\mathcal{M}^{\rm e}(u,v;x,y;q)=(\mathcal{M}^{\rm e}_{i,j}(u,v;x,y;q))$ with entries
$\mathcal{M}^{\rm e}_{i,j}(u,v;x,y;q)=\mathcal{M}_{2i,2j}(u,v;x,y;q)$.
Then we have
$\mathcal{M}^{\rm e}(u,v;x,y;q)\mathcal{M}^{\rm e}(u,v;y,z;q)=
\mathcal{M}^{\rm e}(u,v;x,z;q)$, implying that
the matrix
\begin{gather*}
\mathcal{K}(x,y;q)=\lim_{u\rightarrow 1}\mathcal{M}^{\rm e}(u,1;x/u,y/u;q)
\end{gather*}
satisfies $\mathcal{K}(x,y;q)\mathcal{K}(y,z;q)=\mathcal{K}(x,z;q)$.
\end{proof}
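As with Theorem~\ref{Bressoud-1}, the inversion $\mathcal{K}(x,y;q)\mathcal{K}(y,x;q)=\mathrm{id}$ is easy to test numerically at generic rational parameters; the following sketch (helper names ours) does so for a small corner:

```python
from fractions import Fraction as F

def qpoch(z, q, k):
    """Finite q-Pochhammer symbol (z; q)_k."""
    out = F(1)
    for j in range(k):
        out *= 1 - z * q ** j
    return out

def K_matrix(x, y, q, N):
    """Top-left N x N corner of the lower triangular matrix K(x, y; q)."""
    K = [[F(0)] * N for _ in range(N)]
    for i in range(N):
        for j in range(i + 1):
            K[i][j] = (y ** (i - j)
                * qpoch(x / y, q, i - j) / qpoch(q, q, i - j)
                / (qpoch(x * q ** (i + j), q, i - j)
                   * qpoch(y * q ** (2 * j + 1), q, i - j)))
    return K

x, y, q, N = F(1, 3), F(1, 5), F(1, 2), 6
A, B = K_matrix(x, y, q, N), K_matrix(y, x, q, N)
P = [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
     for i in range(N)]
assert all(P[i][j] == (1 if i == j else 0) for i in range(N) for j in range(N))
```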
\begin{dfn} \label{Nkr}Set
\[ d_{\mathcal{N}}(\vector{u},v)_r = v^{-r} (u_1;t)_r (u_2;t)_r (u_3;t)_r (u_4;t)_r\]
for $\vector{u}=(u_1, u_2, u_3, u_4)$.
Let $\mathcal{N}(\vector{u},v;x,y;t)$ denote the conjugation of the matrix $\mathcal{K}(x,y;t)$
by the $d_{\mathcal{N}}(\vector{u},v)_r$ with entries defined by
\begin{gather*}
\mathcal{N}_{r,r-i}(\vector{u},v;x,y;t)=\mathcal{K}_{r,r-i}(xv,yv;t) d_{\mathcal{N}}(\vector{u},v)_r / d_{\mathcal{N}}(\vector{u},v)_{r-i} \\
\hphantom{\mathcal{N}_{r,r-i}(\vector{u},v;x,y;t)}{} = y^i { (x/y;t)_i \over (t;t)_i }
{ \big(u_1 t^{r-i};t\big)_i \big(u_2 t^{r-i};t\big)_i \big(u_3 t^{r-i};t\big)_i \big(u_4 t^{r-i};t\big)_i
\over
\big(xv t^{2r-i};t\big)_i \big(yv t^{2r-2i+1};t\big)_i },\qquad \!\!\! r, i \in \mathbb{Z}_{\geq 0} .
\end{gather*}
Then
$\mathcal{N}(\vector{u},v;x,y;t)$ and $\mathcal{N}(\vector{u},v;y,x;t)$ are mutually inverse.
\end{dfn}
\begin{prp} \label{compare}
All the transition coefficients $ ({-}1)^i\widehat{c_o}^{\rm new}\!\big(i ,\! 0 ; t^{n-r+1}\big)$,
$ ({-}1)^j\widehat{c_o}^{\rm new}\!\big(0 ,\! j ; t^{n-r+1}\big)$, $\widehat{c_e'}\big(k,0 ;t^{n-r+1}\big)$ and
$\widehat{c_e'}\big(0,l ;t^{n-r+1}\big) $
in Theorem~{\rm \ref{dmat}} are
given in terms of the Bressoud matrices
$\mathcal{N}(\vector{u},v;x,y;t)$, $\widetilde{\mathcal{M}}(u,v;x,y;t)$ and
$\mathcal{M}(u,v;x,y;q)$. Namely,
we have
\begin{gather*}
(-1)^i\widehat{c_o}^{\rm new}\big(i,0 ;t^{n-r+1}\big) \\
\qquad{} =
\mathcal{N}_{r,r-i}\big(t^{-n}, t^{-n+1}/ac, t^{-n+1}/ad, t^{-n+1}/cd, -t^{-2n}/acd, -t/b, t/a;t\big), \\
(-1)^j\widehat{c_o}^{\rm new}\big(0,j ;t^{n-r+1}\big) \\
\qquad{} =
\mathcal{N}_{r,r-j}\big(t^{-n}, -t^{-n+1}/a^2, t^{-n+1}/ac, -t^{-n+1}/ac, t^{-2n}/a^2c, -t/d, t/c ;t\big), \\
\widehat{c_e'}\big(k,0 ;t^{n-r+1}\big) =
\widetilde{\mathcal{M}}_{r,r-2k}
\big(t^{-2n+2}/c^4, t^{-2n-4}, c^2/ta^2, 1/t^2 ; t\big), \\
\widehat{c_e'}\big(0,l ;t^{n-r+1}\big) =
\mathcal{M}_{r,r-2l}\big(t^{-n}, t, 1/c^2, 1 ; t\big).
\end{gather*}
\end{prp}
\begin{proof}These can be checked by straightforward calculations using the definitions.
We only demonstrate the first equation.
By Definition \ref{Nkr} we have
\begin{gather}
(-1)^i
\mathcal{N}_{r,r-i}\big(t^{-n}, t^{-n+1}/ac, t^{-n+1}/ad, t^{-n+1}/cd, -t^{-2n}/acd, -t/b, t/a;t\big)
\label{p35}\\
{}=
(-t/a)^i { (-a/b;t)_i \over (t;t)_i }
{ \big(t^{-n+r-i};t\big)_i \big(t^{-n+r-i+1}/ac;t\big)_i \big(t^{-n+r-i+1}/ad;t\big)_i \big(t^{-n+r-i+1}/cd;t\big)_i
\over
\big(t^{-2n+2r-i+1}/abcd ;t\big)_i \big({-}t^{-2n+2r-2i+2}/a^2cd;t\big)_i }.\notag
\end{gather}
Noting that we have
\begin{gather*}
\big(X t^{-i};t\big)_i = (-X/t)^i t^{-\binom{i}{2}} \big(X^{-1}t;t\big)_i, \\
\big(X t^{-2i};t\big)_i = {\big(t/X;t^2\big)_{i}\big(t^2/X;t^2\big)_{i} \over (t/X;t)_{i}} \big({-}X/t^2 \big)^i t^{-3 \binom{i}{2}},
\end{gather*}
and recalling the definition of $\widehat{c_o}^{\rm new} (i,j;s)$ in Theorem~\ref{HSnew},
we find that the r.h.s.\ of~(\ref{p35}) reduces to
\begin{gather*}
{
(-a/b;t)_i \big(t^{n-r+1};t\big)_i \big(t^{n-r}ac;t\big)_i \big(t^{n-r}ad;t\big)_i \big(t^{n-r}cd;t\big)_i
\big({-}t^{2n-2r-1}a^2cd; t\big)_i
\over
(t;t)_i \big(t^{2n-2r}abcd; t\big)_i \big({-}t^{2n-2r-1}a^2cd;t^2\big)_i \big({-}t^{2n-2r}a^2cd; t^2\big)_i
}
b^i\\
\qquad{} =\widehat{c_o}^{\rm new} \big(i,0;t^{n-r+1}\big).\tag*{\qed}
\end{gather*}\renewcommand{\qed}{}
\end{proof}
\begin{thm} \label{dmat-dual} The following relations are inverse to those given in equation~\eqref{4degs}
\begin{gather*}
P^{BC_n,{\rm III}}_{(1^r)}(x)\label{deg1-dual} \\
\quad{}=
\sum_{i \geq 0}
\mathcal{N}_{r,r-i}\big(t^{-n}, t^{-n+1}/ac, t^{-n+1}/ad, t^{-n+1}/cd, -t^{-2n}/acd, t/a,-t/b;t\big)
P^{BC_n,{\rm IV}}_{(1^{r-i})}(x),\notag \\
P^{BC_n,{\rm II}}_{(1^r)}(x)\label{deg2-dual} \\
\quad{}=
\sum_{j \geq 0}
\mathcal{N}_{r,r-j}\big(t^{-n}, -t^{-n+1}/a^2, t^{-n+1}/ac, -t^{-n+1}/ac, t^{-2n}/a^2c, t/c,-t/d ;t\big)
P^{BC_n,{\rm III}}_{(1^{r-j})}(x),\notag \\
P^{BC_n,{\rm I}}_{(1^{r})}(x) = \sum_{k \geq 0} \widetilde{\mathcal{M}}_{r,r-2k}
\big(t^{-2n+2}/c^4, t^{-2n-4}, 1/t^2, c^2/ta^2 ; t\big)
P^{BC_n,{\rm II}}_{(1^{r-2k})}(x) ,\label{deg3-dual} \\
E_{r}(x)=
\sum_{l \geq 0}\mathcal{M}_{r,r-2l}\big(t^{-n}, t, 1,1/c^2 ; t\big)P^{BC_n,{\rm I}}_{(1^{r-2l})}(x).\label{deg4-dual}
\end{gather*}
\end{thm}
\begin{proof}These follow from the Bressoud matrix inversion formulas (Theorem \ref{Bressoud-1}, Definition \ref{Bressoud-2} and Theorem \ref{Bressoud-3}), Theorem~\ref{dmat} and Proposition~\ref{compare}.
\end{proof}
\section{Five term relations} \label{Ftr}
In this section we give some preparations in order to prove Theorem~\ref{main2}.
We need to recall some of the results in~\cite{HS}
concerning the four term relations for the ${}_{4}\phi_{3}$ series
associated with the matrix~$\mathcal{M}$.
Then we give the five term relations for the ${}_{4}\phi_{3}$ series
associated with the matrix~$\mathcal{N}$.
\subsection[Matrices $\mathsf{M}=(\mathsf{M}_{ij})$, $\mathsf{N}=(\mathsf{N}_{ij})$ and series $B(n,r,p)$]{Matrices $\boldsymbol{\mathsf{M}=(\mathsf{M}_{ij})}$, $\boldsymbol{\mathsf{N}=(\mathsf{N}_{ij})}$ and series $\boldsymbol{B(n,r,p)}$}
\begin{dfn} \label{MandN}
Define the lower-triangular matrices $\mathsf{M}=(\mathsf{M}_{ij})$, $\mathsf{N}=(\mathsf{N}_{ij})$ by
the following products of the Bressoud matrices:
\begin{gather*}
\mathsf{M} =
\widetilde{\mathcal{M}}
\big(t^{-2n+2}/c^4, t^{-2n-4}, c^2/ta^2, 1/t^2 ; t\big)
\mathcal{M}\big(t^{-n}, t, 1/c^2, 1 ; t\big), \\
\mathsf{N} =
\mathcal{N}\big(t^{-n}, t^{-n+1}/ac, t^{-n+1}/ad, t^{-n+1}/cd, -t^{-2n}/acd, -t/b, t/a;t\big)\\
\hphantom{\mathsf{N} =}{} \times
\mathcal{N}\big(
t^{-n}, -t^{-n+1}/a^2, t^{-n+1}/ac, -t^{-n+1}/ac, t^{-2n}/a^2c, -t/d, t/c ;t\big).
\end{gather*}
\end{dfn}
Writing the matrix elements explicitly, we have
\begin{gather}
\mathsf{M}_{i,j} =
\sum_{l=0}^{\lfloor (i-j)/2 \rfloor}
\widetilde{\mathcal{M}}_{i, i-2l}
\big(t^{-2n+2}/c^4, t^{-2n-4}, c^2/ta^2, 1/t^2 ; t\big)
\mathcal{M}_{i-2l, j}\big(t^{-n}, t, 1/c^2, 1 ; t\big) \notag \\
\hphantom{\mathsf{M}_{i,j}}{} =\sum_{l=0}^{\lfloor (i-j)/2 \rfloor}
\widehat{c_e'}\big(l,0 ;t^{n-i+1}\big)
\widehat{c_e'}\big(0,\lfloor (i-j)/2 \rfloor -l ;t^{n-i+2l+1}\big), \qquad i \geq j, \notag \\
\mathsf{N}_{i,j} =
\sum_{l=0}^{i-j}
\mathcal{N}_{i, i-l}\big(t^{-n}, t^{-n+1}/ac, t^{-n+1}/ad, t^{-n+1}/cd, -t^{-2n}/acd, -t/b, t/a;t\big) \notag \\
\hphantom{\mathsf{N}_{i,j} =}{} \times
\mathcal{N}_{i-l, j}\big(
t^{-n}, -t^{-n+1}/a^2, t^{-n+1}/ac, -t^{-n+1}/ac, t^{-2n}/a^2c, -t/d, t/c ;t\big) \notag \\
\hphantom{\mathsf{N}_{i,j}}{} =\sum_{l=0}^{i-j}
(-1)^{i-j}
\widehat{c_o}^{\rm new}\big(l,0 ;t^{n-i+1}\big)
\widehat{c_o}^{\rm new}\big(0,i-j-l ;t^{n-i+l+1}\big), \qquad i \geq j. \label{defNij}
\end{gather}
\begin{dfn} \label{B(n,r,p)}
Define the series $B(n,r,p)$ as the $(r,r-p)$-th matrix element of the product matrix $\mathsf{M}\mathsf{N}$
\begin{gather*}
B(n,r,p)=\bigl(\mathsf{M}\mathsf{N}\bigr)_{r,r-p}.
\end{gather*}
\end{dfn}
Writing them explicitly, we have ($p \in \mathbb{Z}_{\geq 0}$)
\begin{gather*}
B(n,r,2p) = \sum_{k=0}^{p} \mathsf{M}_{r - 2k, r - 2p}\mathsf{N}_{r, r-2k} \\
\hphantom{B(n,r,2p)}{} = \sum_{k=0}^{p} \sum_{i=0}^{p-k} \sum_{j=0}^{2k}
\widehat{c_e'}\big(i,0 ;t^{n-r+2k+1}\big) \widehat{c_e'}\big(0,p-k-i ;t^{n-r+2k+2i+1}\big) \\
\hphantom{B(n,r,2p)=}{} \times
(-1)^{2k} \widehat{c_o}^{\rm new}\big(j,0 ;t^{n-r+1}\big) \widehat{c_o}^{\rm new}\big(0,2k-j ;t^{n-r+j+1}\big), \\
B(n,r,2p+1) = \sum_{k=1}^{p+1}
\mathsf{M}_{r - 2k + 1, r - 2p - 1}
\mathsf{N}_{r, r-2k+1} \\
\hphantom{B(n,r,2p+1)}{} = \sum_{k=1}^{p+1} \sum_{i=0}^{p-k+1} \sum_{j=0}^{2k-1}
\widehat{c_e'}\big(i,0 ;t^{n-r+2k}\big) \widehat{c_e'}\big(0,p-k+1-i ;t^{n-r+2k+2i}\big) \\
\hphantom{B(n,r,2p+1)=}{} \times
(-1)^{2k-1} \widehat{c_o}^{\rm new}\big(j,0 ;t^{n-r+1}\big) \widehat{c_o}^{\rm new}\big(0,2k-1-j ;t^{n-r+j+1}\big).
\end{gather*}
Note that we have
\begin{gather*}
B(n,r,p) \\
= \sum_{i+2k+2l \leq p}
\mathcal{N}_{r, r + 2l + 2k + i - p}\big(t^{-n}, t^{-n+1}/ac, t^{-n+1}/ad, t^{-n+1}/cd, -t^{-2n}/acd, -t/b,
t/a;t\big)\\
\quad {} \times
\mathcal{N}_{r + 2l + 2k + i - p, r + 2l + 2k - p}\big(
t^{-n}, -t^{-n+1}/a^2, t^{-n+1}/ac, -t^{-n+1}/ac, t^{-2n}/a^2c, -t/d, t/c ;t\big) \\
\quad {} \times
\widetilde{\mathcal{M}}_{r + 2l + 2k - p, r + 2l - p}
\big(t^{-2n+2}/c^4, t^{-2n-4}, c^2/ta^2, 1/t^2 ; t\big)
\mathcal{M}_{r + 2l - p, r - p }\big(t^{-n}, t, 1/c^2, 1 ; t\big) \\
= \sum_{i+2k+2l \leq p}
(-1)^{p} \widehat{c_o}^{\rm new}\big(p-2l-2k-i,0 ;t^{n-r+1}\big)
\widehat{c_o}^{\rm new}\big(0,i ;t^{n-r-2l-2k-i+p+1}\big) \\
\quad{} \times
\widehat{c_e'}\big(k,0 ;t^{n-r-2l-2k+p+1}\big)
\widehat{c_e'}\big(0,l ;t^{n-r-2l+p+1}\big).
\end{gather*}
\subsection[Four term relations for $\mathsf{M}$]{Four term relations for $\boldsymbol{\mathsf{M}}$}
We remark that the matrix $\mathsf{M}_{i,j}$ is denoted by $\mathcal{B}_{i,j}$ in~\cite{HS}.
\begin{dfn} In view of Lemma \ref{lem-1}, set for simplicity
\begin{gather*}
m_1(s,k):=\widehat{c}\,'_e(k,0;s)=
{ \big(tc^2/a^2 ; t^2\big)_k \big(sc^2t ; t^2\big)_k \big(s^2c^4/t^2 ; t^2\big)_k (s ; t)_{2k}
\over
\big(t^2 ; t^2\big)_k \big(sc^2/t ; t^2\big)_k \big(s^2a^2c^2/t ; t^2\big)_k \big(sc^2 ; t\big)_{2k}}a^{2k}, \\
m_0(s,l):= \widehat{c}\,'_e(0,l;s)=
{ \big(1/c^2 ; t\big)_l (s/t ; t)_l (s ; t)_{2l}
\over
(t ; t)_l \big(sc^2 ; t\big)_l (s/t ; t)_{2l} }c^{2l}.
\end{gather*}
\end{dfn}
\begin{dfn} \label{defMsl}Define
\begin{gather*}
\mathsf{M}(s,l):=(-1)^l s^{-l} {\big(s^2/t^2; t^2\big)_l \over \big(t^2; t^2\big)_l}
{1-s^2 t^{4l-2} \over 1-s^2 t^{-2}}
{}_{4}\phi_3 \left[ { -sa^2, -sc^2, s^2 t^{2l-2}, t^{-2l}
\atop
-s, -st, s^2a^2 c^2/t } ; t^2, t^2\right].
\end{gather*}
\end{dfn}
\begin{prp} For $s=t^{n-r+1}$, we have
\begin{gather*}
\mathsf{M}_{r,r-2l} =\mathsf{M}(s,l) = \sum_{k=0}^l m_1(s,k) m_0\big(st^{2k}, l-k\big).
\end{gather*}
\end{prp}
We rewrite Theorem 6.1(a) in \cite{HS} as follows.
\begin{thm}[{\cite[Theorem 6.1(a)]{HS}}]\label{HS-ftr}
We have
\begin{gather*}
\mathsf{M}_{r-2l, r-2k} + \mathsf{M}_{r-2l,r-2k+2}
= \mathsf{M}_{r-2l+1, r-2k+1} + f\big(t^{n-r+2l} | a, -a, c, -c\big) \mathsf{M}_{r-2l-1, r-2k+1}.
\end{gather*}
\end{thm}
We can rewrite Theorem \ref{HS-ftr} for generic $s$ as follows.
\begin{thm}[{\cite[Theorem 6.1(a)]{HS}}]
For generic $s$ we have
\begin{gather*}
\mathsf{M}(st, l) + \mathsf{M}(st, l-1)
= \mathsf{M}(s, l) + f(s | a, -a, c, -c) \mathsf{M}\big(st^2, l-1\big).
\end{gather*}
\end{thm}
This follows from the following lemma.
\begin{lem} \label{mformula}
We have
\begin{gather*}
m_1(s,k) + f(s | a,-b,c,-d) m_1\big(st^2,k-1\big) \notag \\
\qquad{} =
m_1(st,k) + f\big(st^{2k-2} | t^{1/2}c, -t^{1/2}c, c, -c\big) m_1(st,k-1), \\
m_0(s,l) + f\big(s|t^{1/2}c, -t^{1/2}c, c, -c\big) m_0\big(st^2,l-1\big) = m_0(st,l) + m_0(st,l-1).
\end{gather*}
Note that $f\big(st^{2l-2} | t^{1/2}, -t^{1/2}, 1, -1\big)=1$.
\end{lem}
\subsection[Five term relations for $\mathsf{N}$]{Five term relations for $\boldsymbol{\mathsf{N}}$}
\begin{dfn}\label{defn} In view of Lemma~\ref{lem-1}, set for simplicity
\begin{gather*}
n_1(s,i) : =(-1)^i \widehat{c_o}^{\rm new} (i,0;s) \notag \\
\hphantom{n_1(s,i)}{} ={ (-a/b ; t)_i (s ; t)_i (sac/t ; t)_i (sad/t ; t)_i (scd/t ; t)_i \big({-}s^2a^2cd/t^3 ; t\big)_i
\over
(t ; t)_i \big(s^2abcd/t^2 ; t\big)_i \big({-}s^2a^2cd/t^3 ; t\big)_{2i} }
(-b)^i, \\
n_0(s,j): =(-1)^j \widehat{c_o}^{\rm new} (0,j;s) \notag \\
\hphantom{n_0(s,j)}{} =
{ (-c/d ; t)_j (s ; t)_j \big({-}sa^2/t ; t\big)_j \big(s^2a^2c^2/t^3 ; t\big)_j \big(s^2a^2c^2/t^3 ; t^2\big)_j
\over
(t ; t)_j \big({-}sa^2cd/t^2 ; t\big)_j \big(sa^2c^2/t^3 ; t\big)_{2j} }(-d)^j.
\end{gather*}
\end{dfn}
\begin{dfn} \label{defNsl1}
Define
\begin{gather*}
\mathsf{N}(s,j) :={
\big({-}c/d, s, s^2 a^2 c^2/t^3, sab/t; t\big)_j
\over
\big(t, sac/t^{3/2}, -sac/t^{3/2}, s^2 a b c d/t^2; t\big)_j
}
(-d)^j \\
\hphantom{\mathsf{N}(s,j) :=}{} \times {}_{4}\phi_3
\left[
{t^{-j}, -a/b, scd/t, -t^{-j+2}/sac
\atop
-t^{-j+1}d/c, -sac/t, t^{-j+2}/sab
}
;t ,t \right].
\end{gather*}
\end{dfn}
\begin{prp} For $s=t^{n-r+1}$, we have
\begin{gather*}
\mathsf{N}_{r,r-j} =\mathsf{N}(s,j)= \sum_{i=0}^{j}n_1(s,i)n_0\big(st^i, j-i\big).
\end{gather*}
\end{prp}
\begin{proof}Set $s=t^{n-r+1}$ for simplicity. By~(\ref{defNij}) in Definition~\ref{MandN}, we have
\begin{gather*}
\mathsf{N}_{r,r-j}= \sum_{i=0}^{j}n_1(s,i)n_0\big(st^{i}, j-i\big) \notag \\
\hphantom{\mathsf{N}_{r,r-j}}{} =(-1)^{j}\sum_{i=0}^{j}
\widehat{c_o}^{\rm new}(i,0 ;s) \widehat{c_o}^{\rm new}\big(0,j-i ;st^{i}\big)
= (-1)^j \sum_{i = 0}^j \widehat{c_o}^{\rm new} (i,j-i;s) \notag \\
\hphantom{\mathsf{N}_{r,r-j}}{}={
\big({-}c/d, s, s^2 a^2 c^2/t^3, sab/t; t\big)_j
\over
\big(t, sac/t^{3/2}, -sac/t^{3/2}, s^2 a b c d/t^2; t\big)_j}(-d)^j \notag \\
\hphantom{\mathsf{N}_{r,r-j}=}{} \times {}_{4}\phi_3
\left[
{t^{-j}, -a/b, scd/t, -t^{-j+2}/sac
\atop
-t^{-j+1}d/c, -sac/t, t^{-j+2}/sab
}
;t ,t \right] = \mathsf{N}(s,j).
\end{gather*}
Here in the last step, we have used (\ref{co_4phi3}) in the proof of Proposition~\ref{prop1}.
\end{proof}
We obtain a five term relation for $\mathsf{N}(s,j)$ as follows.
\begin{thm}\label{str}
For generic $s$, we have
\begin{gather}
\mathsf{N}(s, j) + g(s|a,b,c,d) \mathsf{N}(st,j-1) + f(s|a,b,c,d) \mathsf{N}\big(s t^2, j-2\big)
\notag \\
\qquad{}
= \mathsf{N}(st,j) + f\big(st^{j-2} | a, -a, c, -c\big) \mathsf{N}(st,j-2). \label{n0}
\end{gather}
\end{thm}
We require the following lemma in order to show Theorem~\ref{str}.
\begin{lem} \label{nrel}
We have
\begin{subequations}
\begin{gather}
n_1(s,i) + g(s|a,b,c,d) n_1(st, i-1) + f(s|a,b,c,d) n_1\big(st^2,i-2\big) \label{n1rel} \\
\qquad{} = n_1(st,i) + g\big(st^{i-1} | a, -a, c, d\big) n_1(st,i-1) + f\big(st^{i-2} | a, -a, c, -d\big) n_1(st,i-2),\notag \\
n_0(s,j) + g(s | a, -a, c, d) n_0(st,j-1) + f(s | a, -a, c, d) n_0\big(st^2,j-2\big) \notag \\
\qquad{} =
n_0(st,j) + f\big(st^{j-2} | a, -a, c, -c\big) n_0(st,j-2). \label{n0rel}
\end{gather}
\end{subequations}
Note that $g(s | a, -a, c, -c)=0$.
\end{lem}
\begin{proof}
This follows from a direct calculation.
\end{proof}
\begin{proof}[Proof of Theorem \ref{str}]
We have
\begin{gather}
\mathsf{N}(st,j) + f\big(st^{j-2} | a, -a, c, -c\big) \mathsf{N}(st,j-2) \notag \\
\qquad{}=
\sum_{i=0}^{j}n_1(st, i)n_0\big(st^{i+1}, j-i\big) + f\big(st^{j-2} | a, -a, c, -c\big) \sum_{i=0}^{j-2}n_1(st, i)n_0\big(st^{i+1}, j-2-i\big) \notag \\
\qquad{}=
\sum_{i=0}^{j-2} n_1(st, i) \big( n_0\big(st^{i+1}, j-i\big) + f\big(st^{j-2} | a, -a, c, -c\big)n_0(st^{i+1}, j-i-2) \big) \notag\\
\quad\qquad{}+
n_1(st, j-1)n_0\big(st^{j}, 1\big) + n_1(st, j)n_0\big(st^{j+1}, 0\big). \label{n1}
\end{gather}
Applying (\ref{n0rel}), and noting that $n_0\big(st^{j+1}, 0\big)=1$, we have
\begin{gather}
\text{r.h.s.\ of } (\ref{n1}) =
\sum_{i=0}^{j-2} n_1(st, i) \big(
n_0\big(st^{i}, j-i\big)
+ g(st^{i} | a, -a, c, d) n_0\big(st^{i+1}, j-i-1\big) \notag \\
\hphantom{\text{r.h.s.\ of } (\ref{n1}) =}{} + f\big(st^{i} | a, -a, c, d\big) n_0\big(st^{i+2}, j-i-2\big) \big)
+ n_1(st, j-1)n_0\big(st^{j}, 1\big) + n_1(st, j) \!\notag \\
\hphantom{\text{r.h.s.\ of } (\ref{n1}) }{}=
\sum_{i=1}^{j-1} n_1(st, i+1) n_0\big(st^{i+1}, j-i-1\big) +n_1(st, 0) n_0(s, j)\notag \\
\hphantom{\text{r.h.s.\ of } (\ref{n1}) =}{} + n_1(st, 1) n_0(st, j-1)
-n_1(st, j-1) n_0\big(st^{j-1}, 1\big) - n_1(st, j) n_0\big(st^{j}, 0\big) \notag \\
\hphantom{\text{r.h.s.\ of } (\ref{n1}) =}{} +\sum_{i=1}^{j-1} g\big(st^{i} | a, -a, c, d\big) n_1(st, i) n_0\big(st^{i+1}, j-i-1\big) \notag \\
\hphantom{\text{r.h.s.\ of } (\ref{n1}) =}{} + g(s | a, -a, c, d) n_1(st, 0) n_0(st, j-1)\notag\\
\hphantom{\text{r.h.s.\ of } (\ref{n1}) =}{}
-g\big(st^{j-1} | a, -a, c, d\big) n_1(st, j-1) n_0\big(st^{j}, 0\big) \notag \\
\hphantom{\text{r.h.s.\ of } (\ref{n1}) =}{} +
\sum_{i=1}^{j-1} f\big(st^{i-1} | a, -a, c, d\big) n_1(st, i-1) n_0\big(st^{i+1}, j-i-1\big) \notag \\
\hphantom{\text{r.h.s.\ of } (\ref{n1}) =}{} +
n_1(st, j-1)n_0\big(st^{j}, 1\big) + n_1(st, j). \label{n2}
\end{gather}
Concerning the l.h.s.\ of (\ref{n0}), we have
\begin{gather}
\mathsf{N}(s,j)
=\sum_{i=0}^j n_1(s,i) n_0\big(st^i, j-i\big) \notag \\
\hphantom{\mathsf{N}(s,j)}{}= \sum_{i=1}^{j-1} n_1(s,i+1) n_0\big(st^{i+1}, j-i-1\big) + n_1(s,0) n_0(s, j)
+n_1(s,1) n_0(st, j-1),\!\!\! \label{n23}\\
g(s|a,b,c,d) \mathsf{N}(st,j-1)
=g(s|a,b,c,d) \sum_{i=0}^{j-1}n_1(st, i)n_0\big(st^{i+1}, j-i-1\big) \notag \\
\hphantom{g(s|a,b,c,d) \mathsf{N}(st,j-1)}{} = \sum_{i=1}^{j-1}g(s|a,b,c,d) n_1(st, i)n_0\big(st^{i+1}, j-i-1\big)\notag\\
\hphantom{g(s|a,b,c,d) \mathsf{N}(st,j-1)=}{} + g(s|a,b,c,d) n_1(st, 0)n_0(st, j-1), \label{n24}\\
f(s|a,b,c,d) \mathsf{N}(st^2,j-2)
=f(s|a,b,c,d)\sum_{i=0}^{j-2} n_1\big(st^2,i\big) n_0\big(st^{i+2},j-i-2\big) \notag \\
\hphantom{f(s|a,b,c,d) \mathsf{N}(st^2,j-2)}{} =\sum_{i=1}^{j-1}f(s|a,b,c,d) n_1\big(st^2,i-1\big) n_0\big(st^{i+1},j-i-1\big). \label{n25}
\end{gather}
Combining the equations (\ref{n2}), (\ref{n23}), (\ref{n24}) and (\ref{n25}), and
applying (\ref{n1rel}), the equation (\ref{n0}) reduces to the identity
\begin{gather}
n_1(s,0) n_0(s, j)+n_1(s,1) n_0(st, j-1)
+ g(s|a,b,c,d) n_1(st, 0)n_0(st, j-1) \notag \\
=n_1(st, 0) n_0(s, j) + n_1(st, 1) n_0(st, j-1)
-n_1(st, j-1) n_0\big(st^{j-1}, 1\big) - n_1(st, j) n_0\big(st^{j}, 0\big) \notag \\
\quad{}+g(s | a, -a, c, d) n_1(st, 0) n_0(st, j-1)
-g\big(st^{j-1} | a, -a, c, d\big) n_1(st, j-1) n_0\big(st^{j}, 0\big) \notag \\
\quad {}+n_1(st, j-1)n_0\big(st^{j}, 1\big) + n_1(st, j). \label{n3}
\end{gather}
By a direct calculation, one can show that the equation (\ref{n3}) holds. This completes the proof of Theorem \ref{str}.
\end{proof}
\subsection[Five term relations for ${B}(n,r,p)$]{Five term relations for $\boldsymbol{{B}(n,r,p)}$}
Now, we obtain the five term relations associated with $B(n,r,p)$ as follows.
\begin{thm} \label{strBthm}
We have
\begin{gather}
B(n, r, p) + B(n, r, p - 2) =
B(n, r+1, p) +
f\big(t^{n-r}|a,b,c,d\big) B(n, r-1, p - 2) \nonumber\\
\hphantom{B(n, r, p) + B(n, r, p - 2) =}{} + g\big(t^{n-r}|a,b,c,d\big) B(n, r, p - 1).
\label{strB}
\end{gather}
\end{thm}
\begin{proof}First, we shall show $(\ref{strB})$ for $p=2k$. We have
\begin{gather}
\text{l.h.s.\ of } (\ref{strB}) =
\sum_{l=0}^{k} \mathsf{M}_{r - 2l, r - 2k}
\mathsf{N}_{r, r-2l}
+
\sum_{l=0}^{k-1}
\mathsf{M}_{r - 2l, r - 2k+2}
\mathsf{N}_{r, r-2l} \notag \\
\hphantom{\text{l.h.s.\ of } (\ref{strB})}{} =
\mathsf{M}_{r-2k, r-2k} \mathsf{N}_{r, r-2k}
+ \sum_{l=0}^{k-1}
\mathsf{M}_{r - 2l, r - 2k}
\mathsf{N}_{r, r-2l}
+
\sum_{l=0}^{k-1}
\mathsf{M}_{r - 2l, r - 2k+2}
\mathsf{N}_{r, r-2l} \notag \\
\hphantom{\text{l.h.s.\ of } (\ref{strB})}{} =
\mathsf{N}_{r, r-2k}
+ \sum_{l=0}^{k-1}\bigl(
\mathsf{M}_{r - 2l, r - 2k} + \mathsf{M}_{r - 2l, r - 2k+2} \bigr)
\mathsf{N}_{r, r-2l}.\label{strB-1}
\end{gather}
By Theorem \ref{HS-ftr}, we have
\begin{gather*}
\text{r.h.s.\ of } (\ref{strB-1})\\
\qquad{} =\mathsf{N}_{r, r-2k} + \sum_{l=0}^{k-1}\bigl(
\mathsf{M}_{r-2l+1, r-2k+1} + f\big(t^{n-r+2l}|a,-a,c,-c\big) \mathsf{M}_{r-2l-1, r-2k+1} \bigr)
\mathsf{N}_{r, r-2l} \\
\qquad{} =
\sum_{l=0}^{k} \mathsf{M}_{r-2l+1, r-2k+1}\mathsf{N}_{r, r-2l}
+ \sum_{l=0}^{k-1} f\big(t^{n-r+2l}|a,-a,c,-c\big) \mathsf{M}_{r-2l-1, r-2k+1} \mathsf{N}_{r, r-2l}.
\end{gather*}
Then we have
\begin{gather}
\text{(l.h.s.\ $-$ r.h.s.) of (\ref{strB})} \notag \\
{}=\sum_{l=0}^{k} \mathsf{M}_{r-2l+1, r-2k+1}\mathsf{N}_{r, r-2l}
+ \sum_{l=0}^{k-1} f\big(t^{n-r+2l}|a,-a,c,-c\big) \mathsf{M}_{r-2l-1, r-2k+1} \mathsf{N}_{r, r-2l} \notag \\
\qquad{} - \sum_{l=0}^{k}
\mathsf{M}_{r+1 - 2l, r+1 - 2k}
\mathsf{N}_{r+1, r+1-2l} - \sum_{l=0}^{k-1}
f\big(t^{n-r}|a,b,c,d\big) \mathsf{M}_{r-1 - 2l, r - 2k +1} \mathsf{N}_{r-1, r-1-2l} \notag \\
\qquad{}
-g(t^{n-r}|a,b,c,d) \sum_{l=0}^{k}
\mathsf{M}_{r - 2l + 1, r - 2k + 1}
\mathsf{N}_{r, r-2l+1} \notag \\
=\sum_{l=0}^{k} \mathsf{M}_{r-2l+1, r-2k+1}
\big(\mathsf{N}_{r, r-2l} - \mathsf{N}_{r+1, r+1-2l} \big) \notag \\
\qquad + \sum_{l=0}^{k-1} \mathsf{M}_{r-2l-1, r-2k+1}
\big(f\big(t^{n-r+2l}|a,-a,c,-c\big)\mathsf{N}_{r, r-2l} -
f\big(t^{n-r}|a,b,c,d\big) \mathsf{N}_{r-1, r-1-2l} \big) \notag\\
\qquad{} - \sum_{l=0}^{k}
\mathsf{M}_{r - 2l + 1, r - 2k + 1} g\big(t^{n-r}|a,b,c,d\big) \mathsf{N}_{r, r-2l+1}. \label{strB-2}
\end{gather}
The second summation in the r.h.s.\ of (\ref{strB-2}) can be recast as
\begin{gather*}
\sum_{l=0}^{k-1} \mathsf{M}_{r-2l-1, r-2k+1}
\big(f\big(t^{n-r+2l}|a,-a,c,-c\big)\mathsf{N}_{r, r-2l} -
f\big(t^{n-r}|a,b,c,d\big)
\mathsf{N}_{r-1, r-1-2l} \big) \\
= \sum_{l=1}^{k} \mathsf{M}_{r-2l+1, r-2k+1}
\big(f\big(t^{n-r+2l-2}|a,-a,c,-c\big)\mathsf{N}_{r, r-2l+2} -
f\big(t^{n-r}|a,b,c,d\big)
\mathsf{N}_{r-1, r-2l+1}\big) \\
=
\sum_{l=0}^{k} \mathsf{M}_{r-2l+1, r-2k+1}
\big(f\big(t^{n-r+2l-2}|a,-a,c,-c\big)\mathsf{N}_{r, r-2l+2} -
f\big(t^{n-r}|a,b,c,d\big)
\mathsf{N}_{r-1, r-2l+1} \big).
\end{gather*}
Hence, the r.h.s.\ of (\ref{strB-2}) reduces to
\begin{gather*}
\sum_{l=0}^{k} \mathsf{M}_{r-2l+1, r-2k+1}
\big(\mathsf{N}_{r, r-2l} - \mathsf{N}_{r+1, r+1-2l}+
f\big(t^{n-r+2l}|a,-a,c,-c\big)\mathsf{N}_{r, r-2l} \notag \\
\qquad {}-
f\big(t^{n-r}|a,b,c,d\big)
\mathsf{N}_{r-1, r-1-2l}
- g\big(t^{n-r}|a,b,c,d\big)
\mathsf{N}_{r, r-2l+1} \big) =0.
\end{gather*}
Here we have used Theorem \ref{str}.
Second, we shall show $(\ref{strB})$ for $p=2k+1$.
\begin{gather}
\text{l.h.s.\ of } (\ref{strB}) =
\sum_{l=1}^{k+1}
\mathsf{M}_{r - 2l+1, r - 2k-1}
\mathsf{N}_{r, r-2l+1}
+
\sum_{l=1}^{k}
\mathsf{M}_{r - 2l+1, r - 2k+1}
\mathsf{N}_{r, r-2l+1} \notag \\
\hphantom{\text{l.h.s.\ of } (\ref{strB})}{} =
\mathsf{M}_{r - 2k-1, r - 2k-1}
\mathsf{N}_{r, r-2k-1}
+
\sum_{l=1}^{k}
\mathsf{M}_{r - 2l+1, r - 2k-1}
\mathsf{N}_{r, r-2l+1} \notag \\
\hphantom{\text{l.h.s.\ of } (\ref{strB})=}{} +
\sum_{l=1}^{k}
\mathsf{M}_{r - 2l+1, r - 2k+1}
\mathsf{N}_{r, r-2l+1} \notag \\
\hphantom{\text{l.h.s.\ of } (\ref{strB})}{} =
\mathsf{N}_{r, r-2k-1}
+ \sum_{l=1}^{k}\big(
\mathsf{M}_{r - 2l+1, r - 2k-1} + \mathsf{M}_{r - 2l+1, r - 2k+1}\big)
\mathsf{N}_{r, r-2l+1}.\label{strB-1o}
\end{gather}
By Theorem \ref{HS-ftr}, we have
\begin{gather*}
\text{r.h.s.\ of } (\ref{strB-1o})\\
\qquad{} =\mathsf{N}_{r, r-2k-1}
+ \sum_{l=1}^{k}\big(
\mathsf{M}_{r - 2l+2, r - 2k} +
f\big(t^{n-r+2l-1}|a,-a,c,-c\big)\mathsf{M}_{r - 2l, r - 2k} \big)
\mathsf{N}_{r, r-2l+1} \\
\qquad{} =
\sum_{l=1}^{k+1} \mathsf{M}_{r - 2l+2, r - 2k}\mathsf{N}_{r, r-2l+1}+ \sum_{l=1}^{k}
f\big(t^{n-r+2l-1}|a,-a,c,-c\big)\mathsf{M}_{r - 2l, r - 2k} \mathsf{N}_{r, r-2l+1}.
\end{gather*}
Then we have
\begin{gather}
\text{(l.h.s.\ $-$ r.h.s.) of (\ref{strB})} \notag \\
\qquad{}=\sum_{l=1}^{k+1} \mathsf{M}_{r - 2l+2, r - 2k}\mathsf{N}_{r, r-2l+1}
+ \sum_{l=1}^{k}
f\big(t^{n-r+2l-1}|a,-a,c,-c\big)\mathsf{M}_{r - 2l, r - 2k}
\mathsf{N}_{r, r-2l+1} \notag \\
\qquad\quad{}
- \sum_{l=1}^{k+1}
\mathsf{M}_{r - 2l+2, r- 2k}
\mathsf{N}_{r+1, r-2l+2}
- \sum_{l=1}^{k}
f\big(t^{n-r}|a,b,c,d\big) \mathsf{M}_{r- 2l, r - 2k}
\mathsf{N}_{r-1, r-2l} \notag \\
\qquad\quad{}
-g\big(t^{n-r}|a,b,c,d\big) \sum_{l=0}^{k}
\mathsf{M}_{r - 2l, r - 2k}
\mathsf{N}_{r, r-2l} \notag \\
\qquad{}=\sum_{l=1}^{k+1} \mathsf{M}_{r-2l+2, r-2k}
\big(\mathsf{N}_{r, r-2l+1} - \mathsf{N}_{r+1, r-2l+2} \big) \notag \\
\qquad\quad{}
+ \sum_{l=1}^{k} \mathsf{M}_{r-2l, r-2k}
\big(f\big(t^{n-r+2l-1}|a,-a,c,-c\big)\mathsf{N}_{r, r-2l+1}
-f\big(t^{n-r}|a,b,c,d\big)
\mathsf{N}_{r-1, r-2l}\big) \notag
\\
\qquad\quad{}
- \sum_{l=1}^{k+1}
\mathsf{M}_{r - 2l + 2, r - 2k}
g\big(t^{n-r}|a,b,c,d\big)
\mathsf{N}_{r, r-2l+2}. \label{strB-2o}
\end{gather}
The second summation in the r.h.s.\ of (\ref{strB-2o}) can be modified as
\begin{gather*}
\sum_{l=1}^{k} \mathsf{M}_{r-2l, r-2k}
\big(f\big(t^{n-r+2l-1}|a,-a,c,-c\big)\mathsf{N}_{r, r-2l+1}
-f\big(t^{n-r}|a,b,c,d\big)\mathsf{N}_{r-1, r-2l}\big) \\
= \sum_{l=2}^{k+1} \mathsf{M}_{r-2l+2, r-2k}
\big(f\big(t^{n-r+2l-3}|a,-a,c,-c\big)\mathsf{N}_{r, r-2l+3}
-f\big(t^{n-r}|a,b,c,d\big)
\mathsf{N}_{r-1, r-2l+2} \big) \\
= \sum_{l=1}^{k+1} \mathsf{M}_{r-2l+2, r-2k}
\big(f\big(t^{n-r+2l-3}|a,-a,c,-c\big)\mathsf{N}_{r, r-2l+3}
-f\big(t^{n-r}|a,b,c,d\big)\mathsf{N}_{r-1, r-2l+2} \big).
\end{gather*}
Hence, the r.h.s.\ of (\ref{strB-2o}) reduces to
\begin{gather*}
\sum_{l=1}^{k+1} \mathsf{M}_{r-2l+2, r-2k}\big(\mathsf{N}_{r, r-2l+1} - \mathsf{N}_{r+1, r-2l+2}
+f\big(t^{n-r+2l-3}|a,-a,c,-c\big)\mathsf{N}_{r, r-2l+3} \notag \\
\qquad{} -f\big(t^{n-r}|a,b,c,d\big)\mathsf{N}_{r-1, r-2l+2}
-g\big(t^{n-r}|a,b,c,d\big)\mathsf{N}_{r, r-2l+2} \big) =0.
\end{gather*}
Here we have used Theorem \ref{str}.
\end{proof}
\section{Proof of Theorems \ref{main2} and \ref{main1}} \label{ProofMAIN}
\subsection[Transition coefficients $\mathcal{C}^{(n)}_{i,j}$]{Transition coefficients $\boldsymbol{\mathcal{C}^{(n)}_{i,j}}$}
In this section we set $f(s) = f(s|a,b,c,d)$ and $g(s) = g(s|a,b,c,d)$ for simplicity.
Recall that we have the expansion (Theorem~\ref{dmat}, Definitions~\ref{MandN} and~\ref{B(n,r,p)})
\begin{gather}
P_{(1^r)} (x|a,b,c,d|q,t)=\sum_{p=0}^r B(n,r,p)E_{r-p}(x).\label{tr1}
\end{gather}
In the previous section, we have established the five term relations for $B(n,r,p)$ in Theorem~\ref{strBthm}.
\begin{dfn}\label{C^n}
Define $\mathcal{C}^{(n)}=\big(\mathcal{C}^{(n)}_{i, j}\big)_{0\leq i,j\leq n}$ as
the finite upper triangular (i.e., $\mathcal{C}^{(n)}_{i, j}=0$ for $i>j$)
transition matrix from the monomial basis $(m_{(1^{r})}(x))$ to the
basis of Koornwinder polynomials $(P_{(1^r)} (x|a,b,c,d|q,t))$:
\begin{gather}
P_{(1^r)} (x|a,b,c,d|q,t)=\sum_{k=0}^r
\mathcal{C}^{(n)}_{n-r,n-r+k} m_{(1^{r-k})}(x).\label{tr2}
\end{gather}
\end{dfn}
Recall Lemma \ref{Lem-Em} (Lemma~3.3 in~\cite{HS}):
\begin{gather*}
E_r(x)= \sum_{k=0}^{\lfloor{r \over 2}\rfloor} \binom{n-r+2k}{k} m_{(1^{r-2k})}(x),
\end{gather*}
where $\binom{m}{j}$ denotes the ordinary binomial coefficient.
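This expansion is easy to verify numerically at a generic point, assuming (as in~\cite{HS}) that $E_r(x)$ stands for the elementary symmetric polynomial $e_r\big(x_1, x_1^{-1}, \ldots, x_n, x_n^{-1}\big)$; a minimal Python sketch with exact rational arithmetic:

```python
from fractions import Fraction
from itertools import combinations, product
from math import comb

n = 3
x = [Fraction(2), Fraction(3), Fraction(5)]      # generic sample point
vals = x + [1 / xi for xi in x]                  # the 2n variables x_i^{+1}, x_i^{-1}

def e(r, vs):
    # elementary symmetric polynomial e_r evaluated at the list vs
    s = Fraction(0)
    for S in combinations(vs, r):
        p = Fraction(1)
        for v in S:
            p *= v
        s += p
    return s

def m_one_col(s):
    # BC_n monomial symmetric function m_{(1^s)}: orbit sum of (1^s, 0^{n-s})
    tot = Fraction(0)
    for S in combinations(range(n), s):
        for signs in product((1, -1), repeat=s):
            p = Fraction(1)
            for i, eps in zip(S, signs):
                p *= x[i] if eps == 1 else 1 / x[i]
            tot += p
    return tot

# check E_r = sum_k binom(n-r+2k, k) m_{(1^{r-2k})} for all r
for r in range(n + 1):
    lhs = e(r, vals)
    rhs = sum(comb(n - r + 2 * k, k) * m_one_col(r - 2 * k) for k in range(r // 2 + 1))
    assert lhs == rhs
```

The binomial multiplicity $\binom{n-r+2k}{k}$ counts the ways of pairing off $k$ factors $x_i x_i^{-1}=1$ inside a degree-$r$ product.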
\begin{prp} \label{C-Binom}
We have
\begin{gather}
\mathcal{C}^{(n)}_{n-r,n-r+2h} = \sum_{p=0}^h B(n,r,2p) \binom{n-r+2h}{ h-p}, \qquad 0 \leq h \leq \lfloor r/2\rfloor , \label{C-B-1} \\
\mathcal{C}^{(n)}_{n-r,n-r+2h+1} = \sum_{p=0}^h B(n,r,2p+1) \binom{n-r+2h+1}{ h-p},
\qquad 0 \leq h \leq \lfloor (r-1)/2 \rfloor .\nonumber
\end{gather}
\end{prp}
\begin{proof}These follow from (\ref{tr1}) and (\ref{tr2}).
\end{proof}
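The bookkeeping behind this proof can be mechanized: substituting the expansion of $E_{r-p}(x)$ from Lemma~\ref{Lem-Em} into (\ref{tr1}) and collecting the coefficient of each $m_{(1^s)}(x)$ reproduces the two formulas. A small Python sketch, with arbitrary rational values standing in for the series $B(n,r,p)$:

```python
from fractions import Fraction
from collections import defaultdict
from math import comb

n, r = 6, 4
B = {p: Fraction(3 * p + 2, p + 5) for p in range(r + 1)}  # placeholder values for B(n,r,p)

# expand sum_p B(n,r,p) E_{r-p}(x) in the basis m_{(1^s)}(x), using
# E_m = sum_k binom(n-m+2k, k) m_{(1^{m-2k})}   (Lemma Lem-Em)
coeff = defaultdict(Fraction)
for p in range(r + 1):
    m_idx = r - p
    for k in range(m_idx // 2 + 1):
        coeff[m_idx - 2 * k] += B[p] * comb(n - m_idx + 2 * k, k)

# compare with the closed formulas for C^{(n)}_{n-r, n-r+2h} and C^{(n)}_{n-r, n-r+2h+1}
for h in range(r // 2 + 1):
    assert coeff[r - 2 * h] == sum(B[2 * p] * comb(n - r + 2 * h, h - p) for p in range(h + 1))
for h in range((r - 1) // 2 + 1):
    assert coeff[r - 2 * h - 1] == sum(B[2 * p + 1] * comb(n - r + 2 * h + 1, h - p) for p in range(h + 1))
```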
\begin{prp}\label{propC^n}
We have $i>j \Rightarrow \mathcal{C}^{(n)}_{i, j}=0$,
$\mathcal{C}^{(n)}_{i, i}=1$ $(i\geq 1)$, and
\begin{gather}
\mathcal{C}^{(n)}_{i,j} = \mathcal{C}^{(n)}_{i-1, j-1} + g\big(t^i\big) \mathcal{C}^{(n)}_{i, j-1} + f\big(t^i\big)
\mathcal{C}^{(n)}_{i+1, j-1}. \label{REC}
\end{gather}
\end{prp}
\begin{proof}By Definition \ref{C^n}, we have the triangularity $i>j \Rightarrow \mathcal{C}^{(n)}_{i, j}=0$.
We have from~(\ref{C-B-1}) that $\mathcal{C}^{(n)}_{i, i}=1$ $(i\geq 1)$.
Next we shall show~(\ref{REC}) in the case $i=n-r$, $j=n-r+2h$ ($h \geq 0$);
the other case $i=n-r$, $j=n-r+2h+1$ ($h \geq 0$) can be treated in the same way
and is omitted. The r.h.s.\ of~(\ref{REC}) in this case reads
\begin{gather}
\mathcal{C}^{(n)}_{n-r-1, n-r+2h-1} + g\big(t^{n-r}\big) \mathcal{C}^{(n)}_{n-r, n-r+2h-1}
+ f\big(t^{n-r}\big) \mathcal{C}^{(n)}_{n-r+1, n-r+2h-1} \notag \\
\qquad{} =
\sum_{p=0}^{h}\! B(n,r+1,2p) \binom{n-r-1+2h}{ h-p}
+g\big(t^{n-r}\big) \sum_{p=0}^{h-1}\! B(n,r,2p+1) \binom{n-r+2h-1}{h-1-p} \notag \\
\qquad\quad{} +f\big(t^{n-r}\big) \sum_{p=0}^{h-1} B(n,r-1,2p) \binom{n-r+2h-1}{h-1-p}. \label{pm1}
\end{gather}
Concerning the first term in the r.h.s.\ of (\ref{pm1}), we have
\begin{gather}
\sum_{p=0}^{h} B(n,r+1,2p) \binom{n-r-1+2h}{ h-p} \notag \\
\qquad{}=\binom{n-r-1+2h}{h} + \sum_{p=1}^{h} B(n,r+1,2p) \binom{n-r-1+2h}{h-p} \notag \\
\qquad{}=\binom{n-r-1+2h}{h} + \sum_{p=0}^{h-1} B(n,r+1,2p+2) \binom{n-r-1+2h}{h-p-1}.\label{pm2}
\end{gather}
By Theorem \ref{strBthm}, the other parts of the r.h.s.\
of (\ref{pm1}) are calculated as
\begin{gather}
\sum_{p=0}^{h-1} \binom{n-r+2h-1}{h-1-p}
\big( g\big(t^{n-r}\big) B(n,r,2p+1) + f\big(t^{n-r}\big) B(n,r-1,2p) \big) \notag \\
\qquad{} =\sum_{p=0}^{h-1} \binom{n-r+2h-1}{h-1-p}
\big(B(n,r,2p+2) + B(n,r,2p) - B(n,r+1,2p+2)\big) \notag \\
\qquad{} =\sum_{p=0}^{h-1} \binom{n-r+2h-1}{h-1-p}
\big(B(n,r,2p+2) - B(n,r+1,2p+2)\big) \notag \\
\qquad\quad {} + \sum_{p=0}^{h-1} \left(\binom{n-r+2h}{h-p} - \binom{n-r+2h-1}{h-p}\right)B(n,r,2p) \notag \\
\qquad{} =\sum_{p=0}^{h-1} \binom{n-r+2h-1}{h-1-p}
\big(B(n,r,2p+2) - B(n,r+1,2p+2)\big) \notag \\
\qquad\quad {}+ \sum_{p=0}^{h} \left(\binom{n-r+2h}{h-p} - \binom{n-r+2h-1}{h-p}\right)B(n,r,2p). \label{pm3}
\end{gather}
Hence, from (\ref{pm2}) and (\ref{pm3}), the r.h.s.\ of (\ref{pm1}) is
\begin{gather}
\binom{n-r-1+2h}{h} + \sum_{p=0}^{h} \binom{n-r+2h}{h-p} B(n,r,2p) \notag \\
\qquad{} +\sum_{p=0}^{h-1} \binom{n-r+2h-1}{h-1-p} B(n,r,2p+2)
-\sum_{p=0}^{h}\binom{n-r+2h-1}{h-p}B(n,r,2p). \label{pm4}
\end{gather}
Then, noting that the last term of (\ref{pm4}) can be recast as
\begin{gather*}
\binom{n-r+2h-1}{h}+
\sum_{p=1}^{h}\binom{n-r+2h-1}{h-p}B(n,r,2p) \notag \\
\qquad{} =\binom{n-r+2h-1}{h}+
\sum_{p=0}^{h-1}\binom{n-r+2h-1}{h-p-1}B(n,r,2p+2),
\end{gather*}
the r.h.s.\ of (\ref{pm1}) is calculated as
\begin{gather*}
\sum_{p=0}^{h} \binom{n-r+2h}{h-p} B(n,r,2p) = \mathcal{C}^{(n)}_{n-r, n-r+2h}.\tag*{\qed}
\end{gather*}\renewcommand{\qed}{}
\end{proof}
\subsection{Proof of Theorem \ref{main2}} \label{ProofMAIN2}
Now we are ready to state our proof of Theorem~\ref{main2}.
\begin{proof}[Proof of Theorem \ref{main2}]
Consider the following recursion equation for the
infinite upper triangular matrix $X=(X_{i,j})_{0\leq i,j<\infty}$:
\begin{subequations}
\begin{gather}
X \text{ is upper triangular,} \label{X-1}\\
X_{i,i} = 1, \qquad i \geq 0, \label{X-2}\\
X_{i,j} = X_{i-1, j-1} + g\big(t^i\big) X_{i, j-1} + f\big(t^i\big) X_{i+1, j-1}.\label{X-3}
\end{gather}
\end{subequations}
The solution to this recursion equation (\ref{X-1}), (\ref{X-2}), (\ref{X-3})
for $X$ exists uniquely.
Write by $\mathcal{C}=(\mathcal{C}_{i,j})_{0\leq i,j<\infty}$ the unique solution.
The stability property (i.e., the independence of
$\mathcal{C}$ from $n$) is clear.
We have established that $\mathcal{C}^{(n)}=\big(\mathcal{C}^{(n)}_{i, j}\big)$
satisfies the same recursion equation (\ref{X-1}), (\ref{X-2}), (\ref{X-3}).
Hence we have $\mathcal{C}^{(n)}_{i, j}=\mathcal{C}_{i,j}$
($0\leq i,j\leq n$). This proves Theorem~\ref{main2}.
\end{proof}
\subsection{Proof of Theorem \ref{main1}} \label{ProofMAIN1}
By Theorem \ref{main2} we have
\begin{gather*}
P^{BC_n}_{(1^r)} = \sum_{k=0}^r \mathcal{C}_{n-r, n-r+k} m_{(1^{r-k})}(x_1, x_2, \ldots, x_n).
\end{gather*}
Noting that
\begin{gather*}
m_{(1^{r-k})}(x_1, x_2, \ldots, x_n) = m_{(1^{r-k})}(x_1, x_2, \ldots, x_{n-1})\\
\hphantom{m_{(1^{r-k})}(x_1, x_2, \ldots, x_n) =}{}
+ (x_n+1/x_n )m_{(1^{r-k-1})}(x_1, x_2, \ldots, x_{n-1}),
\end{gather*}
we have
\begin{gather}
\mathcal{C}_{n-r, n-r+k} m_{(1^{r-k})}(x_1, x_2, \ldots, x_n) =\mathcal{C}_{n-r, n-r+k} m_{(1^{r-k})}(x_1, x_2, \ldots, x_{n-1})
\label{main1eq1}\\
\hphantom{\mathcal{C}_{n-r, n-r+k} m_{(1^{r-k})}(x_1, x_2, \ldots, x_n) =}{}
+ \mathcal{C}_{n-r, n-r+k} (x_n+1/x_n) m_{(1^{r-k-1})}(x_1, x_2, \ldots, x_{n-1}).\nonumber
\end{gather}
The first term of (\ref{main1eq1}), by (\ref{rec3}), is
\begin{gather*}
\mathcal{C}_{n-r, n-r+k} m_{(1^{r-k})}(x_1, x_2, \ldots, x_{n-1})
=\big( \mathcal{C}_{n-r-1, n-r+k-1} + g\big(t^{n-r}\big) \mathcal{C}_{n-r, n-r+k-1}\\
\qquad{}+ f\big(t^{n-r}\big) \mathcal{C}_{n-r+1, n-r+k-1}\big)
m_{(1^{r-k})}(x_1, x_2, \ldots, x_{n-1}).
\end{gather*}
Then we have
\begin{gather*}
P^{BC_n}_{(1^r)} = \sum_{k=0}^r \mathcal{C}_{n-r-1, n-r-1+k} m_{(1^{r-k})}(x_1, x_2, \ldots, x_{n-1}) \notag \\
\hphantom{P^{BC_n}_{(1^r)} =}{} + \sum_{k=0}^{r-1} \big((x_n+1/x_n) + g\big(t^{n-r}\big)\big)\mathcal{C}_{n-r, n-r+k} m_{(1^{r-k-1})}(x_1, x_2, \ldots, x_{n-1}) \notag \\
\hphantom{P^{BC_n}_{(1^r)} =}{} + \sum_{k=0}^{r-2} f\big(t^{n-r}\big) \mathcal{C}_{n-r+1, n-r+1+k} m_{(1^{r-k-2})}(x_1, x_2, \ldots, x_{n-1}).
\end{gather*}
This completes the proof of Theorem \ref{main1}. \qed
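The branching identity for $m_{(1^{r-k})}$ used above simply splits the orbit of $(1^{r-k}, 0^{n-r+k})$ according to the exponent of $x_n$ ($0$ or $\pm 1$); it can be confirmed at a sample point as follows (a Python sketch with exact rational arithmetic):

```python
from fractions import Fraction
from itertools import combinations, product

def m_one_col(s, xs):
    # BC-type monomial symmetric function m_{(1^s)}: orbit sum of (1^s, 0^{len(xs)-s})
    tot = Fraction(0)
    for S in combinations(range(len(xs)), s):
        for signs in product((1, -1), repeat=s):
            p = Fraction(1)
            for i, eps in zip(S, signs):
                p *= xs[i] if eps == 1 else 1 / xs[i]
            tot += p
    return tot

x = [Fraction(2), Fraction(3), Fraction(5), Fraction(7)]   # sample point, n = 4
for s in range(1, len(x) + 1):
    lhs = m_one_col(s, x)
    rhs = m_one_col(s, x[:-1]) + (x[-1] + 1 / x[-1]) * m_one_col(s - 1, x[:-1])
    assert lhs == rhs
```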
\section[Some degenerations of Macdonald polynomials of type $B_n$
with one column diagrams and Kostka polynomials]{Some degenerations of Macdonald polynomials of type $\boldsymbol{B_n}$\\
with one column diagrams and Kostka polynomials} \label{KostkaB}
This section is devoted to the study of certain degenerations of our formulas for the Macdonald polynomial $P^{(B_n, B_n)}_{(1^r)}(x|a;q,t)$.
\subsection[Macdonald polynomials of type $(B_n, B_n)$]{Macdonald polynomials of type $\boldsymbol{(B_n, B_n)}$}
Setting the parameters as $(a,b,c,d; q,t) \rightarrow \big(q^{1/2}, -q^{1/2}, -1, a; q,t\big)$ in the Koornwinder polynomial $P_{\lambda}$, we obtain the Macdonald polynomials of type $(B_n, B_n)$
\begin{gather*}
P_{\lambda}^{(B_n, B_n)}(x|a;q,t) = P_{\lambda}\big(x|q^{1/2}, -q^{1/2}, -1, a| q,t\big).
\end{gather*}
\begin{thm}\label{tqt} We have
\begin{gather*}
P_{(1^r)}^{(B_n, B_n)}(x|t;q,t) =\sum_{j = 0}^{r}
{
\big(1/t, t^{n-r+1}, -t^{n-r} q, t^{2n-2r} q/t; t\big)_j
\over
\big(t, t^{2n-2r+1}q; t\big)_j \big(t^{2n-2r} q/t; t^2\big)_j
}(-t)^j \\
\hphantom{P_{(1^r)}^{(B_n, B_n)}(x|t;q,t) =}{}
\times \sum_{k = 0}^{\lfloor{r-j \over 2}\rfloor}
{ \big(t/q ; t^2\big)_k \big(t^{n-r+2+j} ; t^2\big)_k \big(t^{2n-2r+2j} ; t^2\big)_k
\over
\big(t^2 ; t^2\big)_k \big(t^{n-r+j} ; t^2\big)_k \big(t^{2n-2r+1+2j}q ; t^2\big)_k }
{\big(t^{n-r+j} ; t\big)_{2k}
\over
\big(t^{n-r+1+j} ; t\big)_{2k} }\\
\hphantom{P_{(1^r)}^{(B_n, B_n)}(x|t;q,t) =}{}\times
{ 1-t^{n-r+2k+j}
\over
1-t^{n-r+j} }q^{k} E_{r-2k-j}(x), \\
E_r(x) =\sum_{k=0}^{\lfloor{r \over 2}\rfloor}
{ \big(t^{n-r+1}; t\big)_{2k}
\over
\big(t^{n-r}; t\big)_{2k} }
{ \big(q/t; t^2\big)_{k}
\over
\big(t^2, t^{2n-2r+2}; t^2\big)_{k} }
{ \big(t^{2n-2r}; t^2\big)_{2k} \big(t^{2n-2r-1}q; t^2\big)_{k}
\over
\big(t^{2n-2r-1}q; t^2\big)_{2k}}
t^k \\
\hphantom{E_r(x) =}{} \times\! \sum_{j=0}^{r-2k}\! (-1)^j\!
{ \big(t^{n-r+2k+1}, -t^{n-r+2k}q, -t^{n-r+2k}q^{1/2}, t^{n-r+2k}q^{1/2}; t\big)_j
\over
\big(t^{2n-2r+4k}q; t\big)_{2j} }
P^{(B_n,B_n)}_{(1^{r-2k-j})}(t; q,t).
\end{gather*}
\end{thm}
\begin{cor} Setting $t=q$, we have the formula for the Schur polynomials
$s^{B_n}_{(1^r)}(x) = P_{(1^r)}^{(B_n, B_n)}(x|q;q,q)$:
\begin{gather*}
s^{B_n}_{(1^r)}(x) = E_r(x) + E_{r-1}(x), \\
E_r(x) = \sum_{j=0}^r (-1)^j s^{B_n}_{(1^{r-j})}(x).
\end{gather*}
Hence, from Lemma~{\rm \ref{Lem-Em}}, we have
\begin{gather*}
s^{B_n}_{(1^r)}(x) =
\sum_{j=0}^{\lfloor{r \over 2}\rfloor}\binom{n-r+2j}{j} m_{(1^{r-2j})}(x)
+
\sum_{j=0}^{\lfloor{r-1 \over 2}\rfloor}\binom{n-r+2j+1}{j} m_{(1^{r-2j-1})}(x).
\end{gather*}
\end{cor}
\subsection[Hall--Littlewood polynomials $P_{(1^r)}^{(B_n, B_n)}(x|t;0,t)$ and Kostka polynomials]{Hall--Littlewood polynomials $\boldsymbol{P_{(1^r)}^{(B_n, B_n)}(x|t;0,t)}$ and Kostka polynomials} \label{Kostka}
\begin{thm} Setting $q=0$, we have
\begin{gather*}
P_{(1^r)}^{(B_n, B_n)}(x|t;0,t)
=\sum_{k=0}^{\lfloor{r \over 2}\rfloor}
(-1)^k t^{k^2}
{ [n-r+2k]_t
\over
[n-r]_t
}
\left[ n-r+k-1 \atop k \right]_{t^2}
E_{r-2k}(x) \notag \\
\qquad{} +\big(1-t^{n-r+1}\big)
\sum_{k=0}^{\lfloor{r-1 \over 2}\rfloor}
(-1)^k t^{k^2}
{ [n-r+2k+1]_t
\over
[n-r+1]_t
}
\left[ n-r+k \atop k \right]_{t^2}
E_{r-1-2k}(x), \\
E_r(x) =
\sum_{k=0}^{\lfloor{r \over 2}\rfloor}
{ \big(t^{n-r+1}; t\big)_{2k}
\over
\big(t^{n-r}; t\big)_{2k} }
{ \big(t^{2n-2r}; t^2\big)_{2k}
\over
\big(t^2, t^{2n-2r+2}; t^2\big)_{k} }
t^k \sum_{j=0}^{r-2k}(-1)^j
\big(t^{n-r+2k+1}; t\big)_j
P^{(B_n, B_n)}_{(1^{r-2k-j})}(t; 0,t) \notag \\
\hphantom{E_r(x)}{} =
\sum_{l=0}^{r}\sum_{k=0}^{\lfloor{l \over 2}\rfloor}
{ \big(t^{2n-2r}; t^2\big)_{2k}
\over
\big(t^2; t^2\big)_{k} \big(t^{n-r}; t\big)_{2k} \big(t^{2n-2r+2}; t^2\big)_{k}
}
t^k
(-1)^l \big(t^{n-r+1}; t\big)_{l} P^{(B_n, B_n)}_{(1^{r-l})}(t; 0,t).
\end{gather*}
\end{thm}
\begin{dfn} Let $K^{B_n}_{(1^r) (1^{r-l})}(t)$ be the transition coefficient defined by
\begin{gather*}
s^{B_n}_{(1^r)}(x) = \sum_{l=0}^{r} K^{B_n}_{(1^r) (1^{r-l})}(t) P^{(B_n, B_n)}_{(1^{r-l})}(t; 0,t).
\end{gather*}
\end{dfn}
Then we have the following theorem.
\begin{thm} The coefficient $K^{B_n}_{(1^r) (1^{r-l})}(t)$ is
given by
\begin{gather*}
K^{B_n}_{(1^r) (1^{r-l})}(t) =
\sum_{k=0}^{\lfloor{l \over 2}\rfloor}
{ \big(t^{2n-2r}; t^2\big)_{2k} \big(t^{n-r+1}; t\big)_{l}
\over
\big(t^2; t^2\big)_{k} \big(t^{n-r}; t\big)_{2k} \big(t^{2n-2r+2}; t^2\big)_{k}
}
t^k
(-1)^l \notag \\
\hphantom{K^{B_n}_{(1^r) (1^{r-l})}(t) =}{}+
\sum_{k=0}^{\lfloor{l-1 \over 2}\rfloor}
{ \big(t^{2n-2r+2}; t^2\big)_{2k} \big(t^{n-r+2}; t\big)_{l-1}
\over
\big(t^2; t^2\big)_{k} \big(t^{n-r+1}; t\big)_{2k} \big(t^{2n-2r+4}; t^2\big)_{k}
}
t^k
(-1)^{l-1}.
\end{gather*}
\end{thm}
\begin{thm} We have
\begin{gather*}
K^{B_n}_{(1^r) (1^{r-l})}(t)
=
\begin{cases} \displaystyle
t^L
\left[ n-r+2L \atop L \right]_{t^2},
& l=2L , \vspace{1mm}\\
\displaystyle
t^{L+n-r+1}
\left[ n-r+2L+1 \atop L \right]_{t^2},
& l=2L+1 .
\end{cases}
\end{gather*}
In particular, $K^{B_n}_{(1^r) (1^{r-l})}(t)$ is a polynomial in $t$ with nonnegative integral coefficients.
\end{thm}
\begin{proof} Set
\begin{gather*}
\alpha_{k}
:=
{ \big(t^{2n-2r}; t^2\big)_{2k} \big(1-t^{n-r+1}\big)
\over
\big(t^2; t^2\big)_{k} \big(t^{n-r}; t\big)_{2k} \big(t^{2n-2r+2}; t^2\big)_{k}
}
t^k, \\
\beta_{k}
:=
{ \big(t^{2n-2r+2}; t^2\big)_{2k}
\over
\big(t^2; t^2\big)_{k} \big(t^{n-r+1}; t\big)_{2k} \big(t^{2n-2r+4}; t^2\big)_{k}
}
t^k.
\end{gather*}
Then we have
\begin{gather}
K^{B_n}_{(1^r) (1^{r-l})}(t) =\big(t^{n-r+2} ;t\big)_{l-1}
\left( (-1)^l \sum_{k=0}^{\lfloor{l \over 2}\rfloor} \alpha_{k}+
(-1)^{l-1} \sum_{k=0}^{\lfloor{l-1 \over 2}\rfloor} \beta_{k} \right).\label{67}
\end{gather}
We shall prove the assertion for $l=2L$ by induction on $L$, using (\ref{67}). We have
\begin{gather}
K^{B_n}_{(1^r) (1^{r-(2L+2)})}(t) =\big(t^{n-r+2} ;t\big)_{2L+1} \left( \sum_{k=0}^{L+1} \alpha_{k}
-
\sum_{k=0}^{L} \beta_{k} \right) =
\big(t^{n-r+2} ;t\big)_{2L+1} (\alpha_{L+1} - \beta_{L}) \notag \\
\quad+
\big(1-t^{n-r+2L+1}\big)\big(1-t^{n-r+2L+2}\big) \left(
\big(t^{n-r+2} ;t\big)_{2L-1}
\left( \sum_{k=0}^{L} \alpha_{k}
-
\sum_{k=0}^{L-1} \beta_{k} \right) \right) \notag \\
=
\big(t^{n-r+2} ;t\big)_{2L+1} (\alpha_{L+1} - \beta_{L})
+
\big(1-t^{n-r+2L+1}\big)\big(1-t^{n-r+2L+2}\big)
{ \big(t^{2n-2r+2L+2} ;t^2\big)_L
\over
\big(t^2 ;t^2\big)_L
} t^L \notag \\
={ \big(t^{2n-2r+2(L+1)+2} ;t^2\big)_{L+1}\over
\big(t^2 ;t^2\big)_{L+1} } t^{L+1}.\label{68}
\end{gather}
The case $l=2L+1$ can be treated similarly.
\end{proof}
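Both the summation formula and the closed form above can be tested numerically at a generic rational value of $t$. In the following Python sketch, \texttt{poch} is the $q$-Pochhammer symbol $(a;q)_k$ and \texttt{qbinom} the Gaussian binomial $\left[{m \atop k}\right]_q$:

```python
from fractions import Fraction

def poch(a, q, k):
    # q-Pochhammer symbol (a; q)_k
    p = Fraction(1)
    for i in range(k):
        p *= 1 - a * q**i
    return p

def qbinom(m, k, q):
    # Gaussian binomial [m choose k]_q
    num = den = Fraction(1)
    for i in range(1, k + 1):
        num *= 1 - q**(m - k + i)
        den *= 1 - q**i
    return num / den

def K_sum(n, r, l, t):
    # the two-sum expression for K^{B_n}_{(1^r)(1^{r-l})}(t); write s = t^{n-r}
    s = t**(n - r)
    tot = Fraction(0)
    for k in range(l // 2 + 1):
        tot += ((-1)**l * poch(s * s, t**2, 2 * k) * poch(s * t, t, l) * t**k
                / (poch(t**2, t**2, k) * poch(s, t, 2 * k) * poch(s * s * t**2, t**2, k)))
    for k in range((l - 1) // 2 + 1):
        tot += ((-1)**(l - 1) * poch(s * s * t**2, t**2, 2 * k) * poch(s * t**2, t, l - 1) * t**k
                / (poch(t**2, t**2, k) * poch(s * t, t, 2 * k) * poch(s * s * t**4, t**2, k)))
    return tot

def K_closed(n, r, l, t):
    L, odd = divmod(l, 2)
    if odd == 0:
        return t**L * qbinom(n - r + 2 * L, L, t**2)
    return t**(L + n - r + 1) * qbinom(n - r + 2 * L + 1, L, t**2)

t = Fraction(2, 3)
for n, r in [(5, 3), (6, 2)]:
    for l in range(r + 1):
        assert K_sum(n, r, l, t) == K_closed(n, r, l, t)
```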
\section{Solution to the recursion relation}\label{SolOfRec}
We give a combinatorial expression of the entries $\mathcal{C}_{r,r+l}$
in the transition matrix $\mathcal{C}$ for $r, l \in \mathbb{Z}_{\geq 0}$.
In this section we set $f(s) =f(s|a,b,c,d)$ and $g(s) =g(s|a,b,c,d)$
for simplicity.
For $m,n \in \mathbb{Z}_{\geq 0}$ let us define the finite set
\begin{gather*}
\mathcal{P}_{m,n}^{(r)}= \{X_1 X_2 \cdots X_{m+n} \,|\,
X_i = f\big(t^{r-d_i}\big) \text{ or } g\big(t^{r-d_i}\big) \text{ for } 1 \leq i \leq m+n, \, d_i \in
\mathbb{Z}
\},
\end{gather*}
whose elements satisfy the following three conditions:
\begin{itemize}\itemsep=0pt
\item[1)] $0 \leq d_1 \leq r$,
\item[2)] $| \{ i \,|\, X_i = f(t^{r-d_i}) \} |= m$ \text{ and }
$| \{ j \,|\,X_j = g(t^{r-d_j}) \} |= n$,
\item[3)] If $(X_i, X_{i+1}) =
\begin{cases}
\big(f\big(t^{r-d_i}\big),f\big(t^{r-d_{i+1}}\big)\big) \text{ or } \big(f\big(t^{r-d_i}\big),g\big(t^{r-d_{i+1}}\big)\big)& \text{then } d_i-1 \leq d_{i+1} \leq r, \\
\big(g\big(t^{r-d_{i}}\big),g\big(t^{r-d_{i+1}}\big)\big) \text{ or } \big(g\big(t^{r-d_{i}}\big),f\big(t^{r-d_{i+1}}\big)\big) & \text{then } d_i \leq d_{i+1} \leq r.
\end{cases}
$
\end{itemize}
\begin{thm} Let us define $\mathcal{C}_{0,0}=1$. For $r,k \in \mathbb{Z}_{\geq 0}$ we have
\begin{gather*}
\mathcal{C}_{r,r+2k}
= \sum_{k_1,k_2 \in \mathbb{Z}_{\geq 0} \atop k_1 + k_2 =k}
\sum_{X \in \mathcal{P}_{k_1,2k_2}^{(r)}} X,
\qquad \mathcal{C}_{r,r+2k+1}
= \sum_{k_1,k_2 \in \mathbb{Z}_{\geq 0} \atop k_1 + k_2 =k}
\sum_{X \in \mathcal{P}_{k_1,2k_2+1}^{(r)}} X.
\end{gather*}
\end{thm}
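These expressions can be compared with the recursion (\ref{X-3}) directly. The Python sketch below (an illustration only, not part of the proof) solves the recursion with arbitrary exact rational values substituted for $f(t^i)$ and $g(t^i)$, enumerates the words in $\mathcal{P}^{(r)}_{m,n}$, and checks that the two computations of $\mathcal{C}_{r,r+l}$ agree for small $r$ and $l$:

```python
from fractions import Fraction

# arbitrary exact sample values standing in for f(t^i) and g(t^i)
def f(i): return Fraction(i + 2, 2 * i + 5)
def g(i): return Fraction(3 * i + 1, i + 4)

# solve X_{i,j} = X_{i-1,j-1} + g(t^i) X_{i,j-1} + f(t^i) X_{i+1,j-1},
# upper triangular with X_{i,i} = 1, column by column
N = 12
X = {(i, j): Fraction(1 if i == j else 0) for i in range(N) for j in range(N) if i >= j}
for j in range(1, N):
    for i in range(j - 1, -1, -1):
        X[i, j] = (X.get((i - 1, j - 1), Fraction(0)) + g(i) * X[i, j - 1]
                   + f(i) * X[i + 1, j - 1])

def word_sum(r, m, n):
    # sum over the words in P^{(r)}_{m,n}: m f-letters and n g-letters,
    # with d-values obeying conditions 1) and 3)
    if m + n == 0:
        return Fraction(1)
    tot = Fraction(0)
    def rec(pos, prev_is_f, prev_d, nf, ng, val):
        nonlocal tot
        if pos == m + n:
            tot += val
            return
        lo = 0 if pos == 0 else (prev_d - 1 if prev_is_f else prev_d)
        for d in range(lo, r + 1):
            if nf < m:
                rec(pos + 1, True, d, nf + 1, ng, val * f(r - d))
            if ng < n:
                rec(pos + 1, False, d, nf, ng + 1, val * g(r - d))
    rec(0, False, 0, 0, 0, Fraction(1))
    return tot

def C_comb(r, l):
    k, odd = divmod(l, 2)
    return sum(word_sum(r, k1, 2 * (k - k1) + odd) for k1 in range(k + 1))

for r in range(3):
    for l in range(5):
        assert X[r, r + l] == C_comb(r, l)
```

For instance, the entry $\mathcal{C}_{r,r+1}$ comes out as $\sum_{j=0}^{r} g\big(t^{j}\big)$ in both computations.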
One of my dissatisfactions with the standard model is that for the
explanation of the mass spectra of quarks and leptons,
we must choose the coefficients $y_{ij}^f$ in the Yukawa coupling
$\sum_f \sum_{i,j} y_{ij}^f\, \overline{f}_L^i f_{jR} \langle\phi^0\rangle$
($f=\nu,e,u,d$, and $i,j$ are family indices) ``by hand''.
In order to reduce this dissatisfaction, for example, let us suppose
U(3)$_{family}$ nonet Higgs fields
which couple with fermions as
$\sum_f\sum_{i,j}\overline{f}_L^i\langle\phi_i^{0j}\rangle f_{jR}$.
Unfortunately, we know that the mass spectra of up- and
down-quarks and charged leptons are not identical and the Kobayashi-Maskawa
[1] (KM) matrix is not a unit matrix.
Moreover, we know that in such multi-Higgs models, in general,
flavor changing neutral currents (FCNC) appear unfavorably.
Nevertheless, I would like to take up the challenge of a model
with U(3)$_{family}$ nonet Higgs bosons
which leads to a seesaw-type quark and lepton mass matrix
$$
M_f \simeq m_L M_F^{-1} m_R \ .\eqno(1)
$$
My motives are as follows.
One of the motives is a phenomenological success of a charged lepton
mass relation [2]
$$
m_e+m_\mu+m_\tau=\frac{2}{3}(\sqrt{m_e}+\sqrt{m_\mu}+\sqrt{m_\tau})^2
\ , \eqno(2)
$$
which predicts $m_\tau = 1776.96927\pm 0.00052\pm 0.00005$ MeV
for the input values [3] of $m_e=0.51099906\pm 0.00000015$ MeV and
$m_\mu=105.658389\pm 0.000034$ MeV (the first and second errors in the prediction
come from the errors of $m_\mu$ and $m_e$, respectively).
The recent measurement [4] of the tau lepton mass,
$m_\tau = 1776.96^{+0.18+0.20}_{-0.19-0.16}$ MeV,
satisfies the charged lepton mass relation (2) excellently.
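Relation (2) can be solved for $m_\tau$ in closed form; a small numerical check, using only the input masses quoted above (in MeV):

```python
from math import sqrt

# Solve m_e + m_mu + m_tau = (2/3)(sqrt(m_e)+sqrt(m_mu)+sqrt(m_tau))^2
# for m_tau, given the measured electron and muon masses (MeV).
m_e, m_mu = 0.51099906, 105.658389

s = sqrt(m_e) + sqrt(m_mu)
# with t = sqrt(m_tau) the relation becomes a quadratic in t:
#   t^2 - 4*s*t + 3*(m_e + m_mu) - 2*s**2 = 0
t = 2.0 * s + sqrt(6.0 * s**2 - 3.0 * (m_e + m_mu))  # larger root
m_tau = t**2
print(f"predicted m_tau = {m_tau:.5f} MeV")  # ~1776.969 MeV
```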
An attempt to derive the mass relation (2) from a Higgs model has been
made [5]:
We assumed U(3)$_{family}$ nonet Higgs bosons $\phi_i^j$ $(i,j=1,2,3)$,
whose potential
is given by
$$V(\phi)=\mu^2{\rm Tr}(\phi\phi^\dagger)
+\frac{1}{2}\lambda\left[{\rm Tr}(\phi\phi^\dagger)\right]^2
+\eta\phi_s\phi_s^*{\rm Tr}(\phi_{oct}\phi_{oct}^\dagger) \ . \eqno(3)
$$
Here, for simplicity, the SU(2)$_L$ structure of $\phi$ has been neglected,
and we have expressed the nonet Higgs bosons $\phi_i^j$ by
the form of $3\times 3$ matrix,
$$
\phi=\phi_{oct}+\frac{1}{\sqrt{3}}\phi_s\; {\bf 1}\ ,\eqno(4)
$$
where $\phi_{oct}$ is the octet part of $\phi$, i.e., Tr$(\phi_{oct})=0$,
and {\bf 1} is a $3\times 3$ unit matrix.
For $\mu^2<0$, conditions for minimizing the potential (3) lead to
the relation
$$
v_s^*v_s = {\rm Tr}\left( v_{oct}^\dagger v_{oct} \right)\ , \eqno(5)
$$
together with $v=v^\dagger$, where $v=\langle \phi\rangle$,
$v_{oct}=\langle \phi_{oct}\rangle$ and $v_s=\langle \phi_s\rangle$,
so that we obtain the relation
$$
{\rm Tr}\left(v^2\right) =
\frac{2}{3} \left[{\rm Tr}(v)\right]^2 \ . \eqno(6)
$$
If we assume a seesaw-like mechanism for charged lepton mass matrix
$M_e$, $ M_e \simeq m M_E^{-1} m $, with $m\propto v$ and heavy lepton
mass matrix $M_E\propto {\bf 1}$, we can obtain the mass relation (2).
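The step from (5) to (6) can be verified numerically for an arbitrary traceless diagonal octet part; the random seed and value ranges below are arbitrary illustration choices.

```python
import random
from math import sqrt

random.seed(1)
# pick an arbitrary traceless diagonal octet part v_oct = diag(o1, o2, -o1-o2)
o1, o2 = random.uniform(-2.0, 2.0), random.uniform(-2.0, 2.0)
oct_ = [o1, o2, -o1 - o2]
# impose the minimization condition (5): v_s^2 = Tr(v_oct^2)
v_s = sqrt(sum(o * o for o in oct_))
# reassemble v = v_oct + (v_s/sqrt(3)) * 1 as in (4)
v = [o + v_s / sqrt(3.0) for o in oct_]
lhs = sum(x * x for x in v)          # Tr(v^2)
rhs = (2.0 / 3.0) * sum(v) ** 2      # (2/3)[Tr(v)]^2
print(lhs, rhs)                      # equal, verifying relation (6)
```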
Another motive is the phenomenological success [6] of
quark mass matrices with a seesaw-type form (1),
where
$$
m_L\propto m_R \propto M_e^{1/2}\equiv \left(
\begin{array}{ccc}
\sqrt{m_e} & 0 & 0 \\
0 & \sqrt{m_\mu} & 0 \\
0 & 0 & \sqrt{m_\tau}
\end{array} \right) \ ,\eqno(7)
$$
$$
M_F\propto {\bf 1}+b_F e^{i\beta_F} 3X
\equiv \left(
\begin{array}{ccc}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{array} \right) + b_F e^{i\beta_F} \left(
\begin{array}{ccc}
1 & 1 & 1 \\
1 & 1 & 1 \\
1 & 1 & 1
\end{array} \right) \ . \eqno(8)
$$
The model can successfully provide predictions for quark mass ratios
(not only the ratios $m_u/m_c$, $m_c/m_t$, $m_d/m_s$ and $m_s/m_b$,
but also $m_u/m_d$, $m_c/m_s$ and $m_t/m_b$) and
KM matrix parameters.
These phenomenological successes can be reasons why
the model with a U(3)$_{family}$ nonet Higgs bosons, which
leads to a seesaw-type mass matrix (1), should be taken seriously.
\section{Outline of the model}
The model is based on
SU(2)$_L\times$ SU(2)$_R\times$U(1)$_Y\times$U(3)$_{family}$ [7] symmetries.
These symmetries except for U(3)$_{family}$ are gauged.
The prototype of this model was investigated by Fusaoka and the author [8].
However, their Higgs potential leads to massless
physical Higgs bosons, so that it brings some troubles into the theory.
In the present model, the global symmetry U(3)$_{family}$ will be
broken explicitly, and not spontaneously, so that massless physical
Higgs bosons will not appear.
The quantum numbers of our fermions and Higgs bosons are summarized
in Table I.
\begin{center}
Table I. Quantum numbers of fermions and Higgs bosons
\begin{tabular}[t]{|c|c|c|c|c|} \hline\hline
& $Y$ & SU(2)$_L$ & SU(2)$_R$ & U(3)$_{family}$ \\
\hline
$f_L$ & $(\nu, \: e)_L^{Y=-1}$, $(u, \: d)_L^{Y=1/3}$
& {\bf 2} & {\bf 1} & {\bf 3} \\
$f_R$ & $(\nu, \: e)_R^{Y=-1}$, $(u, \: d)_R^{Y=1/3}$
& {\bf 1} & {\bf 2} & {\bf 3} \\
$F_L$ & $N_L^{Y=0}$, $E_L^{Y=-2}$, $U_L^{Y=4/3}$, $D_L^{Y=-2/3}$
& {\bf 1} & {\bf 1} & {\bf 3} \\
$F_R$ & $N_R^{Y=0}$, $E_R^{Y=-2}$, $U_R^{Y=4/3}$, $D_R^{Y=-2/3}$
& {\bf 1} & {\bf 1} & {\bf 3} \\ \hline
$\phi_L$ & $(\phi^+,\phi^0)_L^{Y=1}$ & {\bf 2} & {\bf 1} & {\bf 8}+{\bf 1} \\
$\phi_R$ & $(\phi^+,\phi^0)_R^{Y=1}$ &{\bf 1} & {\bf 2} & {\bf 8}+{\bf 1} \\
$\Phi_F$ & $\Phi_0^{Y=0}$, $\Phi_X^{Y=0}$ & {\bf 1} & {\bf 1} & {\bf 1},
\ {\bf 8} \\ \hline\hline
\end{tabular}
\end{center}
\vspace{0.5cm}
Note that in our model there is no Higgs boson which belongs to ({\bf 2},
{\bf 2}) of SU(2)$_L\times$SU(2)$_R$.
This guarantees that we obtain a seesaw-type mass matrix (1) by
diagonalization of a $6\times 6$ mass matrix for fermions $(f,F)$:
$$
\left( \begin{array}{cc}
0 & m_L \\
m_R & M_F
\end{array} \right) \Longrightarrow
\left( \begin{array}{cc}
M_f & 0 \\
0 & M'_F
\end{array} \right) \ , \eqno(9)
$$
where
$M_f\simeq -m_L M_F^{-1}m_R$ and
$M'_F\simeq M_F$
for $M_F\gg m_L \ , \ m_R$.
(See Fig.~1.)
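A one-family toy version of the block diagonalization (9) illustrates the seesaw suppression of the light mass; the numbers below are illustrative only.

```python
from math import sqrt

# toy one-family version of the seesaw block matrix (9):
#   ( 0    m_L )
#   ( m_R  M_F )
# for M_F >> m_L, m_R the light eigenvalue approaches -m_L*m_R/M_F.
m_L, m_R, M_F = 1.0, 1.5, 100.0   # illustrative numbers only
# eigenvalues of [[0, m_L], [m_R, M_F]] from trace and determinant:
tr, det = M_F, -m_L * m_R
disc = sqrt(tr**2 - 4.0 * det)
light, heavy = (tr - disc) / 2.0, (tr + disc) / 2.0
print(light, -m_L * m_R / M_F)    # nearly equal
print(heavy, M_F)                 # nearly equal
```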
\vspace*{1cm}
\setlength{\unitlength}{0.7mm}
\begin{picture}(160,100)(0,-25)
\linethickness{0.5mm}
\thicklines\put(0,30){\line(1,0){160}}
\thicklines\multiput(20,30)(40,0){4}{\vector(1,0){0}}
\multiput(40,30)(0,2){10}{\line(0,1){1}}
\multiput(80,30)(0,2){10}{\line(0,1){1}}
\multiput(120,30)(0,2){10}{\line(0,1){1}}
\multiput(40,30)(40,0){3}{\circle*{2}}
\put(0,20){\makebox(40,10)[b]{$f_L$}}
\put(40,20){\makebox(40,10)[b]{$F_R$}}
\put(80,20){\makebox(40,10)[b]{$F_L$}}
\put(120,20){\makebox(40,10)[b]{$f_R$}}
\put(30,20){\makebox(20,10)[c]{$g_f^L$}}
\put(70,20){\makebox(20,10)[c]{$g_F$}}
\put(110,20){\makebox(20,10)[c]{$g_f^R$}}
\put(0,10){\makebox(40,10)[b]{({\bf 2, 1, 3})}}
\put(40,10){\makebox(40,10)[b]{({\bf 1, 1, 3})}}
\put(80,10){\makebox(40,10)[b]{({\bf 1, 1, 3})}}
\put(120,10){\makebox(40,10)[b]{({\bf 1, 2, 3})}}
\put(20,55){\makebox(40,10)[t]{$\langle\phi_L^0\rangle$}}
\put(60,55){\makebox(40,10)[t]{$\langle\Phi_F\rangle$}}
\put(100,55){\makebox(40,10)[t]{$\langle\phi_R^0\rangle$}}
\put(15,65){\makebox(40,10)[t]{({\bf 2, 1, 8+1})}}
\put(60,65){\makebox(40,10)[t]{({\bf 1, 1, 1})}}
\put(105,65){\makebox(40,10)[t]{({\bf 1, 2, 8+1})}}
\multiput(30,40)(40,0){3}{\makebox(20,20){{\bf $\times$}}}
\end{picture}
\vspace*{-20mm}
\begin{center}
Fig.~1. \ Mass generation mechanism of $M_f\simeq m_L M_F^{-1} m_R$.
\end{center}
\section{Higgs potential and ``nonet" ansatz}
We assume that $\langle\phi_R\rangle \propto \langle\phi_L\rangle$,
i.e., each term in $V(\phi_R)$ takes the coefficient
which is exactly proportional to the corresponding term in $V(\phi_L)$.
This assumption means that there is a kind of ``conspiracy" between
$V(\phi_R)$ and $V(\phi_L)$.
However, at the present stage, we do not go into this problem further.
Hereafter, we will drop the index $L$ in $\phi_L$.
The potential $V(\phi)$ is given by
$$
V(\phi)=V_{nonet} + V_{Oct\cdot Singl} + V_{SB} \ , \eqno(10)
$$
where
$V_{nonet}$ is a part of $V(\phi)$ which satisfies a ``nonet" ansatz
stated below, $V_{Oct\cdot Singl}$ is a part which violates the ``nonet"
ansatz, and $V_{SB}$ is a term which breaks U(3)$_{family}$ explicitly.
The ``nonet" ansatz is as follows: the octet component $\phi_{oct}$
and singlet component $\phi_s$ of the Higgs scalar fields $\phi_L$
($\phi_R$) always appear in the combination (4) in the
Lagrangian.
Under the ``nonet" ansatz, the SU(2)$_L$ invariant (and also U(3)$_{family}$
invariant) potential $V_{nonet}$ is, in general, given by
$$
V_{nonet}= \mu^2 {\rm Tr}(\overline{\phi}\phi)
+\frac{1}{2}\lambda_1
(\overline{\phi}_i^j \phi_j^i)(\overline{\phi}_k^l \phi_l^k)
$$
$$
+\frac{1}{2}\lambda_2
(\overline{\phi}_i^j \phi_k^l)(\overline{\phi}_l^k \phi_j^i)
+\frac{1}{2}\lambda_3
(\overline{\phi}_i^j \phi_k^l)(\overline{\phi}_j^i \phi_l^k)
+\frac{1}{2}\lambda_4
(\overline{\phi}_i^j \phi_j^k)(\overline{\phi}_k^l \phi_l^i)
$$
$$
+\frac{1}{2}\lambda_5
(\overline{\phi}_i^j \phi_l^i)(\overline{\phi}_k^l \phi_j^k)
+\frac{1}{2}\lambda_6
(\overline{\phi}_i^j \phi_j^k)(\overline{\phi}_l^i \phi_k^l)
+\frac{1}{2}\lambda_7
(\overline{\phi}_i^j \phi_k^l)(\overline{\phi}_j^k \phi_l^i) \ ,\eqno(11)
$$
where $(\overline{\phi} \phi)=\phi^-\phi^+
+\overline{\phi}^0 \phi^0$.
On the other hand, the ``nonet ansatz" violation terms $V_{Oct\cdot Singl}$
are given by
$$
V_{Oct\cdot Singl}=
\eta_1(\overline{\phi}_s\phi_s){\rm Tr}(\overline{\phi}_{oct} \phi_{oct})
+ \eta_2\left( \overline{\phi}_s(\phi_{oct})_i^j\right)
\left( (\overline{\phi}_{oct})_j^i \phi_s\right)
$$
$$
+ \eta_3\left( \overline{\phi}_s(\phi_{oct})_i^j\right)
\left( \overline{\phi}_s (\phi_{oct})_j^i\right)
+ \eta_3^*\left( (\overline{\phi}_{oct})_i^j \phi_s\right)
\left( (\overline{\phi}_{oct})_j^i \phi_s\right) \ .\eqno(12)
$$
For the time being, we neglect the term $V_{SB}$ in (10).
For $\mu^2<0$, conditions for minimizing the potential (10) lead to
the relation
$$
v_s^2={\rm Tr}(v_{oct}^2)=\frac{-\mu^2}
{2(\lambda_1+\lambda_2+\lambda_3)+(\eta_1+\eta_2+2\eta_3)} \ , \eqno(13)
$$
under the conditions $\lambda_4+\lambda_5+2(\lambda_6+\lambda_7)=0$,
and $v=v^\dagger$, where we have put $\eta_3=\eta_3^*$ for simplicity.
Hereafter, we choose the family basis as
$$
v=\left(
\begin{array}{ccc}
v_1 & 0 & 0 \\
0 & v_2 & 0 \\
0 & 0 & v_3
\end{array}
\right) \ .\eqno(14)
$$
For convenience, we define the parameters $z_i$ as
$$ z_i\equiv \frac{v_i}{v_0} =\sqrt{\frac{m_i^e}{m_e+m_{\mu}+m_{\tau}}} \ ,
\eqno(15)
$$
where
$$
v_0=(v_1^2+v_2^2+v_3^2)^{{1}/{2}} \ , \eqno(16)
$$
so that $(z_1, z_2,z_3)=(0.016473,0.23687,0.97140)$.
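The quoted values of $z_i$ follow directly from the charged lepton masses; a quick check (masses in MeV, as given earlier):

```python
from math import sqrt

# charged lepton masses in MeV
m = {'e': 0.51099906, 'mu': 105.658389, 'tau': 1776.969}
total = sum(m.values())
z = [sqrt(m[k] / total) for k in ('e', 'mu', 'tau')]
print(z)                           # ~ [0.016473, 0.23687, 0.97140]
print(sum(zi**2 for zi in z))      # = 1 by construction, cf. (15)-(16)
```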
We define two independent diagonal elements of $\phi_{oct}$ as
$$
\begin{array}{l}
\phi_x =x_1\phi_1^1+x_2\phi_2^2+x_3\phi_3^3 \ , \\
\phi_y =y_1\phi_1^1+y_2\phi_2^2+y_3\phi_3^3 \ ,
\end{array} \eqno(17)
$$
where the coefficients $x_i$ and $y_i$ are given by
$$ x_i={\sqrt{2}} z_i -{1}/{\sqrt{3}} \ , \eqno(18) $$
$$
(y_1,y_2,y_3)=({x_2-x_3},{x_3-x_1},x_1-x_2)/{\sqrt{3}} \ . \eqno(19)
$$
Then, the replacement $\phi^0\rightarrow\phi^0 + v$ means that
$\phi^0_s \rightarrow\phi^0_s + v_s$;
$\phi^0_x \rightarrow\phi^0_x + v_x$;
$\phi^0_y \rightarrow\phi^0_y$;
$(\phi^0)_i^j \rightarrow(\phi^0)_i^j \ \ (i\neq j)$,
where $v_i=v_s/\sqrt{3}+x_i v_x$.
This means that even if we add a term
$$
V_{SB}=\xi \left(\overline{\phi}_y\phi_y
+\sum_{i\neq j}\overline{\phi}_i^j\phi_j^i\right) \ , \eqno(20)
$$
in the potential $V_{nonet}+V_{Oct\cdot Singl}$, the relation (13)
remains unchanged.
\section{Physical Higgs boson masses}
For convenience, we define:
$$
\begin{array}{ccc}
\left(
\begin{array}{c}
\phi^+ \\
\phi^0
\end{array} \right)
& = & \frac{1}{\sqrt{2}}
\left(
\begin{array}{c}
i\sqrt{2} \, \chi^+ \\
H^0-i\chi^0
\end{array} \right)
\end{array}
\ , \eqno(21)
$$
$$
\begin{array}{cccc}
\left(
\begin{array}{c}
\phi_1 \\
\phi_2 \\
\phi_3
\end{array} \right)
& = &
\left(
\begin{array}{ccc}
z_1 & z_2 & z_3 \\
z_1-\sqrt{\frac{2}{3}} & z_2-\sqrt{\frac{2}{3}} & z_3-\sqrt{\frac{2}{3}} \\
\sqrt{\frac{2}{3}}(z_2-z_3) & \sqrt{\frac{2}{3}}(z_3-z_1)
& \sqrt{\frac{2}{3}}(z_1-z_2)
\end{array} \right)
\left(
\begin{array}{c}
\phi_1^1 \\
\phi_2^2 \\
\phi_3^3
\end{array} \right)
\end{array}
\ . \eqno(22)
$$
Then, we obtain the masses of these Higgs bosons, which are summarized in Table II.
\vglue.1in
\begin{quotation}
Table II. Higgs boson masses squared in unit of $v_0^2=(174\; {\rm GeV})^2$,
where $\overline{\xi}=\xi/v_0^2$.
For simplicity, the case of $\lambda_4=\lambda_5=\lambda_6=\lambda_7=0$
is tabulated.
\end{quotation}
$$
\begin{array}{|c|c|c|c|}\hline\hline
\phi & H^0 & \chi^0 & \chi^\pm \\ \hline
m^2(\phi_1) &
\begin{array}{c}
2(\lambda_1+\lambda_2+\lambda_3)\\
+\eta_1+\eta_2+2\eta_3 \end{array}
& 0 & 0 \\ \hline
m^2(\phi_2) & -(\eta_1+\eta_2+2\eta_3) & -2(\lambda_3+2\eta_3) &
-(\lambda_2+\lambda_3 +\eta_2+2\eta_3) \\[.1in] \hline
m^2(\phi_3) & \overline{\xi} & \overline{\xi}-2(\lambda_3+\eta_3) &
\overline{\xi}-[\lambda_2+\lambda_3 +\frac{1}{2}(\eta_2+2\eta_3)]
\\[.1in] \hline
m^2(\phi_i^j) & =m^2(H_3^0) & =m^2(\chi_3^0) & = m^2(\chi_3^\pm)
\\[.1in] \hline\hline
\end{array}
$$
\vglue.1in
The massless states $\chi_1^\pm$ and $\chi_1^0$ are eaten by weak bosons
$W^\pm$ and $Z^0$, so that they are not physical bosons.
The mass of $W^\pm$ is given by $m^2_W=g^2 v_0^2/2$, so that
the value of $v_0$ defined by (16) is $v_0=174$ GeV.
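As a consistency check of $m_W^2=g^2 v_0^2/2$ (taking $m_W\approx 80.4$ GeV as an input assumption), one recovers the usual SU(2)$_L$ gauge coupling:

```python
from math import sqrt

m_W, v0 = 80.4, 174.0        # GeV; m_W from experiment, v0 as in the text
g = sqrt(2.0) * m_W / v0     # inverted from m_W^2 = g^2 v0^2 / 2
print(g)                     # ~0.65, the usual SU(2)_L gauge coupling
```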
\section{Interactions of the Higgs bosons}
{\bf (A) \ Interactions with gauge bosons}
Interactions of $\phi_L$ with gauge bosons are calculated from the kinetic
term Tr($D_{\mu}\overline{\phi}_{L}D^{\mu}\phi_L$). The results are as
follows:
$$
H_{EW}=+i\left( eA_{\mu}+\frac{1}{2}g_z\cos2\theta_W Z_{\mu}\right)
{\rm Tr}(\chi^{-}\buildrel\leftrightarrow\over\partial\,^{\mu} \chi^+)
+\frac{1}{2}g_z Z_{\mu}{\rm Tr}
(\chi^{0} \buildrel\leftrightarrow\over\partial\,^{\mu} H^0)
$$
$$
+ \frac{1}{2}g \left\{ W_{\mu}^+\left[{\rm Tr}
(\chi^{-} \buildrel\leftrightarrow\over\partial\,^{\mu} H^0)
-i\,{\rm Tr}(\chi^{-} \buildrel\leftrightarrow\over\partial\,^{\mu} \chi^0)\right]
+{\rm h.c.}\right\}
$$
$$
+\frac{1}{2}\left( 2gm_W W_{\mu}^-W^{+\mu}
+g_z m_Z Z_{\mu}Z^{\mu}\right) H_1^0
\ , \eqno(23)
$$
where $g_z=g/\cos\theta_W$ and $\chi_1^{\pm}=\chi_1^0=0$.
Note that the interactions of $H^0_1$ are exactly the same as those of $H^0$
in the standard model.
\vglue.1in
{\bf (B) \ Three-body interactions among Higgs bosons}
$$
H_{\phi\phi\phi}=\frac{1}{2\sqrt{2}}\frac{m^2(H_1^0)}{v_0} H_1^0
{\rm Tr}(H^0 H^0)
+\frac{1}{2\sqrt{2}}\frac{m^2(H_2^0)}{v_0}\left( H_1^0 H_2^0 H_2^0
- H_1^0 H_1^0 H_2^0\right) + \ \ \cdots \ . \eqno(24)
$$
The full expression will be given elsewhere.
\vglue.1in
{\bf (C) \ Interactions with fermions}
Our Higgs particles $\phi_L$ do not have interactions with light fermions
$f$ at tree level; they couple light fermions $f$ only to
heavy fermions $F$.
However, when the $6\times 6$ fermion mass matrix is diagonalized as (9),
the interactions of $\phi_L$ with
the physical fermion states (mass eigenstates) become
$$
\left( \begin{array}{cc}
0 & \Gamma_L \\
0 & 0
\end{array} \right) \Longrightarrow
\left( \begin{array}{cc}
\Gamma_{11} & \Gamma_{12} \\
\Gamma_{21} & \Gamma_{22}
\end{array} \right) \ , \eqno(25)
$$
where $\Gamma_L=y_f \phi_L$, and
$$
\Gamma_{11}\simeq U_L^f \phi_L v^{-1} U_L^{f\dagger} D_f \ . \eqno(26)
$$
For charged leptons, since $U_L^e={\bf 1}$,
the interactions of $\phi_L^0$ are given by
$$
H_{Yukawa}^{lepton}=\frac{1}{2\sqrt{2}}\sum_{i,j}\left[
\overline{e}_i(a_{ij}-b_{ij}\gamma_5)e_j (H^0)_i^j
+ i
\overline{e}_i(b_{ij}-a_{ij}\gamma_5)e_j (\chi^0)_i^j
\right] \ ,\eqno(27)
$$
$$
a_{ij}=\frac{m_i}{v_i}+\frac{m_j}{v_j} \ , \ \ \
b_{ij}=\frac{m_i}{v_i}-\frac{m_j}{v_j} \ . \eqno(28)
$$
Therefore, in the pure leptonic modes, the exchange of $\phi_L$ cannot
cause family-number non-conservation.
For quarks, in spite of $U_L^q\neq {\bf 1}$, the Higgs boson $H_1^0$ still
couples with quarks $q_i$ diagonally:
$$
H_{Yukawa}^{quark}=
\frac{1}{\sqrt{2}} \sum_i \frac{m_i^q}{v_0}(\overline{q}_i q_i) H_1^0
\ + \ \ \cdots \ \ . \eqno(29)
$$
However, the dotted parts, which are the interaction terms of $\phi_2$, $\phi_3$
and $\phi_i^j$ ($i\neq j$), cause family-number non-conservation.
\section{Family-number changing and conserving neutral currents}
{\bf (A) \ Family-number changing neutral currents}
In general, the Higgs boson $H_1^0$ does not contribute to
flavor-changing neutral currents (FCNC), and only
the other bosons contribute to $\overline{P}^0$-$P^0$
mixing.
The present experimental values [3]
$\Delta m_K =m(K_L)-m(K_S)=(0.5333\pm 0.0027)\times 10^{10}
\ \hbar {\rm s}^{-1}$,
$|\Delta m_D| =|m(D_1^0)-m(D_2^0)|<20\times 10^{10}
\ \hbar {\rm s}^{-1}$,
$\Delta m_B =m(B_H)-m(B_L)=(0.51\pm 0.06)\times 10^{12}
\ \hbar {\rm s}^{-1}$, and so on, give the lower bound of the Higgs boson masses
$m(H_2^0),\ m(\chi_2^0) > 10^5$ GeV.
For the special case of $m(H)=m(\chi)$,
we obtain the effective Hamiltonian
$$
H_{FCNC}=\frac{1}{3}\left(
\frac{1}{m^2(H_2^0)}-\frac{1}{m^2(H_3^0)}\right)
\sum_{i\neq j}\frac{m_im_j}{v_0^2}
\sum_k \left(
\frac{1}{z_k^2}+\frac{z_k-z_l-z_m}{z_1z_2z_3}\right)
$$
$$
\times
(U_i^kU_j^{k*})^2
\left[(\overline{f}_if_j)^2-(\overline{f}_i\gamma_5f_j)^2\right]
\ , \eqno(30)
$$
where $(k,l,m)$ are cyclic indices of $(1,2,3)$,
so that the bound can be reduced to
$m(H_2^0)=\ m(\chi_2^0) >$ a few TeV.
Note that FCNC can be highly suppressed if $m(H_2)\simeq m(H_3)$.
\vglue.1in
{\bf (B) \ Family-number conserving neutral currents}
The strictest restriction on the lower bound of the Higgs boson masses
comes from
$$
\frac{B(K_L\rightarrow e^\pm \mu^\mp)}{B(K_L \rightarrow\pi^0\ell^\pm\nu)}
\simeq
\left(\frac{v_0}{m_H}\right)^4 \times 1.94 \times 10^{-6} \ .\eqno(31)
$$
The present data [3] $B(K_L\rightarrow e^\pm\mu^\mp)_{exp}
< 3.3\times10^{-11}$ leads to the lower bound
$m_{H3}/v_0 > 12$, i.e., $m_{H3} > 2.1$ TeV.
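A rough numerical check of the bound from (31); the semileptonic branching ratio $B(K_L\to\pi^0\ell\nu)\approx 0.39$ used below is an assumed input, not taken from the text:

```python
# Lower bound on m_H from K_L -> e mu, using eq. (31).
B_exp_limit = 3.3e-11   # experimental upper limit on B(K_L -> e mu)
B_pilnu = 0.39          # assumed B(K_L -> pi0 l nu), rough input value
v0 = 0.174              # TeV

ratio4 = 1.94e-6 * B_pilnu / B_exp_limit   # lower bound on (m_H/v0)^4
m_over_v0 = ratio4 ** 0.25
print(m_over_v0, m_over_v0 * v0)           # ~12, i.e. m_H > ~2.1 TeV
```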
\section{Productions and decays of the Higgs bosons}
As stated already, as far as our Higgs boson $H_1^0$ is concerned,
it is hard to distinguish it from $H^0$ in the standard model.
We now discuss what new physics is expected in connection with
the other Higgs bosons.
\vglue.1in
{\bf (A) \ Productions}
Unfortunately, since masses of our Higgs bosons $\phi_2$ and $\phi_3$ are
of the order of a few TeV, it is hard to observe a production
$$
e^+ + e^- \rightarrow Z^* \rightarrow (H^0)_i^j \ \ \ \ \ \
+ \ (\chi^0)_j^i \ ,
$$
$$\hspace{2.8cm}\hookrightarrow f_i+\overline{f}_j \ \ \ \ \ \
\hookrightarrow f_j+\overline{f}_i \ , \eqno(32)
$$
even in the $e^+e^-$ super linear colliders which are planned
for the near future.
The only chance to observe our Higgs bosons $\phi_i^j$ is
in the production
$$ u \rightarrow t + (\phi)_1^3 \ ,\eqno(33) $$
at a super hadron collider with several TeV beam energy,
for example, at LHC,
because the coupling $a_{tu}$ ($b_{tu}$) is sufficiently large:
$$
a_{tu} \simeq \frac{m_t}{v_3} +\frac{m_u}{v_1}=1.029+ 0.002
, \eqno(34)
$$
[c.f. $a_{bd} \simeq ({m_b}/{v_3})+({m_d}/{v_1})
=0.026+ 0.003$].
\vglue.1in
{\bf (B) \ Decays}
Dominant decay modes of $(H^0)_3^{2}$ and $(H^0)_3^{1}$
are hadronic ones,
i.e., $(H^0)_3^{2}\rightarrow t\overline{c}$,
$b\overline{s}$ \ and $(H^0)_3^{1}\rightarrow t\overline{u}$,
$b\overline{d}$ .
Only in $(H^0)_2^{1}$ decay
is a visible leptonic branching ratio expected:
$$
\Gamma(H_2^{1}\rightarrow c\overline{u}):
\Gamma(H_2^{1}\rightarrow s\overline{d})
:\Gamma(H_2^{1}\rightarrow \mu^-e^+)
$$
$$
\simeq 3\left[\left(\frac{m_c}{v_2}\right)^2+
\left(\frac{m_u}{v_1}\right)^2\right] :
3\left[\left(\frac{m_s}{v_2}\right)^2
+\left(\frac{m_d}{v_1}\right)^2\right] :
\left[\left(\frac{m_{\mu}}{v_2}\right)^2
+\left(\frac{m_e}{v_1}\right)^2\right]
$$
$$
= 73.5\% : 24.9\% : 1.6\% .\eqno(35)
$$
\section{Summary}
We have proposed a U(3)-family nonet Higgs boson scenario,
which leads to a seesaw-type quark and lepton mass matrix
$M_f \simeq m_L M_F^{-1} m_R$.
It has been investigated how a special form of the potential
$V(\phi)$ can provide the relation
$$
m_e+m_\mu+m_\tau
=\frac{2}{3}(\sqrt{m_e}+\sqrt{m_\mu}+\sqrt{m_\tau})^2 \ ,
$$
and the lower bounds on the masses of $\phi_L$ have been estimated
from the data of $P^0$-$\overline{P}^0$ mixing and rare meson decays.
Unfortunately, the Higgs bosons, except for $H_1^0$, in the present
scenario are very heavy, i.e., $m_{H}\simeq m_\chi \sim$ a few TeV.
We expect that our Higgs boson $(\phi^0)_1^3$ will be observed through
the reaction $u \rightarrow t+(\phi^0)_1^3$ at LHC.
The present scenario is not entirely satisfactory from the theoretical
point of view:
\noindent
(1) A curious ansatz, the ``nonet" ansatz, has been assumed.
\noindent
(2) The potential includes an explicitly symmetry breaking term $V_{SB}$.
These problems are future tasks of our scenario.
\vglue.3in
\centerline{\bf Acknowledgments}
Portions of this work (quark mass matrix phenomenology)
were begun in collaboration with H.~Fusaoka [6].
I would like to thank him for helpful conversations.
The problem of the flavor-changing neutral currents in the present model
was pointed out by K.~Hikasa.
I would sincerely like to thank Professor K.~Hikasa for valuable comments.
An improved version of this work is in preparation in collaboration with
Prof.~M.~Tanimoto.
I am indebted to Prof.~M.~Tanimoto for helpful comments.
I would also like to thank the organizers of this workshop,
especially, Professor R.~Najima for a successful and enjoyable workshop.
This work was supported by the Grant-in-Aid for Scientific Research,
Ministry of Education, Science and Culture, Japan (No.06640407).
\vglue.3in
\newcounter{0000}
\centerline{\bf References and Footnote}
\begin{list}
{[~\arabic{0000}~]}{\usecounter{0000}
\labelwidth=0.8cm\labelsep=.1cm\setlength{\leftmargin=0.7cm}
{\rightmargin=.2cm}}
\item M.~Kobayashi and T.~Maskawa, Prog.~Theor.~Phys. {\bf 49}, 652 (1973).
\item Y.~Koide, Lett.~Nuovo Cimento {\bf 34}, 201 (1982); Phys.~Lett.
{\bf B120}, 161 (1983); Phys.~Rev. {\bf D28}, 252 (1983);
Mod.~Phys.~Lett. {\bf 8}, 2071 (1993).
\item Particle data group, Phys.~Rev. {\bf D50}, 1173 (1994).
\item J.~R.~Patterson, a talk presented at the International Conference on
{\it High Energy Physics}, Glasgow, July 20 -- 27, 1994.
\item Y.~Koide, Mod.~Phys.~Lett. {\bf A5}, 2319 (1990).
\item Y.~Koide and H.~Fusaoka, US-94-02, 1994 (hep-ph/9403354),
(unpublished);
H.~Fusaoka and Y.~Koide, AMUP-94-09 \& US-94-08, 1994 (hep-ph/9501299),
to be published in Mod.~Phys.~Lett. (1995).
Also see, Y.~Koide, Phys.~Rev. {\bf D49}, 2638 (1994).
\item The ``family symmetry" is also called a ``horizontal symmetry":
K.~Akama and H.~Terazawa, Univ.~of Tokyo, report No.~257 (1976)
(unpublished); T.~Maehara and T.~Yanagida, Prog.~Theor.~Phys. {\bf 60},
822 (1978); F.~Wilczek and A.~Zee, Phys.~Rev.~Lett. {\bf 42}, 421 (1979);
A.~Davidson, M.~Koca and K.~C.~Wali, Phys.~Rev. {\bf D20}, 1195 (1979);
J.~Chakrabarti, Phys.~Rev. {\bf D20}, 2411 (1979).
\item Y.~Koide and H.~Fusaoka, US-94-02, (1994), (hep-ph/9403354),
(unpublished).
\end{list}
\end{document}
\section{Introduction}
\label{sec:intro}
The most precise measurement of the anomalous magnetic
moment of the muon, obtained by E821 at Brookhaven~\cite{Bennett:2006fi},
differs by more than three standard deviations from the
theoretical expectation.
At present, the uncertainties on the theory and on the experimental sides
are of similar sizes.
For recent reviews and analyses,
see, e.g., refs.~\cite{Grange:2015fou,Blum:2013xva,Hagiwara:2011af,Davier:2010nc,Jegerlehner:2009ry}.
With the planned
E989 experiment at Fermilab~\cite{Grange:2015fou} and E34
at J-PARC~\cite{Aoki:2011cdr}, it is of utmost importance
to increase the precision of the standard
model prediction in line with the expected experimental improvement by a factor
of about five~\cite{Bennett:2006fi,Aoki:2011cdr,Grange:2015fou}. If
the discrepancy persists at this higher level of accuracy,
it should help
to pin down any particular beyond-the-standard-model
scenario and constrain the parameters of new interactions.
With an impressive QED five-loop evaluation~\cite{Aoyama:2012wk}
available, the theoretical uncertainty is
dominated by non-perturbative effects and, in particular,
by the leading hadronic contribution to the electromagnetic vacuum
polarization tensor, with the second biggest source of
uncertainty being the hadronic light-by-light scattering contribution.
The hadronic contribution to the
vacuum polarization tensor is also important in view of the running of the
electromagnetic fine structure constant and of the Weinberg weak mixing
angle~\cite{Moch:2014tta,Benayoun:2014tra,Bodenstein:2012pw,Hagiwara:2011af,Jegerlehner:2011mw}
from low to high scales.
The standard method~\cite{Blum:2002ii,Gockeler:2003cw}
employed in lattice calculations
of the leading hadronic
contribution to the anomalous magnetic moment
$a_{\ell}=(g_{\ell}-2)/2$ of a charged
lepton $\ell\in\{e,\mu,\tau\}$
consists of computing the renormalized vacuum polarization function
and inserting this into the leading-order QED formula.
The hadronic vacuum polarization tensor, which is the main
object of this study, is defined as
\begin{align}
\label{eq:vacu}
\Pi_{\mu\nu}(p)
&= \int \!\textmd{d}^4\! x \, e^{ipx} \expv{j_\mu(x) j_\nu(0)}\\\nonumber
&=\left(p_{\mu}p_{\nu}-\delta_{\mu\nu}p^2\right)\Pi(p^2)\,,
\end{align}
where
\begin{equation}
\label{eq:current}
j_{\mu}=\frac{q_u}{e}\bar{u}\gamma_{\mu}u
+\frac{q_d}{e}\bar{d}\gamma_{\mu}d+\frac{q_s}{e}\bar{s}\gamma_{\mu}s+\cdots
\end{equation}
denotes the quark electromagnetic current in position space
and $q_u/e=2/3$, $q_d/e=q_s/e=-1/3$ are the fractional
quark charges.
Due to electromagnetic current conservation,
$\Pi_{\mu\nu}$ is transverse and can be parameterized in terms
of a single vacuum polarization function $\Pi(p^2)$, where we employ
Euclidean spacetime conventions, i.e.\ the spacelike $p^2>0$
correspond to virtualities.
$\Pi(p^2)$ undergoes additive renormalization but the renormalized
combination
\begin{equation}
\label{eq:pren}
\Pi_{\mathrm{R}}(p^2)=\Pi(p^2)-\Pi(0)
\end{equation}
is ultraviolet finite.
It turns out that the leading hadronic contribution $a_\mu^{\mathrm{had,LO}}$
to the anomalous magnetic moment of the muon (see the
definition equation~(\ref{eq:amu}) below)
depends most strongly on $\Pi_{\mathrm{R}}(p^2)$ at relatively
small argument values.
Since small momenta correspond to large Euclidean distances,
naively implementing equation~(\ref{eq:vacu})
results in a poor signal-to-noise ratio in this region.
This becomes even worse for calculations
of the quark-line disconnected contributions,
which therefore have been neglected in almost all previous lattice studies.
Where these were taken into
account~\cite{Feng:2011zk,Francis:2014hoa,Burger:2015oya},
they dominated the statistical error.
Another problem of many past lattice attempts is a conceptual one:
$\Pi(0)$ is often extrapolated from $\Pi(p^2)$ at $p^2>0$, and
the parametrization used constitutes a source of
systematic uncertainty that
is difficult to estimate.
Here we propose methods that address both of the above issues.
The vacuum polarization at $p^2=0$ is shown to be equal to the
bare magnetic susceptibility of the system, which can
be determined independently on the lattice. We investigate
different methods to achieve this, giving consistent results.
We also discuss how this quantity diverges as a function of
the lattice spacing towards the continuum limit.
Most importantly, we introduce a new method for computing
both the connected and the disconnected contributions
to the hadronic vacuum polarization function with unprecedented precision,
in particular at small momenta.
This consists of calculating $\Pi(p^2)$ at $p^2>0$
through the response of the system to oscillatory
background electromagnetic fields. The new method is similar in spirit
to employing momentum sources~\cite{Martinelli:1994ty,Gockeler:1998ye},
allowing us to spend more effort on the low-$p^2$ points,
thereby increasing their precision, without wasting
resources on large momenta where $\Pi_{\mathrm{R}}(p^2)$
can easily be obtained within small relative errors, with a much smaller
impact on the predicted value of $a_{\mu}^{\mathrm{had,LO}}$.
The methods are tested on $N_f=2+1$ staggered ensembles at the physical point,
neglecting QED effects on the quark propagation which are
of a higher order in the fine-structure constant $\alpha$. In this situation,
due to $\sum_{f\in\{u,d,s\}}q_f=0$, disconnected
contributions vanish for $m_s=m_{ud}$ but need to be
taken into account for $m_s>m_{ud}$, which we do.
Since we neglect charm quark effects,
we have to restrict ourselves to $p^2< m_c^2$.
At high momenta our results can, however, be combined with measurements
of the $R$-ratio as well as with perturbation theory:
the non-singlet and singlet QCD contributions to the Adler function
have been calculated in massless QCD
to $\mathcal{O}(\alpha_s^4)$
in the strong coupling constant in refs.~\cite{Baikov:2010je} and
\cite{Baikov:2012zn}, respectively.
This article is organized as follows. In section~\ref{sec:review}
we review previous calculational strategies, followed by
section~\ref{sec:new}, where
we introduce our background field method and link this to
magnetic susceptibilities. We also discuss renormalization issues
and comment on relations between the Adler function
and the entropy density at high temperatures.
Finally, in section~\ref{sec:results}
we present the simulation setup
and first results, before we conclude.
The equivalence between magnetic susceptibilities and the vacuum
polarization is demonstrated in appendix~\ref{sec:appA}, and the
details of our numerical implementation are discussed in appendices~\ref{appB}
and~\ref{sec:appC}.
\section{Summary of previously employed methods}
\label{sec:review}
The leading hadronic contribution to the anomalous magnetic moment
is given as~\cite{Blum:2002ii,Lautrup:1971jf}
\begin{equation}
a_{\ell}^{\mathrm{had,LO}} = 4\alpha^2\!\int_0^\infty \!\!\textmd{d} p^2 K_{\ell}(p^2)\Pi_{\mathrm{R}}(p^2)\,,
\label{eq:amu}
\end{equation}
where the perturbative kernel function is defined as
\begin{equation}
K_{\ell}(p^2) = \frac{m_{\ell}^2 \,p^2 Z_{\ell}(p^2)^3 \left[ 1-p^2Z_{\ell}(p^2)\right]}{1+m_{\ell}^2\,p^2Z_{\ell}(p^2)^2}
\end{equation}
with
\begin{equation}
Z_{\ell}(p^2) = \frac{\sqrt{1+4m_{\ell}^2/p^2}-1}{2m_{\ell}^2}\,.
\label{eq:amu2}
\end{equation}
The renormalized hadronic vacuum polarization function is defined
in equations~(\ref{eq:vacu}) and (\ref{eq:pren}) above.
Note that the above expressions are valid to
leading-order in terms of the
QED fine-structure constant $\alpha=e^2/(4\pi)\approx 1/137$,
i.e.\ to $\mathcal{O}(\alpha^2)$,
which, at this order, can be pulled out of the integral.
In the limit of small momenta, where $\Pi_{\mathrm{R}}(p^2)\propto p^2$,
the argument of the integral has its maximum at
$p_0^2\approx (\sqrt{5}-2)m_{\ell}^2$. For the muon with
$m_{\mu}\approx 0.105\,$GeV
this implies $p_0^2\approx 0.0026\,\textmd{GeV}^2$:
an enormous volume would be necessary to resolve
this momentum region, at least without the use of twisted boundary
conditions~\cite{Sachrajda:2004mi,DellaMorte:2011aa},
since $\pi/p_0\approx 2\pi/m_{\mu}\approx 12\,\textmd{fm}$.
Fortunately,
the integral as a whole turns out to be
dominated by somewhat higher momenta:
it still picks up about $50\%$ of its value from momenta
larger than $10\,p_0^2$.
The predicted value of $a_{\mu}^{\mathrm{had,LO}}$ strongly
depends on $\Pi_{\mathrm{R}}(p^2)$ at these
still relatively small momenta $p^2\sim 0.03\,\textmd{GeV}^2$.
This is nicely illustrated, e.g., in ref.~\cite{Golterman:2014ksa},
in figure 3 of
ref.~\cite{DellaMorte:2011aa} and in figure 1 of ref.~\cite{Burger:2015oya}.
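As a numerical sketch of the statements above (the grid range and step size are arbitrary choices made here), one can locate the maximum of the small-momentum integrand $K_\ell(p^2)\,p^2$ and translate $p_0$ into the quoted box size:

```python
from math import sqrt, pi

m_mu = 0.105658  # GeV

def Z(p2, m=m_mu):
    return (sqrt(1.0 + 4.0 * m**2 / p2) - 1.0) / (2.0 * m**2)

def K(p2, m=m_mu):
    z = Z(p2, m)
    return m**2 * p2 * z**3 * (1.0 - p2 * z) / (1.0 + m**2 * p2 * z**2)

# with Pi_R(p^2) ~ p^2 at small momenta, the integrand behaves as K(p^2)*p^2;
# scan for its maximum and compare with p0^2 = (sqrt(5)-2) m_mu^2
p2_grid = [1e-5 + i * 1e-5 for i in range(2000)]
p2_max = max(p2_grid, key=lambda p2: K(p2) * p2)
print(p2_max, (sqrt(5.0) - 2.0) * m_mu**2)   # both ~0.0026 GeV^2

# corresponding box size pi/p0 ~ 2*pi/m_mu in fm (hbar*c = 0.19733 GeV fm)
print(2.0 * pi / m_mu * 0.19733)             # ~12 fm
```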
\subsection{Information from experiment}
The hadronic polarization tensor (and also the leading hadronic
contribution to the lepton anomalous magnetic moments~\cite{Lautrup:1971jf})
can be obtained by analytic continuation of the $R$-ratio
of the total cross section $\sigma(e^+e^-\rightarrow\mathrm{hadrons})$
over the tree-level
$e^+e^-\rightarrow\mu^+\mu^-$ expectation
(see, e.g., refs.~\cite{Adler:1974gd,Bernecker:2011gh}):
\begin{equation}
\label{eq:rrat}
\Pi_{\mathrm{R}}(p^2) = \frac{p^2}{12\pi^2} \int_{4m_{\pi}^2}^\infty\!\! \textmd{d} s\,
\frac{R(s)}{s(s+p^2)}\,.
\end{equation}
$R$-ratio measurements~\cite{Davier:2010nc,Hagiwara:2011af} can in principle
be augmented by other experimental data,
including $\tau$-decays into
final states containing $\pi^+\pi^-$, see, e.g., ref.~\cite{Benayoun:2012wc}.
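The structure of the subtracted dispersion integral can be illustrated with a toy $R$-ratio: a constant $R(s)=R_0$ above threshold (a crude assumption, chosen only because the integral is then analytic), giving $\Pi_{\mathrm{R}}(p^2)=R_0/(12\pi^2)\ln(1+p^2/s_0)$:

```python
from math import log, pi

# Toy model of the dispersion integral above: constant R(s) = R0 for s > s0.
# This is a sketch, not the physical R-ratio.
m_pi, R0 = 0.13957, 2.1        # GeV, and an arbitrary plateau value
s0 = 4.0 * m_pi**2             # two-pion threshold

def pi_R_numeric(p2, n=200000):
    # substitute s = s0/x: ds/(s(s+p^2)) = dx/(s0 + x p^2), x in (0,1]
    h = 1.0 / n
    acc = 0.0
    for i in range(n):
        x = (i + 0.5) * h      # midpoint rule
        acc += R0 * h / (s0 + x * p2)
    return p2 / (12.0 * pi**2) * acc

for p2 in (0.025, 0.1, 0.6):
    exact = R0 / (12.0 * pi**2) * log(1.0 + p2 / s0)
    print(p2, pi_R_numeric(p2), exact)   # numeric and analytic agree
```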
\begin{figure}[ht!]
\centering
\includegraphics[width=8cm]{experiment.pdf}
\caption{\label{fig:exp}
The renormalized vacuum polarization determined from the experimental
$R$-ratio~\protect\cite{Bogdan}, performing the integral~(\protect\ref{eq:rrat}) up
to $s=s_{\max}=(2\,\textmd{GeV})^2$, where three quark flavors are
active. Also indicated is the result of the integral
supplemented by three-flavor perturbation theory for $s>(2\,\textmd{GeV})^2$.
}
\end{figure}
In figure~\ref{fig:exp} we show the so-determined renormalized vacuum polarization
as a function of $p^2$~\cite{Bogdan}.
The present relative precision of $\Pi_{\mathrm{R}}$
is 0.64\% at $p^2=0.025\,\textmd{GeV}^2$,
increasing to 0.74\% at $p^2=0.6\,\textmd{GeV}^2$~\cite{Davier:2010nc,Bogdan}.
Achieving a statistical error below 1\% around $p^2=0.03\,\textmd{GeV}^2$
already constitutes an enormous challenge for present-day lattice determinations,
and such results still need to be extrapolated to the infinite volume
and continuum limits and, often, to physical quark masses.
In principle, lattice data at large $p^2$ values -- where discretization errors are
enhanced -- can be substituted by results from the $R$-ratio.
Such a combined strategy may prove optimal for an accurate determination of
$a_\mu^{\mathrm{had,LO}}$, once sufficiently precise lattice results
become available.
\subsection{Lattice determinations of \boldmath{$\Pi(0)$}}
In the past, two strategies have been used to obtain the zero-momentum subtraction
$\Pi(0)$. One possibility is to fit the $\Pi(p^2)$ data, e.g., to pole
parametrizations, assuming
vector dominance~\cite{Gockeler:2003cw,Aubin:2006xv,Boyle:2011hu,DellaMorte:2011aa,Gregory:2013taa}, which
is also suggested to be the dominant contribution by
chiral perturbation theory~\cite{Aubin:2006xv}. Extending the
fit region towards large momenta,
such pole ans\"atze have
also been combined with
polynomial parametrizations~\cite{Aubin:2006xv,Feng:2011zk,Burger:2013jya,Burger:2015oya}, motivated by perturbation theory.
Another popular and less model-dependent way to obtain
the normalization is through
Pad\'e approximants~\cite{DellaMorte:2011aa,Aubin:2012me,Golterman:2014ksa,Marinkovic:2015zaa}.
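As a sketch of such a normalization fit (with illustrative parameter values, not data from any of the cited works): a one-pole ansatz $\Pi(p^2)=A+Bp^2/(1+Cp^2)$ is equivalent to the relation $\Pi=A+Dp^2-Cp^2\,\Pi$ with $D=AC+B$, which is linear in $(A,D,C)$, so $\Pi(0)=A$ can be extracted by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "lattice data" from a one-pole (vector-dominance-like) model;
# the parameter values are purely illustrative:
A_true, B_true, C_true = -0.07, 0.04, 1.5
p2 = np.linspace(0.01, 1.0, 40)
Pi = A_true + B_true * p2 / (1 + C_true * p2)
Pi = Pi + rng.normal(0.0, 1e-5, size=p2.size)   # small statistical noise

# Pi = A + B p^2/(1+C p^2)  <=>  Pi = A + D p^2 - C p^2 Pi,  D = A*C + B,
# which is linear in the parameters (A, D, C):
X = np.column_stack([np.ones_like(p2), p2, -p2 * Pi])
A_fit, D_fit, C_fit = np.linalg.lstsq(X, Pi, rcond=None)[0]

print(A_fit)   # estimate of Pi(0)
```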
As an alternative, one can compute derivatives of
$\Pi_{\mu\nu}(p)$ from its definition in terms of
the continuum Fourier transformation~(\ref{eq:vacu}).
Then the divergent contribution that
needs to be subtracted from $\Pi(p^2)$ can, e.g., be obtained
via
\begin{align}
\Pi(0)&=\left.-\frac{1}{2}\frac{\partial^2}{\partial p_{\mu}^2}\Pi_{\nu\nu}(p)\right|_{p=0} \qquad\quad(\mu\neq\nu)\nonumber\\\label{eq:tmoment}
&=\frac{1}{2}\int\!\textmd{d}^4\!x\,x_{\mu}^2\,\langle j_{\nu}(x)j_{\nu}(0)\rangle=\frac{1}{2}\int\!\textmd{d} t\,t^2\,G(t)\,,
\end{align}
where no summation over $\nu$ is implied and
in the last step we identified $\mu$ with the time-direction,
to emphasize the correspondence to the second $t$-moment of
a zero-momentum projected two-point function
\begin{equation}
\label{eq:get}
G(t)=\int\!\textmd{d}^3\!r\,
\langle j_{i}(\mathbf{r},t)j_{i}(0)\rangle\,.
\end{equation}
This method was used, e.g., in
refs.~\cite{Francis:2013fzp,Malak:2015sla}, to
obtain this subtraction.
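The moment relation~(\ref{eq:tmoment}) is easily verified on a toy correlator: for $G(t)=A\,e^{-m|t|}$ one finds $\frac{1}{2}\int\textmd{d} t\,t^2\,G(t)=2A/m^3$ (the values of $A$ and $m$ below are illustrative):

```python
import numpy as np

A, m = 0.5, 0.8          # toy amplitude and mass (illustrative)
dt = 1e-3
t = np.arange(-60.0, 60.0 + dt, dt)   # e^{-m*60} is negligible

G = A * np.exp(-m * np.abs(t))        # toy correlator G(t)
vals = t**2 * G
integral = (vals.sum() - 0.5 * (vals[0] + vals[-1])) * dt  # trapezoid rule

Pi0 = 0.5 * integral                  # (1/2) int dt t^2 G(t)
Pi0_exact = 2 * A / m**3
print(Pi0, Pi0_exact)
```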
Finally, in ref.~\cite{deDivitiis:2012vs} the expansion
of the two-point current-current correlation function in
powers of $p_{\mu}$ is carried out already on the level of
quark propagators.
This enables the direct computation of
$\Pi(0)=\partial^2\Pi_{12}/(\partial p_1\partial p_2)|_{p=0}$,
without relying on a continuum formula. However, this comes
at the price of computing the expectation value of an operator
involving up to four fermion matrix inversions, without even considering
disconnected contributions.
\subsection{Lattice determinations of \boldmath{$\Pi(p^2)$}, \boldmath{$\Pi_{\mathrm{R}}(p^2)$}
or moments thereof}
The lattice vector Ward-Takahashi identity
reads $\hat{p}_{\mu}\Pi_{\mu\nu}=0$ and
therefore~\cite{Gockeler:2000kj,Blum:2002ii,Gockeler:2003cw}
\begin{equation}
\Pi_{\mu\nu}(p^2)=\left(\hat{p}_{\mu}\hat{p}_{\nu}-\delta_{\mu\nu}\hat{p}^2\right)\Pi(p^2)\,,
\label{eq:latticeWI}
\end{equation}
where $\hat{p}_{\mu}=(2/a)\sin(ap_{\mu}/2)$. This modified momentum
definition, which affects $\Pi(p^2)$ at high momenta,
has been implemented in almost all
lattice studies, as has
a modified phase $e^{ipx}\mapsto
e^{ip(x+a\hat{\mu}/2-a\hat{\nu}/2)}$ within the
Fourier sum for $\mu\neq\nu$.
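The size of this tree-level modification is easy to quantify (the lattice parameters below are illustrative, matching the coarsest ensemble discussed later in the text):

```python
import math

a = 0.29 / 0.19733   # lattice spacing in GeV^{-1} (a = 0.29 fm)
N = 24               # number of sites in the chosen direction

ratios = []
for n in (1, 2, 6, 12):               # Fourier modes up to p = pi/a
    p = 2 * math.pi * n / (N * a)     # naive lattice momentum
    p_hat = (2 / a) * math.sin(a * p / 2)
    ratios.append(p_hat / p)

print(ratios)   # drops from ~1 at small p to 2/pi at the zone edge
```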
Most lattice evaluations use what we will refer to below as
the conventional method.
This amounts to directly computing the lattice version of equation~(\ref{eq:vacu}),
see, e.g., refs.~\cite{Blum:2002ii,Gockeler:2003cw,Aubin:2006xv,Feng:2011zk,Boyle:2011hu,Burger:2013jya,Gregory:2013taa,Malak:2015sla}.
In some cases, lower momenta
were made accessible by the use of
twisted boundary conditions~\cite{DellaMorte:2011aa,Aubin:2013daa,DellaMorte:2014rta}.
Very recently, another interesting method, stochastically
averaging over different twists, has been suggested~\cite{Lehner:2015bga}
that reduces finite volume effects and makes it possible to realize
very small momenta. The main problem of modifying the fermionic
boundary conditions is that this cannot
easily be extended to incorporate quark-line disconnected contributions.
Obviously, equation~(\ref{eq:amu}) can be Taylor expanded in powers
of $p^2$ and the coefficients can be related to those
of the corresponding
expansion of $\Pi_{\mathrm{R}}(p^2)$. Generalizing
equation~(\ref{eq:tmoment}) above, the first and higher-order derivatives
of $\Pi_{\mu\mu}$ with respect to $p^2$ can be obtained by computing
$t^2$-moments of two-point zero-momentum (spatial) projected current-current
correlators. This was explored within ref.~\cite{Feng:2013xsa}
and carried out for the first few moments of the
connected strange and charm quark
contributions to $a_{\mu}^{\mathrm{had,LO}}$ in ref.~\cite{Chakraborty:2014mwa}.
In ref.~\cite{Bernecker:2011gh}
the anomalous magnetic moment was directly related to the
zero momentum projected current-current two-point function.
This approach was then employed, e.g.,
in refs.~\cite{Francis:2013fzp,DellaMorte:2014rta,Francis:2014hoa}.
So far, disconnected contributions have been included in very few lattice
studies~\cite{Feng:2011zk,Francis:2014hoa,Burger:2015oya}.
While their effect seems to be small, the associated
statistical error exceeds that of the connected terms.
Here we will find that this need not be the case.
There exist theoretical expectations
regarding the size of flavor singlet contributions:
exploiting the fact that
$m_{\omega}, m_{\phi}>m_{\rho}$, it was
demonstrated~\cite{Francis:2014hoa} that
the ratio of the disconnected contribution over the total
momentum-projected current-current two-point function
$G(t)$, defined in equation (\ref{eq:get}), approaches
the value $-1/9$, in the limit of large Euclidean times for
$N_f=2+1$ quark flavors. This ratio will, however, not automatically
propagate into $\Pi_{\mathrm{R}}(p^2)$, which depends on $G(t)$ at
all times $t$.
Next-to-leading order chiral perturbation
theory arguments show the disconnected contribution to
also account for $-1/9$ of the total
$\Pi_{\mathrm{R}}(p^2)$~\cite{DellaMorte:2010aq}.
However, this observation builds on the fact that the
correlator of the iso-singlet vector current
$\bar u \gamma_\mu u + \bar d \gamma_\mu d$
is momentum-independent to this order of chiral perturbation theory ---
which we found is not at all satisfied by the lattice data.
Thus, direct computation of the disconnected terms cannot be avoided
in a systematic study.
Our numerical results will shed light onto the size
of the disconnected contribution at low $p^2$.
\section{Vacuum polarization from susceptibilities}
\label{sec:new}
\subsection{The method}
The photon vacuum polarization tensor~(\ref{eq:vacu})
can also be interpreted as a momentum space current-current correlation function
\begin{equation}
\Pi_{\mu\nu}(p)
=\frac{1}{V_4} \expv{\widetilde{j_\mu}(p) \widetilde{j_\nu}(-p)},
\label{eq:Piq}
\end{equation}
where $V_4$ denotes the four-dimensional volume of the system
and $\widetilde{j_{\mu}}$ is the
Fourier transform of the electromagnetic current defined
in equation~(\ref{eq:current}):
\begin{equation}
\widetilde{j_\mu}(p) = \int \textmd{d}^4\! x\, e^{ipx} j_{\mu}(x)\,.
\end{equation}
Depending on the lattice definition of $j_{\mu}$,
the polarization tensor~(\ref{eq:Piq})
may or may not renormalize multiplicatively with $Z_V^2$.
Here, we work with a conserved current, i.e.\ $Z_V=1$.
In the following we will relate the vacuum polarization
to the leading response
of the free energy density $f$ of the system to background
electromagnetic fields.
To illustrate the relation between the two objects on a qualitative level,
it is instructive
to represent the vacuum polarization tensor by the diagram
\begin{figure}[h!]
\centering
\includegraphics[width=3cm]{diagram-crop.pdf}
\end{figure}
\noindent
where a momentum $p$ flows in and out of the photon legs. Here, the gray blob
indicates all possible closed loops formed by quark and gluon propagators ---
i.e.\ the perturbative expression for the free energy density $f$.
The legs may
be thought of as photons corresponding to a background electromagnetic field
$A_\mu$
with momentum $p$.
Pulling out these legs is achieved by taking appropriate derivatives
of $f$ with respect to the background field.
While background electric fields turn the Euclidean QCD
action complex and are thus
problematic in lattice simulations, background magnetic fields
can be realized without complications. Employing the latter gives
access to the spatial components
$\Pi_{ij}$ and hence to all components $\Pi_{\mu\nu}$ since in Euclidean
spacetime at zero temperature the indices can be relabelled at will.
To find the background field corresponding to $\Pi_{\mu\nu}(p)$, we define
the magnetic fields
\begin{equation}
\mathbf{B}^{p}(x) = B\sin(px)\,\mathbf{e}_3\,, \qquad
\mathbf{B}^{0} = B\,\mathbf{e}_3\,,
\label{eq:Bfields}
\end{equation}
pointing in the third spatial direction.
While $\mathbf{B}^p$ is an oscillatory magnetic field with oscillation frequency $p$,
$\mathbf{B}^0$ is a homogeneous background.
The corresponding susceptibilities are obtained as the second
derivatives of the free energy density with respect to the amplitude of
the magnetic field:
\begin{equation}
\chi_p = -\left.\frac{\partial^2 f[\mathbf{B}^{p}]}{\partial (eB)^2}\right|_{B=0}\,.
\label{eq:chidef1}
\end{equation}
These susceptibilities are
normalized by the square of the elementary charge $e>0$ to ensure that
only the renormalization group-invariant combination $eB$ appears in the definitions.
Note that $\chi_p$ can be evaluated on gauge ensembles generated at $B=0$.
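Qualitatively, the susceptibility~(\ref{eq:chidef1}) is nothing but the curvature of $f$ at vanishing field; the sketch below (with a purely illustrative toy free energy, not the lattice observable itself) extracts it via a symmetric finite difference:

```python
chi = 0.012         # toy susceptibility to be recovered (illustrative)

def f(eB):
    """Toy free-energy density: f = -chi/2*(eB)^2 + quartic correction."""
    return -0.5 * chi * eB**2 + 0.3 * eB**4

# chi_p = - d^2 f / d(eB)^2 at B = 0, via a symmetric finite difference:
h = 1e-3
chi_fd = -(f(h) - 2 * f(0.0) + f(-h)) / h**2
print(chi_fd)
```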
The explicit calculation in appendix~\ref{sec:appA} shows that
\begin{equation}
2\chi_p = \Pi(p^2)\,, \qquad\chi_0 = \Pi(0)\,.
\label{eq:mainresult}
\end{equation}
These relations form a new representation of the vacuum polarization
function in terms of susceptibilities with respect to the magnetic fields
defined in equation~(\ref{eq:Bfields})
and are the main result of this article.
Unlike the conventional method,
where the polarization function is
extracted from the same set of position space current-current
correlators for all momenta, equation~(\ref{eq:mainresult}) gives access
to $\Pi(p^2)$ at one single lattice momentum $p$. While this
certainly increases the costs of calculating $\Pi$ over a large
range of momenta, it also allows for a better signal-to-noise ratio
within momentum regions of particular interest.
As argued above, for the determination of the hadronic
contribution to the muon anomalous
magnetic moment $a_\mu^{\mathrm{had,LO}}$,
low momenta $p^2\sim 0.03\,\textmd{GeV}^2$
are much more important than the high-$p$ region.
While $\expv{j_\mu(x)j_\nu(0)}$ mixes information about
all allowed values of $p$, here such a mixing is avoided.
Just as the vacuum polarization tensor, $\chi_p$ and $\chi_0$
can also be separated into
connected and disconnected contributions. We will demonstrate in
section~\ref{sec:results} below that,
using this new approach, an unprecedented accuracy can be achieved
for both the connected and the disconnected contributions to the
vacuum polarization function, already at moderate computational costs.
An additional advantage of the method is that it gives direct
access to $\Pi(0)$.
To summarize, to arrive at a prediction for $a_\mu^{\mathrm{had,LO}}$ it
is desirable to
improve the accuracy in the low-$p$ region and to calculate $\Pi(0)$
independently. The method we propose
accomplishes both of these requirements.
\subsection{Renormalization}
Before presenting the details of the implementation and our numerical results,
it is instructive to discuss the renormalization properties
of $\chi_0$ in more detail.
Equation~(\ref{eq:mainresult}) reveals that
the homogeneous susceptibility is additively divergent, just as $\Pi(0)$.
To see where this divergence comes from, let us consider the multiplicative renormalization of the
background magnetic field (and the corresponding renormalization
of the electric charge),
\begin{equation}
e^2 = Z_e^{-1} e_r^2\,, \qquad B^2 = Z_e B_r^2\,, \qquad eB=e_rB_r\,,
\end{equation}
with the renormalization factor
\begin{equation}
Z_e = 1 + 2b_1 e_r^2 \log(\mu a)\,,
\end{equation}
where $a$ is the lattice spacing (inverse of the regulator) and
$\mu$ the renormalization scale.
Notice that since the magnetic field is external and has no dynamics,
only the lowest-order
QED $\beta$-function coefficient --- denoted as $b_1$ --- appears
in $Z_e$~\cite{Schwinger:1951nm,Dunne:2004nc,Bali:2014kia}.
The total free energy density $f^{\mathrm{tot}}$ of the system is the sum of
$f$ and the energy $B^2/2$ of the magnetic field.
Since varying the background field
should not change the ultraviolet
properties of the system, $f^{\mathrm{tot}}$ must be free of $B$-dependent
divergences.
This implies that the divergence of the pure magnetic energy
\begin{equation}
\frac{B^2}{2} = \frac{B_r^2}{2} + b_1 (eB)^2 \log(\mu a)
\end{equation}
is exactly cancelled by an analogous divergence of $f$.
Plugging this divergence into the definition~(\ref{eq:chidef1}),
we obtain
\begin{equation}
\chi_0 = 2 b_1(a)\, \log(\mu a)\,.
\label{eq:chiren}
\end{equation}
The renormalization scale $\mu$ is fixed by the requirement that there
should be no
finite quadratic terms in $f^{\mathrm{tot}}$ other than $B_r^2/2$~\cite{Schwinger:1951nm}.
Let us emphasize that $b_1$ is the lowest-order coefficient of the QED $\beta$-function, however,
with all QCD corrections taken into account.
To highlight this, we explicitly indicate the dependence of
$b_1$ on the lattice spacing.
Perturbatively, this reads~\cite{Baikov:2012zm}
\begin{equation}
b_1(a) = \sum_{f=u,d,s} (q_f/e)^2 \frac{1}{4\pi^2} \left[ 1 + \frac{g^2(a)}{4\pi^2} + \ldots \right]\,,
\label{eq:b1p}
\end{equation}
where $g^2(a)$ is the QCD coupling.
Equation~(\ref{eq:chiren}) allows one to connect lattice results for $\chi_0$
to perturbation theory, once the lattice spacing is small enough,
cf.\ ref.~\cite{Bali:2014kia}.
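At the lowest order of equation~(\ref{eq:b1p}), $b_1=\sum_f(q_f/e)^2/(4\pi^2)=(2/3)/(4\pi^2)$ for the three light flavors. The sketch below (neglecting the $\mathcal{O}(g^2)$ QCD correction and inserting the scale $\mu$ quoted in section~\ref{sec:results}) traces the expected logarithmic trend of $\chi_0$ with the lattice spacing:

```python
import math

# Leading-order QED beta-function coefficient for N_f = 3 with
# fractional charges (2/3, -1/3, -1/3); QCD corrections neglected here.
b1 = sum(q**2 for q in (2/3, -1/3, -1/3)) / (4 * math.pi**2)

mu = 0.123          # GeV, renormalization scale quoted in the text
hbarc = 0.19733     # GeV*fm

def chi0(a_fm):
    """chi_0 = 2 b1 log(mu a): more negative on finer lattices."""
    return 2 * b1 * math.log(mu * a_fm / hbarc)

spacings = [0.290, 0.216, 0.153, 0.125, 0.099]   # fm, cf. the ensemble table
values = [chi0(a) for a in spacings]
print(values)
```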
\subsection{Implication for hot or dense QCD}
As a side-remark, we mention that the
correspondence~(\ref{eq:mainresult}) can be generalized
to high temperatures.
In this case
it results in a relation between the entropy density and the perturbative
Adler function~\cite{Bali:2014kia}. The latter is defined as
the logarithmic derivative of the polarization function
with respect to the squared momentum~\cite{Adler:1974gd}:
\begin{equation}
D(p^2) = 12\pi^2 \frac{\partial \Pi(p^2)}{\partial \log(p^2)}\,.
\label{eq:Adler}
\end{equation}
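As a quick consistency check of the definition~(\ref{eq:Adler}) (toy model only, with illustrative parameters): for $\Pi(p^2)=c\log(1+p^2/s_0)$ one has $D(p^2)=12\pi^2\,c\,p^2/(s_0+p^2)$, which a numerical logarithmic derivative reproduces:

```python
import math

c, s0 = 0.017, 0.6      # toy parameters (illustrative)

def Pi(p2):
    return c * math.log(1 + p2 / s0)

def D_numeric(p2, eps=1e-6):
    # d Pi / d log(p^2) via a symmetric difference in log(p^2)
    lp = math.log(p2)
    return 12 * math.pi**2 * (
        Pi(math.exp(lp + eps)) - Pi(math.exp(lp - eps))) / (2 * eps)

p2 = 2.0
D_exact = 12 * math.pi**2 * c * p2 / (s0 + p2)
print(D_numeric(p2), D_exact)
```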
Let us consider QCD at a high temperature $T$, which exceeds
all other dimensionful scales in the system.
In this limit, the argument of $\Pi$ is set by a thermal scale
$\mu_{\mathrm{th}}=2\pi T$,
leading to the correspondence
$\Pi(\mu_{\mathrm{th}}^2)\leftrightarrow \chi_0(T^2)$.
(The susceptibility at
high temperatures indeed only depends on $T^2$~\cite{Bali:2014kia}.)
For the Adler function, this implies the relation
\begin{equation}
D(\mu_{\mathrm{th}}^2) \longleftrightarrow 12\pi^2 \frac{\partial \,\chi_0}{\partial \log T^2} =
6 \pi^2 T \left.\frac{\partial^2 s}{\partial (eB)^2}\right|_{B=0}\,,
\label{eq:drel}
\end{equation}
where in the second step we used the definition of the entropy density
$s\equiv -\partial f/\partial T$.
Equation~(\ref{eq:drel}) reveals that the leading dependence of
the entropy density
on the magnetic field at high temperatures is fixed by the Adler function,
i.e.\ by perturbative QED physics.
Repeating the above argument with $T$ replaced by a chemical potential $\mu$
(or by an isospin chemical potential $\mu_{\mathrm{I}}$) gives an analogous relation
for the quark number density $n=-\partial f/\partial \mu$
at high $\mu$ (or for the isospin density $n_{\mathrm{I}} = -\partial f/\partial \mu_{\mathrm{I}}$ at high $\mu_{\mathrm{I}}$).
We believe these are highly non-trivial findings.
\section{Simulation details and numerical results}
\label{sec:results}
We employ the $N_f=2+1$ staggered lattice ensembles~\cite{Bali:2011qj,Bali:2012zg}
generated at physical pion and kaon masses.
Each ensemble --- summarized in table~\ref{tab:1} --- consists of a
hundred to a few hundred effectively statistically decorrelated
configurations.
Details of the simulation algorithm and of the lattice setup can be
found in refs.~\cite{Aoki:2005vt,Borsanyi:2010cj,Bali:2011qj}.
\begin{table}[ht!]
\caption{\label{tab:1}Lattice ensembles investigated; the largest lattice spacing reads $a_0=0.29\,\textmd{fm}$.}
\begin{ruledtabular}
\begin{tabular}{ccccc}
$N_s$ & $N_t$ & $\beta$ & $a\,[\textmd{fm}]$ & $\log(a/a_0)$ \\ \hline
24 & 32 & 3.45 & 0.290 & 0 \\
24 & 32 & 3.55 & 0.216 & $-0.295$ \\
32 & 48 & 3.67 & 0.153 & $-0.636$ \\
40 & 48 & 3.75 & 0.125 & $-0.843$ \\
40 & 48 & 3.85 & 0.099 & $-1.078$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{figure}[ht!]
\centering
\includegraphics[width=8cm]{lowp_1a_cmp.pdf}
\caption{\label{fig:lowp}
The low-momentum region of the oscillatory susceptibilities as measured on the
$24^3\times 32$ configurations at $\beta=3.45$. The curves
correspond to polynomial- and Pad\'e-type
extrapolations of $2\chi_p$ to $p=0$.
The direct determination $\chi_0$ is shifted horizontally to
the left for better visibility.
Also included are results obtained using random wall sources,
displaced horizontally to the right.
}
\end{figure}
\subsection{Oscillatory susceptibilities}
First we discuss results on the susceptibilities $\chi_p=\Pi(p^2)/2$ with respect to
the oscillatory backgrounds. These are determined via the noisy estimator
technique described in appendix~\ref{appB}.
A typical set of low-momentum results is shown in figure~\ref{fig:lowp}.
The data include both the connected and the disconnected contributions
to $\Pi(p^2)$.
The figure also includes results obtained via the conventional method, however,
employing stochastic wall sources (for our
numerical implementation, see appendix~\ref{sec:appC}).
The comparison reveals full agreement between the
two approaches. The statistical error of the random wall data
increases towards small momenta, whereas
it remains tiny even for the lowest
non-vanishing $p^2$-value shown for the oscillatory susceptibilities.
Note that the number of inversions employed to obtain the data point at the lowest momentum
was the same, $N_{\mathrm{inv}}=3000$, for both approaches.
In most previous lattice studies, $\Pi(0)$
was obtained by extrapolating $\Pi(p^2)$ to zero.
Some possible extrapolations, employing polynomials or Pad\'e approximants,
fitted over various ranges in $p^2$, are included in the figure. These fits
are also compared to the direct determinations via the homogeneous
susceptibility $\chi_0$ (see section~\ref{sec:homchi} below) and via
the zero-momentum projected current-current correlation function $G(t)$
according to equation~(\ref{eq:tmoment}), again obtained using random wall sources.
Within
their scatter, at $p^2=0$ the extrapolations agree with the direct
determinations.
We remark that increasing the precision for the lowest few momenta
stabilizes such extrapolations tremendously.
\subsection{Homogeneous susceptibility and renormalized vacuum polarization}
\label{sec:homchi}
The susceptibility $\chi_0$ with respect to a homogeneous background is of interest
for QCD thermodynamics in magnetic fields and has been the subject of detailed
studies in the past few years.
The determination of $\chi_0$ is considerably more complicated than that of $\chi_p$
due to the quantization of the magnetic flux $\Phi$.
On the one hand, oscillatory magnetic fields have zero flux
and can be varied continuously, allowing
for a direct differentiation with respect to $B$. On the other hand,
homogeneous fields have nonzero flux. Therefore, such a differentiation
cannot be carried out to determine $\chi_0$,
see appendix~\ref{appB}.
Several approaches, summarized in refs.~\cite{Bali:2014kia,D'Elia:2015rwa},
have been developed recently to overcome this problem.
Here we compare results obtained using the finite difference
method~\cite{Bonati:2013lca},
the generalized integral method~\cite{Bali:2014kia} and
the half-half method~\cite{Levkova:2013qda}.
The former two approaches are based on simulations at non-zero magnetic
flux values, numerically differentiating the results with respect to $\Phi$.
The half-half method involves calculating expectation values directly at $B=0$,
employing a setup where the magnetic field is positive in one half
and negative in the other half of the lattice.
In this case, since the total flux is zero, a direct differentiation
with respect to the amplitude is possible.
However, the discontinuity of the magnetic field turns out to
dramatically enhance
finite volume effects in $\chi_0$, see
below.\footnote{These finite volume effects cancel
to a large extent in the difference
$\chi_0(T)-\chi_0(T=0)$~\cite{Ludmilla}, which is relevant for
QCD thermodynamics in background magnetic fields.}
\begin{figure}[b]
\centering
\includegraphics[width=8cm]{chihom.pdf}
\caption{\label{fig:chihom}
Magnetic susceptibility with respect to a homogeneous background as
a function of the logarithm of the lattice spacing
($a_0=0.29\,\textmd{fm}$), using three different approaches
(the generalized integral method~\cite{Bali:2014kia},
the finite difference method~\cite{Bonati:2013vba,Claudio}
and data generated in this study using the half-half method~\cite{Levkova:2013qda}).
Also included are a comparison to $\mathcal{O}(g^2)$ perturbation theory
and a parametrization via a rational ansatz.
}
\end{figure}
In figure~\ref{fig:chihom}, we compare all three approaches.
The results from the
generalized integral method and from the finite difference approach
are taken from refs.~\cite{Bonati:2013vba,Claudio} while
the half-half results are new. Not all lattice spacings
are covered by all the methods.
While the results of the generalized integral method\footnote{Here
we compare data obtained on $N_t>N_s$ zero-temperature lattices.
On the configurations of ref.~\cite{Bali:2014kia} at finite (but low)
temperatures, $\chi_0$ was found to have slightly
smaller absolute values for fine lattices of table~\ref{tab:1} ($\beta\ge 3.67)$.}
and of the finite
difference approach are consistent with each other,
the half-half approach consistently underestimates the magnitude of the
susceptibility.
The difference between that approach on the one hand and the other two methods
on the other hand is found to be as large as $10\%$
and reduces only very slowly with increasing lattice volumes.\footnote{The
comparison between the half-half method and the generalized
integral method on our coarsest lattice, already
presented in ref.~\cite{Bali:2014kia},
has been updated by increasing the statistics and the number
of noisy estimators
to reveal the significant difference visible in figure~\ref{fig:chihom}.}
Altogether, we conclude that the half-half method is insufficient
for our purposes
and discard it in the following.
Perturbation theory predicts the dependence
of $\chi_0$ on the lattice spacing, see
equations~(\ref{eq:chiren}) and~(\ref{eq:b1p}).
In figure~\ref{fig:chihom}
the data are plotted against $\log(a/a_0)$
to verify the expected logarithmic
divergence.
We include the leading $\mathcal{O}(g^2)$ QCD correction to the
lowest-order QED $\beta$-function coefficient $b_1$.
The renormalization scale $\mu$ is fitted to match the lattice results
(dashed green line).
In addition, we multiply the resulting
curve by a rational function that approaches
unity as $a\rightarrow 0$ (solid yellow error band). This
band defines the homogeneous magnetic susceptibility
$\chi_0(a)$, as shown for one lattice spacing
at the far left of figure~\ref{fig:lowp}.
The resulting renormalization scale
reads $\mu=0.123(8)\,\textmd{GeV}$, consistent with our determination
in ref.~\cite{Bali:2014kia}.
The $\Pi(p^2)$ results are shown for all five ensembles of table~\ref{tab:1}
in figure~\ref{fig:lowp_2}, where $\Pi(0)=\chi_0$ with the susceptibility
$\chi_0$ determined as detailed above.
Notice that the statistical
uncertainties (again, both connected and disconnected terms
are taken into account) within our window of lattice spacings
remain at the sub-percent level
for $p^2>0$ and are about one percent for $p=0$.
Taking into account the statistical errors
of $\Pi(p^2)$ and of the independently determined $\Pi(0)$,
the renormalized
vacuum polarization~(\ref{eq:pren})
is plotted in figure~\ref{fig:finalres} for the whole
momentum region under consideration. For orientation
we also show the three flavor perturbation theory result
for $p^2>2\,\textmd{GeV}^2$, where we truncate the
formulae of refs.~\cite{Baikov:2010je} and~\cite{Baikov:2012zn}
at $\mathcal{O}(\alpha_s^2)$.
The perturbative curve is
only defined up to an overall constant shift, which we adjust by
matching to a continuum extrapolation around
$p^2=2\,\textmd{GeV}^2$.
It is clear from the figure that --- as one would expect ---
lattice spacing effects become more prominent towards high momenta.
In addition, the vacuum polarization
obtained from the experimental $R$-ratio (cf.\ the blue points in figure~\ref{fig:exp}) is included in figure~\ref{fig:finalres}.
\begin{figure}[ht!]
\centering
\includegraphics[width=8cm]{lowp.pdf}
\caption{\label{fig:lowp_2}
Vacuum polarization via magnetic susceptibilities in the low-momentum region.
The data include both connected and disconnected contributions.
}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=8cm]{finalres.pdf}
\caption{\label{fig:finalres}
Subtracted vacuum polarization with independent determinations of $\Pi(p^2)$ and
$\Pi(0)$. The data include both connected and disconnected contributions.
The solid red line indicates the experimental result (cf.\ figure~\protect\ref{fig:exp})
and the dotted line the three-loop perturbative prediction (see the text).
}
\end{figure}
Having obtained the renormalized hadronic vacuum polarization,
we can use equations~(\ref{eq:amu}) to (\ref{eq:amu2})
\cite{Lautrup:1971jf,Blum:2002ii} to predict its contribution to the
muon anomalous magnetic moment.
Choosing a third-order spline interpolation, we obtain values in the range
$a_\mu^{\mathrm{had,LO}}=(4 \ldots 5)\cdot 10^{-8}$ and an upward trend
towards the continuum limit. This is encouraging as the
$R$-ratio predictions of refs.~\cite{Davier:2010nc} and \cite{Hagiwara:2011af}
for the $N_f=2+1+1$ flavor theory
read $a_\mu^{\mathrm{had,LO}}=6.923 (42)\cdot 10^{-8}$
and $a_\mu^{\mathrm{had,LO}}=6.949 (43)\cdot 10^{-8}$, respectively.
However, given that the present lattices are rather coarse
($0.1\,\textmd{fm}\lesssim a<0.3\,\textmd{fm}$),
we do not yet attempt a full-fledged continuum limit extrapolation.
(Note that at these lattice spacings, the taste splitting of the staggered pion
multiplet is still sizeable~\cite{Borsanyi:2010cj}.
Thus, large lattice artefacts originating from the heavier pion states are not unexpected,
since $a_\mu^{\mathrm{had,LO}}$ is highly sensitive to the pseudoscalar masses.)
\subsection{Statistical accuracy and disconnected contributions}
Next, we perform a quantitative comparison between the oscillatory susceptibility method,
the conventional approach with random wall sources and that with point sources.
We demonstrate that the statistical error of $\Pi(p^2)$ can be pushed well below that of
existing studies in the literature -- even with the disconnected terms taken into account.
We calculated $\Pi(p^2)$ using all three methods on 120 configurations from the $\beta=3.45$ ensemble
for a single momentum $p^2=0.03\,\textmd{GeV}^2$ using an increased number of sources.
Figure~\ref{fig:noisy} shows the statistical error
as a function of the number of inversions $N_{\rm inv}$.
The details of our implementation can be found in appendices~\ref{appB}
and~\ref{sec:appC}.
As visible in the figure, the oscillatory susceptibility method allows one to save $50-60\%$ of
the computational effort with respect to the random wall approach. This difference mainly comes
from the disconnected contributions, which can be calculated very accurately via susceptibilities.
In fact, the statistical error in this approach is dominated by the connected
contribution,\footnote{To see why this is the case, note that the number of estimates increases quadratically
with $N_{\mathrm{inv}}$ for the
disconnected terms but only linearly for the connected ones, see the discussion in
appendix~\ref{appB}. Therefore,
the error on the latter eventually overtakes that of the former,
before both show the expected asymptotic
$\sigma^2\simeq c_1(1+c_2/N_{\mathrm{inv}})$ fall-off.
The inherent gauge noise $c_1$ can only be reduced by increasing the
number of configurations.}
as is also visible in the
figure.
As expected, the conventional method with point sources is
not applicable for the determination of the disconnected terms.
Obviously, it is favorable in terms of the total
computer time spent to increase the number of configurations
instead of the number of inversions per configuration.
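The asymptotic error model from the footnote, $\sigma^2\simeq c_1(1+c_2/N_{\mathrm{inv}})$, is linear in $1/N_{\mathrm{inv}}$ and can be fitted accordingly; the sketch below uses synthetic numbers (all values illustrative) and shows how the gauge-noise floor $c_1$ emerges as the intercept:

```python
import numpy as np

rng = np.random.default_rng(7)
c1_true, c2_true = 4.0e-9, 250.0      # illustrative model parameters

N_inv = np.array([125.0, 250.0, 500.0, 1000.0, 2000.0, 4000.0])
sigma2 = c1_true * (1 + c2_true / N_inv)
sigma2 = sigma2 * (1 + rng.normal(0.0, 0.01, size=N_inv.size))  # 1% scatter

# sigma^2 = c1 + (c1*c2)*(1/N_inv): ordinary least squares in (c1, c1*c2)
X = np.column_stack([np.ones_like(N_inv), 1.0 / N_inv])
c1_fit, slope = np.linalg.lstsq(X, sigma2, rcond=None)[0]
c2_fit = slope / c1_fit
print(c1_fit, c2_fit)   # intercept c1 = irreducible gauge noise
```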
We remark that the total number of exact inversions necessary
to achieve a given error can be considerably
reduced by methods like the hopping parameter
expansion~\cite{Thron:1997iy,Michael:1999rs},
truncated eigenmode substitution~\cite{Neff:2001zr,Bali:2005fu,Foley:2005ac},
the truncated solver method~\cite{Collins:2007mh,Bali:2009hu,Blum:2012uh} and,
in the case of Wilson-like fermions, employing spin-explicit
stochastic sources~\cite{Bernardson:1993he,Viehoff:1997wi,Wilcox:1999ab}.
\begin{figure}
\centering
\includegraphics[width=8cm]{noisy_cmp3.pdf}
\caption{\label{fig:noisy} Statistical error of the
total (connected plus disconnected)
$\Pi(p^2=0.03\,\textmd{GeV}^2)$
as a function of the number of inversions. Compared are the results obtained
from oscillatory susceptibilities, using point sources and
random wall sources. In addition, the error of the connected oscillatory
susceptibility alone is shown. Note the logarithmic scale.}
\end{figure}
Finally, we discuss the disconnected contribution $\Pi^{\mathrm{dis}}$ in more detail.
A particular feature of $\Pi^{\mathrm{dis}}$ is that it requires no additive renormalization.
To see this, note that $\Pi^{\mathrm{dis}}(0)$ vanishes in the perturbative continuum limit,
since it is of order $g^6(a)$ in the strong coupling~\cite{Baikov:2012zn},
which damps the logarithmic divergence
and causes $\Pi^{\mathrm{dis}}(0)$ to fall off as $1/\log^2(a)$
for $a\to0$.
In our three-flavor case the disconnected term even vanishes
identically in perturbation theory
due to $\sum_{f=u,d,s}q_f=0$, once quark masses can be neglected,
i.e.\ $a^{-1}\gg m_s$.
Based on this observation, in figure~\ref{fig:disco}
we plot the unsubtracted disconnected
vacuum polarization for all our lattice spacings.
(The number of inversions was $N_{\mathrm{inv}}=800$ for each momentum,
with the exception of the left-most point.)
Overall, $\Pi^{\mathrm{dis}}$ is consistent with zero:
the two points that deviate by more than two standard deviations
are compatible with statistical expectations, and
no systematic dependence on the lattice spacing or on the volume is apparent.
With the exception of three outliers with large error bars, all central values are below
$2\cdot10^{-4}$ in magnitude.
\begin{figure}[ht!]
\centering
\includegraphics[width=8cm]{disconnected.pdf}
\caption{\label{fig:disco}
Disconnected contribution to $\Pi(p^2)$ as a function of $p^2$ for our five
lattice spacings.
}
\end{figure}
Using all available estimators ($N_{\mathrm{inv}}=20\,000$)
for the $\beta=3.45$ ensemble at $p^2=0.03\,\textmd{GeV}^2$,
our most accurate
determinations for the unsubtracted and the subtracted vacuum polarizations read
\begin{equation}
\begin{split}
p^2=0.03\,\textmd{GeV}^2: \quad\quad \Pi &= -0.058362(117)\,, \\
\Pi^{\mathrm{dis}} &= +0.000021(026)\,, \\
\Pi_{\mathrm{R}} &= +0.002355(198)\,.
\end{split}
\label{eq:numbers}
\end{equation}
Here, $\Pi(p^2)$ and $\Pi^{\mathrm{dis}}(p^2)$ were measured using the oscillatory
susceptibility method.
(We highlight again that the error of $\Pi^{\mathrm{dis}}$ is much smaller than that of the total $\Pi$.)
The vacuum polarization at zero momentum was obtained via random wall sources.
Based on the discussion above about the vanishing of $\Pi^{\mathrm{dis}}(0)$ in the
continuum limit, only the connected part of $\Pi(0)$ is necessary for the subtraction.
The relative error of the so-obtained $\Pi_{\mathrm{R}}$ at this momentum is $8\%$, and
is dominated by the error of $\Pi(0)$. Clearly, towards higher $p^2$,
where the magnitude of $\Pi(p^2)$ increases, the relative error on $\Pi_{\mathrm{R}}$
rapidly decreases.
\section{Summary}
We developed a new approach to determine the hadronic vacuum polarization $\Pi(p^2)$ on the lattice.
It is based on calculating magnetic susceptibilities $\chi_p$ with respect to oscillatory background fields
for $p^2>0$ and a homogeneous background for $p^2=0$. The proof of the equivalence between $\chi_p$
and $\Pi(p^2)$ is given in appendix~\ref{sec:appA}.
The oscillatory susceptibilities are obtained by evaluating the appropriate expectation values
using noisy estimators, as described in appendix~\ref{appB}.
Unlike the conventionally used approach, based on position space
current-current correlators,
which mixes information about all possible lattice momenta,
the present method enables us to determine the vacuum polarization
with increased precision for individual low momenta. The low momentum region is
of relevance
for an accurate determination of the
leading hadronic contribution to the muon anomalous magnetic moment.
In principle, the lattice determination of $\Pi(p^2)-\Pi(0)$ at a selected
set of low momenta can also be combined with experimental results for the $R$-ratio to increase
the accuracy of $a_\mu^{\mathrm{had,LO}}$.
The proposed method not only reduces statistical errors at low momenta but also
allows for
an independent measurement of $\Pi(0)$,
instead of having to rely on extrapolations of $\Pi(p^2)$ from $p^2>0$.
We discussed three different methods to determine the homogeneous susceptibility $\chi_0=\Pi(0)$.
The most straightforward method, which relies only on simulations at zero magnetic field
(the so-called half-half method), was found to
suffer from large finite-volume effects of up to $10\%$ of the full value. Instead, we combined
existing results on $\chi_0$ from refs.~\cite{Bonati:2013vba,Bali:2014kia} that are
based on simulations at non-zero background fields.
We also tested stochastic wall sources to obtain
$\Pi(0)$ as the second moment of a momentum projected current-current
correlation function and found that it can compete with the accuracy of the
homogeneous susceptibility for a sufficiently large number of random sources.
It is interesting to note that $\chi_0$ can also be obtained via stochastic wall sources at
finite temperatures, giving direct access to the renormalized magnetic susceptibility
$\chi_0(T)-\chi_0(T=0)$ that enters the QCD equation of state at finite magnetic
fields~\cite{Bali:2013esa,Bonati:2013lca,Levkova:2013qda,Bonati:2013vba,Bali:2013owa,Bali:2014kia}.
The method was tested on staggered $N_f=2+1$ flavor ensembles with various lattice spacings.
Already on a few hundred configurations, a statistical accuracy below one percent is achieved
for $\Pi(p^2)$. The disconnected contributions have been included in all cases.
Figure~\ref{fig:errors} shows an order-of-magnitude comparison
of our statistical accuracy to that of existing
calculations in the literature, wherever data or figures with error bars are
available for $\Pi$ at $p^2\approx 0.03 \, \textmd{GeV}^2$~\cite{Burger:2015oya,DellaMorte:2011aa,Bernecker:2011gh,Aubin:2006xv,Boyle:2011hu,Burger:2013jya,Aubin:2012me,Gregory:2013taa,DellaMorte:2014rta,Marinkovic:2015zaa}.
(Note that the approach followed in ref.~\cite{Francis:2013fzp} involves parameterizing
the lattice data for the zero-momentum projected two-point function $G(t)$
of equation~(\ref{eq:get}),
making a comparison for $\Pi$ difficult.)
We remark that this incomplete comparison does not distinguish between different lattice
volumes, spacings or pion masses but just serves as a qualitative indicator of
the accuracy.
It reveals that our statistical errors, obtained on a comparably
small number of gauge configurations, are by far the smallest within
the lattice
studies shown in figure~\ref{fig:errors}.
However, the approach employing the experimental $R$-ratio
is still about an order of magnitude more accurate.
Nevertheless,
by applying the methods used in this paper to ensembles with substantially higher statistics,
the desired accuracy may be reached in the near future.
\begin{figure}
\centering
\includegraphics[width=8cm]{errors.pdf}
\caption{\label{fig:errors} The statistical error of
the vacuum polarization at low momenta around $p^2=0.03 \,\textmd{GeV}^2$ for several
lattice studies in the literature and for the present work (shaded area).
Open points denote the error of the unsubtracted $\Pi(p^2)$, while full symbols
indicate that of the renormalized $\Pi_{\mathrm{R}}(p^2)$. Studies involving only the
connected contribution are indicated in yellow, while those also taking into account the
disconnected terms in blue. The determination using the experimental $R$-ratio
is also included for comparison (solid green point).
}
\end{figure}
\acknowledgments
This research was funded by the DFG (SFB/TRR 55). The authors acknowledge useful
discussions with Bastian Brandt, Vladimir Braun, Falk Bruckmann,
Pavel Buividovic, Andreas Sch\"afer, K\'{a}lm\'{a}n Szab{\'o}
and in particular
with Bogdan Malaescu and Andreas Hoecker who made available to us
preliminary results on the renormalized hadronic vacuum polarization
function obtained from their analysis of $R$-ratio measurements.
\section{Introduction and main results}
How well can a convex body be approximated by a
polytope?
This is a fundamental question not only in convex geometry, but also in view of applications in stochastic geometry,
complexity, geometric algorithms and many more (e.g., \cite{BuRe, Edelsbrunner, Gardner1995, GardnerKiderlenMilanfar, GlasauerGruber, GruberI, GruberII, PW2, Reitzner2004II}).
\par
Accuracy of approximation is often measured in the symmetric difference
metric, which reflects the volume deviation
of the approximating and approximated objects.
Approximation of a convex body $K$ by inscribed or circumscribed polytopes
with respect to this metric has been studied extensively
and many of the major questions have been resolved. We refer to, e.g., the surveys and books by Gruber \cite {Gr3, Gr4, GruberBook} and the references there and to, e.g., \cite{Ba1, Boeroetzky2000, BoeroetzkyReitzner2004, GRS1, Ludwig1999, Reitzner2004, Schneider1981, SchneiderWeil, SchuettWerner2003}.
\par
Sometimes
it is more advantageous to consider the surface area deviation $\Delta_s$ \cite{ BoeroetzkyCsikos2009, BoeroetzkyReitzner2004, Groemer2000} instead of the volume deviation $\Delta_v$.
It is especially desirable because
if best approximation of convex bodies is replaced by random approximation, then
we have essentially the same amount of information for volume, surface area, and mean width (\cite{BoeroetzkyReitzner2004},\cite{BoeroetzkySchneider2010}), which are three of the quermassintegrals of a convex body (see, e.g., \cite{Gardner1995, SchneiderBook}).
\par
For convex bodies $K$ and $L$ in $\mathbb{R}^n$ with boundaries $\partial K$ and $\partial L$, the symmetric surface area deviation is defined as
\begin{equation} \label{deviation}
\Delta_s(K,L)=\vol_{n-1} \left( \partial (K \cup L)\right)-\vol_{n-1} \left( \partial (K \cap L)\right).
\end{equation}
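For nested bodies the definition simplifies: if $K\subseteq L$, then $\partial(K\cup L)=\partial L$ and $\partial(K\cap L)=\partial K$, so $\Delta_s(K,L)$ is just the difference of the two surface areas. A small numerical illustration in the plane (unit disk versus its inscribed square; the helper name is ours):

```python
import math

# For nested bodies K subset of L one has boundary(K u L) = boundary(L)
# and boundary(K n L) = boundary(K), so the surface area deviation
# reduces to the difference of the two surface areas.
def delta_s_nested(surface_L, surface_K):
    return surface_L - surface_K

# 2D illustration: unit disk L = B_2^2 and its inscribed square K.
perimeter_disk = 2 * math.pi          # vol_1 of the unit circle
perimeter_square = 4 * math.sqrt(2)   # square with vertices on the circle
print(delta_s_nested(perimeter_disk, perimeter_square))  # 2*pi - 4*sqrt(2), about 0.626
```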
\par
Typically, approximation by polytopes often involves side conditions, like a prescribed
number of
vertices, or, more generally,
$k$-dimensional faces \cite{Boeroetzky2000}. Most often in the literature, it is required that the body contain the approximating polytope or vice versa. This requirement is too restrictive and
needs to be dropped.
Here, we do exactly that and prove upper and lower bounds for approximation of convex bodies by arbitrarily positioned polytopes in the
symmetric surface area deviation. This addresses questions asked by Gruber \cite{GruberBook}.
\begin{theo}\label{upperbound:vertices}
There exists an absolute constant $c>0$ such that for every integer $n \geq 3$, there is an $N_n$ such that for every $N \geq N_n$ there is a polytope $P_N$ in $\mathbb{R}^n$ with $N$
vertices such that
$$
\Delta_s(B_2^n,P_N) \leq c \, \frac{\vol_{n-1} \left( \partial B^n_2 \right)}{N^\frac{2}{n-1}}.
$$
\end{theo}
Here, $B^n_2$ is the $n$-dimensional Euclidean unit ball with boundary $S^{n-1}=\partial B^n_2$.
Moreover, throughout the paper $a, b, c, c_1, c_2$ will denote positive absolute constants that may change from line to line.
\par
The proof of Theorem \ref{upperbound:vertices} is based on a random construction.
A crucial step in its proof is a result by J. M\"uller \cite{JMueller} on the surface deviation of a polytope {\em contained} in the unit ball. It describes the asymptotic behavior of the surface deviation of a random polytope $P_N$, the convex hull of $N$
randomly (with respect to the uniform measure) and independently chosen points on the boundary of the unit ball as the number of vertices increases. It says that
\begin{eqnarray}\label{Mueller-surface}
& & \lim_{N\to\infty}
\frac{\vol_{n-1}(S^{n-1})-\Bbb E \vol_{n-1}(\partial P_N)}
{N^{-\frac{2}{n-1}}}
=\frac{n-1}{n+1} \ \frac{\Gamma\left(n+\tfrac{2}{n-1}\right)}{2(n-2)!}
\frac{\left(\vol_{n-1}\left(\partial B_{2}^{n}\right)\right)^{\frac{n+1}{n-1}} }
{\left(\vol_{n-1}(B_{2}^{n-1})\right)^{\frac{2}{n-1}}}.
\end{eqnarray}
The right hand side of (\ref{Mueller-surface}) is of the order $c\,n\vol_{n-1}(\partial B^n_2)$.
Thus, dropping the restriction that $P_N$ is contained in $B^n_2$ improves the estimate by a factor of dimension. The same phenomenon was observed for the volume deviation in \cite{LudwigSchuettWerner}.
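As a quick numerical illustration of this order, one can evaluate the right hand side of (\ref{Mueller-surface}) directly and compare it with $n\,\vol_{n-1}(\partial B^n_2)$; the code and its helper names are ours and serve only as a sketch:

```python
import math

def ball_volume(n):
    """vol_n(B_2^n) = pi^(n/2) / Gamma(n/2 + 1)."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

def sphere_surface(n):
    """vol_{n-1} of the boundary of B_2^n = 2 pi^(n/2) / Gamma(n/2)."""
    return 2 * math.pi ** (n / 2) / math.gamma(n / 2)

def mueller_constant(n):
    """Right-hand side of Mueller's asymptotic formula."""
    return ((n - 1) / (n + 1)
            * math.gamma(n + 2 / (n - 1)) / (2 * math.factorial(n - 2))
            * sphere_surface(n) ** ((n + 1) / (n - 1))
            / ball_volume(n - 1) ** (2 / (n - 1)))

# The ratio to n * vol_{n-1}(S^{n-1}) stays of order one:
for n in (3, 5, 10, 20, 50):
    print(n, mueller_constant(n) / (n * sphere_surface(n)))
```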
\vskip 3mm
For the facets, we obtain the following lower bound for a polytope in arbitrary position.
\begin{theo}\label{AbschUntenSurf} There is a constant $c>0$ and $M_{0}\in\mathbb N$
such that for
all $n\in\mathbb N$ with $n\geq 2$, all $M\in\mathbb N$ with $M\geq M_{0}$ and all polytopes $P_M$ in
$\mathbb R^{n}$ with no more than $M$ facets
$$
\Delta_s(B_{2}^{n},P_M)
\geq c\,\frac{\vol_{n-1}(\partial B^n_2)}{M^{\frac{2}{n-1}}}.
$$
\end{theo}
Again, we gain by a factor of dimension if we drop the requirement that the polytope contains $B^n_2$. Indeed, it follows from \cite{Gr3, McVi} that the order of best approximation
$\Delta_v(B^n_2, P_M^{\text{best}})$ with $B^n_2 \subset P_M$ behaves asymptotically, for $M \rightarrow \infty$, like $\vol_{n-1}(\partial B^n_2)\, M^{-\frac{2}{n-1}}$. Now observe that when $B^n_2 \subset P_M$, $n \ \Delta_v(B^n_2, P_M)= \Delta_s(B^n_2, P_M)$.
\vskip 3mm
As a corollary to Theorem \ref{AbschUntenSurf}, we deduce a lower bound in the case of simple polytopes with at most $N$ vertices.
A polytope in $\mathbb R^{n}$ is called simple if at every vertex exactly
$n$ facets meet.
\vskip 2mm
\begin{cor}
There is a constant $c>0$ and $N_{0}\in\mathbb N$
such that for
all $n\in\mathbb N$ with $n\geq2$, all $N\in\mathbb N$ with $N\geq N_{0}$ and all
simple polytopes $P_{N}$ in
$\mathbb R^{n}$ with no more than $N$ vertices
$$
\Delta_s(B_{2}^{n},P_{N})
\geq c\,\frac{\vol_{n-1}(\partial B^n_2)}{N^{\frac{2}{n-1}}}.
$$
\end{cor}
\vskip 5mm
The authors want to thank the Institute for Mathematics and Its Applications (IMA) at the University of Minnesota for their hospitality. It was during their stay there when most of the work on the paper was carried out. We also want to thank the referee and the editor for their careful work.
\section{Notation and auxiliary lemmas}
For a convex body $K$ in $\mathbb {R}^n$, we denote by $\intt(K)$ its interior. Its $n$-dimensional volume is $\vol_n(K)$ and the surface area of its boundary $\partial K$ is $\vol_{n-1}(\partial K)$.
The usual surface area measure on $\partial K$ is denoted by $\mu_{\partial K}$. The convex hull of points $x_1, \dots, x_m$ is $[x_1, \dots, x_m]$.
\par
The affine hyperplane in $\mathbb R^{n}$ through the point $x$ and orthogonal to the vector
$\xi$ is denoted by $H(x,\xi)$.
\par
For any further notions related to convexity, we refer to the books by e.g., Gruber \cite{GruberBook} and Schneider \cite{SchneiderBook}.
\par
We start with several lemmas needed for the proof of Theorem \ref{upperbound:vertices}.
The first lemma says that almost all random polytopes of points chosen
from a convex body are simplicial. Intuitively this is obvious: If we have chosen $x_{1},\dots,x_{n}$ and we want to choose $x_{n+1}$
so that it is an element of the hyperplane spanned by
$x_{1},\dots,x_{n}$, then we are choosing $x_{n+1}$ from a null set.
We refer to, e.g., \cite{SchuettWerner2003} for the details.
\vskip 3mm
\begin{lemma}\label{Lemma: simplicial}
Almost all random polytopes of points chosen from the boundary of the
Euclidean ball with respect to the normalized surface measure are
simplicial.
\end{lemma}
\vskip 2mm
We also need the following two lemmata due to Miles \cite{Miles}.
\vskip 2mm
\begin{lemma}\label{Miles1}\rm\cite{Miles}
\begin{eqnarray*}
& & d\mu_{\partial B_{2}^{n}}(x_{1})\cdots d\mu_{\partial B_{2}^{n}}(x_{n})
\\
& & \hskip 5mm
=(n-1)!\frac{\vol_{n-1}([x_{1},\dots,x_{n}])}{(1-p^{2})^{\frac{n}{2}}}
d\mu_{\partial B_{2}^{n}\cap H}(x_{1})\cdots d \mu_{\partial
B_{2}^{n}\cap H}(x_{n})\, dp\, d\mu_{\partial B_{2}^{n}}(\xi),
\end{eqnarray*}
where $\xi$ is the normal to the hyperplane $H$ through $x_{1},\dots,x_{n}$ and
$p$ is the distance of the hyperplane $H$ to the origin.
\end{lemma}
\vskip 2mm
\begin{lemma}\label{Miles2}{\rm\cite{Miles}}
\begin{eqnarray*}
&&\hskip -4mm \int_{\partial B_{2}^{n}(0,r)}\cdots\int_{\partial B_{2}^{n}(0,r)}
(\vol_n([x_{1},\dots,x_{n+1}]))^{2}
\,d\mu_{\partial B_{2}^{n}(0,r)}(x_{1})\cdots d\mu_{\partial B_{2}^{n}
(0,r)}(x_{n+1}) \\
&& \hskip 4mm
=\frac{(n+1)r^{2n}}{n!n^{n}}
(\vol_{n-1}(\partial B_{2}^{n}(0,r)))^{n+1}
=\frac{(n+1)r^{n^{2}+2n-1}}{n!n^{n}}
(\vol_{n-1}(\partial B_{2}^{n}))^{n+1}.
\end{eqnarray*}
\end{lemma}
\vskip 2mm
A cap $C$ of the Euclidean ball $B_{2}^{n}$ is the intersection
of a half space $H^{-}$ with $B_{2}^{n}$. The radius of such a cap
is the radius of the $(n-1)$-dimensional ball
$B_{2}^{n}\cap H$.
\vskip 2mm
\noindent The next two ingredients needed are from \cite{SchuettWerner2003}.
\vskip 2mm
\begin{lemma}\label{SchW1}{\rm\cite{SchuettWerner2003}}
Let $H$ be a hyperplane, $p$ its distance from the origin
and $s$ the surface area of the cap
$B_{2}^{n}\cap H^{-}$, i.e.,
$$
s=\vol_{n-1}(\partial B_{2}^{n}\cap H^{-}).
$$
Then
\begin{equation*}
\frac{dp}{ds}
=-\frac{1}
{(1-p^{2})^{\frac{n-3}{2}}\vol_{n-2}(\partial B_{2}^{n-1})}.
\end{equation*}
\end{lemma}
\vskip 4mm
The following lemma is Lemma 3.13 from \cite{SchuettWerner2003}.
\vskip 2mm
\begin{lemma}\label{surface-radius}{\rm\cite{SchuettWerner2003}}
Let $C$ be a cap of the Euclidean unit ball. Let $s$ be the
surface area of this cap and $r$ its radius. Then we have
\begin{eqnarray*}
& & \left(\frac{s}{\vol_{n-1}(B_{2}^{n-1})}\right)^{\frac{1}{n-1}}
-\frac{1}{2(n+1)}\left(\frac{s}{\vol_{n-1}(B_{2}^{n-1})}
\right)^{\frac{3}{n-1}}
- c \left(\frac{s}{\vol_{n-1}(B_{2}^{n-1})}
\right)^{\frac{5}{n-1}} \\
&&\\
&& \leq r(s) \\
&& \leq\left(\frac{s}{\vol_{n-1}(B_{2}^{n-1})}\right)^{\frac{1}{n-1}}
-\frac{1}{2(n+1)}\left(\frac{s}{\vol_{n-1}(B_{2}^{n-1})}
\right)^{\frac{3}{n-1}}+ c \left(\frac{s}{\vol_{n-1}(B_{2}^{n-1})}
\right)^{\frac{5}{n-1}},
\end{eqnarray*}
where $c$ is a numerical constant.
\end{lemma}
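For $n=3$ this expansion can be checked in closed form, since a cap of $S^{2}$ at distance $p$ from the origin has surface area $s=2\pi(1-p)$ and radius $r=\sqrt{1-p^{2}}$. The following sketch (our own, purely illustrative) compares the exact radius with the two-term expansion of the lemma:

```python
import math

# For n = 3 everything is explicit: a cap of S^2 cut off at distance p
# from the origin has surface area s = 2*pi*(1-p) and radius sqrt(1-p^2),
# so the two-term expansion of the lemma can be checked directly.
n = 3
vol_ball_nm1 = math.pi  # vol_{n-1}(B_2^{n-1}) = vol_2(B_2^2)

for p in (0.99, 0.999, 0.9999):
    s = 2 * math.pi * (1 - p)
    r_exact = math.sqrt(1 - p * p)
    t = (s / vol_ball_nm1) ** (1 / (n - 1))
    r_approx = t - t ** 3 / (2 * (n + 1))
    print(p, r_exact, r_approx)  # agreement improves as the cap shrinks
```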
\vskip 5mm
\section{Proof of Theorem \ref{upperbound:vertices} }
To prove Theorem \ref{upperbound:vertices},
we use a probabilistic argument. We follow the strategy given in \cite{LudwigSchuettWerner}. Instead of volume deviation, we now have to compute the expected surface area deviation between
$B_{2}^{n}$ and a random polytope $[x_{1},\dots,x_{N}]$
whose vertices are chosen randomly and independently from the boundary of
a Euclidean ball with slightly bigger radius.
For technical reasons, we choose the points from the
boundary of $B_{2}^{n}$ and we approximate $(1-\gamma)B_{2}^{n}$.
It will turn out that $\gamma$ is of the order $N^{-\frac{2}{n-1}}$.
\vspace{2mm}
\noindent The expected surface area difference between $(1-\gamma) B_{2}^{n}$
and a random polytope $P_{N}$ is
\begin{eqnarray*}
&&
\hskip -5mm \Bbb{E}\left[\Delta_s((1-\gamma)B_2^n,P_N)\right] = \\
&& \hskip -5mm
\int_{\partial B_2^n}\cdots\int_{\partial B_2^n}\biggl[\vol_{n-1}\left[\partial(P_N\cup(1-\gamma)B_2^n)\right]
-\vol_{n-1}\left[\partial(P_N\cap(1-\gamma)B_2^n)\right]\biggr]\,d\Bbb{P}(x_1)\cdots d\Bbb{P}(x_N),
\end{eqnarray*}
where $\prob =\frac{ \mu_{\partial B^n_2}}{\vol_{n-1}(\partial B^n_2)}$ is the uniform probability measure on $\partial B_2^n$.
For a given $N$, we choose $\gamma$ such that
\begin{equation}\label{choice c}
\vol_{n-1}\Big(\partial\left((1-\gamma)B_2^n\right)\Big) = (1-\gamma)^{n-1}\vol_{n-1}\left(\partial B_2^n\right)
= \Bbb{E}\vol_{n-1}(\partial P_N).
\end{equation}
From (\ref{Mueller-surface}) we see that for large $N$, $(1-\gamma)^{n-1}$ is
asymptotically equal to
\begin{equation*}\label{c-asymptotic}
1- N^{-\frac{2}{n-1}}\ \frac{n-1}{n+1}
\left(\frac{\vol_{n-1}(\partial B_{2}^{n})}
{\vol_{n-1}( B_{2}^{n-1})}
\right)^{\frac{2}{n-1}}
\frac{\Gamma(n+\frac{2}{n-1})}{2(n-2)!}.
\end{equation*}
As $(1-\gamma)^{n-1} \geq 1-(n-1) \gamma $, we get for large enough $N$ that
\begin{equation}\label{c-asymptotic2}
\gamma \geq
\frac{N^{-\frac{2}{n-1}}}{n+1}
\left(\frac{\vol_{n-1}(\partial B_{2}^{n})}
{\vol_{n-1}(B_{2}^{n-1})}
\right)^{\frac{2}{n-1}}
\frac{\Gamma(n+\frac{2}{n-1})}{2(n-2)!}.
\end{equation}
For $\gamma$ small enough, $(1-\gamma)^{n-1} \leq 1- (1-\frac{1}{n}) (n-1) \gamma $. Hence we get for small enough $\gamma$ and large enough $N$ that
\begin{equation}\label{c-asymptotic3}
\gamma
\leq
\frac{n}{n-1} \frac{ N^{-\frac{2}{n-1}}}{n+1}
\left(\frac{\vol_{n-1}(\partial B_{2}^{n})}
{\vol_{n-1}(B_{2}^{n-1})}
\right)^{\frac{2}{n-1}}
\frac{\Gamma(n+\frac{2}{n-1})}{2(n-2)!}.
\end{equation}
Therefore, for $N$ large enough, there are absolute constants $a$ and $b$ such that
\begin{equation}\label{c-estimate}
a \ N^{-\frac{2}{n-1}}
\leq
\gamma\leq b \ N^{-\frac{2}{n-1}}.
\end{equation}
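The two elementary bounds for $(1-\gamma)^{n-1}$ used above can be confirmed numerically; the following quick sanity check (ours) takes one admissible choice of $n$ and $\gamma$:

```python
# Sanity check of the two elementary bounds for (1-gamma)^(n-1):
#   1-(n-1)*gamma <= (1-gamma)^(n-1) <= 1-(1-1/n)*(n-1)*gamma,
# the upper bound requiring gamma small (roughly gamma <= 2/(n*(n-2))).
n, gamma = 10, 1e-4
power = (1 - gamma) ** (n - 1)
lower = 1 - (n - 1) * gamma
upper = 1 - (1 - 1 / n) * (n - 1) * gamma
print(lower <= power <= upper)  # True
```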
We continue the computation of the expected surface area deviation. Since
$$ \vol_{n-1}\left[\partial\left((1-\gamma)B_2^n\right)\right] =\Bbb E\vol_{n-1}\left[\partial((1-\gamma)B_2^n)\cap P_N\right] + \Bbb E\vol_{n-1}\left[\partial((1-\gamma)B_2^n)\cap P_N^c\right]
$$
and
$$\Bbb E\vol_{n-1}(\partial P_N)=\Bbb E\vol_{n-1}\left(\partial P_N\cap(1-\gamma)B_2^n\right)+\Bbb E\vol_{n-1}\left(\partial P_N\cap\left[(1-\gamma)B_2^n\right]^c\right),
$$
our choice of $\gamma$ means that
\begin{align} \label{choice c-2}
&\Bbb E\vol_{n-1}\left[\partial((1-\gamma)B_2^n)\cap P_N\right] + \Bbb E\vol_{n-1}\left[\partial((1-\gamma)B_2^n)\cap P_N^c\right]\\ \nonumber
=\ &\Bbb E\vol_{n-1}\left(\partial P_N\cap(1-\gamma)B_2^n\right)+\Bbb E\vol_{n-1}\left(\partial P_N\cap\left[(1-\gamma)B_2^n\right]^c\right).
\end{align}
\noindent Thus,
\begin{align*}
&\Bbb{E}\left[\Delta_s((1-\gamma)B_2^n,P_N)\right]&\\
&=\Bbb{E} \vol_{n-1}\left[\partial((1-\gamma)B_2^n)\cap P_N^c\right] + \Bbb{E} \vol_{n-1}\left(\partial P_N\cap\left[(1-\gamma)B_2^n\right]^c\right)&\\
&- \Bbb{E} \vol_{n-1}\left(\partial P_N\cap(1-\gamma)B_2^n\right)- \Bbb{E} \vol_{n-1}\left[\partial((1-\gamma)B_2^n)\cap P_N\right]&\\
&=2\Big(\Bbb{E} \vol_{n-1}\left[\partial((1-\gamma)B_2^n)\cap P_N^c\right]- \Bbb{E}\vol_{n-1}\left[(1-\gamma)B_2^n\cap\partial P_N\right]\Big), &
\end{align*}
where the last equality follows from equation (\ref{choice c-2}). Hence,
\begin{align*}
&\Bbb{E}\left[\Delta_s((1-\gamma)B_2^n,P_N)\right]=& \\
&2\int_{\partial B_2^n}\cdots\int_{\partial B_2^n}\biggl\{\vol_{n-1}\left[\partial\left((1-\gamma)B_2^n\right)\cap P_N^c\right]-\vol_{n-1}\left[(1-\gamma)B_2^n\cap\partial P_N\right] \biggr\}\,d\Bbb{P}(x_1)\cdots d\Bbb{P}(x_N) .&
\end{align*}
\par
\noindent We first consider
\begin{eqnarray*}
I_1&=&\int_{\partial B_2^n}\cdots\int_{\partial B_2^n}\vol_{n-1}\left[\partial\left((1-\gamma)B_2^n\right)\cap P_N^c\right]d\Bbb{P}(x_1)\cdots d\Bbb{P}(x_N)\\
&=& \int_{\partial B_2^n}\cdots\int_{\partial B_2^n}\vol_{n-1}\left[\partial\left((1-\gamma)B_2^n\right)\cap P_N^c\right]\mathbbm{1}_{\{0\in \intt (P_N)\}}\,d\Bbb{P}(x_1)\cdots d\Bbb{P}(x_N)\\
&+&\int_{\partial B_2^n}\cdots\int_{\partial B_2^n}\vol_{n-1}\left[\partial\left((1-\gamma)B_2^n\right)\cap P_N^c\right]\mathbbm{1}_{\{0\not\in \intt (P_N)\}}\,d\Bbb{P}(x_1)\cdots d\Bbb{P}(x_N)\\
&\leq& \int_{\partial B_2^n}\cdots\int_{\partial B_2^n}\vol_{n-1}\left[\partial\left((1-\gamma)B_2^n\right)\cap P_N^c\right]\mathbbm{1}_{\{0\in \intt (P_N)\}}\,d\Bbb{P}(x_1)\cdots d\Bbb{P}(x_N)\\
&+&\vol_{n-1}(\partial B_2^n)\int_{\partial B_2^n}\cdots\int_{\partial B_2^n}\mathbbm{1}_{\{0\not\in \intt (P_N)\}}\,d\Bbb{P}(x_1)\cdots d\Bbb{P}(x_N).
\end{eqnarray*}
\noindent By a result of \cite{Wendel} the second summand equals
$$
\vol_{n-1}\left(\partial B_{2}^{n}\right)
2^{-N+1}\sum_{k=0}^{n-1}{N-1\choose k}
\leq\vol_{n-1}\left(\partial B_{2}^{n}\right)
2^{-N+1}n N^{n}.
$$
Therefore,
\begin{eqnarray}\label{EstI1-1}
I_{1}&\leq& \int_{\partial B_2^n}\cdots\int_{\partial B_2^n}\vol_{n-1}\left[\partial\left((1-\gamma)B_2^n\right)\cap P_N^c\right]\mathbbm{1}_{\{0\in \intt (P_N)\}}\,d\Bbb{P}(x_1)\cdots d\Bbb{P}(x_N)\nonumber\\
&&+\vol_{n-1}\left(\partial B_{2}^{n}\right)
2^{-N+1}n N^{n}.
\end{eqnarray}
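Wendel's formula used above lends itself to a simulation check. For $n=2$ the event $0\notin \intt(P_N)$ has the elementary characterization that all points lie in a common half-plane, i.e., that the largest angular gap exceeds $\pi$, and the formula reduces to $N\,2^{1-N}$. A small Monte Carlo sketch (our own):

```python
import math
import random

random.seed(1)

# Wendel's theorem for n = 2: for N points chosen uniformly on the circle,
#   P(0 not in the interior of their convex hull) = 2^(1-N) * sum_{k<2} C(N-1,k)
#                                                 = N * 2^(1-N).
# The origin lies outside the hull iff all points fit in a half-plane,
# i.e. iff the largest angular gap between consecutive points exceeds pi.
def origin_outside(angles):
    a = sorted(angles)
    gaps = [a[i + 1] - a[i] for i in range(len(a) - 1)]
    gaps.append(2 * math.pi - a[-1] + a[0])  # wrap-around gap
    return max(gaps) > math.pi

N, trials = 5, 200_000
hits = sum(origin_outside([random.uniform(0, 2 * math.pi) for _ in range(N)])
           for _ in range(trials))
print(hits / trials, N * 2 ** (1 - N))  # Monte Carlo vs Wendel, about 0.3125
```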
We introduce functions $\phi_{j_1\cdots j_n}:\prod_{i=1}^N\partial B_2^n\to\Bbb{R}$ defined by
\begin{align*}\phi_{j_1\cdots j_n}(x_1,...,x_N)=\begin{cases}
0, \text{ if }[x_{j_1},...,x_{j_n}]\text{ is not an }(n-1)\text{-dimensional face of }[x_1,...,x_N] \\
0, \text{ if }0\not\in \intt \left( [x_1,...,x_N]\right)\\
\vol_{n-1}\left[(1-\gamma)S^{n-1}\cap P_{N}^{c}\cap \cone(x_{j_1},...,x_{j_n})\right], \text{ otherwise}.
\end{cases}\end{align*}
\par
\noindent For vectors $y_1, \dots, y_k$ in $\mathbb {R}^n$,
$$
\cone(y_1, \dots, y_k) = \left\{\sum_{i=1}^k a_i y_i\bigg |\text{ }\forall i: a_i\geq 0 \right\}
$$ is the cone spanned by $y_1, \dots, y_k$.
\noindent
From (\ref{EstI1-1}) we get
\begin{eqnarray} \label{I1,1}
I_1 &\leq& \int_{\partial B_2^n}\cdots\int_{\partial B_2^n}
\sum_{\{j_1,...,j_n\}\subset\{1,...,N\}}
\phi_{j_1\cdots j_n}(x_1,...,x_N)\,d\Bbb{P}(x_1)\cdots d\Bbb{P}(x_N) \nonumber \\
&&+\vol_{n-1}\left(\partial B_{2}^{n}\right)
2^{-N+1}n N^{n}.
\end{eqnarray}
Inequality (\ref{I1,1}) holds since $0\in \intt (P_N)$ and
$\displaystyle\Bbb{R}^n=\bigcup_{[x_{j_1},\dots,x_{j_n}]\text{ is a facet of }P_N}\cone(x_{j_1},...,x_{j_n}).$
By Lemma \ref{Lemma: simplicial}, $P_N=[x_1,...,x_N]$ is simplicial with probability 1. Thus, the previous expression equals
\begin{align*}
{N\choose n}\int_{\partial B_2^n}\cdots\int_{\partial B_2^n}\phi_{1\cdots n}(x_1,...,x_N)\,d\Bbb{P}(x_1)\cdots d\Bbb{P}(x_N)
+\vol_{n-1}\left(\partial B_{2}^{n}\right)
2^{-N+1}n N^{n}.
\end{align*}
Let $H$ be the hyperplane containing the points $x_{1},\dots,x_{n}$.
The set of points where $H$ is not well-defined has measure $0$.
Let $H^{+}$ be the halfspace containing $0$. Then
\begin{eqnarray*}
&&\mathbb P^{N-n}
(\{(x_{n+1},\dots,x_{N})| [x_{1},\dots,x_{n}]
\hskip 1mm \mbox{is a facet of}\hskip 1mm
[x_{1},\dots,x_{N}]\hskip 1mm \mbox{and}\hskip 1mm
0\in[x_{1},\dots,x_{N}]
\}) \\
&&\hskip 20mm
=\left(\frac{\vol_{n-1}(\partial B_{2}^{n}\cap H^{+})}
{\vol_{n-1}(\partial B_{2}^{n})}\right)^{N-n}.
\end{eqnarray*}
Therefore, the above expression equals
\begin{align*}
&{N\choose n}\int_{\partial B_2^n}\cdots\int_{\partial B_2^n}\left[\frac{\vol_{n-1}\left(\partial B_2^n\cap H^+\right)}{\vol_{n-1}(\partial B_2^n)}\right]^{N-n}\\
&\times \vol_{n-1}\left[(1-\gamma)S^{n-1}\cap H^-\cap\cone(x_1,...,x_n)\right]d\Bbb{P}(x_1)\cdots d\Bbb{P}(x_n) +\vol_{n-1}\left(\partial B_{2}^{n}\right)
2^{-N+1}n N^{n}.&\\
&={N\choose n}\frac{(n-1)!}{\left(\vol_{n-1}(\partial B_2^n)\right)^n}\int_{\xi\in S^{n-1}}\int_{p=0}^1\int_{\partial(B_2^n\cap H)}\cdots\int_{\partial(B_2^n\cap H)}\left[\frac{\vol_{n-1}\left(\partial B_2^n\cap H^+\right)}{\vol_{n-1}(\partial B_2^n)}\right]^{N-n}&\\
&\times\frac{\vol_{n-1}\left([x_1,...,x_n]\right)}{\left(1-p^2\right)^{n/2}}\vol_{n-1}\left[(1-\gamma)S^{n-1}\cap H^-\cap\cone(x_1,...,x_n)\right]&\\
&\times d\mu_{\partial(B_2^n\cap H)}(x_1)\cdots d\mu_{\partial(B_2^n\cap H)}(x_n)\,dp\, d\mu_{\partial B_2^n}(\xi)
+\vol_{n-1}\left(\partial B_{2}^{n}\right)
2^{-N+1}n N^{n}.
\end{align*}
For the last equality we have used Lemma \ref{Miles1}. It was shown in \cite{LudwigSchuettWerner} that for $p\leq 1-\frac{1}{n}$,
\begin{eqnarray*}
\left(\frac{\vol_{n-1}(\partial
B_{2}^{n}\cap H^{+})}{\vol_{n-1}
(\partial B_{2}^{n})}\right)^{N-n}
\leq \exp\left(-\frac{N-n}{n^{\frac{n+1}{2}}}\right)
\end{eqnarray*}
and the rest of the expression is bounded. Thus, there is a positive constant $c_{n}$
such that for all $n\in\mathbb N$
\begin{align}\label{I1}
I_1 &\leq {N\choose n}\frac{(n-1)!}{\left(\vol_{n-1}(\partial B_2^n)\right)^n}\int_{\xi\in S^{n-1}}\int_{p=1-\frac{1}{n} }^1\int_{\partial(B_2^n\cap H)}\cdots\int_{\partial(B_2^n\cap H)}\left[\frac{\vol_{n-1}\left(\partial B_2^n\cap H^+\right)}{\vol_{n-1}(\partial B_2^n)}\right]^{N-n}&\nonumber \\
&\times\frac{\vol_{n-1}\left([x_1,...,x_n]\right)}{\left(1-p^2\right)^{n/2}}\vol_{n-1}\left[(1-\gamma)S^{n-1}\cap H^-\cap\cone(x_1,...,x_n)\right]&\nonumber\\
&\times d\mu_{\partial(B_2^n\cap H)}(x_1)\cdots d\mu_{\partial(B_2^n\cap H)}(x_n)\, dp\, d\mu_{\partial B_2^n}(\xi) \nonumber \\
&+\vol_{n-1}\left(\partial B_{2}^{n}\right)
2^{-N+1}n N^{n}+c_{n}\exp\left(-\frac{N-n}{n^{\frac{n+1}{2}}}\right).
\end{align}
\noindent
Now we consider
\begin{eqnarray*}
I_2&=&\int_{\partial B_2^n}\cdots\int_{\partial B_2^n}\vol_{n-1}\left[(1-\gamma)B_2^n\cap\partial P_N\right]d\Bbb{P}(x_1)\cdots d\Bbb{P}(x_N)\\
&\geq& \int_{\partial B_2^n}\cdots\int_{\partial B_2^n}\vol_{n-1}\left[(1-\gamma)B_2^n\cap\partial P_N\right]\mathbbm{1}_{\{0\in \intt (P_N)\}}\,d\Bbb{P}(x_1)\cdots d\Bbb{P}(x_N)\\
&=&\int_{\partial B_2^n}\cdots\int_{\partial B_2^n}\sum_{\{j_1,...,j_n\}\subset\{1,...,N\}}\psi_{j_1\cdots j_n} (x_1,...,x_N)\,d\Bbb{P}(x_1)\cdots d\Bbb{P}(x_N),
\end{eqnarray*}
\noindent
where the map $\psi_{j_1\cdots j_n}:\prod_{i=1}^N\partial B_2^n\to\Bbb{R}$ is defined by \begin{align*}\psi_{j_1\cdots j_n}(x_1,...,x_N)=\begin{cases}
0, \text{ if }[x_{j_1},...,x_{j_n}]\text{ is not an }(n-1)\text{-dimensional face of }[x_1,...,x_N] \\
0, \text{ if }0\not\in \intt \left([x_1,...,x_N]\right)\\
\vol_{n-1}\left[(1-\gamma)B_2^n\cap [x_{j_1},...,x_{j_n}]\right], \text{ otherwise}.
\end{cases}\end{align*}
\noindent
We now proceed for $I_2$ as above for $I_1$, also using Lemma \ref{Miles1}, and get that the previous integral is greater than or equal to
\begin{align} \label{I2}
&{N\choose n}\frac{(n-1)!}{\left(\vol_{n-1}(\partial B_2^n)\right)^n}\int_{\xi\in S^{n-1}}\int_{p=0}^1\int_{\partial(B_2^n\cap H)}\cdots\int_{\partial(B_2^n\cap H)}\left[\frac{\vol_{n-1}\left(\partial B_2^n\cap H^+\right)}{\vol_{n-1}(\partial B_2^n)}\right]^{N-n}& \nonumber\\
&\times\frac{\vol_{n-1}([x_1,...,x_n])}{\left(1-p^2\right)^{n/2}}\vol_{n-1}\left[(1-\gamma)B_2^n\cap H\cap\cone(x_1,...,x_n)\right]& \nonumber\\
&\times d\mu_{\partial(B_2^n\cap H)}(x_1)\cdots d\mu_{\partial(B_2^n\cap H)}(x_n)\, dp\, d\mu_{\partial B_2^n}(\xi).&
\end{align}
\noindent
Therefore, with (\ref{I1}) and (\ref{I2}),
\begin{align*}
&\Bbb{E}\left[\Delta_s((1-\gamma)B_2^n,P_N)\right] \leq 2{N\choose n}\frac{(n-1)!}{\left(\vol_{n-1}(\partial B_2^n)\right)^n}&\\
&\times \int_{\xi\in S^{n-1}}\int_{1-\frac{1}{n}}^1\int_{\partial(B_2^n\cap H)}\cdots\int_{\partial(B_2^n\cap H)}\left[\frac{\vol_{n-1}\left(\partial B_2^n\cap H^+\right)}{\vol_{n-1}(\partial B_2^n)}\right]^{N-n}\frac{\vol_{n-1}([x_1,...,x_n])}{\left(1-p^2\right)^{n/2}}&\\
&\times\biggl[\vol_{n-1}\left[(1-\gamma)S^{n-1}\cap H^-\cap\cone(x_1,...,x_n)\right]-\vol_{n-1}\left[(1-\gamma)B_2^n\cap [x_1,...,x_n]\right]\biggr]
&\\
&\times d\mu_{\partial(B_2^n\cap H)}(x_1)\cdots d\mu_{\partial(B_2^n\cap H)}(x_n)\, dp\, d\mu_{\partial B_2^n}(\xi)\\
&+\vol_{n-1}\left(\partial B_{2}^{n}\right)
2^{-N+1}n N^{n}+c_{n}\exp\left(-\frac{N-n}{n^{\frac{n+1}{2}}}\right).
\end{align*}
\noindent
We notice that
\begin{eqnarray*}
&&\hskip -10mm \vol_{n-1}\left[(1-\gamma)S^{n-1}\cap H^-\cap\cone(x_1,...,x_n)\right] \leq \\
&& \hskip 10mm \left(\frac{1-\gamma}{p}\right)^{n-1}
\vol_{n-1}\left( (1-\gamma)B^n_2 \cap[x_1,...,x_n] \right).
\end{eqnarray*}
Thus,
\begin{align*}
&\Bbb{E}\left[\Delta_s((1-\gamma)B_2^n,P_N)\right] \leq
2{N\choose n} \frac{(n-1)!}{\left(\vol_{n-1}(\partial B_2^n)\right)^n}\\
&\times \int_{\xi\in S^{n-1}}\int_{1-\frac{1}{n}}^1\int_{\partial(B_2^n\cap H)}\cdots\int_{\partial(B_2^n\cap H)}\left[\frac{\vol_{n-1}\left(\partial B_2^n\cap H^+\right)}{\vol_{n-1}(\partial B_2^n)}\right]^{N-n} \ \frac{\left(\vol_{n-1}[x_1,...,x_n]\right)^2}{\left(1-p^2\right)^{n/2}}&\\
&\times \max\left\{0,\left(\frac{1-\gamma}{p}\right)^{n-1}-1\right\}d\mu_{\partial(B_2^n\cap H)}(x_1)\cdots d\mu_{\partial(B_2^n\cap H)}(x_n)\, dp\, d\mu_{\partial B_2^n}(\xi)
\\
&+\vol_{n-1}\left(\partial B_{2}^{n}\right)
2^{-N+1}n N^{n}+c_{n}\exp\left(-\frac{N-n}{n^{\frac{n+1}{2}}}\right).
\end{align*}
\noindent
By Lemma \ref{Miles2} this equals
\begin{align*}
&2{N\choose n}\frac{n}{(n-1)^{n-1}}\frac{\left(\vol_{n-2}(\partial B_2^{n-1})\right)^n}{\left(\vol_{n-1}(\partial B_2^n)\right)^n}\int_{\partial B_2^n}\int_{1-\frac{1}{n}}^1\left[\frac{\vol_{n-1}\left(\partial B_2^n\cap H^+\right)}{\vol_{n-1}(\partial B_2^n)}\right]^{N-n}&\\
&\times\max\left\{0,\left(\frac{1-\gamma}{p}\right)^{n-1}-1\right\}\frac{r^{n^2-2}}{(1-p^2)^{n/2}}\,dp\, d\mu_{\partial B_2^n}(\xi)
\\
&+\vol_{n-1}\left(\partial B_{2}^{n}\right)
2^{-N+1}n N^{n}+c_{n}\exp\left(-\frac{N-n}{n^{\frac{n+1}{2}}}\right),
\end{align*}
\noindent
where $r$ denotes the radius of $B_2^n\cap H$; this radius
is a function of the distance $p$ of the hyperplane $H$
from the origin.
Since the integral does not depend on the direction $\xi$ and $r^2+p^2=1$, this last expression is equal to
\begin{align*}
&2{N\choose n}\frac{n}{(n-1)^{n-1}}\frac{\left(\vol_{n-2}(\partial B_2^{n-1})\right)^n}{\left(\vol_{n-1}(\partial B_2^{n})\right)^{n-1}}\int_{1-\frac{1}{n}}^1\left[\frac{\vol_{n-1}\left(\partial B_2^n\cap H^+\right)}{\vol_{n-1}(\partial B_2^n)}\right]^{N-n}&\\
&\times\max\left\{0,\left(\frac{1-\gamma}{p}\right)^{n-1}-1\right\}r^{n^2-n-2}\,dp
\\
&+\vol_{n-1}\left(\partial B_{2}^{n}\right)
2^{-N+1}n N^{n}+c_{n}\exp\left(-\frac{N-n}{n^{\frac{n+1}{2}}}\right),
\end{align*}
\noindent
which equals
\begin{align*}
&2{N\choose n}\frac{n}{(n-1)^{n-1}}\frac{\left(\vol_{n-2}(\partial B_2^{n-1})\right)^n}{\left(\vol_{n-1}(\partial B_2^n)\right)^{n-1}}\int_{1-\frac{1}{n}}^{1-\gamma}\left[1-\frac{\vol_{n-1}(\partial B_2^n\cap H^-)}{\vol_{n-1}(\partial B_2^n)}\right]^{N-n}&\\
&\times\left[\left(\frac{1-\gamma}{p}\right)^{n-1}-1\right]r^{n^2-n-2}\,dp
+\vol_{n-1}\left(\partial B_{2}^{n}\right)
2^{-N+1}n N^{n}+c_{n}\exp\left(-\frac{N-n}{n^{\frac{n+1}{2}}}\right).
\end{align*}
\noindent
Since $p\geq 1-\frac{1}{n}$ and, by (\ref{c-estimate}), $\gamma$ is of the order $N^{-\frac{2}{n-1}}$, we have for sufficiently large $N$
$$\displaystyle\left(\frac{1-\gamma}{p}\right)^{n-1}-1\leq n(1-\gamma-p).
$$
Therefore, the previous expression can be estimated by
\begin{align*}
&2{N\choose n}\frac{n^2}{(n-1)^{n-1}}\frac{\left(\vol_{n-2}(\partial B_2^{n-1})\right)^n}{\left(\vol_{n-1}(\partial B_2^n)\right)^{n-1}}\int_{1-\frac{1}{n}}^{1-\gamma}\left[1-\frac{\vol_{n-1}(\partial B_2^n\cap H^-)}{\vol_{n-1}(\partial B_2^n)}\right]^{N-n} \frac{1-\gamma-p}{r^{n+2-n^2}}\,dp
\\
&+\vol_{n-1}\left(\partial B_{2}^{n}\right)
2^{-N+1}n N^{n}+c_{n}\exp\left(-\frac{N-n}{n^{\frac{n+1}{2}}}\right).
\end{align*}
\noindent
Let $\phi:[0,1]\to[0,\infty)$ be the function defined by
$$
\phi(p)=\frac{\vol_{n-1}(\partial
B_{2}^{n}\cap H^{-})}{\vol_{n-1}(\partial
B_{2}^{n})}
$$
where $H$ is a hyperplane with distance $p$ from the origin.
As in \cite{LudwigSchuettWerner}, we now choose
$$
s=\phi(p)=\frac{\vol_{n-1}(\partial
B_{2}^{n}\cap H^{-})}{\vol_{n-1}(\partial
B_{2}^{n})}
$$
as our new variable under the integral.
We apply Lemma \ref{SchW1} in order to change the variable under
the integral and get that the above expression is smaller or equal to
\begin{eqnarray} \label{integral111}
&\displaystyle{N\choose n}\frac{(\vol_{n-2}(\partial
B_{2}^{n-1}))^{n-1}}
{(\vol_{n-1}(\partial
B_{2}^{n}))^{n-2}}
\frac{n^2}{(n-1)^{n-1}}
\int_{\phi(1-\gamma)}^{\phi(1-\frac{1}{n})}
(1-s)^{N-n}
(1-\gamma-p)r^{(n-1)^{2}}
ds \nonumber
\\
&+\vol_{n-1}\left(\partial B_{2}^{n}\right)
2^{-N+1}n N^{n}+c_{n}\exp\left(-\frac{N-n}{n^{\frac{n+1}{2}}}\right),
\end{eqnarray}
where $\phi(p)$ is the normalized surface area of the cap cut off by a
hyperplane at distance $p$ from the origin.
Before we proceed, we want to estimate $\phi(1-\gamma)$.
The radius $r$ and the distance $p$
satisfy $1=p^{2}+r^{2}$. It was shown in \cite{LudwigSchuettWerner} that
$$
r^{n-1}\frac{\vol_{n-1}(B_{2}^{n-1})}
{\vol_{n-1}(\partial B_{2}^{n})}
\leq \phi\left(\sqrt{1-r^{2}}\right)
\leq
\frac{1}{\sqrt{1-r^{2}}}r^{n-1}\frac{\vol_{n-1}(B_{2}^{n-1})}
{\vol_{n-1}(\partial B_{2}^{n})}.
$$
We include the argument from \cite{LudwigSchuettWerner} for completeness. We compare $\phi$ with the surface area of the
intersection $B_{2}^{n}\cap H$ of the Euclidean ball and the
hyperplane $H$. We have
$$
\frac{\vol_{n-1}(B_{2}^{n}\cap H)}
{\vol_{n-1}(\partial
B_{2}^{n})}
=r^{n-1}\frac{\vol_{n-1}(B_{2}^{n-1})}
{\vol_{n-1}(\partial
B_{2}^{n})}.
$$
Since the orthogonal projection onto $H$ maps $\partial
B_{2}^{n}\cap H^{-}$ onto $B_{2}^{n}\cap H$, the
left hand inequality follows.
\par
The right hand inequality follows again by
considering the orthogonal projection onto $H$. The surface area
of a
surface element of $\partial B_{2}^{n}\cap H^{-}$ equals the
surface area of the one it is mapped to in
$B_{2}^{n}\cap H$ divided by the cosine of the angle between
the normal to $H$ and the normal to $\partial B_{2}^{n}$ at
the given point. The cosine is always greater than
$\sqrt{1-r^{2}}$.
\par
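As a numerical sanity check (outside the proof), the two-sided cap estimate above can be tested directly from the integral representation of the cap surface area, $\vol_{n-1}(\partial B_2^n\cap H^-)=\vol_{n-2}(\partial B_2^{n-1})\int_p^1(1-t^2)^{(n-3)/2}\,dt$. The following Python sketch (standard library only; a midpoint rule replaces the exact integral) is purely illustrative:

```python
import math

def sphere_area(m):
    # Surface area of the unit sphere S^{m-1} in R^m: 2*pi^{m/2}/Gamma(m/2).
    return 2 * math.pi ** (m / 2) / math.gamma(m / 2)

def ball_volume(m):
    # Volume of the unit ball B_2^m: pi^{m/2}/Gamma(m/2 + 1).
    return math.pi ** (m / 2) / math.gamma(m / 2 + 1)

def phi(n, p, steps=100_000):
    # Normalized cap surface area via a midpoint rule for
    # int_p^1 (1 - t^2)^{(n-3)/2} dt.
    h = (1 - p) / steps
    integral = sum((1 - (p + (k + 0.5) * h) ** 2) ** ((n - 3) / 2) * h
                   for k in range(steps))
    return sphere_area(n - 1) * integral / sphere_area(n)

# Check  r^{n-1} vol(B^{n-1}) / vol(S^{n-1}) <= phi <= same bound / sqrt(1-r^2).
for n in (3, 4, 7):
    for r in (0.2, 0.5, 0.8):
        p = math.sqrt(1 - r ** 2)
        lower = r ** (n - 1) * ball_volume(n - 1) / sphere_area(n)
        assert lower <= phi(n, p) <= lower / p
```

For $n=3$ the cap area is exactly $2\pi(1-p)$, so $\phi(3,p)=(1-p)/2$, which the midpoint rule reproduces.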
For $p=1-\gamma$ we have
$r=\sqrt{2\gamma-\gamma^{2}}\leq\sqrt{2\gamma}$.
Therefore we get by (\ref{c-asymptotic3}),
\begin{eqnarray}
\phi(1-\gamma)
&\leq &
\frac{2^{\frac{n-1}{2}}}{1-\gamma}
\ \frac{\vol_{n-1}(B_{2}^{n-1})}
{\vol_{n-1}(\partial B_{2}^{n})}
\left\{
\frac{n}{n-1} \frac{N^{-\frac{2}{n-1}}}{n+1}
\left(\frac{\vol_{n-1}(\partial B_{2}^{n})}
{\mbox{vol}_{n-2}(\partial B_{2}^{n-1})}
\right)^{\frac{2}{n-1}}
\frac{\Gamma(n+\frac{2}{n-1})}{2(n-2)!}\right\}^{\frac{n-1}{2}}
\nonumber\\
&=&\frac{N^{-1}}{1-\gamma}
\ \left\{\frac{n}{n+1} \ \frac{\Gamma\left(n+\frac{2}{n-1}\right)}{(n-1)!}
\right\}^{\frac{n-1}{2}}.
\end{eqnarray}
The quantity $\gamma$ is of the order $N^{-\frac{2}{n-1}}$, so
$1/(1-\gamma)$ is as close to $1$ as we desire for $N$ large enough.
Moreover, for all $n\in\mathbb N$
$$
\left(\frac{n}{n+1}\right)^{\frac{n-1}{2}}
\leq 1.
$$
Therefore, for all $n\in\mathbb N$ and
$N$ large enough
$$
\phi(1-\gamma)
\leq
\frac{1}{N}\left\{\frac{ \Gamma(n+\frac{2}{n-1})}{(n-1)!}
\right\}^{\frac{n-1}{2}}.
$$
For all $n\in\mathbb N$ with $n\geq2$,
\begin{equation}\label{EstGamma1}
\left\{\frac{ \Gamma(n+\frac{2}{n-1})}{(n-1)!}
\right\}^{\frac{n-1}{2}}
\leq 2n.
\end{equation}
We
verify the estimate. Stirling's formula tells us that
for all $x>0$
$$
\sqrt{2\pi x}x^{x}e^{-x}
<\Gamma(x+1)
<\sqrt{2\pi x}x^{x}e^{-x}e^{\frac{1}{12x}}.
$$
Therefore,
$$
\frac{ \Gamma(n+\frac{2}{n-1})}{(n-1)!}
\leq\left(1+\frac{2}{(n-1)^2}\right)^{n-\frac{1}{2}+\frac{2}{n-1}}
\left(n-1\right)^{\frac{2}{n-1}}
e^{-\frac{2}{n-1}}e^{\frac{1}{12(n-1+\frac{2}{n-1})}}
$$
and
$$
\left(\frac{\Gamma(n+\frac{2}{n-1})}{(n-1)!}\right)^{\frac{n-1}{2}}
\leq\frac{n-1}{e}
\left(1+\frac{2}{(n-1)^2}\right)^{\frac{(n-1)(2n-1)}{4}}
\left(1+\frac{2}{(n-1)^2}\right)
e^{\frac{n-1}{24(n-1+\frac{2}{n-1})}}.
$$
The right hand expression is asymptotically equal to
$(n-1) e^{1/24}$ and (\ref{EstGamma1}) follows.
Altogether,
\begin{equation}\label{s(1-c)}
\phi(1-\gamma)\leq \frac{2n}{N}.
\end{equation}
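Estimate (\ref{EstGamma1}) can also be checked numerically; since the left hand side behaves like $(n-1)e^{1/24}$, the bound $2n$ has ample room. A short illustrative Python check (standard library only):

```python
import math

def gamma_estimate(n):
    # Left hand side of (EstGamma1): (Gamma(n + 2/(n-1)) / (n-1)!)^{(n-1)/2}.
    ratio = math.gamma(n + 2 / (n - 1)) / math.factorial(n - 1)
    return ratio ** ((n - 1) / 2)

# Verify the estimate for moderate n (math.gamma overflows near n = 171).
for n in range(2, 150):
    assert gamma_estimate(n) <= 2 * n
```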
Since $p=\sqrt{1-r^{2}}$,
we get for all $r$ with $0\leq r\leq 1$
$$
1-\gamma-p=1-\gamma-\sqrt{1-r^{2}}\leq \frac{1}{2}r^{2}+r^{4}-\gamma.
$$
This estimate is equivalent to
$1-\frac{1}{2}r^{2}-r^{4}\leq\sqrt{1-r^{2}}$. The
left hand side is negative for $r\geq 0.9$ and thus the
inequality holds for $r$ with $0.9\leq r\leq 1$. For $r$
with $0\leq r\leq 0.9$ we square both sides; the squared inequality
reduces to $r^{2}+r^{4}\leq\frac{7}{4}$, which holds in this range.
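The elementary inequality just used (equivalently $1-\sqrt{1-r^{2}}\leq\frac{1}{2}r^{2}+r^{4}$, since $\gamma$ cancels on both sides) is easy to confirm on a grid; the following Python fragment is purely illustrative:

```python
import math

def slack(r):
    # Nonnegative iff 1 - r^2/2 - r^4 <= sqrt(1 - r^2).
    return math.sqrt(1 - r * r) - (1 - r * r / 2 - r ** 4)

# Grid check on [0, 1]; for small r the slack is ~ (7/8) r^4 >= 0.
for k in range(1001):
    assert slack(k / 1000) >= 0
```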
Thus the integral (\ref{integral111}) is smaller or equal to
\begin{eqnarray*}
&&{N\choose n}\frac{(\vol_{n-2}(\partial
B_{2}^{n-1}))^{n-1}}
{(\vol_{n-1}(\partial
B_{2}^{n}))^{n-2}}
\frac{n^2}{(n-1)^{n-1}}
\int_{\phi(1-\gamma)}^{\phi(1-\frac{1}{n})}
(1-s)^{N-n}
\left(\frac{1}{2}r^{2}+r^{4}-\gamma \right)r^{(n-1)^{2}}
ds \\
&&+\vol_{n-1}\left(\partial B_{2}^{n}\right)
2^{-N+1}n N^{n}+c_{n}\exp\left(-\frac{N-n}{n^{\frac{n+1}{2}}}\right).
\end{eqnarray*}
Now we evaluate the integral of this expression. Again, we proceed exactly as in \cite{LudwigSchuettWerner} with the obvious modifications.
We include the arguments for completeness.
We use Lemma \ref{surface-radius}. By differentiation we verify that
$(\frac{1}{2}r^{2}+r^{4}-\gamma)r^{(n-1)^{2}}$ is a monotone function
of $r$. Here we use that $\frac{1}{2}r^{2}+r^{4}-\gamma$ is
nonnegative.
\begin{eqnarray*}
&& \hskip -7mm \int_{\phi(1-\gamma)}^{\phi(1-\frac{1}{n})}
(1-s)^{N-n}
\left[\frac{1}{2}r^{2}+r^{4}-\gamma \right]r^{(n-1)^{2}}ds
\leq
\frac{1}{2}\int_{0}^{1}
(1-s)^{N-n}
\left[s\frac{\vol_{n-1}(\partial B_{2}^{n})}
{\vol_{n-1}(B_{2}^{n-1})}\right]^{n-1+\frac{2}{n-1}}ds \\
&&\hskip -6mm
+\int_{0}^{1}
(1-s)^{N-n}
\left(s\frac{\vol_{n-1}(\partial B_{2}^{n})}
{\vol_{n-1}(B_{2}^{n-1})}\right)^{n-1+\frac{4}{n-1}}ds
-\int_{0}^{1}
(1-s)^{N-n}
\gamma \left(s\frac{\vol_{n-1}(\partial B_{2}^{n})}
{\vol_{n-1}(B_{2}^{n-1})}\right)^{n-1}ds \\ &&\hskip
10mm +\int_{0}^{\phi(1-\gamma)}
(1-s)^{N-n}
\gamma \left(s\frac{\vol_{n-1}(\partial B_{2}^{n})}
{\vol_{n-1}(B_{2}^{n-1})}\right)^{n-1}ds.
\end{eqnarray*}
By (\ref{c-asymptotic2}),
\begin{eqnarray*}
&&\hskip -12mm\int_{\phi(1-\gamma)}^{1}
(1-s)^{N-n}
\left(\frac{1}{2}r^{2}+r^{4}-\gamma \right)r^{(n-1)^{2}}ds \\
&&\hskip 10mm
\leq\frac{1}{2}
\left(\frac{\vol_{n-1}(\partial B_{2}^{n})}
{\vol_{n-1}(B_{2}^{n-1})}\right)^{n-1+\frac{2}{n-1}}
\frac{\Gamma(N-n+1)\Gamma(n+\frac{2}{n-1})}{\Gamma(N+1+\frac{2}{n-1})}
\nonumber\\ &&\hskip 10mm
+\left(\frac{\vol_{n-1}(\partial B_{2}^{n})}
{\vol_{n-1}(B_{2}^{n-1})}\right)^{n-1+\frac{4}{n-1}}
\frac{\Gamma(N-n+1)\Gamma(n+\frac{4}{n-1})}{\Gamma(N+1+\frac{4}{n-1})}
\nonumber\\ &&\hskip 10mm
-
\left(\frac{\vol_{n-1}(\partial
B_{2}^{n})} {\vol_{n-1}(B_{2}^{n-1})}\right)^{n-1}
\frac{\Gamma(N-n+1)\Gamma(n)}{\Gamma(N+1)}
\\ &&\hskip 25mm \times \
\frac{N^{-\frac{2}{n-1}}}{n+1}
\left(\frac{\vol_{n-1}(\partial B_{2}^{n})}
{\vol_{n-1}(B_{2}^{n-1})}
\right)^{\frac{2}{n-1}}
\frac{\Gamma(n+\frac{2}{n-1})}{2(n-2)!}
\\&&\hskip 10mm
+\gamma \cdot \phi(1-\gamma )\left(\frac{\vol_{n-1}(\partial B_{2}^{n})}
{\vol_{n-1}(B_{2}^{n-1})}\right)^{n-1}
\max_{s\in[0,\phi(1-\gamma)]}(1-s)^{N-n}s^{n-1}.
\end{eqnarray*}
Thus,
\begin{eqnarray}
&&\int_{\phi(1-\gamma)}^{1}
(1-s)^{N-n}
\left(\frac{1}{2}r^{2}+r^{4}-\gamma \right)r^{(n-1)^{2}}ds \\
&&\hskip 10mm
\leq\frac{1}{2}
\left(\frac{\vol_{n-1}(\partial B_{2}^{n})}
{\vol_{n-1}(B_{2}^{n-1})}\right)^{n-1+\frac{2}{n-1}}
\frac{\Gamma(N-n+1)\Gamma(n+\frac{2}{n-1})}{\Gamma(N+1+\frac{2}{n-1})}
\nonumber\\ &&\hskip 10mm
+\left(\frac{\vol_{n-1}(\partial B_{2}^{n})}
{\vol_{n-1}(B_{2}^{n-1})}\right)^{n-1+\frac{4}{n-1}}
\frac{\Gamma(N-n+1)\Gamma(n+\frac{4}{n-1})}{\Gamma(N+1+\frac{4}{n-1})}
\nonumber\\ &&\hskip 10mm
-\frac{1}{2}\frac{n-1}{n+1}
\left(\frac{\vol_{n-1}(\partial
B_{2}^{n})}
{\vol_{n-1}(B_{2}^{n-1})}\right)^{n-1+\frac{2}{n-1}}
\frac{\Gamma(N-n+1)\Gamma\left(n+\frac{2}{n-1}\right)}
{\Gamma(N+1)}N^{-\frac{2}{n-1}}
\nonumber\\&&\hskip 10mm
+\gamma \cdot \phi(1-\gamma)\left(\frac{\vol_{n-1}(\partial B_{2}^{n})}
{\vol_{n-1}(B_{2}^{n-1})}\right)^{n-1}
\max_{s\in[0,\phi(1-\gamma)]}(1-s)^{N-n}s^{n-1}.
\nonumber
\end{eqnarray}
The second summand is asymptotically equal to
\begin{eqnarray}
&&\hskip -15mm \left(\frac{\vol_{n-1}(\partial B_{2}^{n})}
{\vol_{n-1}(B_{2}^{n-1})}\right)^{n-1+\frac{4}{n-1}}
\frac{(N-n)!(n-1)!n^{\frac{4}{n-1}}}{N!(N+1)^{\frac{4}{n-1}}}
\nonumber\\&& \hskip 12mm
=\left(\frac{\vol_{n-1}(\partial B_{2}^{n})}
{\vol_{n-1}(B_{2}^{n-1})}\right)^{n-1+\frac{4}{n-1}}
\frac{n^{-1+\frac{4}{n-1}}}{{N\choose n}(N+1)^{\frac{4}{n-1}}}.
\end{eqnarray}
This summand is of the order $N^{-\frac{4}{n-1}}$,
while the others are of the order $N^{-\frac{2}{n-1}}$.
\par
We consider the sum of the first and third summands, which is equal to
\begin{eqnarray*}
\frac{1}{2}
\left(\frac{\vol_{n-1}(\partial B_{2}^{n})}
{\vol_{n-1}(B_{2}^{n-1})}\right)^{n-1+\frac{2}{n-1}}
\frac{\Gamma(N-n+1)\Gamma(n+\frac{2}{n-1})}{\Gamma(N+1+\frac{2}{n-1})}
\left(1-
\frac{n-1}{n+1}\
\frac{
\Gamma(N+1+\frac{2}{n-1})}
{\Gamma(N+1)N^{\frac{2}{n-1}}}\right).
\end{eqnarray*}
Since $\Gamma(N+1+\frac{2}{n-1})$ is asymptotically equal to
$(N+1)^{\frac{2}{n-1}}\Gamma(N+1)$, the sum of the first and third
summand
is for large $N$ of the order
\begin{equation}
\frac{2}{n+1}
\left(\frac{\vol_{n-1}(\partial B_{2}^{n})}
{\vol_{n-1}(B_{2}^{n-1})}\right)^{n-1+\frac{2}{n-1}}
\frac{\Gamma(N-n+1)
\Gamma(n+\frac{2}{n-1})}{\Gamma(N+1+\frac{2}{n-1})},
\end{equation}
which in turn is of the order
\begin{equation}
\frac{1}{n^{2}}
\left(\frac{\vol_{n-1}(\partial B_{2}^{n})}
{\vol_{n-1}(B_{2}^{n-1})}\right)^{n-1+\frac{2}{n-1}}
{N\choose n}^{-1}N^{-\frac{2}{n-1}}.
\end{equation}
We consider now the fourth summand. By (\ref{c-estimate})
and (\ref{s(1-c)}) the fourth summand is less than
\begin{equation}\label{4summand}
bN^{-\frac{2}{n-1}}\frac{n-1}{e^{23/24}N}
\left(\frac{\vol_{n-1}(\partial B_{2}^{n})}
{\vol_{n-1}(B_{2}^{n-1})}\right)^{n-1}
\max_{s\in[0,\phi(1-\gamma)]}(1-s)^{N-n}s^{n-1}.
\end{equation}
The maximum of the function $(1-s)^{N-n}s^{n-1}$
is attained at $(n-1)/(N-1)$ and the function
is increasing on the interval $[0,(n-1)/(N-1)]$. Therefore,
since $\phi(1-\gamma)<(n-1)/(N-1)$ the
maximum of this function over the interval $[0,\phi(1-\gamma)]$ is
attained at $\phi(1-\gamma)$. By (\ref{s(1-c)}) we have
$\phi(1-\gamma)\leq e^{\frac{1}{24}}\frac{n-1}{eN}$ and thus
for $N$ sufficiently big
\begin{eqnarray*}
\max_{s\in[0,\phi(1-\gamma)]}(1-s)^{N-n}s^{n-1}
&&\leq
\left(1-\frac{n-1}{e^{23/24}N}\right)^{N-n}
\left(e^{\frac{1}{24}}\frac{n-1}{eN}\right)^{n-1} \\
&&\leq\exp\left(\frac{n-1}{24}-\frac{(n-1)(N-n)}{e^{23/24}N}\right)
\left(\frac{n}{eN}\right)^{n-1} \\
&&\leq\exp\left(-\frac{n-1}{4}\right)
\left(\frac{n}{eN}\right)^{n-1}.
\end{eqnarray*}
Thus we get with a new constant $b$ that (\ref{4summand}) is smaller than or equal to
$$
bN^{-\frac{2}{n-1}}\left(\frac{\vol_{n-1}(\partial B_{2}^{n})}
{\vol_{n-1}(B_{2}^{n-1})}\right)^{n-1}
e^{-\frac{n}{4}}\frac{n^{n}e^{-n}}{N^{n}}.
$$
This is asymptotically equal to
\begin{equation}
bN^{-\frac{2}{n-1}}\left(\frac{\vol_{n-1}(\partial B_{2}^{n})}
{\vol_{n-1}(B_{2}^{n-1})}\right)^{n-1}
e^{-\frac{n}{4}}\frac{1}{{N\choose n}\sqrt{2\pi n}}.
\end{equation}
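The asymptotic identification of $\frac{n^{n}e^{-n}}{N^{n}}$ with $\frac{1}{\binom{N}{n}\sqrt{2\pi n}}$ is Stirling's formula combined with $\binom{N}{n}\sim\frac{N^{n}}{n!}$; numerically the two sides agree up to a few percent already for moderate $n$ and large $N$, as the illustrative Python check below shows:

```python
import math

def stirling_ratio(n, N):
    # Ratio of n^n e^{-n} / N^n to 1 / (C(N, n) sqrt(2 pi n)); close to 1
    # for fixed n and large N, up to Stirling-type corrections.
    lhs = n ** n * math.exp(-n) / N ** n
    rhs = 1 / (math.comb(N, n) * math.sqrt(2 * math.pi * n))
    return lhs / rhs

for n in (3, 5, 8):
    assert 0.9 < stirling_ratio(n, 10 ** 6) < 1.1
```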
Altogether, (\ref{integral111}) for $N$ sufficiently big can be estimated by
\begin{eqnarray*}
&&\hskip -5mm {N\choose n}\frac{(\vol_{n-2}(\partial
B_{2}^{n-1}))^{n-1}}
{(\vol_{n-1}(\partial
B_{2}^{n}))^{n-2}}
\frac{n^2}{(n-1)^{n-1}}
\left\{\frac{1}{n^{2}}
\left(\frac{\vol_{n-1}(\partial B_{2}^{n})}
{\vol_{n-1}(B_{2}^{n-1})}\right)^{n-1+\frac{2}{n-1}}
{N\choose n}^{-1}N^{-\frac{2}{n-1}}\right.
\\ &&\hskip 50mm\left.
+ \ bN^{-\frac{2}{n-1}}\left(\frac{\vol_{n-1}(\partial B_{2}^{n})}
{\vol_{n-1}(B_{2}^{n-1})}\right)^{n-1}
e^{-\frac{n}{4}}\frac{1}{{N\choose n}\sqrt{2\pi n}}\right\}.
\end{eqnarray*}
This can be estimated by a constant times
\begin{eqnarray} \label{last-est}
&&
(\vol_{n-1}(\partial B_{2}^{n}))
n^2\bigg\{\frac{1}{n^2}
N^{-\frac{2}{n-1}}
+bN^{-\frac{2}{n-1}}
e^{-\frac{n}{4}}\frac{1}{\sqrt{2\pi n}}\bigg\}.
\end{eqnarray}
\noindent
Finally, it should be noted that we have been estimating the approximation of $(1-\gamma) B^n_2$ and not that of $B^n_2$. Therefore we need to multiply
(\ref{last-est}) by $(1-\gamma)^{-{(n-1)}}$. By (\ref{c-estimate}),
$$
(1-\gamma)^{n-1} \geq 1 - b\frac{n-1}{N^\frac{2}{n-1}},
$$
so that we have for all $N \geq (2b(n-1))^\frac{n-1}{2}$ that $(1-\gamma)^{-{(n-1)}} \leq 2$.
\qedwhite
\bigskip
\section{Proof of Theorem \ref{AbschUntenSurf} }
For the proof of Theorem \ref{AbschUntenSurf} we need several more ingredients. Throughout this section, we denote by $\|\cdot\|_2$ the Euclidean norm on $\mathbb{R}^n$
and by $B^n_2(\xi, r)$ the $n$-dimensional Euclidean ball with radius $r$ centered at $\xi$.
\par
For a polytope $P$, the map
$T:\partial P \cap B^n_2 \to \partial B_{2}^{n}$ is such that it maps an element $x$ with a unique outer normal $N(x)$
onto the following element of $\partial B_{2}^{n}$
\begin{equation}\label{T}
x \mapsto T(x)=\partial B_{2}^{n} \cap \{x+s N(x) : s \geq 0, N(x) \hskip 1mm \mbox{normal at}\hskip 1mm x\}.
\end{equation}
Points not having a unique normal have measure $0$ and their image is
prescribed in an arbitrary way.
\vskip 2mm
\begin{lemma}\label{AbschInnenFacet}
For all $n\in\mathbb N$ with $n\geq 2$, all $M\in\mathbb N$ with $M\geq 3$,
all polytopes $P_M$ in $\mathbb R^{n}$ with facets $F_{i}$, $i=1,\dots,M$
and
for all $i=1,\dots,M$ we have
$$
\vol_{n-1}(T(F_{i}\cap B_{2}^{n}))
-\vol_{n-1}(F_{i}\cap B_{2}^{n})
\geq\frac{1}{32} \ \frac{\left(\vol_{n-1}(F_{i}\cap B_{2}^{n})\right)^{\frac{n+1}{n-1}}}{\left(\vol_{n-1}(B_{2}^{n-1})\right)^{\frac{2}{n-1}}}.
$$
\end{lemma}
\vskip 2mm
\noindent
{\bf Proof.} If $F_{i}\cap B_{2}^{n}$ is empty, the inequality holds
trivially since both sides equal $0$.
\par
Let $\xi_{i}$, $i=1,\dots,M$, be the outer normals of $P_M$ to $F_{i}$ and let $t_{i}\in\mathbb R$
be such that $H(t_{i}\xi_{i},\xi_{i})$ is the hyperplane containing $F_{i}$.
By definition, the volume radius of $F_{i}\cap B_{2}^{n}$ is
\begin{equation}\label{AbschInnenFacet-2}
r_{i}=\left(\frac{\vol_{n-1}(F_{i}\cap B_{2}^{n})}
{\vol_{n-1}(B_{2}^{n-1})}\right)^{\frac{1}{n-1}}.
\end{equation}
We decompose the set $F_{i}$ into the two sets
$$
F_{i}^{1}=F_{i}\cap B_{2}^{n}(t_{i}\xi_{i},\tfrac{r_{i}}{2})
\hskip 10mm\mbox{and}\hskip 10mm
F_{i}^{2}=F_{i}\cap (B_{2}^{n}(t_{i}\xi_{i},\tfrac{r_{i}}{2}))^{c}.
$$
$F_{i}^{1}$ may be the empty set but, as we shall see during the proof,
$F_{i}^{2}$ is never empty provided $F_{i}\cap B_{2}^{n}$ is nonempty.
The map $T$ stretches an infinitesimal surface element at $x$ by the factor
$\frac{1}{|\langle \xi_{i},T(x)\rangle|}$. Therefore,
\begin{eqnarray}\label{AbschInnenFacet-3}
\vol_{n-1}(T(F_{i}\cap B_{2}^{n}))
=\int_{F_{i}\cap B_{2}^{n}}\frac{dx}{|\langle \xi_{i},T(x)\rangle|}
=\int_{F_{i}^{1}\cap B_{2}^{n}}\frac{dx}{|\langle \xi_{i},T(x)\rangle|}
+\int_{F_{i}^{2}\cap B_{2}^{n}}\frac{dx}{|\langle \xi_{i},T(x)\rangle|}.
\end{eqnarray}
For all $x\in F_{i}^{2}\cap B_{2}^{n}$ we have
\begin{equation}\label{AbschInnenFacet-1}
|\langle \xi_{i},T(x)\rangle|
\leq\sqrt{1-\frac{1}{4}r_{i}^{2}}.
\end{equation}
We verify this. There is $s\geq0$ with
$T(x)=x+s\xi_{i}$. This implies $\|x+s\xi_{i}\|_{2}=1$, and consequently
$$
s+\langle x,\xi_{i}\rangle
=\sqrt{1-\|x\|_{2}^{2}+\langle x,\xi_{i}\rangle^{2}}.
$$
Moreover, $x\in (B_{2}^{n}(t_{i}\xi_{i},\tfrac{r_{i}}{2}))^{c}$ means
$$
\frac{r_{i}^{2}}{4}<\|x-t_{i}\xi_{i}\|_{2}^{2}
=\|x\|_{2}^{2}-2t_{i}\langle x,\xi_{i}\rangle+t_{i}^{2}
=\|x\|_{2}^{2}-\langle x,\xi_{i}\rangle^{2}.
$$
Thus,
$$
\langle \xi_{i},T(x)\rangle
=\langle \xi_{i},x+s\xi_{i}\rangle
=\langle \xi_{i},x\rangle+s
=\sqrt{1-\|x\|_{2}^{2}+\langle x,\xi_{i}\rangle^{2}}
<\sqrt{1-\frac{r_{i}^{2}}{4}}
$$
and we have shown (\ref{AbschInnenFacet-1}).
By (\ref{AbschInnenFacet-3}) and (\ref{AbschInnenFacet-1}),
\begin{eqnarray*}
\vol_{n-1}(T(F_{i}\cap B_{2}^{n}))
&\geq& \vol_{n-1}(F_{i}^{1}\cap B_{2}^{n})
+\frac{\vol_{n-1}(F_{i}^{2}\cap B_{2}^{n})}{\sqrt{1-\frac{r_{i}^{2}}{4}}} \\
&\geq& \vol_{n-1}(F_{i}^{1}\cap B_{2}^{n})
+\vol_{n-1}(F_{i}^{2}\cap B_{2}^{n})\sqrt{1+\frac{r_{i}^{2}}{4}} .
\end{eqnarray*}
Since $r_{i}\leq1$,
\begin{eqnarray*}
\vol_{n-1}(T(F_{i}\cap B_{2}^{n}))
&\geq&\vol_{n-1}(F_{i}^{1}\cap B_{2}^{n})
+\left(1+\frac{r_{i}^{2}}{16} \right)\vol_{n-1}(F_{i}^{2}\cap B_{2}^{n}) \\
&=&\vol_{n-1}(F_{i}\cap B_{2}^{n})
+\frac{r_{i}^{2}}{16} \vol_{n-1}(F_{i}^{2}\cap B_{2}^{n}).
\end{eqnarray*}
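The chain of elementary inequalities used in the last two displays, namely $\frac{1}{\sqrt{1-y}}\geq\sqrt{1+y}\geq 1+\frac{y}{4}$ with $y=\frac{r_{i}^{2}}{4}\in[0,\frac{1}{4}]$, can be confirmed numerically; the Python snippet below is only an illustration:

```python
import math

def chain_holds(y):
    # 1/sqrt(1-y) >= sqrt(1+y) follows from (1-y)(1+y) <= 1;
    # sqrt(1+y) >= 1 + y/4 follows by squaring (valid for y <= 8).
    return 1 / math.sqrt(1 - y) >= math.sqrt(1 + y) >= 1 + y / 4

# y = r_i^2 / 4 ranges over [0, 1/4] since r_i <= 1.
assert all(chain_holds(k / 4000) for k in range(1001))
```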
Since $F_{i}^{1}\subseteq B_{2}^{n}(t_{i}\xi_{i},\frac{r_{i}}{2})$, we have
$\vol_{n-1}(F_{i}^{1})\leq\frac{r_{i}^{n-1}}{2^{n-1}}\vol_{n-1}(B_{2}^{n-1})$.
With (\ref{AbschInnenFacet-2})
\begin{eqnarray*}
\vol_{n-1}(T(F_{i}\cap B_{2}^{n}))
&\geq& \vol_{n-1}(F_{i}\cap B_{2}^{n})
+\frac{r_{i}^{2}}{16} \left(\vol_{n-1}(F_{i}\cap B_{2}^{n})-\frac{r_{i}^{n-1}}{2^{n-1}}\vol_{n-1}(B_{2}^{n-1})\right) \\
&\geq&\vol_{n-1}(F_{i}\cap B_{2}^{n})
+
\frac{\left(\vol_{n-1}(F_{i}\cap B_{2}^{n})\right)^{\frac{n+1}{n-1}}}{ 16 \left(\vol_{n-1}(B_{2}^{n-1})\right)^{\frac{2}{n-1}} }
-
\frac{\left(\vol_{n-1}(F_{i}\cap B_{2}^{n})\right)^{\frac{n+1}{n-1}}}{2^{n+3}\left(\vol_{n-1}(B_{2}^{n-1})\right)^{\frac{2}{n-1}}}.
\end{eqnarray*}
Therefore,
\begin{eqnarray*}
\vol_{n-1}(T(F_{i}\cap B_{2}^{n}))
-\vol_{n-1}(F_{i}\cap B_{2}^{n})
\geq\frac{1}{32}\frac{ \left(\vol_{n-1} \left(F_{i}\cap B_{2}^{n}\right) \right)^{\frac{n+1}{n-1}}}{\left(\vol_{n-1}(B_{2}^{n-1})\right)^{\frac{2}{n-1}}}.
\end{eqnarray*}
$\Box$
\vskip 3mm
\begin{prop}\label{AbschInnen}
For all $n\in\mathbb N$ with $n\geq 2$, all $M\in\mathbb N$ with $M\geq 3$,
all polytopes $P_M$ in $\mathbb R^{n}$ with at most $M$ facets we have
$$
\vol_{n-1}(\partial B_{2}^{n}\cap P_{M}^{c})-
\vol_{n-1}(\partial P_{M}\cap B_{2}^{n})
\geq \frac{1}{32}
\frac{\left(\vol_{n-1}(B_{2}^{n}\cap\partial P_{M})\right)^{\frac{n+1}{n-1}}}{\left(\vol_{n-1}(B_{2}^{n-1})\right)^{\frac{2}{n-1}}} \
\frac{1}{M^{\frac{2}{n-1}}}.
$$
\end{prop}
\vskip 2mm
\noindent
{\bf Proof.}
Let $T$ be as in (\ref{T}). Then
\begin{eqnarray*}
\vol_{n-1}(\partial B_{2}^{n}\cap P_{M}^{c})-
\vol_{n-1}(\partial P_{M}\cap B_{2}^{n})
\geq\vol_{n-1}\left(\bigcup_{i=1}^{M}T(F_{i}\cap B_{2}^{n})\right)-
\vol_{n-1}\left(\bigcup_{i=1}^{M}(F_{i}\cap B_{2}^{n})\right) .
\end{eqnarray*}
Since the intersection of two sets $F_{i}$ and $F_{i^{\prime}}$ is a nullset
and by Lemma \ref{AbschInnenFacet},
\begin{eqnarray*}
&&\vol_{n-1}(\partial B_{2}^{n}\cap P_{M}^{c})-
\vol_{n-1}(\partial P_{M}\cap B_{2}^{n}) \\
&&\geq\sum_{i=1}^{M}\vol_{n-1}(T(F_{i}\cap B_{2}^{n}))-
\sum_{i=1}^{M}\vol_{n-1}(F_{i}\cap B_{2}^{n})
\geq\frac{1}{32}\sum_{i=1}^{M}
\frac{\left(\vol_{n-1}(F_{i}\cap B_{2}^{n})\right)^{\frac{n+1}{n-1}}}{\left(\vol_{n-1}(B_{2}^{n-1})\right)^{\frac{2}{n-1}}}.
\end{eqnarray*}
As
$$
\sum_{i=1}^{M} \vol_{n-1}(F_{i}\cap B_{2}^{n})=\vol_{n-1}(B_{2}^{n}\cap\partial P_{M}),
$$
by H\"older's inequality
$$
\sum_{i=1}^{M}
\left(\vol_{n-1}(F_{i}\cap B_{2}^{n})\right)^{\frac{n+1}{n-1}}
\geq\frac{\left(\vol_{n-1}(B_{2}^{n}\cap\partial P_{M})\right)^{\frac{n+1}{n-1}}}{M^{\frac{2}{n-1}}}.
$$
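The H\"older step here is the power-mean inequality $\sum_{i=1}^{M}a_{i}^{q}\geq M^{1-q}\left(\sum_{i=1}^{M}a_{i}\right)^{q}$ with $q=\frac{n+1}{n-1}>1$; a quick randomized illustration in Python (standard library only):

```python
import random

def holder_gap(a, q):
    # sum a_i^q - (sum a_i)^q / M^(q-1); nonnegative for q >= 1.
    m = len(a)
    return sum(x ** q for x in a) - sum(a) ** q / m ** (q - 1)

rng = random.Random(0)
for n in (2, 3, 5):
    q = (n + 1) / (n - 1)
    for _ in range(100):
        a = [rng.random() for _ in range(rng.randint(1, 20))]
        # Small tolerance for floating point round-off near equality.
        assert holder_gap(a, q) >= -1e-12
```

Equality holds exactly when all the $a_{i}$ coincide.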
Therefore,
$$
\vol_{n-1}(\partial B_{2}^{n}\cap P_{M}^{c})-
\vol_{n-1}(\partial P_{M}\cap B_{2}^{n})
\geq \frac{1}{32}
\frac{\left(\vol_{n-1}(B_{2}^{n}\cap\partial P_{M})\right)^{\frac{n+1}{n-1}}}{\left(\vol_{n-1}(B_{2}^{n-1})\right)^{\frac{2}{n-1}}}
\frac{1}{M^{\frac{2}{n-1}}}.
$$
$\Box$
\vskip 3mm
Let $R:\mathbb R^{n}\to S^{n-1}$, $ x \mapsto R(x)=\frac{x}{\|x\|_2}$ be
the radial projection.
\vskip 2mm
\begin{lemma}\label{AbschAussen}
For all $n\in\mathbb N$ with $n\geq 2$, all $M\in\mathbb N$ with $M\geq 3$,
all polytopes $P_{M}$ in $\mathbb R^{n}$ with $0\in \intt(P_{M})\subseteq 2B_{2}^{n}$
and with facets $F_{i}$, $i=1,\dots,M$
and
for all $i=1,\dots,M$
$$
\vol_{n-1}(F_{i}\cap (B_{2}^{n})^{c})-
\vol_{n-1}(R(F_{i}\cap (B_{2}^{n})^{c}))
\geq \frac{1}{128}
\frac{\left(\vol_{n-1}(F_{i}\cap (B_{2}^{n})^{c})\right)^{\frac{n+1}{n-1}}}{\left(\vol_{n-1}(B_{2}^{n-1})\right)^{\frac{2}{n-1}}}.
$$
\end{lemma}
\vskip 2mm
\noindent
{\bf Proof.}
Let $\xi_{i}$, $i=1,\dots,M$, be the normals to $F_{i}$ and let $t_{i}\in\mathbb R$
be such that $H(t_{i}\xi_{i},\xi_{i})$ is the hyperplane containing $F_{i}$.
\par
Since $0$ is an interior point of $P_{M}$, $R$ maps $\partial P_{M}$ bijectively
onto $\partial B_{2}^{n}$. In particular, $R$ maps $\partial P_{M}\cap (B_{2}^{n})^{c}$
up to a nullset bijectively onto $\partial B_{2}^{n}\cap P_{M}$. The map $R$ stretches
an infinitesimal surface element at $x$ by the factor
$\frac{\langle \xi_{i},\frac{x}{\|x\|_{2}}\rangle}{\|x\|_{2}^{n-1}}$.
\par
The volume
radius of $F_{i}\cap (B_{2}^{n})^{c}$ is
\begin{equation}\label{VR}
\rho_{i}
=\left(\frac{\vol_{n-1}(F_{i}\cap (B_{2}^{n})^{c})}{\vol_{n-1}(B_{2}^{n-1})}\right)^{\frac{1}{n-1}}.
\end{equation}
For all $x\in F_{i}\cap (B_{2}^{n})^{c}$ we have $\|x\|_{2}>1$. Thus,
\begin{equation}\label{AbschAussen-2}
\vol_{n-1}(R(F_{i}\cap (B_{2}^{n})^{c}))
=\int_{F_{i}\cap (B_{2}^{n})^{c}}
\frac{\left\langle \xi_{i},\frac{x}{\|x\|_{2}}\right\rangle}{\|x\|_{2}^{n-1}}\,dx
\leq\int_{F_{i}\cap (B_{2}^{n})^{c}}
\left\langle \xi_{i},\frac{x}{\|x\|_{2}}\right\rangle dx.
\end{equation}
We decompose the set $F_{i}\cap (B_{2}^{n})^{c}$
into two sets
$$
A_{i}=F_{i}\cap (B_{2}^{n})^{c}\cap B_{2}^{n}\left(t_{i}\xi_{i},\tfrac{\rho_{i}}{2}\right)
\hspace{5mm} \text{and} \hspace{5mm}
B_{i}=F_{i}\cap (B_{2}^{n})^{c}\cap \left(B_{2}^{n}\left(t_{i}\xi_{i},\tfrac{\rho_{i}}{2}\right)\right)^{c}.
$$
For all $x\in F_{i}\cap (B_{2}^{n}(t_{i}\xi_{i},\frac{\rho_{i}}{2}))^{c}$ we have
\begin{equation}\label{AbschAussen-1}
\left\langle \xi_{i},\frac{x}{\|x\|_{2}}\right\rangle
\leq\sqrt{1-\frac{\rho_{i}^{2}}{4\|x\|_{2}^{2}}}.
\end{equation}
We verify this. The inequality $\|x-t_{i}\xi_{i}\|_{2}>\frac{\rho_{i}}{2}$ implies
$$
\frac{\rho_{i}^{2}}{4}<\|x\|_{2}^{2}-2t_{i}\langle x,\xi_{i}\rangle+t_{i}^{2}
=\|x\|_{2}^{2}-\langle x,\xi_{i}\rangle^{2}.
$$
Thus (\ref{AbschAussen-1}) follows.
By (\ref{AbschAussen-2}) and (\ref{AbschAussen-1}),
$$
\vol_{n-1}(R(F_{i}\cap (B_{2}^{n})^{c}))
\leq\int_{A_{i}}
\left\langle \xi_{i},\frac{x}{\|x\|_{2}}\right\rangle dx
+\int_{B_{i}}
\left\langle \xi_{i},\frac{x}{\|x\|_{2}}\right\rangle dx
\leq\int_{A_{i}} dx
+\int_{B_{i}}
\sqrt{1-\frac{\rho_{i}^{2}}{4\|x\|_{2}^{2}}}\, dx .
$$
Since $P_{M}\subseteq 2B_{2}^{n}$,
\begin{eqnarray*}
\vol_{n-1}(R(F_{i}\cap (B_{2}^{n})^{c}))
&\leq& \vol_{n-1}(A_{i})+\vol_{n-1}(B_{i})\sqrt{1-\frac{\rho_{i}^{2}}{16}} \\
&\leq&\vol_{n-1}(F_{i}\cap (B_{2}^{n})^{c})-\frac{\rho_{i}^{2}}{64} \vol_{n-1}(B_{i}).
\end{eqnarray*}
Since $\vol_{n-1}(A_{i})\leq\frac{\rho_{i}^{n-1}}{2^{n-1}} \vol_{n-1}(B_{2}^{n-1})$,
we have $ \vol_{n-1}(B_{i}) \geq \vol_{n-1}(F_{i}\cap (B_{2}^{n})^{c}) -\frac{\rho_{i}^{n-1}}{2^{n-1}} \vol_{n-1}(B_{2}^{n-1})$.
Therefore, with (\ref{VR}),
\begin{eqnarray*}
\vol_{n-1}(R(F_{i}\cap (B_{2}^{n})^{c}))
&\leq&\left(1-\frac{\rho_{i}^{2}}{64}\right) \vol_{n-1}(F_{i}\cap (B_{2}^{n})^{c})
+\frac{\rho_{i}^{n+1}}{2^{n+5}} \vol_{n-1}(B_{2}^{n-1}) \\
&=& \vol_{n-1}(F_{i}\cap (B_{2}^{n})^{c})
-\frac{\left(\vol_{n-1}(F_{i}\cap (B_{2}^{n})^{c})\right)^{\frac{n+1}{n-1}}}{\left(\vol_{n-1}(B_{2}^{n-1})\right)^{\frac{2}{n-1}}}
\left(\frac{1}{64}-\frac{1}{2^{n+5}}\right).
\end{eqnarray*}
$\Box$
\vskip 3mm
\begin{prop}\label{AbschAuss}
For all $n\in\mathbb N$ with $n\geq 2$, all $M\in\mathbb N$ with $M\geq 3$,
all polytopes $P_{M}$ in $\mathbb R^{n}$ with at most $M$ facets and with
$0\in \intt(P_{M})\subseteq 2B_{2}^{n}$
$$
\vol_{n-1}(\partial P_{M}\cap (B_{2}^{n})^{c})-
\vol_{n-1}(\partial B_{2}^{n}\cap P_{M})
\geq \frac{1}{128}
\frac{\left(\vol_{n-1}(\partial P_{M}\cap (B_{2}^{n})^{c})\right)^{\frac{n+1}{n-1}}}
{\left(\vol_{n-1}(B_{2}^{n-1})\right)^{\frac{2}{n-1}}M^{\frac{2}{n-1}}}.
$$
\end{prop}
\vskip 2mm
\noindent
{\bf Proof.} By Lemma \ref{AbschAussen},
\begin{eqnarray*}
&&\vol_{n-1}(\partial P_{M}\cap (B_{2}^{n})^{c})-
\vol_{n-1}(\partial B_{2}^{n}\cap P_{M}) \\
&&\geq\sum_{i=1}^{M}\bigg[\vol_{n-1}(F_{i}\cap (B_{2}^{n})^{c})-
\vol_{n-1}(R(F_{i}\cap (B_{2}^{n})^{c}))\bigg]
\geq\frac{1}{128}\sum_{i=1}^{M}
\frac{\left(\vol_{n-1}(F_{i}\cap (B_{2}^{n})^{c})\right)^{\frac{n+1}{n-1}}}{\left(\vol_{n-1}(B_{2}^{n-1})\right)^{\frac{2}{n-1}}}.
\end{eqnarray*}
As
$$
\vol_{n-1}(\partial P_{M}\cap (B_{2}^{n})^{c})
=\sum_{i=1}^{M} \vol_{n-1}(F_{i}\cap (B_{2}^{n})^{c}),
$$
H\"older's inequality implies
$$
\left(\sum_{i=1}^{M}
\left(\vol_{n-1}(F_{i}\cap (B_{2}^{n})^{c})\right)^{\frac{n+1}{n-1}}\right)^{\frac{n-1}{n+1}}
M^{\frac{2}{n+1}}\geq\sum_{i=1}^{M}
\vol_{n-1}(F_{i}\cap (B_{2}^{n})^{c})
=\vol_{n-1}(\partial P_{M}\cap (B_{2}^{n})^{c}).
$$
Consequently,
\begin{eqnarray*}
\vol_{n-1}(\partial P_{M}\cap (B_{2}^{n})^{c})-
\vol_{n-1}(\partial B_{2}^{n}\cap P_{M})
\geq\frac{1}{128}
\frac{\left(\vol_{n-1}(\partial P_{M}\cap (B_{2}^{n})^{c})\right)^{\frac{n+1}{n-1}}}
{\left(\vol_{n-1}(B_{2}^{n-1})\right)^{\frac{2}{n-1}}M^{\frac{2}{n-1}}}.
\end{eqnarray*}
$\Box$
\vskip 3mm
\noindent
{\bf Proof of Theorem \ref{AbschUntenSurf}.}
We may assume that the origin is an interior point of $P_{M}$. If not, then
$P_{M}$ is contained in a Euclidean half ball and, by convexity,
the surface area of $P_{M}$ is smaller than that of the half ball,
$ \vol_{n-1}\left(\partial P_{M} \right) \leq\frac{1}{2} \vol_{n-1}(\partial B_{2}^{n})+\vol_{n-1}(B_{2}^{n-1})$. So,
for sufficiently large $M$,
\begin{eqnarray*}
\Delta_s(B_{2}^{n},P_{M})
\geq \vol_{n-1}(\partial B^n_2)
- \vol_{n-1}(\partial P_{M})
\geq\tfrac{1}{2}\vol_{n-1}(\partial B^n_2)- \vol_{n-1}(B_{2}^{n-1})
\geq \frac{\vol_{n-1}(\partial B^n_2)} {M^{\frac{2}{n-1}}}.
\end{eqnarray*}
In the same way, we see that for sufficiently large $M$ we may assume that
$\vol_{n-1}(\partial P_{M})\geq\frac{1}{2} \vol_{n-1}(\partial B_{2}^{n})$.
\par
Moreover, we may assume that $P_{M}\subseteq 2B_{2}^{n}$.
If not, there is $x_{0}\in P_{M}$ with $\|x_{0}\|_{2}\geq 2$.
For $M$ sufficiently big we may assume that
$\frac{1}{2}B_{2}^{n}\subseteq P_{M}$. Therefore,
$$
\Delta_s(B_{2}^{n},P_{M})
\geq \vol_{n-1}(\partial[x_{0},\tfrac{1}{2}B_{2}^{n}]\cap (B_{2}^{n})^{c}),
$$
where $[x_{0},\tfrac{1}{2}B_{2}^{n}]$ denotes the convex hull of the point $x_{0}$
with the Euclidean ball of radius $\frac{1}{2}$.
\par
By Propositions \ref{AbschInnen} and \ref{AbschAuss},
\begin{eqnarray*}
\Delta_s(B_{2}^{n},P_{M})
&=&\vol_{n-1}(\partial B_{2}^{n} \cap P_{M}^{c})
- \vol_{n-1}(\partial P_{M}\cap B_{2}^{n}) \\
&&+\vol_{n-1}(\partial P_{M}\cap (B_{2}^{n})^{c})
-\vol_{n-1}(\partial B_{2}^{n}\cap P_{M}) \\
&\geq&\frac{1}{32} \
\frac{\left(\vol_{n-1}(B_{2}^{n}\cap\partial P_{M})\right) ^{\frac{n+1}{n-1}}} {\left(\vol_{n-1}(B_{2}^{n-1})\right)^{\frac{2}{n-1}}M^{\frac{2}{n-1}}}
+\frac{1}{128} \ \frac{\left(\vol_{n-1}(\partial P_{M}\cap (B_{2}^{n})^{c})\right)^{\frac{n+1}{n-1}}}
{\left(\vol_{n-1}(B_{2}^{n-1})\right)^{\frac{2}{n-1}}M^{\frac{2}{n-1}}} .
\end{eqnarray*}
By H\"older's inequality,
\begin{eqnarray*}
\Delta_s(B_{2}^{n},P_{M})
\geq\frac{1}{128 \cdot2^{\frac{2}{n-1}}} \
\frac{\left(\vol_{n-1}(\partial P_{M})\right)^{\frac{n+1}{n-1}}} {\left(\vol_{n-1}(B_{2}^{n-1})\right)^{\frac{2}{n-1}}M^{\frac{2}{n-1}}}.
\end{eqnarray*}
For sufficiently large $M$ we have $\vol_{n-1}(\partial P_{M})\geq\frac{1}{2}\vol_{n-1}(\partial B^n_2)$.
Therefore,
$$
\Delta_s(B_{2}^{n},P_{M})\geq
\frac{1}{2^{12}} \
\frac{\left(\vol_{n-1}(\partial B^n_2) \right)^{\frac{n+1}{n-1}}} {\left(\vol_{n-1}(B_{2}^{n-1})\right)^{\frac{2}{n-1}}M^{\frac{2}{n-1}}}.
$$
There is a constant $c>0$ such that for all $n\in\mathbb N$ with $n\geq 2$
$$
\left(\frac{\vol_{n-1}(\partial B^n_2)}{\vol_{n-1}(B_{2}^{n-1})}\right)^{\frac{2}{n-1}}\geq c.
$$
Therefore, with a new constant $c$,
$$
\Delta_s(B_{2}^{n},P_{M})
\geq
c\,\frac{\vol_{n-1}(\partial B^n_2)}{M^{\frac{2}{n-1}}}.
$$
$\Box$
\vskip 3mm
The large mass of the top quark~\cite{rev} makes it a natural
laboratory to search for new phenomena beyond those predicted by
the Standard Model (SM). One possible avenue of research consists
in using effective operators of dimensions larger than four to
parameterize the effects of any new physics. One advantage of
using this formalism is that one works in a model independent
manner. The complete set of dimension five and six operators that
preserve the gauge symmetries of the SM is quite large, and was
first obtained by Buchm\"uller and Wyler~\cite{buch}. This
methodology has been used by many authors to study the top quark.
For instance, in refs.~\cite{whis} contributions to top quark
physics arising from several types of dimension five and six
operators were studied. In ref.~\cite{saav} a detailed study of
the $W\,t\,b$ vertex was undertaken. Analysis of flavor changing
neutral currents in supersymmetric theories and models with two
Higgs doublets may be found in~\cite{fcnc}. For a recent study of
single top production in supersymmetric models see~\cite{sola} and
for a study on single top-quark production in flavor-changing Z'
models see~\cite{ari}. NLO and threshold corrections to top quark
flavor changing operators were obtained in~\cite{liu}. The four
fermion operator contributions to $t\bar{t}$ production were
studied in detail in~\cite{4f}.
\par
Recently~\cite{nos1,nos2} we studied the effects on the
phenomenology of the top quark of a subset of operators of
dimension six - namely, those with contributions to strong flavor
changing neutral currents. The set chosen included several
operators studied by other authors, but which had not been
considered together before. We also benefitted greatly from
working in a fully gauge-invariant manner, taking advantage of the
equations of motion to eliminate several unknown effective
couplings. We considered both gluonic operators and four-fermion
ones. A detailed analysis of the contributions of these operators
in phenomena such as the top's width, its rare decays and the
cross section for single top production at the LHC was performed.
It was shown that the operator set we chose may have large
contributions to the single top production cross section at the
LHC, and that this channel is an excellent probe of the
existence of new physics.
\par
In this paper we wish to analyze the effect of that set of
effective operators on other potentially interesting channels of
top production at the LHC, namely: top and anti-top production;
associated production of a top quark with a single gauge boson (a
photon, a $W$ or a $Z$ boson); and associated production of a top
quark and a Higgs boson. Our aim, as in
references~\cite{nos1,nos2}, is to produce analytical expressions
whenever possible, so that the results of this paper may be used
directly by our experimental colleagues in their Monte Carlo
simulations. This work is structured as follows: in
section~\ref{sec:eff} we will review the effective operator
formalism and the operators studied in refs.~\cite{nos1,nos2}.
Namely, we will explain the criteria behind that choice and the
role the equations of motion play in how many of them are truly
independent. In section~\ref{sec:ttbar} we will study the effect
of our operator set in the production of $t\bar{t}$ pairs at the
LHC. In section~\ref{sec:tgauge} we will analyze the processes of
associated production of a top and a gauge boson and of a top and
a Higgs boson. In section~\ref{sec:num} we will present numerical
results for the cross sections of these processes at the LHC.
Finally, we will conclude in section~\ref{sec:conc} with an
overall analysis of the results.
\section{The effective operator approach}
\label{sec:eff}
A physical system rarely provides us with enough information for a
complete description of its properties. A way to address this
problem is to parameterize any physical effects not yet observed
by introducing an effective lagrangian with a set of new
interactions to be determined phenomenologically. This effective
lagrangian has the Standard Model as its low-energy limit, and can
serve to represent the effect of any high-energy theory at a given
energy scale $\Lambda$. We write this lagrangian as
\begin{equation}
{\cal L} \;\;=\;\; {\cal L}^{SM} \;+\; \frac{1}{\Lambda}\,{\cal L}^{(5)} \;+\;
\frac{1}{\Lambda^2}\,{\cal L}^{(6)} \;+\; O\,\left(\frac{1}{\Lambda^3}\right)
\;\;\; ,
\label{eq:l}
\end{equation}
where ${\cal L}^{SM}$ is the SM lagrangian and ${\cal L}^{(5)}$
and ${\cal L}^{(6)}$ are all of the dimension 5 and 6 operators
which, like ${\cal L}^{SM}$, are invariant under the gauge
symmetries of the SM. The ${\cal L}^{(5)}$ terms break baryon and
lepton number conservation, and are usually not considered. This
leaves us with the ${\cal L}^{(6)}$ operators. Some of these,
after spontaneous symmetry breaking, generate dimension five
terms. The complete list of effective operators was obtained
in~\cite{buch}. Our purpose, in this and previous
works~\cite{nos1, nos2} is to study flavor changing interactions,
restricted to the strong sector of the theory, involving a top
quark. Therefore, we choose operators with a single top quark,
that do not comprise gauge or Higgs bosons (except for those that
arise from covariant derivatives), and that involve some sort of
strong flavor changing interactions. Finally, we choose those
${\cal L}^{(6)}$ operators that have no sizeable impact on low
energy physics (by which we mean below the TeV scale).
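Since the ${\cal L}^{(6)}$ terms enter suppressed by $1/\Lambda^2$, their effects relative to the SM amplitude scale naively as $(E/\Lambda)^2$ for a process probed at energy $E$. The following minimal sketch (ours, with illustrative scale choices not taken from the text) makes that suppression concrete:

```python
# Illustrative estimate (not from the paper) of the naive suppression
# of dimension-six operator effects, which scale as (E / Lambda)^2
# relative to the Standard Model amplitude.

def dim6_suppression(energy_tev, lambda_tev):
    """Relative size (E/Lambda)^2 of a dimension-six contribution."""
    return (energy_tev / lambda_tev) ** 2

# A process at E ~ 1 TeV probed against a new-physics scale of 2 TeV:
print(dim6_suppression(1.0, 2.0))   # 0.25
```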
Only two gluon operators survive these criteria which, in the
notation of ref.~\cite{buch}, are written as
\begin{align}
{\cal O}_{uG} &= \;\;i\,\frac{\alpha_{ij}}{\Lambda^2}\,\left(\bar{u}^i_R\,
\lambda^a\, \gamma^\mu\,D^\nu\,u^j_R\right)\,G^a_{\mu\nu} \nonumber
\vspace{0.2cm} \\
{\cal O}_{uG\phi} &= \;\;\frac{\beta_{ij}}{\Lambda^2}\,\left(\bar{q}^i_L\,
\lambda^a\, \sigma^{\mu\nu}\,u^j_R\right)\,\tilde{\phi}\,G^a_{\mu\nu} \;\;\; .
\label{eq:op}
\end{align}
$q_L$ and $u_R$ are spinors (a left-handed quark doublet and a
right-handed up-quark singlet of $SU(2)$, respectively), $\tilde{\phi}$ is the
charge conjugate of the Higgs doublet and $G^a_{\mu\nu}$ is the
gluon tensor. $\alpha_{ij}$ and $\beta_{ij}$ are complex
dimensionless couplings and the $(i,j)$ are flavor indices.
According to the criteria listed above, one of these indices must belong to the
third generation. After spontaneous symmetry breaking the neutral
component of the field $\phi$ acquires a vev
($\phi_0\,\rightarrow\,\phi_0\,+\,v$, with $v\,=\, 246/\sqrt{2}$
GeV) and the second of these operators generates a dimension five
term. The lagrangian for new physics is then given by
\begin{align}
{\cal L}\;\; =&\;\;\; \alpha_{tu}\,{\cal O}_{tu}\;+\; \alpha_{ut}\,{\cal O}_{ut}
\;+\; \beta_{tu}\,{\cal O}_{tu\phi}\;+\;\beta_{ut}\,{\cal O}_{ut\phi}\;+\;
\mbox{h.c.} \nonumber \vspace{0.2cm} \\
=& \;\;\;\frac{i}{\Lambda^2}\,\left[\alpha_{tu}\,\left(\bar{t}_R\,\lambda^a\,
\gamma^\mu \,D^\nu\,u_R\right)\;+\; \alpha_{ut}\,\left(\bar{u}_R\,\lambda^a\,
\gamma^\mu\, D^\nu\,t_R\right)\right]\,G^a_{\mu\nu} \;\;\;+ \nonumber
\vspace{0.2cm} \\
& \;\;\;\frac{v\,+\,h/\sqrt{2}}{\Lambda^2}\,\left[\beta_{tu}\,\left(\bar{t}_L\,\lambda^a
\, \sigma^{\mu\nu}\,u_R\right)\;+\; \beta_{ut}\,\left(\bar{u}_L\,\lambda^a\,
\sigma^{\mu\nu}\,t_R\right)\right]\,G^a_{\mu\nu} \;\;+\;\; \mbox{h.c.}
\;\;\;,
\label{eq:lf}
\end{align}
where $h$ is the SM Higgs boson.
This lagrangian describes new vertices such as $g\,\bar{t}\,u$,
$g\,\gamma \, \bar{t}\,u$, $g\,Z\, \bar{t}\,u$ and $g\,h\,\bar{t}\,u$.
There are, of course, analogous vertices involving the top quark,
instead of the anti-top one. We will also consider an analogous
lagrangian (with new couplings $\alpha_{tc}$, $\beta_{ct}$,
\ldots) for vertices of the form $g\,\bar{t}\,c$, $g\,\gamma \,
\bar{t}\,c$, $g\,Z\, \bar{t}\,c$ and $g\,h\,\bar{t}\,c$. Notice
how the operators with $\beta$ couplings correspond to a
chromomagnetic moment of the $t$ quark. Several extensions of
the SM, such as supersymmetry and two Higgs doublet models, may
generate contributions to this type of operator~\cite{chro}.
\begin{figure}[ht]
\epsfysize=6cm \centerline{\epsfbox{feynrul.eps}} \caption{Feynman
rules for anomalous gluon vertices.} \label{feynrul1}
\end{figure}
\begin{figure}[ht]
\epsfysize=4.5cm \centerline{\epsfbox{feynrul2.eps}}
\caption{Feynman rules for anomalous gluon-$\gamma$ and gluon-Z
vertices.} \label{feynrul2}
\end{figure}
\begin{figure}[ht]
\epsfysize=4cm \centerline{\epsfbox{feynrul3.eps}}
\caption{Feynman rules for anomalous gluon-h vertex.}
\label{feynrul3}
\end{figure}
The Feynman rules for these anomalous vertices are shown in
figs.~\eqref{feynrul1},~\eqref{feynrul2} and~\eqref{feynrul3}, with quark
momenta following the arrows and incoming gluon
momenta. The only operators that contribute to the Feynman rule in
fig.~\eqref{feynrul2}
are ${\cal O}_{ut}$ and ${\cal O}_{tu}$. They generate the vertices $g\,\gamma
\, \bar{t}\,u$ and $g\,Z\, \bar{t}\,u$ when we consider the electroweak
gauge fields present in the covariant derivatives of eq.~\eqref{eq:lf}.
On the other hand, the Feynman rule in fig.~\eqref{feynrul3} comes from the
operator
${\cal O}_{uG\phi}$, where the vev was replaced by the Higgs field. Of
course, we have considered analogous vertices involving the $c$ quark
instead of the $u$ one.
In ref.~\cite{nos1} we calculated the effect of these operators on
the width of the quark top. They allow for the decay
$t\,\rightarrow\,u\,g$ ($t\,\rightarrow \,c\,g$) (which is also
possible in the SM, but only at higher orders), and the
corresponding width is given by
\begin{align}
\Gamma (t \rightarrow u g) &=\; \frac{m^3_t}{12
\pi\Lambda^4}\,\Bigg\{ m^2_t \,\left|\alpha_{ut} +
\alpha^*_{tu}\right|^2 \,+\, 16 \,v^2\, \left(\left| \beta_{tu}
\right|^2 + \left| \beta_{ut} \right|^2 \right) \;\;\; +
\vspace{0.3cm} \nonumber \\
& \hspace{2.2cm}\, 8\, v\, m_t\,\mbox{Im}\left[ (\alpha_{ut} + \alpha^*_{tu})
\, \beta_{tu} \right] \Bigg\} \label{eq:wid}
\end{align}
and an analogous expression for $\Gamma (t \rightarrow c g)$. In
this expression and in all results of this paper we have set all quark masses,
except that of the top, equal to zero. We performed the full calculations
and verified that the error introduced by this approximation is extremely small.
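The width~\eqref{eq:wid} is simple to evaluate numerically. A minimal sketch (ours; the function name and input values are illustrative, and the couplings may be complex) implements the expression term by term:

```python
import math

def width_t_to_ug(m_t, v, lam, a_ut, a_tu, b_tu, b_ut):
    """Gamma(t -> u g) of eq. (wid); masses and the scale lam
    must be given in the same units (e.g. GeV)."""
    A = a_ut + a_tu.conjugate()          # the combination alpha_ut + alpha*_tu
    bracket = (m_t**2 * abs(A)**2
               + 16 * v**2 * (abs(b_tu)**2 + abs(b_ut)**2)
               + 8 * v * m_t * (A * b_tu).imag)
    return m_t**3 / (12 * math.pi * lam**4) * bracket

# Example: only the beta_ut coupling switched on, Lambda = 1 TeV
# (values illustrative, not from the paper)
w = width_t_to_ug(m_t=175.0, v=246 / math.sqrt(2), lam=1000.0,
                  a_ut=0j, a_tu=0j, b_tu=0j, b_ut=1 + 0j)
```

Note the $\Lambda^{-4}$ scaling: doubling the new-physics scale reduces the width by a factor of sixteen.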
Notice how the top width~\eqref{eq:wid} depends on $\Lambda^{-4}$.
There are processes with a $\Lambda^{-2}$ dependence, namely the
interference terms between the anomalous operators and the SM
diagrams of single top quark production, via the exchange of a $W$
gauge boson, in processes like
$u\,\bar{d}\,\rightarrow\,t\,\bar{d}$. They were studied in
ref.~\cite{nos2} in detail, and we discovered that, due to a
strong CKM suppression, the contributions from the anomalous
vertices are extremely small.
As was discussed in refs.~\cite{nos1,nos2}, the operators that compose the
lagrangian~\eqref{eq:lf} are not completely independent. If one performs
integrations by parts and uses the fermionic equations of
motion~\cite{buch,grz}, one obtains the following relations
between them:
\begin{align}
{\cal O}^{\dagger}_{ut} &= {\cal O}_{tu}\;-\;\frac{i}{2} (\Gamma^{\dagger}_u\,
{\cal O}^{\dagger}_{u t \phi} \,+\, \Gamma_u \,{\cal O}_{t u \phi}) \nonumber \\
{\cal O}^{\dagger}_{ut} &= {\cal O}_{tu}\;-\;i\, g_s\, \bar{t}\, \gamma_{\mu}\,
\gamma_R\, \lambda^a\,u\, \sum_i (\bar{u}^i\, \gamma^{\mu}\, \gamma_R\,
\lambda_a u^i\,+\, \bar{d}^i\, \gamma^{\mu}\, \gamma_R\, \lambda_a\, d^i)
\;\;\; ,
\label{eq:rel}
\end{align}
where $\Gamma_u$ are the Yukawa couplings of the up quark and
$g_s$ the strong coupling constant. In the second equation
four-fermion terms appear, which means that they have to be taken
into account in these studies. Indeed, their role was of great
importance for the processes studied in ref.~\cite{nos2}. In the
current paper, however, they will have no bearing on our results.
The most interesting thing about eqs.~\eqref{eq:rel}, which is in
fact a direct consequence of working in a fully gauge invariant
manner, is that they tell us that there are two relations between
the several operators. This means that we are allowed to set two
of the couplings to zero. We have used this in ref.~\cite{nos2} to
simplify immensely the expressions obtained, by setting one of the
four-fermion couplings to zero, as well as making $\beta_{tu} =
\beta_{tc} = 0$. For consistency, we will make the same choice in
the current work, to allow for a direct comparison with the
results of~\cite{nos2}.
In refs.~\cite{nos1,nos2} we considered the contributions from the
anomalous flavor changing operators to single top production. In
particular, we calculated all processes with a top quark in the
final state, with a jet or isolated, stemming from either direct
top production or associated production with a gluon or a light
quark. We determined that the single top channel is an excellent
one for detection of new physics, as our calculations demonstrated
that one could obtain a significant increase in the cross section
of single top production via the anomalous couplings relative to the SM
predicted values. We now wish to study the impact that these same
operators may have on other channels of top production, namely
$t\,\bar{t}$ production and the associated production of a top and
a gauge or Higgs boson. These are all channels of great physical
interest. In the former case, the LHC is expected to be a
veritable top-anti-top factory, with an estimated production of
around eight million top quark pairs per year and per experiment.
With such high statistics, a deviation from the SM prediction has,
{\em a priori}, a good chance of being detected. In the latter
case, the final state presents a very clear signal for
experiments. As such it should be easy to isolate it from the LHC
backgrounds.

At this point we must emphasize one important aspect: we are {\em
not} considering the most general set of operators for associated
top plus gauge or Higgs boson production processes. As mentioned
before, there are contributions from the electroweak dimension six
operators that we could have considered as well. However, that is
not our goal: we have established that the operator set we have
chosen, which corresponds to a specific type of possible new
physics (strong flavor changing interactions), may have a sizeable
impact on the single top channel. We now wish to verify if the
same operators might have important contributions to other
interesting channels. This is an important verification, to ensure
the consistency of the operator set chosen. If it predicts
significant increases on several physical processes but only some
of those are observed, then we will have a powerful clue that the
operators we chose do not tell the whole story of the new top
physics at the LHC. If, however, the observations are according to
the predictions arising from these operators, that will constitute
good evidence that they parameterize well whatever new physics
lies beyond the SM.
\section{Cross sections for $g\, g \,\rightarrow\,t\,\bar{t}$ and $q\,\bar{q}\,
\rightarrow\,t\,\bar{t}$}
\label{sec:ttbar}
There are three Feynman diagrams contributing to the partonic
cross section of $t\,\bar{t}$ production, as is shown in
fig.~\eqref{fig:gg}. All of these diagrams contribute to the
process $p\, p \,\rightarrow\,t\,\bar{t}$ and interfere with the
(analogous) SM tree-level diagrams for the same processes. Notice
that, since each of these diagrams includes {\em two} anomalous
vertices, they will generate amplitudes of the order
$1/\Lambda^4$.
\begin{figure}[ht]
\epsfysize=4.75cm \centerline{\epsfbox{ttbar.eps}}
\caption{Feynman diagrams for $t \, \bar{t}$ production.}
\label{fig:gg}
\end{figure}
With the Feynman rules shown in fig.~\eqref{feynrul1} and the
corresponding SM Feynman rules, it is easy to obtain the
interference cross section for $g\, g \,\rightarrow\,t\,\bar{t}$,
given by
\begin{equation}
\frac{d\,\sigma(g\,g\rightarrow t\,\bar{t})}{dt}\;\; =\;\; -\,
\frac{g_s^2}{1536\,\pi \,\Lambda^4}\; \frac{F^1_{gg} \,
|\alpha_{ut}+\alpha_{tu}|^2 \, + \,F^2_{gg} \,
(|\beta_{ut}|^2+|\beta_{tu}|^2) \, + F^3_{gg} \, Im[\alpha_{ut} \,
\beta_{tu} -\alpha_{tu} \, \beta_{tu}^*] \, }{
s^3\,\left( m_t^2 - t \right) \,t\,\left( m_t^2 - u \right) \,u} \;\;\; , \label{eq:gg}
\end{equation}
where
\begin{align}
F^1_{gg}\,&=\,7\,m_t^{12}\,t - 23\,m_t^{10}\,t^2 +
16\,m_t^8\,t^3 + 7\,m_t^{12}\,u -
20\,m_t^{10}\,t\,u + 51\,m_t^8\,t^2\,u -73\,m_t^6\,t^3\,u+
\nonumber \vspace{0.3cm} \\
& \;\;\;\; 37\,m_t^4\,t^4\,u- 23\,m_t^{10}\,u^2 + 51\,m_t^8\,t\,u^2 -
32\,m_t^6\,t^2\,u^2 - 31\,m_t^4\,t^3\,u^2 +
35\,m_t^2\,t^4\,u^2 +
\nonumber \vspace{0.3cm} \\
& \;\;\;\; 2\,t^5\,u^2 +16\,m_t^8\,u^3-73\,m_t^6\,t\,u^3 -
31\,m_t^4\,t^2\,u^3 +
42\,m_t^2\,t^3\,u^3 - 16\,t^4\,u^3 + 37\,m_t^4\,t\,u^4 +
\nonumber \vspace{0.3cm} \\
& \;\;\;\; 35\,m_t^2\,t^2\,u^4 - 16\,t^3\,u^4 + 2\,t^2\,u^5 \,
\nonumber \vspace{0.8cm} \\
F^2_{gg} \, &=\,\, 16 \, v^2 \, t \, u
\left( 7\,m_t^6\,t - 15\,m_t^4\,t^2
+ 8\,m_t^2\,t^3 + 7\,m_t^6\,u - 26\,m_t^4\,t\,u +20\,m_t^2\,t^2\,u +
\right.
\nonumber \vspace{0.5cm} \\
& \;\;\;\;
\left. t^3\,u - 15\,m_t^4\,u^2 +
20\,m_t^2\,t\,u^2 - 16\,t^2\,u^2 + 8\,m_t^2\,u^3 + t\,u^3 \right) \, \;\;\;
\nonumber \vspace{0.8cm} \\
F^3_{gg} \, &=\,\, - \, 2 \, v \, m_t
\left( 7\,m_t^{10}\,t - 23\,m_t^8\,t^2 +
16\,m_t^6\,t^3 + 7\,m_t^{10}\,u - 52\,m_t^8\,t\,u +
145\,m_t^6\,t^2\,u-
\right.
\nonumber \vspace{0.5cm} \\
& \;\;\;\;
158\,m_t^4\,t^3\,u +
60\,m_t^2\,t^4\,u - 23\,m_t^8\,u^2 +145\,m_t^6\,t\,u^2 - 258\,m_t^4\,t^2\,u^2 +
136\,m_t^2\,t^3\,u^2 +
\nonumber \vspace{0.3cm} \\
& \;\;\;\;
\left. 4\,t^4\,u^2 + 16\,m_t^6\,u^3 -
158\,m_t^4\,t\,u^3 + 136\,m_t^2\,t^2\,u^3 - 64\,t^3\,u^3 +
60\,m_t^2\,t\,u^4 + 4\,t^2\,u^4 \right) \;\;\;
. \nonumber \vspace{0.1cm} \\
\end{align}
For the $q\, \bar{q} \,\rightarrow\,t\,\bar{t}$ partonic channel we obtain
\begin{align}
\vspace{0.1cm} \nonumber \\
\frac{d\,\sigma(q\,\bar{q}\rightarrow t\,\bar{t})}{dt} \, & =\; -\,
\frac{g_s^2}{108\,\pi \,\Lambda^4\, s^3}\; \left(F^1_{qq} \, |\alpha_{tu}|^2
\, + \,F^2_{qq} \, |\alpha_{ut}| ^2 \, + \, F^3_{qq} \,
Re[\alpha_{ut} \, \alpha_{tu}]\, \right.
\nonumber \vspace{2.5cm} \\
& \;\;\;\; \left.
+ \, F^4_{qq} \, (|\beta_{ut}|^2+|\beta_{tu}|^2) \, + \, F^5_{qq} \,
Im[\alpha_{ut} \, \beta_{tu}] \, + \, F^6_{qq} \, Im[\alpha_{tu}
\, \beta_{tu}^*] \right) \;\;\; ,
\end{align}
where
\begin{align}
F^1_{qq}\,&=\,(m_t^2-t) \, u^{2} \, ;
\nonumber \vspace{0.8cm} \\
F^2_{qq} \, &=\,\, - \, (m_t^6 + t \, m_t^4 - 4 \, u \, m_t^4 -
2 \, t^2 \, m_t^2 + u^2 \, m_t^2 - 6 \, t \, u \, m_t^2 + t \,
u^2)\, ;
\nonumber \vspace{0.8cm} \\
F^3_{qq} \, &=\,\, - \, (m_t^6 - t\, m_t^4 - 4\, u\, m_t^4 +
2\, t\, u\, m_t^2 - 2\, t\, u^2) \, ;
\nonumber \vspace{0.8cm} \\
F^4_{qq} \, &=\,\, 8 \, (3 u \, m_t^2 + s^2 - u^2) \, v^2 \, ;
\nonumber \vspace{0.8cm} \\
F^5_{qq} \, &=\,\, 2 \, m_t \, v\, (m_t^4 - 5\, t\, m_t^2 +
4 \, u \, m_t^2 + 4 \, t^2 + 12\, t\, u) \, ;
\nonumber \vspace{0.8cm} \\
F^6_{qq} \, &=\,\, 2 \, m_t \, v\, (m_t^4 - t\, m_t^2 -6 \, u\,
m_t^2 +\, 4 \, t\, u) \, .
\end{align}
Despite the rather long expressions, notice that the dependence on
the $\{ \alpha\,,\,\beta\}$ anomalous couplings is quite simple.
We have kept all couplings in these expressions, but recall that,
due to the freedom allowed by the equations of motion, we are
allowed to set $\beta_{tu} = 0$. Finally, there are identical
expressions for the partonic cross sections involving the charm
anomalous couplings.
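To integrate the above distributions over $t$ one needs the physical range of the Mandelstam variables: for production of a $t\bar{t}$ pair from massless partons, $s+t+u=2m_t^2$, with $t$ and $u$ fixed by the center-of-mass scattering angle. A minimal kinematics helper (our own sketch, not code from the paper) can be written as:

```python
import math

def ttbar_mandelstam(s, cos_theta, m_t=175.0):
    """Mandelstam t and u for massless partons -> t tbar, given the
    partonic invariant s and the c.m. scattering angle; requires
    s > 4 m_t^2 (above the t tbar threshold)."""
    beta = math.sqrt(1.0 - 4.0 * m_t**2 / s)   # top velocity in the c.m. frame
    t = m_t**2 - 0.5 * s * (1.0 - beta * cos_theta)
    u = m_t**2 - 0.5 * s * (1.0 + beta * cos_theta)
    return t, u

t, u = ttbar_mandelstam(s=640000.0, cos_theta=0.3)   # s = (800 GeV)^2
# The constraint s + t + u = 2 m_t^2 holds by construction.
```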
\section{Cross sections for associated $t \, \gamma$, $t \, Z$, $t \, W$ and
$t \, h$ production}
\label{sec:tgauge}
For the processes $q \, g \,\rightarrow\,t\,\gamma, \, Z$ there
are once more contributions from three diagrams. This process does
not occur in the SM at tree level. This is, thus, an order
$1/\Lambda^4$ process. In our previous papers, the top quark was
produced alongside a quark or a gluon, and therefore detected
through a final state of t+jets. Here, we have the production of
a top quark along with a gauge or Higgs boson, hopefully a much
``cleaner'' signal for experimental detection. However, we must
recall that the final states of these channels will also include
jets, stemming either from initial and final state gluonic
radiation, or from remnants of the proton-proton collisions.
\begin{figure}[ht]
\epsfysize=4.5cm \centerline{\epsfbox{gqZgammat.eps}}
\caption{Feynman diagrams for the processes of $t \,\gamma$ and $t\,Z$
production}
\label{fig:gq}
\end{figure}
Using the Feynman rules from figs.~\eqref{feynrul1}
and~\eqref{feynrul2} we may obtain the cross section for
associated top plus photon production, given by
\begin{equation}
\frac{d\,\sigma(g\,q\rightarrow t\,\gamma)}{dt}\;\; =\;\;
\frac{e^2}{18 \, m_t^2 \, s^2 \, t \, (t+u)^2}\; (m_t^6-t \,
m_t^4+ s^2 \, m_t^2+3 \, s \, t \, m_t^2- 2 \, s^2 \, t) \, u \,
\, \Gamma (t \rightarrow q \, g) \;\;\; , \label{eq:gamma}
\end{equation}
where $\Gamma (t \rightarrow q \, g)$ stands for the decay width
of a top quark into a light up quark and a gluon. This result is
remarkably simple, and quite elegant. Similar expressions had been
obtained in refs.~\cite{nos1,nos2} for single top production in
the direct, gluon-gluon and gluon-quark channels. In fact, we
verified that every time there was a gluon in the initial or final
states the differential cross section was always proportional to a
partial decay width of the top. It is interesting to see the same
thing happening when a gluon is replaced by a photon.
Eq.~\eqref{eq:gamma} then establishes a very powerful link between
the rare decays of the top quark and the cross section for
associated top plus photon production.
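Eq.~\eqref{eq:gamma} is straightforward to evaluate once $\Gamma(t\to q\,g)$ is known. A sketch (ours; $e^2=4\pi\alpha_{em}$ with an illustrative value of $\alpha_{em}$, and using $u=m_t^2-s-t$ for a massless photon):

```python
import math

ALPHA_EM = 1.0 / 128.0          # illustrative value at high scales

def dsigma_dt_t_gamma(s, t, gamma_tqg, m_t=175.0):
    """Differential cross section of eq. (gamma) for g q -> t gamma;
    gamma_tqg is the top width Gamma(t -> q g), which already
    carries the Lambda^-4 suppression."""
    u = m_t**2 - s - t          # photon and initial partons massless
    e2 = 4.0 * math.pi * ALPHA_EM
    num = (m_t**6 - t * m_t**4 + s**2 * m_t**2
           + 3.0 * s * t * m_t**2 - 2.0 * s**2 * t)
    den = 18.0 * m_t**2 * s**2 * t * (t + u)**2
    return e2 * num * u / den * gamma_tqg
```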
Let us now consider the associated production of a top quark and a
$Z^0$ gauge boson. The calculation is similar to the top plus
photon channel, modulo the obvious kinematic differences and the
different Feynman rules. We obtain, for the differential cross
section, the expression
\begin{equation}
\frac{d\,\sigma(g\,u\rightarrow t\,Z)}{dt}\;\; =\;\; \frac{e^2 \,
m_t^2}{1728\,\pi \,\Lambda^4\, S^2_{2w}}\; \frac{F^1_{tZ} \,
|\alpha_{ut}+\alpha^*_{tu}|^2 \, + \,F^2_{tZ} \,
Im[(\alpha_{ut}+\alpha^*_{tu}) \, \beta_{tu} ]\, + \, F^3_{tZ} \,
|\beta_{tu}|^2 + F^4_{tZ} |\beta_{ut}|^2 \, \, }{m_z^2 \, s^2 \, t
\, (t+u)^2} \;\;\; , \label{eq:Z}
\end{equation}
where
\begin{align}
F^1_{tZ}\,&=\,18\,m_t^2\,m_z^2\,s^2\,t +
48\,m_t^2\,m_z^2\,s^2\,S_w^2\,t + 9\,s^2\,t^3 + 32\,m_t^6\,m_z^2\,S_w^4\,u +
\nonumber \vspace{0.3cm} \\
& \;\;\;\;\;
32\,m_t^2\,m_z^2\,s^2\,S_w^4\,u - 18\,m_z^2\,s^2\,t\,u+ 48\,m_z^2\,s^2\,S_w^2\,t\,u -
\nonumber \vspace{0.3cm} \\
& \;\;\;\;\; 32\,m_t^4\,m_z^2\,S_w^4\,t\,u +
96\,m_t^2\,m_z^2\,s\,S_w^4\,t\,u -
64\,m_z^2\,s^2\,S_w^4\,t\,u + 9\,s^2\,t^2\,u \;\;\;\;
\nonumber \vspace{0.8cm} \\
F^2_{tZ} \, &=\,\, \frac{4}{m_t} \left(18\,m_t^2\,m_z^2\,s^2\,t +
48\,m_t^2\,m_z^2\,s^2\,S_w^2\,t + 9\,s^2\,t^3 +32\,m_t^6\,m_z^2\,S_w^4\,u +
\right.
\nonumber \vspace{0.5cm} \\
& \;\;\;\;\;
32\,m_t^2\,m_z^2\,s^2\,S_w^4\,u- 18\,m_z^2\,s^2\,t\,u + 48\,m_z^2\,s^2\,S_w^2\,t\,u -
\nonumber \vspace{0.5cm} \\
& \;\;\;\;
\left.
32\,m_t^4\,m_z^2\,S_w^4\,t\,u +
96\,m_t^2\,m_z^2\,s\,S_w^4\,t\,u -
64\,m_z^2\,s^2\,S_w^4\,t\,u + 9\,s^2\,t^2\,u \right) \;\;\;
\nonumber \vspace{0.8cm} \\
F^3_{tZ} \, &=\,\,\frac{16}{m_t^2} \left( 54\,m_t^6\,m_z^2\,s -
54\,m_t^4\,m_z^2\,s^2 -
48\,m_t^4\,m_z^2\,s^2\,S_w^2 +48\,m_t^2\,m_z^2\,s^3\,S_w^2 -
\right.
\nonumber \vspace{0.5cm} \\
& \;\;\;\;
54\,m_t^4\,m_z^2\,s\,t +54\,m_t^2\,m_z^2\,s^2\,t + 9\,s^2\,t^3 +18\,m_t^6\,m_z^2\,u - 54\,m_t^4\,m_z^2\,s\,u +
\nonumber \vspace{0.5cm} \\
& \;\;\;\;
18\,m_t^2\,m_z^2\,s^2\,u -
192\,m_t^4\,m_z^2\,s\,S_w^2\,u +192\,m_t^2\,m_z^2\,s^2\,S_w^2\,u -
48\,m_z^2\,s^3\,S_w^2\,u +
\nonumber \vspace{0.5cm} \\
& \;\;\;\;
32\,m_t^6\,m_z^2\,S_w^4\,u +
32\,m_t^2\,m_z^2\,s^2\,S_w^4\,u -18\,m_t^4\,m_z^2\,t\,u +
54\,m_t^2\,m_z^2\,s\,t\,u -
\nonumber \vspace{0.5cm} \\
& \;\;\;\;
18\,m_z^2\,s^2\,t\,u -
32\,m_t^4\,m_z^2\,S_w^4\,t\,u +
96\,m_t^2\,m_z^2\,s\,S_w^4\,t\,u -64\,m_z^2\,s^2\,S_w^4\,t\,u +
\nonumber \vspace{0.5cm} \\
& \;\;\;
\left.
9\,s^2\,t^2\,u -48\,m_t^4\,m_z^2\,S_w^2\,u^2 +
144\,m_t^2\,m_z^2\,s\,S_w^2\,u^2 -
48\,m_z^2\,s^2\,S_w^2\,u^2
\right) \;\;\;
\nonumber \vspace{0.8cm} \\
F^4_{tZ} \, &=\,\, \frac{16}{m_t^2} \left(18\,m_t^2\,m_z^2\,s^2\,t
+
48\,m_t^2\,m_z^2\,s^2\,S_w^2\,t + 9\,s^2\,t^3 +
\right.
\nonumber \vspace{0.5cm} \\
& \;\;\;\;\; 32\,m_t^6\,m_z^2\,S_w^4\,u +
32\,m_t^2\,m_z^2\,s^2\,S_w^4\,u -18\,m_z^2\,s^2\,t\,u + 48\,m_z^2\,s^2\,S_w^2\,t\,u -
\nonumber \vspace{0.5cm} \\
& \;\;\;\; \left.
32\,m_t^4\,m_z^2\,S_w^4\,t\,u +
96\,m_t^2\,m_z^2\,s\,S_w^4\,t\,u -
64\,m_z^2\,s^2\,S_w^4\,t\,u + 9\,s^2\,t^2\,u \right) \;\;\;
,
\end{align}
where we have used, for short, the convention $S_w = \sin(\theta_W)$ and
$S_{2w} = \sin(2\theta_W)$. The nice proportionality to the top decay width is
lost in this expression, due, no doubt, to the fact that the $Z^0$
boson, unlike the gluon or photon, is not massless.
We now turn to $t \, W$ production. Contrary to the $t \, Z$ and $t \,
\gamma$ processes, this one occurs at tree level in the SM. A discussion
about the problems related with $t \, W$ production and detection
can be found in~\cite{wt}.
\begin{figure}[ht]
\epsfysize=5cm \centerline{\epsfbox{gqWt.eps}} \caption{Feynman
diagrams for $t \, W^-$ production.}
\label{fig:wt}
\end{figure}
In fig.~\eqref{fig:wt} we show the SM Feynman diagrams contributing
to this process, along with the single anomalous diagram that also
contributes to it. There is only one diagram because there are no contributions
of the type of fig.~\eqref{feynrul2}. The reason is simple: it is the covariant
derivative terms that give rise to the ``four-legged'' diagram of
fig.~\eqref{feynrul2}, and that
derivative is acting on an $SU(2)$ gauge singlet. Therefore, only the
hypercharge $U(1)$ gauge field, $B_\mu$, contributes to the vertex. This means
that the operators ${\cal O}_{uG}$ do {\em not} give rise to any diagram of
the type of fig.~\eqref{feynrul2} for the $t \, W$ channels.
A simple calculation yields the SM tree level cross section, which reads
\begin{align}
\frac{d\,\sigma^{SM}(g\,d\rightarrow t\,W^-)}{dt}\; &
=\;\frac{e^2\,g_s^2 \, |V_{td}|^2 } {384\,
m_w^2\,\pi \,s^3\,S_w^2\,( m_t^2 - t)^2} \, ( m_t^2\,u\,
( - t\,u + m_t^2\,( 2\,t + u)) +
\nonumber \vspace{1cm} \\
& \;\;\;\;\;
2\,m_w^2\,( 2\,m_t^4\,u +
( s + u ) \,( 2\,s^2 + 2\,s\,u +
u\,( -2\,m_t^2 + u )))) \, \, .
\label{eq:wsm}
\end{align}
The interference terms between the anomalous diagram and the SM
ones (for an internal $q$ quark, with $q\, =\, u\,,\,c$) were
computed in ref.~\cite{nos1}, and are given by
\begin{equation}
\frac{d\,\sigma^{INT}(g\,d\rightarrow t\,W^-)}{dt}\; =\;\frac{- \,
e^2\,g_s \, |V_{td} \, V_{qd}| \, m_t\,v
\left(- m_t^2\,t\,u +
2\,m_w^2\,\left( s^2 + \left( m_t^2 + s \right) \,u \right)
\right)
}{48\,m_w^2\,\pi \,s^2\,S_w^2\,
\left( m_t^2 - t \right) \,t} \,\, \frac{Re[\beta_{qt}]}{\Lambda^2} \,\,\, .
\label{eq:wint}
\end{equation}
Finally, the new anomalous term, which corresponds to the squared
amplitude of the anomalous diagram in fig.~\eqref{fig:wt}, is
given by
\begin{equation}
\frac{d\,\sigma^{NEW}(g\,d\rightarrow t\,W^-)}{dt}\; =\;\frac{e^2
\, |V_{qd}|^2 \left( m_t^2 - t \right) \,
\left( s\,t + 2\,m_w^2\,u \right) \,v^2}{24\,m_w^2\,\pi
\,s^2
S_w^2\,t} \,\, \frac{|\beta_{qt}|^2}{\Lambda^4} \, \, \, .
\end{equation}
Notice the dependence, in eqs.~\eqref{eq:wsm} and~\eqref{eq:wint}, on the
CKM matrix elements. We concluded, in~\cite{nos1}, that they were of vital
importance for the final results.
Finally, we consider the production of a top quark associated with
a SM Higgs boson. Like the $t \, Z$ and $t \, \gamma$ processes,
this process does not occur at tree level in the SM. The Feynman
diagrams are the same as for the top plus $\gamma$ or $Z$
processes. There are only two differences. First, we have to use
the four point Feynman rule in fig.~\eqref{feynrul3} instead of
the one in fig.~\eqref{feynrul2}. The ``four-legged'' Feynman rule
involves now only the $\beta$ couplings instead of the $\alpha$
ones, which is natural once one recalls that the ${\cal O}_{uG}$
operators do not involve the Higgs boson in any way, whereas the
${\cal O}_{uG\phi}$ ones do. The second difference is the fact
that the diagram for top plus Higgs production analogous to the
second one in figure~\eqref{fig:gq} does {\em not} appear in these
calculations. The reason is very simple: the Higgs-quark vertex
is proportional to the mass of the quark in question and we have
taken, as explained earlier, $m_u\,=\,m_c\,=\,0$.
The cross section for $t \, h$ production can then be written as
\begin{equation}
\frac{d\,\sigma(g\,u\rightarrow t\,h)}{dt}\;\; =\;\;
\left\{F^1_{th} \, |\alpha_{ut}+\alpha^*_{tu}|^2 \, + \,F^2_{th}
\, Im[(\alpha_{ut}+\alpha^*_{tu}) \, \beta_{tu} ]\, + \, F^3_{th}
\, (|\beta_{tu}|^2 + |\beta_{ut}|^2)\right\}\,\frac{1}{\Lambda^4}
\;\;\; , \label{eq:h}
\end{equation}
where
\begin{align}
F^1_{th}\,&=\frac{e^2\,m_t^2\,s^2\,
\left( -\left( m_h^2\,m_t^2 \right) - s\,t +
m_t^2\,\left( 4\,s + t \right) \right) }{48\,m_w^2\,
{\left( m_t^2 - s \right) }^2\,S_w^2} \;\;\;\;
\nonumber \vspace{1.0cm} \\
F^2_{th} \, &=\,\frac{-\left( e\,m_t\,s^2\,
\left( -\left( {\sqrt{2}}\,m_w\,\left( m_t^2 - s \right) \,
S_w\,\left( 2\,m_t^2 - t \right) \right) +
e\,m_t^2\,
\left( -m_h^2 + 2\,\left( m_t^2 + s \right) \right) \,v \right)
\right) }{12\,m_w^2\,{\left( m_t^2 - s \right) }^2\,
S_w^2} \;\;\;
\nonumber \vspace{1.0cm} \\
F^3_{th} \, &=\,\, \frac{-1}{3\,m_w^2\,
{\left( m_t^2 - s \right) }^2\,S_w^2} \left( s\,\left( -m_t^2 + s \right)
\,t\,
\left( 2\,m_w^2\,\left( -m_t^2 + s \right) \,S_w^2 +
2\,{\sqrt{2}}\,e\,m_t^2\,m_w\,S_w\,v -
e^2\,m_t^2\,v^2 \right) \right) +
\;
\nonumber \vspace{1.0cm} \\
& \; \;\;\;\; m_t^2\,s\,\left( 2\,m_w^2\,{\left( m_t^2 - s \right)
}^2\, S_w^2 - 2\,{\sqrt{2}}\,e\,m_w\,
\left( m_t^4 - s^2 \right) \,S_w\,v +
e^2\,\left( -\left( m_h^2\,s \right) +
{\left( m_t^2 + s \right) }^2 \right) \,v^2 \right) \; .
\end{align}
\section{Results for the integrated cross sections}
\label{sec:num}
\subsection{$t \, \bar{t}$ production}
The cross section for the gluon-gluon channel is identical for the
processes with anomalous couplings of the $u$ or $c$ quarks. For
the quark--antiquark cross section via a $c$ quark, the numerical
results are extremely small, and we do not present them. We have
used throughout this work the CTEQ6 parton density functions
(pdfs)~\cite{cteq6} and included a cut of 15 GeV on the transverse
momentum of the partons in the final state. This will allow a
direct comparison with the results of reference~\cite{nos2}, where
a similar cut was considered to help remove collinear and soft
singularities in the gluon-quark processes. Finally, for this
particular process, we have taken the factorization scale $\mu_F$
equal to twice the mass of the top quark. As was mentioned in
refs.~\cite{nos1,nos2}, this will produce smaller values of the
cross sections than we would obtain if we had, for instance, set
$\mu_F$ equal to the partonic center-of-mass energy. With these
specifications, the results we obtain are
\begin{align}
\sigma_{p\,p\,\rightarrow\,g\,g\,\rightarrow\,t\bar{t}} &=\; \left\{-0.4 \,
|\alpha_{ut}+\alpha_{tu}|^2 \, + \, 7.6 \,
(|\beta_{ut}|^2+|\beta_{tu}|^2) \, + 9.1 \, Im[\alpha_{ut} \,
\beta_{tu} -\alpha_{tu} \, \beta_{tu}^*] \,
\right\}\,\frac{1}{\Lambda^4}
\;\mbox{pb} \vspace{0.3cm}\nonumber \\
\sigma_{p\,p\,\rightarrow\,q\,\bar{q}\,\rightarrow\,t \, \bar{t}} &=\;
\left\{\, -0.2 \, |\alpha_{tu}|^2 \, - \, 0.4 \, |\alpha_{ut}| ^2 \, + \, 0.5 \,
Re[\alpha_{ut} \, \alpha_{tu}]\, - \, 0.5 \,
(|\beta_{ut}|^2+|\beta_{tu}|^2) \, \, \right.
\nonumber \vspace{0.5cm} \\
&
\left. \;\;\;\; \;\; - \, 0.6 \, Im[\alpha_{ut} \, \beta_{tu}]
\, - \, 0.1 \, Im[\alpha_{tu} \, \beta_{tu}^*]
\right\}\,
\frac{1}{\Lambda^4}\;\mbox{pb} \;\;\; .
\end{align}
So the total results for the proton-proton cross sections are
\begin{align}
\sigma^{(u)}_{p\,p\,\rightarrow\,t \,
\bar{t}} &=\; \left\{\, -0.6 \, |\alpha_{tu}|^2 \, - \, 0.8 \,
|\alpha_{ut}| ^2 \, - \, 0.3 \, Re[\alpha_{ut} \, \alpha_{tu}]\, +
\, 7.1 \, (|\beta_{ut}|^2+|\beta_{tu}|^2) \, \, \right.
\nonumber \vspace{0.5cm} \\
&
\left. \;\;\;\; \;\; + \, 8.5 \, Im[\alpha_{ut} \, \beta_{tu}]
\, + \, 9.0 \, Im[\alpha_{tu} \, \beta_{tu}^*]
\right\}\,
\frac{1}{\Lambda^4}
\;\mbox{pb} \vspace{0.3cm}\nonumber \\
\sigma^{(c)}_{p\,p\,\rightarrow\,t\bar{t}} &=\; \left\{-0.4 \,
|\alpha_{ct}+\alpha_{tc}|^2 \, + \, 7.6 \,
(|\beta_{ct}|^2+|\beta_{tc}|^2) \, + 9.1 \, Im[\alpha_{ct} \,
\beta_{tc} -\alpha_{tc} \, \beta_{tc}^*] \,
\right\}\,\frac{1}{\Lambda^4} \;\mbox{pb} \;\;\; .
\end{align}
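As an example of how these fitted coefficients can be used, the sketch below (ours) evaluates $\sigma^{(u)}_{p\,p\,\rightarrow\,t\bar{t}}$ at a benchmark point with $\beta_{tu}=0$, as allowed by the equations of motion; the scale $\Lambda$ is assumed to be expressed in the (TeV-scale) units implicit in the quoted coefficients:

```python
def sigma_ttbar_u(a_tu, a_ut, b_tu, b_ut, lam):
    """Total anomalous p p -> t tbar cross section (u-quark couplings),
    in pb, from the coefficients quoted in the text; lam is the scale
    Lambda in the units assumed by those coefficients."""
    val = (-0.6 * abs(a_tu)**2 - 0.8 * abs(a_ut)**2
           - 0.3 * (a_ut * a_tu).real
           + 7.1 * (abs(b_ut)**2 + abs(b_tu)**2)
           + 8.5 * (a_ut * b_tu).imag
           + 9.0 * (a_tu * b_tu.conjugate()).imag)
    return val / lam**4

# Benchmark (illustrative): alpha_tu = alpha_ut = beta_ut = 1,
# beta_tu = 0, Lambda = 1 in the units above  ->  roughly 5.4 pb
sigma = sigma_ttbar_u(1 + 0j, 1 + 0j, 0j, 1 + 0j, 1.0)
```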
\subsection{$t \, \gamma$ and $t \, Z$ production}
The results for top plus $\gamma$ production are particularly
simple, as the cross sections are proportional to the top decay
width to an up-type quark plus a gluon. The pdf suppression of the
$c$ quarks, however, makes the corresponding contributions to the
cross section extremely small, which is why we only present the
$u$ quark terms. For this channel, we chose $\mu_F \,=\, m_t$. The
proton-proton cross sections for top plus gamma production are
then given by
\begin{align}
\sigma^{(u)}_{p\,p\,\rightarrow\,t \, \gamma} \;\; &=\;\; 228 \,
\Gamma (t \rightarrow u \, g) \;|V_{tb}|^2 \;\mbox{pb}\; ;
\vspace{0.5cm}\nonumber \\
\sigma^{(u)}_{p\,p\,\rightarrow\,\bar{t} \, \gamma} \;\; &=\;\; 32.6
\, \Gamma (t \rightarrow u \, g) \; |V_{tb}|^2 \;\mbox{pb}\; .
\label{eq:gamma1}
\end{align}
We have also presented the cross section for anti-top plus gamma
production. To obtain this quantity we simply used the
differential cross section for the $t\,+\,\gamma$ channel,
eq.~\eqref{eq:gamma}, since there is no difference, in terms of
effective operators, between both processes. The different numbers
in eqs.~\eqref{eq:gamma1}, then, arise solely from different pdf
contributions (namely the $u$ and $\bar{u}$ quarks).
For the $t\,Z$ processes the expressions we obtain, after integrating on the
pdfs (with $\mu_F\,=\,m_t\,+\,m_Z$), are
\begin{align}
\sigma^{(u)}_{p\,p\,\rightarrow\,t \, Z} \;\; &=\;\; \left\{ \, 4.0
\, |\alpha_{ut}+\alpha^*_{tu}|^2 \, + \, 32.1 \,
Im[(\alpha_{ut}+\alpha^*_{tu}) \, \beta_{tu} ]\, + \, 63.8 \,
|\beta_{tu}|^2 + 65.3 |\beta_{ut}|^2 \right\} \,
\frac{1}{\Lambda^4} \; \mbox{pb} ;
\vspace{0.5cm}\nonumber \\
\sigma^{(u)}_{p\,p\,\rightarrow\, \bar{t} \, Z} \;\; &=\;\; \left\{
\, 0.4 \, |\alpha_{ut}+\alpha^*_{tu}|^2 \, + \, 3.4 \,
Im[(\alpha_{ut}+\alpha^*_{tu}) \, \beta_{tu} ]\, + \, 6.7 \,
|\beta_{tu}|^2 + 7.0 |\beta_{ut}|^2 \right\} \,
\frac{1}{\Lambda^4} \; \mbox{pb} ;
\vspace{0.5cm}\nonumber \\
\sigma^{(c)}_{p\,p\,\rightarrow\,t \, Z} \;\; &=
\sigma^{(c)}_{p\,p\,\rightarrow\, \bar{t} \, Z}\;\; = \left\{ \,
0.2\, |\alpha_{ct}+\alpha^*_{tc}|^2 \, + \, 1.6 \,
Im[(\alpha_{ct}+\alpha^*_{tc}) \, \beta_{tc} ]\, + \, 3.2 \,
|\beta_{tc}|^2 + 3.4 |\beta_{ct}|^2 \right\} \,
\frac{1}{\Lambda^4} \; \mbox{pb}. \label{eq:Z1}
\end{align}
\subsection{$t \, W$ production}
With the SM diagrams of figure~\ref{fig:wt}, we obtain the
expected tree-level result for $\sigma_{p \, p \rightarrow
t\,W^-}$, which is about 30 pb. Unlike the previous cases of
production of a top quark alongside a gauge boson, we now
have interference terms between the SM diagrams and the anomalous
ones, which are of order $\Lambda^{-2}$. We obtained these
interference cross sections in ref.~\cite{nos1} and present them
here again. For the $u$ quark anomalous couplings we have
($\mu_F\,=\,m_t\,+\,m_W$)
\begin{align}
\sigma^{INT}(p\,p \rightarrow t\,W^-)\; &=\; 0.031 \;
Re[\beta_{ut}]\; \frac{1}{\Lambda^2} \;\; \mbox{pb} ;
\vspace{0.5cm}\nonumber \\
\sigma^{INT}(p\,p \rightarrow \bar{t} \,W^+)\; &=\; 0.022 \;
Re[\beta_{ut}]\; \frac{1}{\Lambda^2} \; \;\mbox{pb} ,
\end{align}
whereas for the $c$ quark couplings the interference cross sections are
\begin{align}
\sigma^{INT}(p\,p \rightarrow t\,W^-)\; &=\; 0.065\;
Re[\beta_{ct}] \; \frac{1}{\Lambda^2} \; \; \mbox{pb} ;
\vspace{0.5cm}\nonumber \\
\sigma^{INT}(p\,p \rightarrow \bar{t} \,W^+)\; &=\; 0.063 \;
Re[\beta_{ct}] \; \frac{1}{\Lambda^2} \; \mbox{pb} .
\end{align}
These cross sections have extremely small coefficients, owing to
a double CKM cancellation, as we have discussed in
ref.~\cite{nos1}. This cancellation makes the contribution to the
cross section arising from the square of the anomalous diagram the
largest one, as long as the scale of new physics is not too large
(namely, the interference terms for the $u$ coupling are negligible
as long as $\Lambda/\sqrt{|\beta_{ut}|} < 45$ GeV).
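The quoted crossover scale follows from equating the two contributions; the short check below (ours, assuming a real, positive $\beta_{ut}$) reproduces it:

```python
# Sanity check (ours) of the crossover quoted above: compare the interference
# term 0.031 Re[beta_ut]/Lambda^2 pb with the quadratic term
# 63.4 |beta_ut|^2/Lambda^4 pb, for real positive beta_ut.
import math

sigma_int_coeff = 0.031  # pb, coefficient of Re[beta_ut]/Lambda^2
sigma_new_coeff = 63.4   # pb, coefficient of |beta_ut|^2/Lambda^4

# The two terms are equal when (Lambda/sqrt(beta_ut))^2 = 63.4/0.031,
# so the quadratic term dominates below this scale:
crossover = math.sqrt(sigma_new_coeff / sigma_int_coeff)  # in GeV
```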
The new contributions to $t\,W$ production are then given by, for the $u$
couplings,
\begin{align}
\sigma^{NEW}(p\,p \rightarrow t\,W^-)\; &=\; 63.4 \,
|\beta_{ut}|^2 \, \frac{1}{\Lambda^4} \; \mbox{pb}\; ;
\vspace{0.5cm}\nonumber \\
\sigma^{NEW}(p\,p \rightarrow \bar{t} \,W^+)\; &=\; 18.0 \,
|\beta_{ut}|^2 \, \frac{1}{\Lambda^4} \; \mbox{pb}\; ,
\end{align}
and, for the $c$ quark couplings,
\begin{align}
\sigma^{NEW}(p\,p \rightarrow t\,W^-)\; &=\; 14.2 \,
|\beta_{ct}|^2 \, \frac{1}{\Lambda^4} \; \mbox{pb}\; ;
\vspace{0.5cm}\nonumber \\
\sigma^{NEW}(p\,p \rightarrow \bar{t} \,W^+)\; &=\; 11.9 \,
|\beta_{ct}|^2 \, \frac{1}{\Lambda^4} \; \mbox{pb}\; .
\end{align}
It is interesting to note that, because the SM process occurs
mostly through a $g \, b$ initial state, and the pdfs for a $b$
quark and a $\bar{b}$ quark are essentially identical, there is
almost no difference between $t$ and $\bar{t}$ production. That is,
$\sigma^{SM}(t\,W^-)\, - \, \sigma^{SM}(\bar{t} \,W^+)\; \approx\;
0$. However, the interference terms and the new ones receive
contributions from all quarks, leading to a difference
\begin{align}
&\sigma^{INT}(t\,W^-)\, - \, \sigma^{INT}(\bar{t} \,W^+)\;=\; 0.09
Re[\beta_{ut}] \; \frac{1}{\Lambda^2} \; \mbox{pb};
\vspace{0.5cm}\nonumber \\
&\sigma^{NEW}(t\,W^-)\, - \, \sigma^{NEW}(\bar{t} \,W^+)\;=\; 45.4
|\beta_{ut}|^2 \; \frac{1}{\Lambda^4} \; \mbox{pb}.
\end{align}
Therefore, this asymmetry could be a clear sign of new physics.
Moreover, it depends on only one of the anomalous couplings and, if no
asymmetry of this kind is observed, a stringent bound could be set on
$\beta_{ut}$.
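As an illustration of how such a bound could be extracted, the sketch below (ours; the experimental sensitivity is an assumed placeholder value) inverts the asymmetry formula for a real, positive $\beta_{ut}$:

```python
# Hypothetical inversion (ours) of the t W asymmetry above: given an assumed
# upper limit A_max (pb) on sigma(t W-) - sigma(tbar W+), find the largest
# x = beta_ut/Lambda^2 compatible with it, for real positive beta_ut.
import math

def beta_over_L2_bound(A_max):
    # Positive root of 45.4 x^2 + 0.09 x - A_max = 0.
    a, b, c = 45.4, 0.09, -A_max
    return (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

bound = beta_over_L2_bound(0.1)  # assumed 0.1 pb sensitivity
```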
\subsection{$t \, h$ production}
Finally, we consider the numerical results for associated top plus
Higgs boson production. The cross sections depend, of course, on
the unknown value of the Higgs mass. As we will observe, though,
that dependence is not a strong one. We will consider two values
for the Higgs mass, $m_h\,=\,120$ GeV (the preferred value from
the current experimental bounds, and a typical value of a
supersymmetric Higgs mass) and $m_h\,=\,300$ GeV. Once again, the
results we obtained for production of $t\,h$ via the $c$ quark are
too small, and we do not show them. Likewise, the smallness of the
anti-up and anti-charm quark pdfs heavily suppresses the
production of an anti-top quark together with a Higgs boson. We are left
with $t\,h$ production via the $u$ quark, which, for $m_h\,=\,120$
GeV, reads
\begin{equation}
\sigma^{(u)}_{p\,p\,\rightarrow\,t \, h}\;\; =\;\; \left\{5.9 \,
|\alpha_{ut}+\alpha^*_{tu}|^2 \, + \,23.6 \,
Im[(\alpha_{ut}+\alpha^*_{tu}) \, \beta_{tu} ]\, + \, 95.2 \,
(|\beta_{tu}|^2 + |\beta_{ut}|^2)\right\}\,\frac{1}{\Lambda^4} \;\mbox{pb}\;\;\; ,
\label{eq:h120}
\end{equation}
and, for $m_h\,=\,300$ GeV, we have
\begin{equation}
\sigma^{(u)}_{p\,p\,\rightarrow\,t \, h}\;\; =\;\; \left\{ 3.2
\, |\alpha_{ut}+\alpha^*_{tu}|^2 \, + \,25.8 \,
Im[(\alpha_{ut}+\alpha^*_{tu}) \, \beta_{tu} ]\, + \, 52.1\,
(|\beta_{tu}|^2 + |\beta_{ut}|^2)\, \right\}\frac{1}{\Lambda^4} \;\mbox{pb}
\;\;\; . \label{eq:h300}
\end{equation}
We observe some variation in the coefficients of the $\alpha$ and
$\beta$ terms, the cross section for the larger Higgs mass
being, naturally, smaller. However, that variation is not
dramatic, which is perhaps due to the extremely high
center-of-mass energy available in the proton-proton collisions at
the LHC.
\section{Conclusions}
\label{sec:conc}
In this section we perform a joint analysis of the results
obtained so far in this paper and those from our previous
works~\cite{nos1,nos2}. We have calculated all possible one and
two body decays, at the partonic level, originating from a set of
strong flavor changing operators satisfying our predefined
criteria. We now wish to see whether there is any correlation
between the top quark production cross sections we computed. In
what follows we have used the freedom afforded to us by the
equations of motion to set $\beta_{tu}\,=\,\beta_{tc}\, =\,0$, to
simplify our calculations.
To investigate the dependence of the cross sections on the values
of the anomalous couplings, we generated random values for these,
and plotted the cross sections against the branching ratios of the
top quark for the decays $t \rightarrow g\,u$ and $t \rightarrow
g\,c$. Our rationale for doing this is a simple one: as was
discussed in refs.~\cite{chro,juan}, the top quark branching
ratios for these decays may vary by as much as eight orders of
magnitude, from $\sim\,10^{-12}$ in the SM to $\sim\,10^{-4}$ for
some supersymmetric models. This quantity, then, is a good measure
of whether any physics beyond the Standard Model exists.
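Schematically, the scan described above works as in the sketch below (entirely our own illustration: `br_t_gu` is a placeholder keeping only the $|\beta|^2/\Lambda^4$ scaling of the partial width, with an arbitrary prefactor, while `sigma_tW` uses the $63.4\,|\beta_{ut}|^2/\Lambda^4$~pb result quoted earlier):

```python
# Schematic random scan (ours). br_t_gu is a placeholder with the correct
# |beta|^2/Lambda^4 scaling only; its prefactor is NOT the paper's formula.
import random

Lam = 1.0      # new-physics scale, arbitrary units (assumption)
GAMMA_T = 1.5  # approximate total top width, GeV

def br_t_gu(beta_ut):
    return beta_ut ** 2 / Lam ** 4 / GAMMA_T  # placeholder scaling only

def sigma_tW(beta_ut):
    return 63.4 * beta_ut ** 2 / Lam ** 4     # pb, from the text

random.seed(1)
points = [(br_t_gu(b), sigma_tW(b))
          for b in (10.0 ** random.uniform(-6.0, -2.0) for _ in range(1000))]
```

With a single coupling varied, both quantities scale as $|\beta_{ut}|^2$ and the points collapse onto a line; the spread seen in the actual figures arises from varying several couplings independently.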
\begin{figure}[ht]
\epsfysize=9cm \centerline{\epsfbox{topWu.eps}} \caption{Cross
sections for the processes $p\,p \rightarrow t \,+\, jet$
(crosses) and $p\,p \rightarrow t \, + \, W$ (stars) via an $u$
quark, as a function of the branching ratio $BR(t \rightarrow g \,
u)$. } \label{fig:sig_t_u}
\end{figure}
In fig.~\ref{fig:sig_t_u} we show the plot of the cross sections
for the processes $p \, p \rightarrow t\, + \, jet$ and $p \,p
\rightarrow t\, +\, W$ via a $u$ quark versus the branching ratio
$BR(t \rightarrow g \, u)$. This plot was obtained by varying the
constants $\alpha$ and $\beta$ in a random way. Each combination
of $\alpha$ and $\beta$ originates a given branching ratio and a
particular value for each cross section. Obviously, another set of
points may generate the same value for the branching ratio but a
different value for the cross section, which justifies the
distribution of values of $\sigma(p\,p \rightarrow t\,+\,jet)$ and
$\sigma(p\,p \rightarrow t\,+\,W)$. We chose values of $\alpha$
and $\beta$ for which the branching ratio varies between its SM
value and the maximum theoretical supersymmetric value it may
assume. In fig.~\ref{fig:sig_t_c} we show a similar plot, but
for top plus jet and top plus a $W$ boson production via a $c$
quark.
\begin{figure}[ht]
\epsfysize=9cm \centerline{\epsfbox{topWc.eps}} \caption{$p \, p
\rightarrow t \, + \, jet$ (+) and $p \, p \rightarrow t \, + \,
W$ (*) via a $c$ quark as a function of the branching ratio $BR(t
\rightarrow g \, c)$ .} \label{fig:sig_t_c}
\end{figure}
It is obvious from both fig.~\ref{fig:sig_t_u} and
fig.~\ref{fig:sig_t_c} that if the branching ratio is close to its
SM value there is no chance to observe new strong flavor changing
physics at the LHC. However, as we approach the larger values of
$10^{-4}$, the cross section for single top becomes visible and
the $W \, t$ cross section approaches 0.1 pb. Notice that the
$t\,W$ cross section is proportional to only one of the couplings,
which makes it a very attractive observable: it may allow us to
impose constraints on a single anomalous coupling.
It should be noted that single top production depends also on the
contributions of the four fermion operators. Hence, even if the
branching ratios $BR(t \rightarrow g \, u (c))$ are very small,
there is still the possibility of having a large single top cross
section with origin in the four fermion couplings. In figs.~\ref{fig:sig_t_u}
and~\ref{fig:sig_t_c} we did not consider this possibility, setting the
four-fermion couplings to zero. For a discussion of the four-fermion
couplings, see~\cite{nos2}.
In fig.~\ref{fig:Zgamma_u} we plot the cross sections for $p \, p \rightarrow
t \, + \, Z$ and $p \, p \rightarrow t \, + \,\gamma$ via a $u$ quark, versus
the branching ratio $BR(t \rightarrow g \, u)$. The
equivalent plot with an internal $c$ quark is similar, but the
values for the cross section are much smaller. In this plot we can see
that both cross sections are very small in the range of $\{\alpha,\beta\}$
considered. These results imply that their contribution will hardly be seen at
the LHC, unless the values of the branching ratio are unusually large.
\begin{figure}[ht]
\epsfysize=9cm \centerline{\epsfbox{Zgamma.ps}} \caption{Cross
sections for the processes $p \, p \rightarrow t \, + \, Z$
(upper line) and $p \, p \rightarrow t \, + \,\gamma$ (lower line)
via a $u$ quark, as a function of the branching ratio $BR(t
\rightarrow g \, u)$ .} \label{fig:Zgamma_u}
\end{figure}
The same, in fact, could be said for $p \, p \rightarrow t \, + \,
h$. In fig.~\ref{fig:top_higgs} we present a plot for this cross
section, again as a function of the branching ratio of $t
\rightarrow g \, u$, for two values of the Higgs mass. We readily
see that, even for the smallest allowed SM Higgs mass, the values
are very small. The same holds true for the process involving the
anomalous couplings of the $c$ quark.
\begin{figure}[ht]
\epsfysize=9cm \centerline{\epsfbox{top_higgs.ps}} \caption{Cross
section of the process $p \, p \rightarrow t \, + \, h$ via a $u$
quark versus the branching ratio $BR(t \rightarrow g \, u)$ for
$m_h\,=\,120$ GeV and $m_h\,=\,300$ GeV.} \label{fig:top_higgs}
\end{figure}
The smallness of the effects of these operators in the several cross sections
holds true, as well, for the top--anti-top channel. In this case, even for
a branching ratio $BR(t \rightarrow g \, u) \,\simeq\,10^{-4}$, the
contributions to the cross section $\sigma(p \, p \rightarrow t \,\bar{t})$
do not exceed, in absolute value, one picobarn. They may be positive or
negative, but always extremely small.
In conclusion, the effective operators we have considered in this
paper and references~\cite{nos1,nos2} are extremely constrained in
their impact on the several channels of top quark production.
Namely, figs.~\ref{fig:sig_t_u} through~\ref{fig:top_higgs}
illustrate that, with the exception of the cross section for
production of a single top plus a jet, the other channels are
expected to have anomalous contributions which are probably too
small to be observed at the LHC. Thus, if there are indeed strong
flavor changing neutral current effects on the decays of the top
quark, the results of this paper show that their impact will be
restricted to a single channel, single top plus jet production. It
is entirely possible, according to our results, to have an excess
in the cross section $\sigma(p \, p \rightarrow t \, +
\,\mbox{jet})$ arising from new physics described by the operators
we have considered here, at the same time obtaining results for
the production of a top quark alongside a gauge and Higgs boson,
or for $t\bar{t}$ production, which are entirely in agreement with
the SM predictions. This reinforces the conclusion of
reference~\cite{nos2}: that the cross section for single top plus
jet production is an excellent probe for the existence of new
physics beyond that of the SM. It is a channel extremely sensitive
to the presence of that new physics, and can exhibit a significant
excess in its cross section while many other channels involving
the top quark remain unchanged. Nevertheless, we are encouraged by
the fact that it may still be possible to use some of these
unchanged channels, such as top plus $W$ production, to constrain
the $\beta$ parameters, through the study of asymmetries such as
$\sigma(p \, p \rightarrow t\,W^-)\, - \, \sigma(p \, p
\rightarrow \bar{t} \,W^+)$.
\vspace{0.25cm} {\bf Acknowledgments:} Our thanks to our
colleagues from LIP for valuable discussions. Our further thanks
to Ant\'onio Onofre for a careful reading of the manuscript. This
work is supported by Funda\c{c}\~ao para a Ci\^encia e Tecnologia
under contract POCI/FIS/59741/2004. P.M.F. is supported by FCT
under contract SFRH/BPD/5575/2001.
\newpage
\section{Introduction}
Since the discovery of the Quantum Hall Effect (QHE) \cite{DasSarma}, the condensed matter community has devoted great effort to finding other topological states of matter, in which fundamental physical properties are insensitive to smooth changes in material parameters and can be modified only through quantum phase transitions. In recent years, a new class of these peculiar systems has been experimentally observed: the topological insulators \cite{Hasan10, Qi11}. Their main characteristics are the presence of a gap in the bulk, analogous to that of ordinary band insulators, and gapless edge states protected by time-reversal symmetry. In two spatial dimensions they are the first realization of the Quantum Spin Hall Effect (QSHE), theoretically predicted in graphene with spin-orbit interaction \cite{Kane05a, Kane05b}, in strained semiconductors \cite{Bernevig06a} and in Mercury-Telluride quantum wells \cite{Bernevig06b, Konig07}. The edge states of the QSHE are helical \cite{Wu06}, namely their electrons have spin direction and momentum locked to each other. In the presence of intra-edge interactions they can be described in terms of a helical Luttinger liquid \cite{Wu06}. The experimental measurement of non-local transport in multi-terminal setups, in agreement with the predictions of the Landauer-Buttiker theory \cite{Buttiker09}, represented an important test of the existence of helical edge states \cite{Roth09}.\\
The fast technical developments in this field will soon make it possible to realize interesting experimental geometries, like the Quantum Point Contact (QPC) \cite{Chang03, Teo09, Czapkiewicz12}, which has already proved extremely useful for extracting information on the edge properties in the fractional QHE \cite{DePicciotto97, Chung03, Ferraro08, Ferraro10a, Ferraro10b, Carrega11}. Various theoretical proposals have investigated this geometry, focusing on both the two-terminal \cite{Strom09} and four-terminal \cite{Hou09, Liu11, Schmidt11} setups. Possible interference experiments \cite{Dolcini11, Virtanen11}, as well as quantum pumps \cite{Citro11}, involving two point contacts have also been considered.\\
The possibility offered by Mercury-Telluride quantum wells to realize a QPC by means of electrostatic gates or, more realistically, by etching the sample in the desired shape makes it possible to achieve great control over the geometry and allows one to study the evolution of the transport properties as a function of the geometrical parameters of the constriction. An analysis of the effects of extended contacts \cite{Aranzana05, Overbosch09, Chevallier10, Wang10} on the transport properties has already been carried out for the QHE, showing deviations from the standard power-law behavior of the current as a function of the voltage at zero temperature.
Finite temperature effects were also considered for composite fractional QH systems \cite{Overbosch09}, demonstrating that extended contacts may provide information about the propagation velocity of the neutral mode along the edge, provided that it is very small with respect to that of the charged mode.\\
In this paper we propose to investigate the extended contact geometry for the helical edge states of the QSHE, properly taking into account the role played by interactions. We will evaluate the backscattering current as a function of voltage and temperature, and show that all the deviations with respect to the point-like case can be encoded in a modulating function. We will demonstrate that, at low enough temperatures, a peak appears in the differential conductance, which provides evidence of the helical nature of the edge states and gives information about the propagation velocity of the edge modes. At low energies the backscattering current and the linear conductance are described by the same power-law behaviors predicted for the QPC geometry. Even more interestingly, power laws are recovered also at higher energies, but with different exponents.\\
The paper is organized as follows. In Sec. \ref{model} we recall the main results of the helical Luttinger liquid description of the edge states of a QSH system. In Sec. \ref{extended} we analyze the extended contact geometry, introducing the modulating function both in the non-interacting and in the interacting case. Sec. \ref{results} contains the main results on transport properties. Sec. \ref{conclusion} is devoted to the conclusions.
\section{Model} \label{model}
We consider a QSH insulator with one Kramers doublet of helical edge states in the two terminal configuration (see Fig. \ref{qlc}).
On the upper edge (1) one has right-moving spin-up and left-moving spin-down electrons; on the lower edge (2) the opposite.\newline
The corresponding free Hamiltonians are \cite{Hou09, Strom09} ($\hbar=1$)
\begin{equation}
H_{1(2)}=-i v_{F} \int dx \left(\psi^{\dagger}_{R, \uparrow (\downarrow)} \partial_{x}\psi_{R, \uparrow (\downarrow)}- \psi^{\dagger}_{L, \downarrow (\uparrow)} \partial_{x} \psi_{L, \downarrow (\uparrow)}\right)
\end{equation}
where $\psi_{R, \uparrow}$ ($\psi_{L, \uparrow}$) annihilates a right (left)-moving electron with spin up, and analogously for spin down, and $v_{F}$ is the Fermi velocity, estimated \cite{Konig07, Goren96} to be about $5\cdot 10^5$ m/s. For the sake of simplicity we assume infinite edges, even if a more realistic description based on finite length edges coupled to non-interacting leads can also be considered \cite{Liu11, Schmidt11}. This so-called $g(x)$ model \cite{Maslov95, Safi95, Kleimann02} proves crucial in order to recover the proper quantization of the conductance of one dimensional channels and leads to finite length corrections to physical quantities, which however are not crucial in the considered setup \cite{Liu11}.\newline
Concerning interactions, we consider terms which preserve time-reversal symmetry near the Fermi surface for a single Kramers doublet of helical edge states \cite{Moore06}. They are a subset of all possible contributions analyzed by the so-called $g$-ology \cite{Giamarchi03, Miranda03}, represented by the dispersive term
\begin{equation}\label{perp}
H_{d}= g_{2 \perp}\int dx\left(\psi^{\dagger}_{R, \uparrow} \psi_{R, \uparrow}\psi^{\dagger}_{L, \downarrow} \psi_{L, \downarrow}+\psi^{\dagger}_{L, \uparrow} \psi_{L, \uparrow}\psi^{\dagger}_{R, \downarrow} \psi_{R, \downarrow}\right)
\end{equation}
and the forward scattering term
\begin{equation}\label{parallel}
H_{f}=\frac{g_{4 \parallel}}{2} \sum_{\alpha=R, L; \sigma=\uparrow, \downarrow} \int dx \psi^{\dagger}_{\alpha, \sigma} \psi_{\alpha, \sigma}\psi^{\dagger}_{\alpha, \sigma} \psi_{\alpha, \sigma}.
\end{equation}
\newline
Note that possible Umklapp terms, which are important only at certain commensurate fillings \cite{Wu06}, are here neglected.\newline
The bosonization procedure of the Luttinger liquid allows one to write the electronic field operator in the form \cite{Giamarchi03}
\begin{equation}
\psi_{R/L,\sigma}(x)=\frac{\mathcal{F}_{R/L,\sigma}}{\sqrt{2 \pi a}}e^{\pm i k_{F}x} e^{-i\sqrt{2\pi} \varphi_{R/L,\sigma}(x)},
\end{equation}
with $\varphi_{R/L,\sigma}(x)$ a bosonic field ($\sigma=\uparrow, \downarrow$), $\mathcal{F}_{R/L,\sigma}$ the Klein factor, necessary to ensure the proper commutation relations between electrons belonging to different edges, $a$ a finite length cut-off and $k_{F}$ the Fermi momentum. The bosonic field $\varphi_{R/L,\sigma}(x)$ is related to the electron density through $\rho_{R/L,\sigma}(x)=\mp\frac{1}{\sqrt{2\pi}}\partial_x\varphi_{R/L,\sigma}(x)$. According to the standard bosonization procedure \cite{Giamarchi03, Miranda03} the interaction terms in Eqs. (\ref{perp})-(\ref{parallel}) are quadratic in the electron density.\newline
Introducing the helical edge basis on the upper and lower edge \cite{Miranda03}
\begin{equation}
\varphi_{1(2)}(x)=\frac{1}{\sqrt{2}}\left [\varphi_{L, \uparrow(\downarrow)}(x)-\varphi_{R, \downarrow(\uparrow)}(x)\right ],
\end{equation}
with their canonical conjugates
\begin{equation}
\theta_{1(2)}(x)=\frac{1}{\sqrt{2}}\left [\varphi_{L, \uparrow(\downarrow)}(x)+\varphi_{R, \downarrow(\uparrow)}(x)\right ],
\end{equation}
the total Hamiltonian $H=H_1+H_2+H_{d}+H_{f}$ can be recast in the bosonized form \cite{Hou09, Strom09}
\begin{equation}
H=\frac{v}{2}\sum_{i=1, 2} \int dx \left[\frac{1}{K}\left(\partial_{x} \varphi_{i}\right)^{2}+K \left(\partial_{x}\theta_{i}\right)^{2}\right].
\end{equation}
Here, $K=\sqrt{\frac{ 2\pi v_{F}+g_{4\parallel}-g_{2\perp}}{2\pi v_{F}+g_{4\parallel}+g_{2\perp}}}$ is the interaction parameter and $v= v_{F}\sqrt{\left(1+\frac{g_{4\parallel}}{2 \pi v_{F}}\right)^{2}-\left(\frac{g_{2\perp}}{ 2\pi v_{F}}\right)^{2}}$ the renormalized velocity.
For Coulomb repulsion $g_{4\parallel}=g_{2 \perp}$ and therefore $v=v_{F}/K$. In the following we will assume this condition, although other possible interactions can be straightforwardly taken into account.
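As a quick numerical illustration (ours; the interaction strength is an arbitrary sample value), the Coulomb-case relation $v=v_F/K$ can be checked directly:

```python
# Numerical check (ours) of K and v for g_{4||} = g_{2,perp} = g, with an
# arbitrary sample interaction strength g.
import math

v_F = 5.0e5                    # m/s, Fermi velocity quoted in the text
g = 0.5 * 2.0 * math.pi * v_F  # sample value (assumption)
g2 = g4 = g

K = math.sqrt((2.0 * math.pi * v_F + g4 - g2) / (2.0 * math.pi * v_F + g4 + g2))
v = v_F * math.sqrt((1.0 + g4 / (2.0 * math.pi * v_F)) ** 2
                    - (g2 / (2.0 * math.pi * v_F)) ** 2)
```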
\section{Extended contact} \label{extended}
In the presence of an external voltage $V$, right (left)-moving electrons feel a chemical potential $\mu_L$ ($\mu_R$), with $\mu_{L}-\mu_{R}=eV$. Spatial separation prevents electron tunneling between the edges, leading to the conductance quantization \cite{Konig07} $G=\frac{e^2}{\pi}$. In order to study tunneling effects, the system is pinched by means of a gate voltage \cite{Strom09} or, more realistically, by etching the sample \cite{Dolcini12}, creating a tunneling region \cite{Chang03}.\newline
Previous theoretical works have studied this configuration \cite{Strom09, Teo09, Hou09, Liu11, Dolcini11}, both in two-terminal and in four-terminal setups, assuming a point-like tunneling.
In what follows, we will generalize this assumption, taking into account the possibility of tunneling events occurring in an extended region (see Fig. \ref{qlc}). Our aim is to investigate the effects induced by a long contact on the backscattering current.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.28]{DolcettoFig1.eps}
\caption{(Color online) Extended contact geometry for a quantum spin Hall system with one Kramers doublet of helical edge states. The full (dashed) lines represent helical edge states carrying electrons with spin up (down). Right (left)-moving electrons are in equilibrium with the left (right) contact at chemical potential $\mu_{L}$ ($\mu_{R}$). The black arrow represents a possible spin-conserving electron tunneling event through the extended region.}\label{qlc}
\end{figure}
The backscattering Hamiltonian connecting the two helical edge states is represented by
\begin{equation}\label{H_tun}
H_{\mathrm{B}}=\int dx ~dy~ \left [ \sum_{\sigma=\uparrow, \downarrow} \Lambda_{x,y}\psi^{\dagger}_{R,\sigma}(x)\psi_{L,\sigma}(y)\right]+h.c.,
\end{equation}
with $\Lambda_{x,y}$ the tunneling amplitude for a process in which a left-moving electron is destroyed at $y$ on one edge and recreated as a right-moving electron at $x$ on the other one. A reasonable choice for $\Lambda_{x,y}$ is to assume a separable form \cite{Chevallier10}
\begin{equation}
\Lambda_{x,y}=\Lambda_0f_l\left (|x+y|\right ) f_c\left(|x-y|\right).
\label{separabile}
\end{equation}
The function $f_{l}$, referred to as the lateral contribution, specifies the average location of the tunneling events \cite{Aranzana05, Overbosch09, Chevallier10}, while $f_{c}$, dubbed crossed, allows one to take into account non-perfectly-vertical tunneling events \cite{Chevallier10}. This assumption is reasonable for smooth tunneling junctions.
Both functions are maximal around zero and decrease with increasing argument. With this requirement, the longer the tunneling path, the smaller the corresponding local amplitude.\\
Note that Eq. (\ref{H_tun}) describes spin-conserving tunneling processes only, since spin-flipping tunneling terms give no contribution in our two-terminal setup \cite{Liu11, Dolcini11}. Furthermore we neglect tunneling of either charged ($\sim \cos\left [\frac{1}{\sqrt{\pi}}\left (\varphi_1+\varphi_2\right )\right ]$) or spinful ($\sim \cos\left [\frac{1}{\sqrt{\pi}}\left (\theta_1-\theta_2\right )\right ]$) particle pairs, although for strong enough electron interactions they could compete with single-particle tunneling processes ($\sim \cos\left [\frac{1}{2\sqrt{\pi}}\left (\varphi_1+\varphi_2\right )\right ]\cos\left [\frac{1}{2\sqrt{\pi}}\left (\theta_1-\theta_2\right )\right ]$) \cite{Teo09, Schmidt11}. Note that all these processes are irrelevant \cite{Teo09}, in the RG sense, for $0.5 < K < 2$. We limit our analysis to repulsive interaction $0.5 < K < 1$, and we treat the tunneling current as a small perturbation.\\
The tunneling Hamiltonian in Eq. \eqref{H_tun} induces no net charge transfer between the two edges, but leads to a net spin tunneling current.
The corresponding spin current operator is
\begin{equation}
I_{\mathrm{S}}=-\frac{i}{2} \sum_{\sigma=\uparrow, \downarrow}\int dx~dy~\Lambda_{x, y}\psi^{\dagger}_{R,\sigma}(x)\psi_{L,\sigma}(y)+h.c.,
\label{current_operator}
\end{equation}
according to the requirement of absence of spin-flipping and multiple-particle contributions. In the linear response approximation in the tunneling Hamiltonian, the stationary expectation value of the spin current in Eq. (\ref{current_operator}) can be written in terms of the tunneling rates ${\bf{\Gamma}}_{L,\sigma\to R, \sigma}$ and ${\bf{\Gamma}}_{R,\sigma\to L,\sigma}$ as
\begin{equation}
\label{I_S}
\langle I_{\mathrm{S}} \rangle=\frac{1}{2}\sum_{\sigma=\uparrow,\downarrow}\left [ {\bf{\Gamma}}_{L,\sigma\to R,\sigma}-{\bf{\Gamma}}_{R,\sigma\to L,\sigma}\right ] .
\end{equation}
Note that the functional dependence of the rates and other physical quantities on bias and temperature is understood for notational convenience.
One can easily realize that this spin tunneling current is responsible for a reduction of the net current flowing from one lead to the other \cite{Strom09, Liu11}, i.e. $\langle I_{\mathrm{}} \rangle=\frac{e^2}{\pi}V-\langle I_{\mathrm{BS}} \rangle$, with $\langle I_{\mathrm{BS}} \rangle$ the backscattering current, related to $\langle I_{\mathrm{S}} \rangle$ by
\begin{equation}
\langle I_{\mathrm{BS}}\rangle=2e\langle I_{\mathrm{S}}\rangle.
\end{equation}
We can thus measure the spin tunneling current by measuring the ordinary backscattering current \cite{Strom09}.\\
By taking into account the spin independence of the tunneling rates and the detailed balance relation ${\bf{\Gamma}}_{R,\sigma\to L,\sigma}=e^{-\beta eV}{\bf{\Gamma}}_{L, \sigma\to R,\sigma}$ (with $\beta=1/k_{B}T$ the inverse temperature), one has
\begin{equation}
\langle I_{\mathrm{BS}}\rangle= 2e\left (1-e^{-\beta eV}\right ) {\bf{\Gamma}}_{L,\uparrow\to R,\uparrow}.
\label{current_detailed}
\end{equation}
According to Eq. (\ref{current_detailed}), we can consider only the tunneling rate ${\bf{\Gamma}}\equiv {\bf{\Gamma}}_{L,\uparrow\to R,\uparrow}$ given by
\begin{eqnarray}
{\bf{\Gamma}}&=&\int dx~dy~dx'~dy'~\Lambda_{x,y}\Lambda_{x',y'}^{*}\nonumber \\
&\times&\int dt~e^{i eVt}G^{>}_{L}(y'-y,t)G^{<}_{R}(x'-x,t),
\label{qlcrrate}
\end{eqnarray}
with
\begin{eqnarray}
G_{R/L}^>(x,t)&=&\frac{e^{\mp ik_Fx}}{2\pi a}e^{\mathcal{W}_{R/L}(x,t)}\\
G_{R/L}^<(x,t)&=&\frac{e^{\pm ik_Fx}}{2\pi a}e^{\mathcal{W}_{R/L}(x,t)}
\end{eqnarray}
the greater and lesser electron Green's functions associated to the right $(R)$ and left $(L)$ movers. The corresponding bosonic Green's functions are
\begin{eqnarray}
\mathcal{W}_{R/L}(x,t)&=&2\pi\langle \varphi_{R/L, \sigma}(x, t) \varphi_{R/L, \sigma}(0,0)\rangle \nonumber \\
& &-2\pi \langle \varphi_{R/L, \sigma}(0, 0) \varphi_{R/L, \sigma}(0,0)\rangle.
\end{eqnarray}
They do not depend on spin and can be written in terms of the chiral ones $\mathcal{W}_{\pm}(x,t)$
\begin{eqnarray}
\mathcal{W}_R(x,t)&=&c^{(+)}_K \mathcal{W}_+(x,t)+c^{(-)}_K\mathcal{W}_-(x,t)\\
\mathcal{W}_L(x,t)&=&c^{(-)}_K\mathcal{W}_+(x,t)+c^{(+)}_K\mathcal{W}_-(x,t),
\end{eqnarray}
with
\begin{equation}
\mathcal{W}_{\pm}(x,t)=\mathcal{W}\left(t\mp \frac{x}{v} \right)
\end{equation}
and
\begin{equation}
\mathcal{W}(t)=\ln\left [\frac{\left |\Gamma\left (1+\frac{1}{\beta\omega_{c}}-i\frac{t}{\beta}\right )\right |^2}{\Gamma^2\left (1+\frac{1}{\beta\omega_{c}}\right )\left (1+i\omega_{c} t\right )}\right ].
\label{W_exact}
\end{equation}
Here, $\Gamma(x)$ is the Euler Gamma function, $c^{(\pm)}_K=\frac{1}{4}\left (\sqrt{K}\pm\frac{1}{\sqrt{K}}\right )^2$ are the interaction dependent tunneling coefficients and $\omega_{c} =v/a$ the energy bandwidth. By replacing the above expressions into Eq. \eqref{qlcrrate} one obtains
\begin{widetext}
\begin{equation}
{\bf{\Gamma}}_{K}=\int dx~dy~dx'~dy'~ \frac{\Lambda_{x,y}\Lambda_{x',y'}^*}{(2\pi a)^2}e^{ik_F(y'-y+x'-x)}\int dt~e^{ieV t} e^{c^{(+)}_K\mathcal{W}(t-\frac{x'-x}{v}) +c^{(-)}_K\mathcal{W}(t+\frac{x'-x}{v}) +c^{(-)}_K\mathcal{W}(t-\frac{y'-y}{v}) +c^{(+)}_K\mathcal{W}(t+\frac{y'-y}{v})},
\label{qlcWW}
\end{equation}
\end{widetext}
where we explicitly indicate the dependence on the interaction parameter $K$.\\
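For numerical work, the kernel $\mathcal{W}(t)$ of Eq.~(\ref{W_exact}) is straightforward to evaluate; the sketch below (ours) uses a standard Lanczos approximation for the complex Gamma function, and one can verify that $\mathcal{W}(0)=0$ and that $\mathrm{Re}\,\mathcal{W}(t)<0$ at finite time:

```python
# Numerical evaluation (ours) of the bosonic kernel W(t) of Eq. (W_exact),
# using a Lanczos approximation for Gamma(z) (valid here since Re z > 0.5).
import cmath, math

_LANCZOS = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
            771.32342877765313, -176.61502916214059, 12.507343278686905,
            -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    """Complex Gamma function, Lanczos approximation for Re(z) > 0.5."""
    z = complex(z) - 1
    x = _LANCZOS[0]
    for i in range(1, 9):
        x += _LANCZOS[i] / (z + i)
    t = z + 7.5
    return math.sqrt(2.0 * math.pi) * t ** (z + 0.5) * cmath.exp(-t) * x

def W(t, beta, omega_c):
    """ln[ |Gamma(1 + 1/(beta w_c) - i t/beta)|^2 /
           (Gamma^2(1 + 1/(beta w_c)) (1 + i w_c t)) ]."""
    a = 1.0 / (beta * omega_c)
    num = abs(cgamma(1 + a - 1j * t / beta)) ** 2
    den = cgamma(1 + a) ** 2 * (1 + 1j * omega_c * t)
    return cmath.log(num / den)
```

In the zero-temperature limit $\beta\to\infty$ the Gamma-function ratio tends to one, and $\mathcal{W}(t)\to-\ln(1+i\omega_c t)$, as expected.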
In what follows we will first analyze the non-interacting case, which can be thought of as a superposition of two independent integer QH systems subjected to opposite magnetic fields. Later we will address the case of interacting helical edge states.
\subsection{Non-interacting helical edge states}
\noindent
In the non-interacting case ($K=1$), one has $c^{(+)}_{K=1}=1$ and $c^{(-)}_{K=1}=0$, and Eq. \eqref{qlcWW} reduces to
\begin{eqnarray}
{\bf{\Gamma}}_{1}&=&\int d\vec{x}~d\vec{y} \frac{\Lambda_{x,y}\Lambda_{x',y'}^*}{(2\pi a)^2}e^{ik_F(y'-y+x'-x)}\nonumber \\
&\times&\int dt~e^{ieV t}e^{\mathcal{W}(t-\frac{x'-x}{v})+\mathcal{W}(t+\frac{y'-y}{v})}
\label{qlcWW2}
\end{eqnarray}
where we introduced the short hand notation $d\vec{x}\equiv dx \cdot dx'$, $d\vec{y}\equiv dy \cdot dy'$.
In terms of the new variables \cite{Chevallier10} $\tau=t-\frac{y-y'-x+x'}{2v}$ and $z=\frac{y-y'+x-x'}{2}$ one has
\begin{eqnarray}
{\bf{\Gamma}}_{1}&=&\int d\vec{x}~d\vec{y}~ \frac{\Lambda_{x,y}\Lambda_{x',y'}^*}{(2\pi a)^2}e^{i\left [k_{+}\left (x'-x\right )+k_{-}\left (y'-y\right ) \right ]} \nonumber \\
&\times&\int d\tau~ e^{ieV\tau}e^{\left [\mathcal{W}(\tau-\frac{z}{v})+\mathcal{W}(\tau+\frac{z}{v})\right ]}, \label{qlctau}
\end{eqnarray}
with $k_{\pm}= k_F\pm eV/2v$.
This can be further expressed as
\begin{equation}
{\bf{\Gamma}}_{1}=\int d\vec{x}~d\vec{y}~ \frac{\Lambda_{x,y}\Lambda_{x',y'}^*}{(2\pi a)^2}e^{i\left [ k_{+}(x'-x)+k_{-}(y'-y) \right ]}\tilde{F}_1(z,eV)
\label{rhs}
\end{equation}
where
\begin{equation}
\tilde{F}_g(z,\omega)=\int d\tau~ e^{i\omega\tau}P_g\left (\tau-\frac{z}{v}\right )P_g\left (\tau+\frac{z}{v}\right )
\label{Pgz}
\end{equation}
and $P_g(t)=e^{g\mathcal{W}(t)}$ (cf. Eq. (\ref{W_exact})).\\
The separability assumption in Eq. (\ref{separabile}) allows one to factorize the tunneling amplitude as
\begin{eqnarray}
{\bf{\Gamma}}_{1}&=&4\frac{\left |\Lambda_0\right |^2}{(2\pi a)^2}\int d\vec{y}~ \cos \left [\frac{eV}{v}\left (y'-y \right)\right ]f_c(|2y|)f_c(|2y'|) \nonumber \\
&\times& \int d\vec{x} \cos \left [ 2k_F\left (x'-x\right ) \right ]f_l(|2x|)f_l(|2x'|)\tilde{F}_1(x'-x,eV). \label{rate4}\label{rate3b}
\end{eqnarray}
To better characterize the effects of the extended contact geometry it is useful to represent ${\bf{\Gamma}}_{1}$ in terms of the point contact rate ${\bf{\Gamma}}^{(point)}_{1}$ as
\begin{equation}\label{Rate1}
{\bf{\Gamma}}_{1}=\lambda_{1}\times {\bf{\Gamma}}^{(point)}_{1}.
\end{equation}
This can be done regardless of the form of the tunneling amplitude but, as we will see, the separability assumption of Eq. (\ref{separabile}) allows a closed form for the modulating function to be obtained.
From Eq. \eqref{current_detailed} and Eq. \eqref{Rate1} it follows that
\begin{equation}\label{I_point}
\langle I_{\mathrm{BS}} \rangle=\lambda_1\times\langle I^{(point)}_{\mathrm{BS}} \rangle.
\end{equation}
For any interaction $K$, the point-like current is given by \cite{Strom09}
\begin{equation}
\langle I^{(point)}_{\mathrm{BS}}\rangle=2e(1-e^{-\beta eV}) \frac{\left |\Lambda_0\right |^2}{(2\pi a)^2}\tilde{P}_{2d_K}(eV)
\end{equation}
with $d_K\equiv c^{(+)}_K+c^{(-)}_K=\frac{1}{2}\left (K+\frac{1}{K}\right )$ so that $d_K=1$ in the non-interacting case.
The function
\begin{equation}
\tilde{P}_g(\omega)=\int dt~e^{i\omega t}P_g(t)
\end{equation}
has the following form \cite{Overbosch09} for energies lower than the bandwidth $\omega_{c}$
\begin{equation}
\tilde{P}_g(E)=\left\{
\begin{array}{l} \frac{2 \pi}{\Gamma(g)\omega_{c}}\left(\frac{E}{\omega_{c}}\right)^{g-1} \theta(E) \ \left (T=0\right )\\
\left (\frac{2\pi}{\beta\omega_c}\right )^{g-1}\frac{e^{\frac{\beta E}{2}}}{\omega_c} \mathcal{B}\left [\frac{g}{2}-i\frac{\beta E}{2\pi},\frac{g}{2}+i\frac{\beta E}{2\pi}\right ] \ \left (T\neq 0\right )
\end{array}\right.
\label{P_zero}
\end{equation}
with $\theta(x)$ the Heaviside step function and $\mathcal{B}\left[x,y\right ]$ the Euler Beta function.\\
The modulating function $\lambda_{1}$ in Eq. \eqref{Rate1} represents the influence of the extended region and is given by
\begin{eqnarray}
\lambda_{1}&=&4\int d\vec{y}~ \cos \left [\frac{eV}{v}\left (y'-y \right)\right ]f_c(|2y|)f_c(|2y'|) \nonumber \\
\label{rate4b}
&\times& \int d\vec{x}~ \cos \left [ 2k_F\left (x'-x\right ) \right ]f_l(|2x|)f_l(|2x'|)\nonumber \\
&\times&\frac{\tilde{F}_1(x'-x,eV)}{\tilde{P}_2(eV)}. \label{coeff}
\end{eqnarray}
It can be written as a product of crossed and lateral contributions, $\lambda_{1}=\lambda_{1}^{c}\lambda_{1}^{l}$, with
\begin{eqnarray}
\lambda^{c}_{1}&=&2\int d\vec{y}~ \cos \left [\frac{eV}{v}\left (y'-y \right )\right ]f_c(|2y|)f_c(|2y'|) \label{ampc} \\
\lambda^{l}_{1}&=&2\int d\vec{x}~ \cos \left [ 2k_F\left (x'-x\right ) \right ]f_l(|2x|)f_l(|2x'|)\nonumber \\
&\times &\frac{\tilde{F}_1(x'-x,eV)}{\tilde{P}_2(eV)}.
\label{ampl}
\end{eqnarray}
Notice that, while $\lambda^{c}_{1}$ depends only on the crossed contribution $f_{c}$, $\lambda^{l}_{1}$ also contains the electronic Green's functions through $\tilde{F}_{1}$.\\
In order to analyze the extended contact, we consider a separable Gaussian form \cite{Chevallier10, Overbosch09}
\begin{equation}
\Lambda_{x,y}=\frac{\Lambda_0}{2\pi\xi_c\xi_l}e^{-\frac{(x-y)^2}{4\xi_c^2}}e^{-\frac{(x+y)^2}{4\xi_l^2}}.
\label{gauss}
\end{equation}
The parameter $\xi_{l}$ is related to the extension of the contact, while $\xi_c$ takes into account non-perfectly-vertical tunneling events. In this sense a realistic assumption for modeling an extended contact is $\xi_c\ll\xi_l$. Note that in the limits $\xi_{c,l}\to 0$ we recover the point-like tunneling amplitude $\Lambda_{x,y}\to\Lambda_0\delta(x)\delta(y)$, so that $\langle I_{BS}\rangle\to\langle I^{(point)}_{BS}\rangle$.\newline
By inserting the Gaussian expression into Eqs. (\ref{ampc})-(\ref{ampl}) one obtains
\begin{eqnarray}
\lambda^{c}_{1}&=&e^{-\frac{1}{2}\left (\frac{\xi_c eV}{v} \right )^2} \label{ampc2} \\
\lambda^{l}_{1}&=&\frac{1}{\sqrt{2\pi}}\int dx e^{-\frac{x^2}{2}}\cos\left (2k_F\xi_lx\right )\frac{\tilde{F}_1(\xi_l x,eV)}{\tilde{P}_{2}(eV)}.
\label{ampl2}
\end{eqnarray}
By exploiting the convolution property
\begin{equation}
\tilde{F}_g(z,\omega)=\frac{1}{2\pi}\int dE~ e^{i\frac{2z}{v}E}\tilde{P}_g\left (\frac{\omega}{2}+E\right )\tilde{P}_g\left (\frac{\omega}{2}-E\right ),
\label{Pgtil}
\end{equation}
the tunneling amplitude can be written in the form
\begin{eqnarray}
\lambda_{1}&=&e^{-\frac{1}{2}\left (\xi_c\frac{eV}{v}\right )^2-2(k_F\xi_l)^2}\int \frac{dE}{2\pi}e^{-2\left ( \xi_l\frac{E}{v} \right )^2}\cosh \left ( 4k_F\xi_l^2\frac{E}{v} \right )\nonumber\\
&\times&
\frac{\tilde{P}_{1}\left(\frac{eV}{2}+E\right) \tilde{P}_{1}\left(\frac{eV}{2}-E\right) }{\tilde{P}_{2}(eV)}. \label{rate_simplified}
\end{eqnarray}
This result is valid also at finite temperature and extends the $T=0$ QHE analysis of Ref. \onlinecite{Chevallier10}. Note that the crossed contribution to the modulating function comes into play only at high bias voltage. For an extended contact of length $\sim (0.1 \div 1)\,\mu$m, one has $\xi_l\sim (0.1 \div 1)\,\mu$m and $\xi_c\ll \xi_l$, e.g. $\xi_c\sim 10$ nm. Under this assumption the crossed contribution becomes relevant only at relatively high bias $\gtrsim 0.1$ V, not considered here. This allows us to set $\lambda^{c}_{1}\approx 1$ and to focus on the lateral contribution which, as we will see in the following, shows strong modifications with respect to the point-like case even at low bias.
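As a consistency check of Eq. (\ref{rate_simplified}), note that at $T=0$ the ratio $\tilde{P}_{1}(\frac{eV}{2}+E)\tilde{P}_{1}(\frac{eV}{2}-E)/\tilde{P}_{2}(eV)$ reduces to $2\pi/eV$ inside the window $|E|<eV/2$ and vanishes outside, so $\lambda_1$ can be evaluated by elementary numerical integration. The following sketch (units $e=v=k_F=1$, $\xi_c=0$; the function name is ours, not from any library) verifies the two limits quoted in the text: $\lambda_1\to 1$ for a point-like contact ($\xi_l\to 0$) and $\lambda_1\to e^{-2\alpha_l^2}$ for $V\to 0$.

```python
import math

def lambda_1_T0(V, xi_l, kF=1.0, v=1.0, n=4000):
    """Eq. (rate_simplified) at T = 0 and xi_c = 0: the P-tilde ratio equals
    2*pi/V for |E| < V/2 and vanishes otherwise (units e = 1)."""
    alpha = kF * xi_l
    a, b = -V / 2.0, V / 2.0
    h = (b - a) / n
    s = 0.0
    for i in range(n):  # midpoint rule over the allowed energy window
        E = a + (i + 0.5) * h
        s += math.exp(-2.0 * (xi_l * E / v) ** 2) * math.cosh(4.0 * kF * xi_l ** 2 * E / v)
    return math.exp(-2.0 * alpha ** 2) * s * h / V

# point-like limit: lambda_1 -> 1
print(lambda_1_T0(1.0, 1e-8))                   # ~1.0
# zero-bias limit: lambda_1 -> exp(-2 alpha_l^2)
print(lambda_1_T0(1e-6, 1.0), math.exp(-2.0))
```

For $\alpha_l=1$ the same routine also reproduces a maximum of $\lambda_1(V)$ at intermediate bias, in qualitative agreement with Fig. \ref{lambda}(a).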
\subsection{Interacting helical edge states}
Starting from the general expression in Eq. (\ref{qlcWW}) and proceeding as in the previous section, one can express the interacting modulating function as ($K\neq 1$)
\begin{widetext}
\begin{equation}
\lambda_{K}=\int \frac{ dE_{1} d E_{2} d E_{3}}{\left( 2 \pi\right)^{3}}e^{-\frac{1}{2} \left[ \frac{\xi_{c}}{v}\left(eV-2 E_{2}-2E_{3}\right)\right]^{2}-\frac{1}{2} \left[ \frac{\xi_{l}}{v}\left(eV-2 E_{1}-2E_{2}- 2k_{F} v\right)\right]^{2}}\frac{\tilde{P}_{c^{(+)}_K}(E_{1})\tilde{P}_{c^{(-)}_K}(E_{2})\tilde{P}_{c^{(-)}_K}(E_{3})\tilde{P}_{c^{(+)}_K}(eV-\underset{i=1,2,3}{\sum} E_i)}{\tilde{P}_{2 d_K}(eV)}
\label{lambda_general}
.\end{equation}
\end{widetext}
Due to the natural constraints imposed by the functional form of $\tilde{P}(E)$ in Eq. (\ref{P_zero}), it is possible to neglect the crossed contribution, contained in the first Gaussian factor, as long as $eV, k_{B}T\ll v/\xi_c$. Under this condition and noting that
\begin{equation}
\int_{-\infty}^{\infty}\frac{dE}{2\pi}\tilde{P}_{g_1}(E)\tilde{P}_{g_2}(\omega-E) =\tilde{P}_{g_1+g_2}(\omega),
\label{pezzo}
\end{equation}
Eq. (\ref{lambda_general}) becomes
\begin{eqnarray}
\lambda_{K}&=&e^{-2\alpha_l^2}\int \frac{dE}{2\pi}e^{-2\left ( K\alpha_l\frac{E}{\epsilon_F} \right )^2}\cosh \left ( 4K\alpha_l^2\frac{E}{\epsilon_F} \right ) \nonumber\\
&\times&\frac{\tilde{P}_{d_K}\left(\frac{eV}{2}+E\right) \tilde{P}_{d_K}\left(\frac{eV}{2}-E\right) }{\tilde{P}_{2d_K}(eV)}. \label{rateverticale}
\end{eqnarray}
Here, we introduced the Fermi energy $\epsilon_F=k_Fv_F$ and the dimensionless parameter $\alpha_l = k_F\xi_l$.
The modulating function thus depends on the length of the contact $\xi_l$ and on the Fermi momentum only through their product.
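The convolution identity of Eq. (\ref{pezzo}) can be checked directly: at $T=0$ it reduces to the Euler Beta integral $\int_0^\omega E^{g_1-1}(\omega-E)^{g_2-1}dE=\omega^{g_1+g_2-1}\mathcal{B}[g_1,g_2]$. A minimal numerical sketch (our own function names; units $\omega_c=1$):

```python
import math

def P_tilde_T0(E, g, wc=1.0):
    """T = 0 branch of Eq. (P_zero)."""
    if E <= 0.0:
        return 0.0
    return 2.0 * math.pi / (math.gamma(g) * wc) * (E / wc) ** (g - 1)

def lhs_convolution(omega, g1, g2, n=100000):
    """(1/2pi) * int dE  P_g1(E) P_g2(omega - E); support is 0 < E < omega."""
    h = omega / n
    s = sum(P_tilde_T0((i + 0.5) * h, g1) * P_tilde_T0(omega - (i + 0.5) * h, g2)
            for i in range(n))
    return s * h / (2.0 * math.pi)

omega, g1, g2 = 0.7, 1.5, 2.0
print(lhs_convolution(omega, g1, g2), P_tilde_T0(omega, g1 + g2))  # should agree
```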
By inserting Eq. \eqref{P_zero} in Eq. \eqref{rateverticale} one has
\begin{eqnarray}
\lambda_{K}&=&\frac{\Gamma (2 d_K) e^{-2\alpha_l^2}}{8 \pi^2\Gamma^2(d_K)}\!\!\int \!\!dx e^{-\frac{1}{2}(K\alpha_l\frac{k_BT}{\epsilon_F}x)^2}\!\!\cosh\left (2K\alpha_l^2\frac{k_BT}{\epsilon_F}x\right )\nonumber \\
&\times& \mathcal {B}\left[ \gamma_{+,+}(x), \gamma_{+,-}(x)\right]\mathcal {B}\left[ \gamma_{-,+}(x), \gamma_{-,-}(x)\right]
\label{ratefinale}
\end{eqnarray}
with ($\eta, \eta '=\pm$)
\begin{equation}
\gamma_{\eta, \eta'}(x) = \frac{d_{K}}{2}+\eta\frac{i}{4\pi} \left(\frac{eV}{k_{B}T} +\eta' x\right).
\end{equation}
To conclude, we observe that in the interacting case as well the backscattering current can be written as
\begin{equation}\label{Ilong}
\langle I_{\mathrm{BS}}(V, T) \rangle= \lambda_{K}(V, T)\times \langle I^{(point)}_{\mathrm{BS}}(V, T) \rangle
\end{equation}
with $\langle I^{(point)}_{\mathrm{BS}}(V, T) \rangle$ given in Eq. \eqref{I_point} and where we explicitly reintroduced the dependence on bias and temperature.
Note that for $\alpha_l=0$, Eq. \eqref{ratefinale} reduces to $\lambda_K=1$, and the point-like tunneling case is recovered.
\section{Results} \label{results}
Since the modulating function depends on bias and temperature, it influences the behavior of the transport properties with respect to the point-like tunneling case.
It is therefore useful to investigate it in detail.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.52]{DolcettoFig2.eps}
\caption{(Color online) Modulating function as a function of (a) bias $V$ (in units of $\epsilon_F/e$) at low temperature ($k_BT=10^{-2}\epsilon_F$) and (b) temperature $T$ (in units of $\epsilon_F/k_B$) at low bias ($eV=10^{-2}\epsilon_F$), for different lengths of the contact:
$\alpha_l=1$ (long dashed red), $2$ (dashed green), $5$ (short dashed blue). Note that the behavior at low temperature in (a) is indistinguishable from the $T=0$ case; similarly, the low-bias behavior in (b) is indistinguishable from the $V=0$ case. Other parameters: $K=0.75$.}\label{lambda}
\end{figure}
Fig. \ref{lambda} shows $\lambda_K$ as a function of voltage (a) and of temperature (b).
Fig. \ref{lambda}(a) presents a maximum at $V\approx \bar{V}\equiv 2\epsilon_F/eK$, which becomes more and more pronounced as $\alpha_l$, i.e. the length of the contact, increases. In the limit $\alpha_l\to 0$ it is washed out and $\lambda_K(V, T)\to 1$.
As already noted for the QHE \cite{Overbosch09}, this maximum is determined by the two phases that control tunneling, one set by the Fermi momentum ($2k_Fx$) and the other by the voltage drop ($eVt$). The peak occurs when the two phases are equal: $e\bar{V}=2k_F x/t=2k_Fv=2\epsilon_F/K$.\\
A maximum is present also in Fig. \ref{lambda}(b), but it originates from a dephasing mechanism induced by finite temperature, similar to what was found in interferometric geometries with two or more QPCs, both in QH \cite{Chamon97} and in QSH systems \cite{Virtanen11}, where the dephasing depends on the distance between the QPCs.
The extended contact geometry can indeed be seen as an infinite series of QPCs with different tunneling amplitudes, separated by an infinitesimal distance $dx$, the backscattering current being obtained by integrating over the contact region.
For all interaction strengths $0.5 < K < 1$ we find the maximum at a position $\bar{T}$ of the order of $\epsilon_F/k_B$; it is washed out as $\alpha_l\to 0$, recovering in this case the point-like regime with $\lambda_K(V,T)\to 1$.\\
Note that for vanishing bias and temperature the modulating function is exponentially suppressed by the length of the contact, namely $\lambda_K(V=0,T=0)=e^{-2\alpha_l^2}$.\\
We can also study the asymptotic behavior of $\lambda_K$ at low bias or low temperature. Introducing the energy scales $eV_{\alpha_l}=\epsilon_F/(K\alpha_l)$ and $k_BT_{\alpha_l}=\epsilon_F/(K \alpha_l)$ one finds
\begin{equation}\label{asympt_V}
\lambda_K(V,T\ll eV/k_B)\sim\left\{
\begin{array}{cc}
\mathrm{constant} & V\ll V_{\alpha_l} \\
V^{-1} & V-\bar{V}\gg V_{\alpha_l}
\end{array}
\right.
\end{equation}
and
\begin{equation}\label{asympt_T}
\lambda_K(V\ll k_BT/e,T)\sim\left\{
\begin{array}{cc}
\mathrm{constant} & T\ll T_{\alpha_l} \\
T^{-1} & T-\bar{T}\gg T_{\alpha_l}
\end{array}
\right. .
\end{equation}
Fig. \ref{conduttanze} shows the differential conductance $G(V,T)=d\langle I_{\mathrm{BS}}(V, T)\rangle/dV$ as a function of bias (a) and the linear conductance $G(T)=G(V=0,T)$ as a function of temperature (b).
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.52]{DolcettoFig3.eps}
\caption{(Color online) (a) Differential conductance as a function of bias $V$ (in units of $\epsilon_F/e$) at low temperature ($k_BT=10^{-2}\epsilon_F$) and (b) linear conductance as a function of the temperature $T$ (in units of $\epsilon_F/k_B$), for different lengths of the contact:
$\alpha_l=1$ (long dashed red), $2$ (dashed green), $5$ (short dashed blue).
Units of the conductance: $G_0=\frac{2 e^2}{\epsilon_F^2}\frac{\left | \Lambda_0\right |^2}{(2\pi a)^2}\left (k_F a\right )^{2d_K}$.
Other parameters: $K=0.75$.}\label{conduttanze}
\end{figure}
They both show a peaked structure, in contrast to the point-like case, reminiscent of the form of $\lambda_K$ (see Fig. \ref{lambda}).\\
More quantitatively, focusing on a given length, we can study the dependence on interactions.
Fig. \ref{G_T=0,K} shows the differential conductance as a function of bias, varying the electron interaction.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.40]{DolcettoFig4.eps}
\caption{(Color online) Differential conductance as a function of bias $V$ (in units of $\epsilon_F/e$) for different interaction strengths: $K=1$ (long dashed red), $0.75$ (dashed green), $0.5$ (short dashed blue).
Note that the conductance is plotted in units of $G_0$ as in Fig. \ref{conduttanze}; since $G_0$ depends on $K$, this does not allow for a direct comparison of the magnitudes of the different curves.
Other parameters: $\alpha_l=5$; $k_BT=10^{-2}\epsilon_F$.}\label{G_T=0,K}
\end{figure}
The conductance shows a peak at $V\approx \bar{V}$, which depends on the velocity of the excitations ($\bar{V}=2 k_Fv/e$).
Thanks to this behavior, we argue that an extended contact geometry could be exploited to extract information about the velocity of the excitation modes along the edges, by experimentally measuring the peak of the conductance while varying the Fermi energy and the bias voltage \cite{Konig07, Roth09}.
Furthermore, it must be stressed that in the presence of an ordinary Luttinger liquid one would expect two different peaks, as a consequence of spin-charge separation, which leads to two different propagation velocities, one for the charge modes and one for the spin modes \cite{Giamarchi03, Miranda03, Auslander05, Cavaliere04}.
The single-peak structure of Fig. \ref{G_T=0,K}, instead, provides evidence of the close connection between spin and charge typical of the helical edge states of the QSHE, where these degrees of freedom are locked to each other and propagate with the same velocity.\\
We remark that, as expected, the peak in the differential conductance is reduced by increasing temperature and is finally washed out for temperatures $2\pi k_BT\sim e\bar{V}$, as shown in Fig. \ref{G_T}.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.40]{DolcettoFig5.eps}
\caption{(Color online) Differential conductance as a function of bias $V$ (in units of $\epsilon_F/e$) for different temperatures (in units of $\epsilon_F/k_{B}$): $T=0.1$ (long dashed red), $T=0.5$ (dashed green), $T=2$ (short dashed blue).
Units of $G_0$ as in Fig. \ref{conduttanze}.
Other parameters: $\alpha_l=5$; $K=0.75$.}\label{G_T}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.50]{DolcettoFig6.eps}
\caption{(Color online) Log-Log plot (a) of the backscattering current (in units of $I_0\equiv (\epsilon_{F}/e) G_{0}$) as a function of the bias voltage $V$ (in units of $\epsilon_F/e$) at low temperature ($k_{B}T=10^{-2} \epsilon_F$) (long dashed red curve) and (b) of the linear conductance (in units of $G_0$) as a function of temperature (in units of $\epsilon_F/k_B$) (long dashed red curve).
Other parameters: $\alpha_{l}=1$, $K=0.75$.
Straight lines represent the asymptotic power-law behavior with exponent (a) $2d_{K}-1=13/12$ (dashed green line) and $2d_{K}-2=1/12$ (short dashed blue line) and (b) $2d_{K}-2=1/12$ (dashed green line) and $2d_{K}-3=-11/12$ (short dashed blue line). }\label{power_law}
\end{figure} \\
The propagation velocity is not the only information that can be extracted by means of this setup. Theoretical works concerning point-like tunneling predict power-law behaviors for the current \cite{Strom09, Schmidt11}
\begin{equation}
\langle I^{(point)}_{\mathrm{BS}}(V, T\ll eV/k_B)\rangle\sim V^{2d_K-1}
\end{equation}
\begin{equation}
G^{(point)}(T)\sim T^{2 d_K-2}.
\end{equation}
Although these trends are no longer valid in general, they still survive at sufficiently low bias or temperature, namely for $V\ll V_{\alpha_l}$ or $T\ll T_{\alpha_l}$ respectively, as shown in Fig. \ref{power_law}. Interestingly, at higher energies new power-law behaviors are recovered, however with different exponents
\begin{equation}
\langle I_{\mathrm{BS}}(V, T\ll eV/k_B)\rangle\sim
V^{2d_K-2} \qquad (V-\bar{V}\gg V_{\alpha_l})
\end{equation}
and
\begin{equation}
G(T)\sim T^{2d_K-3} \qquad (T-\bar{T}\gg T_{\alpha_l}).
\end{equation}
This is a consequence of the asymptotic behavior of the modulating function (cf. Eqs. \eqref{asympt_V}-\eqref{asympt_T}).\\
It is worth noting that the effective visibility of these high-energy power laws crucially depends on the Fermi energy $\epsilon_{F}$ of the system, which can be easily tuned experimentally by means of an external gate \cite{Konig07}, and on the natural cut-off energy $\omega_c$ of the theory. The latter can reasonably be identified with the energy at which additional bulk effects have to be taken into account; the helical Luttinger liquid picture presented here thus holds for energies lower than $\omega_c$.
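For the value $K=0.75$ used in the figures, the exponents quoted above and in the caption of Fig. \ref{power_law} follow from simple arithmetic on $d_K=\frac{1}{2}(K+1/K)$; a quick exact check:

```python
from fractions import Fraction

K = Fraction(3, 4)                 # K = 0.75
d_K = (K + 1 / K) / 2              # d_K = (K + 1/K)/2 = 25/24
print(d_K, 2 * d_K - 1, 2 * d_K - 2, 2 * d_K - 3)
```

This reproduces the exponents $13/12$, $1/12$ and $-11/12$ used in Fig. \ref{power_law}.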
\section{Conclusions} \label{conclusion}
We proposed a model for tunneling through an extended contact region in a QSH system. We demonstrated that the extended nature of the contact can be taken into account through a modulating function, which renormalizes the transport properties of the point-like case.\\
We showed that, due to the extended nature of the contact and for low enough temperatures, the differential conductance shows a pronounced peak that can be used to extract information about the propagation velocity of the excitations along the edge. The presence of a unique peak is a signature of the helical nature of the edge states in QSHE.\\
We analyzed the backscattering current in the low temperature regime and the linear conductance, showing that the power-law behaviors predicted in the point-like case survive at progressively lower energies as the length of the contact increases. Remarkably enough, new power laws emerge at higher energies, but with different exponents.
\section*{Acknowledgements}
We thank A. Braggio, M. Carrega, and T. Martin for useful discussions. The support of CNR STM 2010 program, EU-FP7 via Grant No. ITN-2008-234970 NANOCTM and CNR-SPIN via Seed Project PGESE001 is acknowledged.
\subsection*{\large Supplementary Note 1: \\Classical and quantum chaos in the Dicke model}
The Dicke model displays regular and chaotic behavior~\cite{Lewenkopf1991,Emary2003PRL,Emary2003,Bastarrachea2014b,Bastarrachea2015,Bastarrachea2016PRE,Chavez2016,Chavez2019}. For the parameters selected in the main text ($\omega=\omega_0=1, \gamma=2 \gamma_c$), the dynamics are regular up to $\epsilon\approx-1.6$ \cite{Chavez2016}, then there is a mixed region of regularity and chaos up to $\epsilon\approx-0.8$, after which strong chaos sets in.
The onset of chaos is illustrated in Supplementary Fig.~\ref{fig01} for the classical limit (a)-(b) and for the quantum domain (c)-(d).
Supplementary Fig.~\ref{fig01}~(a) shows the percentage of chaos defined as the ratio of
the number of chaotic initial conditions, determined by the Lyapunov exponent, over the total number of initial conditions for a very large sample. The percentage is presented as a function of the rescaled energy $\epsilon$ and the coupling strength $\gamma$. Following the vertical red dashed line marked at $\gamma=2\gamma_c$, one sees that energies $\epsilon \sim -0.5$ are already deep in the chaotic region (light color). This is confirmed in Supplementary Fig.~\ref{fig01}~(b), where the Poincar\'e section for $\epsilon = -0.5$ exhibits hard chaos, that is, all chaotic trajectories cover the entire energy shell densely and have the same positive Lyapunov exponent.
Supplementary Fig.~\ref{fig01}~(c) displays the distribution $P(s)$ of the spacings $s$ between nearest-neighboring unfolded energy levels. The eigenvalues of quantum systems whose classical counterparts are chaotic are correlated and repel each other. In this case, $P(s)$ follows the Wigner surmise~\cite{Guhr1998}, as indeed seen in Supplementary Fig.~\ref{fig01}~(c).
In Supplementary Fig.~\ref{fig01}~(d), we show the quantum survival probability, $S_P(t)=|\langle {\cal R}_\epsilon|e^{-i\hat{H}_{D}t}|{\cal R}_\epsilon\rangle |^2$, for a Gaussian ensemble of random initial states $|{\cal R}_\epsilon\rangle=\sum_k c_k \ket{E_k}$ whose components $|c_{k}|^{2}$ were generated through a random sampling (see Methods) and are centered at energy $\epsilon = -0.5$ in the chaotic region~\cite{Lerma2019, Villasenor2020}. The survival probabilities of individual random states are shown with gray solid lines, their ensemble average with an orange solid line, and the running time average with a blue solid line, which overlaps with a green line that represents an analytical curve derived from the Gaussian orthogonal ensemble of random matrix theory~\cite{Lerma2019,Villasenor2020}. The asymptotic value of $S_P(t)$ is shown with a horizontal red dashed line. The green and blue curves exhibit a dip below the saturation value of the quantum survival probability known as the correlation hole, which is a dynamical manifestation of spectral correlations. It contains more information than the level spacing distribution $P(s)$, since in addition to short-range correlations it also captures long-range correlations~\cite{Leviandier1986,Wilkie1991,Alhassid1992,Torres2017PTR,Lerma2019}. We verified that most coherent states from the chaotic region develop the correlation hole. Exceptions to this pattern are the states very close to unstable periodic orbits of relatively short periods~\cite{Villasenor2020}.
The four panels of Supplementary Fig.~\ref{fig01} leave no doubt that the Dicke model reaches a limit of very strong chaos. This is the region for which our analysis of the quantum scars is developed.
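Two generic features of the survival probability used above are that $S_P(0)=1$ for a normalized state and that, for a nondegenerate spectrum, its infinite-time average equals the inverse participation ratio $\sum_k |c_k|^4$, which sets the asymptotic (red dashed) value. The following sketch illustrates this with a synthetic nondegenerate spectrum (not the Dicke spectrum itself; all names and numbers here are illustrative):

```python
import cmath
import random

random.seed(1)
M = 50
energies = [k + 0.3 * random.random() for k in range(M)]   # synthetic nondegenerate levels
weights = [random.random() for _ in range(M)]
norm = sum(weights)
p = [w / norm for w in weights]                            # |c_k|^2, normalized to 1

def survival(t):
    amp = sum(pk * cmath.exp(-1j * Ek * t) for pk, Ek in zip(p, energies))
    return abs(amp) ** 2

ipr = sum(pk * pk for pk in p)                             # predicted long-time average
times = [10.0 + 1000.0 * random.random() for _ in range(4000)]
long_time_avg = sum(survival(t) for t in times) / len(times)
print(survival(0.0), long_time_avg, ipr)
```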
\clearpage
\vspace*{2em}
\begin{center}
\includegraphics[width=0.6\columnwidth]{figSM1.pdf}
\captionof{figure}{\textbf{Indicators of classical and quantum chaos.} \textbf{(a)} Percentage of chaos over classical energy shells. The black solid line follows the ground-state energy and the vertical red dashed line marks the coupling $\gamma=2\gamma_c$ chosen for our studies. The green dot marks the separation between the normal and the superradiant phase for the ground state. The blue dot represents the energy $\epsilon = -0.5$ used in the indicators (b) and (d). \textbf{(b)} Poincar\'e section $(p = 0)$ for the rescaled classical Hamiltonian $h_{\text{cl}}$ at energy $\epsilon = -0.5$. \textbf{(c)} Level spacing distribution of the unfolded spectrum (shaded area) for $22458$ levels in the energy region $\epsilon \in (-1,1.755)$ and Wigner surmise (red dashed line), $j=100$. \textbf{(d)} Survival probability for an ensemble of 500 random states (gray solid lines) centered at energy $\epsilon = -0.5$, ensemble average (orange solid line), running average (blue solid line), analytical curve from the random matrix theory (green solid line), and the asymptotic value (horizontal red dashed line) (${j=100}$).
}
\label{fig01}
\end{center}
\subsection*{\large Supplementary Note 2:\\ All eigenstates exhibit scars}
Supplementary Fig.~\ref{figSM2} shows the Husimi distributions of 160 eigenstates projected over the $(Q,P)$ plane for $j=100$. The eigenstates are selected from a list of 16,000 eigenstates with energies between $\epsilon_\text{GS}=-2.125$ and $\epsilon=0$, sampled in steps of 100 from $k=100$ to $k=16000$. The values of the localization measure $\mathfrak{L}_{k}$ are indicated in the panels. We select $\gamma=2\gamma_c$, so all these eigenstates are located in the red dashed line of Supplementary Fig.~\ref{fig01} (a). States with $k\leq 800$ ($\epsilon_k\leq -1.6$) are in the regular region, those with $800< k\leq 5600$ ($-1.6 < \epsilon_k\leq -0.82$) in the mixed region, and those with $k>5600$ ($ \epsilon_k> -0.82$) are in the region of strong chaos. In all projections, the Husimi distributions display ellipsoidal shapes that can be associated with periodic orbits in the classical limit once they are identified.
\clearpage
\begin{center}
\centering
\includegraphics[width=1.0\textwidth]{figSM2pg1.pdf}
{\footnotesize Figure continues on the next page}
\end{center}
\begin{center}
\includegraphics[width=1.0\textwidth]{figSM2pg2.pdf}
\vspace{-2em}
\captionof{figure}{\textbf{Scars in all eigenstates.} Husimi projections $\widetilde{\cal{Q}}_k$ of 160 eigenstates for $j=100$. The values of $k$ are indicated in the top left of each panel, along with the value of $\mathfrak{L}_k$. The energy range is indicated on the right side of each row of panels. Lighter colors indicate higher concentrations, while black corresponds to zero.
}
\label{figSM2}
\end{center}
\subsection*{\large Supplementary Note 3:\\ Dependence on system size}
The Husimi distributions for some representative eigenstates are shown in Supplementary Fig.~\ref{figSM3} for $j=30$ and $j=100$.
The patterns marking the periodic orbits are very similar. For each column, compare the top state ($j=30$) with the bottom state ($j=100$). They have very similar patterns, but the lines become better defined as $j$ increases. As the system size increases, more lines also appear, because more periodic orbits scar the states.
\\
\begin{center}
\includegraphics[width=1.0\textwidth]{figSM3.pdf}
\vspace*{-20px}
\captionof{figure}{\textbf{Husimi projections vs. system size.} Husimi projections $\widetilde{\cal{Q}}_k$ of 7 eigenstates for $j=30$ (first row) and $j=100$ (second row). Lighter colors indicate higher concentrations, while black corresponds to zero. The lines marking the periodic orbits become better defined as the system size increases.
}
\label{figSM3}
\end{center}
\vspace*{-15px}
\renewcommand{\baselinestretch}{1.08}
|
2,869,038,156,269 | arxiv | \section{Introduction}
The physics of graphene, a two-dimensional (2D) allotrope of carbon,
presents a unique opportunity to explore properties of gapless (massless)
Dirac fermions in a solid-state context \cite{review}. Because graphene
is a truly 2D material from electronic point of view, it is also an
interesting system where one can explore the important issue of electron-electron
interactions. The interaction problem is a central
one for the physics of low-dimensional electron systems.
Although the electron-electron interactions in graphene are not internally
screened, that is, they remain long-range and decay like $1/r$ (the
electric field lines propagate in 3D away from the graphene plane
and one has the ordinary Coulomb law), there is little experimental evidence,
if any, that electron-electron interactions play a role in graphene physics.
It is possible that the interactions between the fermions are actually
dielectrically screened by the presence of substrates, onto which
graphene is deposited in most experimental setups. Nevertheless, there
is a fast growing literature in suspended graphene samples \cite{suspended}
that will eventually tell us much more about electron-electron interactions
in this amazing material.
In graphene, the strength of the Coulomb interaction relative to the
kinetic energy is given by the dimensionless coupling constant (also
called graphene's ``fine structure constant''),
\begin{equation}
\alpha = \frac{e^2}{\hbar v},
\end{equation}
where $v$ is the Fermi velocity, and we absorbed the environmental
dielectric constant ($\epsilon_0$) into the definition of the charge
$e$. For graphene in vacuum, as in the case of suspended samples,
$\alpha$ reaches its maximum value, $\alpha \approx 2.2$, i.e.
interaction effects are expected to be strong. One of the ways
strong-coupling effects can manifest themselves theoretically is
through spontaneous generation of a mass $m$ (chiral symmetry breaking).
In solid state language, mass generation is equivalent to the opening
of a gap $\Delta=2|m|$ in the electronic spectrum. Hence, in this work, the
terms ``mass'' and ``gap'' are interchangeable.
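As a quick numerical check of the estimate $\alpha\approx 2.2$: in Gaussian units $\alpha=e^2/(\hbar v)=(e^2/\hbar c)\,(c/v)$, i.e. the QED fine-structure constant enhanced by the ratio $c/v\approx 300$ quoted below. A one-line verification (the input numbers are the standard ones, not specific to this paper):

```python
alpha_qed = 1.0 / 137.036      # e^2 / (hbar c), the QED fine-structure constant
c_over_v = 300.0               # Fermi velocity v ~ c/300 in graphene
alpha_graphene = alpha_qed * c_over_v
print(alpha_graphene)          # ~2.19, consistent with alpha ~ 2.2 in vacuum
```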
In relativistic quantum electrodynamics (QED) in two space (plus one time)
dimensions, QED$_{2+1}$, the study of this phenomenon started quite a while
ago \cite{2dqed,criticalqed} and is still going strong today. Graphene is actually
different from QED$_{2+1}$ because only the fermions are confined to a 2D
plane, while the field lines extend through the whole 3D space. In addition,
the Coulomb interaction in graphene can be considered instantaneous since
the speed of light $c$ is much larger than the Fermi velocity
($v \approx c/300$). Hence, Lorentz invariance is not respected,
which reflects the non-relativistic, purely band origin of the Dirac
quasiparticles. For the case of graphene, mass generation has been
predicted \cite{kve,kve2}, although no experimental signatures have yet been
detected. An in-plane magnetic field also favors an excitonic condensate (gap) \cite{tsvelik}.
There has been a surge of recent numerical activity on
the problem of mass generation in graphene (without external field), \cite{drut} and a consensus
seems to have been reached that above $\alpha_c \approx 1.1$, mass generation
occurs \cite{kve,drut}.
Another way to analyze this phenomenon is as a function of $N$, the number
of fermion species, which for graphene is $N=4$ due to the spin (2)
and valley (2) degeneracies. In the strong coupling limit
$\alpha \rightarrow \infty$, generation of mass occurs below a critical
$N$ which was estimated to be $N_c \approx 7-9$ \cite{kve,drut}.
In experiments, a detectable gap has so far been observed only in a
situation when it is actually due to external factors, such as the
presence of a substrate with specific symmetry, creating sublattice
asymmetry in the graphene plane and thus making the graphene electrons
massive (gapped) \cite{gap}. However, it is quite possible, as already
mentioned, that in ``suspended'' graphene, whose exploration has just
begun, the gap generically exists due to the strong quasiparticle interaction.
Whether graphene breaks overall parity (sublattice symmetry) in the
process of spontaneous mass generation depends on the details of the interactions.
Long-range Coulomb interactions \cite{kve} favor equally parity-even and parity-odd combinations
of masses in the two Dirac cones (valleys).
On the other hand in QED$_{2+1}$ it is usually argued that
parity-breaking mass generation is not possible, but this has
to do with the presence of vector interactions in the fully
relativistic model \cite{parity}.
In this work we study the effect of
a finite gap on the quasiparticle interactions (in particular modifications
of Coulomb's law), and the renormalization of the quasiparticle parameters,
such as the Fermi velocity and the gap itself.
Our main goal has been to compute corrections to those quantities under the
assumption that the system is already massive, e.g. due to external
factors explicitly breaking the sublattice symmetry, as mentioned above. However at the end of
the paper we also present estimates how the new physics we find can
possibly affect the spontaneous formation of a gap via the excitonic
mechanism. We do not address the issue of parity breaking since
only single valley Coulomb interactions are considered.
For massless graphene,
the large-N limit was studied recently in Refs.~[\onlinecite{son,foster}],
extending earlier results \cite{Gonz}. The present work can be viewed as
an extension of those studies to the massive case. We also find that for
massive Dirac fermions unconventional modification of the interaction
vertex and fermion's properties can occur, not possible for strictly massless
quasiparticles. In particular, we find that there is a crossover regime
for $N \alpha\gg1$ where the 3D Coulomb law, $V(r) \sim 1/r$, is modified, due to the confinement of the electric field lines, to a 2D Coulomb law, $V(r) \sim \ln(1/r)$,
with strong renormalizations of the quasiparticle properties. In this non-perturbative
regime the photon field is confined to the graphene plane
leading to a situation where the Dirac electrons form a 2D ``relativistic'' Coulomb gas \cite{marino}. Such an electronic state has never been observed
in nature before and, perhaps, with developments in the control of the structure
of graphene samples, it may be studied soon.
The paper is organized as follows. In the next section, for completeness,
we analyze the weak
coupling regime of small $\alpha$, which can also be relevant to real
situations (when the substrate screening is strong). In section \ref{onen}
we study the $1/N$ expansion for the electronic properties of gapped
Dirac fermions and show the existence of this new intermediate regime
where weak confinement of the electric field lines leads to strong renormalization of
electron-electron interactions. We also calculate the implications
of this new regime in the quasiparticle properties.
Section \ref{conclusions} contains our conclusions, and in Appendix A some estimates
related to the excitonic gap formation are summarized.
\section{Interaction potential: weak-coupling regime}
\label{weak}
Our starting point is a model of two-dimensional massive Dirac fermions with a gap
$\Delta =2|m|$. The Hamiltonian of the system is \cite{review}
\begin{equation}
\label{ham1}
H = \sum_{\bf{p}} \Psi_{\bf{p}}^{\dagger} ( v {\bm \sigma} \cdot{\bf p} \pm m \sigma_{3})
\Psi_{\bf{p}} + H_{I},
\end{equation}
where $H_{I}$ is the quasiparticle interaction
\begin{equation}
\label{ham2}
H_{I} = \frac{1}{2} \sum_{\bf{p}} n_{\bf p} V({\bf p}) n_{\bf-p}, \
\ n_{\bf p} \equiv \sum_{\bf{q}} \Psi_{\bf{q+p}}^{\dagger} \Psi_{\bf{q}},
\end{equation}
and the potential $V({\bf p})$ will be specified later.
We work in a two-component representation, so that $\sigma_i, i=1,2,3$ are the Pauli
matrices; $\hat{\sigma}_{0} = \hat{I}$ is the 2$\times$2 identity matrix, often omitted
for simplicity, and the vector ${\bm \sigma}= (\sigma_1,\sigma_2)$.
Thus the Hamiltonian \eqref{ham1} describes the physics in a single Dirac cone (valley).
The two valleys in graphene are connected by time reversal, which translates into
opposite signs of the mass term: the sign $``+"$ in \eqref{ham1}
correspond to one of the valleys, while the sign $``-"$ to the other one.
All formulas that follow
are invariant under a change of the mass sign. The valley (and spin) indexes in \eqref{ham1},\eqref{ham2}
are omitted for simplicity, and it is understood that the spinors $\Psi_{\bf{q}}$ and the density
$n_{\bf p}$ describe a given single valley.
The low energy electronic spectrum (dispersion close
to the Fermi energy), for $H_{I}=0$, is given by:
\begin{equation}
\label{dis}
E^{\pm}_{\bf k} \ = \pm \sqrt{v^2{\bf k}^2 + m^2}.
\end{equation}
The interaction $H_{I}$ renormalizes both $v$ and $m$, as we show
later.
We will only analyze the case when the system is an insulator,
i.e. the chemical potential is in the gap (e.g. fixed to be zero).
It is quite remarkable that in recent experiments in gapped samples
the chemical potential can actually be moved from the electron to the
hole side through the gap, thus causing a metal-insulator transition
\cite{gap2}. In addition to using $\hbar=1$ everywhere
we also put $v=1$ in all intermediate formulas and restore it
only at the end, when necessary.
The polarization function, $\Pi({\bf q},\omega)$, for massive Dirac fermions
in the random phase approximation (RPA) was most recently analyzed in detail
in Ref.~[\onlinecite{vitor2}] (and can also be deduced by appropriate
modification of the Lorentz-invariant results in QED$_{2+1}$ \cite{2dqed}).
The result is:
\begin{equation}
\label{pol3}
\Pi(\textbf{q},\omega) \! = \! -N\frac{|{\bf q}|^2}{4\pi}
\Biggl\{
\frac{m}{q^2} + \frac{1}{2q}
\biggl(1 \!-\! \frac{4m^2}{q^2} \biggr)
\arctan{\Bigl(\frac{q}{2m}\Bigr)}
\Biggr\},
\end{equation}
where $q$ is the ``3-momentum'',
\begin{equation}
\label{q}
q=\sqrt{|{\bf q}|^2-\omega^2}.
\end{equation}
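As a quick numerical sanity check (a sketch in Python, not part of the original derivation), the static limit of \eqref{pol3} can be implemented directly; the two printed ratios verify the large- and small-momentum behaviors that are used in the following sections:

```python
from math import atan, pi

def polarization_static(q, m, N=4):
    # Static RPA polarization Pi(q, omega=0) of Eq. (pol3),
    # in units v = hbar = 1.
    return -N * q**2 / (4 * pi) * (
        m / q**2
        + (1 - 4 * m**2 / q**2) * atan(q / (2 * m)) / (2 * q)
    )

# Large momenta, q >> m: Pi -> -N*q/16 (the massless result)
print(polarization_static(1e3, 1.0) / (-4 * 1e3 / 16))           # ~1

# Small momenta, q << m: Pi -> -N*q**2/(12*pi*m), Eq. (pol1)
print(polarization_static(1e-3, 1.0) / (-4 * 1e-6 / (12 * pi)))  # ~1
```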
In RPA the effective potential is given by:
\begin{equation}
\label{charge}
V({\bf{q}}) = \frac{V^{(0)}_{{\bf{q}}}}{1- V^{(0)}_{{\bf{q}}}\Pi(q)}, \ \ \
V^{(0)}_{{\bf{q}}} = \frac{2\pi \alpha}{|{\bf{q}}|} \, .
\end{equation}
Firstly, we study the behavior of the potential for weak coupling
$N\alpha \ll 1$, and in this part of the work we fix $N$ to its
graphene value $N=4$. The first order correction to the static potential
is: $\delta V({\bf{q}}) \approx (V^{(0)}_{{\bf{q}}})^2 \Pi(\textbf{q},\omega=0)$.
After transforming back to real space, we represent the correction by
the function $C(r)$, and write the total potential as
\begin{equation}
\label{pot0}
V(r) \approx \frac{\alpha}{r} \Bigl(1 + \alpha C(r) + O(\alpha^2) \Bigr).
\end{equation}
By using \eqref{pol3}, we obtain
\begin{eqnarray}
\label{Cr}
C(r) &= & -2(mr) \int_{0}^{\infty} dx J_{0}(2mrx) \nonumber\\
&& \times\left \{ \frac{1}{x} + \biggl(1 - \frac{1}{x^2} \biggr)
\arctan{(x)} \right \},
\end{eqnarray}
which is shown in Fig.~\ref{Fig1} (lower panel).
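The correction $C(r)$ can be evaluated numerically. Below is a sketch in Python (assuming SciPy is available); since the curly bracket in \eqref{Cr} tends to $\pi/2$ at large $x$, the integral is only conditionally convergent, so we subtract that constant using $\int_0^\infty J_0(ax)\,dx = 1/a$ and are left with a remainder that decays as $x^{-2}$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def C(mr):
    # C(r) of Eq. (Cr) as a function of the dimensionless product m*r.
    # The bracket {1/x + (1 - 1/x^2) arctan(x)} tends to pi/2 at large x;
    # splitting that constant off leaves an absolutely convergent remainder.
    def h(x):
        return 1.0 / x + (1 - 1.0 / x**2) * np.arctan(x) - np.pi / 2
    rest, _ = quad(lambda x: j0(2 * mr * x) * h(x), 0, 200, limit=400)
    return -np.pi / 2 - 2 * mr * rest

print(C(0.01))  # short distance: tends to the massless value -pi/2 ~ -1.57
print(C(3.0))   # long distance: exponentially suppressed, cf. Eq. (ass)
```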
At this point we also find it useful to compare our results with the
well studied case of conventional QED$_{3+1}$ \cite{BLP}, where
interaction effects are also governed by a dimensionless coupling,
the fine structure constant $\alpha_{QED}$. Eq.\eqref{pot0} has the same
form (with the substitution $\alpha \rightarrow \alpha_{QED} =1/137$),
leading to the so-called Uehling potential. The plot of $C_{QED}(r)$ in
this case is shown in Fig.~\ref{Fig1} (upper panel) \cite{QED}.
It should be mentioned that (essentially as a consequence of the
uncertainty principle) vacuum polarization effects are expected to be
strong only at distances below the Compton wavelength $r < \lambda_C = 1/m$.
For our case we find, by evaluating the long distance asymptotic behavior
of the integral \eqref{Cr} \cite{roman},
\begin{equation}
\label{ass}
C(r \gg m^{-1}) \sim -\sqrt{\frac{\pi}{mr}} \ e^{-2mr} \, .
\end{equation}
At short distances we find: $C(r \ll m^{-1}) = -\pi/2$, which
is the well-known massless limit.
Thus we observe two main differences
between massive Dirac fermions and QED: (1) The sign of the correction
is different, i.e. in graphene vacuum polarization weakens the potential.
This can be traced to the fact that the charge itself $e^2$ is not
renormalized in graphene \cite{vitor2}, while it diverges logarithmically
at small distances in QED (in other words the distribution of the vacuum
polarization charge is very different in the two cases \cite{vitor2}).
(2) The magnitude of the correction in graphene is more than 10 times
larger (for typical distances), compared to QED (see Fig.~\ref{Fig1}).
This seems to be related to the dimensionality of the problem. Thus,
we conclude that (at weak coupling) polarization effects can be appreciable
in a wide range of distances, especially if one assumes that Eq.\eqref{pot0}
can be used also away from its strict applicability limit $\alpha \ll 1$.
Typically such potential modification effects are important for calculation
of localized energy levels (in the gap); so far such studies have not been
performed experimentally in graphene.
The renormalization of $v$ and $m$ in the weak-coupling regime was addressed
in Ref.~[\onlinecite{vitor2}]
and will not be repeated here; the main difference from
the massless case is that the mass provides an effective infrared cutoff
where the renormalization stops, and the mass itself increases
logarithmically (to first order in $\alpha$).
\begin{figure}[tb]
\centering
\includegraphics*[width=0.45\textwidth]{Uehling.eps}
\caption{(Color online.) Plot of $C(r)$ as defined by Eq.\eqref{pot0}, for the case of graphene (blue line) and
3D QED (red line).}
\label{Fig1}
\end{figure}
\section{$1/N$ expansion}
\label{onen}
Now, we analyze the limit $N\alpha \gg 1$, and proceed to evaluate
the renormalization of the potential and the quasiparticle properties.
We view the calculation as a two-step procedure, which is not exact
but will simplify the problem technically: first we calculate divergent
terms that originate from intermediate integration in the high momentum
region ($m \ll |{\bf q}| \ll \Lambda$), where $\Lambda$ is the ultraviolet
cutoff for graphene ($\Lambda \sim 1/a$, where $a \approx 1.42$ \, \AA \,
is the lattice spacing) and after that, as a second step, we concentrate
on renormalization from low momenta $|{\bf q}| < m$ (such renormalization
will be more severe in the strong coupling limit $\alpha \to \infty$).
\subsection{High momentum regime: $m\ll|{\bf q}| \ll \Lambda$ }
To implement the first (large momentum) part of the scheme, one naturally
expects the mass term to be unimportant; more formally, we can expand:
\begin{equation}
\label{pol2}
\Pi({\bf q}, \omega) = -\frac{N|{\bf q}|^2 }{4q} \left \{ \frac{1}{4} - \frac{m^2}{q^2} + ... \right \} \,,
\quad
q \gg m
\,.
\end{equation}
Keeping only the first term, one arrives at the effective potential,
identical to the one found for the massless case \cite{Gonz,son}:
\begin{equation}
\label{potrpa}
V({\bf{q}}, \omega) \approx \frac{2\pi \alpha}{|{\bf q}|}\left \{ 1 +
\frac{\pi g}{8}\frac{|{\bf q}|}{q} \right \}^{-1} \, ,
\end{equation}
where we use the notation
\begin{equation}
g \equiv N \alpha,
\end{equation}
and $q$ is defined by Eq.\eqref{q}.
Now we evaluate the self-energy correction to one loop, in the limit
$g \gg 1$, where the second term in \eqref{potrpa}
dominates. The self energy is proportional to $1/N$ in this case.
The Green's function is
\begin{equation}
\label{GF}
\hat{G}^{-1}(\textbf{k},\omega)
= \omega - v\,{\bm \sigma}\cdot\textbf{k} -
m\, \sigma_3 - \hat{\Sigma}(\textbf{k},\omega)+ i\eta {\mbox{sign}} (\omega)\ ,
\end{equation}
where the self-energy at one-loop level is
\begin{equation}
\label{se}
\hat{\Sigma}(\textbf{k},\omega) = i \int \frac{d^{2}p d \varepsilon }{(2\pi)^{3}}
\hat{G}_{0}(\textbf{k} +\textbf{p},\omega +\varepsilon )
V(\textbf{p}, \varepsilon) \, .
\end{equation}
Here $\hat{G}_{0}$ is the free Green's function.
By expanding
\begin{equation}
\hat{\Sigma} = \omega \Sigma_{0} + v{\bm \sigma}\cdot\textbf{k} \Sigma_{v} + m\sigma_3 \Sigma_{m}\, ,
\end{equation}
we find
\begin{equation}
\label{gf2}
\hat{G}({\bf k},\omega) =
\frac{Z}{\omega - Z(1+\Sigma_{v} )v{\bm \sigma}\cdot\textbf{k} -Zm(1+\Sigma_{m})\sigma_3}\, ,
\end{equation}
where $Z$ is the quasiparticle residue:
\begin{equation}
Z^{-1} = 1 - \Sigma_{0} \, .
\end{equation}
The calculation of the velocity renormalization is then practically identical
to the massless case \cite{son}, except one should keep in mind that
the mass provides an effective infrared cutoff in the integrals (due to the
finite value of the dispersion at zero momentum \eqref{dis}). Thus we put the mass
to zero to avoid lengthy formulas and keep the above in mind.
One finally obtains ($\delta v(k)$ stands for the velocity correction at one-loop order)
\begin{equation}
\frac{\delta v(k)}{v} = \Sigma_{0} + \Sigma_{v} \!= \!-i \frac{16}{N} \int_{k}^{\Lambda}\frac{ p dp}{(2\pi)}
\int_{-\infty}^{\infty}\frac{ d \omega}{(2\pi)}(p^2 \!-\!\omega^2)^{-3/2},
\end{equation}
where $k$ is the external momentum (neglected in the Green's function), and written
as an effective infrared cutoff (thus we assume $k \gg m$; in the opposite limit $m$ is the cutoff).
By performing Wick's rotation $\omega \rightarrow i\omega$ (which avoids crossing any poles) \cite{BLP},
one finds
\begin{equation}
\label{velocity1}
\delta v(k)/v = \frac{8}{\pi^2}\frac{1}{N} \ln(\Lambda/k), \ \ m \ll k \ll \Lambda \, .
\end{equation}
As expected, the result in this region is identical to the massless case \cite{son}.
For the mass renormalization we have, written in more detail,
\begin{eqnarray}
\label{mass}
\frac{\delta m(k)}{m} & =& \Sigma_{0} + \Sigma_{m} = -i \frac{16}{N} \int_{k}^{\Lambda}\frac{ p dp}{(2\pi)}
\int\frac{ d \omega}{(2\pi)} \frac{\sqrt{p^2-\omega^2}}{p^2} \nonumber\\
&& \times \left \{ \frac{p^2+\omega^2}{(p^2-\omega^2)^{2}} + \frac{1}{p^2-\omega^2} \right \} \, .
\end{eqnarray}
Here the first term in the curly brackets corresponds to $\Sigma_{0}$ and the second one
to $\Sigma_{m}$. The final result is
\begin{equation}
\label{mass1}
\delta m(k)/m = \frac{16}{\pi^2}\frac{1}{N} \ln(\Lambda/k), \ \ m \ll k \ll \Lambda \, .
\end{equation}
Finally, the quasiparticle residue $Z$ is determined by the behavior of $\Sigma_{0}$.
However in the limit $g \gg 1$ one encounters some complications,
because even the frequency integral in the expression for $\Sigma_{0}$
(see \eqref{mass}) is logarithmically divergent (although in the sum
$\Sigma_{0} + \Sigma_{m}$ this additional divergence is not present).
This means that we have to use the full, finite $g$ potential from
\eqref{potrpa}. Performing the calculation one finds
\begin{equation}
\Sigma_{0} = \frac{\alpha}{\pi} \ln(\Lambda/k) \int_{0}^{\infty}dx \frac{1-x^2}{(g\pi/8 + \sqrt{1+x^2})(1+x^2)^{3/2}} \, ,
\end{equation}
and therefore,
\begin{equation}
Z \approx 1 - \frac{8}{\pi^2}\frac{1}{N}\ln(g\pi/4) \ln(\Lambda/k), \ g \gg 1, \ m \ll k \ll \Lambda \, .
\end{equation}
The results for the mass and velocity renormalization \eqref{velocity1},
\eqref{mass1} can be used to form renormalization group (RG) equations
for these quantities. The reasoning is similar to the one presented, for example, in Ref.~[\onlinecite{son}] for the massless case. One integrates out the high momentum degrees of
freedom, i.e. momentum regions $\Lambda > |\textbf{p}| > \Lambda_1 $,
and the results vary with the quantity $\ln(\Lambda/\Lambda_1) \equiv l$.
As evident from \eqref{velocity1} and \eqref{mass1} the renormalization should
stop at a scale $\sim m$. For $m$ large enough and $N\alpha\gg1$, the functional form of the potential \eqref{potrpa} is not
significantly affected by the RG flow before it stops. The RG equations in that case are:
\begin{eqnarray}
\frac{dv}{dl}=\frac{8}{N\pi^2}v \, ,
\nonumber
\\
\frac{dm}{dl}=\frac{16}{N\pi^2}m \, ,
\end{eqnarray}
that have the solutions:
\begin{equation}
\label{renorm1}
m(k) = m \left (\frac{\Lambda}{k} \right)^{\frac{16}{N\pi^2}}, \ \
v(k) = v \left (\frac{\Lambda}{k} \right)^{\frac{8}{N\pi^2}} \, ,
\end{equation}
which are valid in the region $m \ll k \ll \Lambda$.
Here $m,v$ are the corresponding quantities at the ultraviolet scale
$\Lambda$, i.e. their initial band values at the lattice scale.
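For illustration (a sketch; the values $N=4$ and $k/\Lambda = 10^{-2}$ are chosen arbitrarily), the power-law solutions \eqref{renorm1} are straightforward to evaluate; note that the mass runs with exactly twice the exponent of the velocity, so $m(k)/m = (v(k)/v)^2$ along the flow:

```python
from math import pi

def running(k_over_Lambda, m0=1.0, v0=1.0, N=4):
    # One-loop RG solutions, Eq. (renorm1), valid for m << k << Lambda;
    # m0 and v0 are the bare values at the lattice (ultraviolet) scale.
    ratio = 1.0 / k_over_Lambda                 # Lambda / k
    m_k = m0 * ratio ** (16 / (N * pi**2))
    v_k = v0 * ratio ** (8 / (N * pi**2))
    return m_k, v_k

m_k, v_k = running(k_over_Lambda=1e-2)
print(m_k, v_k)            # both grow toward the infrared
print(abs(m_k - v_k**2))   # ~0: the mass runs exactly twice as fast
```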
\subsection{Low-momentum regime: $|{\bf q}|\ll m $ }
Now we proceed to analyze the low-momentum region. In this limit,
the polarization \eqref{pol3} can be expanded to give:
\begin{equation}
\label{pol1}
\Pi({\bf q}, \omega) = -\frac{N|{\bf q}|^2 }{12\pi m} \left \{ 1 - \frac{q^2}{10m^2} + ... \right \} \,,
\quad
q \ll m
\,.
\end{equation}
We keep only the first term, as the other terms decrease quite fast in powers
of $q/m$. Notice also that in this limit $ \Pi({\bf q}, \omega)$ becomes
frequency independent. The corresponding RPA effective potential is:
\begin{equation}
\label{rpa}
V({\bf q})
\approx \frac{2\pi\alpha}{|{\bf q}|+ \tilde{g} |{\bf q}|^{2}/m}\,, \ \ |{\bf q}| \lesssim m \, ,
\end{equation}
where we have defined:
\begin{eqnarray}
\tilde{g} \equiv g/6 = N\alpha/6 \, .
\end{eqnarray}
By direct numerical evaluation of the polarization bubble \cite{vitor2},
we actually find that the above formula is valid even up to
$|{\bf q}| \sim m$.
In the strict long-distance limit $|{\bf q}| \rightarrow 0$ the above
potential tends to the pure Coulomb potential. However, in the limit
$\tilde{g} \rightarrow \infty$ there is an intermediate window of momenta,
$m/\tilde{g} \ll |{\bf q}| < m$, where the potential crosses over to the 2D
Coulomb's law,
\begin{equation}
\label{rpa21}
V({\bf q}) \approx \frac{12 \pi m}{N} \frac{1}{|{\bf q}|^2}, \ \ m/\tilde{g} \ll |{\bf q}| < m.
\end{equation}
In real space we have:
\begin{equation}
\label{rpa2}
V(r) \approx \frac{6}{N} m \ln{\Bigl (\frac{\tilde{g}}{mr} \Bigr )}\,, \ \ \frac{\tilde{g}}{m} \gg r \gg \frac{1}{m} \, .
\end{equation}
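The crossover encoded in \eqref{rpa} is easy to see numerically. In the sketch below (Python; the value $\alpha = 300$ is unphysically large and chosen only to open a wide intermediate window $m/\tilde{g} \ll |{\bf q}| < m$), the combination $qV(q)$ recovers the 3D Coulomb constant $2\pi\alpha$ at the smallest momenta, while $q^2 V(q)$ recovers the 2D constant $12\pi m/N$ of \eqref{rpa21} in the intermediate window:

```python
from math import pi

def V(q, m=1.0, alpha=300.0, N=4):
    # RPA potential of Eq. (rpa), valid for |q| < m; alpha = 300 is
    # unphysically large, chosen only to widen the intermediate window.
    g_tilde = N * alpha / 6                 # here g_tilde = 200
    return 2 * pi * alpha / (q + g_tilde * q**2 / m)

q = 1e-5            # q << m/g_tilde = 0.005: 3D Coulomb law
print(q * V(q) / (2 * pi * 300.0))          # ~1

q = 0.3             # m/g_tilde << q < m: 2D Coulomb law, Eq. (rpa21)
print(q**2 * V(q) / (12 * pi / 4))          # ~1, within a few percent
```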
We also comment on some other situations in electrodynamics when the Coulomb
potential can be strongly modified.
It is known that in compact QED$_{2+1}$ linear confinement can occur due
to non-perturbative instanton effects \cite{Polyakov}. It is also possible to
have a confining potential in ferroelectrics where compact field configurations are favored leading to linear confinement of charges \cite{Kir}.
Intermediate logarithmic behavior (in real space), similar to \eqref{rpa2},
can occur in thin films (of thickness $d$) with large dielectric constant
$\kappa \gg 1$ \cite{boriss}. In that case the logarithmic behavior is
limited to the region: $d \ll r \ll \kappa d$, i.e. $d$ plays the role
(formally) of the ``Compton" wavelength $1/m$ in our case, and $ \kappa d$
plays the role of $N \alpha/m$. It was argued that such an intermediate
regime can actually lead to modification of the variable-range-hopping law
in systems with strong disorder. Finally, a mechanism similar to the one
found in this work, i.e. due to the dominance of fluctuations over the bare
potential, was explored in the context of high temperature superconductivity
models based on QED (where it contributes to spinon deconfinement-confinement
transition) \cite{Spinons}.
\begin{figure}[tb]
\centering
\includegraphics*[width=0.46\textwidth]{figSE.eps}
\caption{Two diagrams contributing to the self-energy at two-loop level.
The wavy line stands for the potential \eqref{rpa}.}
\label{Fig2}
\end{figure}
Now we show that the low-momentum region where \eqref{rpa} is valid,
also contributes singularly to the self-energy, in the strong-coupling limit $\alpha \gg 1$.
Using again the expression \eqref{se} with the potential \eqref{rpa},
we obtain for the velocity renormalization at one-loop level,
\begin{equation}
\frac{\delta v^{(1)}(k)}{v} = i\int_{k}^{m}\frac{p dp}{(2\pi)}
\int\frac{ d \omega}{(2\pi)} \frac{1}{\omega^2-m^2}V(|\textbf{p}|)\ .
\label{first}
\end{equation}
Here we have put the fermion energies $E_{\textbf{p}} \approx m$. Because
the potential is static, there is no residue renormalization ($Z=1$) in
this momentum range, and we have finally
\begin{eqnarray}
\label{vel2}
\frac{\delta v^{(1)}(k)}{v} &=& \frac{3}{N}\ln{\left (\frac{1+ \tilde{g}}{1+\tilde{g} k/m} \right )}
\approx \frac{3}{N}\ln(m/k), \nonumber \\
&& {\mbox{ \hspace{2.5cm} } } m/\tilde{g} \ll k \ll m \, .
\label{vFirst}
\end{eqnarray}
Performing the same calculation for the mass, one easily finds
\begin{equation}
\delta m^{(1)}(k)/m = \delta v^{(1)}(k)/v \ .
\label{mFirst}
\end{equation}
Let us also examine the two-loop self-energy which is given by the two diagrams
of Fig.~\ref{Fig2}. After some rather involved calculations we obtain the
first, ``rainbow" contribution (Fig.~\ref{Fig2}(a))
\begin{equation}
\hat{\Sigma}^{(2)}_{a} = -\frac{1}{N^2} \Bigl ( \frac{9}{2} v{\bm \sigma}\cdot\textbf{k} + \frac{9}{4} m\sigma_3
\Bigr )\ln(m/k) \, ,
\end{equation}
in the low-momentum region $m/\tilde{g} \ll k \ll m$.
For the vertex correction to the self-energy,
shown in Fig.~\ref{Fig2}(b), we find:
\begin{equation}
\hat{\Sigma}^{(2)}_{b} = -\frac{1}{N^2} \bigl (\omega + 3 v{\bm \sigma}\cdot\textbf{k} + 3 m\sigma_3 \bigr )\ln(m/k) \ .
\end{equation}
From here we find the corrections to $v,m,Z$ at this order:
\begin{eqnarray}
\label{}
\frac{\delta v^{(2)}}{v} &=& -\frac{17}{2}\frac{1}{N^{2}}\ln(m/k)\ ,
\nonumber \\
\frac{\delta m^{(2)}}{m} &=& -\frac{25}{4}\frac{1}{N^{2}}\ln(m/k)\ ,
\nonumber \\
Z^{(2)} &\approx& 1 - \frac{1}{N^{2}}\ln(m/k)\ ,
\label{second}
\end{eqnarray}
valid for $ m/\tilde{g} \ll k \ll m$.
Observe that a single logarithm appears at two loops, meaning that we do not
have a conventional renormalization group situation (piling up of leading logs) in this low-momentum
region. This behavior is similar to the case of a static Coulomb potential
in the massless case, where a single log appears up to second order of
perturbation theory \cite{Misch}.
Let $m^{*},v^{*}$ be the values of these quantities at the lowest scale
$k \sim m/\tilde{g} \ll m$. Keeping only the dominant one loop contribution,
we can estimate
\begin{equation}
\label{renorm3}
v^{*}/v = m^{*}/m \approx 1 + \frac{3}{N}\ln(\tilde{g}) + O(1/N^2) \ .
\end{equation}
From equations \eqref{dis},\eqref{renorm3} one can also deduce
the correction to the dispersion at low momenta:
\begin{equation}
|E_k| \approx m^{*} + \frac{(v^{*})^2k^2}{2m^{*}} \approx \left ( 1 + \frac{3}{N}\ln(\tilde{g})
\right ) \left ( m + \frac{v^2k^2}{2m} \right ),
\end{equation}
for $k \sim m/\tilde{g} \ll m$.
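To get a feeling for the numbers, the one-loop estimate \eqref{renorm3} can be evaluated for illustrative couplings (a sketch; $\alpha \approx 2.2$ roughly corresponds to graphene in vacuum, for which $g = N\alpha \approx 9$ as quoted in the conclusions):

```python
from math import log

def low_momentum_enhancement(alpha, N=4):
    # One-loop estimate of v*/v = m*/m, Eq. (renorm3), with g_tilde = N*alpha/6.
    # Trustworthy only while (3/N)*log(g_tilde) is small, i.e. in the 1/N regime.
    g_tilde = N * alpha / 6
    return 1 + (3 / N) * log(g_tilde)

print(low_momentum_enhancement(2.2))   # ~1.29: a further ~30% enhancement
print(low_momentum_enhancement(1.5))   # = 1 exactly, since g_tilde = 1 here
```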
Regarding the validity of the above expansions, comparing Eqs.~\eqref{vFirst} and \eqref{second}
we note that at two loops the $1/N$ expansion
breaks down for $N\approx N_c = 17/6$, when the coefficient in front of the log vanishes and the trend in the renormalization of the velocity is reversed towards increasing $\alpha$. In the strong-coupling limit, $\alpha \to \infty$, the velocity renormalizes to zero, which in turn keeps $\alpha$ large; this indicates the possibility of an instability below the critical $N$. Although this analysis is not directly applicable to graphene, where $N=4$, it is similar in spirit to the prediction of a metal-insulator transition in massless graphene \cite{kve,drut}, which would take place around the critical value $\alpha_c\approx1$, where the usual perturbation theory in the Coulomb potential breaks down.
In this regard it is also important to explore how the change in the potential,
leading to the modified shape \eqref{rpa21}, can affect the spontaneous formation
of a mass gap via the non-perturbative excitonic mechanism \cite{kve,kve2}. The complete
self-consistent examination of this problem is well beyond the scope of this work \cite{criticalqcd};
however some estimates are presented in Appendix A. We can conclude (see
Eq.~\eqref{excitonicgap}) that the gap increases as $\tilde{g}$ increases,
\begin{equation}
m \approx \Lambda e^{-\pi N/4 + (3\pi/4) \ln(\tilde{g}) } = \Lambda \tilde{g}^{(3\pi/4)} e^{-\pi N/4} \ ,
\end{equation}
although this increase is
small in the perturbative $1/N$ regime when $\ln(\tilde{g})/N \ll 1$. On the other hand
when
\begin{equation}
\label{newscale}
\frac{3}{N}\ln(\tilde{g}) \sim 1 \ ,
\end{equation}
the gap enhancement is
substantial. However this strong-coupling regime is not accessible
within the conventional large $N$ philosophy, where $\tilde{g} = N \alpha/6$ is kept
fixed while $N\gg1$, so that the RPA self-consistent scheme is well justified.
One can also hope that the results represent correctly the behavior for $N$
fixed at its physical value ($N=4$) with $\tilde{g}$ large at strong coupling
($\alpha \gg 1$), but of course in this case diagrams beyond RPA might be
important and their influence remains unclear. Therefore we cannot draw
a definite conclusion about how the system behaves under the condition $\frac{3}{N}\ln(\tilde{g}) \sim 1$,
although at face value this criterion signals a transition into a new
low-energy regime which in itself deserves further study.
\section{Conclusions}
\label{conclusions}
In conclusion, we have studied the behavior of the interaction potential
and the quasiparticle properties both in the weak-coupling
$g = N \alpha \ll 1$ and the ``strong-coupling" $g = N \alpha \gg 1$ regimes.
In the latter case we have found an unconventional regime where the potential
crosses over from the usual 3D Coulomb potential to a 2D logarithmic behavior,
as if the field lines were confined in the plane. This is due to the fact
that vacuum fluctuations dominate over the bare potential, although they do
so in a limited momentum range. This physics can also lead to unusual
renormalization of physical quantities at distances well beyond the Compton
wavelength $1/m$ (i.e. momenta $q \ll m$), up to distances of order
$g/m \gg 1/m$. Such effects have not been studied, to the best of our
knowledge, in conventional QED due to the smallness of $\alpha_{QED}$.
In our case both the mass gap and the velocity ``keep running" (increase)
up to the larger distance $g/m$. Since for graphene in vacuum $g \approx 9$,
one can hope that this physics is observable. On the other hand,
the presence of various numerical coefficients makes this somewhat difficult,
e.g. in \eqref{rpa2} $\tilde{g} = g/6 = 1.5$ is probably too small to create a
large-enough intermediate energy window. Nevertheless, from purely
theoretical perspective, the massive case exhibits much richer behavior
compared to the gapless one.
\acknowledgments
We are grateful to R. Barankov, E. Fradkin, E. Marino, V. Pereira, and
O. Sushkov for many stimulating discussions. We thank Roman Barankov
for showing us the derivation of Eq.~(\ref{ass}).
This work was supported in part by the Office of Science, U.S.
Department of Energy under Contract DE-FG02-91ER45439 through the
Frederick Seitz Materials Research Laboratory at the University of
Illinois. AHCN acknowledges the partial support of the U.S. Department of Energy under
the grant DE-FG02-08ER46512.
\usepackage{hyperref}
\usepackage{enumerate}
\usepackage{mathtools}
\usepackage{etextools}
\usepackage[capitalise,noabbrev]{cleveref} %
\crefname{appsec}{Appendix}{Appendices}
\crefformat{equation}{(#2#1#3)}
\hypersetup{
colorlinks=true,
linkcolor=blue,
urlcolor=blue,
citecolor=blue
}
\DeclarePairedDelimiter\norm{\lVert}{\rVert}
\DeclarePairedDelimiterX{\dprod}[2]{\langle}{\rangle}{#1, #2}
\SetSymbolFont{stmry}{bold}{U}{stmry}{m}{n}
\allowdisplaybreaks
\newcommand{\overset{\mathrm{iid}}{\sim}}{\overset{\mathrm{iid}}{\sim}}
\begin{document}
\twocolumn[
\aistatstitle{Stable behaviour of infinitely wide deep neural networks}
\aistatsauthor{
Stefano Favaro\\
\texttt{[email protected]}
\And
Sandra Fortini\\
\texttt{[email protected]}
\And
Stefano Peluchetti\\
\texttt{[email protected]}
}
\aistatsaddress{
Department ESOMAS\\ University of Torino\\ and Collegio Carlo Alberto\\
\And
Department of Decision Sciences\\
Bocconi University\\
\And
Cogent Labs\\
} ]
\begin{abstract}
We consider fully connected feed-forward deep neural networks (NNs) where weights and biases are independent and identically distributed as symmetric centered stable distributions. Then, we show that the infinitely wide limit of the NN, under suitable scaling on the weights, is a stochastic process whose finite-dimensional distributions are multivariate stable distributions. The limiting process is referred to as the stable process, and it generalizes the class of Gaussian processes recently obtained as infinitely wide limits of NNs \citep{matthews2018gaussian}. Parameters of the stable process can be computed via an explicit recursion over the layers of the network. Our result contributes to the theory of fully connected feed-forward deep NNs, and it paves the way to expanding recent lines of research that rely on Gaussian infinitely wide limits.
\end{abstract}
\section{Introduction}\label{sec:intro}
The connection between infinitely wide deep feed-forward neural networks (NNs), whose parameters at initialization are independent and identically distributed (iid) as scaled and centered Gaussian distributions, and Gaussian processes (GPs) is well known \citep{neal1995bayesian,der2006beyond,lee2018deep,g.2018gaussian,matthews2018gaussian,1yang2019scaling}. Recently, this intriguing connection has been exploited in many exciting research directions, including: i) Bayesian inference for GPs arising from infinitely wide networks \citep{lee2018deep,garriga-alonso2018deep}; ii) kernel regression for infinitely wide networks which are trained with continuous-time gradient descent via the neural tangent kernel \citep{jacot2018neural,lee2019wide,arora2019exact}; iii) analysis of the properties of infinitely wide networks as function of depth via the information propagation framework \citep{poole2016exponential,schoenholz2017deep,hayou2019impact}.
A substantial gap in empirical performance has been shown between finite NNs and their infinitely wide GP counterparts, at least on some standard benchmark applications. Moreover, avoiding the undesirable empirical properties arising in the context of very deep networks has proved to be a difficult task. Given that, there is an increasing interest in extending the class of GPs arising in the limit of infinitely wide NNs, as a way forward to close, or even reverse, this empirical performance gap and to avoid, or slow down, pathological behaviors in very deep NNs.
Let $\mathcal{N}(\mu,\sigma^{2})$ denote the Gaussian distribution with mean $\mu\in\mathbb{R}$ and variance $\sigma^{2}\in\mathbb{R}^{+}$. Following the celebrated work of \cite{neal1995bayesian}, we consider the shallow NN
\begin{align*}
f_{i}^{(1)}(x)&=\sum_{j=1}^{I}w_{i,j}^{(1)}x_{j}+b_{i}^{(1)}\\
f_{i}^{(2)}(x,n)&=\frac{1}{\sqrt{n}}\sum_{j=1}^{n}w_{i,j}^{(2)}\phi(f_{j}^{(1)}(x))+b_{i}^{(2)},
\end{align*}
where $\phi$ is a nonlinearity, $i=1,\dots,n$, $w_{i,j}^{(1)},w_{i,j}^{(2)} \overset{\mathrm{iid}}{\sim} \mathcal{N}(0, \sigma_w^2)$, $b_{i}^{(1)},b_{i}^{(2)} \overset{\mathrm{iid}}{\sim} \mathcal{N}(0, \sigma_b^2)$ and $x \in \mathbb{R}^I$ is the input. It follows that
\begin{align*}
f_{i}^{(1)}(x) &\overset{\mathrm{iid}}{\sim} \mathcal{N}\left(0, \sigma_{f^{(1)}}^2(x)\right)\\
f_{i}^{(2)}(x,n)|f^{(1)} &\overset{\mathrm{iid}}{\sim} \mathcal{N}\left(0, \sigma_{f^{(2)}}^2(x,n)\right)\\
\sigma_{f^{(1)}}^2(x) &= \sigma_b^2 + \sigma_w^2 \frac{1}{I}\sum_{j=1}^{I}x_j^2\\
\sigma_{f^{(2)}}^2(x,n) &= \sigma_b^2 + \sigma_w^2 \frac{1}{n}\sum_{j=1}^{n}\phi(f_j^{(1)}(x))^2.
\end{align*}
If $x'$ is another input we obtain bivariate Gaussian distributions
\begin{align*}
(f_{i}^{(1)}(x),f_{i}^{(1)}(x')) &\overset{\mathrm{iid}}{\sim} \mathcal{N}_2\left(0, \Sigma_{f^{(1)}}(x,x')\right)\\
(f_{i}^{(2)}(x,n),f_{i}^{(2)}(x',n))|f^{(1)} &\overset{\mathrm{iid}}{\sim} \mathcal{N}_2\left(0, \Sigma_{f^{(2)}}(x,x',n)\right),
\end{align*}
where
\begin{align*}
\Sigma_{f^{(1)}}(x,x') &= \begin{bmatrix}\sigma_{f^{(1)}}^2(x)&c_{f^{(1)}}(x,x')\\c_{f^{(1)}}(x,x')&\sigma_{f^{(1)}}^2(x')\end{bmatrix}\\
\Sigma_{f^{(2)}}(x,x',n) &= \begin{bmatrix}\sigma_{f^{(2)}}^2(x,n)&c_{f^{(2)}}(x,x',n)\\c_{f^{(2)}}(x,x',n)&\sigma_{f^{(2)}}^2(x',n)\end{bmatrix}\\
c_{f^{(1)}}(x,x') &= \sigma_b^2 + \sigma_w^2 \frac{1}{I}\sum_{j=1}^{I}x_j x'_j\\
c_{f^{(2)}}(x,x',n) &= \sigma_b^2 + \sigma_w^2 \frac{1}{n}\sum_{j=1}^{n}\phi(f_j^{(1)}(x))\phi(f_j^{(1)}(x')).
\end{align*}
Let $\overset{\mathrm{a.s.}}{\longrightarrow}$ denote the almost sure convergence. By the strong law of large numbers we know that, as $n \rightarrow +\infty$, one has
\begin{align*}
\frac{1}{n}\sum_{j=1}^{n}\phi(f_j^{(1)}(x))^2 &\overset{\mathrm{a.s.}}{\longrightarrow} \mathbb{E}[\phi(f_1^{(1)}(x))^2]\\
\frac{1}{n}\sum_{j=1}^{n}\phi(f_j^{(1)}(x))\phi(f_j^{(1)}(x')) &\overset{\mathrm{a.s.}}{\longrightarrow} \mathbb{E}[\phi(f_1^{(1)}(x))\phi(f_1^{(1)}(x'))],
\end{align*}
from which one can conjecture that in the limit of infinite width the stochastic processes $f_i^{(2)}(x)$ are distributed as iid (over $i$) centered GPs with kernel $K(x,x')=\sigma_b^2 + \sigma_w^2\mathbb{E}[\phi(f_1^{(1)}(x))\phi(f_1^{(1)}(x'))]$. Provided that the nonlinearity $\phi$ is chosen so that $\phi(f_1^{(1)}(x))$ has a finite second moment, \cite{matthews2018gaussian} made this argument rigorous and extended it to deep NNs.
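This convergence is easy to probe by simulation. The following sketch (using NumPy; all numerical choices are illustrative) draws Monte Carlo replications of $f^{(2)}(x)$ for a single input with $\phi = \mathrm{ReLU}$, for which $\mathbb{E}[\phi(z)^2] = \sigma^2/2$ when $z \sim \mathcal{N}(0,\sigma^2)$, and compares the empirical variance with the limiting kernel value $K(x,x)$:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_w2, sigma_b2 = 1.0, 0.5
x = np.array([0.3, -1.2, 2.0])
n, R = 100, 20000                  # width, Monte Carlo replications

# f^(1)_j(x) are iid N(0, s1^2) with s1^2 = sigma_b^2 + sigma_w^2 * mean(x^2)
s1 = np.sqrt(sigma_b2 + sigma_w2 * np.mean(x**2))
f1 = rng.normal(0.0, s1, size=(R, n))
w2 = rng.normal(0.0, np.sqrt(sigma_w2), size=(R, n))
b2 = rng.normal(0.0, np.sqrt(sigma_b2), size=R)
f2 = (w2 * np.maximum(f1, 0.0)).sum(axis=1) / np.sqrt(n) + b2

theory = sigma_b2 + sigma_w2 * s1**2 / 2   # limiting kernel K(x, x) for ReLU
print(f2.var(), theory)                    # agree within Monte Carlo error
```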
A key assumption underlying the interplay between infinite wide NNs and GPs is the finiteness of the variance of the parameters' distribution at initialization. In this paper we remove the assumption of finite variance by considering iid initializations based on stable distributions, which includes Gaussian initializations as a special case. We study the infinite wide limit of fully connected feed-forward NN in the following general setting: i) the NN is deep, namely the NN is composed of multiple layers; ii) biases and scaled weights are iid according to centered symmetric stable distributions; iii) the width of network's layers goes to infinity jointly on the layers, and not sequentially on each layer; iv) the convergence in distribution is established jointly for multiple inputs, namely the convergence concerns the class of finite dimensional distributions of the NN viewed as a stochastic process in function space. See \cite{neal1995bayesian} and \cite{der2006beyond} for early works on NNs under stable initialization.
Within our setting, we show that the infinite width limit of the NN, under suitable scaling on the weights, is a stochastic process whose finite-dimensional distributions are multivariate stable distributions \citep{samoradnitsky2017stable}. This process is referred to as the stable process. Our result may be viewed as a generalization of the main result of \cite{matthews2018gaussian} to the context of stable distributions, as well as a generalization of results of \cite{neal1995bayesian} and \cite{der2006beyond} to the context of deep NNs. Our result contributes to the theory of fully connected feed-forward deep NNs, and it paves the way to extend the research directions i) ii) and iii) that rely on Gaussian infinite width limits. The class of stable distributions is known to be especially relevant. Indeed, while the contribution of each Gaussian weight vanishes as the width grows unbounded, some of the stable weights retain significant size, thus allowing them to represent "hidden features" \citep{neal1995bayesian}.
The paper is structured as follows. \cref{sec:preliminaries} contains some preliminaries on stable distributions, whereas in \cref{sec:deep_stable_nets} we define the class of feedforward NNs considered in this work. \cref{sec:infinite_limits} contains our main result: as the width tends to infinity jointly on the network's layers, the finite dimensional distributions of the NN converge to a multivariate stable distribution whose parameters are computed via a recursion over the layers. The convergence of the NN to the stable process then follows by finite-dimensional projections. In \cref{sec:related_work} we detail how our result extends previously established large width convergence results and comment on related work, whereas in \cref{sec:applications} we discuss how our result applies to the research lines highlighted in i) ii) and iii) which rely on GP limits. In \cref{sec:conclusions} we comment on future research directions. The Supplementary Material (SM) contains all the proofs (SM A,B,C), a preliminary numerical experiment on the recursion evaluation (SM D), and an empirical investigation of the distribution of trained NN models' parameters (SM E). Code is available at \url{https://github.com/stepelu/deep-stable}.
\section{Stable distributions} \label{sec:preliminaries}
Let $\text{St}(\alpha,\sigma)$ denote the symmetric centered stable distribution with stability parameter $\alpha\in(0,2]$ and scale parameter $\sigma>0$, and let $S_{\alpha,\sigma}$ be a random variable distributed as $\text{St}(\alpha,\sigma)$. That is, the characteristic function of $S_{\alpha,\sigma}\sim \text{St}(\alpha,\sigma)$ is $\varphi_{S_{\alpha,\sigma}}(t)=\mathbb{E}[\text{e}^{\textrm{i}tS_{\alpha,\sigma}}]=\text{e}^{-\sigma^{\alpha}|t|^{\alpha}}$. For any $\sigma > 0$, a $S_{\alpha,\sigma}$ random variable with $0 < \alpha < 2$ has finite absolute moments $\mathbb{E}[|S_{\alpha,\sigma}|^{\alpha -\varepsilon}]$ for any $\varepsilon > 0$, while $\mathbb{E}[|S_{\alpha,\sigma}|^{\alpha}] = + \infty$. Note that when $\alpha = 2$, we have that $S_{2,\sigma} \sim \mathcal{N}(0, 2\sigma^2)$. The random variable $S_{2,\sigma}$ has finite absolute moments of any order. For any $a \in \mathbb{R}$ we have the scaling identity $a S_{\alpha,1} \sim \text{St}(\alpha,|a|)$.
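As a quick numerical illustration (our own, not part of the paper's derivations), symmetric stable variates can be simulated with the Chambers--Mallows--Stuck method cited later in \cref{sec:applications}, and the empirical characteristic function can be compared against $\text{e}^{-\sigma^{\alpha}|t|^{\alpha}}$. The sketch below assumes only NumPy; the function name \texttt{sample\_symmetric\_stable} is hypothetical.

```python
import numpy as np

def sample_symmetric_stable(alpha, sigma, size, rng):
    """Chambers-Mallows-Stuck sampler for St(alpha, sigma), symmetric case.

    U ~ Uniform(-pi/2, pi/2), W ~ Exp(1); alpha = 1 is the Cauchy case.
    """
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    if alpha == 1.0:
        return sigma * np.tan(u)
    x = (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
         * (np.cos((1 - alpha) * u) / w) ** ((1 - alpha) / alpha))
    return sigma * x

rng = np.random.default_rng(0)
alpha, sigma, t = 1.5, 1.0, 1.0
s = sample_symmetric_stable(alpha, sigma, 200_000, rng)

# By symmetry E[exp(itS)] = E[cos(tS)], which should match exp(-sigma^alpha |t|^alpha).
ecf = np.cos(t * s).mean()
print(ecf, np.exp(-sigma**alpha * abs(t)**alpha))
```

For $\alpha=2$ the same sampler returns $\mathcal{N}(0,2\sigma^2)$ draws, consistent with the special case noted above.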
We recall the definition of the symmetric and centered multivariate stable distribution and its marginal distributions. First, let $\mathbb{S}^{k-1}$ be the unit sphere in $\mathbb{R}^{k}$. Let $\text{St}_{k}(\alpha,\Gamma)$ denote the symmetric and centered $k$-dimensional stable distribution with stability parameter $\alpha\in(0,2]$ and finite spectral measure $\Gamma$ on $\mathbb{S}^{k-1}$, and let $\mathbf{S}_{\alpha,\Gamma}$ be a random vector of dimension $k\times 1$ distributed as $\text{St}_{k}(\alpha,\Gamma)$. The characteristic function of $\boldsymbol{S}_{\alpha,\Gamma}\sim \text{St}_k(\alpha,\Gamma)$ is
\begin{equation}\label{eq:multivariate_stable_cf}
\varphi_{\boldsymbol{S}_{\alpha,\Gamma}}(\boldsymbol{t})=\mathbb{E}[\text{e}^{\textrm{i}\boldsymbol{t}^{T}\boldsymbol{S}_{\alpha,\Gamma}}]=\exp\left\{-\int_{\mathbb{S}^{k-1}}|\boldsymbol{t}^{T}\boldsymbol{s}|^{\alpha}\Gamma(\text{d}\boldsymbol{s})\right\}.
\end{equation}
If $\boldsymbol{S}_{\alpha,\Gamma}\sim \text{St}_{k}(\alpha,\Gamma)$ then the marginal distributions of $\boldsymbol{S}_{\alpha,\Gamma}$ are described as follows. Let $\boldsymbol{1}_{r}$ denote a vector of dimension $k \times 1$ with $1$ in the $r$-th entry and $0$ elsewhere. Then, the random variable corresponding to the $r$-th element of $\boldsymbol{S}_{\alpha,\Gamma}\sim \text{St}_{k}(\alpha,\Gamma)$ satisfies
\begin{equation}\label{eq:mar}
\boldsymbol{1}_{r}^{T}\boldsymbol{S}_{\alpha,\Gamma}\sim \text{St}(\alpha,\sigma(r)),
\end{equation}
where
\begin{equation}\label{eq:up_s}
\sigma(r)=\left(\int_{\mathbb{S}^{k-1}}|\boldsymbol{1}_{r}^{T}\boldsymbol{s}|^{\alpha}\Gamma(\text{d}\boldsymbol{s})\right)^{1/\alpha}.
\end{equation}
The distribution $\text{St}_{k}(\alpha,\Gamma)$ with characteristic function \cref{eq:multivariate_stable_cf} allows for marginals which are neither centered nor symmetric. However in the present work all the marginals will be centered and symmetric, and the spectral measure will often be a discrete measure, i.e., $\Gamma(\cdot) = \sum_{j=1}^n \gamma_j \delta_{\boldsymbol{s}_j}(\cdot)$ for $n \in \mathbb{N}$, $\boldsymbol{s}_j \in \mathbb{S}^{k-1}$ and $\gamma_j \geq 0$. In particular, under these specific assumptions, we have
\begin{equation*}
\varphi_{\boldsymbol{S}_{\alpha,\Gamma}}(\boldsymbol{t})=\exp\left\{-\sum_{j=1}^n \gamma_j |\boldsymbol{t}^{T}\boldsymbol{s}_j|^\alpha\right\}.
\end{equation*}
See \cite{samoradnitsky2017stable} for a detailed account on $\boldsymbol{S}_{\alpha,\Gamma}\sim \text{St}(\alpha,\Gamma)$.
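For discrete spectral measures, both the characteristic function above and the marginal scale \cref{eq:up_s} reduce to finite sums. The sketch below (our own illustration, with hypothetical atoms and weights, assuming NumPy) evaluates both:

```python
import numpy as np

def cf_discrete_stable(t, alpha, gammas, atoms):
    """CF of St_k(alpha, Gamma) for a discrete spectral measure
    Gamma = sum_j gammas[j] * delta_{atoms[j]}."""
    proj = atoms @ t  # t^T s_j for each atom s_j
    return np.exp(-np.sum(gammas * np.abs(proj) ** alpha))

def marginal_scale(r, alpha, gammas, atoms):
    """Scale sigma(r) of the r-th marginal for a discrete Gamma."""
    return np.sum(gammas * np.abs(atoms[:, r]) ** alpha) ** (1 / alpha)

alpha = 1.8
atoms = np.array([[1.0, 0.0], [0.0, 1.0]])  # two atoms on the unit sphere of R^2
gammas = np.array([0.5, 2.0])

# The marginal r=0 only sees the first atom here, so sigma(0) = 0.5^{1/alpha}.
sig0 = marginal_scale(0, alpha, gammas, atoms)
print(sig0, 0.5 ** (1 / alpha))
```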
\section{Deep stable networks} \label{sec:deep_stable_nets}
We consider fully connected feed-forward NNs composed of $D \geq 1$ layers where each layer is of width $n \geq 1$.
Let $w_{i,j}^{(l)}$ be the weights of the $l$-th layer, and assume that they are independent and identically distributed as $\text{St}(\alpha,\sigma_{w})$, a stable distribution with stability parameter $\alpha \in (0,2]$ and scale parameter $\sigma_w > 0$. That is, the characteristic function of $w_{i,j}^{(l)}\sim \text{St}(\alpha,\sigma_{w})$ is
\begin{equation}\label{eq:car_w}
\varphi_{w^{(l)}_{i,j}}(t)=\mathbb{E}[\text{e}^{\textrm{i}tw^{(l)}_{i,j}}]=e^{-\sigma_{w}^\alpha |t|^\alpha},
\end{equation}
for any $i\geq1$, $j\geq1$ and $l \geq 1$. Let $b_{i}^{(l)}$ denote the biases of the $l$-th hidden layer, and assume that they are independent and identically distributed as $\text{St}(\alpha,\sigma_{b})$. That is, the characteristic function of the random variable $b_{i}^{(l)}\sim \text{St}(\alpha,\sigma_{b})$ is
\begin{equation}\label{eq:car_b}
\varphi_{b^{(l)}_{i}}(t)=\mathbb{E}[\text{e}^{\textrm{i}tb^{(l)}_{i}}]=e^{-\sigma_{b}^\alpha |t|^\alpha},
\end{equation}
for any $i\geq1$ and $l \geq 1$. The random weights $w_{i,j}^{(l)}$ are independent of the biases $b_{i}^{(l)}$, for any $i\geq1$, $j\geq1$ and $l \geq 1$. In particular, by independence and stability,
\begin{displaymath}
(w_{i,j}^{(l)}+b_{i}^{(l)})\sim \text{St}(\alpha,(\sigma_{w}^{\alpha}+\sigma_{b}^{\alpha})^{1/\alpha}).
\end{displaymath}
Let $\phi:\mathbb{R}\rightarrow\mathbb{R}$ be a nonlinearity with a finite number of discontinuities and such that it satisfies the envelope condition
\begin{equation}\label{eq:le}
|\phi(s)|\leq (a+b|s|^{\beta})^\gamma
\end{equation}
for every $s\in\mathbb{R}$, for some parameters $a,b>0$, $\gamma<\alpha^{-1}$ and $\beta<\gamma^{-1}$. If $x\in\mathbb{R}^{I}$ is the input argument of the NN, then the NN is explicitly defined by means of
\begin{equation}\label{eq:f1}
f_{i}^{(1)}(x,n)=f_{i}^{(1)}(x)=\sum_{j=1}^{I}w_{i,j}^{(1)}x_{j}+b_{i}^{(1)},
\end{equation}
and
\begin{equation}\label{eq:fl}
f_{i}^{(l)}(x,n)=\frac{1}{n^{1/\alpha}}\sum_{j=1}^{n}w_{i,j}^{(l)}\phi(f_{j}^{(l-1)}(x,n))+b_{i}^{(l)}
\end{equation}
for $l=2,\ldots,D$ and $i=1,\ldots,n$ in \cref{eq:f1} and \cref{eq:fl}. The scaling of the weights in \cref{eq:fl} will be shown to be the correct one to obtain non-degenerate limits as $n\rightarrow+\infty$.
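A minimal simulation of \cref{eq:f1}-\cref{eq:fl} is sketched below (illustrative only; the inline Chambers--Mallows--Stuck sampler, widths and parameter values are our own choices). Note the $n^{-1/\alpha}$ scaling on the hidden layers and the absence of any scaling on the input layer, exactly as in \cref{eq:f1}-\cref{eq:fl}:

```python
import numpy as np

def sample_stable(alpha, sigma, size, rng):
    # Chambers-Mallows-Stuck sampler, symmetric case (assumes alpha != 1).
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return sigma * (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
                    * (np.cos((1 - alpha) * u) / w) ** ((1 - alpha) / alpha))

def forward(x, n, depth, alpha, sigma_w, sigma_b, phi, rng):
    """One realization of (f_1^{(l)}(x,n), ..., f_n^{(l)}(x,n)) for l = depth."""
    f = sample_stable(alpha, sigma_w, (n, len(x)), rng) @ x \
        + sample_stable(alpha, sigma_b, n, rng)            # layer 1, no scaling
    for _ in range(depth - 1):                             # layers 2..D
        W = sample_stable(alpha, sigma_w, (n, n), rng)
        b = sample_stable(alpha, sigma_b, n, rng)
        f = W @ phi(f) / n ** (1 / alpha) + b              # n^{-1/alpha} scaling
    return f

rng = np.random.default_rng(1)
out = forward(np.array([0.3, -1.2]), n=500, depth=3,
              alpha=1.5, sigma_w=1.0, sigma_b=0.5, phi=np.tanh, rng=rng)
print(out.shape)
```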
\section{Infinitely wide limits}\label{sec:infinite_limits}
We show that, as the width of the NN tends to infinity jointly on the network's layers, the finite dimensional distributions of the network converge to a multivariate stable distribution whose parameters are computed via a suitable recursion over the network layers. Then, by combining this limiting result with standard arguments on finite-dimensional projections, we obtain the large $n$ limit of the stochastic process $(f_i^{(l)}(x^{(1)},n),\dots,f_i^{(l)}(x^{(k)},n))_{i \geq 1}$ where $x^{(1)},\dots,x^{(k)}$ are the inputs to the NN. In particular, let $\overset{w} {\longrightarrow}$ denote weak convergence. Then, we show that as $n\rightarrow+\infty$,
\begin{equation}\label{eq:main_informal}
(f_i^{(l)}(x^{(1)},n),\dots,f_i^{(l)}(x^{(k)},n))_{i \geq 1} \overset{w} {\longrightarrow} \bigotimes_{i\geq1}\text{St}_{k}(\alpha,\Gamma(l))
\end{equation}
where $\bigotimes$ is the product measure. From now on $k$ is the number of inputs, which is equal to the dimensionality of the finite dimensional distributions of interest for the stochastic processes $f_i^{(l)}$. Throughout the rest of the paper we assume that the assumptions introduced in \cref{sec:deep_stable_nets} hold true. Hereafter we present a sketch of the proofs of our main result for a fixed index $i$ and input $x$, and we defer to the SM for the complete proofs of our main results.
We start with a technical remark: in \cref{eq:f1}-\cref{eq:fl} the stochastic processes $f_i^{(l)}(x,n)$ are only defined for $i=1,\dots,n$, while the limiting measure in \cref{eq:main_informal} is the product measure over $i \geq 1$. This causes no problems: for each finite $\mathcal{L} \subset \mathbb{N}$ there is an $n$ large enough such that the processes $f_i^{(l)}(x,n)$ are defined for each $i \in \mathcal{L}$. In any case, the simplest solution consists in extending $f_i^{(l)}(x,n)$ from $i=1,\dots,n$ to $i \geq 1$ in \cref{eq:f1}-\cref{eq:fl}, and we will make this assumption in all the proofs.
\subsection{Large width asymptotics: \texorpdfstring{$k=1$}{k=1}}
We characterize the limiting distribution of $f_i^{(l)}(x,n)$ as $n \rightarrow \infty$ for a fixed $i$ and input $x$. We show that, as $n\rightarrow+\infty$,
\begin{equation}\label{eq:convergence_1}
{f}_{i}^{(l)}(x,n)\overset{w}{\longrightarrow} \text{St}(\alpha,\sigma(l)),
\end{equation}
where the parameter $\sigma(l)$ is computed through the recursion:
\begin{align*}
\sigma(1) &= \big(\sigma_b^\alpha + \sigma_w^\alpha \sum_{j=1}^I |x_j|^\alpha\big)^{1/\alpha}\\
\sigma(l) &= \big(\sigma_b^\alpha + \sigma_w^\alpha \mathbb{E}_{f \sim q^{(l-1)}}[|\phi(f)|^\alpha]\big)^{1/\alpha}
\end{align*}
and $q^{(l)} = \text{St}(\alpha,\sigma(l))$ for each $l \geq 1$. The generalization of this result to $k \geq 1$ inputs is given in \cref{sec:asympt_k_brief}.
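The recursion for $\sigma(l)$ can be evaluated by Monte Carlo, since $\mathbb{E}_{f \sim q^{(l-1)}}[|\phi(f)|^\alpha]$ is a one-dimensional integral against a stable law. The sketch below is our own illustration (assuming NumPy and $\phi=\tanh$, for which the $\alpha$-moment is trivially finite); it samples from $q^{(l-1)}$ at each step via the Chambers--Mallows--Stuck method:

```python
import numpy as np

def sample_stable(alpha, sigma, size, rng):
    # Chambers-Mallows-Stuck sampler, symmetric case (assumes alpha != 1).
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return sigma * (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
                    * (np.cos((1 - alpha) * u) / w) ** ((1 - alpha) / alpha))

def sigma_recursion(x, depth, alpha, sigma_w, sigma_b, phi, n_mc, rng):
    """Monte Carlo evaluation of sigma(l) for l = 1..depth."""
    sig = [(sigma_b**alpha
            + sigma_w**alpha * np.sum(np.abs(x) ** alpha)) ** (1 / alpha)]
    for _ in range(depth - 1):
        f = sample_stable(alpha, sig[-1], n_mc, rng)       # f ~ q^{(l-1)}
        moment = np.mean(np.abs(phi(f)) ** alpha)          # E|phi(f)|^alpha
        sig.append((sigma_b**alpha + sigma_w**alpha * moment) ** (1 / alpha))
    return np.array(sig)

rng = np.random.default_rng(2)
alpha, sw, sb = 1.5, 1.0, 0.5
sig = sigma_recursion(np.array([0.3, -1.2]), depth=5, alpha=alpha,
                      sigma_w=sw, sigma_b=sb, phi=np.tanh, n_mc=100_000, rng=rng)
print(sig)
```

Since $|\tanh|\leq 1$, every $\sigma(l)$ with $l\geq2$ must lie in $[\sigma_b,(\sigma_b^{\alpha}+\sigma_w^{\alpha})^{1/\alpha}]$, which gives a cheap correctness check on the output.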
\begin{proof}[Proof of \cref{eq:convergence_1}] The proof exploits the exchangeability of the sequence $({f}_i^{(l)}(x,n))_{i\geq1}$, an induction argument on the layer's index $l$ for the directing random measure of $({f}_i^{(l)}(x,n))_{i\geq1}$, and some technical lemmas that are proved in the SM. Recall that the input is a real-valued vector of dimension $I$.
By means of \cref{eq:car_w} and \cref{eq:car_b}, for $i \geq 1$:
\begin{align*}
&\varphi_{f_{i}^{(1)}(x)}(t)\\
&=\mathbb{E}[e^{\textrm{i}tf_{i}^{(1)}(x)}]\\
&=\mathbb{E}\left[\exp\left\{\textrm{i}t\left[\sum_{j=1}^{I}w_{i,j}^{(1)}x_{j}+b_{i}^{(1)}\right]\right\}\right]\\
&=\exp\left\{-(\sigma_{w}^{\alpha}\sum_{j=1}^{I}|x_{j}|^{\alpha}+\sigma_{b}^{\alpha})|t|^{\alpha}\right\},
\end{align*}
i.e.
\begin{equation*}
f_{i}^{(1)}(x)\overset{\text{d}}{=}S_{\alpha,\left(\sigma_{w}^{\alpha}\sum_{j=1}^{I}|x_{j}|^{\alpha}+\sigma_{b}^{\alpha}\right)^{1/\alpha}};
\end{equation*}
and for $l=2,\dots,D$
\begin{align*}
&\varphi_{f_{i}^{(l)}(x,n)\,|\,\{f_{j}^{(l-1)}(x,n)\}_{j=1,\ldots,n}}(t)\\
&=\mathbb{E}[e^{\textrm{i}tf_{i}^{(l)}(x,n)}\,|\,\{f_{j}^{(l-1)}(x,n)\}_{j=1,\ldots,n}]\\
&=\mathbb{E}\Bigg[\exp\Bigg\{\textrm{i}t\Bigg[\frac{1}{n^{1/\alpha}}\sum_{j=1}^{n}w_{i,j}^{(l)}\phi(f_{j}^{(l-1)}(x,n))+b_{i}^{(l)}\Bigg]\Bigg\}\\
&\qquad \mathrel{\Bigg|} \{f_{j}^{(l-1)}(x,n)\}_{j=1,\ldots,n}\Bigg]\\
&=\exp\left\{-(\frac{\sigma_{w}^{\alpha}}{n}\sum_{j=1}^{n}|\phi(f_{j}^{(l-1)}(x,n))|^{\alpha}+\sigma_{b}^{\alpha})|t|^{\alpha}\right\},
\end{align*}
i.e.,
\begin{align*}
&f_{i}^{(l)}(x,n)\,|\,\{f_{j}^{(l-1)}(x,n)\}_{j=1,\ldots,n}\\
&\notag\overset{\text{d}}{=} S_{\alpha,\left(\frac{\sigma_{w}^{\alpha}}{n}\sum_{j=1}^{n}|\phi(f_{j}^{(l-1)}(x,n))|^{\alpha}+\sigma_{b}^{\alpha}\right)^{1/\alpha}}.
\end{align*}
It follows from \eqref{eq:fl} that, for every fixed $l$ and for every fixed $n$, the sequence $({f}_i^{(l)}(x,n))_{i\geq1}$ is exchangeable. In particular, let $p_{n}^{(l)}$ denote the directing (random) probability measure of the exchangeable sequence $({f}_i^{(l)}(x,n))_{i\geq1}$. That is, by de Finetti's representation theorem, conditionally on $p_{n}^{(l)}$ the ${f}_i^{(l)}(x,n)$'s are iid with distribution $p_{n}^{(l)}$. Now, consider the induction hypothesis that $p_{n}^{(l-1)}\stackrel{w}{\longrightarrow}q^{(l-1)}$ as $n\rightarrow+\infty$, with $q^{(l-1)} = \text{St}(\alpha,\sigma(l-1))$ and $\sigma(l-1)$ given by the recursion above. Therefore,
\begin{align}\label{eq_princi}
&\notag\mathbb{E}[\text{e}^{\textrm{i}t{f}_i^{(l)}(x,n)}]\\
&\notag=\mathbb{E}\left[\exp\left\{-|t|^\alpha\left(\frac{\sigma_{w}^\alpha}{n}\sum_{j=1}^n|\phi({f}_j^{(l-1)}(x,n))|^\alpha+\sigma^{\alpha}_{b}\right)\right\}\right]\\
&\notag=e^{-|t|^{\alpha}\sigma_{b}^{\alpha}}\mathbb{E}\left[\exp\left\{-|t|^\alpha\frac{\sigma_{w}^\alpha}{n}\sum_{j=1}^n|\phi({f}_j^{(l-1)}(x,n))|^\alpha\right\}\right]\\
&=e^{-|t|^{\alpha}\sigma_{b}^{\alpha}}\mathbb{E}\left[\left(\int \exp\left\{-|t|^\alpha\frac{\sigma_{w}^\alpha}{n}|\phi({f})|^\alpha\right\}p_{n}^{(l-1)}(\text{d}{f})\right)^n\right].
\end{align}
where the first equality follows by conditioning on $\{f_{j}^{(l-1)}(x,n)\}_{j=1,\ldots,n}$, plugging in the definition of $f_i^{(l)}(x,n)$, rewriting $\mathbb{E}[\exp(\sum_{j=1}^n\cdots)]$ as $\mathbb{E}[\prod_{j=1}^n\exp(\cdots)] = \prod_{j=1}^n\mathbb{E}[\exp(\cdots)]$ due to conditional independence, computing the characteristic function of each term, and rearranging. The last equality holds since $(f_{j}^{(l-1)}(x,n))_{j\geq1}$ is exchangeable: by de Finetti's theorem there exists a random probability measure $p_{n}^{(l-1)}$ such that, conditionally on $p_{n}^{(l-1)}$, the $f_{j}^{(l-1)}(x,n)$'s are iid with distribution $p_{n}^{(l-1)}$, which gives \cref{eq_princi}.
Now, let $\overset{p} {\longrightarrow}$ denote the convergence in probability. The following technical lemmas (\cref{sec:asympt_1}):
\begin{itemize}
\item[L1)] for each $l \geq 2$, $\text{Pr}[p_{n}^{(l-1)}\in I]=1$, with $I=\{p:\int |\phi({f})|^\alpha p(\text{d}{f})<+\infty\}$;
\item[L2)] $\int |\phi({f})|^\alpha p_{n}^{(l-1)}(\text{d}{f})\stackrel{p}{\longrightarrow} \int |\phi({f})|^\alpha q^{(l-1)}(\text{d}{f})$, as $n\rightarrow +\infty$;
\item[L3)] $\int |\phi({f})|^\alpha [1-e^{-|t|^\alpha\frac{\sigma^{\alpha}_{w}}{n} |\phi({f})|^\alpha}]p_{n}^{(l-1)}(\text{d}{f})\stackrel{p}{\longrightarrow} 0$, as $n\rightarrow +\infty$.
\end{itemize}
together with Lagrange's theorem, are the main ingredients for proving \cref{eq:convergence_1} by studying the large $n$ asymptotic behavior of \cref{eq_princi}. By combining \eqref{eq_princi} with lemma L1,
\begin{align*}
&\mathbb{E}[\text{e}^{\textrm{i}t{f}_i^{(l)}(x,n)}] =e^{-|t|^{\alpha}\sigma_{b}^{\alpha}} \mathbb{E}\Bigg[\mathbbm{1}_{\{(p_{n}^{(l-1)}\in I)\}}
\\&\times\Bigg(\int \exp\Bigg\{-|t|^\alpha\frac{\sigma_{w}^\alpha}{n}|\phi({f})|^\alpha\Bigg\}p_{n}^{(l-1)}(\text{d}{f})\Bigg)^n\Bigg].
\end{align*}
By Lagrange's theorem, there exists $\theta_{n}\in[0,1]$ such that
\begin{align*}
&\exp\left\{-|t|^\alpha\frac{\sigma_{w}^\alpha}{n}|\phi({f})|^\alpha\right\}\\
&\quad=1-|t|^\alpha\frac{\sigma_{w}^\alpha}{n}|\phi({f})|^\alpha\\
&\quad\quad+|t|^\alpha\frac{\sigma_{w}^\alpha}{n}|\phi({f})|^{\alpha}\left(1-\exp\left\{-\theta_{n}|t|^\alpha\frac{\sigma_{w}^\alpha}{n}|\phi({f})|^\alpha\right\}\right).
\end{align*}
Now, since
\begin{align*}
0&\leq \int|\phi({f})|^{\alpha}[1-e^{-\theta_{n}|t|^\alpha\frac{\sigma^{\alpha}_{w}}{n} |\phi({f})|^\alpha}]p_{n}^{(l-1)}(\text{d}{f})\\
&\leq \int|\phi({f})|^{\alpha}[1-e^{-|t|^\alpha\frac{\sigma^{\alpha}_{w}}{n} |\phi({f})|^\alpha}]p_{n}^{(l-1)}(\text{d}{f}),
\end{align*}
it follows that
\begin{align*}
&\mathbb{E}[\text{e}^{\textrm{i}t{f}_i^{(l)}(x,n)}] \leq e^{-|t|^{\alpha}\sigma_{b}^{\alpha}}\mathbb{E}\Bigg[\mathbbm{1}_{\{(p_{n}^{(l-1)}\in I)\}}\\
&\times\Bigg(1-|t|^{\alpha}\frac{\sigma^{\alpha}_{w}}{n}\int|\phi({f})|^{\alpha}p_{n}^{(l-1)}(\text{d}{f})\Bigg.\Bigg.\\
&+\Bigg.\Bigg.|t|^{\alpha}\frac{\sigma^{\alpha}_{w}}{n}\int|\phi({f})|^{\alpha}[1-e^{-|t|^\alpha\frac{\sigma^{\alpha}_{w}}{n} |\phi({f})|^\alpha}]p_{n}^{(l-1)}(\text{d}{f})\Bigg)^{n}\Bigg].
\end{align*}
Finally, recall the fundamental limit $\text{e}^{x}=\lim_{n\rightarrow+\infty}(1+x/n)^{n}$. This, combined with L2 and L3 leads to
\begin{align*}
&\mathbb{E}[\text{e}^{\textrm{i}t{f}_i^{(l)}(x,n)}]\rightarrow e^{-|t|^\alpha[\sigma^{\alpha}_{b}+\sigma^\alpha_{w} \int |\phi({f})|^\alpha q^{(l-1)}(\text{d}{f})]},
\end{align*}
as $n\rightarrow+\infty$. That is, we proved that the large $n$ limiting distribution of ${f}_i^{(l)}(x,n)$ is $\text{St}(\alpha,\sigma(l))$, where we set
\begin{equation*}
\sigma(l)=\left(\sigma^{\alpha}_{b}+\sigma^\alpha_{w} \int |\phi({f})|^\alpha q^{(l-1)}(\text{d}{f})\right)^{1/\alpha}.
\end{equation*}
\end{proof}
\subsection{Large width asymptotics: \texorpdfstring{$k \geq 1$}{k>=1}}\label{sec:asympt_k_brief}
We establish the convergence in distribution of $(f_i^{(l)}(x^{(1)},n),\dots,f_i^{(l)}(x^{(k)},n))$ as $n \rightarrow +\infty$ for a fixed $i$ and $k$ inputs $x^{(1)},\dots,x^{(k)}$. This result, combined with standard arguments on finite-dimensional projections, then establishes the convergence of the NN to the stable process. Precisely, we show that, as $n\rightarrow+\infty$, one has
\begin{equation}\label{eq:convergence_k}
(f_i^{(l)}(x^{(1)},n),\dots,f_i^{(l)}(x^{(k)},n))\overset{w}{\longrightarrow}\text{St}_{k}(\alpha,\Gamma(l)),
\end{equation}
where the spectral measure $\Gamma(l)$ is computed through the recursion:
\begin{align}
\Gamma(1) &= \sigma_b^\alpha ||\boldsymbol{1}||^\alpha \delta_{\frac{\boldsymbol{1}}{||\boldsymbol{1}||}} + \sigma_w^\alpha \sum_{j=1}^I ||\boldsymbol{x}_j||^\alpha \delta_{\frac{\boldsymbol{x}_j}{||\boldsymbol{x}_j||}}\label{eq:recursion_1}\\
\Gamma(l) &= \sigma_b^\alpha ||\boldsymbol{1}||^\alpha \delta_{\frac{\boldsymbol{1}}{||\boldsymbol{1}||}} + \sigma_w^\alpha \mathbb{E}_{f \sim q^{(l-1)}}[||\phi(f)||^\alpha\delta_{\frac{\phi(f)}{||\phi(f)||}}]\label{eq:recursion_l}
\end{align}
and $q^{(l)} = \text{St}_{k}(\alpha,\Gamma(l))$ for each $l \geq 1$, where $\boldsymbol{x}_j = [x^{(1)}_j,\dots,x^{(k)}_j] \in \mathbb{R}^k$. Here (and in all expressions involving $\delta$) we adopt the notational convention that if $\lambda = 0$ in $\lambda \delta_{\bullet}$, then $\lambda \delta_{\bullet} = 0$. This convention avoids the more cumbersome notation needed to explicitly exclude the case $\phi(f) = 0$, for which $\phi(f)/||\phi(f)||$ is undefined. We omit the sketch of the proof of \cref{eq:convergence_k}, as it is a step-by-step parallel of the proof of \cref{eq:convergence_1} with the added complexities due to the multivariate stable distributions. The reader can refer to the SM for the full proof.
\subsection{Finite-dimensional projections}
In \cref{sec:asympt_k_brief} we obtained the convergence in law of $f_i^{(l)}(x^{(1)},n),\dots,f_i^{(l)}(x^{(k)},n)$ for $k$ inputs and a generic $i$ to a multivariate stable distribution. Let us refer to this random vector as $f_i(x)$. Now, we derive the limiting behavior in law of $f_i(x)$ jointly over all $i \geq 1$ (again for a given $k$-dimensional input). It is enough to study the convergence of $f_1(x),\dots,f_n(x)$ for a generic $n \geq 1$. That is, it is enough to establish the convergence of the finite dimensional distributions (over $i$: we consider here $f_i(x)$ as a random sequence over $i$). See \cite{billingsley1999convergence} for details.
To establish the convergence of the finite dimensional distributions (over $i$) it then suffices to establish the convergence of linear combinations. More precisely, let $\boldsymbol{X} = [x^{(1)},\dots,x^{(k)}] \in \mathbb{R}^{I \times k}$. We show that, as $n\rightarrow+\infty$,
\begin{equation*}
({f}_{i}^{(l)}(\boldsymbol{X},n))_{i\geq1}\overset{w}{\longrightarrow}\bigotimes_{i\geq1}\text{St}_{k}(\alpha,\Gamma(l)),
\end{equation*}
by proving the large $n$ asymptotic behavior of any finite linear combination of the ${f}_{i}^{(l)}(\boldsymbol{X},n)$'s, for $i\in\mathcal{L}\subset\mathbb{N}$. Following the notation of \cite{matthews2018gaussian}, let
\begin{equation*}
T^{(l)}(\mathcal{L},p,\boldsymbol{X},n)=\sum_{i\in \mathcal{L}}p_{i}[{f}_{i}^{(l)}(\boldsymbol{X},n)-b_{i}^{(l)}\boldsymbol{1}].
\end{equation*}
Then, we write
\begin{align*}
&T^{(l)}(\mathcal{L},p,\boldsymbol{X},n)\\
&=\sum_{i\in \mathcal{L}}p_{i}\left[\frac{1}{n^{1/\alpha}}\sum_{j=1}^{n}w_{i,j}^{(l)}(\phi\circ {f}_{j}^{(l-1)}(\boldsymbol{X},n))\right]\\
&=\frac{1}{n^{1/\alpha}}\sum_{j=1}^{n}\gamma_{j}^{(l)}(\mathcal{L},p,\boldsymbol{X},n),
\end{align*}
where
\begin{equation*}
\gamma_{j}^{(l)}(\mathcal{L},p,\boldsymbol{X},n)=\sum_{i\in \mathcal{L}}p_{i}w_{i,j}^{(l)}(\phi\circ{f}_{j}^{(l-1)}(\boldsymbol{X},n)).
\end{equation*}
Then,
\begin{align*}
&\varphi_{T^{(l)}(\mathcal{L},p,\mathbf{X},n)\,|\,\{{f}_{j}^{(l-1)}(\boldsymbol{X},n)\}_{j=1,\ldots,n}}(\boldsymbol{t})\\
&=\mathbb{E}[e^{\textrm{i}\boldsymbol{t}^{T}T^{(l)}(\mathcal{L},p,\boldsymbol{X},n)}\,|\,\{{f}_{j}^{(l-1)}(\boldsymbol{X},n)\}_{j=1,\ldots,n}]\\
&=\prod_{j=1}^{n}\prod_{i\in\mathcal{L}}\text{e}^{-\frac{p^{\alpha}_{i}\sigma_{w}^{\alpha}}{n}|\boldsymbol{t}^{T}(\phi\circ f_{j}^{(l-1)}(\boldsymbol{X},n))|^{\alpha}}
\end{align*}
That is,
\begin{align*}
&T^{(l)}(\mathcal{L},p,\boldsymbol{X},n)\,|\,\{{f}_{j}^{(l-1)}(\boldsymbol{X},n)\}_{j=1,\ldots,n}\overset{\text{d}}{=}\boldsymbol{S}_{\alpha,\Gamma^{(l)}}
\end{align*}
where
\begin{equation*}
\Gamma^{(l)}=\frac{1}{n}\sum_{j=1}^{n}\sum_{i\in\mathcal{L}}||p_{i}\sigma_{w}(\phi\circ f_{j}^{(l-1)}(\boldsymbol{X},n))||^{\alpha}\delta_{\frac{\phi\circ f_{j}^{(l-1)}(\boldsymbol{X},n)}{||\phi\circ f_{j}^{(l-1)}(\boldsymbol{X},n)||}}
\end{equation*}
Then, along lines similar to the proof of the large $n$ asymptotics for the $i$-th coordinate, we have the following
\begin{align*}
&\mathbb{E}[\text{e}^{\textrm{i}\boldsymbol{t}^{T}T^{(l)}(\mathcal{L},p,\boldsymbol{X},n)}] \rightarrow \exp\Bigg\{-\int\int_{\mathbb{S}^{k-1}}|\boldsymbol{t}^{T}\boldsymbol{s}|^{\alpha}\\
& \times \Bigg(\sum_{i\in\mathcal{L}}||p_{i}\sigma_{w}(\phi\circ{f})||^{\alpha}\delta_{\frac{\phi\circ{f}}{||\phi\circ{f}||}}\Bigg)(\text{d}\boldsymbol{s})q^{(l-1)}(\text{d}{f})\Bigg\}
\end{align*}
as $n\rightarrow+\infty$. This completes the proof of the limiting behavior \eqref{eq:main_informal}.
\section{Related work}\label{sec:related_work}
For the classical case of Gaussian weights and biases, and more in general for finite-variance iid distributions, the seminal work is that of \cite{neal1995bayesian}. Here the author establishes, among other notable contributions, the connection between infinitely wide shallow (1 hidden layer) NNs and centered GPs. We reviewed the essence of it in \cref{sec:intro}.
This result is extended in \cite{lee2018deep} to deep NNs where the width $n(l)$ of each layer $l$ goes to infinity sequentially, starting from the lowest layer, i.e. from $n(1)$ to $n(D)$. The sequential nature of the limits reduces the task to a sequential application of the approach of \cite{neal1995bayesian}. The computation of the GP kernel for each layer $l$ involves a recursion, and the authors propose a numerical method to approximate the integral involved in each step of the recursion. The case where each $n(l)$ goes to infinity jointly, i.e. $n(l)=n$, is considered in \cite{g.2018gaussian} under more restrictive hypotheses, which are relaxed in \cite{matthews2018gaussian}. While this setting is most representative of a sequence of increasingly wide networks, the theoretical analysis is considerably more complicated as it does not reduce to a sequential application of the classical multivariate central limit theorem.
Going beyond finite-variance weight and bias distributions, \cite{neal1995bayesian} also introduced preliminary results for infinitely wide shallow NNs when weights and biases follow centered symmetric stable distributions. These results are refined in \cite{der2006beyond} which establishes the convergence to a stable process, again in the setting of shallow NNs.
The present paper can be considered a generalization of the work of \cite{matthews2018gaussian} to the context of weights and biases distributed according to centered and symmetric stable distributions. Our proof follows different arguments from the proof of \cite{matthews2018gaussian}, and in particular it does not rely on the central limit theorem for exchangeable sequences (Blum et al., 1958). Hence, since the Gaussian distribution is a special case of the stable distribution, our proof provides an alternative and self-contained proof of the result of \cite{matthews2018gaussian}. It should be noted that our envelope condition \cref{eq:le} is more restrictive than the linear envelope condition of \cite{matthews2018gaussian}. For the classical Gaussian setting the conditions on the activation function have been weakened in the work of \cite{1yang2019scaling}.
Finally, there has been recent interest in using heavy-tailed distributions for gradient noise \citep{simsekli2019tail} and for trained parameter distributions \citep{martin2019traditional}. In particular, \cite{martin2019traditional} includes an empirical analysis of the parameters of pre-trained convolutional architectures (which we also investigate in SM E) supportive of heavy-tailed distributions. Results of this kind are compatible with the conjecture that stochastic processes arising from NNs whose parameters are heavy-tailed might be closer representations of their finite, high-performing, counterparts.
\section{Future applications}\label{sec:applications}
\subsection{Bayesian inference}\label{sec:bayesian_inference}
Infinitely wide NNs with centered iid Gaussian initializations, and more in general finite-variance centered iid initializations, give rise to iid centered GPs at every layer $l$. Let us assume that weights and biases are distributed as in \cref{sec:intro}, and let us assume $L$ layers ($L-1$ hidden layers). Each centered GP is characterized by its covariance kernel function. Let us denote by $f^{(l)}$ such a GP for layer $2 \leq l \leq L$. Over two inputs $x$ and $x'$ the distribution of $(f^{(l)}(x),f^{(l)}(x'))$ is characterized by the variances $q^{(l)}_x = \mathbb{V}[f^{(l)}(x)]$, $ q^{(l)}_{x'} = \mathbb{V}[f^{(l)}(x')]$ and by the covariance $c^{(l)}_{x,x'} = \mathbb{C}[f^{(l)}(x),f^{(l)}(x')]$. These quantities satisfy
\begin{align}
q^{(l)}_x &= \sigma_b^2 + \sigma_w^2 \mathbb{E}\Bigg[\phi\Bigg(\sqrt{q^{(l-1)}_x}z\Bigg)^2\Bigg]\label{eq:ip_q}\\
c^{(l)}_{x,x'} &= \sigma_b^2 + \sigma_w^2\mathbb{E}\Bigg[\phi\Bigg(\sqrt{q^{(l-1)}_x}z\Bigg)\label{eq:ip_c}\\
&\notag\times\phi\Bigg(\sqrt{q^{(l-1)}_{x'}}\Big(\rho^{(l-1)}_{x,x'}z + \sqrt{1 - (\rho^{(l-1)}_{x,x'})^2}z'\Big)\Bigg)\Bigg]
\end{align}
where $z$ and $z'$ are independent standard Gaussian random variables $\mathcal{N}(0,1)$,
\begin{equation}
\rho^{(l)}_{x,x'} = \frac{c^{(l)}_{x,x'}}{\sqrt{q^{(l)}_x q^{(l)}_{x'}}}\label{eq:ip_rho}
\end{equation}
with initial conditions $q^{(1)}_x = \sigma_b^2 + \sigma_w^2 \norm{x}^2$ and $c^{(1)}_{x,x'} = \sigma_b^2 + \sigma_w^2 \dprod{x}{x'}$.
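The recursion \cref{eq:ip_q}-\cref{eq:ip_c} can also be approximated by a simple Monte Carlo scheme over the pair $(z,z')$; the sketch below is our own illustration, not the quadrature method of \cite{lee2018deep}. With $x=x'$ it must return $c^{(l)}_{x,x}=q^{(l)}_x$ exactly, which serves as a sanity check:

```python
import numpy as np

def gp_kernel_recursion(x, xp, depth, sigma_w, sigma_b, phi, n_mc, rng):
    """Monte Carlo evaluation of the recursions (eq:ip_q)-(eq:ip_c)."""
    q_x = sigma_b**2 + sigma_w**2 * np.dot(x, x)       # q^{(1)}_x
    q_xp = sigma_b**2 + sigma_w**2 * np.dot(xp, xp)    # q^{(1)}_{x'}
    c = sigma_b**2 + sigma_w**2 * np.dot(x, xp)        # c^{(1)}_{x,x'}
    z, zp = rng.standard_normal((2, n_mc))             # shared iid N(0,1) draws
    for _ in range(depth - 1):
        rho = np.clip(c / np.sqrt(q_x * q_xp), -1.0, 1.0)
        u = phi(np.sqrt(q_x) * z)
        v = phi(np.sqrt(q_xp) * (rho * z + np.sqrt(1.0 - rho**2) * zp))
        q_x = sigma_b**2 + sigma_w**2 * np.mean(u * u)
        q_xp = sigma_b**2 + sigma_w**2 * np.mean(v * v)
        c = sigma_b**2 + sigma_w**2 * np.mean(u * v)
    return q_x, q_xp, c

rng = np.random.default_rng(3)
x = np.array([0.3, -1.2])
q1, q2, c = gp_kernel_recursion(x, x, depth=4, sigma_w=1.0, sigma_b=0.5,
                                phi=np.tanh, n_mc=50_000, rng=rng)
print(q1, c)
```

Note that $\rho z + \sqrt{1-\rho^2}z'$ is again standard Gaussian, so the update of $q^{(l)}_{x'}$ can reuse the same draws that enter the covariance update.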
To perform prediction via $\mathbb{E}[f^{(L)}(x^{*})|x^{*},\mathcal{D}]$, it is necessary to compute these recursions for all ordered pairs of data points $x,x'$ in the training dataset $\mathcal{D}$, and for all pairs $x^{*},x$ with $x\in\mathcal{D}$. \cite{lee2018deep} proposes an efficient quadrature solution that keeps the computational requirements manageable for an arbitrary activation $\phi$.
In our setting, the corresponding recursion is defined by \cref{eq:recursion_1}-\cref{eq:recursion_l}, which is a more computationally challenging problem with respect to the Gaussian setting. A sketch of a potential approach is as follows. Over the training data points and test points, $f^{(1)} \sim \text{St}_k(\alpha,\Gamma(1))$ where $k$ is equal to the size of training and test datasets combined. As $\Gamma(1)$ is a discrete measure, exact simulation algorithms are available with a computational cost of $\mathcal{O}(I)$ per sample \citep{nolan2008overview}. We can thus generate $M$ samples $\widetilde{f}^{(1)}_j$, $j=1,\dots,M$, in $\mathcal{O}(IM)$, and use these to approximate $f^{(2)} \sim \text{St}_k(\alpha,\Gamma(2))$ with $\text{St}_k(\alpha,\widetilde{\Gamma}(2))$, with $\widetilde{\Gamma}(2)$ being
\begin{equation*}
\widetilde{\Gamma}(2) = \sigma_b^\alpha ||\boldsymbol{1}||^\alpha \delta_{\frac{\boldsymbol{1}}{||\boldsymbol{1}||}} + \frac{\sigma_w^\alpha}{M} \sum_{j=1}^M ||\phi(\widetilde{f}^{(1)}_j)||^\alpha\delta_{\frac{\phi(\widetilde{f}^{(1)}_j)}{||\phi(\widetilde{f}^{(1)}_j)||}}
\end{equation*}
We can repeat this procedure by generating (approximate) random samples $\widetilde{f}^{(2)}_j$, with a cost of $\mathcal{O}(M^2)$, that in turn are used to approximate $\Gamma(3)$ and so on. In this procedure the errors can accumulate across the layers, as in \cite{lee2018deep}. This may be ameliorated by using the quasi-random number generators (QRNGs) of \cite{joe2008notes}, as the sampling algorithms for multivariate stable distributions \citep{weron1996chambers,weron2010correction,nolan2008overview} are all implemented as transformations of uniform distributions. The use of QRNGs effectively defines a quadrature scheme for the integration problem. We report in the SM preliminary results regarding the numerical approximation of the recursion defined by \cref{eq:recursion_1}-\cref{eq:recursion_l}.
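The procedure just described can be sketched as follows (our own illustration, assuming NumPy; it uses the representation of a stable vector with discrete spectral measure as a linear combination of independent univariate stable variables \citep{samoradnitsky2017stable}, with plain pseudo-random sampling rather than the QRNG refinement):

```python
import numpy as np

def sample_stable(alpha, sigma, size, rng):
    # Chambers-Mallows-Stuck sampler, symmetric case (assumes alpha != 1).
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return sigma * (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
                    * (np.cos((1 - alpha) * u) / w) ** ((1 - alpha) / alpha))

def sample_discrete_multistable(alpha, gammas, atoms, M, rng):
    """M samples from St_k(alpha, Gamma), Gamma = sum_j gammas[j] delta_{atoms[j]},
    via X = sum_j gammas[j]^{1/alpha} Z_j atoms[j] with Z_j iid St(alpha, 1)."""
    Z = sample_stable(alpha, 1.0, (M, len(gammas)), rng)
    return (Z * gammas ** (1 / alpha)) @ atoms

def next_gamma(alpha, sigma_w, sigma_b, phi, samples):
    """Atoms/weights of the Monte Carlo approximation of Gamma(l+1)."""
    P = phi(samples)                                    # M x k
    norms = np.linalg.norm(P, axis=1)
    keep = norms > 0                                    # drop phi(f) = 0 atoms
    atoms = P[keep] / norms[keep, None]
    gammas = sigma_w**alpha * norms[keep] ** alpha / len(samples)
    k = samples.shape[1]                                # bias atom 1/||1||
    return (np.concatenate([gammas, [sigma_b**alpha * k ** (alpha / 2)]]),
            np.vstack([atoms, np.ones(k) / np.sqrt(k)]))

rng = np.random.default_rng(4)
alpha, sw, sb = 1.5, 1.0, 0.5
X = np.array([[0.3, -1.2], [0.8, 0.4]])     # rows: inputs x^{(1)}, x^{(2)}
xj = X.T                                    # bold x_j over j = 1..I, in R^k
k = X.shape[0]
norms = np.linalg.norm(xj, axis=1)
g1 = np.concatenate([sw**alpha * norms**alpha, [sb**alpha * k ** (alpha / 2)]])
a1 = np.vstack([xj / norms[:, None], np.ones(k) / np.sqrt(k)])
f1 = sample_discrete_multistable(alpha, g1, a1, M=5000, rng=rng)  # layer-1 samples
g2, a2 = next_gamma(alpha, sw, sb, np.tanh, f1)                   # Gamma~(2)
print(f1.shape, g2.shape)
```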
This leaves us with the problem of computing a statistic of $f^{(L)}(x^{*})|(x^{*},\mathcal{D})$ or sampling from it, to perform prediction. Again, it could be beneficial to leverage on the discreteness of $\widetilde{\Gamma}(L)$. For example, these multivariate stable random variables can be expressed as suitable linear transformations of independent stable random variables \citep{samoradnitsky2017stable}, and results expressing stable variables as mixtures of Gaussian variables are available in \cite{samoradnitsky2017stable}.
\subsection{Neural tangent kernel}
In \cref{sec:bayesian_inference} we reviewed how the connection with GPs makes it possible to perform Bayesian inference directly on the limiting process. This corresponds to a "weakly-trained" regime of NNs, in the sense that the point (mean) predictions are equivalent to assuming an $l_2$ loss function, and fitting only a terminal linear layer to the training data, i.e. performing a kernel regression \citep{arora2019exact}. The works of \cite{jacot2018neural}, \cite{lee2019wide} and \cite{arora2019exact} consider "fully-trained" NNs with $l_2$ loss and continuous-time gradient descent. Under Gaussian initialization assumptions it is shown that, as the width of the NN goes to infinity, the point predictions produced by such fully trained networks are given again by a kernel regression, but with respect to a different kernel, the neural tangent kernel.
In the derivation of the neural tangent kernel, one important point is that the gradients are not computed with respect to the standard model parameters, i.e. the weights and biases entering the affine transforms. Instead they are "reparametrized gradients" which are computed with respect to parameters initialized as $\mathcal{N}(0,1)$, with any scaling (standard deviation) defined by parameter multiplication. It would thus be interesting to study whether a corresponding neural tangent kernel can be defined for the case of stable distributions with $0 < \alpha < 2$, and whether the parametrization of \cref{eq:f1}-\cref{eq:fl} is the appropriate one to do so.
\subsection{Information propagation}
The recursions \cref{eq:ip_q}-\cref{eq:ip_c} define the evolution over depth of the distribution of $f^{(l)}$ for two points $x,x'$ when weights and biases are distributed as in \cref{sec:intro}. The information propagation framework studies the behavior of $q_x^{(l)}$ and $\rho_{x,x'}^{(l)}$ as $l \rightarrow +\infty$. It is shown in \cite{poole2016exponential} and \cite{schoenholz2017deep} that the $(\sigma_w,\sigma_b)$ positive quadrant is divided into two regions: a stable phase where $\rho_{x,x'}^{(l)} \rightarrow 1$ and a chaotic phase where $\rho_{x,x'}^{(l)}$ converges to a random variable (in the $\phi=\tanh$ case; in other cases the limiting processes may fail to exist). Thus in the stable phase $f^{(l)}$ is eventually perfectly correlated over inputs (and in most cases perfectly constant), while in the chaotic phase it is almost everywhere discontinuous. The work of \cite{hayou2019impact} formalizes these results and investigates the case where $(\sigma_w,\sigma_b)$ lies on the curve separating the stable from the chaotic phase, i.e. the edge of chaos. There it is shown that the behavior is qualitatively similar to that of the stable phase, but with a lower rate of convergence with respect to depth. Thus in all cases the distribution of $f^{(l)}$ eventually collapses to a degenerate and inexpressive distribution as depth increases.
In this context it would be interesting to study the impact of the use of stable distributions. All results mentioned above hold for the Gaussian case, which corresponds to $\alpha=2$. This further analysis would thus study the case $0 < \alpha < 2$, resulting in a triplet $(\sigma_w,\sigma_b,\alpha)$. Even though it seems hard to escape the curse of depth under iid initializations, it might be that the use of stable distributions, with their not-uniformly-vanishing relevance at the unit level \citep{neal1995bayesian}, slows down the rate of convergence to the limiting regime.
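To make the phase-transition picture concrete in the Gaussian case $\alpha=2$, the following pure-Python sketch iterates the standard mean-field recursion of \cite{poole2016exponential} for the variance $q$ and correlation $\rho$ with $\phi=\tanh$, estimating the Gaussian expectations by Monte Carlo. The specific constants and function names are ours, for illustration only.

```python
import math
import random

def ip_recursion(sigma_w, sigma_b, depth, rho0=0.5, n_mc=20000, seed=0):
    """Iterate the mean-field recursion for the pre-activation variance q
    and the cross-input correlation rho, with phi = tanh; the Gaussian
    expectations are estimated by Monte Carlo over shared samples."""
    rng = random.Random(seed)
    z = [(rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)) for _ in range(n_mc)]
    q, rho = 1.0, rho0
    for _ in range(depth):
        s = math.sqrt(q)
        r2 = math.sqrt(max(0.0, 1.0 - rho * rho))
        sq = sc = 0.0
        for z1, z2 in z:
            t1 = math.tanh(s * z1)
            t2 = math.tanh(s * (rho * z1 + r2 * z2))  # correlated input
            sq += t1 * t1
            sc += t1 * t2
        q_new = sigma_w**2 * sq / n_mc + sigma_b**2
        c_new = sigma_w**2 * sc / n_mc + sigma_b**2
        q, rho = q_new, max(-1.0, min(1.0, c_new / q_new))
    return q, rho
```

With a moderate $\sigma_w$ the correlation is driven to 1 (stable phase), while with a large $\sigma_w$ it settles at a fixed point well below 1 (chaotic phase), matching the qualitative description above.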
\section{Conclusions}\label{sec:conclusions}
Within the setting of fully connected feed-forward deep NNs with weights and biases iid as centered and symmetric stable distributions, we proved that the infinite-width limit of the NN, under a suitable scaling of the weights, is a stable process. This result contributes to the theory of fully connected feed-forward deep NNs, generalizing the work of \cite{matthews2018gaussian}. We presented an extensive discussion of how our result can be used to extend recent lines of research which rely on GP limits.
On the theoretical side further developments of our work are possible. Firstly, \cite{matthews2018gaussian} performs an empirical analysis of the rates of convergence to the limiting process as a function of depth with respect to the MMD discrepancy \citep{gretton2012kernel}. Having proved the convergence of the finite dimensional distributions to multivariate stable distributions, the next step would be to establish the rate of convergence with respect to a metric of choice as a function of the stability index $\alpha$ and depth $l$. Secondly, all the established convergence results (this paper included) concern the convergence of the finite dimensional distributions of the NN layers. For the countable case, which is the case of the components $i \geq 1$ in each layer, this is equivalent to the convergence in distribution of the whole process (over all the $i$) with respect to the product topology. However, the input space $\mathbb{R}^I$ is not countable. Hence, for a given $i$, the convergence of the finite dimensional distributions (i.e. over a finite collection of inputs) is not enough to establish the convergence in distribution of the stochastic process seen as a random function of the input (with respect to an appropriate metric). This is also the case for results concerning the convergence to GPs. It would thus be worthwhile to complete this theoretical line of research by establishing such a result for any $0 < \alpha \leq 2$. As a side result, doing so is likely to provide estimates of the smoothness properties of the limiting stochastic processes.
\section{Acknowledgements}\label{sec:acknowledgements}
We wish to thank the three anonymous reviewers and the meta-reviewer for their valuable feedback. Stefano Favaro received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 817257). Stefano Favaro gratefully acknowledges the financial support from the Italian Ministry of Education, University and Research (MIUR), ``Dipartimenti di Eccellenza'' grant 2018-2022.
\section{INTRODUCTION}
In recent years, there have been a considerable number of studies on explaining the decisions of deep learning models.
While deep learning models have been used to improve the accuracy of various tasks, they encounter a challenge: it is difficult to understand the basis of their decisions.
This makes it difficult to use deep learning models for tasks that require explanations, such as medical image processing.
Explanations are also helpful in understanding the model's behavior.
For these reasons, research on understanding the rationale for decisions of deep learning models has been widely conducted.
GNNs are deep learning models that take graph data as inputs.
In many real-world situations, data are represented in the form of graphs.
For example, molecular structures can be represented as graphs where nodes are atoms and edges are chemical bonds.
Therefore, GNNs are becoming powerful tools that can be applied to a variety of tasks such as drug discovery.
However, similar to other deep learning models, GNNs cannot present the reasoning behind their decisions.
In this study, we extend several explainability methods for CNNs to GNNs to calculate the importance of the edges for the models' outputs.
Reference \cite{survey} states that graph convolution in GNNs is a generalization of 2-D convolution in CNNs, because both take a weighted sum of information from neighboring nodes/pixels.
This similarity between GNNs and CNNs makes it reasonable to apply techniques used for CNNs to GNNs.
Thus, we investigate LIME~\cite{LIME}, Gradient-Based Saliency Maps~\cite{saliency-maps}, and Grad-CAM~\cite{grad-cam}. These are frequently used in computer vision tasks.
Although the techniques employed in this study are not novel, our contribution is that we extend off-the-shelf explainability methods from computer vision to GNNs and experimentally show that LIME is the best approach. Furthermore, we found LIME to be superior to a state-of-the-art method~\cite{GNNExplainer}.
\section{RELATED WORK}\label{sec:related}
\subsection{Formulation of GNNs}
Although a variety of GNN methods have been proposed, most of them can be expressed in the framework of message passing~\cite{MP} as follows.
Let $G$ be an input graph of GNNs, $\bm{h}_v^t$ be the feature vector of the node $v$ in $G$ at the $t$th message passing phase, and $N(v)$ be the set of nodes adjacent to the node $v$.
The message passing operation is expressed by the following equation:
\begin{eqnarray}
\bm{h}_{v}^{t+1} &=& U_{t}\left(\bm{h}_{v}^{t}, \sum_{w \in N(v)} M_{t}\left(\bm{h}_{v}^{t}, \bm{h}_{w}^{t}\right)\right).
\end{eqnarray}
Here, $M_{t}$ and $U_{t}$ are functions defined for different methods, where $M_{t}$ collects information from neighboring nodes, and $U_{t}$ updates the feature vector of each node based on the neighboring information.
By performing these message passing operations $T$ times, a higher-order feature vector $\bm{h}_{v}^{T}$ of the node $v$ can be obtained.
For a graph classification task, the feature vector for the entire graph is then calculated from each node's feature vector by taking its summation or mean, for example.
By performing the above-mentioned operations, feature vectors for each node or each graph can be obtained.
Finally, these feature vectors are fed into fully connected layers.
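As an illustration, the message passing framework above can be instantiated in a few lines of pure Python. The particular choices of $M_t$ and $U_t$ below (copy the neighbour's features, then average) are toy examples of ours, not taken from any specific GNN method.

```python
def message_passing(adj, h, M, U, T):
    """Generic message passing: adj[v] lists the neighbours of node v and
    h[v] is the feature vector of v; M builds messages, U updates nodes."""
    for _ in range(T):
        new_h = []
        for v in range(len(h)):
            # sum the messages arriving from the neighbours of v
            msg = [0.0] * len(h[v])
            for w in adj[v]:
                msg = [a + b for a, b in zip(msg, M(h[v], h[w]))]
            new_h.append(U(h[v], msg))
        h = new_h
    return h

# toy instantiation: the message is the neighbour's feature vector, and the
# update averages the node's own features with the aggregated messages
M = lambda hv, hw: hw
U = lambda hv, msg: [(a + b) / 2.0 for a, b in zip(hv, msg)]
```

On a three-node path graph, one round of this toy scheme already mixes each node's feature with those of its neighbours, which is the behavior the framework is meant to capture.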
\subsection{Explainability Methods}
There has been a considerable amount of research on explaining the decisions of deep learning models.
For example, Ribeiro et al.~\cite{LIME} proposed LIME that can be applied to machine learning models in general, including deep learning models.
Furthermore, significant research on explainability methods that are designed for CNN models has been carried out.
For example, Simonyan et al.~\cite{saliency-maps} proposed Gradient-Based Saliency Maps.
This method simply differentiated the output of the model with respect to each pixel and created a heat map.
Another explainability method for CNN models is Grad-CAM~\cite{grad-cam}.
Grad-CAM used the values of feature maps in CNN models and the differential of the output with respect to them to calculate each pixel's importance. Then, a heat map was created.
In contrast to the explainability methods for CNNs, there are fewer works that explain GNN models.
For example, Pope et al.~\cite{node-cam} extended Grad-CAM to GNNs and calculated the importance of each node for the output of the GNN model.
Note that this approach is designed to explain the contribution of the nodes only.
GNNExplainer~\cite{GNNExplainer} is an explainability method for GNNs that explains which parts of edges and which dimensions of the node features are responsible for GNN model's outputs.
In addition, there are several approaches of explainability for edges in graphs.
Please refer to~\cite{yuan2020explainability} for a comprehensive survey of this field.
\section{PROPOSED METHODS}\label{sec:proposed}
In this section, we extend the explainability methods for CNNs to GNNs to predict which edges are important for GNN decisions.
We define an important edge as ``an edge that contributes to the increase of the GNN model's output.''
\subsection{LIME-Based Method}
First, we propose a LIME-based~\cite{LIME} explainability method for GNNs.
In the message passing operation described in the previous section, each node gathers the features of the adjacent nodes.
Thus, the edges are the paths through which node features pass.
Therefore, we define the operation of multiplying the node features passing through a certain edge by a weight $w \in [0, 1]$ as ``perturbing an edge.''
In the original LIME method, each part of the inputs is either removed completely or preserved.
The perturbing operation of multiplying information by continuous weight is different from the simple removal operation of the original LIME algorithm.
Let $n$ be the number of edges in the input graph $G$, $p$ be the probability of perturbing each edge, and $m$ be the number of samples.
$p$ and $m$ are both hyperparameters.
First, $m$ graphs in which each edge of $G$ is perturbed with the probability of $p$ are input to the GNN model.
Then, for $k = 1, 2, ..., m$, we obtain the vector $\bm{x}_k \in [0, 1]^{n}$ indicating which edges were perturbed and the corresponding model output $y_k$.
Here, each dimension of $\bm{x}_k$ corresponds to each edge of $G$ and shows the weight by which the information passing through each edge is multiplied.
A linear regression model is then constructed to predict $y_k$ from $\bm{x}_k$, and the importance of each edge is obtained as the coefficients of the linear regression model.
As the linear regression model, we use Lasso~\cite{Lasso}, which has a regularization term that limits the number of nonzero coefficients.
The loss function used for training the Lasso model $f$ is the weighted mean squared error (MSE):
\begin{equation}
\mathcal{L} = \sum_{k=1}^{m} \exp \left(-\frac{(n - \|{\bf x}_k\|_1)^{2}}{\sigma^{2}}\right) \left(f\left({\bf x}_k\right) - y_k\right)^{2}
+ \lambda \|{\bf w}\|_1,
\end{equation}
where ${\bf w} \in \mathbb{R}^n$ represents $f$'s coefficients, and $\sigma$ and $\lambda$ are both hyperparameters.
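The procedure can be sketched end-to-end in pure Python as follows. The weighted Lasso is solved by a small coordinate-descent routine, and the GNN is stood in for by a toy black-box function of the edge-weight vector; all names, the choice $\sigma = n$, and the toy model are ours, for illustration only.

```python
import math
import random

def soft_threshold(r, t):
    return math.copysign(max(abs(r) - t, 0.0), r)

def weighted_lasso(X, y, w, lam, iters=100):
    """Coordinate descent for  sum_k w_k (x_k . b - y_k)^2 + lam * ||b||_1."""
    n = len(X[0])
    b = [0.0] * n
    for _ in range(iters):
        for j in range(n):
            rho = z = 0.0
            for xk, yk, wk in zip(X, y, w):
                # partial residual with feature j's contribution removed
                r = yk - sum(xk[l] * b[l] for l in range(n)) + xk[j] * b[j]
                rho += wk * xk[j] * r
                z += wk * xk[j] * xk[j]
            b[j] = soft_threshold(rho, lam / 2.0) / z if z > 0 else 0.0
    return b

def lime_edge_importance(model, n_edges, p=0.5, m=400, lam=0.05, seed=0):
    """Perturb each edge weight with probability p, query the model, and fit
    a weighted Lasso; the coefficients are the edge importances."""
    rng = random.Random(seed)
    X, y, w = [], [], []
    for _ in range(m):
        x = [rng.random() if rng.random() < p else 1.0 for _ in range(n_edges)]
        X.append(x)
        y.append(model(x))
        d = n_edges - sum(x)                    # distance from the unperturbed graph
        w.append(math.exp(-d * d / n_edges**2))  # proximity kernel, sigma = n_edges
    return weighted_lasso(X, y, w, lam)
```

On a toy model whose output depends only on the first two edge weights, the fitted coefficients single out exactly those edges, which is the behavior the method relies on.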
\subsection{Saliency Map-Based Method}
In this section, we propose an extension of Saliency Maps~\cite{saliency-maps} to calculate each edge's importance.
Here, we consider GGNN~\cite{GGNN} as the model to be explained for example, and assume the model's output to be $y$.
In GGNN, the message passing operation is represented as follows:
\begin{eqnarray}
\bm{h}_{v}^{t}={\rm GRU}\left(\bm{h}_{v}^{t-1}, \sum_{w \in N(v)} \bm{W} \bm{h}_{w}^{t-1}\right),
\end{eqnarray}
where $\bm{W}$ is a learnable matrix.
As the edges can be considered as pathways through which the node information propagates, one can assume that $\bm{W} \bm{h}_{w}^{t-1}$ passes through the edge $e_{v, w}$ connecting the node $v$ and the node $w$ at the $t$th message passing phase.
This operation is performed on all nodes in the graph, so the information that eventually passes through $e_{v, w}$ is $\bm{W} \bm{h}_{v}^{t-1}$ and $\bm{W} \bm{h}_{w}^{t-1}$.
Therefore, we consider the importance of $e_{v, w}$ to be the sum of information $\bm{W} \bm{h}_{v}^{t-1}$ and $\bm{W} \bm{h}_{w}^{t-1}$.
Let the size of $\bm{W}$ be $m$-by-$n$, the length of $\bm{h}_{v}^{t-1}$ be $n$, and the $k$th row component of $\bm{W} \bm{h}_{v}^{t-1}$ be ${\left(\bm{W} \bm{h}_{v}^{t-1}\right)}_k$.
Then, as the importance of $e_{v, w}$, $L_{edge} [t, v, w]$ is calculated as follows:
\begin{equation}
L_{edge} [t, v, w] = \sum_{k=1}^m \frac{\partial y}{\partial {\left(\bm{W} \bm{h}_{v}^{t-1}\right)}_k}
+ \sum_{k=1}^m \frac{\partial y}{\partial {\left(\bm{W} \bm{h}_{w}^{t-1}\right)}_k}.
\end{equation}
\subsection{Grad-CAM-Based Method}
In this section, we propose an extension of Grad-CAM~\cite{grad-cam} to calculate each edge's importance.
Here, we consider the same GGNN~\cite{GGNN} model as in the previous subsection for example.
As in the Saliency Map-based method, the importance of $e_{v, w}$ can be considered as the sum of the importance of $\bm{W} \bm{h}_{v}^{t-1}$ and $\bm{W} \bm{h}_{w}^{t-1}$.
In this method, the importance of them is calculated by the method in~\cite{node-cam}, and then summed.
The specific method is described below:
The $N$ feature vectors $\bm{W} \bm{h}_{1}^{t-1}$ to $\bm{W} \bm{h}_{N}^{t-1}$ are converted to row vectors, and the feature matrix $\bm{G}^t$ is created by stacking them.
The $(v, k)$ element of $\bm{G}^t$ thus corresponds to the $k$th component of $\bm{W} \bm{h}_{v}^{t-1}$.
First, the weight $\alpha_{k}^{t}$ for $\bm{G}^t$'s column $k$ is calculated as follows:
\begin{equation}
\alpha_{k}^{t} = \frac{1}{N} \sum_{n=1}^{N} \frac{\partial y}{\partial \bm{G}_{n, k}^{t}}.
\end{equation}
Second, let $\bm{\alpha}^{t} \in \mathbb{R}^m$ be the vector whose $k$th component is $\alpha_{k}^{t}$.
Then, the importance of $\bm{W} \bm{h}_{v}^{t-1}$ is obtained by calculating the dot product of $\bm{\alpha}^{t}$ and $\bm{W} \bm{h}_{v}^{t-1}$.
Finally, the importance of $e_{v, w}$, $L_{edge} [t, v, w]$ is calculated by adding the importance of vector $\bm{W} \bm{h}_{v}^{t-1}$ and $\bm{W} \bm{h}_{w}^{t-1}$ as follows:
\begin{equation}
L_{edge} [t, v, w] = \bm{\alpha}^{t} \cdot \bm{W} \bm{h}_{v}^{t-1} + \bm{\alpha}^{t} \cdot \bm{W} \bm{h}_{w}^{t-1}.
\end{equation}
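Given the feature rows $\bm{W}\bm{h}_{v}^{t-1}$ and the gradients of the output with respect to them (obtained in practice from an autodiff framework), the two equations above translate directly into code. The following pure-Python sketch uses illustrative names of our own choosing.

```python
def grad_cam_edge_importance(G, dy_dG, edges):
    """G[v][k]: k-th component of W h_v (row v of the feature matrix G^t);
    dy_dG[v][k]: gradient of the output y w.r.t. G[v][k];
    edges: list of (v, w) node pairs."""
    N, K = len(G), len(G[0])
    # column-wise averaged gradients: the weights alpha_k
    alpha = [sum(dy_dG[n][k] for n in range(N)) / N for k in range(K)]
    importance = {}
    for v, w in edges:
        # dot product of alpha with the two feature rows incident to the edge
        importance[(v, w)] = sum(alpha[k] * (G[v][k] + G[w][k])
                                 for k in range(K))
    return importance
```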
\subsection{Baseline Methods}
We compare these methods to three baseline methods: GNNExplainer~\cite{GNNExplainer}, the removal method, and the random method.
In the removal method, edges are removed from the input graph one by one and the resulting graphs are input to the model; the importance of each edge is then calculated as the extent to which the output value decreases compared to the original value.
In the random method, each edge's importance is determined randomly.
\section{EXPERIMENTS}\label{sec:experiments}
\subsection{Experimental Setup}
We use three evaluation tasks: synthetic test, benzene ring test, and removal test.
In the synthetic test, we follow the setting in GNNExplainer~\cite{GNNExplainer} and use the dataset called BA-shapes.
This is a node classification dataset that contains a randomly generated graph with 300 nodes and 80 five-node ``house''-structured network motifs attached to it.
In this dataset, each node has no feature values; nodes in the base graph are labeled 0, and the ones located in the ``house'' are labeled 1, 2, or 3.
First, a GCN~\cite{GCN} model is trained to predict each node's label.
If this model predicts that a node located in the ``house'' has a label other than 0, then the ground-truth basis of this prediction can be regarded as the ``house''-structured motif.
The evaluation metric is the percentage of the ground-truth edges that are included in the top five edges in terms of importance calculated by each method (i.e., the recall rate).
In the second evaluation task, the benzene ring test, we use the QM9 dataset~\cite{qm9} that contains molecule graphs with atoms as nodes and chemical bonds as edges.
We trained a GGNN~\cite{GGNN} model that performs binary classification of whether a molecule is aromatic.
As a molecule's aromaticity is determined only by the presence of a benzene ring, we can define the ground truth of the explanation as the five or six edges that form the benzene ring.
The evaluation metric is the percentage of the ground-truth edges that are included in the top five or six edges in terms of importance (i.e., the recall rate).
The third evaluation task, the removal test, is motivated by the evaluation tasks of explainability methods for CNNs proposed by~\cite{np}.
First, the edges in the graph are removed from one to five in the order of importance of the edges obtained by each explainability method.
Second, these graphs are input to the GNN model to obtain the output values.
Subsequently, the number of removed edges is plotted on the horizontal axis, and the decrease in the predicted value compared to the original value is plotted on the vertical axis to obtain the Area Under the Curve ($\rm{AUC_{edge}}$).
The larger the $\rm{AUC_{edge}}$, the better the performance of the explainability method.
Let $y_k~(k = 0, 1, 2, ..., 5)$ be the output of the GNN model when $k$ edges are removed.
$\rm{AUC_{edge}}$ can be calculated by the following equation:
\begin{equation}
{\rm AUC_{edge}} = \sum_{k = 1}^{5} \frac{\left(y_0 - y_{k-1}\right) +\left(y_0 - y_{k}\right)}{2}.
\end{equation}
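This trapezoidal sum is straightforward to implement; the sketch below (with names of our own choosing) computes it from the sequence of model outputs.

```python
def auc_edge(y):
    """y[k]: model output after removing the k most important edges,
    k = 0..5; trapezoidal sum of the drop relative to y[0]."""
    y0 = y[0]
    return sum(((y0 - y[k - 1]) + (y0 - y[k])) / 2.0
               for k in range(1, len(y)))
```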
We use three datasets for the removal test: Cora~\cite{cora}, Coauthor~\cite{coauthor}, and Amazon~\cite{coauthor}.
Cora is a citation network dataset in which nodes are documents and edges are citation links.
Coauthor is a coauthorship network dataset in which nodes are authors and are connected by an edge if they coauthored a paper.
Amazon is a co-purchase graph dataset in which nodes are products and edges indicate that two goods are frequently bought together.
We trained three GCN~\cite{GCN} models that predict each node's label for these three datasets respectively.
\subsection{Results}
Examples of explanation results for the benzene ring test are shown in Fig.~\ref{benzene-ex}, which shows that the importance of edges forming benzene rings is relatively high in all methods except for GNNExplainer.
In particular, LIME assigns high importance to the benzene-ring edges only, i.e., it can pinpoint the important edges.
{
\setlength\abovecaptionskip{-6pt}
\begin{table*}[t]
\caption{Results of the synthetic test, benzene ring test, and removal test.}
\label{scores}
\begin{center}
\begin{tabular}{cccccc}
\toprule
& Synthetic test (accuracy) & Benzene ring test (accuracy) & \multicolumn{3}{c}{Removal test ($\rm{AUC_{edge}}$)} \\
\midrule
Task & Node classification & Graph classification & \multicolumn{3}{c}{Node classification} \\
Model & GCN~\cite{GCN} & GGNN~\cite{GGNN} & \multicolumn{3}{c}{GCN~\cite{GCN}} \\
Dataset & BA-shapes~\cite{GNNExplainer} & QM9~\cite{qm9} & Cora~\cite{cora} & Coauthor~\cite{coauthor} & Amazon~\cite{coauthor} \\
\midrule
LIME~\cite{LIME} & 0.67 & $\bm{0.99}$ & $\bm{2.13}$ & $\bm{1.55}$ & $\bm{0.37}$\\
Saliency Maps~\cite{saliency-maps} & 0.91 & 0.62 & 1.78 & 1.14 & 0.27\\
Grad-CAM~\cite{grad-cam} & 0.20 & 0.88 & 0.21 & 0.03 & 0\\
GNNExplainer~\cite{GNNExplainer} (Reproduction) & 0.87 & 0.44 & 0.57 & 0.60 & 0.08\\
GNNExplainer~\cite{GNNExplainer} (Reported) & $\bm{0.93}$ & - & - & - & -\\
Removal & 0.80 & 0.90 & $\bm{2.14}$ & $\bm{1.59}$ & $\bm{0.38}$\\
Random & 0.15 & 0.36 & 0.18 & 0.02 & 0\\
\bottomrule
\end{tabular}
\end{center}
\end{table*}
\begin{table}[t]
\caption{The average computational costs of LIME and the removal method for each dataset.}
\label{costs}
\begin{center}
\begin{tabular}{cccc}
\toprule
& Cora~\cite{cora} & Coauthor~\cite{coauthor} & Amazon~\cite{coauthor} \\
\midrule
LIME~\cite{LIME} & 0.98~s & 11.4~s & 79.3~s \\
Removal & 3.63~s & 119.8~s & 704.6~s \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
}
\begin{figure}[t]
\begin{minipage}{0.32\hsize}
\begin{center}
\subfigure[LIME]{
\includegraphics[width=27.5mm]{18-LIME.eps}
\label{fig:one}}
\end{center}
\end{minipage}
\begin{minipage}{0.32\hsize}
\begin{center}
\subfigure[Grad-CAM]{
\includegraphics[width=27.5mm]{18-grad-cam.eps}
\label{fig:three}}
\end{center}
\end{minipage}
\begin{minipage}{0.32\hsize}
\begin{center}
\subfigure[GNNExplainer]{
\includegraphics[width=27.5mm]{18-explainer.eps}
\label{fig:four}}
\end{center}
\end{minipage}
\caption{Examples of explanation results for the benzene ring test.}
\label{benzene-ex}
\end{figure}
The results of the synthetic test, benzene ring test, and removal test are shown in Table~\ref{scores}.
In the synthetic test, GNNExplainer (Reported) yields the best performance, followed by saliency maps.
However, in the benzene ring test, LIME shows the best performance followed by the removal method and Grad-CAM, whereas GNNExplainer is the worst among all methods except for the random method.
In the removal test, LIME and the removal method perform the best followed by saliency maps, whereas GNNExplainer and Grad-CAM are worse than these methods.
Although GNNExplainer performs well on the synthetic dataset, it does not perform well on real-world datasets.
On the other hand, LIME is generally better than the other algorithms in our experiments.
In the removal test, the performances of LIME and the removal method are comparable, but the removal method requires high computational costs because it removes each edge one by one.
Table~\ref{costs} shows the average computational costs of LIME and the removal method in the three datasets for the removal test.
The computational cost of the removal method is about three to ten times larger than that of LIME.
Therefore, LIME is the best explainability method in terms of both performance and computational cost.
\subsection{Discussion}
LIME has the best performance among the three proposed methods in the real-world situations.
LIME directly perturbs several edges in the input graph at once; therefore, it can take interactions between edges into account.
In contrast, Grad-CAM and Saliency Maps calculate each edge's importance independently.
This capability of considering the interactions between edges would explain why LIME's score is the best.
However, in the synthetic test, GNNExplainer outperforms LIME.
In the synthetic dataset, each node has no feature values, which is different from the real-world datasets, where each node has unique feature values.
As mentioned in Section~\ref{sec:proposed}, edges are the paths through which node information passes, but no meaningful information passes through the edges in the synthetic dataset.
This absence of meaningful information passing through the edges would make the perturbing operation of LIME less effective.
\section{Conclusion}\label{sec:conclusion}
In this study, we extended explainability methods for CNNs to GNNs, i.e., LIME, Grad-CAM, and Gradient-Based Saliency Maps, to calculate each edge's importance for the outputs.
It was found that the performance of the LIME-based approach was the best in real-world situations in terms of both accuracy and computational cost.
\renewcommand{\baselinestretch}{0.2}
\bibliographystyle{IEEEtran}
\section{Introduction}
The coronal mass ejection (CME) of 23 July 2012 was one of the most energetic events ever recorded, with a Sun-to-1~AU transit time of less than 21~h. The event attracted a lot of attention due to the strong magnetic ejecta and the extremely high impact speed of more than 2200~km~s$^{-1}$ for the shock ahead of it, as measured by in-situ instruments aboard STEREO-A/PLASTIC \citep{russell13}. If directed towards Earth, the CME might have been unusually geoeffective, and scenarios of extreme space weather consequences were proposed \citep{ngwira13,baker13}. \cite{liu14} studied the solar perspective of this event and came to the conclusion that the complex magnetic field and high field strength measured in-situ are most probably attributable to a CME-CME interaction process. These authors also speculated that the extremely high speed and short propagation time were caused by the increased upstream solar wind speed and decreased solar wind density as well as magnetic field tension, which in turn resulted from the CME associated with an M7.7 flare on 19 July 2012.
The structure of the ambient magnetic field and plasma flow in which CMEs are embedded may play a critical role in controlling their propagation behavior and geoeffectiveness \citep[see \textit{e.g.},][]{vrsnak07,gopal08}. In recent years the stereoscopic view made possible by the \textit{Solar-Terrestrial Relations Observatory} \citep[STEREO;][]{kaiser08} has permitted us to learn much about the evolution of CMEs in the inner heliosphere. Observational results revealed that variations in the ambient solar wind speed, \textit{e.g.}\,due to slow and fast solar wind streams and various interactions, may appreciably change the CME kinematics (including deflection) and deform the CME structure \citep[see \textit{e.g.},][]{rouillard10,temmer11,lugaz12,liu13}. Recent studies pointed out that magnetic erosion by reconnection processes at the front of the magnetic structure of the CME may significantly reduce the strength of geomagnetic storms \citep{ruffenach12,lavraud14}. As such, the preconditioning of interplanetary space has immediate consequences for predicting arrival times and impact speeds of CMEs.
Since the plasma and magnetic field conditions in interplanetary space are not observationally well known, research on the interaction of CMEs with their environment may be performed by combining the observational data with models. In this study, we investigate the CME from 23 July 2012 in terms of its evolution in the inner heliosphere. The three-dimensional (3D) kinematical profile of the CME is derived up to a distance range of 30~$R_{\odot}$ by applying the graduated cylindrical shell (GCS) model \citep{thernisien06}. Assuming that aerodynamic drag governs the evolution of the CME at farther distances from the Sun, we use the drag-based model (DBM;\,\cite{vrsnak13}, and references therein) to reconstruct the CME propagation behavior. This yields results that strongly indicate the low-density environment in which the CME is propagating.
In Section~\ref{obs} we give an overview of the observational data and describe the methods that are used for the analysis. We give the results in Section~\ref{res} and discuss them and draw our conclusions in Sections~\ref{dis} and~\ref{con}.
\section{Observational Data and Methods}\label{obs}
On 23 July 2012 at around 02:08~UT a fast CME is launched from solar active region NOAA~11520 located at W141$^{\circ}$, \textit{i.e.}\,behind the limb as seen from Earth. Two filament eruptions may have driven two CMEs, which then interacted \citep[see][]{liu14}. However, we cannot clearly separate two individual CME profiles in the coronagraph data. Therefore, for our study we simply refer to the event as a two-stage filament eruption resulting in a complex CME, consistent with the overall CME profile from \cite{liu14}. In this respect we note that the derived longitude range of the CME is consistent with the erupting filament located at the eastern edge of the active region. For the event time, the two STEREO spacecraft, STEREO-A (\textit{Ahead}) and STEREO-B (\textit{Behind}), were located, respectively, at the heliographic longitudes of W121$^{\circ}$ and E115$^{\circ}$, and heliocentric distances of 0.96~AU ($\approx$206~$R_{\odot}$) and 1.02~AU ($\approx$219~$R_{\odot}$). STEREO-A views the CME as a halo event, STEREO-B close to its eastern limb with the plane-of-sky angle W155$^{\circ}$. We use the disk-integrated flux from the EUVI-A 195\AA~channel as a proxy to estimate the soft X-ray emission \citep[see][]{nitta13}. According to this, the associated flare is a long-duration event of at most X2.5 GOES class.
The development of the CME up to a distance range of $\approx$30~$R_{\odot}$ is observed from combined EUV and white-light data from STEREO-A and -B as well as the \textit{Solar Heliospheric Observatory} \citep[SoHO;][]{domingo95}. Using contemporaneous imaging data from the coronagraphs aboard those spacecraft enables us to perform a 3D reconstruction of the CME evolution by applying the GCS model. The different viewpoints from at least two spacecraft (combinations of COR2-A on STEREO-A, COR2-B on STEREO-B, and LASCO/C2/C3 on SoHO) are fed into the model, and by forward fitting the model flux rope to the observed white-light structure we obtain the 3D geometry (width and cross section), propagation direction, and deprojected kinematics of the CME. We apply the same model for investigating the shock ahead of the flux rope by setting the half-angle to zero and the aspect ratio close to one. This makes the front part of the GCS model spherical in order to mimic the geometry of a shock front \citep[see][]{thernisien11}. Measurements coming from single-spacecraft data (EUVI-B, COR1-B) are deprojected onto the derived 3D results using a multiplication factor. From the deprojected distance-time data we derive the speed and acceleration time profiles by using the regularization method originally developed by \cite{kontar04} to invert solar X-ray spectra measured by the \textit{Reuven Ramaty High-Energy Solar Spectroscopic Imager} \citep[RHESSI;][]{lin02}. The method was further developed and could be successfully applied to CME kinematical data as described in \cite{temmer10}.
By calibrating white-light images from COR2-B data in units of the mean solar brightness and subtracting a pre-event image we calculate the excess brightness due to the CME. The excess brightness is then converted to the excess mass of the CME under the common assumptions that all of the CME mass is concentrated on the plane-of-sky of the corresponding instrument, and that the CME material consists of 90\%~H and 10\%~He. The brightness of all pixels in the region covering the CME is then summed up and fed into the following relation $m=B_{\rm obs}/B_{\rm e}(\Phi)$, where $B_{\rm obs}$ is the white-light pixel brightness and $B_{\rm e}(\Phi)$ the single electron brightness at the angle $\Phi$ from the plane-of-sky \citep[see][]{billings66,vourlidas00}.
In-situ data are taken from the STEREO-A/IMPACT-MAG instrument, which measures the magnetic field direction and magnitude \citep{luhmann08}. The high fluxes of solar energetic particles associated with this event caused a data gap in the solar wind plasma data from PLASTIC \citep{galvin08}; hence, the plasma parameters are reconstructed from the magnetic field data (5~min resolution). This introduces uncertainties of 1\% for reconstructed speeds below 900~km~s$^{-1}$ and of the order of 200~km~s$^{-1}$ for the peak (A.B. Galvin, private communication). For our study we extract the arrival time and the lower limit of the impact speed of the CME shock and magnetic ejecta as given in \cite{russell13}.
\begin{figure}
\centerline{\includegraphics[width=1\textwidth,clip=]{f1.pdf}}
\caption{GCS model results for the reconstructed magnetic structure (yellow mesh) and shock (green mesh) using simultaneous image triplets of COR2-A, COR2-B, and LASCO-C2 at $\approx$03:08~UT.}
\label{gcs}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=1\textwidth,clip=]{f2.pdf}}
\caption{Same as Figure~\ref{gcs} but for the image triplet COR2-A, COR2-B, and LASCO-C3 at $\sim$03:39~UT. }
\label{gcs2}
\end{figure}
For investigating the propagation behavior of the CME in interplanetary space, and to relate the results from remote sensing and in-situ data, we use the DBM \citep{vrsnak07,vrsnak13}. Applying the DBM we assume that the CME propagation speed ($v_{\rm CME}$) at far distances from the Sun is governed only by the conditions of the ambient solar wind flow (density, $\rho_{\rm SW}$, and speed, $v_{\rm SW}$), which can be expressed in terms of the magnetohydrodynamical analogue of the aerodynamic drag \citep[for more details see][]{cargill96}. The acceleration due to the aerodynamic drag of the ambient solar wind is given by $a_{\rm D}$\,=\,$-\gamma(v_{\rm CME}-v_{\rm SW})|v_{\rm CME}-v_{\rm SW}|$ with $\gamma$\,=\,$C_{\rm D}({A_{\rm CME}\rho_{\rm SW}}/{m_{\rm CME}})$, where $A_{\rm CME}$ and $m_{\rm CME}$ are the cross-section and the mass of the CME, respectively, $\rho_{\rm SW}$ is the ambient solar wind density, and $C_{\rm D}$ is the dimensionless drag coefficient, typically of order unity \citep[\textit{cf.}][]{cargill04}. The statistically derived $\gamma$ values for magnetic ejecta have a range of 0.1--2$\times$10$^{-7}$~km$^{-1}$ and $v_{\rm SW}$ of 400--500~km~s$^{-1}$ \citep{vrsnak13}. A recent statistical study shows that for the shock structures of CMEs the analytical DBM yields similar results to the numerical WSA-ENLIL+Cone model \citep[see \textit{e.g.},][]{odstrcil99} when using the parameter combination $\gamma$\,=\,0.1$\times$10$^{-7}$~km$^{-1}$ and $v_{\rm SW}$\,=\,400~km~s$^{-1}$ \citep[see][]{vrsnak14}. An online version of the DBM is available at \url{http://oh.geof.unizg.hr/DBM/dbm.php}.
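For orientation (this is our illustrative sketch, not code used in the analysis), the DBM for a CME faster than the ambient wind has the closed-form solution $v(t)=v_{\rm SW}+\Delta v_0/(1+\gamma \Delta v_0 t)$ and $r(t)=r_0+v_{\rm SW}t+\ln(1+\gamma \Delta v_0 t)/\gamma$ with $\Delta v_0=v_0-v_{\rm SW}$. The snippet below evaluates it for parameter values of the kind derived later in this study (start at 27~$R_{\odot}$ with 2300~km~s$^{-1}$, $v_{\rm SW}$\,=\,450~km~s$^{-1}$, $\gamma$\,=\,0.01$\times$10$^{-7}$~km$^{-1}$):

```python
import numpy as np

R_SUN_KM = 6.957e5  # solar radius in km

def dbm_state(t, r0, v0, v_sw, gamma):
    """Closed-form DBM solution for v0 > v_sw (decelerating CME).
    t in s, r0 in km, speeds in km/s, gamma in km^-1."""
    dv0 = v0 - v_sw
    v = v_sw + dv0 / (1.0 + gamma * dv0 * t)
    r = r0 + v_sw * t + np.log(1.0 + gamma * dv0 * t) / gamma
    return r, v

# Illustrative shock parameters (cf. the values derived in this study)
r0, v0 = 27.0 * R_SUN_KM, 2300.0     # start: 27 Rs, 2300 km/s
v_sw, gamma = 450.0, 0.01e-7         # ambient wind speed, drag parameter
r_target = 206.0 * R_SUN_KM          # heliocentric distance of STEREO-A

# Bisection for the transit time to r_target
lo, hi = 0.0, 2.0e5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    r_mid, _ = dbm_state(mid, r0, v0, v_sw, gamma)
    lo, hi = (mid, hi) if r_mid < r_target else (lo, mid)
t_arr = 0.5 * (lo + hi)
_, v_arr = dbm_state(t_arr, r0, v0, v_sw, gamma)
print(f"transit time: {t_arr / 3600.0:.1f} h, arrival speed: {v_arr:.0f} km/s")
```

With these inputs the transit time comes out near 15--16 hours with an arrival speed slightly above 2100~km~s$^{-1}$, i.e.\ almost no deceleration.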
\section{Results}\label{res}
Since the source region of the CME (W141$^{\circ}$) lies close to the plane of sky of STEREO-B (W155$^{\circ}$), it is reasonable to assume that STEREO-B images are only marginally affected by projection effects. From high-cadence EUVI-B 171\AA~data (1.25~min resolution) we are able to track the onset of a loop formation at 02:02~UT that further develops into the COR1-B field-of-view (FoV), revealing the first signature of the CME at 02:13~UT. The CME can be tracked into the FoV of COR2-B and is also identified in STEREO-A white-light as well as SoHO/LASCO data, enabling us to derive its 3D kinematics.
\begin{figure}
\centerline{\includegraphics[width=1\textwidth,clip=]{f3.pdf}}
\caption{3D distance-time measurements for the shock (red) and magnetic structure (blue) of the CME. The measurements for the CME were derived from different instruments (STEREO-A (A), STEREO-B (B), LASCO (L)) and reconstruction methods as given in the legend. }
\label{distance}
\end{figure}
The top panels of Figure~\ref{gcs} show coronagraph images from LASCO-C2 (left panel), COR2-A (middle panel), and COR2-B (right panel). The density envelope of the CME nicely reveals the shock-sheath and the magnetic structure (see annotation in the top right panel of Figure~\ref{gcs}). In order to compare the in-situ signatures of the arriving shock and magnetic structure separately, we perform a separate GCS reconstruction for each feature. The forward fits shown in Figures~\ref{gcs} and~\ref{gcs2} give the best match with the white-light data as viewed from different vantage points. From this we find that the CME propagates in longitude along W125--135$^{\circ}$ and in latitude along N00--N10$^{\circ}$. The flux rope geometry of the magnetic structure is characterized by its face-on and edge-on widths, for which we derive 130$\pm$5$^{\circ}$ and 60$\pm$5$^{\circ}$, respectively, and its tilt angle relative to the solar equator of 60$\pm$5$^{\circ}$. To track the kinematical evolution beyond the FoV of COR2, we assume self-similar expansion, i.e.\,keeping all model parameters constant with the exception of the distance, and fit the obtained CME geometries (shock, magnetic structure) to LASCO-C3 image data (we refer to these measurements as quasi-GCS). Measurements of the early evolution phase of the CME from EUVI-B 171\AA~and COR1-B images are deprojected by using a multiplication factor of 1.1 to match the 3D values. With this we derive deprojected distance-time data from $\approx$1.25~$R_{\odot}$ up to 30~$R_{\odot}$. We intentionally used some extreme scaling to track the shock front of the CME; however, we note that a clear separation between shock and magnetic structure is only observed in COR2 and LASCO data.
\begin{figure}
\centerline{\includegraphics[width=1\textwidth,clip=]{f4.pdf}}
\caption{3D speed-time profile derived from the distance-time data for the CME shock (sh) and magnetic structure (ms). The normalized EUVI-A 195\AA~flux is shown as black dashed line. }
\label{speed}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=1\textwidth,clip=]{f5.pdf}}
\caption{3D acceleration-time profile derived from the distance-time data for the magnetic structure (ms) of the CME. The derivative of the EUVI-A 195\AA~flux (normalized) is shown as black dashed line.}
\label{accel}
\end{figure}
Figure~\ref{distance} gives the deprojected distance-time values resulting from the GCS model and individual measurements (quasi-GCS and data from a single spacecraft). Due to the uncertainty in the 3D reconstruction, especially for distances far from the Sun, we use rather conservative error bars, which are reflected in the uncertainties derived for the speed- and acceleration-time profiles. Figure~\ref{speed} shows the speed-time profile together with the EUV flux in the 195\AA~channel. Around 03:00~UT ($\pm$5~min) the CME shock front reaches a maximum speed of 2580$\pm$280~km~s$^{-1}$ and the magnetic structure 2270$\pm$420~km~s$^{-1}$. Figure~\ref{accel} shows the acceleration-time profile of the magnetic structure together with the derivative of the EUV flux in the 195\AA~channel\footnote{As the shock and magnetic structure can only be clearly distinguished starting from the COR2 FoV, the shock acceleration profile is unreliable and not used for further analysis.}. The maximum acceleration is 2.25$\pm$0.18~km~s$^{-2}$ with an acceleration duration of $\approx$30~min. Consistent with case and statistical studies, the hard X-ray as well as the derivative of the soft X-ray flux of the flare is closely related to the acceleration profile of the associated CME \citep{temmer08,temmer10,bein12}. This gives strong evidence that the complex CME eruption is launched during the initial rising phase of the EUV emission. The CME mass is calculated from COR2-B data at a distance of about 14~$R_{\odot}$ and is found to be in the range of 1.5$\pm$0.5$\times10^{16}$~g. This mass is a lower limit since we neglect possible projection effects \citep{vourlidas00}. From the CME mass and maximum speed we derive a kinetic energy of the order of $\approx$5$\times$10$^{32}$~ergs. These values are at the upper end of the energy distributions found for CME observations over three decades \citep{vourlidas11} and demonstrate how exceptional the event under study is.
\begin{figure}
\centerline{\includegraphics[width=1\textwidth,clip=]{f6.pdf}}
\caption{Speed-time profile (top panel) and distance-time profile (bottom panel) for the observed CME shock (dark grey) and magnetic structure (light grey), DBM shock (red), and DBM magnetic structure (blue line). In the top panel the in-situ proton bulk speed (black line) is given together with the uncertainties for the impact speeds. Results from the empirical acceleration-velocity relation by \cite{gopal01} are marked as black dashed-dotted lines. The horizontal dashed lines in the bottom panel refer to the distance range that the CME needs to propagate owing to different propagation directions W121--W136$^{\circ}$.}
\label{dbm_dir}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=1\textwidth,clip=]{f7.pdf}}
\caption{Same as Figure~\ref{dbm_dir} but giving a close-up view on the early evolution (left panels) and the in-situ arrival (right panels).}
\label{dbm-zoom}
\end{figure}
Figures~\ref{dbm_dir} and~\ref{dbm-zoom} show the results for the propagation behavior of the CME shock and magnetic structure using the DBM. The horizontal dashed lines in the bottom panel give the location of the STEREO-A spacecraft at W121$^{\circ}$ with a distance of 206~$R_{\odot}$ and the direction W136$^{\circ}$ with a CME propagation distance of 219~$R_{\odot}$, reflecting the fact that the apex of the CME is not directed towards STEREO-A \citep[see \textit{e.g.},][]{moestl13}. For comparison, we also show the result from the empirical acceleration-velocity relation, $a=2.193-(0.0054 \times v)$, proposed by \cite{gopal01}.
From observations we obtain that the CME shock reached a distance of 27~$R_{\odot}$ at 04:25~UT on 23 July 2012 with a speed in the range of 1100--2900~km~s$^{-1}$ (\textit{cf.}\,Figures~\ref{distance} and~\ref{speed}) and arrived at STEREO-A, at a distance of 206~$R_{\odot}$, the same day at 20:55~UT with an impact speed of 2250~km~s$^{-1}$ \citep[\textit{cf.}][]{russell13}. The DBM input parameters are constrained by the remotely observed parameters and the model output is controlled by the in-situ observations. The model parameters which show the best match with the observations are summarized in Table~\ref{shockDBM}. From this it follows that the ambient solar wind flow speed in which the fast shock is propagating is of average value, 450~km~s$^{-1}$, close to the slow solar wind speed. The initial shock speed is required to be close to the impact speed (almost no deceleration), since for initial shock speeds larger than $\approx$2300~km~s$^{-1}$ the model cannot reproduce the observed arrival time and the CME would arrive much too early. We need to apply a low $C_{\rm D}$ value of the order of 0.5--0.8, which corresponds to a very heavy and fast CME as also found from MHD simulations by \cite{cargill04}. A remarkable result is obtained for the $\gamma$ value: at 0.01$\times$10$^{-7}$~km$^{-1}$ it is lower by one order of magnitude than the statistical results given in \cite{vrsnak14}. We conclude that for the shock the weak deceleration is owing to the extremely low $\gamma$ value, which we will examine in more detail below. We would like to add that the empirical acceleration-velocity relation by \cite{gopal01} would give a correct arrival time using an input speed of 2500~km~s$^{-1}$ but would have largely underestimated the impact speed.
We assume that the magnetic structure, as tracked in our study, coincides with the magnetic cloud which reached STEREO-A at 22:55~UT with an impact speed of 1870~km~s$^{-1}$ \citep[\textit{cf.}][]{russell13}. We constrain the model input parameters by observations from remote sensing data, which give for the magnetic structure a distance of 23~$R_{\odot}$ at 04:25~UT and a speed in the range of 750--2350~km~s$^{-1}$ (\textit{cf.}\,Figures~\ref{distance} and~\ref{speed}). As the magnetic structure propagates in the wake of the fast shock, we use the resulting DBM speed-time profile of the CME shock as input for the ambient solar wind speed. This further constrains the model input values. We obtain the best match by using the DBM parameters as given in Table~\ref{msDBM}. The initial speed of the magnetic structure is required to be close to the observed in-situ impact speed ($\approx$2000~km~s$^{-1}$). Similar to the CME shock, the deceleration of the magnetic structure is very small but required in order to match the arrival time measured in-situ. $C_{\rm D}$ has the typical value of unity and $\gamma$ turns out to be 0.15$\times$10$^{-7}$~km$^{-1}$, a reasonable value for massive CMEs. We conclude that for the magnetic structure the weak deceleration is produced by the high ambient solar wind speed, which actually resembles the CME shock speed.
\begin{table}
\caption{DBM and observational parameters for the shock of the CME. The parameters time ($t$), distance ($d$), and speed ($v$) as derived from observations constrain the DBM. Further DBM parameters are the ambient solar wind speed ($v_{\rm SW}$), the drag parameter ($\gamma$) as well as the dimensionless drag coefficient ($C_{\rm D}$). The given parameters show the best match found between model and observations. DBM$_{\rm out1}$ refers to a best match between model and in-situ observations for the parameters distance and speed, DBM$_{\rm out2}$ for time and speed.
}
\label{shockDBM}
\begin{tabular}{ccccccc}
\hline
& $t$ [UT] & $d$ [Rs] & $v$ [km s$^{-1}$] & $v_{\rm SW}$ [km s$^{-1}$] &$\gamma$ [10$^{-7}$~km$^{-1}$] & $C_{\rm D}$ \\
\hline
Remote obs & 04:25 & 27 & 1100--2900 & --- & --- & ---\\
DBM$_{\rm input}$ & 04:25 & 27 & 2300 & 450 & 0.01 & 0.5--0.8 \\
In-situ obs & 20:55 & 206 & 2250 & --- & --- & ---\\
DBM$_{\rm out1}$ & 19:45 & 206 & 2210 & --- & --- & ---\\
DBM$_{\rm out2}$ & 20:50 & 217 & 2200 & --- & --- & ---\\
\hline
\end{tabular}
\end{table}
\begin{table}
\caption{The same as Table~\ref{shockDBM} but for the magnetic structure of the CME. }
\label{msDBM}
\begin{tabular}{ccccccc}
\hline
& $t$ [UT] & $d$ [Rs] & $v$ [km s$^{-1}$] & $v_{\rm SW}$ [km s$^{-1}$] &$\gamma$ [10$^{-7}$~km$^{-1}$] & $C_{\rm D}$ \\
\hline
Remote obs & 04:25 & 23 & 750--2350 & --- & --- & --- \\
DBM$_{\rm input}$ & 04:25 & 23 & 2000 & $v_{\rm DBM(shock)}$ & 0.15 & 1 \\
In-situ obs & 22:55 & 206 & 1870 & --- & --- & --- \\
DBM$_{\rm out1}$ & 22:30 & 206 & 1910 & --- & --- & --- \\
DBM$_{\rm out2}$ & 22:55 & 211 & 1910 & --- & --- & --- \\
\hline
\end{tabular}
\end{table}
The question remains whether the extremely low $\gamma$ value applied for the CME shock is physically meaningful. We examine $\gamma$ for the CME shock and the magnetic structure by calculating it directly from the individual variables. We use the observed CME mass, the resulting CME geometry from the GCS model to obtain the cross-section, and the empirical relation by \cite{leblanc98} to derive the ambient solar wind density. For the CME shock the cross-section ($A_{\rm sh}$) is defined as the plane area of a spherical cap with the stand-off distance between the shock and the magnetic structure as height of the cap. The stand-off distance $s$ is assumed to be linearly related to the propagated distance $d$. From observations we derive $s\approx4~R_{\odot}$ at $d\approx30~R_{\odot}$ (\textit{cf.}\,Figure~\ref{distance}) and we make a linear extrapolation for the distance range up to 1~AU. Using simple geometrical relations we find $A_{\rm sh}=(2ds-s^2)\pi$. For the magnetic structure the cross-section ($A_{\rm ms}$) is an ellipse with the semi-minor and semi-major axes defined by the obtained GCS edge-on and face-on widths ($w_{\rm eo}, w_{\rm fo}$), expanding in a self-similar manner up to 1~AU. The area of this ellipse can be calculated as $A_{\rm ms}=\tan \left( \frac{w_{\rm eo}}{2} \right) \tan \left( \frac{w_{\rm fo}}{2} \right) d^2\pi$. For the dimensionless parameter $C_{\rm D}$ we use the model results, which are 0.5--0.8 for the shock and 1 for the magnetic structure. In Figure~\ref{gamma} we plot the derived $\gamma$ values, including the derived uncertainties for all the variables, against the solar wind proton density values normalized for the distance at 1~AU. From this we derive that a density of $\rho_{\rm sw}$\,=\,1--2~cm$^{-3}$ indeed yields for the shock a $\gamma$ as low as 0.01$\times$10$^{-7}$~km$^{-1}$. For the magnetic structure $\rho_{\rm sw}$\,=\,1--3~cm$^{-3}$ is required to derive $\gamma$ in the range of 0.15$\times$10$^{-7}$~km$^{-1}$.
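As a back-of-the-envelope companion to this estimate (our sketch only; it uses a fixed proton density at 1~AU instead of the full \cite{leblanc98} profile and neglects the helium contribution), one can evaluate $\gamma=C_{\rm D}A_{\rm sh}\rho_{\rm SW}/m_{\rm CME}$ for the shock directly:

```python
import math

R_SUN_KM = 6.957e5       # solar radius [km]
M_PROTON_G = 1.6726e-24  # proton mass [g]

def gamma_shock(n_cm3, m_g, c_d, d_rs=215.0):
    """gamma = C_D * A * rho / m in 1/km for the spherical-cap shock
    cross-section A = (2*d*s - s^2)*pi, with the stand-off distance
    s = (4/30)*d extrapolated linearly as described in the text."""
    d = d_rs * R_SUN_KM                      # heliocentric distance [km]
    s = (4.0 / 30.0) * d                     # stand-off distance [km]
    area = (2.0 * d * s - s**2) * math.pi    # cap base area [km^2]
    rho = n_cm3 * M_PROTON_G * 1e15          # proton mass density [g km^-3]
    return c_d * area * rho / m_g            # [1/km]

g_lo = gamma_shock(n_cm3=1.0, m_g=2.0e16, c_d=0.5)  # favourable limits
g_hi = gamma_shock(n_cm3=2.0, m_g=1.0e16, c_d=0.8)
print(f"gamma range: {g_lo:.2e} -- {g_hi:.2e} 1/km")
```

With the favourable limits (high mass, small $C_{\rm D}$, $n$\,=\,1~cm$^{-3}$) the estimate indeed drops below 0.01$\times$10$^{-7}$~km$^{-1}$, roughly consistent with Figure~\ref{gamma}.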
\begin{figure}
\centerline{\includegraphics[width=1\textwidth,clip=]{f8.pdf}}
\caption{Calculated $\gamma$ values for the CME shock (red area) and magnetic structure (grey area) versus different values for $\rho_{\rm sw}$. The dashed lines mark the $\gamma$ values as applied for the DBM. The given range for $\gamma$ covers uncertainties in the derived CME mass, cross-section area, and $C_{\rm D}$. Minimum values for $\gamma$ are obtained applying the upper limit in CME mass, the lower limit in CME size, and the lower limit for the $C_{\rm D}$. }
\label{gamma}
\end{figure}
\section{Discussion}\label{dis}
Using the DBM, we investigate the evolution and propagation of the fast CME of 23 July 2012 from the Sun up to STEREO-A. We apply the GCS model, assuming a spherical geometry for the shock as well as an idealized flux rope for the magnetic structure, and study their 3D kinematics. From this we obtain a maximum speed of 2580$\pm$280~km~s$^{-1}$ for the CME shock and of 2270$\pm$420~km~s$^{-1}$ for its magnetic structure. We note that these values are lower than the $\approx$3050$\pm$200~km~s$^{-1}$ derived by \cite{liu14}, who use a triangulation technique of time-elongation maps extracted from the ecliptic plane. Our study is based on GCS modeling, which reproduces the entire density envelope of the CME along its propagation direction; hence, the results are not restricted to structures propagating along the ecliptic. We note that deviations from self-similar expansion and the idealized geometry assumption might be caveats for the GCS results.
From the kinematics we derive the acceleration phase of the CME. Considering an acceleration duration of $\approx$30~min for the major acceleration peak of the CME, we estimate from different power-law relations\footnote{According to statistics, the CME acceleration duration and peak value of acceleration are closely related parameters which are inversely proportional.} maximum acceleration values in the range of 0.2--1.0~km~s$^{-2}$ \citep{zhang06,vrsnak07acc,bein11}. However, the acceleration value derived from observations is much higher (2.25$\pm$0.18~km~s$^{-2}$), showing the exceptional characteristics of this event. From this we may speculate that most probably it is the underlying magnetic reconnection process itself that is able to efficiently drive the CME to such high speeds at far distances from the Sun. Related to this we note that, in general, CMEs that reach high peak accelerations start at lower coronal heights \citep{bein11}, and we suggest that the low starting height of the CME might contribute to the strong magnetic field as measured in-situ.
Assuming that the further CME evolution is solely governed by the drag force owing to the ambient solar wind (speed and density), we use the DBM to simulate the short travel time from Sun to STEREO-A as well as the high impact speed. The CME shock propagation is successfully reproduced by using reasonable model input values supported by the observational results. We derive that the CME hardly decelerates in interplanetary space and find that it is not necessarily ultra-fast: the CME speed input values required by the model should not exceed 2300~km~s$^{-1}$ in order to reproduce the observed arrival time at STEREO-A. The ambient solar wind speed is found to be of average value, close to the slow solar wind speed, and might not play the key role for producing the short propagation time. The extremely low drag exerted on the CME is due to the applied $\gamma$ value, which has to be one order of magnitude lower than statistically derived for DBM shocks \citep{vrsnak14}. The very high mass of the CME under study is one key requirement for a low $\gamma$ value and the low drag \citep[see also][]{vrsnak10}. Most important, however, the ambient solar wind density should not exceed 1--2~cm$^{-3}$ in order to derive a $\gamma$ value as low as required for our study. With this we support the interpretation by \cite{liu14}, who pointed out the importance of the preconditioning of interplanetary space as a consequence of the prior CME from 19 July 2012. As shown in \cite{liu14}, the trailing part of the prior CME has a density as low as 1~cm$^{-3}$. We further conclude that a largely radial orientation of the interplanetary magnetic field, due to its stretching, may reduce the pile-up of solar wind and delay its replenishment. This may preserve the preferable conditions for a fast propagation of the CME launched 3.5~days later. In this respect we note that, owing to the low solar activity of cycle 24, the solar wind density is generally low.
For the propagation behavior of the CME magnetic structure we find that the weak drag exerted on the magnetic structure is due to the high ambient solar wind speed resembling its propagation in the wake of the fast shock.
\section{Conclusion}\label{con}
We identify the following key characteristics in the 23 July 2012 CME:
\begin{itemize}
\item high CME launch speed due to high flare energy release of long duration
\item very high CME mass due to high amount of filament plasma
\item extremely low ambient solar wind density due to previous CME and weak solar cycle
\end{itemize}
In conclusion, the extreme character of the 23 July 2012 eruption is first of all marked by the long-duration flare energy release. Due to an efficient magnetic reconnection process the CME could reach a very high speed that was sustained by the prolonged energy release. The underlying mechanism which is able to build up the energy in the source region, as well as its conversion into such high kinetic energy, is beyond the limit of information from observational data. We encourage modelers to provide further insight into the physics of source regions and reconnection processes related to such extreme events. The very high mass of the CME, related to the filament eruption, as well as the preconditioning of interplanetary space, in terms of reduced solar wind density, may have been the decisive factors for the low deceleration leading to the short transit time and high in-situ impact speed.
\begin{acks}
M.T. acknowledges the Austrian Science Fund (FWF): P20145-N16. N.V.N's work has been supported by NSF grant AGS-1259549, NASA AIA contract NNG04EA00C, and the NASA STEREO mission under NRL Contract No.\,N00173-02-C-2035. We appreciate the provision of PLASTIC data supported by NASA Grant NNX13AP52G. We thank Y.D.~Liu for valuable comments on the manuscript and J.G.\,Luhmann, as well as Y.\,Li, and B.J.\,Lynch for helpful discussions.
\end{acks}
\bibliographystyle{spr-mp-sola}
\section{Introduction}
Chebyshev polynomial interpolants provide powerful approximation properties, both in theory and as implemented in practice by the Chebfun software system~\cite{battles2004extension}. Chebfun uses spectral collocation to provide very accurate automatic solutions to differential equations~\cite{driscoll2008chebop}. The method is not fully adaptive, though, since the refinement is limited to the degree of the global interpolant.
Chebfun includes a \textit{splitting} method that creates piecewise polynomial approximations \cite{pachon2010piecewise}. When splitting is enabled, if a Chebyshev interpolant is unable to represent the function accurately at a specified maximum degree on an interval, the interval is bisected; this process is recursively repeated on the subintervals. Afterwards adjacent subintervals are merged if the new interval allows for a Chebyshev approximation with lower degree. In effect, the method does a binary search for a good splitting location. In \cite{driscoll2014optimal} it was shown that the splitting locations are roughly optimal based on the singularity structure of the function in the complex plane.
A drawback of Chebfun's splitting approach is that the resulting representation does not ensure anything more than $C^0$ continuity. Differentiation of the Chebyshev interpolation polynomial of degree $n$ has norm $O(n^2)$, so a jump in the derivative develops across a splitting point and becomes more pronounced for higher derivatives and larger $n$. In order to solve a boundary-value problem, Chebfun imposes explicit continuity conditions on the solution to augment the discrete problem. This approach works well in 1D but becomes cumbersome in higher dimensions, particularly if refinements are made nonconformingly.
In this paper we explore the use of Chebyshev interpolants on overlapping domains combined using a \emph{partition of unity}. The resulting approximation has the same accuracy as the individual piecewise interpolants. We use compactly supported weight functions that are infinitely differentiable, so the resulting combined interpolant is also infinitely smooth (though not analytic). We also show that the accuracy of the derivative can be bounded by $\Theta(\delta^{-2})$ for an overlap amount $\delta$, revealing an explicit tradeoff between efficiency (smaller overlap and more like Chebfun splitting) and global accuracy of the derivative. Because the global approximation is smooth, there are no matching conditions needed to solve a BVP, and there are standard preconditioners available that should aid with iterative methods for large discretizations. For example, since we split the interval into overlapping domains we could use the restricted additive Schwarz preconditioner \cite{doi:10.1137/S106482759732678X}.
We describe a recursive, adaptive algorithm for creating and applying a partition of unity, modeled on the recursive splitting in Chebfun but merging adjacent subdomains aggressively in order to keep the total node count low. Even though each node of the recursion only combines two adjacent subdomains, we show that the global approximant is also a partition of unity. We demonstrate that the adaptive refinement is able to resolve highly localized features of an explicitly given function and of a solution to a singularly perturbed BVP.
The use of a partition of unity in our approximation affords us some flexibility; we are able to create approximations which are both efficient and infinitely smooth without matching. Partition of unity schemes have been widely used for interpolation \cite{franke1980smooth,mclain1976two,shepard1968two} and for solving PDEs \cite{griebel2000particle,safdari2015radial}. In section~\ref{PUM_FORM_SEC} we introduce the partition of unity method, and we discuss the convergence of the method for a simple split on the interval $[-1,1]$ in section~\ref{converge_sec}. We describe our adaptive algorithm in section~\ref{PUM_recurse}. In section~\ref{PUM_BVP_SEC} we explain how to apply our method to solve boundary value problems on an interval and perform some experiments with singularly perturbed problems.
\section{Chebyshev interpolation}
\label{sec_cheb}
We use Chebyshev interpolants for our partition of unity method because they enjoy spectral convergence. Suppose that $f(x)$ is analytic inside a Bernstein ellipse $E_\rho$ (an ellipse with foci $\pm 1$ whose semi-major and semi-minor axis lengths sum to $\rho>1$). We then have Theorem 6 from \cite{trefethen2000spectral}:
\begin{theorem} Suppose $f(z)$ is analytic on and inside the Bernstein ellipse $E_\rho$. Let $p_n$ be the polynomial that interpolates $f(z)$ at $n+1$ Chebyshev points of the second kind. Then there exists a constant $C>0$ such that for all $n>0$,
$$ \left \| f(x)-p_n(x) \right \|_{\infty} \leq C \rho^{-n}.$$
\end{theorem}
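This geometric rate is easy to observe numerically. The sketch below (our illustration; NumPy's \texttt{chebinterpolate} samples at Chebyshev points of the first kind rather than the second, which does not change the rate) uses the Runge function $f(x)=1/(1+25x^2)$, for which $\rho=(1+\sqrt{26})/5\approx 1.22$:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)   # Runge function, rho ~ 1.22
xx = np.linspace(-1.0, 1.0, 2001)         # fine grid for the sup-norm error

errs = {}
for n in (40, 80, 120):
    coeffs = C.chebinterpolate(f, n)       # degree-n Chebyshev interpolant
    errs[n] = float(np.max(np.abs(f(xx) - C.chebval(xx, coeffs))))
    print(f"n = {n:3d}:  max error = {errs[n]:.2e}")
```

Each increase of $n$ by 40 reduces the error by roughly the factor $\rho^{-40}\approx 3\times10^{-4}$, as predicted.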
If $f(x)$ is Lipschitz continuous on $[-1,1]$ then
\begin{equation}
f(x) = \sum_{k=0}^\infty a_k T_k(x), \quad a_k = \frac{2}{\pi} \int_{-1}^1 \frac{f(x) T_k(x)}{\sqrt{1-x^2}} dx,
\end{equation}
where $T_k$ denotes the degree $k$ Chebyshev polynomial (and for $a_0$, we multiply by $\frac{1}{\pi}$ instead of $\frac{2}{\pi}$). Furthermore if $p_n(x)$ is the $n$th degree Chebyshev interpolant then
\begin{equation}
f(x)-p_n(x) = \sum_{k=n+1}^{\infty} a_k \left( T_k(x)-T_m(x)\right),
\end{equation}
where
\begin{equation}
m = \left | (k+n-1)\,(\text{mod }2n) - (n-1)\right |,
\end{equation}
implying we can determine the accuracy of the interpolant $p_n(x)$ by inspecting the Chebyshev coefficients \cite{Trefethen2013}. Chebfun's standardChop method determines the minimum required degree by searching for a plateau of low magnitude coefficients \cite{Aurentz:2017:CCS:3034774.2998442}. For example, Figure~\ref{Coeff_example} shows the first 128 coefficients of $f(x)=\exp \left( \sin \left( \pi x \right) \right)$. We see that all coefficients after the first 46 have magnitude less than $10^{-15}$. In this case, Chebfun determines the ideal degree to be 50.
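The plateau search can be mimicked in a few lines (a rough sketch of the idea only; Chebfun's \texttt{standardChop} is considerably more careful):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

f = lambda x: np.exp(np.sin(np.pi * x))
c = C.chebinterpolate(f, 127)          # first 128 Chebyshev coefficients

# last coefficient still above a crude plateau threshold
significant = np.nonzero(np.abs(c) > 1e-12)[0]
cutoff = int(significant[-1])
print(f"coefficients above 1e-12: indices 0..{cutoff}")
```

The coefficients beyond roughly index 40--50 sit at rounding-error level, matching the plateau visible in Figure~\ref{Coeff_example}.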
\begin{figure}[!htb]
\centering
\includegraphics[scale = 0.5]{coeff_plot.eps}
\caption{Chebyshev coefficients for $f(x)=\exp \left( \sin \left( \pi x \right) \right)$.}
\label{Coeff_example}
\end{figure}
\section{Partition of unity formalism}
\label{PUM_FORM_SEC}
Suppose we have an overlapping covering $\{ \Omega_k \}_{k=1}^N$ of a bounded region $\Omega$. A partition of unity is a collection of real-valued functions $\{w_k(x)\}_{k=1}^N$ such that:
\begin{itemize}
\item $w_k(x)$ has support within $\Omega_k$,
\item each $w_k(x)$ is nonnegative,
\item $\forall x \in \Omega, \quad \sum_{k=1}^N w_k(x)=1$.
\end{itemize}
The functions $\{w_k(x)\}_{k=1}^N$ are called the \textit{weights} of the partition. Suppose now that $\Omega=[-1,1]$ and each $\Omega_k$ is an interval. We can use the partition of unity $\{w_k(x)\}_{k=1}^N$ to construct an approximating function. Suppose that for $m \geq 0$ we have a function $f \in C^{m}([-1,1])$, each weight $w_k(x)\in C^{m}([-1,1])$ and for each patch $\Omega_k$ we have an approximation $s_k(x)$ of $f(x)$. Then the function
\begin{equation}
\label{POUAPPROX}
s(x) = \sum_{k=1}^N w_k(x)s_k(x)
\end{equation}
can be used to approximate $f(x)$ and its derivatives \cite{wendland2004scattered}.
\begin{theorem}
\label{PUMCON}
Suppose $f \in C^{m}([-1,1])$ and for each patch $\Omega_k$ we have a function $s_k(x)$ such that
$$ \|f^{(\alpha)}(x)-s_k^{(\alpha)}(x)\|_{L_{\infty}(\Omega_k)} \leq \varepsilon_k(\alpha) $$
for all $\alpha \leq m$. Then for $j\leq m$, if $s(x)$ is the approximation (\ref{POUAPPROX}),
\begin{equation}
\left \|f^{(j)}(x)- s^{(j)}(x) \right \|_{L_{\infty}([-1,1])} \leq \sum_{k=1}^N\sum_{i=0}^j \binom{j}{i} \left \| w_k^{(j-i)}(x) \right \|_{L_{\infty}(\Omega_k)} \varepsilon_k(i).
\end{equation}
\end{theorem}
\begin{proof}
Since $\sum_{k=1}^N w_k(x)=1$, we have $\sum_{k=1}^N w_k(x)f(x)=f(x)$. Thus
\begin{equation}
\begin{aligned}
\frac{d^{j}}{d x^j}f(x)-\frac{d^{j}}{d x^j} \sum_{k=1}^N w_k(x)s_k(x) &= \frac{d^{j}}{d x^j} \sum_{k=1}^N w_k(x)(f(x)-s_k(x)) \\
&= \sum_{k=1}^N\sum_{i=0}^j \binom{j}{i} w_k^{(j-i)}(x) \left( f^{(i)}(x)-s_k^{(i)}(x) \right).
\end{aligned}
\end{equation}
The result follows from here by the triangle inequality.
\end{proof}
\section{Convergence analysis}
\label{converge_sec}
In this section we consider a single interval partitioned into two overlapping parts, i.e.
$[-1,t]$,$[-t,1]$, where $t$ is the overlap parameter such that $0<t<1$. For the weights, we use Shepard's method \cite{shepard1968two} based on the compactly supported, infinitely differentiable shape function
\begin{align}
\psi(x) = \begin{cases}
\exp \left( 1 - \frac{1}{1-x^2}\right) & |x| < 1, \\
0 & |x| \geq 1.
\end{cases}
\end{align}
We define support functions
\begin{align}
\psi_{\ell}(x) = \psi \left( \frac{x+1}{1+t} \right) \quad \text{ and } \quad \psi_{r}(x) = \psi \left( \frac{x-1}{1+t} \right) ,
\end{align}
to construct the PU weight functions
\begin{align}
w_{\ell}(x) = \frac{\psi_{\ell}(x)}{\psi_{\ell}(x)+\psi_{r}(x)} \quad \text{and} \quad w_{r}(x) = \frac{\psi_{r}(x)}{\psi_{\ell}(x)+\psi_{r}(x)},
\label{PUW}
\end{align}
where $w_{\ell}(x),w_{r}(x)$ have support on the left and right intervals, respectively.
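These definitions translate directly into code (a sketch; the overlap parameter $t=0.2$ and the evaluation grid are arbitrary choices for illustration):

```python
import numpy as np

def psi(x):
    """C-infinity bump: exp(1 - 1/(1 - x^2)) on (-1, 1), zero elsewhere."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    inside = np.abs(x) < 1.0
    out[inside] = np.exp(1.0 - 1.0 / (1.0 - x[inside] ** 2))
    return out

def pu_weights(x, t):
    """Shepard weights w_l, w_r for the overlapping cover [-1,t], [-t,1]."""
    pl = psi((x + 1.0) / (1.0 + t))
    pr = psi((x - 1.0) / (1.0 + t))
    return pl / (pl + pr), pr / (pl + pr)

t = 0.2
x = np.linspace(-1.0, 1.0, 1001)
wl, wr = pu_weights(x, t)
print("max |w_l + w_r - 1| =", np.max(np.abs(wl + wr - 1.0)))
```

The weights sum to one to machine precision, and each vanishes identically outside its patch, so a blended approximant inherits the smoothness of $\psi$.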
Suppose that $s_{\ell}(x),s_{r}(x)$ approximate $f(x)$ on $[-1,t]$, $[-t,1]$ respectively and are both infinitely smooth. Let
\begin{equation}
s(x) = w_{\ell}(x)s_{\ell}(x)+w_{r}(x)s_{r}(x),
\label{PUM2}
\end{equation}
where $s(x)$ is the partition of unity approximation. Following Theorem~\ref{PUMCON} we have for $x \in [-1,1]$ that
\begin{equation}
\begin{aligned}
\left | f(x)-s(x) \right | &= \left | w_{\ell}(x) \left( f(x) - s_{\ell}(x) \right) + w_{r}(x) \left( f(x) - s_{r}(x) \right) \right | \\
&\leq w_{\ell}(x) \left | f(x) - s_{\ell}(x) \right | + w_{r}(x) \left | f(x) - s_{r}(x) \right |.
\end{aligned}
\end{equation}
We conclude that
\begin{align}
\left \| f(x)-s(x) \right \|_{L_{\infty}[-1,1]} \leq \max \left( \left \| f(x)-s_{\ell}(x) \right \|_{L_{\infty}[-1,t]} , \left \| f(x)-s_{r}(x) \right \|_{L_{\infty}[-t,1]} \right).
\label{POU_UP}
\end{align}
This implies that the partition of unity method (PUM) preserves the accuracy of its local approximants. We also have that $s(x)$ is infinitely smooth. For the first derivative we have
\begin{equation}
\begin{aligned}
\left | f'(x)-s'(x) \right | &\leq \left | w_{\ell}(x) \left( f'(x)-s_{\ell}'(x) \right) \right |+\left | w_{r}(x) \left( f'(x)-s_{r}'(x) \right) \right | \\
&+\left | w_{\ell}'(x) \left( f(x)-s_{\ell}(x) \right) \right | +\left | w_{r}'(x) \left( f(x)-s_{r}(x) \right) \right |,
\end{aligned}
\end{equation}
giving us
\begin{equation}
\begin{aligned}
\left \| f'(x)-s'(x) \right \|_{L_{\infty}[-1,1]} &\leq \max \left( \left \| f'(x)-s_{\ell}'(x) \right \|_{L_{\infty}[-1,t]} , \left \| f'(x)-s_{r}'(x) \right \|_{L_{\infty}[-t,1]} \right) \\
&+\left \| w_{\ell}'(x) \right \|_{L_{\infty}[-t,t]} \left \| f(x)-s_{\ell}(x)\right \|_{L_{\infty}[-t,t]}\\
&+ \left \| w_{r}'(x) \right \|_{L_{\infty}[-t,t]} \left \| f(x)-s_{r}(x)\right \|_{L_{\infty}[-t,t]},
\end{aligned}
\label{diff_error}
\end{equation}
since the derivatives of the weights have support only on the overlap. For $t \ll 1$, the weights steepen to become nearly step functions. This causes the derivatives of the weights to be large in magnitude, resulting in an increase in the error for the derivative.
Since $w_{\ell}'(x)=-w_{r}'(x)$, from (\ref{diff_error}) we can infer
\begin{equation}
\begin{aligned}
\left \| f'(x)-s'(x) \right \|_{L_{\infty}[-1,1]} \leq \max \left( \left \| f'(x)-s_{\ell}'(x) \right \|_{L_{\infty}[-1,t]} , \left \| f'(x)-s_{r}'(x) \right \|_{L_{\infty}[-t,1]} \right)& \\
+\left \| w_{\ell}'(x) \right \|_{L_{\infty}[-t,t]} \max \left( \left \| f(x)-s_{\ell}(x)\right \|_{L_{\infty}[-t,t]}
, \left \| f(x)-s_{r}(x)\right \|_{L_{\infty}[-t,t]} \right). &
\end{aligned}
\label{diff_error_2}
\end{equation}
We have that $x=0$ is a critical point of $w_{\ell}'(x)$ and for $t<0.4$ it can be shown that the maximum of $\left |w_{\ell}'(x) \right |$ occurs at $x=0$. Since
\begin{equation}
w_{\ell}'(0) = -\frac{(1+t)^2}{t^2 (2+t)^2},
\end{equation}
we can infer that $\left \| w_{\ell}'(x) \right \|_{L_{\infty}[-t,t]} = \Theta(t^{-2})$ as $t\to 0$. The norm of the Chebyshev differentiation operator is $\Theta(n^2)$ (for $n$ nodes), implying that the two terms on the right-hand side of (\ref{diff_error_2}) are balanced if $t^{-2}=\Theta(n^2)$, or equivalently $t = \Theta \left( \frac{1}{n} \right)$. A simple example of a split can be seen in Figure~\ref{ARCTAN}.
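The closed form for $w_{\ell}'(0)$ and the $\Theta(t^{-2})$ growth are easy to check numerically (a sketch under our own naming, using a central difference):

```python
import math

def psi(x):
    # Compactly supported bump function from the text.
    return math.exp(1.0 - 1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

def w_left(x, t):
    # Left PU weight for the overlapping split [-1, t], [-t, 1].
    pl = psi((x + 1.0) / (1.0 + t))
    pr = psi((x - 1.0) / (1.0 + t))
    return pl / (pl + pr)

def w_left_prime_zero(t):
    # Closed form from the text: w_l'(0) = -(1 + t)^2 / (t^2 (2 + t)^2).
    return -(1.0 + t) ** 2 / (t ** 2 * (2.0 + t) ** 2)
```

For $t=0.1$ the closed form gives $w_{\ell}'(0)\approx -27.44$, and halving $t$ roughly quadruples the magnitude, consistent with $\Theta(t^{-2})$.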
\begin{figure}[!htb]
\centering
\includegraphics[scale = 0.5]{PUTANEXAMP.eps}
\caption{Plot of the PU approximation with overlap parameter $t=0.1$ for $f(x)=\arctan \left( x/0.1 \right)$, where the thick lines represent the domains of the left and right approximation. Here $\left \| f(x)-s(x) \right \|_{L_{\infty}[-1,1]}= 2.4 \mathrm{e}{-15}$ and $\left \| f'(x)-s'(x) \right \|_{L_{\infty}[-1,1]}= 1.7 \mathrm{e}{-13}$.}
\label{ARCTAN}
\end{figure}
\section{Recursive algorithm}
\label{PUM_recurse}
In order to allow for adaptation to specific features of $f(x)$, we next describe a recursive bisection algorithm that works similarly to Chebfun's splitting algorithm \cite{driscoll2014optimal} and to that of \cite{tobor2006reconstructing}. Suppose we want to construct a PU approximation $s_{[a,b]}(x)$ on the interval $[a,b]$ using Chebyshev interpolants on the patches. If $f(x)$ can be resolved by a Chebyshev interpolant $s(x)$ of degree less than $n_{\text{max}}$ on $[a,b]$ then
\begin{align}
s_{[a,b]}(x) = s(x).
\end{align}
Otherwise we split the interval into two overlapping domains and blend the results as in (\ref{PUM2}):
\begin{align}
s_{[a,b]}(x) = w_{\ell}(x)
s_{\left [a,a+\delta \right ]}(x)+ w_{r}(x) s_{\left [b-\delta,b\right ]}(x),
\end{align}
where $w_{\ell},w_{r}$ are the PU weight functions defined in (\ref{PUW}) (but defined for $[a,b]$, with $\delta= (1+t) \left( \frac{b-a}{2} \right)$).
We define a binary tree $T$ with each node $\nu$ having the following properties:
\begin{itemize}
\item interval($\nu$):=the domain of the patch
\item \child{0}($\nu$),\child{1}($\nu$):=respective left and right subtrees of $\nu$ (if split)
\item \weight{0}($\nu$),\weight{1}($\nu$):=respective left and right weights of $\nu$ (if split)
\item interpolant($\nu$):=Chebyshev interpolant on interval($\nu$) if $\nu$ is a leaf
\item values($\nu$):=values of the function we are approximating at the Chebyshev points of $\nu$.
\end{itemize}
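In Python, a node with these properties might look like the following (a hypothetical mirror of the paper's data structure; the field names are ours):

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional, Tuple

@dataclass
class Node:
    interval: Tuple[float, float]                          # interval(nu)
    children: List["Node"] = field(default_factory=list)   # child0, child1 (if split)
    weights: List[Callable] = field(default_factory=list)  # weight0, weight1 (if split)
    interpolant: Optional[Callable] = None                 # set only when nu is a leaf
    values: Optional[List[float]] = None                   # samples at Chebyshev points

    def is_leaf(self) -> bool:
        return not self.children
```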
We define root($T$) as the root node of $T$. In Algorithm~\ref{alg2} we formally describe how we refine our splitting; the merge method is described in section~\ref{Merging_sec}.
\begin{algorithm}[!h]
\caption{splitleaves($\nu$,$n_{\text{max}}$,$t$)}
\label{alg2}
\begin{algorithmic}
\IF{$\nu$ is a leaf and $f(x)$ cannot be resolved by interpolant($\nu$)}
\STATE Define new nodes $\nu_0$, $\nu_1$.
\STATE $[a,b]$:=interval($\nu$)
\STATE $\delta:= \frac{b-a}{2} \left( 1+t \right)$
\STATE interval($\nu_0$):= $[a,a+\delta]$
\STATE interval($\nu_1$):= $[b-\delta,b]$
\FOR{$k=0,1$}
\STATE \child{k}($\nu$) := $\nu_k$
\ENDFOR
\STATE \weight{0}($\nu$),\weight{1}($\nu$):= weights in (\ref{PUW}) defined for $[a,a+\delta]$,$[b-\delta,b]$
\ELSIF{$\nu$ is a leaf and $f(x)$ can be resolved by a Chebyshev interpolant with degree less than $n_{\text{max}}$}
\STATE interpolant($\nu$):=minimum-degree interpolant that resolves $f(x)$, \par\qquad\ \enspace
as determined by Chebfun
\ELSE
\FOR{$k=0,1$}
\STATE splitleaves(\child{k}($\nu$),$n_{\text{max}}$,$t$)
\ENDFOR
\STATE merge($\nu$,$n_{\text{max}}$)
\ENDIF
\end{algorithmic}
\end{algorithm}
We first initialize the tree $T$ with a single node $\nu$ where interval($\nu$)=$[a,b]$. Next we repeatedly call the splitleaves method until each leaf of $T$ has a Chebyshev interpolant that can resolve $f(x)$ with degree less than $n_{\text{max}}$, as seen in Algorithm~\ref{alg6}. For each leaf $\nu$ of $T$, sample($T$,$f(x)$) sets values($\nu$) using $f(x)$. For a leaf $\nu$, we determine whether a Chebyshev interpolant can resolve $f(x)$ using Chebfun's standardChop method with values($\nu$) (as described in Section~\ref{sec_cheb}). Using $T$ we can evaluate $s_{[a,b]}(x)$ recursively as demonstrated in Algorithm~\ref{alg3}.
\begin{algorithm}[!h]
\caption{$T$=refine($n_{\text{max}}$,$t$,$f(x)$)}
\label{alg6}
\begin{algorithmic}
\STATE Define $T$ as a tree with a single node.
\WHILE{$T$ has unresolved leaves}
\STATE sample($T$,$f(x)$)
\STATE splitleaves(root($T$),$n_{\text{max}}$,$t$)
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[!h]
\caption{v=eval($\nu$,$x$)}
\label{alg3}
\begin{algorithmic}
\IF{$\nu$ is a leaf}
\STATE $p$:=interpolant($\nu$)
\STATE v:= $p(x)$
\ELSE
\STATE $v_0,v_1$:=0
\STATE $w_0$:=\weight{0}($\nu$)
\STATE $w_1$:=\weight{1}($\nu$)
\FOR{$k=0,1$}
\IF{$x \in$ interval(\child{k}($\nu$))}
\STATE $v_k$:=eval(\child{k}($\nu$),$x$)
\ENDIF
\ENDFOR
\STATE v := $w_0(x)v_0 + w_1(x)v_1$
\ENDIF
\end{algorithmic}
\end{algorithm}
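The splitting and evaluation logic of Algorithms~\ref{alg2} and~\ref{alg3} can be condensed into a short Python sketch. This is not the paper's implementation: it uses NumPy's Chebyshev class with a crude sampling-based resolution test instead of Chebfun's standardChop, and plain tuples stand in for tree nodes.

```python
import math
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

def psi(x):
    return math.exp(1.0 - 1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

def pu_weights(x, a, b, t):
    # Weights for the overlapping split of [a, b], mapped from [-1, 1].
    xr = 2.0 * (x - a) / (b - a) - 1.0
    pl = psi((xr + 1.0) / (1.0 + t))
    pr = psi((xr - 1.0) / (1.0 + t))
    return pl / (pl + pr), pr / (pl + pr)

def build(f, a, b, n_max, t, tol=1e-10):
    # Accept a degree-n_max Chebyshev interpolant if it resolves f on [a, b];
    # otherwise split into two overlapping children and recurse.
    p = Chebyshev.interpolate(f, n_max, domain=[a, b])
    xs = np.linspace(a, b, 513)
    if np.max(np.abs(p(xs) - f(xs))) < tol:
        return ("leaf", a, b, p)
    delta = 0.5 * (b - a) * (1.0 + t)
    return ("split", a, b, t,
            build(f, a, a + delta, n_max, t, tol),
            build(f, b - delta, b, n_max, t, tol))

def evaluate(node, x):
    # Recursive evaluation: blend the children with the PU weights.
    if node[0] == "leaf":
        return node[3](x)
    _, a, b, t, left, right = node
    wl, wr = pu_weights(x, a, b, t)
    v = 0.0
    if left[1] <= x <= left[2]:
        v += wl * evaluate(left, x)
    if right[1] <= x <= right[2]:
        v += wr * evaluate(right, x)
    return v
```

With a steep function such as $\arctan((x-0.25)/0.01)$, this resolves the function with a handful of leaves, while a single global interpolant would need far more points.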
As a simple example, we approximate the function $f(x)=\arctan \left( \frac{x-0.25}{0.001} \right)$ with $n_{\text{max}}=128$. In order to resolve to machine precision, a global Chebyshev interpolant on the interval $[-1,1]$ requires 25743 nodes while our method requires 523. Chebfun with non-overlapping splitting requires 381 nodes. Overlapping splittings will typically require more total nodes while offering the benefit of global smoothness. The result can be seen in Figure~\ref{ARCTAN2}.
\begin{figure}[!htb]
\centering
\includegraphics[scale = 0.5]{non_merging_example.eps}
\caption{Plot of the partition of unity approximation with overlap parameter $t=0.1$ for $f(x)=\arctan \left( (x-0.25)/0.001 \right)$, where the solid blue lines represent the patches.}
\label{ARCTAN2}
\end{figure}
We can deduce from (\ref{POU_UP}) that $s_{[a,b]}(x)$ will approximate $f(x)$. Moreover, our method implicitly creates a PU on the leaves of the tree through the product of the weights at each level.
\begin{theorem}
Let an approximation $s_{[a,b]}(x)$ be as in (\ref{PUM2}). Then the tree that represents $s_{[a,b]}(x)$ implicitly defines a PU $\{w_k(x)\}_{k=1}^M$, where $w_k(x)$ has compact support over the $k$th leaf.
\end{theorem}
\begin{proof}
Suppose that on the domain $[a,b]$ we have PU's $\{w_{\ell k}(x)\}_{k=1}^{M_\ell}$, $\{w_{rk}(x)\}_{k=1}^{M_r}$ for the leaves of the left and right child respectively. We claim that
\begin{equation}
\{w_{\ell}(x) w_{\ell k}(x)\}_{k=1}^{M_\ell} \cup \{w_r(x) w_{r k}(x)\}_{k=1}^{M_r}
\label{UNPU}
\end{equation}
forms a PU over the leaves of the tree. We first observe that $w_{\ell}(x)w_{\ell k}(x)$ will have support in $\supp \left( w_{\ell k}(x) \right)$, the domain of the respective leaf. This is similarly true for $w_{r}(x)w_{r k}(x)$.
Next suppose that $x \in \supp \left( w_{\ell}(x)\right) \cap \, \supp \left( w_{r}(x)\right)^C$. Then $w_{\ell}(x)=1$ and $w_r(x)=0$, so
\begin{equation}
\sum_{k=1}^{M_\ell} w_{\ell}(x) w_{\ell k}(x)+ \sum_{k=1}^{M_r} w_{r}(x) w_{r k}(x) = \sum_{k=1}^{M_\ell} w_{\ell k}(x) = 1,
\end{equation}
since $\{w_{\ell k}(x)\}_{k=1}^{M_\ell}$ is a PU. This is similarly true if $x \in \supp \left( w_{\ell}(x)\right)^C \cap \supp \left( w_{r}(x)\right)$. Finally if $x \in \supp \left( w_{\ell}(x)\right) \cap \supp \left( w_{r}(x)\right)$ then
\begin{equation}
\begin{aligned}
\sum_{k=1}^{M_\ell} w_{\ell}(x) w_{\ell k}(x)+ \sum_{k=1}^{M_r} w_{r}(x) w_{r k}(x) &= w_{\ell}(x) \sum_{k=1}^{M_\ell} w_{\ell k}(x)+ w_{r}(x) \sum_{k=1}^{M_r} w_{r k}(x) \\
&= w_{\ell}(x)+w_{r}(x)=1.
\end{aligned}
\end{equation}
Thus by induction, we have that the product of weights through the binary tree for (\ref{PUM2}) implicitly creates a PU over the leaves.
\end{proof}
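The induction can be checked numerically on a small tree: split $[-1,1]$, then split the left child again, and verify that the three products of weights sum to one everywhere (a sketch with our own helper names):

```python
import math

def psi(x):
    return math.exp(1.0 - 1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

def w_pair(x, a, b, t):
    # Left/right PU weights for the overlapping split of [a, b].
    xr = 2.0 * (x - a) / (b - a) - 1.0
    pl = psi((xr + 1.0) / (1.0 + t))
    pr = psi((xr - 1.0) / (1.0 + t))
    return pl / (pl + pr), pr / (pl + pr)

def leaf_weights(x, t=0.1):
    # Two-level tree: leaves are left-left, left-right, and right.
    wl, wr = w_pair(x, -1.0, 1.0, t)
    a0, b0 = -1.0, -1.0 + (1.0 + t)   # left child of [-1, 1]
    out = [0.0, 0.0, wr]
    if a0 <= x <= b0:
        wll, wlr = w_pair(x, a0, b0, t)
        out[0], out[1] = wl * wll, wl * wlr
    return out
```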
\subsection{Merging}
\label{Merging_sec}
As we create the tree we opportunistically merge leaves for greater efficiency. If a particular location in the interval requires a great deal of refinement, the recursive splitting essentially performs a binary search for that location (as was noted about Chebfun splitting in \cite{driscoll2014optimal}). The intermediate splits are not necessarily aiding with resolving the function; they are there just to keep the binary tree full. In Chebfun the recursive splitting phase is followed by a merging phase that discards counterproductive splits. We describe a similar merging operation here, but we allow these merges to take place whenever a leaf splits while its sibling does not, in order to keep the number of leaves from unnecessarily growing exponentially.
\begin{figure}[!htb]
\centering
\adjustbox{valign=t}{
\subfloat[Tree before merging. Here ${a_2<b_1}$ and ${a_{22}<b_{21}}$.]{
\begin{forest}
for tree={circle,draw, l sep=20pt,,scale=0.98}
[ {$\begin{array}{c}
[a,b] \\
s_{[a,b]}(x)
\end{array}$}
[ {$\begin{array}{c}
[a,b_1] \\
s_{[a,b_{1}]}(x) \\
w_{\ell_1}(x)
\end{array}$},name=left_p]
[ {{$\begin{array}{c}
[a_2,b] \\
s_{[a_{2},b]}(x) \\
w_{r_1}(x)
\end{array}$}}
[{$\begin{array}{c}
[a_2,b_{21}] \\
s_{[a_2,b_{21}]}(x) \\
w_{\ell_2}(x)
\end{array}$},name=right_p]
[{$\begin{array}{c}
[a_{22},b] \\
s_{[a_{22},b]}(x) \\
w_{r_2}(x)
\end{array}$}]
]
]
]
\draw[<->,dotted,thick] (right_p) to[out=north west,in=south] (left_p);
\end{forest}
\label{TRM_1}
}}\hfill
\adjustbox{valign=t}{
\subfloat[Tree after merging.]{
\begin{forest}
for tree={circle,draw, l sep=20pt,scale=0.98}
[ {$\begin{array}{c}
[a,b]\\
s_{[a,b]}(x)
\end{array}$}
[ {$\begin{array}{c}
[a,b_{21}] \\
s_{[a,b_{21}]}(x) \\
\hat{w}_{\ell_1}(x)
\end{array}$} ]
[ {$\begin{array}{c}
[a_{22},b] \\
s_{[a_{22},b] }(x) \\
\hat{w}_{r_1}(x)
\end{array}$} ]
]
]
\end{forest}
\label{TRM_2}
}}
\caption{An example of how leaves are merged, where each node is labeled with its domain, PU approximation and weight.}
\label{Tree_merge}
\end{figure}
In Figure~\ref{Tree_merge} we illustrate how we merge leaves; the interval $[a,b_{1}]$ is merged with $[a_2,b_{21}]$. Here we decide to merge if $f(x)$ can be resolved with an interpolant with degree less than $n_{\text{max}}$ on the interval $[a,b_{21}]$. For the new tree we define the left weight $\hat{w}_{\ell_1}(x)$ in Figure~\ref{Tree_merge} as
\begin{align}
\hat{w}_{\ell_1}(x) = \begin{cases}
1 & x<a_{22}, \\
w_{\ell_2}(x) & \text{ otherwise.}
\end{cases}
\label{pwe}
\end{align}
Since $w_{\ell_2}(x)=1$ for $x<a_{22}$, $\hat{w}_{\ell_1}(x)$ is smooth. For the right weight we use $\hat{w}_{r_1}(x) = w_{r_2}(x)$; these new weights form a PU. The PU approximation
\begin{align}
\hat{s}(x)=w_{\ell 1}(x) s_{[a,b_{1}]}(x)+w_{r 1}(x) s_{[a_2,b_{21}]}(x)
\end{align}
can be used to approximate $f(x)$ on $[a,b_{21}]$ since $f(x)$ is resolved at the leaves. In this case $s_{[a,b_{21}]}(x)$ is computed by sampling $\hat{s}(x)$. If the degree of $s_{[a,b_{21}]}(x)$ after Chebfun's chopping is less than $n_{\text{max}}$, we decide to merge. We explain the merging in more detail in Algorithm~\ref{alg5}; here extend($w(x)$,$[a,b]$) piecewise extends the weight $w(x)$ to $[a,b]$ as in (\ref{pwe}). We show the results of merging in Figure~\ref{MERGE_EXAMPLE} with $f(x)=\frac{1}{x-1.0005}$.
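The piecewise extension in (\ref{pwe}) can be sketched as follows (the interval values below are hypothetical; extend produces the weight that is identically one to the left of the overlap):

```python
import math

def psi(x):
    return math.exp(1.0 - 1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

def w_pair(x, a, b, t):
    # Left/right PU weights for the overlapping split of [a, b].
    xr = 2.0 * (x - a) / (b - a) - 1.0
    pl = psi((xr + 1.0) / (1.0 + t))
    pr = psi((xr - 1.0) / (1.0 + t))
    return pl / (pl + pr), pr / (pl + pr)

t = 0.1
a2, b = 0.0, 1.0                 # hypothetical right-child interval [a2, b]
d = 0.5 * (b - a2) * (1.0 + t)
b21, a22 = a2 + d, b - d         # its children: [a2, b21] and [a22, b]

def extend(w, a22):
    # Piecewise extension as in the text: 1 left of a22, w(x) otherwise.
    return lambda x: 1.0 if x < a22 else w(x)

w_hat_left = extend(lambda x: w_pair(x, a2, b, t)[0], a22)
w_hat_right = lambda x: w_pair(x, a2, b, t)[1] if x >= a2 else 0.0
```

Since the inner left weight equals one at $a_{22}$, the extended weight is continuous there, and the extended pair is still a partition of unity on the merged domain.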
\begin{algorithm}[!h]
\caption{merge($\nu$,$n_{\text{max}}$)}
\label{alg5}
\begin{algorithmic}
\IF{\child{0}($\nu$) and \child{0}(\child{1}($\nu$)) (child and grandchild of $\nu$) are leaves and $f(x)$ can be resolved on both of their intervals}
\STATE Define a new leaf $\nu_0$
\STATE $p_0(x)$:=interpolant(\child{0}($\nu$))
\STATE $p_1(x)$:=interpolant(\child{1}(\child{0}($\nu$)))
\STATE $w_0(x)$:= \weight{0}($\nu$)
\STATE $w_1(x)$:= \weight{1}($\nu$)
\STATE $\hat{s}(x)$:=$w_0(x)p_0(x)+w_1(x)p_1(x)$
\IF{$\hat{s}(x)$ can be resolved by a Chebyshev interpolant $p(x)$ with degree less than $n_{\text{max}}$}
\STATE interval($\nu_0$):=interval(\child{0}($\nu$))$\cup$interval(\child{0}(\child{1}($\nu$)))
\STATE interpolant($\nu_0$):=$p(x)$
\STATE points($\nu_0$):=Chebyshev grid of length deg($p(x)$) on interval($\nu_0$)
\STATE $\hat{w}_0(x)$:= \weight{0}(\child{1}($\nu$))
\STATE $\hat{w}_1(x)$:= \weight{1}(\child{1}($\nu$))
\STATE \weight{0}($\nu$):= extend($\hat{w}_0(x)$,interval($\nu_0$))
\STATE \weight{1}($\nu$):= $\hat{w}_1(x)$
\STATE \child{0}($\nu$):= $\nu_0$
\STATE \child{1}($\nu$):= \child{1}(\child{1}($\nu$))
\ENDIF
\ELSIF{\child{1}($\nu$) is a leaf and \child{1}(\child{0}($\nu$)) is a leaf (and exists)}
\STATE apply the steps above with the roles of the children (indices 0 and 1) swapped
\ENDIF
\end{algorithmic}
\end{algorithm}
\begin{figure}[!htb]
\centering
\subfloat[Tree before merging.]{
\includegraphics[scale = 0.35]{PUBEFORE_MERGE.eps}
\label{MERGEB}
}
\subfloat[Tree after merging.]{
\includegraphics[scale = 0.35]{PUAFTER_MERGE.eps}
\label{MERGEA}
}
\caption{An example of how the PUM with $t=0.08$, $n_{\text{max}}=128$ resolves $f(x)=\frac{1}{x-1.0005}$ without merging (a) and after merging (b).
}
\label{MERGE_EXAMPLE}
\end{figure}
\subsection{Differentiation matrices}
\label{PUM_matrix}
Next we demonstrate how to construct a first derivative matrix; higher derivative matrices can be similarly constructed. Suppose we have constructed a splitting represented with the tree $T$. For each node $\nu$ of the tree, we add the following methods:
\begin{itemize}
\item points($\nu$):= provides the Chebyshev points of the leaves of $\nu$
\item leafpoints($\nu$):= provides the Chebyshev points of $T$ in interval($\nu$) i.e. \newline $\text{points(root($T$))} \cap \text{interval($\nu$)}$
\item pointindex($\nu$):=gives the index of points($\nu$) with respect to the points of the parent of $\nu$ (if $\nu$ is a child)
\item leafpointindex($\nu$):=gives the index of leafpoints($\nu$) with respect to the leafpoints of the parent of $\nu$ (if $\nu$ is a child).
\end{itemize}
Let $[\alpha,\beta]=\text{interval($\nu$)}$. We want to construct matrices $M,D$ such that
\begin{equation}
\begin{aligned}
M \left . f(x) \right |_{\text{points($\nu$)}} &= \left . s_{[\alpha,\beta]}(x) \right |_{\text{leafpoints($\nu$)}}, \\
D \left . f(x) \right |_{\text{points($\nu$)}} &= \left . \frac{d}{dx} s_{[\alpha,\beta]}(x) \right |_{\text{leafpoints($\nu$)}}.
\end{aligned}
\end{equation}
Let $I_k = \text{interval(\child{k}($\nu$))}$, $w_k(x)=\text{\weight{k}($\nu$)}$ for $k=0,1$. Then
\begin{equation}
\begin{aligned}
\left . s_{[\alpha,\beta]}(x) \right |_{\text{leafpoints($\nu$)}} &= \sum_{k=0}^1 \left . w_k(x) s_{I_k}(x) \right |_{\text{leafpoints($\nu$)}}, \\
\left . \frac{d}{dx} s_{[\alpha,\beta]}(x) \right |_{\text{leafpoints($\nu$)}} &= \sum_{k=0}^1 \left( \left . w_k(x) \frac{d}{dx} s_{I_k}(x) + \frac{d}{dx} w_k(x) s_{I_k}(x) \right) \right |_{\text{leafpoints($\nu$)}} .
\end{aligned}
\label{sum_eval}
\end{equation}
Thus we can recursively build up the differentiation matrix through the tree $T$. Due to the support of the weights, for each term in (\ref{sum_eval}) we only need to evaluate the approximation $s_{I_k}(x)$ (or its derivative) at $\text{leafpoints($\nu$)} \cap I_k$, i.e. leafpoints(\child{k}($\nu$)). We describe how to construct the differentiation matrix recursively in Algorithm~\ref{alg4}, using MATLAB notation for matrices. At each leaf the interpolation matrix $M$ has entries given by the barycentric interpolation formula based on second-kind Chebyshev points, as produced by the Chebfun command {\tt barymat} \cite{driscoll2015rectangular}.
\begin{algorithm}
\caption{[$M,D$]=diffmatrix($\nu$)}
\label{alg4}
\begin{algorithmic}
\IF{$\nu$ is a leaf}
\STATE $M$:= the Chebyshev barycentric matrix from points($\nu$) to leafpoints($\nu$)
\STATE $D_x$:= Chebyshev differentiation matrix with grid points($\nu$).
\STATE $D$:=$M D_x$.
\ELSE
\STATE $M,D$:=zeros(length(leafpoints($\nu$)),length(points($\nu$)))
\FOR{$k=0,1$}
\STATE [$M_k$,$D_k$]:= diffmatrix(\child{k}($\nu$))
\STATE $M$(leafpointindex(\child{k}($\nu$)),pointindex(\child{k}($\nu$))) = \par\qquad\(\hookrightarrow\)\enspace \text{diag} $\left( \left . w_k \right |_{\text{leafpoints(\child{k}($\nu$))}} \right)$*$M_k$;
\STATE $D$(leafpointindex(\child{k}($\nu$)),pointindex(\child{k}($\nu$))) = \par\qquad\(\hookrightarrow\)\enspace \text{diag} $\left( \left . w_k \right |_{\text{leafpoints(\child{k}($\nu$))}} \right)$*$D_k$+ \par\qquad\(\hookrightarrow\)\enspace \text{diag} $\left( \frac{d}{dx} \left . w_k \right |_{\text{leafpoints(\child{k}($\nu$))}} \right)$*$M_k$;
\ENDFOR
\ENDIF
\end{algorithmic}
\end{algorithm}
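At a leaf, $D_x$ is the standard Chebyshev differentiation matrix on second-kind points. A well-known construction (following Trefethen's cheb.m; a sketch, not the paper's code) is:

```python
import numpy as np

def cheb(n):
    # Differentiation matrix on the n+1 second-kind Chebyshev points of [-1, 1].
    if n == 0:
        return np.zeros((1, 1)), np.ones(1)
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))   # diagonal via negative row sums
    return D, x
```

The norm of this matrix grows like $\Theta(n^2)$, which is exactly the growth that motivated the $t=\Theta(1/n)$ balance in section~\ref{converge_sec}.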
For $x \in [\alpha,\beta]$ we only need to evaluate the local approximations for the patches $x$ belongs to; this implies that the differentiation matrices will be inherently sparse. For example, Figure~\ref{SPARSE_DX} shows the sparsity of the first derivative matrix for the tree generated in Figure~\ref{ARCTAN2}. In this case, we have a sparsity ratio of around 76\%.
\begin{figure}[!htb]
\centering
\includegraphics[scale = 0.5]{diff_sparse2.eps}
\caption{Sparsity of the first derivative matrix for the tree generated for Figure~\ref{ARCTAN2}.}
\label{SPARSE_DX}
\end{figure}
\section{PUM for boundary-value problems}
\label{PUM_BVP_SEC}
Our method can be applied to solve linear and nonlinear boundary-value problems. For instance, consider a simple Poisson problem with zero boundary conditions:
\begin{equation}
\begin{aligned}
&u''(x) = f(x) \text{ for }-1<x<1 \\
&u(-1)=0, u(1)=0.
\end{aligned}
\label{simp_pois}
\end{equation}
Suppose that we have differentiation and interpolation matrices $D_{xx}$ and $M$ from section~\ref{PUM_matrix}, $X$ is the set of Chebyshev points over all the leaves, and that $X_I$, $X_B$ are the respective interior and boundary points of $X$. Let $E_{I}$ and $E_{B}$ be the matrices that map a vector to its subvector for the interior and boundary indices respectively. Let $F$ be the vector of values used for the local interpolants (i.e. if we had only two leaves whose interpolants used values $F_1,F_2$, we set $F = [F_1^T F_2^T]^T$). In order to find a PUM approximation $s(x)$ to the solution of (\ref{simp_pois}), we find $F$ by solving the following linear system:
\begin{equation}
\begin{bmatrix}
E_{I} D_{xx} \\[1mm]
E_{B} M
\end{bmatrix}
F
=
\begin{bmatrix}
\left . f \right |_{X_I} \\[1mm]
0 \\
\end{bmatrix}.
\label{PUM_lin_system}
\end{equation}
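In the special case of a single leaf (one global Chebyshev grid), this system reduces to the classical collocation approach of replacing boundary rows; a self-contained sketch for the Poisson problem above, with a sample right-hand side $f(x)=e^x$:

```python
import numpy as np

def cheb(n):
    # Chebyshev differentiation matrix on second-kind points (Trefethen's cheb.m).
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    D = np.outer(c, 1.0 / c) / (np.subtract.outer(x, x) + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

n = 32
D, x = cheb(n)
A = D @ D                        # discretized u''
A[0, :] = 0.0;  A[0, 0] = 1.0    # boundary row: u(1) = 0   (x[0] = +1)
A[-1, :] = 0.0; A[-1, -1] = 1.0  # boundary row: u(-1) = 0  (x[-1] = -1)
rhs = np.exp(x)                  # sample right-hand side f(x) = e^x
rhs[0] = rhs[-1] = 0.0
u = np.linalg.solve(A, rhs)
```

For this right-hand side the exact solution is $u(x)=e^x - x\sinh(1) - \cosh(1)$, and the computed values match it to near machine precision.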
Algorithm~\ref{BVP_solve} builds an adaptive solution for the BVP. We first construct a PU approximation $s(x)$ by solving the discretized system in (\ref{PUM_lin_system}). Sampling with $s(x)$, we use Algorithm~\ref{alg6} to determine whether the solution is resolved, and split any leaves that are not. Here we also allow merging for a node with resolved left and right leaves (i.e., the left and right leaves can be merged back together).
\begin{algorithm}
\caption{$T$=refineBVP($n_{\text{max}}$,$t$,BVP)}
\label{alg7}
\begin{algorithmic}
\STATE Define $T$ as a tree with a single node with the domain of the BVP.
\WHILE{$T$ has unrefined leaves}
\STATE Find values for the interpolants $F$ of the leaves of $T$ by solving a discretized \par\qquad\ \enspace system defined by the interpolation and differentiation matrices of $T$.
\STATE sample($T$,$F$)
\STATE $s(x) = \text{eval}(\text{root}(T),x)$ (the PU approximation)
\STATE sample($T$,$s(x)$)
\STATE splitleaves(root($T$),$n_{\text{max}}$,$t$)
\ENDWHILE
\end{algorithmic}
\label{BVP_solve}
\end{algorithm}
\subsection{BVP examples}
\label{BVP_SEC}
We solve the stationary Burgers equation on the interval $[0,1]$ with Robin boundary conditions \cite{reyna1995exponentially}:
\begin{equation}
\begin{aligned}
&\nu u''(x)-u(x)u'(x)=0 \\
&\nu u'(0)-\kappa(u(0)-\alpha)=0 \\
&\nu u'(1)+\kappa(u(1)+\alpha)=0
\end{aligned}
\label{PUM_nlin_system}
\end{equation}
which has nontrivial solution
\begin{equation}
u(x)=-\beta \tanh \left( \frac{1}{2} \beta \nu^{-1} \left( x-\frac{1}{2} \right) \right)
\label{true_sol}
\end{equation}
where $\beta$ satisfies
\begin{equation}
-\frac{1}{2} \beta^2 \text{sech}^2 \left( \frac{1}{4} \beta \nu^{-1} \right)+\kappa \left [ \alpha-\beta \text{tanh} \left( \frac{1}{4} \beta \nu^{-1} \right) \right ]=0.
\end{equation}
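For the parameter values used below, $\beta$ is close to 1 and can be found with simple bisection (an illustrative sketch; the paper does not specify how $\beta$ was obtained):

```python
import math

nu, alpha, kappa = 5e-3, 1.0, 2.0

def g(beta):
    # Residual of the transcendental equation for beta.
    s = beta / (4.0 * nu)
    return (-0.5 * beta**2 / math.cosh(s) ** 2
            + kappa * (alpha - beta * math.tanh(s)))

lo, hi = 0.9, 1.1    # g(lo) > 0 > g(hi) for these parameters
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
beta = 0.5 * (lo + hi)
```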
We choose $\nu=5 \times 10^{-3},\alpha=1$, and $\kappa=2$. We use {\tt fsolve} in MATLAB to solve the BVP, supplying the Jacobian of the discretized nonlinear system. Starting with the initial guess $u(x)=0$, we update the guess from the latest solve (i.e. if the solution $s(x)$ from Algorithm~\ref{BVP_solve} is determined to be unresolved, we use it as the next initial guess). For this problem we set the Chebfun chopping tolerance to $10^{-10}$. Our solution was resolved to this tolerance after four nonlinear solves; as seen in Figure~\ref{NLIN_EXAMPLE}, the final approximation had 298 nodes and the absolute error was less than $10^{-4}$. On a machine with a 2.6 GHz Intel Core i5 processor, the solution was found in 1.3 seconds.
\begin{figure}[!htb]
\centering
\subfloat[Solution with subintervals plotted.]{
\includegraphics[scale = 0.35]{non_lin_plot.eps}
\label{NLIN_EXAMPLEA}
}
\subfloat[Plot of the error.]{
\includegraphics[scale = 0.35]{non_lin_residual.eps}
\label{NLIN_EXAMPLEB}
}
\caption{Numerical solution using the PU method and residual for the BVP (\ref{PUM_nlin_system}) with $\nu=5 \times 10^{-3}$, $t=0.1$, and $n_{\text{max}} = 128$.}
\label{NLIN_EXAMPLE}
\end{figure}
We performed a similar experiment using global Chebyshev interpolants instead. We adapt by increasing the degree of the polynomial from $n$ to $\text{floor}(1.5 n)$, starting with $n=128$. We stop when we have a solution that is resolved to the tolerance $10^{-10}$ (same as before). Both the solution and residual are shown in Figure~\ref{GNLIN_EXAMPLE}; here the absolute error is higher, at $1.8\times 10^{-2}$. The solution took 3.2 minutes on the same machine. There are two main reasons why the global method is much slower. First, in order to resolve the true solution to the tolerance $10^{-10}$, the global Chebyshev solution requires 766 nodes versus roughly 300 for the PU approximation. Secondly, when adapting with the PUM, once a leaf is determined to be resolved, its number of nodes is reduced as dictated in Algorithm~\ref{alg2} and the leaf is not split in further iterations. This keeps the total number of nodes lower while adapting.
\begin{figure}[!htb]
\centering
\subfloat[Solution with subintervals plotted.]{
\includegraphics[scale = 0.35]{globalburger.eps}
\label{GNLIN_EXAMPLEA}
}
\subfloat[Plot of the error.]{
\includegraphics[scale = 0.35]{globalburgererr.eps}
\label{GNLIN_EXAMPLEB}
}
\caption{Numerical solution using the global Chebyshev method and residual for the BVP (\ref{PUM_nlin_system}) with $\nu=5 \times 10^{-3}$.
}
\label{GNLIN_EXAMPLE}
\end{figure}
\section{Discussion}
Our method offers a simple way to adaptively construct infinitely smooth approximations of functions that are given explicitly or that solve BVPs. By recursively constructing the PU weights with the binary tree, we avoid the need to determine the neighbors of each patch (as would be needed with the standard Shepard's PU weights). While this is not a serious issue in one dimension, the complexity of how the patches overlap increases in higher dimensions. For example, in 2D we could build a similar method on a box where we use tensor-product Chebyshev approximations. We would refine by splitting the box into two overlapping parts (either in $x$ or $y$) and recursively build a binary tree. We would similarly define partitions of unity for each of the splits. If we used infinitely smooth weights at the splits, the 2D PU approximation would be infinitely smooth as well.
Our method leaves room for improvement. For instance, while merging helps reduce the number of nodes, in cases where we have a singularity right above a split the PU method over-resolves in the overlap; this can be seen in Figure~\ref{ARCTAN2}. The source of the problem is that patches may be adjacent in space but not in the tree. This could be resolved by a more robust merging algorithm. Alternatively, we could determine an optimal splitting location through a Chebyshev--Pad\'{e} approximation as in \cite{driscoll2014optimal}, but the PU adds a layer of complexity since we must optimize not just for the splitting location but also for the size of the overlap.
Additionally it is possible to construct weights that are not $C^{\infty}$ but have smaller norms in their derivatives. For instance,
\begin{equation}
\begin{aligned}
w_{\ell}(x) &= \begin{cases}
1 & x \leq -t \\
\frac{1}{4t^3} x^3 - \frac{3}{4 t} x+\frac{1}{2} & -t\leq x \leq t \\
0 & x>t
\end{cases} \\
w_{r}(x) &= 1-w_{\ell}(x)
\end{aligned}
\end{equation}
defines a $C^1[-1,1]$ piecewise cubic partition of unity, where $\| w_{\ell}'(x)\|_{\infty} = \frac{3}{4t}$. If a BVP requires higher smoothness, we could similarly construct a higher degree polynomial for the weights.
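The claimed properties of this cubic weight are easy to verify numerically (a sketch with our own function names):

```python
def w_left(x, t):
    # C^1 piecewise-cubic left weight from the text.
    if x <= -t:
        return 1.0
    if x >= t:
        return 0.0
    return x**3 / (4.0 * t**3) - 3.0 * x / (4.0 * t) + 0.5

def w_left_prime(x, t):
    # Derivative: zero outside the overlap, a downward parabola inside.
    if abs(x) >= t:
        return 0.0
    return 3.0 * x**2 / (4.0 * t**3) - 3.0 / (4.0 * t)
```

The cubic matches value and slope at $x=\pm t$ (hence $C^1$), and its derivative attains the maximum magnitude $\frac{3}{4t}$ at $x=0$, compared with the $\Theta(t^{-2})$ growth of the $C^{\infty}$ weights.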
\bibliographystyle{siamplain}
\section{Combination method}
\label{sec:method}
The combination is performed by finding the best linear unbiased estimator (BLUE)~\cite{Lyons:1988rp,Valassi:2003mu}. The BLUE method finds the coefficients of the linear combination of the input measurements by minimising the total uncertainty of the combined result, taking into account both the statistical and systematic uncertainties, as well as correlations between the inputs.
Imposing a unitarity constraint between the three observables, \ensuremath{F_{\text{0}}}, \ensuremath{F_{\text{L}}}, and \ensuremath{F_{\text{R}}}, results in two independent observables. In this analysis, the measurements of \ensuremath{F_{\text{0}}}\ and \ensuremath{F_{\text{L}}}\ are combined while \ensuremath{F_{\text{R}}}\ is obtained as \mbox{$\ensuremath{F_{\text{R}}} = 1-\ensuremath{F_{\text{0}}}-\ensuremath{F_{\text{L}}}$}. Since no further constraint is placed on the observables, values outside the range $[0,1]$ are allowed for the three polarisation fractions.
The total correlation between \ensuremath{F_{\text{0}}}\ and \ensuremath{F_{\text{L}}}\ obtained from the combination is taken into account in the estimation of the uncertainty on the \ensuremath{F_{\text{R}}}~value.
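For a single observable, the BLUE prescription reduces to a few lines. The sketch below (in Python, with made-up numbers) is illustrative only, since the actual combination treats \ensuremath{F_{\text{0}}}\ and \ensuremath{F_{\text{L}}}\ simultaneously with their full covariance:

```python
import numpy as np

def blue(values, cov):
    # BLUE weights: lambda = C^{-1} u / (u^T C^{-1} u), with u a vector of ones;
    # the combined value is lambda . values, with variance 1 / (u^T C^{-1} u).
    values = np.asarray(values, dtype=float)
    cinv = np.linalg.inv(np.asarray(cov, dtype=float))
    u = np.ones(len(values))
    norm = u @ cinv @ u
    lam = cinv @ u / norm
    return lam @ values, 1.0 / norm
```

Two uncorrelated measurements with equal uncertainties receive equal weights, reproducing the mean with the variance halved; correlations shift the weights and can even make some of them negative.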
\section{Introduction}
The large number of top quarks produced at the CERN LHC provides an excellent laboratory for the study of their production and decay properties. Precise predictions of some of these properties are available in the standard model (SM) of particle physics, and are tested through detailed comparisons to data. Potential deviations between data and predictions could reveal important information on the existence of new physics beyond the SM. The properties of the top quark\xspace decay vertex \ensuremath{\PQt\PW\PQb}\xspace are governed by the structure of the weak interaction. In the SM, this interaction has a $V-A$ structure, where $V$ and $A$ refer to the vector and axial-vector components of the weak current. This structure, along with the masses of the particles involved, determines the fractions of \wboss\ with longitudinal (\ensuremath{F_{\text{0}}}), left-handed (\ensuremath{F_{\text{L}}}), and right-handed (\ensuremath{F_{\text{R}}}) polarizations, referred to as \emph{polarization fractions}. Theoretical calculations at next-to-next-to-leading order (NNLO) in perturbative quantum chromodynamics (QCD) predict the fractions to be \mbox{$\ensuremath{F_{\text{0}}}=0.687 \pm 0.005$}, \mbox{$\ensuremath{F_{\text{L}}}=0.311 \pm 0.005$}, and \mbox{$\ensuremath{F_{\text{R}}}=0.0017 \pm 0.0001$}~\cite{Czarnecki:2010gb}, assuming a top quark\xspace mass of $172.8 \pm 1.3\GeV$. Thus, the SM predictions can be tested in high-precision measurements of the polarization fractions, and potential new physics processes that modify the structure of the \ensuremath{\PQt\PW\PQb}\xspace vertex can be probed.
Experimentally, polarization fractions can be measured in events containing top quarks, using the kinematic properties of its decay products.
For semileptonically decaying top quarks, \ie~$\PQt\to \PW (\to \ell \Pgn)\PQb$ (with lepton $\ell$ = electron, muon, or $\Pgt$), the polarization angle $\theta^{*}$ is defined as the angle between the direction of the charged lepton and the reversed direction of the \PQb quark, both in the rest frame of the \wbos. The distribution of the variable \ensuremath{\cos \theta^{*}}\ is particularly sensitive to the polarization fractions. The differential decay rate is given by
\begin{linenomath}
\begin{equation}
\frac{1}{\Gamma} \frac{\rd \Gamma}{\rd \ensuremath{\cos \theta^{*}}} = \frac{3}{4} \left( 1-\cos^{2} \theta^{*} \right) \, \ensuremath{F_{\text{0}}} + \frac{3}{8} \left(1-\ensuremath{\cos \theta^{*}} \right)^2 \, \ensuremath{F_{\text{L}}} + \frac{3}{8} \left(1 + \ensuremath{\cos \theta^{*}}\right)^{2} \, \ensuremath{F_{\text{R}}}.
\label{eq:costh}
\end{equation}
\end{linenomath}
In a similar way, $\theta^{*}$ can be defined for the hadronically decaying top quarks, \ie~$\PQt\to \PW (\to \PQq'\PAQq)\PQb$, by replacing the charged lepton with the down-type quark ($\PQq'$). In the measurements used in this paper, only angles from top quarks decaying semileptonically to electrons or muons are considered. Imposing a unitarity constraint between the three polarization fractions, $\ensuremath{F_{\text{0}}}+\ensuremath{F_{\text{L}}}+\ensuremath{F_{\text{R}}}=1$, results in two independent observables.
The \wbos polarization fractions have been measured in proton-antiproton collisions by the CDF and D0~experiments~\cite{TevatronCombin} at a centre-of-mass energy of 1.96\TeV with experimental uncertainties of 10--15\% in \ensuremath{F_{\text{0}}}~and \ensuremath{F_{\text{L}}}. The ATLAS and CMS Collaborations have performed measurements at the LHC in proton-proton ($\Pp\Pp$) collisions at $\sqrt{s}=7$~\cite{TOPQ-2011-10,CMS-TOP-11-020} and 8~\cite{TOPQ-2016-02, CMS-TOP-13-008, CMS-TOP-12-020}\TeV, reaching a precision in \ensuremath{F_{\text{0}}}~and \ensuremath{F_{\text{L}}}~of 3--5\%.
All measurements are in agreement with the SM NNLO predictions within their experimental uncertainties. However, these experimental uncertainties are larger than those of the current theoretical predictions, which are less than 2\%. Improving the experimental precision motivates the combination of the ATLAS and CMS measurements: combining measurements based on independent data sets reduces the statistical uncertainty, while the overall uncertainty can be further decreased by exploiting the differences in experimental systematic effects stemming from the use of the two detectors and different analysis methods.
This paper describes the combination of the \wbos polarization fractions measured by the ATLAS and CMS Collaborations based on data collected at $\sqrt{s}=8\TeV$, in final states enhanced in top quark\xspace pair (\ttbar)~\cite{TOPQ-2016-02, CMS-TOP-13-008} and single top quark~\cite{CMS-TOP-12-020} production processes. The paper is structured as follows: the measurements included in the combination are briefly described in Section~\ref{sec:measurements}. Section~\ref{sec:systematics} lists the sources of systematic uncertainty considered in the input measurements. The correlations between the measured values included in this combination are categorized in Section~\ref{sec:sec4}, and presented for each source of systematic uncertainty. In Section~\ref{sec:results}, the results of the combination and their interpretation in terms of new physics using the effective field theory approach are described. A summary and conclusions are presented in Section~\ref{sec:conclusion}.
\section{The ATLAS and CMS measurements}
\label{sec:measurements}
The four input measurements in this combination are three measurements of the \wbos polarization in the top quark\xspace decay from top quark\xspace pair production events in the $\ell+$jets channel, and one from events with a single top quark\xspace signature. The measurements based on \ttbar production events were performed by the ATLAS~\cite{TOPQ-2016-02} and CMS~\cite{CMS-TOP-13-008} experiments, with the latter separated into electron and muon channels. The measurement from events with a single top quark\xspace signature was performed by the CMS~\cite{CMS-TOP-12-020} experiment.
The measurements were based on $\Pp\Pp$ collision data at $\sqrt{s}= 8$\TeV, corresponding to integrated luminosities of 20.2 and 19.7\fbinv for the ATLAS and CMS experiments, respectively. The 7\TeV measurements~\cite{TOPQ-2011-10,CMS-TOP-11-020} are not included in this combination since they are based on smaller data sets, and, having relatively large systematic uncertainties, their contribution to the combination is expected to be marginal. All measurements were based on fits where the polarization fractions were adjusted to describe the observed \ensuremath{\cos \theta^{*}}\ distributions of the semileptonically decaying top quark, taking into account the SM predictions for the backgrounds. These measurements are summarized in the rest of the section. Detailed descriptions of the ATLAS and CMS detectors can be found elsewhere~\cite{atlasDet,cmsDet}.
\subsection{The ATLAS measurement}
\label{sec:altasmeas}
The contributing input from the ATLAS experiment to this combination is described in Ref.~\cite{TOPQ-2016-02} and denoted ``ATLAS'' in the following.
In this measurement, the event selection was defined to efficiently select events from top quark\xspace pair decays in the $\ell+$jets channel, \ie exactly one reconstructed electron or muon and at least four jets, of which at least two were tagged as $\PQb$ jets, while minimizing background contributions, \eg from $\PW$/$\PZ$+jets and multijet production. The latter corresponds to events with jets misidentified as leptons, or with nonprompt leptons from hadron decays, that pass the $\ell$+jets selection.
The \ttbar system was fully reconstructed via a kinematic likelihood fit technique~\cite{Erdmann:2013rxa}, which maps the four decay quarks (two $\PQb$ quarks and two light quarks from the \wbos decay) to four reconstructed jets, utilising Breit--Wigner distributions for the \wbos and top quark masses, as well as transfer functions to map the reconstructed jet and lepton energies to the parton or true lepton level, respectively.
The \wbos polarization was measured in the single-lepton channels from \ttbar events using a template fit method. Dedicated \ttbar templates of the \ensuremath{\cos \theta^{*}}\ distribution for each polarization configuration were produced by reweighting the simulated SM \ttbar events. Additional templates for background processes were also produced.
The templates were fitted to the \ensuremath{\cos \theta^{*}}\ distribution in data, with separate templates for the electron and muon channels, via a binned likelihood fit:
\begin{linenomath}
\begin{equation}
{\mathcal{L}}=\prod\limits_{k=1}^{n_{\text{bins}}} \frac{N_\text{exp}(k)^{~N_\text{data}(k)}}{\Bigl[ N_\text{data}(k)\Bigr]!}~\exp{[-N_\text{exp}(k)]}
\prod\limits_{j=1}^{n_{\text{bkg}}}\frac{1}{\sqrt{2\pi}\sigma_{\text{bkg}, j}}\exp\left({\frac{-(N_{\text{bkg}, j}-\hat N_{\text{bkg}, j})^2}{2\sigma ^{2}_{\text{bkg}, j}}}\right),
\label{eq:LHFit}
\end{equation}
\end{linenomath}
where $N_{\text{data}}(k)$ and $N_{\text{exp}}(k)$ represented the number of observed and the total number of expected events (sum of signal and background events) in each bin $k$ of the \ensuremath{\cos \theta^{*}}\ distribution, respectively. The number of events for each background source $j$ is represented by $N_{\text{bkg}, j}$. The expected number of events for each background source $j$, $\hat N_{\text{bkg}, j}$, and the uncertainties in the normalization of the background events, $\sigma_{\text{bkg}, j}$, were used to constrain the fit. Therefore, the uncertainties in the polarization fractions obtained from the fit included both the statistical and systematic uncertainties in the background normalizations. The final result was obtained by a simultaneous fit of the electron and muon channel templates to the data. A common parameter was used to scale each of the backgrounds in the electron and muon channel in a fully correlated manner, except in the case of the nonprompt-lepton background for which two separate, uncorrelated, parameters were used. The contribution from $\PW$+jets events was split into different quark flavour samples and scaled by the calibration factors derived from sidebands in data. These procedures were found to cover the corresponding shape uncertainties in the nonprompt-lepton and $\PW$+jets contributions. The uncertainty in the shape of the contributions from single top quark and diboson events was found to be negligible.
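The structure of Eq.~(\ref{eq:LHFit}) can be sketched as a negative log-likelihood with per-bin Poisson terms plus Gaussian constraints on the background yields. The toy below is illustrative only (the binning and yields are invented, not those of the measurement), and constant terms independent of the fit parameters are dropped:

```python
import numpy as np

def nll(n_data, n_sig, bkg_shapes, n_bkg, n_bkg_hat, sigma_bkg):
    """Negative log of Eq. (LHFit), up to constants independent of the fit.

    n_data     : observed counts per cos(theta*) bin
    n_sig      : expected signal counts per bin for a given (F0, FL, FR)
    bkg_shapes : unit-normalized background shapes, array (n_bkg, n_bins)
    n_bkg      : background yields (free parameters of the fit)
    n_bkg_hat  : expected yields used as Gaussian priors
    sigma_bkg  : prior uncertainties in the background yields
    """
    n_exp = n_sig + np.asarray(n_bkg) @ np.asarray(bkg_shapes)
    poisson = np.sum(n_exp - n_data * np.log(n_exp))   # per-bin Poisson terms
    gauss = 0.5 * np.sum(((np.asarray(n_bkg) - n_bkg_hat) / sigma_bkg) ** 2)
    return poisson + gauss

# Toy example: data generated exactly at the prior background yield,
# so the likelihood prefers n_bkg = n_bkg_hat over a shifted value.
n_sig = np.array([50.0, 80.0, 50.0])
shape = np.array([[0.2, 0.6, 0.2]])          # one background source
n_data = n_sig + 30.0 * shape[0]
best = nll(n_data, n_sig, shape, [30.0], np.array([30.0]), np.array([6.0]))
shifted = nll(n_data, n_sig, shape, [45.0], np.array([30.0]), np.array([6.0]))
print(best, shifted)  # the yield at its prior gives the smaller value
```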
\subsection{The CMS measurements}
\label{sec::cms}
Three CMS measurements contribute to this combination.
The results presented in Ref.~\cite{CMS-TOP-13-008} used similar final states to those in ATLAS: one lepton and four or more jets, of which at least two were tagged as $\PQb$ jets. The \ttbar system was fully reconstructed using a constrained kinematic fit. The unmeasured longitudinal momentum of the neutrino was inferred from the kinematic constraints.
The measurement was performed by maximizing the binned Poisson likelihood function,
\begin{linenomath}
\begin{equation}
\mathcal{L} = \prod\limits_{k=1}^{n_{\text{bins}}} \frac{N_\text{exp}(k)^{\displaystyle~N_\text{data}(k)}}{\Bigl[ N_\text{data}(k)\Bigr]!}~\exp{[-N_\text{exp}(k)]},
\label{eq:lagrang}
\end{equation}
\end{linenomath}
\noindent
where ${N_\text{data}}(k)$ is the number of observed events in each bin $k$ of the reconstructed \ensuremath{\cos \theta^{*}}~distribution,
and ${ N_\text{exp}}(k)$ is the number of expected events from Monte Carlo (MC) simulation for a given polarization configuration $\vec{F} \equiv (\ensuremath{F_{\text{0}}} ,\ensuremath{F_{\text{L}}}, \ensuremath{F_{\text{R}}})$, including signal and background events.
During each step of the maximization, ${ N_\text{exp}}(k)$ was modified for different values of the polarization fractions $\vec{F}$ using a reweighting procedure based on Eq.~(\ref{eq:costh}). Weights were applied to the events at the generator level, so that the $\cos\theta^*$ distribution, generated according to Eq.~(\ref{eq:costh}), corresponded to alternative values of $\vec{F}$. Backgrounds that did not involve a top quark did not change ${N_\text{exp}}(k)$ for different values of $\vec{F}$. The ATLAS and CMS measurements considered the variations in ${N_\text{exp}}(k)$ coming from all top quark events passing the selection, either $\ell$+jets or non-$\ell$+jets, including $\Pgt$+jets and dilepton \ttbar processes. In addition, the CMS analyses took into account the variations arising from single top quark processes, which were treated as a background in the ATLAS measurement. The normalization of the \ttbar process was left free in the fit.
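The reweighting step can be illustrated as follows (a hypothetical sketch, not the analysis code): each simulated event receives the ratio of the Eq.~(\ref{eq:costh}) density at the alternative fractions to that at the generated fractions, evaluated at its generator-level $\cos\theta^*$. The fraction values below are invented. A useful closure check is the analytic mean of $\cos\theta^*$, which equals $(\ensuremath{F_{\text{R}}}-\ensuremath{F_{\text{L}}})/2$ and is reproduced by the weighted sample:

```python
import numpy as np

def density(c, f0, fl, fr):
    # cos(theta*) density of Eq. (costh)
    return (0.75 * (1 - c**2) * f0 + 0.375 * (1 - c)**2 * fl
            + 0.375 * (1 + c)**2 * fr)

f_gen = (0.687, 0.311, 0.002)   # illustrative "generated" fractions
f_new = (0.60, 0.20, 0.20)      # alternative hypothesis to reweight to

# Rejection-sample cos(theta*) according to f_gen (density bounded by 1.5)
rng = np.random.default_rng(7)
c = rng.uniform(-1.0, 1.0, 400000)
cos_gen = c[rng.uniform(0.0, 1.5, c.size) < density(c, *f_gen)]

# Per-event weight: alternative density over generated density
w = density(cos_gen, *f_new) / density(cos_gen, *f_gen)

# Analytic mean of cos(theta*) is (FR - FL)/2 for either hypothesis
print(np.mean(cos_gen))                # ~ (0.002 - 0.311)/2 = -0.155
print(np.average(cos_gen, weights=w))  # ~ (0.20 - 0.20)/2 = 0
```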
In order to allow a more detailed account of the correlations with the other measurements, the two lepton channels, $\Pe$+jets and $\Pgm$+jets, enter the combination as two separate measurements, referred to as ``CMS ($\Pe$+jets)'' and ``CMS ($\Pgm$+jets)'' throughout this paper, respectively. In the ATLAS measurement, the fractions were obtained simultaneously using the events from the two channels, therefore this separation is not available.
The third CMS input~\cite{CMS-TOP-12-020} included in the combination used a final state targeting $t$-channel single top quark topologies instead of \ttbar events. The event selection required exactly one electron or muon, and exactly two jets, one of which was tagged as a $\PQb$ jet. This selection is orthogonal to that of the CMS ($\Pe$+jets) and CMS ($\Pgm$+jets) analyses, making the three of them statistically independent. Nevertheless, while the expected amount of selected $t$-channel single top quark events corresponded to only about 13\% of the sample, the expected contribution from the \ttbar process amounted to about 35\%, and needed to be taken into account as part of the signal. The largest background came from the $\PW$+jets process. This contribution was fully estimated from data, and corresponded to about 36\% of the selected sample. Other processes, such as multijet and $\PZ$+jets production, accounted for the remaining 16\% of the sample.
The fitting procedure applied in Ref.~\cite{CMS-TOP-13-008} was slightly modified for the single top quark topology measurement. In this case, because of the different background composition with respect to the \ttbar analysis, the normalizations of the single top quark and \ttbar processes were fixed according to their predicted cross section values. On the other hand, the normalization of the $\PW$+jets sample was left free in the fit to be adjusted simultaneously with the \ensuremath{F_{\text{0}}}\ and \ensuremath{F_{\text{L}}}\ fractions, and treated independently in the $\Pe$+jets and $\Pgm$+jets channels.
Moreover, the fractions were extracted by maximizing a combined likelihood function, constructed from the two likelihood functions of the electron and muon channels, taking into account
the correlations between them. Therefore, although based on two single-lepton channels, this measurement contributes to the combination as one single input, denoted as ``CMS (single top)'' in the following.
\subsection{The \texorpdfstring{$\PW$ boson polarization values from the input measurements}{}}
\label{sec:mod_measurements}
The polarization fractions from the input measurements before applying the modifications concerning the combination (as discussed in Section~\ref{sec:systematics}), and their uncertainties are summarized in Table~\ref{tab:measurements}. The first quoted uncertainty in the ATLAS measurement includes the statistical uncertainties and uncertainties in the background determination, and the second uncertainty refers to the remaining systematic uncertainty. For CMS measurements, the first uncertainty is statistical, while the second is the total systematic uncertainty, including that on background determination.
In order to harmonize the evaluation of the systematic uncertainties across the input measurements, some of the uncertainties are modified before performing the combination. The following modifications are applied (as detailed in Section~\ref{sec:systematics}):
\begin{itemize}
\item The uncertainty values in the ATLAS measurement are symmetrized.
\item The \ttbar modelling uncertainties in the CMS ($\Pe$+jets) and CMS ($\Pgm$+jets) measurements are recalculated without the contributions from the limited number of events in the samples used to estimate them.
\item The uncertainty due to the top quark mass used in the ATLAS measurement is increased from a variation of $\pm 0.7\GeV$ to $\pm 1.0\GeV$.
\end{itemize}
\renewcommand{\arraystretch}{1.2}
\begin{table}[ht!]
\centering
\topcaption{Summary of the published ATLAS and CMS measurements for 8\TeV data. The first quoted uncertainty in the ATLAS measurement includes statistical uncertainties and uncertainties in the background determination, and the second uncertainty refers to the remaining systematic contribution. For CMS measurements, the first uncertainty is statistical while the second is the total systematic uncertainty, including that on background determination.
\label{tab:measurements}}
\begin{tabular}{lccc}
\hline
Measurement & \ensuremath{F_{\text{0}}} & \ensuremath{F_{\text{L}}} & \ensuremath{F_{\text{R}}}\\
\hline
ATLAS ($\ell$+jets) & $0.709 \pm 0.012 \pm 0.015$ & $ 0.299\pm 0.008 \pm 0.013$ & $ -0.008 \pm 0.006 \pm 0.012$ \\
CMS ($\Pe$+jets) & $0.705 \pm 0.013 \pm 0.037$ & $0.304 \pm 0.009 \pm 0.020$ & $ -0.009 \pm 0.005 \pm 0.021$ \\
CMS ($\Pgm$+jets) & $0.685 \pm 0.013 \pm 0.024$ & $0.328 \pm 0.009 \pm 0.014$ & $-0.013 \pm 0.005 \pm 0.017$ \\
CMS (single top) & $0.720 \pm 0.039 \pm 0.037$ & $0.298 \pm 0.028 \pm 0.032$ & $-0.018 \pm 0.019 \pm 0.011$ \\
\hline
\end{tabular}
\end{table}
\section{Sources of systematic uncertainty}
\label{sec:systematics}
The effects of various systematic uncertainties on the input results were studied individually for each measurement. In the ATLAS measurement, the impact of systematic uncertainties was evaluated with alternative pseudo-data distributions built from the altered signal and background contributions. The alternative pseudo-data distributions were produced by varying each source of systematic uncertainty by one standard deviation ($\pm 1\sigma$). The CMS measurements also used pseudo-data to estimate the uncertainties due to parton distribution functions (PDFs), size of the simulated samples, and single top quark analysis specific uncertainties. The other uncertainties were estimated by replacing the nominal sample with alternative samples containing simulated events modified according to each of the systematic variations, and repeating the fit.
As the algorithm used to perform the combination accepts only symmetric uncertainties (more details in Section~\ref{sec:results}), the uncertainties in the ATLAS measurement are symmetrized by assigning the average uncertainty value between the up and down variations in each uncertainty source. A test is performed by replacing the average uncertainty value with the largest shift among the up and down variations. No variation in the combination results is observed, \ie the central values of the polarization fractions, combination uncertainty, and total correlation remain unchanged. In addition, common uncertainty categories are established by merging and regrouping various uncertainties in each individual input measurement.
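The symmetrization and its cross-check can be written compactly (a sketch with invented shift values, not taken from the measurement): an asymmetric source enters the combination with the average magnitude of its up/down variations, while the cross-check instead takes the larger of the two:

```python
def symmetrize(up, down):
    # Default: average magnitude of the up/down variations
    return 0.5 * (abs(up) + abs(down))

def symmetrize_max(up, down):
    # Cross-check: larger of the two shifts
    return max(abs(up), abs(down))

up, down = +0.010, -0.014   # invented example shifts in a fraction
print(symmetrize(up, down))      # 0.012
print(symmetrize_max(up, down))  # 0.014
```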
In the following, the categorization of the systematic uncertainties considered for the combination is presented. The categories, assumed to be independent from each other, comprise sources of uncertainties that have similar origins, easing the treatment of correlations discussed in Section~\ref{sec:sec4}.
\subsection{Limited size of the data and simulated samples, backgrounds, and integrated luminosity}
\noindent {\textit{Statistical uncertainty, background determination, and integrated luminosity (stat+bkg):}}
The uncertainties in the ATLAS measurement from the fit included both the statistical uncertainty in the data and the systematic uncertainty in the background normalizations via priors for the background yields. The shape of the multijet processes was determined from data, while for the other background events it was fully determined from simulation. The impact of the 1.9\% integrated luminosity uncertainty~\cite{DAPR-2013-01} was found to be negligible because of the background normalization treatment in the fit.
In the CMS measurements, the uncertainties in the expected backgrounds included shape and normalization effects, and were estimated by varying them separately within their uncertainties and repeating the measurement. The multijet background in all CMS measurements as well as the normalization of the $\PW$+jets contribution in the CMS (single top) case were derived exclusively from data.
All other background processes, as well as \ttbar, and single top quark processes in the CMS (single top) measurement were estimated using simulation, normalized to the integrated luminosity of the data samples. These were affected by the uncertainties in their predicted cross sections, and the integrated luminosity determination. The CMS integrated luminosity uncertainty of 2.6\%~\cite{CMS-PAS-LUM-13-001} had a sizeable effect only on the CMS (single top) measurement.
\noindent {\textit{Size of simulated samples:}}
This category accounts for the limited number of simulated events for the nominal samples in all input measurements. Both ATLAS and CMS evaluated this uncertainty by performing pseudo-experiments. In the CMS ($\Pe$+jets) and CMS ($\Pgm$+jets) measurements, the limited number of simulated events was also considered for the \ttbar samples used for the estimation of the modelling uncertainties. In order to perform a consistent combination, the \ttbar modelling uncertainties in the CMS ($\Pe$+jets) and CMS ($\Pgm$+jets) measurements are recalculated without the contributions from the limited number of events in the samples used to estimate them. The impact of this modification on the relative uncertainty in the measurements is found to be of the order of $10^{-4}$.
\subsection{Detector modelling}
\label{sec:detmod}
\noindent {\textit{Jets:}} In all input measurements in this combination, the same jet clustering algorithm, the anti-\kt algorithm~\cite{Cacciari:2008gp,Cacciari:2011ma}, was used, with the radius parameter R of 0.4 and 0.5 for the ATLAS and CMS experiments, respectively. However, in the ATLAS measurement the jets were built from energy deposits in the calorimeter~\cite{PERF-2014-07}, while in the CMS analyses they were reconstructed from particle-flow~\cite{CMS-PRF-14-001} objects. Thus, the two experiments used different calibration procedures and uncertainties for jets. The following categories comprise various sources of uncertainty related to the reconstruction and energy calibration of jets.
\begin{itemize}
\item \noindent {\textit{ Jet energy scale (JES):}}
The JES uncertainty in the ATLAS and CMS analyses was composed of different uncertainty sources, such as jet flavour dependence, the additional interactions in the same or nearby bunch crossings (pileup), calibrations from $\PZ$+jets or $\gamma$+jets processes, and other components. In general, these components have different levels of correlation between the two experiments and have been used to evaluate the total JES correlation (as detailed in Section~\ref{subsec:stability}). The final JES uncertainty used in this combination is quoted in Tables~\ref{tab:uncertainties_ATLAS}--\ref{tab:uncertainties_CMSst} and results from grouping all JES uncertainty components into a single number.
\item \noindent {\textit{ Jet energy resolution (JER):}}
This category includes contributions due to the uncertainties in the modelling of the jet energy resolution. The momenta of the jets in simulation were smeared so that the jet energy resolution in simulation agrees with that in data. Both experiments used a similar method to estimate this uncertainty.
\item \noindent {\textit{Jet vertex fraction (JVF):}}
To suppress jets from pileup, in the ATLAS measurement jets were required to fulfil the JVF criterion. The corresponding uncertainty was evaluated in the measurement by changing the nominal JVF cutoff value and repeating the measurement~\cite{ATLAS-CONF-2013-083}.
In the CMS measurements, pileup events were removed at the event reconstruction level with the particle-flow algorithm. In this case, uncertainties in jet reconstruction due to pileup were covered by the JES and pileup categories, and are not added as a separate source.
\item \noindent {\textit{Jet reconstruction efficiency:}}
A systematic uncertainty was included in the ATLAS measurement to account for the jet reconstruction efficiency mismatch between simulation and data. In the CMS measurements, this uncertainty is included in the JES uncertainty.
\end{itemize}
\noindent {\textit{Lepton efficiency:}}
For all measurements, this category accounted for the uncertainties in the scale factors used to correct the simulated samples so that the efficiencies for lepton selection, reconstruction, and identification observed in data were well reproduced by the simulation. Since the samples were collected using single-lepton triggers, uncertainties in the trigger efficiencies were also included. All corrections were applied as functions of \pt\ and $\eta$ of the leptons. This uncertainty was found to be negligible for the CMS (single top) measurement, compared to other uncertainties.
\noindent {\textit{$\PQb$ tagging:}}
In this category, uncertainties in the scale factors used to correct the simulation for the differences between data and simulation in the efficiencies for tagging jets originating from $\PQb$ quarks (tag) and for wrongly identifying jets originating from $\PQc$ or light partons as $\PQb$ jets (mistag) were taken into account. These differences were accounted for by assigning scale factors to the jets, dependent on the \pt and $\eta$ as well as on the flavour of the jet.
In the ATLAS measurement, additionally, an uncertainty was assigned to account for the extrapolation of the $\PQb$ tagging efficiency measurement to the high-\pt region.
\noindent {\textit{Pileup:}}
In both the ATLAS and the CMS analyses, pileup effects were taken into account in the simulation of signal and background events. The distribution of pileup was adjusted to reflect the measured instantaneous luminosities per bunch in data. In the CMS measurements, this uncertainty was estimated by varying the $\Pp\Pp$ cross section used to estimate the number of pileup interactions in data within its uncertainty, and recalculating the weights applied to the simulation. In the ATLAS measurement, the uncertainty in the description of extra energy deposited due to pileup interactions was treated as a separate missing transverse momentum (\ptmiss) scale uncertainty. The impact on the measured \wbos polarization fractions from this uncertainty was found to be negligible, and therefore was not considered.
\subsection{Signal modelling}
\label{sec:sigMod}
\noindent {\textit{Top quark mass:}}
In all four analyses, the effect of the uncertainty in the top quark mass was estimated by repeating the measurements using simulated samples with different input top quark masses for the signal process.
In the ATLAS measurement, this effect was evaluated using an uncertainty of $\pm 0.70\GeV$ in the top quark mass as given by the ATLAS measurement~\cite{TOPQ-2016-03}. In the CMS measurements on the other hand, an uncertainty of $\pm 1.0\GeV$ in the top quark mass was assumed. In order to keep consistency across the various input measurements, this effect in the ATLAS measurement is reestimated using the original estimation method described in Ref.~\cite{TOPQ-2016-02}, accounting for a variation of $\pm 1.0\GeV$ in the top quark mass. The impact of this modification in the ATLAS input result is negligible.
\noindent {\textit{Simulation model choice:}}
The impact of using different MC event generators and their interfaced showering and hadronization models was estimated in all input measurements. In the ATLAS measurement, the impact of the choice of different MC event generators was assessed by comparing events produced by \textsc{Powheg-Box}~\cite{Nason:2004rx,Frixione:2007nw,Frixione:2007vw,Alioli:2009je,Alioli:2010xd} and \MCATNLO~\cite{Frixione:2002ik,Frixione:2003ei,Frixione:2005vw}, both interfaced to \textsc{Herwig}~\cite{Corcella:2000bw} for showering and hadronization.
To evaluate the impact of the different parton shower and hadronization models, the \textsc{Powheg+Herwig} sample was compared to \textsc{Powheg+Pythia}~\cite{Sjostrand:2006za}.
For the CMS ($\Pe$+jets) and CMS ($\Pgm$+jets) measurements, the uncertainties were estimated by replacing the events produced by \MADGRAPH~\cite{Alwall:2014hca} interfaced with \textsc{Pythia} with \MCATNLO interfaced with the \textsc{Herwig} generator and additionally, varying the kinematic scale used to match jets to partons (matching threshold) by twice and half its central value.
In the CMS (single top) measurement, the uncertainty in the choice of different MC generators was estimated as the difference between the \textsc{Powheg+Pythia} and the \COMPHEP~\cite{Comphep2} generators.
\noindent {\textit{Radiation and scales:}}
In all four analyses, this category represents the uncertainty associated with initial- and final-state radiation (ISR/FSR) estimated using simulated samples of \ttbar events where the renormalization and factorization scales ($\mu_{\mathrm{R}}$ and $\mu_{\mathrm{F}}$) were simultaneously set to twice and half the default value in the matrix element (ME) calculations. In the CMS measurements, the $\mu_{\mathrm{R}}$ and $\mu_{\mathrm{F}}$ in the parton shower were also varied simultaneously to those used in the ME calculations.
However, in the ATLAS measurement, a different set of tuned parameters of the \PYTHIA parton shower with a modified strong coupling \alpS was used to account for low and high radiation to match the variation of scales in the ME calculations. The detailed list of modified parameters is given in Ref.~\cite{Skands:2010ak}.
Furthermore, in the ATLAS measurement the value of the damping parameter ($h_{\text{damp}}$) in \textsc{Powheg-Box} was set to twice the top quark mass for the high-radiation sample.
In addition to varying the scales in the \ttbar background, the CMS (single top) measurement also varied those used in the single top quark simulated samples.
\noindent {\textit{Top quark \pt:}}
In previous CMS analyses of \ttbar events, described \eg in Ref.~\cite{CMS-TOP-12-028}, the shape of the \pt spectrum for top quarks was found to be softer than the predictions from \MADGRAPH\ simulation. The effect of this mismodelling on the CMS ($\Pe$+jets) and CMS ($\Pgm$+jets) measurements was estimated by reweighting the simulated \ttbar sample to describe the data. The difference in the polarization fractions between the default and the reweighted samples was taken as a systematic uncertainty.
On the other hand, the top quark \pt distribution did not exhibit, within uncertainties, a significant difference with the predictions in the single top quark enriched phase space, therefore no systematic uncertainty was assigned in the CMS (single top) measurement.
In the ATLAS measurement, this mismodelling was checked to be covered by the simulation model choice uncertainties, and therefore no additional uncertainty for the top quark \pt spectrum was considered.
\noindent {\textit{PDF:}}
{\tolerance=800
The uncertainty due to the choice of PDFs in all input measurements was evaluated by varying the eigenvalues of different PDF sets following the PDF4LHC recommendations~\cite{Alekhin:2011sk, Botje:2011sn}.
In the ATLAS measurement, the differences between three PDF sets: CT10~\cite{Lai:2010vv}, MSTW2008~\cite{Martin:2009iq}, and NNPDF 2.3~\cite{Ball:2012cx} were taken into account.
Uncertainties related to the choice of PDF set in the CMS ($\Pe$+jets) and CMS ($\Pgm$+jets) measurements were estimated by replacing CTEQ6L1~\cite{cteq} used to generate the nominal samples, with NNPDF 2.1~\cite{Ball:2011mu} and MSTW2008. A similar procedure was adopted in the CMS (single top) measurement, where the default CTEQ6.6M~\cite{Nadolsky:2008zw} set was replaced with CT10 instead.
\par}
\noindent {\textit{Single top quark analysis method:}}
In addition to the systematic uncertainties considered for the \ttbar measurement, a few specific uncertainties were included for the CMS (single top) measurement.
For the specific case of single top quark processes, unlike for \ttbar production, the polarization fractions can also be altered at the production level. To study this effect, pseudo-data were generated from samples simulated using \COMPHEP and {\textsc{Single Top}}~\cite{Comphep1} event generators with varied values of the couplings $g_{\text{L}},\ V_{\text{R}}$, and $V_{\text{L}}$ (as described in Section~\ref{sec:anomcoupl}), both in single top quark production and decay, and the polarization fraction values were extracted using the analysis fitting framework. The differences between the generated and fitted values were taken as the systematic uncertainty. In addition, a small difference in the generated \wbos polarization fraction values was observed between the \ttbar events, simulated with \MADGRAPH, and the single top quark events, simulated with \POWHEG. This difference of about 0.01 was taken into account as an uncertainty in the measurement.
Finally, the effect of fixing the signal normalization in the fit was considered.
All these uncertainties are merged into a single uncertainty, referred to as {\textit{Single top method}} in Tables~\ref{tab:uncertainties_ATLAS} to~\ref{tab:uncertainties_LHC}.
In all input measurements, the uncertainty in the modelling of colour reconnection was found to be negligible and therefore was not considered.
\section{Correlations and uncertainties in the ATLAS and CMS measurements}
\label{sec:sec4}
\subsection{Correlations}
\label{sec:correlation}
Four pairs of longitudinal and left-handed polarization fractions from four input measurements, as described in Section~\ref{sec:measurements} are used in the combination. The correlations between the input values are defined taking into account the unitarity relation between the polarization fractions in each measurement and the correlations among the measurements. The groups of correlations are listed in Table~\ref{tab:corr_categories} and defined as follows:
\begin{itemize}
\item \textit{Correlations within the same measurement:}\\
Because of the unitarity constraint, and given that $\ensuremath{F_{\text{R}}} \approx 0$, the observed values of \ensuremath{F_{\text{0}}}\ and \ensuremath{F_{\text{L}}}\ within one single measurement are usually highly anticorrelated.
In the ATLAS measurement, this correlation is estimated for each systematic uncertainty source from its corresponding covariance matrix. For categories with multiple sources of systematic uncertainty, the sum of the individual covariance matrices is used to calculate the correlation.
In the CMS analyses, this group of correlations is estimated from the covariance propagation of the expression $\ensuremath{F_{\text{R}}}=1-\ensuremath{F_{\text{0}}}-\ensuremath{F_{\text{L}}}$ as
\begin{linenomath}
\begin{equation}
\rho(\ensuremath{F_{\text{0}}},\ensuremath{F_{\text{L}}}) = \frac{\sigma^{2}(\ensuremath{F_{\text{R}}}) - \sigma^{2}(\ensuremath{F_{\text{0}}}) -
\sigma^{2}(\ensuremath{F_{\text{L}}})}{2\sigma(\ensuremath{F_{\text{0}}})\sigma(\ensuremath{F_{\text{L}}})},
\label{eqn:rho}
\end{equation}
\end{linenomath}
where $\sigma(\ensuremath{F_{\text{i}}})$ is the uncertainty in the polarization fraction \ensuremath{F_{\text{i}}} , which is directly obtained from the individual measurements. This is done for all sources of systematic uncertainty. For systematic uncertainty categories with multiple sources, \eg `stat+bkg' including statistical uncertainty, background determination, and others, $\sigma^2(\ensuremath{F_{\text{i}}})$ is defined as the quadratic sum of the individual uncertainty sources.
This group of correlations is denoted in this document as $\rho_{\text{ATLAS}}$, $\rho_{\text{CMS}}^{\Pe\text{+jets}}$, $\rho_{\text{CMS}}^{\Pgm\text{+jets}}$, and $\rho_{\text{CMS}}^{\text{st}}$ for the ATLAS, CMS ($\Pe$+jets), CMS ($\Pgm$+jets), and CMS (single top) measurements, respectively.
\item \textit{Correlations between measurements within the CMS experiment:}\\
{\tolerance 800
For each source of systematic uncertainty, the correlations between the polarization fractions in the CMS ($\Pe$+jets) and CMS ($\Pgm$+jets) measurements are denoted $\rho_{\text{CMS}}^{\Pe,\Pgm\text{+jets}}(\ensuremath{F_{\text{i}}},\ensuremath{F_{\text{j}}})$, where the indices i and j stand for 0 or L. The correlations between CMS (single top) and CMS ($\Pe$+jets) are assumed to be the same as those between the CMS (single top) and CMS ($\Pgm$+jets) measurements for each source of uncertainty, and are denoted generically $\rho_{\text{CMS}}^{\text{st,}\ell \text{+jets}}(\ensuremath{F_{\text{i}}},\ensuremath{F_{\text{j}}})$. The relations $\rho_{\text{CMS}}(\ensuremath{F_{\text{0}}},\ensuremath{F_{\text{0}}})=\rho_{\text{CMS}}(\ensuremath{F_{\text{L}}},\ensuremath{F_{\text{L}}})=-\rho_{\text{CMS}}(\ensuremath{F_{\text{0}}},\ensuremath{F_{\text{L}}})$ are assumed in all CMS measurements. Under this hypothesis, the strong anticorrelation observed between \ensuremath{F_{\text{0}}}\ and \ensuremath{F_{\text{L}}}\ within the same measurement (as described above) is assumed to hold also across different measurements.
\par}
The uncertainties associated with the limited size of the data and simulated samples, and with the background estimation, are assumed to be uncorrelated (as also discussed in Sections~\ref{sec:partialcorrel} and~\ref{subsec:stability}).
The lepton efficiency uncertainty is assumed to be uncorrelated between the CMS ($\Pe$+jets) and CMS ($\Pgm$+jets) measurements, and partially correlated with the CMS (single top) measurement.
All other sources of uncertainty are assumed to be fully correlated.
\item \textit{Correlations between the ATLAS and CMS experiments:}\\
For each source of systematic uncertainty, the correlation between the polarization fractions $\ensuremath{F_{\text{i}}}$ measured by the ATLAS and CMS experiments, $\rho(\ensuremath{F_{\text{i}}}^{\text{ATLAS}},\ensuremath{F_{\text{j}}}^{\text{CMS}})$, is denoted by \rhoIJ{\text{LHC}}, where $\rhoZZ{\text{LHC}}=\rhoLL{\text{LHC}}=-\rhoZL{\text{LHC}}$ is assumed.
The uncertainties associated with the detector modelling (except for the JES), as well as the method-specific uncertainty, are assumed to be uncorrelated, \ie~$\rhoZZ{\text{LHC}}=0$.
The uncertainties associated with the radiation and scales, and with the JES, are assumed to be partially correlated, with $\rhoZZ{\text{LHC}}$ estimated to be 0.5 and 0.2, respectively (see Sections~\ref{sec:partialcorrel} and \ref{subsec:stability} for details). All other sources of uncertainty are assumed to be fully correlated, \ie~$\rhoZZ{\text{LHC}}=+1$.
\end{itemize}
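The covariance propagation in Eq.~(\ref{eqn:rho}) can be checked numerically. The following sketch is a minimal illustration, not part of the published analyses; the example numbers are the ATLAS total uncertainties and total correlation quoted in Table~\ref{tab:uncertainties_ATLAS}.

```python
import math

def sigma_fr(sig_f0, sig_fl, rho):
    """Uncertainty in FR implied by the unitarity constraint FR = 1 - F0 - FL:
    sigma^2(FR) = sigma^2(F0) + sigma^2(FL) + 2*rho*sigma(F0)*sigma(FL)."""
    return math.sqrt(sig_f0**2 + sig_fl**2 + 2.0 * rho * sig_f0 * sig_fl)

def rho_from_unitarity(sig_f0, sig_fl, sig_fr):
    """Invert the same propagation for the (F0, FL) correlation,
    as in the equation above."""
    return (sig_fr**2 - sig_f0**2 - sig_fl**2) / (2.0 * sig_f0 * sig_fl)

# Illustration with the ATLAS total uncertainties:
# sigma(F0) = 0.019, sigma(FL) = 0.015, total correlation -0.80.
s_fr = sigma_fr(0.019, 0.015, -0.80)          # ~0.011
rho = rho_from_unitarity(0.019, 0.015, s_fr)  # recovers -0.80
```

The two functions are inverses of each other, which is why the strong $(\ensuremath{F_{\text{0}}},\ensuremath{F_{\text{L}}})$ anticorrelation follows directly from the smallness of $\sigma(\ensuremath{F_{\text{R}}})$.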
\subsection{Correlation choices for the partially correlated uncertainties}
\label{sec:partialcorrel}
Although the correlations between the measurements are well known for most of the systematic uncertainty sources, some of them, in particular those that are partially correlated, are not very accurately determined.
This section describes how these values are estimated for the combination.
Stability tests are performed to verify the
robustness of the combination against these correlation assumptions, as discussed in Section~\ref{subsec:stability}.
In the CMS measurements, the uncertainties in the background determination (shape and normalization), integrated luminosity, and the statistical uncertainty were estimated independently and grouped into a single uncertainty category (stat+bkg) for coherence with the ATLAS treatment. The major components of the stat+bkg category in the CMS ($\Pe$+jets) and CMS ($\Pgm$+jets) measurements are the uncertainty in the determination of the background events from multijet and $\PW$+jets production.
The former is estimated from data, and is therefore uncorrelated between all CMS measurements, while $\PW$+jets production, as well as the other minor backgrounds, are estimated from simulation, and therefore at least partially correlated between the measurements. For the CMS (single top) case, the major component of this category is the statistical uncertainty, which is uncorrelated with the other measurements.
The normalization of $\PW$+jets production, a major background in the CMS (single top) analysis, is estimated from data, and is therefore uncorrelated with the other CMS measurements. On the other hand, the $\PW$+jets production shape, as well as the modelling of other background event sources and signal events, rely on simulation, which may lead to a nonzero $\rho_{\text{CMS}}^{\text{st,} \ell \text{+jets}}(\ensuremath{F_{\text{i}}},\ensuremath{F_{\text{i}}})$ correlation. Neglecting the small correlations that could arise from the $\PW$+jets production shape and the background modelling from simulation, the values $\rho_{\text{CMS}}^{\Pe,\Pgm\text{+jets}}(\ensuremath{F_{\text{i}}},\ensuremath{F_{\text{j}}})=0$ and $\rho_{\text{CMS}}^{\text{st,}\ell\text{+jets}}(\ensuremath{F_{\text{i}}},\ensuremath{F_{\text{i}}})=0$ are assumed for the combination, and the impact of this assumption is studied via the stability tests.
In all ATLAS and CMS measurements, the JES systematic uncertainty is estimated from different components, which are characterized by different levels of correlation between the two experiments. These components are categorized as fully correlated, such as gluon-initiated jet fragmentation; partially correlated, such as modelling uncertainties from \textit{in situ} techniques (\PZ-jet, \PGg-jet, and multijet balance); and uncorrelated, such as statistical and detector-related uncertainties. These correlations have been evaluated and are described in Ref.~\cite{ATL-PHYS-PUB-2015-049}.
In the ATLAS measurement, the contribution from the uncorrelated (partially correlated) components to the total JES uncertainty is found to be about 70 (20)\%, and the total JES uncertainty is dominated by the uncorrelated jet flavour composition component. In the CMS measurements, because JES uncertainties are small, the breakdown into components was not done. Therefore, assuming a similar JES uncertainty composition between the two experiments, the value of $\rho_{\text{LHC}}(\ensuremath{F_{\text{i}}},\ensuremath{F_{\text{i}}})$ is found to be 0.2.
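The quoted $\rho_{\text{LHC}}(\ensuremath{F_{\text{i}}},\ensuremath{F_{\text{i}}})=0.2$ can be understood as a weighted average of the per-component correlations. The sketch below is illustrative only: it assumes the quoted 70 (20)\% contributions refer to fractions of the total JES variance, assumes similar breakdowns in both experiments, and assigns a correlation of 0.5 to the partially correlated components; none of these choices is stated explicitly above.

```python
def effective_correlation(components):
    """Effective correlation between two experiments from independent
    uncertainty components.

    components: list of (variance_fraction, correlation) pairs.
    Because independent components add linearly in (co)variance, and
    assuming the same variance fractions in both experiments, the
    effective correlation is the variance-fraction-weighted average.
    """
    return sum(f * r for f, r in components)

# Hypothetical JES breakdown: 70% uncorrelated, 20% partially
# correlated (rho = 0.5 assumed), 10% fully correlated.
jes = [(0.70, 0.0), (0.20, 0.5), (0.10, 1.0)]
rho_jes = effective_correlation(jes)  # 0.2
```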
In the ATLAS and CMS analyses, different approaches were used to estimate the radiation and scales uncertainties, as described in Section~\ref{sec:sigMod}.
In the CMS (single top) measurement, this uncertainty is estimated by varying the scales $\mu_\text{R}$ and $\mu_\text{F}$ for the simulations of both the \ttbar and the single top quark processes. While the \ttbar component, which is dominant, is fully correlated to the analogous uncertainties in the ATLAS, CMS ($\Pe$+jets), and CMS ($\Pgm$+jets) measurements, the smaller component from the single top quark $\mu_\text{R}$ and $\mu_\text{F}$ scales is uncorrelated with the other measurements. Since the effects being studied are the same, but the methods are different, the values of $\rho_{\text{LHC}}(\ensuremath{F_{\text{i}}},\ensuremath{F_{\text{i}}})$ and $\rho_{\text{CMS}}^{\text{st,}\ell\text{+jets}}(\ensuremath{F_{\text{i}}},\ensuremath{F_{\text{i}}})$ are not well known, and are assumed to be 0.5 and 1.0, respectively.
\renewcommand{\arraystretch}{1.6}
\begin{table}[b!]
\centering
\topcaption{Summary of the correlation categories considered in the combination. The correlations among the \ensuremath{F_{\text{L}}}\ measurements are not shown for brevity.
\label{tab:corr_categories}}
\cmsTable{
\begin{tabular}{@{}lcrrrr}
\hline
Measurement & & ATLAS & CMS ($\Pe$+jets) & CMS ($\Pgm$+jets) & CMS (single top) \\
& Fraction & \ensuremath{F_{\text{0}}} & \ensuremath{F_{\text{0}}} & \ensuremath{F_{\text{0}}} & \ensuremath{F_{\text{0}}} \\
\hline
ATLAS & \ensuremath{F_{\text{0}}} & $+1$ & $\rhoZZ{\text{LHC}}$ & $\rhoZZ{\text{LHC}}$ & $\rhoZZ{\text{LHC}}$ \\
CMS ($\Pe$+jets) & \ensuremath{F_{\text{0}}} & $\rhoZZ{\text{LHC}}$ & +1 & $\rho_{\text{CMS}}^{\Pe,\Pgm\text{+jets}}(\ensuremath{F_{\text{0}}},\ensuremath{F_{\text{0}}})$ & $\rho_{\text{CMS}}^{\text{st,}\ell\text{+jets}}(\ensuremath{F_{\text{0}}},\ensuremath{F_{\text{0}}})$\\
CMS ($\Pgm$+jets) & \ensuremath{F_{\text{0}}} & $\rhoZZ{\text{LHC}}$ & $\rho_{\text{CMS}}^{\Pe,\Pgm\text{+jets}}(\ensuremath{F_{\text{0}}},\ensuremath{F_{\text{0}}})$ & $+1$ & $\rho_{\text{CMS}}^{\text{st,}\ell\text{+jets}}(\ensuremath{F_{\text{0}}},\ensuremath{F_{\text{0}}})$\\
CMS (single top) & \ensuremath{F_{\text{0}}} & $\rhoZZ{\text{LHC}}$ & $\rho_{\text{CMS}}^{\text{st,}\ell\text{+jets}}(\ensuremath{F_{\text{0}}},\ensuremath{F_{\text{0}}})$ & $\rho_{\text{CMS}}^{\text{st,}\ell\text{+jets}}(\ensuremath{F_{\text{0}}},\ensuremath{F_{\text{0}}})$ & $+1$ \\ [\cmsTabSkip]
ATLAS & \ensuremath{F_{\text{L}}} & $\rho_{\text{ATLAS}}(\ensuremath{F_{\text{0}}},\ensuremath{F_{\text{L}}})$ & $-\rhoZZ{\text{LHC}}$ & $-\rhoZZ{\text{LHC}}$ & $-\rhoZZ{\text{LHC}}$ \\
CMS ($\Pe$+jets) & \ensuremath{F_{\text{L}}} & $-\rhoZZ{\text{LHC}}$ & $\rho_{\text{CMS}}^{\Pe\text{+jets}}(\ensuremath{F_{\text{0}}},\ensuremath{F_{\text{L}}})$ & $-\rho_{\text{CMS}}^{\Pe,\Pgm\text{+jets}}(\ensuremath{F_{\text{0}}},\ensuremath{F_{\text{0}}})$ & $-\rho_{\text{CMS}}^{\text{st,}\ell\text{+jets}}(\ensuremath{F_{\text{0}}},\ensuremath{F_{\text{0}}})$ \\
CMS ($\Pgm$+jets) & \ensuremath{F_{\text{L}}} & $-\rhoZZ{\text{LHC}}$ & $-\rho_{\text{CMS}}^{\Pe,\Pgm\text{+jets}}(\ensuremath{F_{\text{0}}},\ensuremath{F_{\text{0}}})$ & $\rho_{\text{CMS}}^{\Pgm\text{+jets}}(\ensuremath{F_{\text{0}}},\ensuremath{F_{\text{L}}})$ & $-\rho_{\text{CMS}}^{\text{st,}\ell\text{+jets}}(\ensuremath{F_{\text{0}}},\ensuremath{F_{\text{0}}})$ \\
CMS (single top) & \ensuremath{F_{\text{L}}} & $-\rhoZZ{\text{LHC}}$ & $-\rho_{\text{CMS}}^{\text{st,}\ell\text{+jets}}(\ensuremath{F_{\text{0}}},\ensuremath{F_{\text{0}}})$ & $-\rho_{\text{CMS}}^{\text{st,}\ell\text{+jets}}(\ensuremath{F_{\text{0}}},\ensuremath{F_{\text{0}}})$ & $\rho_{\text{CMS}}^{\text{st}}(\ensuremath{F_{\text{0}}},\ensuremath{F_{\text{L}}})$ \\
\hline
\end{tabular}
}
\end{table}
\renewcommand{\arraystretch}{1.1}
\begin{table}[ht!]
\centering
\topcaption{Input correlations across different measurements, as explained in Section~\ref{sec:correlation}. The values stand for the correlations $\rho(\ensuremath{F_{\text{i}}},\ensuremath{F_{\text{i}}})$, with $\text{i}$ being either 0 or L. The correlations of the type $\rho(\ensuremath{F_{\text{0}}},\ensuremath{F_{\text{L}}})$ are assumed to be $\rho(\ensuremath{F_{\text{0}}},\ensuremath{F_{\text{L}}})=-\rho(\ensuremath{F_{\text{0}}},\ensuremath{F_{\text{0}}})=-\rho(\ensuremath{F_{\text{L}}},\ensuremath{F_{\text{L}}})$. If an uncertainty is not applicable, the correlation value is set to zero and marked with an asterisk. The correlations marked with a dagger are those that are not precisely determined; checks are performed to test the stability of the results against these assumptions.
}
\label{tab:input_correlations}
\begin{tabular}{@{}llll@{}}
\hline
& $\rho_{\text{LHC}}(\ensuremath{F_{\text{i}}},\ensuremath{F_{\text{i}}})$ & $\rho_{\text{CMS}}^{\Pe,\Pgm\text{+jets}}(\ensuremath{F_{\text{i}}},\ensuremath{F_{\text{i}}})$ & $\rho_{\text{CMS}}^{\text{st,}\ell\text{+jets}}(\ensuremath{F_{\text{i}}},\ensuremath{F_{\text{i}}})$ \\
Uncertainty category & & & \\
\hline
\multicolumn{4}{@{}l}{\textit{Sample size and background determination}} \\
\, \, \ Stat+bkg & 0.0 & $0.0^{\dagger}$ & $0.0^{\dagger}$\\
\, \, \ Size of simulated samples & 0.0 & 0.0 & 0.0\\ [\cmsTabSkip]
\multicolumn{4}{@{}l}{\textit{Detector modelling}} \\
\, \, \ JES & $0.2^{\dagger}$ & 1.0 & 1.0\\
\, \, \ JER & 0.0 & 1.0 & 1.0\\
\, \, \ JVF & $0.0^{\ast}$ & $0.0^{\ast}$ & $0.0^{\ast}$\\
\, \, \ Jet reconstruction efficiency & $0.0^{\ast}$ & $0.0^{\ast}$ & $0.0^{\ast}$\\
\, \, \ Lepton efficiency & 0.0 & 0.0 & 0.0\\
\, \, \ $\PQb$ tagging & 0.0 & 1.0 & 1.0\\
\, \, \ Pileup & $0.0^{\ast}$ & 1.0 & 1.0\\ [\cmsTabSkip]
\multicolumn{4}{@{}l}{\textit{Signal modelling}} \\
\, \, \ Top quark mass & 1.0 & 1.0 & 1.0\\
\, \, \ Simulation model choice & 1.0 & 1.0 & 1.0\\
\, \, \ Radiation and scales & $0.5^{\dagger}$ & 1.0 & $1.0^{\dagger}$\\
\, \, \ Top quark \pt & $0.0^{\ast}$ & 1.0 & $0.0^{\ast}$\\
\, \, \ PDF & 1.0 & 1.0 & 1.0\\
\, \, \ Single top method & $0.0^{\ast}$ & $0.0^{\ast}$ & $0.0^{\ast}$\\
\hline
\end{tabular}
\end{table}
\subsection{Summary of the uncertainties and correlations of the input measurements}
\label{sec:uncertainties}
For each systematic uncertainty category, the correlations between the measured polarization fractions for the input measurements are given in Table~\ref{tab:input_correlations}. A breakdown of the uncertainties in the input measurements of \ensuremath{F_{\text{0}}}\ and \ensuremath{F_{\text{L}}}, as well as their correlations, is presented in Tables~\ref{tab:uncertainties_ATLAS}--\ref{tab:uncertainties_CMSst}. The uncertainties are grouped according to the categories listed in Section~\ref{sec:systematics}.
\renewcommand{\arraystretch}{1.1}
\begin{table}[ht!]
\centering
\topcaption{Uncertainties in \ensuremath{F_{\text{0}}}, \ensuremath{F_{\text{L}}}, and their corresponding correlations from the ATLAS measurement. Uncertainties that are not applicable to this measurement, or that are included in other categories, are indicated by ``\NA''. The line ``Systematic uncertainty'' represents the quadratic sum of all the systematic uncertainty sources except for the uncertainty in the background determination, which is included in the ``Stat+bkg'' category. The quoted correlation values are obtained via the procedures described in Section~\ref{sec:correlation}.
\label{tab:uncertainties_ATLAS}
}
\begin{tabular}{@{}lccc@{}}
\hline
& \multicolumn{3}{@{}c@{}}{ATLAS} \\
& \ensuremath{F_{\text{0}}} & \ensuremath{F_{\text{L}}} & \multirow{2}{*}{$\rho_{\text{ATLAS}}(\ensuremath{F_{\text{0}}},\ensuremath{F_{\text{L}}})$} \\
Measured value &0.709 &0.299 & \\ [\cmsTabSkip]
Uncertainty category & & & \\
\hline
\multicolumn{4}{@{}l}{\textit{Sample size and background determination}} \\
\, \, \ Stat+bkg & 0.012 & 0.008 & $-1.00$\\
\, \, \ Size of simulated samples & 0.009 & 0.006 & $-1.00$\\ [\cmsTabSkip]
\multicolumn{4}{@{}l}{\textit{Detector modelling}} \\
\, \, \ JES & 0.005 & 0.003 & $-0.94$\\
\, \, \ JER & 0.006 & 0.003 & $-0.92$\\
\, \, \ JVF & 0.003 & 0.002 & $-0.99$\\
\, \, \ Jet reconstruction efficiency & ${<}0.001$ & ${<}0.001$ & $-1.00$\\
\, \, \ Lepton efficiency & 0.004 & 0.002 & $-0.99$\\
\, \, \ $\PQb$ tagging & 0.002 & 0.001 & $-0.84$\\
\, \, \ Pileup & \NA & \NA & \NA\\ [\cmsTabSkip]
\multicolumn{4}{@{}l}{\textit{Signal modelling}} \\
\, \, \ Top quark mass & 0.002 & 0.007 & $-1.00$ \\
\, \, \ Simulation model choice & 0.003 & 0.004 & 0.99 \\
\, \, \ Radiation and scales & 0.003 & 0.006 & $-0.91$ \\
\, \, \ Top quark \pt & \NA & \NA & \NA \\
\, \, \ PDF & 0.003 & 0.004 & $-1.00$ \\
\, \, \ Single top method & \NA & \NA & \NA \\ [\cmsTabSkip]
\multicolumn{4}{@{}l}{\textit{Total uncertainties}} \\
\, \, \ Systematic uncertainty & 0.014 & 0.013 & $-0.82$\\
\, \, \ Total uncertainty & 0.019 & 0.015 & $-0.80$\\
\hline
\end{tabular}
\end{table}
\renewcommand{\arraystretch}{1.1}
\begin{table}[ht!]
\centering
\topcaption{Uncertainties in \ensuremath{F_{\text{0}}}, \ensuremath{F_{\text{L}}}, and their corresponding correlations from the CMS $\Pe$+jets and $\Pgm$+jets measurements.
Uncertainties that are not applicable to this measurement, or that are included in other categories, are indicated by ``\NA''. The line ``Systematic uncertainty'' represents the quadratic sum of all the systematic uncertainty sources except for the uncertainties in the background determination and the integrated luminosity, which are included in the ``Stat+bkg'' category. The quoted correlation values are obtained via the procedures described in Section~\ref{sec:correlation}.
\label{tab:uncertainties_CMSljets}
}
\cmsTable{
\begin{tabular}{@{}lcccccc@{}}
\hline
& \multicolumn{3}{@{}c@{}}{CMS $\Pe$+jets} & \multicolumn{3}{@{}c@{}}{CMS $\Pgm$+jets} \\
& \ensuremath{F_{\text{0}}} & \ensuremath{F_{\text{L}}} &
\multirow{2}{*}{$\rho_{\text{CMS}}^{\Pe\text{+jets}}(\ensuremath{F_{\text{0}}},\ensuremath{F_{\text{L}}})$} &
\ensuremath{F_{\text{0}}} & \ensuremath{F_{\text{L}}} &
\multirow{2}{*}{$\rho_{\text{CMS}}^{\Pgm\text{+jets}}(\ensuremath{F_{\text{0}}},\ensuremath{F_{\text{L}}})$} \\
Measured value &0.705 &0.304 & &0.685 &0.328 & \\ [\cmsTabSkip]
Uncertainty category & & & & & & \\
\hline
\multicolumn{7}{@{}l}{\textit{Sample size and background determination}} \\
\, \, \ Stat+bkg & 0.028 & 0.011 & $-0.87$ & 0.016 & 0.010 & $-0.88$ \\
\, \, \ Size of simulated samples & 0.002 & 0.001 & $-0.95$ & 0.002 & 0.001 & $-0.96$ \\ [\cmsTabSkip]
\multicolumn{7}{@{}l}{\textit{Detector modelling}} \\
\, \, \ JES & 0.004 & 0.003 & $-1.00$ & 0.005 & 0.003 & $-1.00$ \\
\, \, \ JER & 0.001 & 0.002 & $-1.00$ & 0.004 & 0.003 & $-1.00$ \\
\, \, \ JVF & \NA & \NA & \NA & \NA & \NA & \NA \\
\, \, \ Jet reconstruction efficiency & \NA & \NA & \NA & \NA & \NA & \NA \\
\, \, \ Lepton efficiency & 0.001 & 0.002 & $-1.00$ & 0.001 & 0.001 & $-1.00$ \\
\, \, \ $\PQb$ tagging & 0.001 & ${<}0.001$ & $-1.00$ & 0.001 & ${<}0.001$ & $-1.00$ \\
\, \, \ Pileup & 0.001 & 0.001 & $-1.00$ & ${<}0.001$ & ${<}0.001$ & $-1.00$ \\ [\cmsTabSkip]
\multicolumn{7}{@{}l}{\textit{Signal modelling}} \\
\, \, \ Top quark mass & 0.012 & 0.008 & $-0.99$ & 0.009 & 0.006 & $-1.00$ \\
\, \, \ Simulation model choice & 0.015 & 0.010 & $-0.87$ & 0.008 & 0.004 & $0.20$ \\
\, \, \ Radiation and scales & 0.007 & 0.005 & $-1.00$ & 0.014 & 0.006 & $-0.83$ \\
\, \, \ Top quark \pt & 0.011 & 0.010 & $-1.00$ & ${<}0.001$ & 0.001 & $-1.00$ \\
\, \, \ PDF & 0.004 & 0.001 & $-0.92$ & 0.002 & 0.001 & $-0.15$ \\
\, \, \ Single top method & \NA & \NA & \NA & \NA & \NA & \NA \\ [\cmsTabSkip]
\multicolumn{7}{@{}l}{\textit{Total uncertainties}} \\
\, \, \ Systematic uncertainty & 0.024 & 0.018 & $-0.93$ & 0.020 & 0.010 & $-0.71$ \\
\, \, \ Total uncertainty & 0.037 & 0.021 & $-0.87$ & 0.025 & 0.014 & $-0.78$ \\
\hline
\end{tabular}
}
\end{table}
\renewcommand{\arraystretch}{1.1}
\begin{table}[ht!]
\centering
\topcaption{Uncertainties in \ensuremath{F_{\text{0}}}, \ensuremath{F_{\text{L}}}, and their corresponding correlations from the CMS (single top) measurement. Uncertainties that are not applicable to this measurement, or that are included in other categories, are indicated by ``\NA''. The line ``Systematic uncertainty'' represents the quadratic sum of all the systematic uncertainty sources except for the uncertainties in the background determination and the integrated luminosity, which are included in the ``Stat+bkg'' category. The quoted correlation values are obtained via the procedures described in Section~\ref{sec:correlation}.}
\label{tab:uncertainties_CMSst}
\begin{tabular}{@{}lccc@{}}
\hline
& \multicolumn{3}{@{}c@{}}{CMS (single top)} \\
& \ensuremath{F_{\text{0}}} & \ensuremath{F_{\text{L}}} & \multirow{2}{*}{$\rho_{\text{CMS}}^{\text{st}}(\ensuremath{F_{\text{0}}},\ensuremath{F_{\text{L}}})$} \\
Measured value &0.720 &0.298 & \\ [\cmsTabSkip]
Uncertainty category & & & \\
\hline
\multicolumn{4}{@{}l}{\textit{Sample size and background determination}} \\
\, \, \ Stat+bkg & 0.041 & 0.031 & $-0.90$\\
\, \, \ Size of simulated samples & 0.020 & 0.012 & $-0.96$\\
\multicolumn{4}{@{}l}{\textit{Detector modelling}} \\
\, \, \ JES & 0.004 & 0.004 & $-1.00$\\
\, \, \ JER & 0.001 & 0.001 & $-1.00$\\
\, \, \ JVF & \NA & \NA & \NA \\
\, \, \ Jet reconstruction efficiency & \NA & \NA & \NA \\
\, \, \ Lepton efficiency & ${<}0.001$ & ${<}0.001$ & $-1.00$ \\
\, \, \ $\PQb$ tagging & 0.006 & 0.006 & $-1.00$\\
\, \, \ Pileup & 0.003 & 0.003 & $-1.00$\\
\multicolumn{4}{@{}l}{\textit{Signal modelling}} \\
\, \, \ Top quark mass & 0.005 & 0.007 & $-1.00$\\
\, \, \ Simulation model choice & 0.002 & 0.003 & $-1.00$\\
\, \, \ Radiation and scales & 0.023 & 0.019 & $-1.00$\\
\, \, \ Top quark \pt & \NA & \NA & \NA \\
\, \, \ PDF & 0.004 & 0.004 & $-0.97$\\
\, \, \ Single top method & 0.012 & 0.015 & $-1.00$\\
\multicolumn{4}{@{}l}{\textit{Total uncertainties}} \\
\, \, \ Systematic uncertainty & 0.035 & 0.029 & $-0.96$ \\
\, \, \ Total uncertainty & 0.054 & 0.043 & $-0.92$\\
\hline
\end{tabular}
\end{table}
Figure~\ref{fig:BlueCorrMatrix} presents the total correlation values between the input measurements. Typically, \ensuremath{F_{\text{0}}}\ and \ensuremath{F_{\text{L}}}\ are highly anticorrelated within the same measurement. The three \ttbar measurements (ATLAS, CMS ($\Pe$+jets), and CMS ($\Pgm$+jets)) are also correlated or anticorrelated, with absolute correlation values ranging from about 30 to 40\%. The correlations of the CMS (single top) measurement with the CMS ($\Pe$+jets) and CMS ($\Pgm$+jets) measurements are around 20\% in absolute value, and are generally smaller with the ATLAS measurement.
\begin{figure}[h]
\centering
\includegraphics[width = 0.95\textwidth]{Figure_001.pdf}
\caption{The total correlation between the input measurements of the combination.
\label{fig:BlueCorrMatrix}
}
\end{figure}
\section{Results}
\label{sec:results}
The combination is performed by finding the best linear unbiased estimator (BLUE)~\cite{Lyons:1988rp,Valassi:2003mu} with the method implemented in Ref.~\cite{Nisius:2014wua}. The BLUE method finds the coefficients of the linear combination of the input measurements by minimizing the total uncertainty of the combined result, taking into account both the statistical and systematic uncertainties, as well as the correlations between the inputs.
In this analysis, the measurements of \ensuremath{F_{\text{0}}}\ and \ensuremath{F_{\text{L}}}\ are combined, while \ensuremath{F_{\text{R}}}\ is obtained as \mbox{$\ensuremath{F_{\text{R}}} = 1-\ensuremath{F_{\text{0}}}-\ensuremath{F_{\text{L}}}$}. Since no further constraints are placed on the observables, values outside the range [0, 1] are allowed for the three polarization fractions.
The total correlation between \ensuremath{F_{\text{0}}}\ and \ensuremath{F_{\text{L}}}\ obtained from the combination is taken into account in the estimation of the uncertainty in the \ensuremath{F_{\text{R}}}\ value.
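In its simplest form, the BLUE estimator reduces to a generalized least-squares fit: with input vector $y$, total covariance $C$, and a design matrix $U$ mapping the combined parameters to the inputs, the estimate is $(U^{\mathsf{T}}C^{-1}U)^{-1}U^{\mathsf{T}}C^{-1}y$. The sketch below illustrates the mechanics with simplified, hypothetical inputs (total uncertainties only, no cross-measurement correlations); it is not the implementation of Ref.~\cite{Nisius:2014wua} used for the published combination.

```python
import numpy as np

def blue_combine(y, C, U):
    """Best linear unbiased estimator (generalized least squares).

    y : (n,) input measurements
    C : (n, n) total covariance of the inputs
    U : (n, p) design matrix (U[i, j] = 1 if input i measures parameter j)
    Returns the combined estimates and their covariance matrix.
    """
    Ci = np.linalg.inv(C)
    cov = np.linalg.inv(U.T @ Ci @ U)
    est = cov @ U.T @ Ci @ y
    return est, cov

# Two hypothetical measurements of (F0, FL), each internally
# anticorrelated, assumed uncorrelated with each other.
y = np.array([0.709, 0.299, 0.685, 0.328])
U = np.array([[1, 0], [0, 1], [1, 0], [0, 1]], dtype=float)

def block(s0, sl, rho):
    """2x2 covariance of one (F0, FL) pair."""
    return np.array([[s0**2, rho * s0 * sl],
                     [rho * s0 * sl, sl**2]])

C = np.zeros((4, 4))
C[:2, :2] = block(0.019, 0.015, -0.80)
C[2:, 2:] = block(0.025, 0.014, -0.78)

est, cov = blue_combine(y, C, U)
# est -> combined (F0, FL); diagonal of cov -> squared uncertainties
```

For a single parameter with uncorrelated inputs, this reduces to the familiar inverse-variance weighted mean.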
The results of the combination of the polarization fraction measurements are
\begin{equation*}
\begin{aligned}
\ensuremath{F_{\text{0}}} &= 0.693 \pm 0.009\,\text{(stat+bkg)} \pm 0.011\,\text{(syst)}, \\
\ensuremath{F_{\text{L}}} &= 0.315 \pm 0.006\,\text{(stat+bkg)} \pm 0.009\,\text{(syst)},
\end{aligned}
\end{equation*}
with a total correlation of $-0.85$. Using the unitarity
constraint on the polarization fractions, the fraction of events with a \wbos with right-handed polarization is calculated to be
\begin{equation*}
\ensuremath{F_{\text{R}}} = -0.008 \pm 0.005\,\text{(stat+bkg)} \pm 0.006\,\text{(syst)},
\end{equation*}
where the first quoted uncertainty includes the statistical component and the uncertainties in the background determination, and the second refers to the remaining systematic contributions. From these results, an upper limit of $\ensuremath{F_{\text{R}}} < 0.007$ at 95\% confidence level (\CL) is set. The limit is set using the Feldman--Cousins method~\cite{refFC}, assuming that \ensuremath{F_{\text{R}}}\ follows a normal distribution and that it is physically bound to $\ensuremath{F_{\text{R}}} \geq 0$. The relative uncertainties in \ensuremath{F_{\text{0}}}\ and \ensuremath{F_{\text{L}}}\ are 2.0 and 3.5\%, respectively, including systematic and statistical components.
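The Feldman--Cousins construction for a Gaussian measurement bounded at zero can be reproduced with a toy grid implementation, as sketched below. This is illustrative only and not the code used for the published limit; grid resolution and the exact input uncertainty mean it will not reproduce the quoted 0.007 exactly.

```python
import numpy as np

def fc_upper_limit(x0, sigma, cl=0.95):
    """Feldman-Cousins upper limit for a Gaussian measurement x0 +- sigma
    of a parameter mu physically bounded to mu >= 0.

    Uses likelihood-ratio ordering R(x|mu) = L(x|mu) / L(x|mu_best),
    with mu_best = max(x, 0), on a discrete grid (in units of sigma).
    """
    z0 = x0 / sigma
    x = np.arange(-8.0, 10.0, 0.002)
    i0 = np.argmin(np.abs(x - z0))
    upper = 0.0
    for mu in np.arange(0.0, 4.0, 0.005):
        pdf = np.exp(-0.5 * (x - mu) ** 2)
        pdf /= pdf.sum()
        mu_best = np.maximum(x, 0.0)
        log_r = -0.5 * (x - mu) ** 2 + 0.5 * (x - mu_best) ** 2
        order = np.argsort(-log_r)          # accept highest-R points first
        n = np.searchsorted(np.cumsum(pdf[order]), cl) + 1
        accepted = np.zeros_like(x, dtype=bool)
        accepted[order[:n]] = True
        if accepted[i0]:
            upper = mu                      # largest mu whose band contains x0
    return upper * sigma
```

For a measurement exactly at the boundary, \eg \texttt{fc\_upper\_limit(0.0, 1.0)}, the toy reproduces the known 95\% \CL\ value near $1.96\sigma$, and for negative measured values the limit tightens, as in the result above.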
Figure~\ref{fig:summary} shows an overview of the four measurements included in the combination and the result of the combination, together with the polarization fractions predicted by NNLO QCD calculations. The uncertainties in the NNLO predictions, shown as vertical bands, include an uncertainty of 1.3\GeV in the top quark mass, as well as uncertainties in the $\PQb$ quark and \PW\ boson masses and in \alpS. The combined \ensuremath{F_{\text{R}}}\ value is negative, which is not explicitly forbidden in the combination, but it is compatible with the predictions within the uncertainties. The measurements are consistent with each other and with the NNLO QCD prediction.
\begin{figure}
\centering
\includegraphics[width = 1.0\textwidth]{Figure_002.pdf}
\caption{Overview of the four measurements, as well as the results of the combination. The inner and outer error bars correspond to the statistical and the total uncertainties, respectively. The inner bars for the combination also include the background determination uncertainties. The vertical solid line indicates the predictions of NNLO QCD calculations~\cite{Czarnecki:2010gb}.
\label{fig:summary}}
\end{figure}
The $\chi^2$ and upper tail probability of the combination are 4.3 and 64\%, respectively. The combination includes four sets of measurements, each composed of two highly anticorrelated observables, and two fit parameters, \ie the combined \ensuremath{F_{\text{0}}}\ and \ensuremath{F_{\text{L}}}. A detailed breakdown of the uncertainties is presented in Table~\ref{tab:uncertainties_LHC}.
The dominant uncertainties are those arising from the statistical uncertainty in data and the background estimation (stat+bkg), followed by the uncertainties in the radiation and scales modelling, the limited size of the simulated samples, and the simulation model choice. The total detector modelling uncertainty is minor, smaller than the uncertainties in the stat+bkg category.
The measurement with the highest impact on the determination of \ensuremath{F_{\text{0}}}~is ATLAS, while CMS ($\Pgm$+jets) dominates the combined \ensuremath{F_{\text{L}}}~determination. The impact of the CMS ($\Pe$+jets) and CMS ($\Pgm$+jets) measurements is not directly comparable to that of the other input measurements, which already include the electron and muon channels together. As a test, the combination is repeated using a pre-combined CMS ($\Pe$+jets) + CMS ($\Pgm$+jets) input, and the results are unchanged.
The ATLAS+CMS combined fractions and uncertainties are identical in both cases, with only a small variation in the resulting (\ensuremath{F_{\text{0}}}, \ensuremath{F_{\text{L}}}) correlation, which is 1.5\% smaller for the cross-check combination.
In another test, the CMS (single top) measurement is removed from the combination. The impact on the combined fractions and uncertainties is less than 1.5\%.
The combination yields a significant improvement in precision compared to the most precise individual published measurements~\cite{TOPQ-2016-02,CMS-TOP-13-008}: improvements of 25 and 29\% relative to the most precise single measurement are found in the precision of the combined \ensuremath{F_{\text{0}}}\ and \ensuremath{F_{\text{L}}}\ values, respectively.
The improvement is estimated with respect to the published values of the \wbos polarization fraction determinations given in Table~\ref{tab:measurements}. The total correlation between the combined fractions is similar to those in the input measurements, and their uncertainties are smaller. These two factors lead to a combined right-handed polarization fraction \ensuremath{F_{\text{R}}}\ that is almost a factor of two more precise than in previous publications.
\renewcommand{\arraystretch}{1.1}
\begin{table}[ht!]
\centering
\topcaption{
Results of the ATLAS and CMS combination: \wbos polarization fraction values and uncertainties. The combined \ensuremath{F_{\text{0}}}\ and \ensuremath{F_{\text{L}}}\ values are anticorrelated, with $\rho = -0.85$.
\label{tab:uncertainties_LHC}
}
\begin{tabular}{@{}lcc@{}}
\hline
& \multicolumn{2}{@{}c@{}}{ATLAS+CMS combination} \\
& \ensuremath{F_{\text{0}}} & \ensuremath{F_{\text{L}}} \\
Fractions & 0.693 & 0.315 \\
Uncertainty category & & \\
\hline
\multicolumn{3}{@{}l}{\textit{Sample size and background determination}} \\
\, \, \ Stat+bkg & 0.009 & 0.006 \\
\, \, \ Size of simulated samples & 0.005 & 0.003 \\ [\cmsTabSkip]
\multicolumn{3}{@{}l}{\textit{Detector modelling}} \\
\, \, \ JES & 0.004 & 0.002 \\
\, \, \ JER & 0.004 & 0.002 \\
\, \, \ JVF & 0.001 & 0.001 \\
\, \, \ Jet reconstruction efficiency & ${<}0.001$ & ${<}0.001$ \\
\, \, \ Lepton efficiency & 0.002 & 0.001 \\
\, \, \ $\PQb$ tagging & 0.001 & 0.001 \\
\, \, \ Pileup & ${<}0.001$ & ${<}0.001$ \\ [\cmsTabSkip]
\multicolumn{3}{@{}l}{\textit{Signal modelling}} \\
\, \, \ Top quark mass & 0.003 & 0.004 \\
\, \, \ Simulation model choice & 0.006 & 0.005 \\
\, \, \ Radiation and scales & 0.005 & 0.004 \\
\, \, \ Top quark \pt & 0.001 & 0.002 \\
\, \, \ PDF & 0.001 & 0.001 \\
\, \, \ Single top method & 0.001 & ${<}0.001$ \\ [\cmsTabSkip]
Total uncertainty & 0.014 & 0.011 \\
\hline
\end{tabular}
\end{table}
\subsection{Stability tests}
\label{subsec:stability}
The hypotheses assumed for the correlations between the measurements, as defined in Sections~\ref{sec:correlation} and~\ref{sec:partialcorrel}, are based on the best knowledge of the similarities and differences in the detectors, analysis methods, and simulations used in each measurement. Nevertheless, some of these correlations cannot be precisely determined. The checks described in this section are performed to test the stability of the results against this potential lack of knowledge.
\noindent{\textit{$\rho_{\text{LHC}}(\ensuremath{F_{\text{i}}},\ensuremath{F_{\text{i}}})$~hypothesis (with $\text{i} = 0,\text{L}$) for the JES uncertainty:}}
The correlation value $\rho_{\text{LHC}}(\ensuremath{F_{\text{i}}},\ensuremath{F_{\text{i}}})=0.2$ was estimated according to the prescription given in Ref.~\cite{ATL-PHYS-PUB-2015-049} and the description in Section~\ref{sec:partialcorrel}. The impact of this assumption is evaluated by repeating the combination with $\rho_{\text{LHC}}(\ensuremath{F_{\text{i}}},\ensuremath{F_{\text{i}}})$ varied in the interval between 0.0 and 0.4, in steps of 0.1. The fraction values and uncertainties remain unchanged over the entire probed range. The $\chi^2$ of the fit, the probability, and the total (\ensuremath{F_{\text{0}}},\ensuremath{F_{\text{L}}}) correlation are found to be stable, with relative shifts of less than 0.5\%.
\noindent{\textit{$\rho_{\text{LHC}}(\ensuremath{F_{\text{i}}},\ensuremath{F_{\text{i}}})$ and $\rho_{\text{CMS}}^{\text{st,}\ell\text{+jets}}(\ensuremath{F_{\text{i}}},\ensuremath{F_{\text{i}}})$ hypotheses for the radiation and scales uncertainties:}}
Although addressing similar effects, the radiation and scales uncertainties are estimated in three different ways for ATLAS, CMS (single top), and the other CMS measurements, with different levels of correlation among them. Therefore, the two hypotheses, $\rho_{\text{LHC}}(\ensuremath{F_{\text{i}}},\ensuremath{F_{\text{i}}})=0.5$ and $\rho_{\text{CMS}}^{\text{st,}\ell\text{+jets}}(\ensuremath{F_{\text{i}}},\ensuremath{F_{\text{i}}})=1$, are tested simultaneously, varying in steps of 0.1 in the interval between 0 and 0.5 for $\rho_{\text{LHC}}(\ensuremath{F_{\text{i}}},\ensuremath{F_{\text{i}}})$ and between 0.6 and 1.0 for $\rho_{\text{CMS}}^{\text{st,}\ell\text{+jets}}(\ensuremath{F_{\text{i}}},\ensuremath{F_{\text{i}}})$. The resulting polarization fraction central values and uncertainties remain unchanged over the full ranges. Small variations, below the percent level, are observed for the total correlation and the fit probability.
\noindent{\textit{JES versus radiation and scales correlations:}}
Since the JES and the radiation and scales uncertainties are among the dominant sources of uncertainty with significant correlation between measurements, an additional test is performed varying the two correlation hypotheses simultaneously, rather than separately. The results of this test also show a stable combination, with maximum relative shifts of about 2\% for the $\chi^2$ and probability and about 0.6\% for the total correlation. The combined fractions and uncertainties are found to be stable, with negligible variations for all probed hypotheses.
\noindent{\textit{$\rho_{\text{CMS}}^{\Pe,\Pgm\text{+jets}}(\ensuremath{F_{\text{i}}},\ensuremath{F_{\text{i}}})$ and $\rho_{\text{CMS}}^{\text{st,}\ell\text{+jets}}(\ensuremath{F_{\text{i}}},\ensuremath{F_{\text{i}}})$ hypothesis for statistical+background uncertainty:}} Small correlations that could arise from the background modelling from simulated samples are
neglected in the combination by assuming $\rho_{\text{CMS}}^{\Pe,\Pgm\text{+jets}}(\ensuremath{F_{\text{i}}},\ensuremath{F_{\text{i}}})=0$ and $\rho_{\text{CMS}}^{\text{st,}\ell\text{+jets}}(\ensuremath{F_{\text{i}}},\ensuremath{F_{\text{i}}})=0$. In order to investigate the effect of these hypotheses, the combination was repeated by varying $\rho_{\text{CMS}}^{\Pe,\Pgm\text{+jets}}(\ensuremath{F_{\text{i}}},\ensuremath{F_{\text{i}}})$ and $\rho_{\text{CMS}}^{\text{st,}\ell\text{+jets}}(\ensuremath{F_{\text{i}}},\ensuremath{F_{\text{i}}})$, using for both the same correlation values in the range [0.0, 0.7] in steps of 0.1. In the interval between 0.0 and 0.6, the fraction values vary by at most 1.3\%, with \ensuremath{F_{\text{0}}}\ going from 0.693 to 0.687, and \ensuremath{F_{\text{L}}}\ from 0.314 to 0.319. At 0.7, the combination yields $\ensuremath{F_{\text{0}}}=0.684 \pm 0.014$ and $\ensuremath{F_{\text{L}}}=0.321 \pm 0.010$, which is the maximum variation observed in all tests performed in this study. However, in this case the fit probability decreases to 28\%, suggesting that the correlation assumption of 0.7 is less favoured. The fit combination does not converge for unreasonable values, \ie correlation values above 0.7.
In conclusion, the tests reported in this section indicate that the combined results are robust against variations of some poorly known or unknown input correlations. The correlations are varied over a large range, and in all cases the observed deviations from the nominal results are well covered by the uncertainties in the combined result.
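The mechanics of these stability tests can be mimicked with a minimal BLUE-like combination of two correlated measurements; the sketch below uses hypothetical stand-in values and uncertainties (the actual combination involves many inputs and a full nuisance structure).

```python
import numpy as np

def blue_combine(x, sig, rho):
    """BLUE combination of two measurements with total correlation rho."""
    cov = np.array([[sig[0] ** 2, rho * sig[0] * sig[1]],
                    [rho * sig[0] * sig[1], sig[1] ** 2]])
    cinv = np.linalg.inv(cov)
    w = cinv.sum(axis=1) / cinv.sum()          # BLUE weights
    return float(w @ x), float(np.sqrt(w @ cov @ w))

# Hypothetical stand-in measurements of F0 (not the actual inputs)
x = np.array([0.70, 0.69])
sig = np.array([0.020, 0.015])

val0, unc0 = blue_combine(x, sig, 0.0)
for rho in (0.1, 0.2, 0.3, 0.4):
    val, unc = blue_combine(x, sig, rho)
    print(f"rho = {rho:.1f}: shift = {val - val0:+.4f}, unc = {unc:.4f}")
```

Scanning `rho` as in the tests above shows how the combined central value and uncertainty respond to the assumed correlation.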
\subsection{Limits on anomalous couplings}
\label{sec:anomcoupl}
The result of the combination of the polarization fractions measurements can be used to set limits on beyond-the-SM physics contributing to the \ensuremath{\PQt\PW\PQb}\xspace vertex. In the two approaches presented in this section, only new physics contributions to the top quark decay vertex are considered---effects at the production vertex in single top quark processes are disregarded.
In a first approach, the structure of the \ensuremath{\PQt\PW\PQb}\xspace vertex is parameterized in a general form in effective field theory, expanding the SM Lagrangian to include dimension-six terms
\begin{equation}
\begin{aligned}
\mathcal{L}_{\text{\ensuremath{\PQt\PW\PQb}\xspace}} = - \frac{g}{\sqrt 2} \PAQb \, \gamma^{\mu} \left( V_{\mathrm{L}} P_{\mathrm{L}}
+ V_{\mathrm{R}} P_{\mathrm{R}} \right) \PQt\; \PW_\mu^- - \frac{g}{\sqrt 2} \PAQb \, \frac{i \sigma^{\mu \nu} \PQq_\nu}{m_\PW}
\left( g_{\mathrm{L}} P_{\mathrm{L}} + g_{\mathrm{R}} P_{\mathrm{R}} \right) \PQt\; \PW_\mu^- + \mathrm{h.c.},
\label{eq:Wtb}
\end{aligned}
\end{equation}
where $V_{\text{L,R}}$ and $g_{\text{L,R}}$ are left- and right-handed vector and tensor couplings, respectively. Here, $P_{\text{L,R}}$ refers to the left- and right-handed chirality projection operators, $m_\PW$ to the \wbos mass, and $g$ to the weak coupling constant, as detailed in Refs.~\cite{Buchmuller:1985jz,AguilarSaavedra:2008zc}.
In the SM, $V_{\text{L}}$ is given by the Cabibbo--Kobayashi--Maskawa (CKM) matrix element $V_{\text{tb}}$, with a measured value of $\approx 1$, while $V_{\text{R}}=g_{\text{L}}=g_{\text{R}}=0$ at the tree level. Using this formalism, the polarization fractions can be translated into the couplings
$V_{\mathrm{L}},\ V_{\mathrm{R}},\ g_{\mathrm{L}}$, and $\ g_{\mathrm{R}}$ (as discussed \eg in Ref.~\cite{AguilarSaavedra:2006fy}).
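For orientation, the SM expectation for the fractions follows at tree level from $m_{\PQt}$ and $m_\PW$ alone (in the massless-b-quark approximation); the mass values below are illustrative inputs, and higher-order corrections shift the result slightly.

```python
m_t, m_w = 172.5, 80.4  # GeV, illustrative input values

# Tree-level helicity fractions for t -> Wb with a massless b quark
f0 = m_t ** 2 / (m_t ** 2 + 2 * m_w ** 2)   # longitudinal
fl = 2 * m_w ** 2 / (m_t ** 2 + 2 * m_w ** 2)  # left-handed
fr = 0.0  # right-handed, forbidden at LO for a massless b quark

print(f"F0 = {f0:.3f}, FL = {fl:.3f}, FR = {fr:.3f}")
```

The resulting $F_0 \approx 0.70$ and $F_{\mathrm{L}} \approx 0.30$ are close to the combined measurement quoted below.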
The two independent \wbos polarization measurements, \ensuremath{F_{\text{0}}}\ and \ensuremath{F_{\text{L}}}, cannot fully constrain the four \ensuremath{\PQt\PW\PQb}\xspace couplings. Therefore, additional assumptions have to be made.
Figure~\ref{fig:2D_limits} shows the limits on the left- and right-handed tensor couplings, as well as the limits on the right-handed vector and tensor couplings; in each case, the remaining couplings are fixed to their SM values. Limits on these anomalous couplings are set using the \textsc{EFT\textit{fitter}} tool~\cite{Castro:2016jjv}. The anomalous couplings are assumed to introduce no additional CP violation, and are taken to be real.
The allowed regions at 68 and 95\% \CL and the most probable couplings values are shown, as derived from the measured polarization fractions reported in Refs.~\cite{TOPQ-2016-02, CMS-TOP-13-008}, and from the combined results presented in this paper. A second region allowed by the \wbos polarization measurements around ${\text{Re}}(g_{\mathrm{R}})=0.8$ is excluded by the single top quark cross section measurements~\cite{CMS-st,TOPQ-2015-05}, and therefore is not shown in this figure. Table~\ref{tab:1d_limits} shows the 95\% \CL intervals for each anomalous coupling, while fixing all others to their SM values. These limits correspond to the set of smallest intervals containing 95\% of the marginalized posterior distribution for the corresponding parameter.
\begin{figure}[htb!]
\centering
\includegraphics[width = 0.49\textwidth]{Figure_003-a.pdf}
\includegraphics[width = 0.49\textwidth]{Figure_003-b.pdf}
\caption{Allowed regions for the \ensuremath{\PQt\PW\PQb}\xspace anomalous (left) left- and right-handed tensor couplings, and (right) right-handed vector and tensor coupling. The limits are obtained from the ATLAS, CMS, and the combined measurements of the \wbos polarization fractions at 68 and 95\% \CL. The limits from CMS are obtained using the pre-combined result of all CMS input measurements. The anomalous couplings are assumed to be real.}
\label{fig:2D_limits}
\end{figure}
\begin{table}[!ht]
\centering
\topcaption{
Allowed ranges for the anomalous couplings $V_{\text{R}}$, $g_{\text{L}}$, and $g_{\text{R}}$ at 95\% \CL. The limit on each coupling is obtained while fixing all other couplings to their SM value.
The limits from CMS are obtained using the pre-combined result of all CMS input measurements. The anomalous couplings are assumed to be real.
}
\begin{tabular}{c cc l}
\hline
&\multicolumn{3}{@{}c}{95\% \CL interval}\\
Coupling & ATLAS & CMS & ATLAS+CMS combination\\
\hline
Re($V_{\text{R}}$) & $[-0.17, 0.25]$ & $[-0.12, 0.16]$ & $[-0.11, 0.16]$\\
Re($g_{\text{L}}$) & $[-0.11, 0.08]$ & $[-0.09, 0.06]$ & $[-0.08, 0.05]$\\
Re($g_{\text{R}}$) & $[-0.03, 0.06]$ & $[-0.06, 0.01]$ & $[-0.04, 0.02]$\\
\hline
\end{tabular}
\label{tab:1d_limits}
\end{table}
In a similar way, limits are set in terms of Wilson coefficients.
In this second approach, effects of beyond-the-SM physics at a high scale $\Lambda$ are described by an effective Lagrangian~\cite{EFTlhc,Burges:1983zg, Leung:1984ni, Buchmuller:1985jz,Zhang:2017} as
\begin{equation}
\begin{aligned}
{\mathcal{L}}^{\text{eff}} = {\mathcal{L}}^{\text{SM}} + \sum_x \frac{C_x}{\Lambda^2}O_x + {\mathcal{O}}\left(\frac{1}{\Lambda^3}\right),
\label{eq:LagEff}
\end{aligned}
\end{equation}
where $O_x$ are dimension-six gauge-invariant operators and $C_x$ are the complex constants known as Wilson coefficients that give the strength of the corresponding operator. Only dimension-six operators are considered in this analysis. The relevant operators affecting the general effective \ensuremath{\PQt\PW\PQb}\xspace vertex can be found, \eg in Ref.~\cite{Zhang:2017}.
Three of these operators are of particular interest, since the measurement of the \wbos polarization is able to constrain their corresponding Wilson coefficients. These operators are:
\begin{linenomath}
\begin{equation}
\begin{aligned}
O_{\phi \phi} &= {\text{i}}(\Tilde{\phi}^{\dagger} D_\mu \phi) (\PAQt_{\text{R}} \gamma^\mu \PQb_{\text{R}}), \\
O_{\PQt \PW} &= (\PAQq_{\text{L}} \sigma^{\mu\nu} \tau^I \PQt_{\text{R}}) \Tilde{\phi} \PW^I_{\mu\nu}, \quad \text{and}\\
O_{\PQb \PW} &= (\PAQq_{\text{L}} \sigma^{\mu\nu} \tau^I \PQb_{\text{R}})\phi \PW^I_{\mu\nu},
\end{aligned}
\end{equation}
\end{linenomath}
where $\phi$ represents a weak doublet of the Higgs field, $\PQt_{\text{R}}$ and $\PQb_{\text{R}}$ are the weak singlets of the right-handed top and bottom quark fields, $\PQq^{\text{T}}_{\text{L}}=(\PQt,\PQb)_{\text{L}}$ denotes the $SU(2)_{\text{L}}$ weak doublet of the third generation left-handed quark fields, and $\tau^I$ are the Pauli matrices. Assuming the Wilson coefficients to be real, they can be trivially parameterized as functions of the anomalous couplings of Eq.~(\ref{eq:Wtb}) (as shown \eg in Refs.~\cite{Zhang:2017,AguilarSaavedra:2008zc}), thus, as functions of the \PW~polarization fractions.
The limits on each Wilson coefficient are derived from the measured fractions, as done for the anomalous couplings, fixing all others to their SM value, \ie to zero. They are shown at 95\% \CL in Table~\ref{tab:1d_limits_WC}.
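The consistency between the two sets of limits can be checked with the approximate tree-level dictionary $V_{\text{R}} = \tfrac{1}{2}\, C_{\phi\phi}^{\ast}\, v^2/\Lambda^2$, $g_{\text{L}} = \sqrt{2}\, C_{\PQb\PW}^{\ast}\, v^2/\Lambda^2$, $g_{\text{R}} = \sqrt{2}\, C_{\PQt\PW}\, v^2/\Lambda^2$ (see \eg Refs.~\cite{Zhang:2017,AguilarSaavedra:2008zc}); the sketch below assumes $v = 246\GeV$ and $\Lambda = 1\TeV$, so small rounding differences with respect to the tables are expected.

```python
import math

v, lam = 246.0, 1000.0           # GeV; Higgs vev and EFT scale (assumed)
r = v ** 2 / lam ** 2

# Combined 95% CL intervals on the Wilson coefficients (from the table)
c_phiphi = (-3.48, 5.16)
c_bw = (-0.96, 0.67)
c_tw = (-0.48, 0.29)

# Approximate tree-level translation to anomalous couplings
vr = tuple(0.5 * r * c for c in c_phiphi)
gl = tuple(math.sqrt(2) * r * c for c in c_bw)
gr = tuple(math.sqrt(2) * r * c for c in c_tw)

print(f"V_R in [{vr[0]:.2f}, {vr[1]:.2f}]")  # compare with [-0.11, 0.16]
print(f"g_L in [{gl[0]:.2f}, {gl[1]:.2f}]")  # compare with [-0.08, 0.05]
print(f"g_R in [{gr[0]:.2f}, {gr[1]:.2f}]")  # compare with [-0.04, 0.02]
```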
\begin{table}[!ht]
\centering
\topcaption{
Allowed ranges for the Wilson coefficients $C_{\phi \phi}^{\ast}$, $C_{\PQb\PW}^{\ast}$, and $C_{\PQt\PW}$ at 95\% \CL. The limit on each coefficient is obtained while fixing all other coefficients to their SM values. The limits from CMS are obtained using the pre-combined result of all CMS input measurements. The numerical values are obtained by setting the $\Lambda$ scale to 1\TeV, and the coefficients are assumed to be real.}
\begin{tabular}{c cc l}
\hline
&\multicolumn{3}{@{}c}{95\% \CL interval}\\
Coefficient & ATLAS & CMS & ATLAS+CMS combination\\
\hline
$C_{\phi \phi}^{\ast}$ & $[-5.64, 7.68]$ & $[-3.84, 4.92]$ & $[-3.48, 5.16]$\\
$C_{\PQb\PW}^{\ast}$ & $[-1.30, 0.96]$ & $[-1.06, 0.72]$ & $[-0.96, 0.67]$\\
$C_{\PQt\PW}$ & $[-0.34, 0.67]$ & $[-0.62, 0.19]$ & $[-0.48, 0.29]$\\
\hline
\end{tabular}
\label{tab:1d_limits_WC}
\end{table}
\section{Summary}
\label{sec:conclusion}
The combination of measurements of the \wbos polarization in top quark decays performed by the ATLAS and CMS Collaborations is presented.
The measurements are based on proton-proton collision data produced at the LHC at a centre-of-mass energy of 8\TeV, and corresponding to an integrated luminosity of about 20\fbinv for each experiment. The fractions of \wboss\ with longitudinal (\ensuremath{F_{\text{0}}}) and left-handed (\ensuremath{F_{\text{L}}}) polarizations were measured in events containing a single lepton and multiple jets, enhanced in \ttbar or single top quark production processes. The results of the combination are
\begin{equation*}
\begin{aligned}
\ensuremath{F_{\text{0}}} = 0.693 \pm 0.009\,\text{(stat+bkg)} \pm 0.011\,\text{(syst)}, \\
\ensuremath{F_{\text{L}}} = 0.315 \pm 0.006\,\text{(stat+bkg)} \pm 0.009\,\text{(syst)},
\end{aligned}
\end{equation*}
where ``stat+bkg'' stands for the sum of the statistical and background determination uncertainties, and ``syst'' for the remaining systematic uncertainties. The fraction of \wboss\ with right-handed polarization, \ensuremath{F_{\text{R}}}, is estimated by assuming that the sum of all polarization fractions equals unity and by taking into account the correlation coefficient of the combination, $-0.85$. This leads to
\begin{equation*}
\begin{aligned}
\ensuremath{F_{\text{R}}} = -0.008 \pm 0.005\,\text{(stat+bkg)} \pm 0.006\,\text{(syst)},
\end{aligned}
\end{equation*}
which corresponds to $\ensuremath{F_{\text{R}}}<0.007$ at 95\% confidence level.
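The quoted \ensuremath{F_{\text{R}}}\ value can be reproduced by a minimal propagation sketch, $F_{\text{R}} = 1 - F_0 - F_{\text{L}}$ with $\sigma_{\text{R}}^2 = \sigma_0^2 + \sigma_{\text{L}}^2 + 2\rho\,\sigma_0\sigma_{\text{L}}$; here the total uncertainties are combined in quadrature, so the result differs slightly from the quoted stat+bkg/syst split, which is propagated separately.

```python
import math

f0, s0 = 0.693, math.hypot(0.009, 0.011)  # F0 and its total uncertainty
fl, sl = 0.315, math.hypot(0.006, 0.009)  # FL and its total uncertainty
rho = -0.85                               # (F0, FL) combination correlation

fr = 1.0 - f0 - fl
sr = math.sqrt(s0 ** 2 + sl ** 2 + 2 * rho * s0 * sl)

print(f"FR = {fr:.3f} +- {sr:.4f}")
```

The strong anticorrelation between \ensuremath{F_{\text{0}}}\ and \ensuremath{F_{\text{L}}}\ is what makes the propagated \ensuremath{F_{\text{R}}}\ uncertainty smaller than the naive quadrature sum.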
The results are consistent with the standard model predictions at next-to-next-to-leading-order precision in perturbative quantum chromodynamics. A limit on each anomalous \ensuremath{\PQt\PW\PQb}\xspace coupling is set while fixing all others to their standard model values, with the allowed regions being $[-0.11, 0.16]$ for $V_{\text{R}}$, $[-0.08, 0.05]$ for $g_{\text{L}}$, and $[-0.04, 0.02]$ for $g_{\text{R}}$, at 95\% confidence level. All couplings are assumed to be real. Limits on Wilson coefficients are also derived in a similar manner.
\begin{acknowledgments}
We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other institutes for their contributions to the success of the ATLAS and CMS efforts.
We acknowledge the support of ANPCyT, Argentina; YerPhI, Armenia; ARC, Australia; BMWFW and FWF, Austria; ANAS, Azerbaijan; SSTC, Belarus; CNPq and FAPESP, Brazil; NSERC, NRC and CFI, Canada; CERN; CONICYT, Chile; CAS, MOST and NSFC, China; COLCIENCIAS, Colombia; MSMT CR, MPO CR and VSC CR, Czech Republic; DNRF and DNSRC, Denmark; IN2P3-CNRS, CEA-DRF/IRFU, France; SRNSFG, Georgia; BMBF, HGF, and MPG, Germany; GSRT, Greece; RGC, Hong Kong SAR, China; ISF and Benoziyo Center, Israel; INFN, Italy; MEXT and JSPS, Japan; CNRST, Morocco; NWO, Netherlands; RCN, Norway; MNiSW and NCN, Poland; FCT, Portugal; MNE/IFA, Romania; MES of Russia and NRC KI, Russian Federation; JINR; MESTD, Serbia; MSSR, Slovakia; ARRS and MIZ\v{S}, Slovenia; DST/NRF, South Africa; MINECO, Spain; SRC and Wallenberg Foundation, Sweden; SERI, SNSF and Cantons of Bern and Geneva, Switzerland; MOST, Taiwan; TAEK, Turkey; STFC, United Kingdom; DOE and NSF, United States of America. In addition, individual groups and members have received support from BCKDF, CANARIE, Compute Canada and CRC, Canada; ERC, ERDF, Horizon 2020, Marie Sk{\l}odowska-Curie Actions and COST, European Union; Investissements d'Avenir Labex and Idex, ANR, France; DFG and AvH Foundation, Germany; Herakleitos, Thales and Aristeia programmes co-financed by EU-ESF and the Greek NSRF, Greece; BSF-NSF and GIF, Israel; CERCA Programme Generalitat de Catalunya and PROMETEO Programme Generalitat Valenciana, Spain; G\"{o}ran Gustafssons Stiftelse, Sweden; The Royal Society and Leverhulme Trust, United Kingdom.
We acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies: BMBWF and FWF (Austria); FNRS and FWO (Belgium); CNPq, CAPES, FAPERJ, FAPERGS, and FAPESP (Brazil); MES (Bulgaria); CERN; CAS, MoST, and NSFC (China); COLCIENCIAS (Colombia); MSES and CSF (Croatia); RPF (Cyprus); SENESCYT (Ecuador); MoER, ERC IUT, PUT and ERDF (Estonia); Academy of Finland, MEC, and HIP (Finland); CEA and CNRS/IN2P3 (France); BMBF, DFG, and HGF (Germany); GSRT (Greece); NKFIA (Hungary); DAE and DST (India); IPM (Iran); SFI (Ireland); INFN (Italy); MSIP and NRF (Republic of Korea); MES (Latvia); LAS (Lithuania); MOE and UM (Malaysia); BUAP, CINVESTAV, CONACYT, LNS, SEP, and UASLP-FAI (Mexico); MOS (Montenegro); MBIE (New Zealand); PAEC (Pakistan); MSHE and NSC (Poland); FCT (Portugal); JINR (Dubna); MON, RosAtom, RAS, RFBR, and NRC KI (Russia); MESTD (Serbia); SEIDI, CPAN, PCTI, and FEDER (Spain); MOSTR (Sri Lanka); Swiss Funding Agencies (Switzerland); MST (Taipei); ThEPCenter, IPST, STAR, and NSTDA (Thailand); TUBITAK and TAEK (Turkey); NASU (Ukraine); STFC (United Kingdom); DOE and NSF (USA).\\
\hyphenation{Rachada-pisek} Individuals have received support from the Marie-Curie programme and the European Research Council and Horizon 2020 Grant, contract Nos.\ 675440, 752730, and 765710 (European Union); the Leventis Foundation; the A.P.\ Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation \`a la Recherche dans l'Industrie et dans l'Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the F.R.S.-FNRS and FWO (Belgium) under the ``Excellence of Science -- EOS" -- be.h project n.\ 30820817; the Beijing Municipal Science \& Technology Commission, No. Z191100007219010; the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Deutsche Forschungsgemeinschaft (DFG) under Germany's Excellence Strategy -- EXC 2121 ``Quantum Universe" -- 390833306; the Lend\"ulet (``Momentum") Programme and the J\'anos Bolyai Research Scholarship of the Hungarian Academy of Sciences, the New National Excellence Program \'UNKP, the NKFIA research grants 123842, 123959, 124845, 124850, 125105, 128713, 128786, and 129058 (Hungary); the Council of Science and Industrial Research, India; the HOMING PLUS programme of the Foundation for Polish Science, cofinanced from European Union, Regional Development Fund, the Mobility Plus programme of the Ministry of Science and Higher Education, the National Science Center (Poland), contracts Harmonia 2014/14/M/ST2/00428, Opus 2014/13/B/ST2/02543, 2014/15/B/ST2/03998, and 2015/19/B/ST2/02861, Sonata-bis 2012/07/E/ST2/01406; the National Priorities Research Program by Qatar National Research Fund; the Ministry of Science and Education, grant no. 
14.W03.31.0026 (Russia); the Tomsk Polytechnic University Competitiveness Enhancement Program and ``Nauka" Project FSWW-2020-0008 (Russia); the Programa Estatal de Fomento de la Investigaci{\'o}n Cient{\'i}fica y T{\'e}cnica de Excelencia Mar\'{\i}a de Maeztu, grant MDM-2015-0509 and the Programa Severo Ochoa del Principado de Asturias; the Thalis and Aristeia programmes cofinanced by EU-ESF and the Greek NSRF; the Rachadapisek Sompot Fund for Postdoctoral Fellowship, Chulalongkorn University and the Chulalongkorn Academic into Its 2nd Century Project Advancement Project (Thailand); the Kavli Foundation; the Nvidia Corporation; the SuperMicro Corporation; the Welch Foundation, contract C-1845; and the Weston Havens Foundation (USA).
In addition, we gratefully acknowledge the computing centres and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. In particular, the support from CERN, the ATLAS Tier-1 facilities at TRIUMF (Canada), NDGF (Denmark, Norway, Sweden), CC-IN2P3 (France), KIT/GridKA (Germany), INFN-CNAF (Italy), NL-T1 (Netherlands), PIC (Spain), ASGC (Taiwan), RAL (UK) and BNL (USA), the Tier-2 facilities worldwide and large non-WLCG resource providers is acknowledged gratefully. Major contributors of ATLAS computing resources are listed in Ref.~\cite{ATL-GEN-PUB-2016-002}.
\end{acknowledgments}
\section{Introduction}
In the low-energy sector of the theory, the effects of quantum
electrodynamics can be summarized in an effective action, the
Heisenberg-Euler action, which enlarges the classical theory of
electrodynamics by non-linear self-interactions of the electromagnetic
field. Technically speaking, this effective action arises from
integrating out the massive (high-energy) degrees of freedom of
electrons and positrons. This program has successfully been carried
out to two-loop order \cite{ritu76-87,ditt85,reut97}.
The inclusion of finite-temperature effects at the one-loop level has
also been considered in various papers
\cite{ditt79,cox84,loew92,elmf94,shov98,gies99a}, and the real-time
\cite{elmf94} as well as the imaginary-time formalism \cite{gies99a}
finally arrived at congruent results.
This paper is devoted to an investigation of the thermal QED effective
action at the two-loop level. But contrary to the zero-temperature
case, where the two-loop contribution only represents a
1\% correction to the one-loop effective action, we
demonstrate that the thermal two-loop contribution is of a
qualitatively different kind than the thermal one-loop part and
exceeds the latter by far in the low-temperature domain.
The simple physical reason for this is the following: at one loop, one
takes only the massive electrons and positrons as virtual loop
particles into account (cf. Fig. \ref{figloops}(a)). Due to the mass
gap in the fermion spectrum, a heat bath at temperatures much below
the electron mass $m$ can hardly excite higher fermion states. Hence,
one expects thermal one-loop effects to be suppressed by the electron
mass. In fact, in a low-temperature expansion of the thermal one-loop
effective action \cite{elmf98}, one finds that each term is
accompanied by a factor of $\exp (-m/T)$, exhibiting an exponential
damping for $T\to 0$.
\vspace{1cm}
\begin{figure}[h]
\begin{center}
\begin{picture}(125,20)
\put(5,0){
\begin{fmffile}{fmfpic2LoopS}
\begin{fmfgraph*}(120,20)
\fmfleft{i1}
\fmfright{o1}
\fmf{phantom,tension=1}{i1,v1,v2,v3,v4,v5,v6,v7,v8,o1}
\fmffreeze
\fmf{double,left,tension=0.1}{i1,v3}
\fmf{double,left,tension=0.1}{v3,i1}
\fmf{double,left,tension=0.1}{v6,o1}
\fmf{double,left,tension=0.1}{o1,v6}
\fmf{photon}{v6,v7,v8,o1}
\fmfdot{v6,o1}
\end{fmfgraph*}
\end{fmffile}}
\put(0,0){(a)}
\put(75,0){(b)}
\end{picture}
\end{center}
\vspace{0.3cm}
\caption{Diagrammatic representation of the one-loop (a) and two-loop
(b) contribution to the effective QED action. The fermionic double
line represents the coupling to all orders to the external
electromagnetic field.}
\label{figloops}
\end{figure}
On the other hand, the two-loop contribution to the thermal effective
action involves a virtual photon within the fermion loop (cf.
Fig.~\ref{figloops}(b)). Since the photon is massless, a heat bath of
arbitrarily low temperature can easily excite higher photon states,
implying a comparably strong influence of thermal effects on the
effective action. In Sec. 2, we are able to show that the dominant
contribution to the thermal two-loop effective action in the
low-temperature limit is proportional to $T^4/m^4$. This power-law
behavior always wins out over the exponential damping of the one-loop case,
leading to a {\em two-loop dominance} in the low-temperature domain.
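The claimed two-loop dominance is ultimately a statement about the asymptotics of $(T/m)^4$ versus $\text{e}^{-m/T}$: the power law wins once $T/m$ is sufficiently small, although at moderate $T/m$ the exponential is still comparable. A quick numerical illustration (dimensionless scales only, prefactors omitted):

```python
import math

# Compare the scale of the thermal two-loop term, (T/m)^4, with the
# exponentially suppressed thermal one-loop scale, exp(-m/T).
ratios = {}
for t in (0.5, 0.2, 0.1, 0.05):            # values of T/m
    power = t ** 4                          # ~ thermal two-loop scale
    expo = math.exp(-1.0 / t)               # ~ thermal one-loop scale
    ratios[t] = power / expo
    print(f"T/m = {t:>4}: (T/m)^4 = {power:.2e}, exp(-m/T) = {expo:.2e}")
```

For $T/m = 0.05$ the power-law term already exceeds the exponential one by more than three orders of magnitude.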
One might ask whether this inversion of the loop hierarchy signals the
failure of perturbation theory for finite-temperature QED. But, of
course, this is not the case, since the inclusion of a virtual photon
does not ``amplify'' the two-loop graph and higher ones. Rather,
calculating only the one-loop graph should be regarded as an
inconsistent truncation of the theory, since the one-loop
approximation does not include all species of particles as virtual
ones. Besides, effective field theory techniques indicate that the
three-loop contribution is of the order of $T^8/m^8$ \cite{kong98} for
$T/m \ll 1$, thereby obeying the usual loop hierarchy.
The present paper is organized as follows: In Sec. 2, we present the
calculation of the two-loop effective QED action at finite temperature
employing the imaginary-time formalism and concentrating on the
low-temperature limit. The outcome will be valid for slowly varying
external fields of arbitrary strength.
Section 3 is devoted to an investigation of light propagation at
finite temperature. While, on the one hand, the well-known result for
the velocity shift $\delta v\sim T^4/m^4$ is rediscovered
\cite{tarr83,bart90,lato95,ditt98}, we are also able to determine
further contributions to the velocity shift arising from a non-trivial
interplay between temperature and an additional magnetic background
field.
In Sec. 4, we study aspects of thermally induced photon
splitting. Therein, we point out that the thermal two-loop
contribution to the splitting process exceeds the zero-temperature and
one-loop contributions in the low-temperature and weak-field limit,
but is negligible in comparison to other thermally induced
scattering processes.
Sections 3 and 4 are mainly concerned with the limit of a weak
magnetic background field and low-frequency photons ($\omega\ll m$),
and therefore represent only a first glance at these extensive
subjects. In fact, the quantitative results for this energy regime
describe only tiny effects; a relevance for astrophysical topics such
as pulsar physics has not been identified up to now. However, the
intention of the present work is a more categorical one, namely, to
elucidate the mechanism for a violation of the usual loop hierarchy of
perturbative thermal field theories involving virtual massless
particles.
In Sec. 5, we calculate the thermal contribution to Schwinger's
famous pair-production formula \cite{schw51} for constant electric
background fields in the low-temperature limit. Here, a thermal
one-loop contribution surprisingly does not exist
\cite{elmf94,gies99a}, since the thermal one-loop effective action is
purely real by construction. Hence, the findings of Sec. 5 prove the
existence of thermally induced pair production -- an effect that has
been sought for 15 years
\cite{cox84,loew92,hall94,gang95,gang98}. In the low-temperature
limit, we find that the situation of a strong electric field is
dominated by the zero-temperature part (Schwinger formula), while the
thermal contribution can become dominant for a weak electric
field. Unfortunately, the experimentally more interesting
high-temperature limit cannot be covered by our approach.
One last word of caution: the inclusion of electric background fields
in finite-temperature QED is always plagued with the question of how
violently this collides with assumptions on thermal equilibrium. In
fact, electric fields and thermal equilibrium exclude each other,
thus questioning the physical meaning of the results of
Sec. 5 at least quantitatively. However, it is reasonable to assume
the existence of at least a small window of parameters in the
low-temperature and weak-field domain for which the
thermal-equilibrium calculation represents a good
approximation. Moreover, the knowledge of the effective Lagrangian
including a full dependence on all possible field configurations is
mandatory to derive equations of motion for the fields, even in the
limit of vanishing electric fields.
\section{Two-Loop Effective Action of QED at Low Temperature}
In the following, we will outline the calculation of the two-loop
effective action, concentrating on the low-temperature limit where a
{\em two-loop dominance} is expected. The calculation is necessarily
very technical, wherefore some details are left for the
appendices.\footnote{The primarily phenomenologically interested
reader may just take notice of the following conventions
\re{0a}-\re{0e}, then directly consult Eqs. \re{90}-\re{100}, and
skip the remainder of the section.}
But before we get down to business, it is useful to clarify our
notation. From the field strength tensor $F^{\mu\nu}$ and its dual
$\sta{F}_{\kappa\rho}=\frac{1}{2}
\epsilon_{\kappa\rho\mu\nu}F^{\mu\nu}$, we can construct the following
standard gauge and Lorentz invariants:
\begin{eqnarray}
{\cal F}&=&\frac{1}{4} F^{\mu\nu}F_{\mu\nu} = \frac{1}{2} \bigl
( \mathbf{B}^2- \mathbf{E}^2 \bigr) \equiv \frac{1}{2} \bigl( a^2-b^2
\bigr), \nonumber\\
{\cal G}&=&\frac{1}{4} F^{\mu\nu} \sta{F}_{\mu\nu} = - \mathbf{E\cdot
B} \equiv ab,\label{0a}
\end{eqnarray}
where, for reasons of convenience, we also introduced the {\em
secular} invariants
\begin{equation}
a=\sqrt{\sqrt{{\cal F}^2+{\cal G}^2}+{\cal F}}, \qquad
b=\sqrt{\sqrt{{\cal F}^2+{\cal G}^2}-{\cal F}}, \label{0b}
\end{equation}
and we assumed without loss of generality that a Lorentz system exists
in which the electric and magnetic field are anti-parallel. In this
particular frame, the secular invariants can be identified with the
field strengths: $a=B\equiv |\mathbf{B}|$, $b=E\equiv |\mathbf{E}|$.
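The relations \re{0a} and \re{0b} are easy to check numerically; the field vectors below are illustrative and are chosen with $\mathbf{E\cdot B}<0$ so that ${\cal G}>0$, matching the anti-parallel convention adopted in the text.

```python
import numpy as np

# Illustrative fields with E.B < 0, so that G = -E.B > 0
E = np.array([0.3, 0.0, -0.4])
B = np.array([0.0, 0.5, 1.2])

F = 0.5 * (B @ B - E @ E)      # F = (B^2 - E^2)/2
G = -(E @ B)                   # G = -E.B

root = np.sqrt(F ** 2 + G ** 2)
a = np.sqrt(root + F)          # secular invariant a
b = np.sqrt(root - F)          # secular invariant b

print(f"F = {F:.4f}, G = {G:.4f}, a = {a:.4f}, b = {b:.4f}")
```

By construction, $\frac{1}{2}(a^2-b^2)$ returns ${\cal F}$ and $ab$ returns ${\cal G}$, as in Eq.~\re{0a}.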
When the physical system involves another vector, say, a momentum
4-vector $k^\mu=(k^0,\mathbf{k})$, we can form another field invariant
(metric: $g=(-,+,+,+)$):
\begin{eqnarray}
z_k&:=&(k_\mu F^{\mu\alpha})(k_\nu F^{\nu}{}_\alpha) \nonumber\\
&=& |\mathbf{k}|^2 B^2\sin^2\theta_B +|\mathbf{k}|^2 E^2\sin^2\theta_E
-k^2 E^2 +2 k^0\, \mathbf{E\cdot( k\times B)}, \label{0c}
\end{eqnarray}
where $\theta_B$ ($\theta_E$) denotes the angle between the magnetic
(electric) field and the 3-space vector $\mathbf{k}$.
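Equation \re{0c} can also be verified directly for a purely magnetic background, where $z_k$ reduces to $|\mathbf{k}\times\mathbf{B}|^2 = |\mathbf{k}|^2 B^2 \sin^2\theta_B$; the explicit component assignment of $F^{\mu\nu}$ below follows one common sign convention and is meant only as a sketch.

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])       # metric (-,+,+,+)

def field_tensor(E, B):
    """F^{mu nu} in one common convention: F^{i0} = E^i, F^{ij} = -eps^{ijk} B^k."""
    F = np.zeros((4, 4))
    F[1:, 0], F[0, 1:] = E, -E
    F[1, 2], F[2, 1] = -B[2], B[2]
    F[2, 3], F[3, 2] = -B[0], B[0]
    F[3, 1], F[1, 3] = -B[1], B[1]
    return F

def z_k(k, E, B):
    """z_k = (k_mu F^{mu alpha})(k_nu F^{nu}_alpha)."""
    F = field_tensor(E, B)
    v = (g @ k) @ F               # v^alpha = k_mu F^{mu alpha}
    return float(v @ g @ v)       # contract v^alpha with v_alpha

# Pure magnetic field: z_k must reduce to |k x B|^2
k = np.array([0.7, 0.2, -0.5, 0.9])      # (k^0, k-vector), illustrative
B = np.array([0.0, 0.0, 1.3])
lhs = z_k(k, np.zeros(3), B)
rhs = float(np.linalg.norm(np.cross(k[1:], B)) ** 2)
print(lhs, rhs)
```

The purely magnetic check is convention-independent; the relative signs of the electric terms in Eq.~\re{0c} depend on the chosen convention for $F^{\mu\nu}$.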
In relativistic equilibrium thermodynamics, temperature can be
associated with the invariant norm of a 4-vector $n^\mu$: $n^\mu
n_\mu=-T^2$. On the other hand, $n^\mu$ is related to the
4-velocity vector $u^\mu$ of the heat bath by: $n^\mu=T\,
u^\mu$. E.g., in the heat-bath rest frame, $u^\mu$ takes the form:
$u^\mu=(1,0,0,0)$. Hence, we can introduce one further invariant
(beside the temperature itself):
\begin{equation}
{\cal E}=(u_\mu F^{\mu\alpha})(u_\nu F^{\nu}{}_\alpha). \label{0d}
\end{equation}
E.g., in the heat-bath rest frame, ${\cal E}$ simply reduces to ${\cal
E}=\mathbf{E}^2$. Since the effective Lagrangian is a Lorentz
covariant and gauge-invariant quantity, it can only be a function of
the complete set of invariants of the system under consideration.
Hence, we expect a finite-temperature effective QED Lagrangian of the
form:
\begin{equation}
{\cal L}={\cal L}({\cal E},{\cal F},{\cal G},T). \label{0e}
\end{equation}
Equipped with these conventions, we now turn to the calculation.
The two-loop contribution to the effective action/Lagrangian ${\cal
L}^2$ is generally given by the diagram in Fig. \ref{figloops}(b).
This translates into the following formula in coordinate space
\cite{ditt85}:
\begin{equation}
{\cal L}^2=\frac{e^2}{2} \int d^4x'\, \text{tr}\, \Bigl[ \gamma^\mu\,
G(x,x'|A)\, \gamma^\nu\, G(x',x|A) \Bigr]\,
D_{\mu\nu}(x-x'),\label{01}
\end{equation}
where $G(x,x'|A)$ represents the fermionic Green's function for the
Dirac operator in presence of an external electromagnetic field
$A$. $D_{\mu\nu}$ denotes the photon propagator. Throughout the paper,
we assume the background field to be constant or at least slowly
varying compared to the scale of the Compton wavelength; therefore,
the fermionic Green's function can be written as:
\begin{equation}
G(x,x'|A)= \Phi(x,x') \int \frac{d^4p}{(2\pi)^4}\, \text{e}^{\text{i} p(x-x')}\,
g(p),\label{02}
\end{equation}
where $g(p)$ denotes the Fourier transform of $G(x,x'|A)$ depending
only on the field strength, and $\Phi(x,x')$ is the holonomy carrying
the complete gauge dependence of the Green's function. Inserting
Eq. \re{02} into Eq. \re{01} leads us to the object
$\Phi(x,x')\Phi(x',x)\equiv \Phi(\bigcirc)$, where the right-hand side
represents the holonomy evaluated for a closed path. For a
simply connected manifold such as the Minkowski space,
$\Phi(\bigcirc)=1$; hence, it does not contribute to the
zero-temperature Lagrangian. For a non-simply connected manifold such
as the finite-temperature coordinate space ($\mathbbm{R}\times S^1$),
$\Phi(\bigcirc)$ can pick up a winding number
\cite{gies99a}. However, in the present case, we restrict our
considerations to a situation with zero density, which implies the
existence of a gauge in which $A_0=0$. Then, $\Phi(\bigcirc)=1$ and the
influence of the holonomy can be discarded.
This leads us to the representation
\begin{equation}
{\cal L}^2=\frac{\text{i}}{2} \int \frac{d^4k}{(2\pi)^4} \, D_{\mu\nu}(k)\,
\Pi^{\mu\nu}(k) \label{2}
\end{equation}
for the two-loop Lagrangian, where $D_{\mu\nu}(k)$ denotes the photon
propagator in momentum space, and we introduced the one-loop
polarization tensor in an arbitrary constant external background field:
\begin{equation}
\Pi^{\mu\nu}(k)=-\text{i} e^2 \int \frac{d^4p}{(2\pi)^4}\, \text{tr}\, \bigl
[ \gamma^\mu\, g(p)\, \gamma^\nu\, g(p-k) \bigr]. \label{3}
\end{equation}
So we have finally arrived at the well-known fact that the two-loop
effective action can be obtained from the polarization tensor in an
external field by glueing the external lines together.
The transition to finite-temperature field theory can now be made
within the imaginary-time formalism by replacing the momentum
integration over the zeroth component in Eqs. \re{2} and \re{3} by a
summation over bosonic and fermionic Matsubara frequencies,
respectively. E.g., performing this procedure in Eq. \re{3}
corresponds to thermalizing the fermions in the loop. Now we come to
an important point: confining ourselves to the low-temperature domain
where $T\ll m$, we know from the one-loop calculations
\cite{elmf98,gies99b} that thermal fermionic effects are suppressed by
factors of $\text{e}^{-m/T}$, indicating that the mass of the fermions
suppresses thermal excitations. Hence, thermalizing the polarization
tensor contributes at most terms of order $\text{e}^{-m/T}$ to the two-loop
Lagrangian for $T\ll m$; these are furthermore accompanied by an
additional factor of the coupling constant $\alpha$ and can therefore
be neglected compared to the one-loop terms. At low temperature, it is
therefore sufficient to thermalize only the internal photon in order
to obtain the leading $T$-dependence of ${\cal L}^{2}$.
Since, in Feynman gauge, the photon propagator reads
\begin{equation}
D_{\mu\nu}(k)=g_{\mu\nu}\, \frac{1}{k^2 -\text{i}\epsilon}, \qquad
k^2=-(k^0)^2 +\mathbf{k}^2, \qquad g=(-,+,+,+), \label{4}
\end{equation}
the introduction of bosonic Matsubara frequencies $(k^0)^2\to
-\omega_n^2=-(2\pi Tn)^2$, $n\in\mathbbm{Z}$, leads us to:\footnote{Of
course, the present calculation does not necessarily have to be
performed in the imaginary-time formalism. E.g., instead of
Eq. \re{4}, we could as well work with the real-time representation
of the thermal photon propagator. We could even use the
one-component formalism only, since we merely consider the photon to
be thermalized. However, from our viewpoint, the calculations in the
imaginary-time formalism appear a bit simpler since the momentum
integrals will remain Gaussian. Of course, this might be just a
matter of taste.}
\begin{equation}
{\cal L}^{2+2T}=\frac{\text{i}}{2}\, \text{i} T\sum_{\omega_n} \int
\frac{d^3k}{(2\pi)^3}\, \frac{1}{k^2 -\text{i}\epsilon}\,
\Pi^\mu{}_{\mu}(k). \label{5}
\end{equation}
From now on, we write ${\cal L}^2$ for the zero-temperature two-loop
Lagrangian, ${\cal L}^{2T}$ for the purely thermal part, and ${\cal
L}^{2+2T}$ for their sum. In Eq. \re{5}, we need the trace of the
polarization tensor in constant but otherwise arbitrary
electromagnetic fields. In the literature, there are various equivalent
representations for $\Pi_{\mu\nu}$. For the present purpose, it is
useful to derive our own representation, which is based on a calculation by
Urrutia \cite{urru79}. Details are presented in Appendix A.
Inserting representation \re{10} of the Appendix for $\Pi^\mu{}_\mu$
into Eq. \re{5}, we obtain for the Lagrangian:
\begin{eqnarray}
{\cal L}^{2+2T}\!\!\!\!&=& -\frac{T}{2} \frac{\alpha}{2\pi}
\sum_{\omega_n} \int \frac{d^3k}{(2\pi)^3}
\int\limits_0^\infty\!\frac{ds}{s}
\!\int\limits_{-1}^1\!\frac{d\nu}{2} \frac{\text{e}^{-\text{i}
s\phi_0}}{a^2+b^2} \frac{eas\,ebs}{\sin eas \sinh ebs}
\nonumber\\
&&\qquad\quad\left[\frac{z_k}{k^2-\text{i}\epsilon} (\tilde{N}_2
-\tilde{N}_1) +\bigl( 2N_0(a^2\!+\!b^2)
+b^2\tilde{N}_2 +a^2\tilde{N}_1\bigr)\right]
\Biggl|_{(k^0)^2=-\omega_n^2}\!\!\!\!\!\!\!, \label{11}
\end{eqnarray}
where the $\phi_0$, $N_0$, $\tilde{N}_i$ are functions of the
integration variables $s$ and $\nu$ and of the invariants $a$ and $b$;
only $\phi_0$ depends additionally on $z_k$ as defined in Eq. \re{0c}.
Their explicit form can be looked up in Eqs. \re{111225}, \re{15} and
\re{16}. In order to ensure convergence of the proper-time integrals,
the causal prescription $m^2\to m^2-\text{i}\epsilon$ for the mass term in
$\phi_0$ is understood; this agrees with deforming the $s$-contour
slightly below the real axis.
Now, the aim is to perform the $k$-momentum integration/summation;
note that the $k$-dependence is contained in $\phi_0$, $z_k$ (and
$k^2$, of course). Concentrating on this step, we encounter the
integrals:
\begin{eqnarray}
I_1&=& T\, \sum_{\omega_n} \int \frac{d^3k}{(2\pi)^3}\, \, \text{e}^{-\text{i} s
\phi_0} \biggl|_{(k^0)^2=-\omega_n^2}, \nonumber\\
I_2&=& T\, \sum_{\omega_n} \int \frac{d^3k}{(2\pi)^3}\,\,
\frac{z_k}{k^2-\text{i}\epsilon}\, \, \text{e}^{-\text{i} s \phi_0}
\biggl|_{(k^0)^2=-\omega_n^2}, \label{12}
\end{eqnarray}
which allow us to write the Lagrangian \re{11} in terms of
\begin{eqnarray}
{\cal L}^{2+2T}&=&-\frac{\alpha}{4\pi}\int\limits_0^\infty\!\frac{ds}{s}
\!\int\limits_{-1}^1\!\frac{d\nu}{2}
\frac{eas\,ebs}{(a^2+b^2) \sin eas \sinh ebs}\nonumber\\
&&\qquad\qquad \Bigl( (\tilde{N}_2
-\tilde{N}_1)\, I_2 +\bigl( 2N_0(a^2\!+\!b^2)
+b^2\tilde{N}_2 +a^2\tilde{N}_1\bigr) I_1\,\Bigr). \label{11a}
\end{eqnarray}
Employing Eq. \re{15} for $\phi_0$, we can reduce the evaluation of
$I_2$ to that of $I_1$:
\begin{eqnarray}
I_2&=& T\, \sum_{\omega_n} \int
\frac{d^3k}{(2\pi)^3}\,\, \frac{z_k}{k^2-\text{i}\epsilon}\, \,
\text{e}^{-\text{i} m^2s} \text{e}^{-A_z z_k}\, \text{e}^{-A_k k^2}
\biggl|_{(k^0)^2=-\omega_n^2} \nonumber\\
&=& -\frac{\partial}{\partial A_z}\, \int\limits_{A_k}^\infty dA_k'\,
\, I_1, \label{17}
\end{eqnarray}
where $A_z$ and $A_k$ again are functions of the integration variables
$s$ and $\nu$ and of the invariants $a$ and $b$, and are defined in
Eq. \re{16}. In view of Eq. \re{17}, it is sufficient to consider the
momentum integration/summation for $I_1$ only:
\begin{equation}
I_1\stackrel{\re{15}}{=}T\, \text{e}^{-\text{i} m^2s}\sum_{\omega_n} \int
\frac{d^3k}{(2\pi)^3}\,\, \text{e}^{-A_z z_k}\, \text{e}^{-A_k
k^2}\biggl|_{(k^0)^2=-\omega_n^2}. \label{18}
\end{equation}
At this stage, the {\em finite-temperature coordinate frame} as
introduced in \cite{gies99a} becomes extremely useful, since it
enables us to perform the calculation in terms of the invariants. This
coordinate system is adapted to the situation of electromagnetic
fields at finite temperature in a way that the components of any
tensor-valued function of the field strength can be expressed in terms
of the invariants ${\cal E}$, ${\cal F}$, and ${\cal G}$. Again,
details are presented in the appendix (App. B), from where we take the
final formula for the exponent of Eq. \re{18} (cf. Eq. \re{23}):
\begin{eqnarray}
A_z z_k +A_k k^2 \!\!&=&\!\! \bigl( A_k +(a^2\!-\!b^2\!+\!{\cal E})
A_z\bigr)\! \left(\! k^2\!-\!{\scriptstyle
\frac{A_z\sqrt{d}}{A_z(2{\cal F} +{\cal E}) +A_k}}\, k^0\!
\right)^2 -\frac{(A_k\!+\!a^2A_z)(A_k\!-\!b^2A_z)}{ A_k
+(a^2\!-\!b^2\!+\!{\cal E}) A_z} \, (k^0)^2 \nonumber\\
&&\!\! +\left(\! A_z\frac{a^2b^2}{{\cal E}} +A_k\!\right)\!\left(\!k^3
+{\scriptstyle \frac{A_z \frac{\sqrt{d}{\cal G}}{{\cal E}}}{A_z
\frac{{\cal G}^2}{{\cal E}} +A_k}}\, k^1\!\right)^2
+\frac{(A_k+a^2A_z)(A_k-b^2A_z)}{ A_z\frac{a^2b^2}{{\cal E}} + A_k}\,
(k^1)^2, \nonumber\\
&&\label{23T}
\end{eqnarray}
where $k^0,k^1,k^2,k^3$ represent the components of the rotated
momentum vector $k^A=e^A{}_\mu k^\mu$, and $e^A{}_\mu$ denotes the
vierbein which mediates between the given coordinate system and the
finite-temperature coordinate frame (cf. Eq. \re{2.3}). Since the
transformation into the new reference frame is only a rigid rotation
in Minkowski space, no Jacobian arises for the measure of the momentum
integral. Hence, only integrals of Gaussian type are present in
Eq. \re{18}, which can easily be performed to give:
\begin{equation}
I_1=T\,\frac{ \text{e}^{-\text{i} m^2 s}}{(4\pi)^{3/2}}\,\, \frac{1}{\sqrt{p\,
q_a\, q_b}} \sum_{\omega_n} \text{e}^{-\frac{q_a\, q_b}{p}
\omega_n^2}, \label{26}
\end{equation}
where it was convenient to introduce the shorthand notations:
\begin{equation}
q_a:=A_k+a^2A_z, \qquad q_b:=A_k-b^2A_z, \qquad p:=A_k+
(a^2\!-\!b^2\!+\!{\cal E})A_z. \label{32}
\end{equation}
The sum in Eq. \re{26} can be rewritten with the aid of a Poisson
resummation of the form:
\begin{equation}
\sum_{n=-\infty}^\infty \exp \bigl( -\sigma (n-z)^2 \bigr) =
\sum_{n=-\infty}^\infty \sqrt{\frac{\pi}{\sigma}}\,\exp
\left(-\frac{\pi^2}{\sigma}\, n^2 -2\pi\text{i} zn \right). \label{27}
\end{equation}
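The resummation identity \re{27} is a standard Jacobi theta-function transformation and can be verified numerically; the following minimal sketch (illustrative only, not part of the derivation; the value of $\sigma$ is arbitrary) compares both sides for $z=0$:

```python
import math

def poisson_lhs(sigma, nmax=200):
    # left-hand side of the resummation for z = 0: sum_n exp(-sigma n^2)
    return sum(math.exp(-sigma * n * n) for n in range(-nmax, nmax + 1))

def poisson_rhs(sigma, nmax=200):
    # right-hand side: sqrt(pi/sigma) * sum_n exp(-pi^2 n^2 / sigma)
    return math.sqrt(math.pi / sigma) * sum(
        math.exp(-math.pi**2 * n * n / sigma) for n in range(-nmax, nmax + 1))

print(poisson_lhs(0.7), poisson_rhs(0.7))
```

Both sides agree to machine precision; the resummation trades a sum that converges slowly for small $\sigma$ against one that converges fast, which is exactly its use in the low-temperature limit below.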
With $z=0$ and $\sigma=(2\pi T)^2 \frac{q_a q_b}{p}$, we obtain for
Eq. \re{26}:
\begin{equation}
I_1\equiv I_1^{T=0}+I_1^T
=\frac{ \text{e}^{-\text{i} m^2 s}}{16\pi^2}\, \frac{1}{q_a q_b}
+\frac{ \text{e}^{-\text{i} m^2 s}}{8\pi^2}\, \frac{1}{q_a q_b}\,
\sum_{n=1}^\infty \exp \left( -\frac{p}{q_a q_b} \,
\frac{n^2}{4T^2}\right), \label{29}
\end{equation}
where we separated the $(n=0)$-term from the remaining sum in
order to isolate the $(T=0)$-contribution: the first term in Eq. \re{29}
(the $(n=0)$-term) is independent of $T$ and ${\cal E}$, while
the second term vanishes exponentially in the limit $T\to 0$. In
App. C, we check explicitly that the first term of Eq. \re{29} indeed
leads to the (unrenormalized) two-loop Lagrangian for arbitrary
constant electromagnetic fields at zero temperature. E.g., for purely
magnetic fields, the representation of Dittrich and Reuter
\cite{ditt85} is rediscovered.
For our finite-temperature considerations, we will only keep the
second term of Eq. \re{29}, which we denote by $I_1^T$ in the
following. Concerning the formula for ${\cal L}^{2T}$ in Eq. \re{11a},
$I_1^T$ is already in its final form (it will turn out later that this
term is subdominant in the low-$T$ limit and only $I_2^T$ contains the
important contributions). Hence, let us turn to the evaluation of
$I_2^T$, i.e., the thermal part of Eq. \re{17}; for this, we have to
interpret $I_1^T$ as a function of $A_z$ and $A_k$ (remember: $q_a$,
$q_b$ and $p$ are functions of $A_z$ and $A_k$):
\begin{equation}
I_2^T=-\frac{\partial}{\partial A_z} \int\limits_{A_k}^\infty dA_k'\,
I_1^T(A_k', A_z) =-\frac{\partial}{\partial A_z}
\int\limits_{0}^\infty ds'\, I_1^T(s'+A_k,A_z) =:-\frac{\text{e}^{-\text{i} m^2
s}}{8\pi^2}\sum_{n=1}^\infty\frac{\partial}{\partial A_z}
\,J(A_z), \label{31}
\end{equation}
where we defined the auxiliary integral:
\begin{equation}
J(A_z)=\int\limits_{0}^\infty ds'\,\frac{1}{(s'+q_a)(s'+q_b)}
\exp\left( -\frac{s'+p}{(s'+q_a)(s'+q_b)}\, \frac{n^2}{4T^2}
\right). \label{34}
\end{equation}
Upon a substitution of the integration variable,\footnote{Resolving
for $s'=s'(u)$ leads to a quadratic equation, of which the positive
root has to be taken in order to respect the integration
boundaries.}
\begin{eqnarray}
u&:=&\frac{q_a q_b}{p}\, \frac{s'+p}{(s'+q_a)(s'+q_b)}, \label{34a}\\
\Rightarrow\qquad \frac{ds'}{(s'+q_a)(s'+q_b)} &=&
-\frac{du}{\sqrt{\frac{q_a^2 q_b^2}{p^2} +\frac{2q_aq_b}{p}
(2p\!-\!q_a\!-\!q_b) u+(q_a\!-\!q_b)^2 u^2}}, \nonumber
\end{eqnarray}
the auxiliary integral becomes:
\begin{equation}
J(A_z)=\int\limits_0^1 \frac{du}{\sqrt{\frac{q_a^2 q_b^2}{p^2}
+\frac{2q_aq_b}{p} (2p\!-\!q_a\!-\!q_b) u+(q_a\!-\!q_b)^2 u^2}}
\, \exp\left(-\frac{n^2}{4T^2} \frac{p}{q_aq_b}\,
u\right). \label{36}
\end{equation}
An important caveat applies here: since we only thermalized the
photons, our effective Lagrangian ${\cal L}^{2T}$ is only valid for
$T\ll m$ anyway. Nevertheless, our formulas also contain information
about the high-temperature domain which we should discard, since it is
incomplete. Regarding Eq. \re{36}, the exponential function causes the
integrand to be extremely small for small values of $T$, except where
$u$ is also small. Hence, the auxiliary integral is mainly determined
by the lower end of the integration interval.
Taking these considerations into account, we expand the square root
for small values of $u$ and then extend the integration interval to
infinity (in fact, maintaining 1 as the upper bound only creates terms
of the order $\exp(-(2nm)/T)$, which are subdominant in the
low-temperature limit). The remaining $u$-integration can then easily
be performed for each order in the $u$-expansion; up to $u^2$, we
obtain:
\begin{equation}
J(A_z)=4\frac{T^2}{n^2} -16 \frac{T^4}{n^4} (2p-q_a-q_b) -64
\frac{T^6}{n^6} \bigl((q_a-q_b)^2-3(2p-q_a-q_b)^2\bigr)+ {\cal
O}(T^8/n^8). \label{69}
\end{equation}
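The expansion \re{69} can be checked against a direct numerical evaluation of the auxiliary integral \re{36}; the parameter values below are purely illustrative ($q_a=2$, $q_b=1$, $p=3$, $n=1$, $T=0.05$ in arbitrary units), chosen so that the exponential damping is strong:

```python
import math

def simpson(f, a, b, n=20000):
    # composite Simpson rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

qa, qb, p, nM, T = 2.0, 1.0, 3.0, 1, 0.05     # illustrative values
beta = nM**2 / (4 * T**2) * p / (qa * qb)     # coefficient of u in the exponent

def integrand(u):
    # integrand of Eq. (36)
    root = math.sqrt((qa * qb / p)**2
                     + (2 * qa * qb / p) * (2 * p - qa - qb) * u
                     + (qa - qb)**2 * u**2)
    return math.exp(-beta * u) / root

J_num = simpson(integrand, 0.0, 1.0)
# series of Eq. (69) through order T^6
J_ser = (4 * T**2 / nM**2
         - 16 * T**4 / nM**4 * (2 * p - qa - qb)
         - 64 * T**6 / nM**6 * ((qa - qb)**2 - 3 * (2 * p - qa - qb)**2))
print(J_num, J_ser)
```

Including the $T^6$-term visibly improves the agreement over the truncation at order $T^4$, as expected for an asymptotic small-$T$ expansion.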
Upon differentiation, the $T^2$-dependence drops out, and we get
(cf. Eq. \re{32}):
\begin{equation}
\frac{\partial}{\partial A_z}J(A_z) = -2^5 \frac{T^4}{n^4} ({\cal
F}+{\cal E}) -2^9 \frac{T^6}{n^6} \bigl( {\cal F}^2+{\cal
G}^2-3({\cal F}+{\cal E})^2 \bigr) A_z +{\cal
O}(T^8/n^8). \label{70b}
\end{equation}
In this equation, we indeed discover a power-law dependence on the
temperature, which will directly translate into a power-law dependence
of the two-loop effective action after insertion into Eqs. \re{31} and
\re{11a}. Technically speaking, this arises from the fact that the
omnipresent exponential factor $\exp ( -\frac{n^2}{4T^2}
\frac{p}{q_aq_b}\, u)$, which otherwise causes exponential damping for
$T/m\to 0$, equals 1 at the lower integration bound $u=0$, so that the
$u$-integration picks up an undamped contribution.
At this stage, it is important to observe that the $u$-integration
appears only in $I_2^T$ (via the $A_k'$-integration in Eq. \re{12})
and not in $I_1^T$. Therefore, $I_1^T$ will always contain exponential
damping factors in the limit $T\to 0$. Even the remaining proper-time
integrations do not provide a mechanism similar to the
$u$-integration, since for large $s$, the mass factor $\exp(-\text{i} m^2
s)$ with the causal prescription $m^2\to m^2-\text{i}\epsilon$ causes the
integrand to vanish, and for small $s$, the combination
$\frac{p}{q_aq_b}$ in the exponent becomes:
\begin{equation}
\frac{p}{q_aq_b} =-\frac{4\text{i}}{1-\nu^2} \frac{1}{s} +{\cal
O}(s). \label{43a}
\end{equation}
Obviously, inserting Eq. \re{43a} into the exponent leads to an
exponential fall off (bearing in mind that the $s$-contour will run
slightly below the real axis). Similar conclusions can be drawn for
the $\nu$-integration. To summarize these technical considerations, we
conclude that only the term containing $I_2^T$ (thermal part of $I_2$)
in Eq. \re{11a} contributes dominantly to ${\cal L}^{2T}$ in the
low-temperature limit.
Inserting the first and second term of $\frac{\partial}{\partial
A_z}J(A_z)$ in Eq. \re{70b} successively into Eq. \re{31} and then
into Eq. \re{11a}, we obtain the dominant terms of order $T^4$ and
$T^6$ of the two-loop effective QED Lagrangian at low temperature;
particularly for the $T^4$-term, different useful representations can
be given:
\begin{eqnarray}
{\cal L}^{2T}\Bigl|_{T^4}\!\!&=&-\frac{\alpha\pi}{90}\, T^4\, ({\cal
F}+{\cal E}) \int\limits_0^\infty \frac{ds}{s} \int\limits_{-1}^1
\frac{d\nu}{2}\, \text{e}^{-\text{i} m^2s} \frac{eas\, ebs}{\sin eas \sinh ebs}
\frac{(\tilde{N}_2- \tilde{N}_1)}{a^2+b^2} \nonumber\\
&=&-\frac{\alpha\pi}{45}\, T^4\, ({\cal F}+{\cal E})
\int\limits_0^\infty \frac{ds}{s}\, \frac{1}{a^2+b^2}\, \text{e}^{-\text{i} m^2
s} \nonumber\\
&&\qquad \left[ ebs \coth ebs \frac{1-eas \cot eas}{\sin^2 eas}
+eas \cot eas \frac{1-ebs \coth ebs}{\sinh^2 ebs}
\right] \label{90}\\
&=&\!\!\!\frac{\pi^2}{45}\, T^4\, ({\cal F}\!+\!{\cal E})\! \left(\!
\frac{1}{a^2\!+\!b^2} (\partial_a^2\!+\!\partial_b^2)\!\right)\!
\!\left[\!
\frac{1}{8\pi^2}\!\! \int\limits_0^\infty \!\frac{ds}{s^3}\, \text{e}^{-\text{i}
m^2s} eas \cot eas \, ebs \coth
ebs\!\right]\!\!. \label{102a}
\end{eqnarray}
The term proportional to $T^6$ reads:
\begin{equation}
{\cal L}^{2T}\Bigl|_{T^6}\!\! =-\frac{16\alpha\pi^3}{945}\, T^6\,
\bigl( {\cal F}^2\!\!+\!{\cal G}^2\!-\!3({\cal F}\!+\!{\cal E})^2\bigr)
\!\int\limits_0^\infty \frac{ds}{s} \int\limits_{-1}^1
\frac{d\nu}{2}\, \frac{\text{e}^{-\text{i} m^2s}}{a^2+b^2}\,
\frac{eas\, ebs}{\sin eas \sinh ebs}\, (\tilde{N}_2- \tilde{N}_1)\,
A_z, \label{71}
\end{equation}
where $\tilde{N}_i$ and $A_z$ are functions of the integration
variables and the invariants $a$ and $b$ (not of ${\cal E}$), and are
defined in Eqs. \re{111225} and \re{16}. The $\nu$-integration can be
performed analytically, but the lengthy result provides no new
insight; hence we refrain from writing it down.
These equations represent the central result of the present work;
therefore, a few of their properties should be stressed:
\noindent
1) While we worked explicitly in the low-temperature approximation
$T\ll m$, we put no restrictions on the strength of the
electromagnetic fields.
\noindent
2) The low-temperature Lagrangians contain arbitrary powers of the
invariants $a$ and $b$ (equivalently ${\cal F}$ and ${\cal G}$), but
the additional invariant at finite temperature ${\cal E}$ appears only
linearly in the $T^4$-term and quadratically in the $T^6$-term. The
small-$T$ expansion thus corresponds to a small-${\cal E}$ expansion.
\noindent
3) The fact that only the integral $I_2^T$ with the prefactor
$(\tilde{N}_2-\tilde{N}_1)$ contributes to the low-temperature
Lagrangian in Eq. \re{11a} implies that only the spatially transverse
modes $\Pi_\|$ and $\Pi_\bot$ of the polarization tensor \re{111224}
play a role in this thermalized virtual two-loop process. The
time-like or longitudinal mode $\Pi_0$ (depending on the character of
$k^\mu$) might become important at higher values of temperature.
\noindent
4) The fact that the invariant ${\cal E}$ always appears in the
combination ${\cal F}+{\cal E}$ ensures a kind of dual invariance of
the Lagrangian. Under the replacement $\mathbf{E}\to \mathbf{B}$ and
$\mathbf{B}\to-\mathbf{E}$, the invariants change into ${\cal F}\to
-{\cal F}$, ${\cal G}\to-{\cal G}$ and ${\cal E}\to{\cal E}+2{\cal
F}$, so that ${\cal F}+{\cal E}$ remains invariant.
\noindent
5) The $T^4$-term of ${\cal L}^{2T}$ as exhibited in Eq. \re{102a}
possesses the peculiarity of being derivable from the one-loop
zero-temperature Lagrangian, which we have marked by square brackets in
Eq. \re{102a} following the derivative operators. This will be
elucidated further in the following section.
For the remainder of this section, we will discuss certain limiting
cases of the two-loop low-temperature Lagrangian. First, let us
concentrate on a weak-field expansion which corresponds to a small-$s$
expansion of the proper-time integral due to the exponential mass
factor. Expanding the integrands for small values of $s$ (except the
mass factor) and integrating over $\nu$ and $s$, leads us to the
dominant terms in the weak-field limit:
\begin{eqnarray}
{\cal L}^{2T}\biggl|_{T^4}&=&\frac{44\alpha^2\pi^2}{2025}
\frac{T^4}{m^4} ({\cal F}+{\cal E})
-\frac{2^6\cdot37 \alpha^3\pi^3}{3^4\cdot5^2\cdot7} \frac{T^4}{m^4}
\frac{{\cal F}({\cal F}+{\cal E})}{m^4}+{\cal O}(3), \label{73}\\
{\cal L}^{2T}\biggl|_{T^6}&=&\frac{2^{13}\alpha^3\pi^5 }
{3^6\cdot 5\cdot 7^2}
\frac{T^6}{m^6}\bigl(2{\cal F}^2+6{\cal E}{\cal
F}+3{\cal E}^2-{\cal G}^2\bigr) \frac{1}{m^4}+{\cal
O}(3), \label{75}
\end{eqnarray}
where ${\cal O}(3)$ signals that we omitted terms of third order in
the field invariants (sixth order in the field strength). Note that no
term linear in the field invariants exists at order $T^6$. For the
terms of quadratic order, the $T^6$-term is subdominant for
$T/m\leq0.05$ and amounts to a correction of up to 10\% of the
$T^4$-term for $T/m\sim 0.1$. For even larger values of temperature, we expect
the failure of the low-temperature approximation.
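To illustrate the relative size of the two contributions, consider the special case of a purely magnetic background in the heat-bath rest frame, where ${\cal G}=0$, ${\cal E}=0$ and ${\cal F}=B^2/2$; the quadratic-order ratio then follows directly from the coefficients of Eqs. \re{73} and \re{75}. A minimal numerical sketch (the specialization to this configuration is ours; for backgrounds with ${\cal E}\neq 0$, e.g. a purely electric field, the ratio comes out larger):

```python
import math
from fractions import Fraction

# quadratic-order coefficients of Eqs. (73) and (75), specialized to a
# purely magnetic field in the heat-bath rest frame (G = 0, E_inv = 0),
# so that F*(F+E) -> F^2 and (2F^2 + 6EF + 3E^2 - G^2) -> 2F^2
c4 = Fraction(2**6 * 37, 3**4 * 5**2 * 7)    # multiplies alpha^3 pi^3 T^4/m^4 * F^2
c6 = Fraction(2**13 * 2, 3**6 * 5 * 7**2)    # multiplies alpha^3 pi^5 T^6/m^6 * F^2

def t6_over_t4(T_over_m):
    # ratio |T^6 term| / |T^4 term| at quadratic order in the invariants
    return float(c6 / c4) * math.pi**2 * T_over_m**2

print(t6_over_t4(0.1))    # a few percent for this configuration
```

For the purely magnetic configuration the correction stays at the few-percent level around $T/m\sim 0.1$, consistent with the bound quoted above.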
Finally, we consider ${\cal L}^{2T}\bigl|_{T^4}$ in the limit of
purely magnetic background fields: $b\to 0$, $a\to B$, ${\cal F}+{\cal
E}\to \frac{1}{2} B^2$. The $T^4$-term in Eq. \re{90} then reduces
to:
\begin{equation}
{\cal L}^{2T}(B)\biggl|_{T^4}=\frac{\alpha\pi}{90}\,
T^4\int\limits_0^\infty \frac{dz}{z}\, \text{e}^{-\frac{m^2}{eB}z} \left
[ \frac{1-z\coth z}{\sinh^2 z} +\frac{1}{3} z\coth z\right],
\label{97}
\end{equation}
where we have performed the substitution $eas\to -\text{i} z$ in concordance
with the causal prescription $m^2\to m^2-\text{i}\epsilon$. Incidentally,
the limit of purely electric fields can simply be obtained by
replacing $B\to -\text{i} E$ and multiplying Eq. \re{97} by $(-1)$.
Introducing the critical field strength $B_{\text{cr}}:=\frac{m^2}{e}$, we can
evaluate the integral in Eq. \re{97} analytically
\cite{ditt98}\footnote{We take the opportunity to remark that there is
a misprint in the corresponding integration result in \cite{ditt98};
the term $(+1/3)$ has to be replaced by $(+1/6)$
(cf. Eq. \re{101}).} and obtain:
\begin{eqnarray}
{\cal L}^{2T}(B)\biggl|_{T^4}&=&\frac{\alpha\pi}{90}T^4\biggl[
\Bigl(
{\scriptstyle \frac{B_{\text{cr}}^2}{2B^2}}\!-\!{\scriptstyle \frac{1}{3}}\Bigr)
\psi (1\!+ \!{\scriptstyle \frac{B_{\text{cr}}}{2B}})-\frac{2B_{\text{cr}}}{B}\ln \Gamma
({\scriptstyle \frac{B_{\text{cr}}}{2B}}) -\frac{3B_{\text{cr}}^2}{4B^2}
\nonumber\\
&&\qquad\qquad\qquad\qquad\!-\frac{B_{\text{cr}}}{2B}+\frac{B_{\text{cr}}}{B}\ln 2\pi
+\!\frac{1}{6}\!+4 \zeta '(-1,{\scriptstyle \frac{B_{\text{cr}}}{2B}})
+\frac{B}{3B_{\text{cr}}} \biggr], \label{101}
\end{eqnarray}
where $\psi(x)$ denotes the logarithmic derivative of the
$\Gamma$-function, and $\zeta'(s,q)$ is the first derivative of the
Hurwitz $\zeta$-function with respect to its first argument.
For strong magnetic fields, $B\gg B_{\text{cr}}$, the last term in square
brackets in Eq. \re{101} dominates the whole expression, and we find a
linear increase of the effective Lagrangian:
\begin{equation}
{\cal L}^{2T}(B\gg B_{\text{cr}})\biggr|_{T^4}=\frac{\alpha\pi}{270}\, T^4\,
\frac{eB}{m^2}. \label{100}
\end{equation}
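The asymptotic behaviour \re{100} states that the proper-time integral of Eq. \re{97} approaches $\frac{1}{3}\frac{eB}{m^2}$ for $B\gg B_{\text{cr}}$, i.e., $I(c)\to 1/(3c)$ for $c:=m^2/(eB)\to 0$. A rough numerical sketch (our own check, stdlib only; the large-$z$ asymptote $1/3$ of $f(z)/z$ is split off so that its integral gives $1/(3c)$ exactly, and the region $z<0.01$ is dropped, its contribution being negligible at this accuracy):

```python
import math

def f(z):
    # integrand bracket of Eq. (97)
    zcoth = z / math.tanh(z)                 # z coth z
    return (1.0 - zcoth) / math.sinh(z)**2 + zcoth / 3.0

def simpson(g, a, b, n=20000):
    # composite Simpson rule (n must be even)
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(g(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

def I(c):
    # I(c) = int_0^inf dz/z e^{-cz} f(z), with c = m^2/(eB);
    # the split-off asymptote f(z)/z -> 1/3 integrates to 1/(3c) exactly,
    # while the remainder decays exponentially and can be truncated
    corr = simpson(lambda z: (f(z) / z - 1.0 / 3.0) * math.exp(-c * z),
                   0.01, 30.0)
    return 1.0 / (3.0 * c) + corr

c = 1e-3     # i.e. B = 1000 B_cr (illustrative)
print(3 * c * I(c))   # tends to 1 in the strong-field limit
```

The product $3c\,I(c)$ approaches 1 as $c$ decreases, reproducing the linear growth of Eq. \re{100}.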
This contribution remains subdominant compared to the one arising from
pure vacuum polarization $\sim B^2 \ln \frac{eB}{m^2}$, which is not
surprising, since the magnetization of (real) thermalized plasma
particles is bounded: the spins can at most be completely
aligned. In contrast, the non-linearities of vacuum polarization set
no such upper bound. Quantitatively, the same result was found for the
thermal one-loop contribution \cite{elmf93}.
\section{Light Propagation}
As a first application, we study the propagation of plane light waves
at finite temperature and in a magnetic background. The subject of
light propagation has recently gained renewed interest due to its
accessibility to current experimental facilities \cite{peng98}.
In the limit of low-frequency light, $\omega\ll m$, the effective
action for slowly varying fields has proved useful for obtaining
velocity shifts, i.e., refractive indices of QED vacua which are
modified by various external perturbations such as fields and
temperature \cite{heyl97,ditt98}. In this limit of low frequencies and
smooth external perturbations, the terms involving derivatives of the
fields in a derivative expansion of the effective action can be
neglected, and the constant-field approximation is appropriate.
The case of light propagation at finite temperature has been
investigated in \cite{gies99b} from a general viewpoint for a class of
Lagrangians depending on the invariants ${\cal E},{\cal F},{\cal G},
T$ in an arbitrary way. Therein, a light cone condition representing a
sum-rule for the polarization modes of the propagating light has been
derived; this has been exploited for a detailed investigation of light
propagation at finite temperature to one-loop order by an insertion of
the thermal one-loop effective Lagrangian of QED. It has been
emphasized that these one-loop studies apply to a domain of
intermediate values of temperature, $0.1\lesssim T/m \lesssim 1$,
where two-loop as well as plasma effects remain subdominant.
The famous results for the low-temperature velocity shift $\delta
v\sim T^4/m^4$ \cite{tarr83,bart90,lato95} could not be recovered
by this first-principles investigation, because the
thermal two-loop effective action was not at hand. In the present
work, we intend to fill this last gap.
Let us first consider the situation of a thermalized QED vacuum
without an additional background field. In the low-temperature domain,
this vacuum is then characterized by the Lagrangian ${\cal L}=-{\cal
F}+{\cal L}^{2T}$, where $-{\cal F}$ represents the classical
Maxwell term. Following the lines of \cite{gies99b}, the phase
and group velocity $v$ of a propagating plane wave is then given by:
\begin{equation}
v^2=\frac{1}{1+\frac{2\, \partial_{\cal E}{\cal L}}{(-\partial_{\cal F}
{\cal L} +\partial_{{\cal E}}{\cal L})}}, \label{2.6}
\end{equation}
where $v=\frac{k^0}{|\mathbf{k}|}$ is constructed from the wave vector
of the propagating light, and it is understood that the partial
derivatives of ${\cal L}$ are evaluated in the zero-field limit.
Inserting Eqs. \re{73} and \re{75} into Eq. \re{2.6}, leads us to:
\begin{equation}
v^2=\frac{1}{1+2\frac{44}{2025} \alpha^2\pi^2
\frac{T^4}{m^4}}\simeq 1-2\frac{44}{2025} \alpha^2\pi^2
\frac{T^4}{m^4}+{\cal O}(T^8/m^8). \label{2.7}
\end{equation}
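To get a feeling for the magnitude of the shift \re{2.7}, consider the following minimal numerical sketch (the values of $\alpha$ and $T/m$ are illustrative inputs):

```python
import math

alpha = 1 / 137.036    # fine-structure constant

def delta_v2(T_over_m):
    # velocity shift 1 - v^2 according to Eq. (2.7)
    return 2 * (44 / 2025) * alpha**2 * math.pi**2 * T_over_m**4

print(delta_v2(0.1))      # near the upper edge of the low-temperature domain
print(delta_v2(5.1e-8))   # roughly room temperature, T ~ 300 K
```

Even at the upper edge of the low-temperature domain the shift is of order $10^{-9}$, and at room temperature it is utterly negligible, which underlines the academic character of this effect.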
Note that there is no $T^6$-term, since ${\cal L}^{2T}|_{T^6}$ is at
least quadratic in the field invariants. In Eq. \re{2.7}, we
rediscovered the well-known velocity shifts for light propagation in a
thermal background as found in \cite{tarr83,lato95} via the two-loop
polarization operator and in \cite{bart90,ditt98} via considering vacuum
expectation values of field bilinears in a thermal background. The
rederivation presented here within the effective-action approach from
first principles can thus be viewed as an independent check of
our calculations and of the light cone condition as derived in
\cite{gies99b}.
But we can go one step further and additionally take a weak external
magnetic field into account; the light cone condition in this case
reads \cite{gies99b}:
\begin{equation}
0=\bigl(\partial_{\cal F}{\cal L}\!-\partial_{{\cal E}}{\cal
L}\!-{\cal F}\partial^2_{\cal G}{\cal L} \bigr)k^2\!
+\frac{1}{2} \bigl( \partial^2_{\cal F} \!+\partial^2_{\cal G}\bigr)
{\cal L}\, z_k
+ 2\partial_{\cal E}{\cal L}\,
(ku)^2,\label{2.14}
\end{equation}
where $u^\mu$ denotes the 4-velocity vector of the heat bath and $z_k$
is defined in Eq. \re{0c}. The Lagrangian describing a thermal QED
vacuum with weak magnetic background fields at finite temperature is
given by ${\cal L}=-{\cal F}+{\cal L}^{1}+{\cal L}^{2T}$, where ${\cal
L}^1$ denotes the one-loop effective Lagrangian at zero temperature.
Up to the second order in the invariants, this famous Heisenberg-Euler
Lagrangian ${\cal L}^1$ is given by:
\begin{equation}
{\cal L}^1=\frac{8}{45} \frac{\alpha^2}{m^4} {\cal F}^2 +
\frac{14}{45} \frac{\alpha^2}{m^4} {\cal G}^2. \label{HE}
\end{equation}
Inserting all the relevant contributions to ${\cal L}$ into the light
cone condition Eq. \re{2.14}, the light velocity to lowest order in
the parameters $T$ and $B$ finally yields:
\begin{equation}
v^2\!=1-\frac{22}{45} \frac{\alpha^2}{m^4} B^2 \sin^2\theta_B -2
\frac{44}{2025} \alpha^2\pi^2 \frac{T^4}{m^4} + \frac{22}{45}
\frac{\alpha^2}{m^4} \left(\!
\frac{2^5\cdot37}{3^2\cdot5\cdot7\cdot11} \alpha\pi^3
\frac{T^4}{m^4}\! \right) B^2(1+\sin^2 \theta_B)\!,
\label{LP7}
\end{equation}
where $\theta_B$ denotes the angle between the propagation direction
and the magnetic field (cf. Eq. \re{0c}). The second and third term
are the well-known velocity shifts for purely magnetic
\cite{adle71,bial70}
and purely thermal vacua (cf. Eq. \re{2.7}), respectively. The last
term describes a non-trivial interplay between these two vacuum
modifications. The latter can best be elucidated in the various limits
of the angle $\theta_B$; for orthogonal propagation to the magnetic
field $\theta_B=\pi/2$, we get:
\begin{equation}
v^2=1-2\frac{44}{2025} \alpha^2\pi^2 \frac{T^4}{m^4}
-\frac{22}{45} \frac{\alpha^2}{m^4} B^2 \left
( 1-(0.15...)\cdot\frac{T^4}{m^4}\right). \label{LP8}
\end{equation}
For parallel propagation to the magnetic field $\theta_B=0$, we find:
\begin{equation}
v^2=1-2\frac{44}{2025} \alpha^2\pi^2 \frac{T^4}{m^4} \left
( 1-(0.96...)\cdot
\left(\!\frac{eB}{m^2}\!\right)^2\right).\label{LP9}
\end{equation}
Since $T/m$ and $eB/m^2$ are considered to be small in each case, the
corrections to the pure effects in the mixed situation are
comparatively small. Note that the mixed thermal and magnetic corrections always
diminish the values for the velocity shift of the pure magnetic or
thermal situations. Let us finally remind the reader that the
velocities given here hold for low-frequency light ($\omega\ll m$)
only and represent averages over the two possible polarization modes.
While for the purely thermal case the polarization modes cannot be
distinguished, the situation involving an electromagnetic field
generally leads to birefringence due to the existence of a preferred
direction of the field lines. \bigskip
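The numerical factors $0.15...$ and $0.96...$ in Eqs. \re{LP8} and \re{LP9} follow from the coefficients of Eq. \re{LP7} by specializing $\theta_B$; as a sketch of the bookkeeping (the conversion $B^2/m^4=(eB/m^2)^2/(4\pi\alpha)$ assumes Heaviside--Lorentz units with $e^2=4\pi\alpha$):

```python
import math

alpha = 1 / 137.036

# bracketed mixed-term coefficient of Eq. (LP7), in units of (T/m)^4
C = (2**5 * 37) / (3**2 * 5 * 7 * 11) * alpha * math.pi**3

# theta_B = pi/2: ratio of the mixed term to the pure magnetic term
f_perp = 2 * C
# theta_B = 0: ratio of the mixed term to the pure thermal term,
# using B^2/m^4 = (eB/m^2)^2 / (4 pi alpha); alpha cancels in the ratio
f_par = ((22 / 45) * C) / (2 * (44 / 2025) * math.pi**2 * 4 * math.pi * alpha)
print(f_perp, f_par)   # -> 0.15... and 0.96...
```

Both factors reproduce the numbers quoted in Eqs. \re{LP8} and \re{LP9}.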
Let us finally comment on the earlier works \cite{bart90,ditt98}
related to the issue of light propagation in a thermal
background. The philosophy therein was to calculate the velocity
shifts in a purely (weak) electromagnetic background first, and then
take thermal vacuum expectation values of the field
bilinears. Expressing this in formulas, we first recall the expression
for the propagation-direction-averaged light velocity in a weak
electromagnetic background from \cite{ditt98}:
\begin{equation}
v^2=1-\frac{2}{3} (\partial_{{\cal F}}^2 +\partial_{{\cal G}}^2){\cal
L}\,\, T^{00}, \label{LP10}
\end{equation}
where $T^{00}=\frac{1}{2} (E^2+B^2)$ denotes the 00-component of the
energy-momentum tensor, i.e., the energy density of the electromagnetic
field. In the weak-field limit, $(\partial_{{\cal F}}^2
+\partial_{{\cal G}}^2){\cal L}$ is field independent: $2\frac{22}{45}
\frac{\alpha^2}{m^4}$ (cf. Eq. \re{HE}); therefore, taking thermal
vacuum expectation values of the field quantities in Eq. \re{LP10}
is simply equivalent to replacing $T^{00}$ by $\langle
T^{00}\rangle^T=\frac{\pi^2}{15} T^4$. This then leads to the correct
result as given in Eq. \re{2.7}.
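The thermal expectation value $\langle T^{00}\rangle^T=\frac{\pi^2}{15}T^4$ is just the Stefan--Boltzmann energy density of the two photon polarizations; as a quick numerical check (units $\hbar=c=k_B=1$):

```python
import math

def simpson(f, a, b, n=20000):
    # composite Simpson rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

T = 1.0
# energy density of blackbody photons:
#   2 * int d^3k/(2 pi)^3 omega n_B(omega) = (1/pi^2) int dk k^3/(e^{k/T}-1)
rho = simpson(lambda k: k**3 / math.expm1(k / T), 1e-8, 60.0 * T) / math.pi**2
print(rho, math.pi**2 / 15 * T**4)   # both ~ 0.658
```

The numerical integral reproduces $\pi^2 T^4/15$, i.e., $\Gamma(4)\zeta(4)/\pi^2$ per unit $T^4$.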
From the viewpoint of the present work, the correctness of the
approach of \cite{bart90,ditt98} arises from the special form of the
low-temperature two-loop Lagrangian ${\cal L}^{2T}|_{T^4}$ as given in
Eq. \re{102a}. Since
\begin{equation}
\frac{1}{a^2+b^2}(\partial_a^2+\partial_b^2) =\partial_{\cal
F}^2+\partial_{\cal G}^2, \label{LP11}
\end{equation}
Eq. \re{102a} can also be written as:
\begin{equation}
\partial_{\cal E} {\cal L}^{2T}\biggr|_{T^4} =\frac{2}{3} \langle
T^{00}\rangle^T\, \frac{1}{2} (\partial_{\cal
F}^2+\partial_{\cal G}^2){\cal L}^1. \label{LP12}
\end{equation}
Incidentally, Eq. \re{LP12} holds for arbitrary field strength, but, in
this line of argument, it is required for weak fields only. Inserting
Eq. \re{LP12} into the correct light cone condition at finite
temperature, i.e., Eq. \re{2.6}, we obtain to lowest order:
\begin{equation}
v^2\simeq1-2\partial_{\cal E}{\cal L} =1-\frac{2}{3} (\partial_{{\cal
F}}^2 +\partial_{{\cal G}}^2){\cal L}\,\,\langle T^{00}\rangle^T,
\label{LP13}
\end{equation}
which is equal to the heuristically deduced light cone condition for a
thermal QED vacuum \cite{bart90,ditt98}.
Note that the combined low-temperature/weak-field effects as given in
Eqs. \re{LP7}-\re{LP9} could not have been found in \cite{ditt98},
since the invariant structure is not completely taken into account in
the heuristic approach. Whether the intermediate-temperature domain
has been correctly modeled at two-loop order by the heuristic
approach in \cite{ditt98} cannot be
judged within the present work. Note, however, that the
intermediate-temperature domain is controlled by one-loop effects,
leading to a maximum velocity shift of $-\delta v_{\text{max}}^2
=\frac{\alpha}{3\pi}$ \cite{gies99b}. As has been shown therein, the
{\em two-loop dominance} is lost for $T/m\geq0.058$.
\section{Photon Splitting}
Photon splitting in magnetic fields at zero temperature has been
discussed comprehensively by Adler \cite{adle71}, stressing its
relevance for the photon physics of compact astrophysical objects (see
also \cite{bari95}). For the description of the splitting process for
low-frequency photons with $\omega\ll m$ at weak magnetic fields
$\frac{eB}{m^2}\ll 1$, the use of the one-loop effective Lagrangian
for weak fields is sufficient for obtaining a good estimate of the
absorption coefficient for photon splitting. To be precise, the lowest
order contribution to the splitting process comes from the terms of
third order in the invariants (sixth order in the field strength) of
${\cal L}^1$, i.e., the hexagon graph with one incoming and two outgoing
photons and three couplings to the external magnetic field. Neglecting
dispersion effects, the box graph vanishes because ${\cal L}^1$
depends on ${\cal F}$ and ${\cal G}$ only and because of the
Lorentz kinematics of the photons.\footnote{Taking dispersion effects
into account, the box graph still is only an order $\alpha$
correction to the hexagon graph.}
The question of thermally induced photon splitting has recently been
investigated by Elmfors and Skagerstam \cite{elmf98} with the aid of
the thermal one-loop effective QED Lagrangian; their studies were
motivated by the fact that a vacuum may be a bad approximation for the
surroundings of some astrophysical compact objects, while a
thermalized environment at zero or finite density might be more
appropriate. It turned out that, at temperatures and magnetic fields at
the scale of the electron mass, the thermal contribution can exceed
the zero-temperature one, but these effects are then masked by
Compton scattering of the photons off the plasma. In realistic
situations, the thermally induced process will thus be of subdominant
importance.
In the following, we intend to complete these results about thermally
induced photon splitting with the dominant low-temperature
contributions stemming from the two-loop process. Hereby, we also
concentrate on the splitting process ($\bot \to \|_1+\|_2$), where a
photon, with its electric field vector orthogonal ($\bot$) to the
plane spanned by the external magnetic field and the propagation
direction, splits into two photons with their electric field vectors
within ($\|$) that plane.\footnote{Note that Adler's definitions of
  the $\|,\bot$-modes rely on the direction of the magnetic field
  vector of the photon and are thus opposite to ours.} This is the
only allowed process when dispersion effects are taken into account.
As pointed out in \cite{elmf98}, the box-graph no longer vanishes at
finite temperature, since the Lagrangian now involves an additional
invariant. Hence, the lowest-order contribution to the
photon-splitting matrix element is already produced by the terms of
quadratic order in the invariants in Eqs. \re{73} and \re{75}.
Without going into details, we recall that the splitting amplitude is
obtained by attaching the external photon legs to the fermion loop,
i.e., differentiating the effective action (which is represented by
the loop) thrice with respect to the fields and then contracting the
result with the field strengths of the involved photons. Here, one
has to take into account that the effective Lagrangian now depends on
three field invariants: ${\cal E},{\cal F}$, and ${\cal G}$. The
thermal amplitude arising from the box-graph finally yields:
\begin{equation}
{\cal M}(\bot\to\|_1+\|_2)=2\omega\omega_1\omega_2\, B\sin\theta_B
\,\partial_{{\cal E}{\cal F}} {\cal L}, \label{PS15}
\end{equation}
where $\omega,\omega_1,\omega_2$ denote the frequencies of the
incoming and the two outgoing photons, respectively, and $\theta_B$
again represents the angle between the propagation direction and the
magnetic field. From the splitting amplitude, we obtain the absorption
coefficient $\kappa$ via the formula:
\begin{equation}
\kappa=\frac{1}{32\pi\omega^2} \int\limits_0^\omega d\omega_1
\int\limits_0^\omega d\omega_2\, \delta(\omega-\omega_1-\omega_2)\,\,
{\cal M}^2. \label{PS16}
\end{equation}
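Since the amplitude in Eq. \re{PS15} is linear in each photon frequency, the squared amplitude contributes a factor $\omega_1^2\omega_2^2$ to the phase-space integral, which can be evaluated elementarily (a short intermediate step, recorded here for reference):

```latex
\begin{equation*}
\int\limits_0^\omega d\omega_1 \int\limits_0^\omega d\omega_2\,
\delta(\omega-\omega_1-\omega_2)\, \omega_1^2\,\omega_2^2
=\int\limits_0^\omega d\omega_1\, \omega_1^2\,(\omega-\omega_1)^2
=\frac{\omega^5}{30}.
\end{equation*}
```

Together with the explicit $\omega^2$ of the squared amplitude and the $1/\omega^2$ of the prefactor, this integral is the origin of the characteristic $\omega^5$ law for low-frequency photon splitting.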
Inserting Eq. \re{PS15} for the thermal splitting amplitude into
Eq. \re{PS16} leads us to:
\begin{equation}
\frac{\kappa}{m} =\frac{1}{2^6\cdot3\cdot 5\pi^2}
\left(\frac{eB}{m^2}\right)^2 \sin^2\theta_B
\left( \frac{\omega}{m}\right)^5 (\partial_{{\cal E}{\cal
F}}{\cal L})^2 \,m^8. \label{PS20}
\end{equation}
Here, we encounter the typical $(\omega/m)^5$-dependence of the
photon-splitting absorption coefficient for low-frequency photons. The
appearance of the magnetic field to the second power is directly
related to the fact that the box-graph exhibits only one coupling to
the external field. In contrast, Adler's result for the absorption
coefficient at zero temperature arising from the hexagon graph reads
\cite{adle71}:
\begin{equation}
\frac{\kappa^{T=0}}{m} =\frac{13^2}{3^5\cdot 5^3\cdot 7^2}
\frac{\alpha^3}{\pi^2} \left(\frac{eB}{m^2}\right)^6
\sin^6\theta_B\left( \frac{\omega}{m}\right)^5. \label{PS19}
\end{equation}
Here, the three couplings to the external magnetic field produce a
$B^6$-dependence of the absorption coefficient. Therefore, any
finite-temperature contribution will exceed the zero-temperature one
for small enough magnetic fields; but, of course, the absorption
coefficients may then become very tiny.
In order to obtain the one-loop and two-loop absorption coefficients
for thermally induced photon splitting at low temperature, the
derivatives of the corresponding Lagrangian are required in
Eq. \re{PS20}:
\begin{eqnarray}
\partial_{{\cal E}{\cal F}}{\cal L}^{1T} &=&\left[ \frac{8\alpha^2}{45}
\left( \frac{m}{T}\right)^2 +\frac{4\pi\alpha^2}{45} \left
( \frac{m}{T}\right)^3 \right] \frac{\text{e}^{-\frac{m}{T}}}{m^4},
\label{PS23}\\
\partial_{{\cal E}{\cal F}}{\cal L}^{2T} &=&\left
[ -\frac{2^6\cdot37\alpha^3\pi^3}{3^4\cdot5^2\cdot7} \,
\left(\frac{T}{m}\right)^4 +\frac{2^{14}
\alpha^3\pi^5}{3^5\cdot5\cdot7^2} \left( \frac{T}{m}\right)^6
\right] \frac{1}{m^4}, \label{PS21}
\end{eqnarray}
where we made use of the results of \cite{elmf98} for the
low-temperature/weak-field approximation of the one-loop Lagrangian
${\cal L}^{1T}$, and employed Eqs. \re{73} and \re{75} for the
two-loop one. Obviously, inserting the two-loop terms from Eq.
\re{PS21} into Eq. \re{PS20} leads to a power-law dependence of the
absorption coefficient $\sim T^8/m^8$, while the one-loop terms from
Eq. \re{PS23} imply an exponential mass damping $\exp (-2m/T)$ for
$T\to 0$.
As mentioned above, photons of frequencies below the pair-production
threshold are not only exposed to splitting at finite temperature, but
can also scatter directly off the plasma of electrons and
positrons. Following \cite{elmf98}, the absorption coefficient for the
Compton process is given by
\begin{equation}
\frac{\kappa_{\text{C}}}{m} =\frac{\sigma_{\text{C}}}{m}
\frac{2}{\pi^2} \int\limits_0^\infty dp \frac{p^2}{\text{e}^{\omega_e/T}+1},
\label{PS25}
\end{equation}
where $\omega_e$ denotes the fermion energy $\omega_e=\sqrt{p^2+m^2}$,
and the cross section $\sigma_{\text{C}}$ for unpolarized photons at
$\omega/m\simeq1$ is approximately given by:
\begin{equation}
\sigma_{\text{C}}\simeq\frac{4\pi\alpha^2}{3m^2}. \label{PS26}
\end{equation}
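For $T\ll m$, the Fermi distribution in Eq. \re{PS25} may be replaced by a Boltzmann factor; the resulting Gaussian integral yields the estimate (our own expansion, not taken from \cite{elmf98}):

```latex
\begin{equation*}
\frac{\kappa_{\text{C}}}{m}\simeq \frac{\sigma_{\text{C}}}{m}\,
\frac{2}{\pi^2}\,\text{e}^{-m/T}\int\limits_0^\infty dp\, p^2\,
\text{e}^{-p^2/(2mT)}
=\frac{8\alpha^2}{3\pi}\sqrt{\frac{\pi}{2}}
\left(\frac{T}{m}\right)^{3/2}\text{e}^{-m/T},
\end{equation*}
```

which exhibits the same exponential suppression $\sim\text{e}^{-m/T}$ as the thermal one-loop splitting contribution.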
Although $\omega/m\simeq1$ formally represents the upper limit of
validity of our constant-field approximation for the effective action,
we will continue to consider photons of that frequency in the
following, since, on the one hand, this circumvents a suppression of
the absorption coefficients by the common factor $(\omega/m)^5$, and
on the other hand, it has been shown for the hexagon graph in
\cite{adle71} that the difference between $\omega/m=1$- and
$\omega/m\sim0$-calculations is negligible for weak magnetic fields.
Finally, we have to consider another scattering process which arises
from the presence of a heat bath: photon-photon scattering between the
propagating photon and the black-body radiation of the thermal
background. We estimate the absorption coefficient for this process by
\begin{equation}
\frac{\kappa_{\gamma\gamma}}{m}= \frac{\sigma_{\gamma\gamma}
n_\gamma}{m}, \label{gg1}
\end{equation}
where $n_\gamma$ denotes the density of photons and is given by:
\begin{equation}
n_\gamma=2 \int\frac{d^3 p}{(2\pi)^3} \frac{1}{\text{e}^{\sqrt{p^2}/T} -1}
=\frac{2\zeta(3)}{\pi^2} \, T^3. \label{gg2}
\end{equation}
Here we encounter the Riemann $\zeta$-function with
$\zeta(3)\simeq1.202$. The total polarization-averaged cross-section
for photon-photon scattering at low frequencies, as one obtains, e.g.,
from the Heisenberg-Euler Lagrangian \cite{eule36}, reads:
\begin{equation}
\sigma_{\gamma\gamma}=\frac{973}{10125} \frac{\alpha^2}{\pi}
\frac{\alpha^2}{m^2} \left( \frac{\omega_{\text{CM}}}{m}\right)^6,
\label{gg3}
\end{equation}
where $\omega_{\text{CM}}$ denotes the frequency of both photons in
the center-of-mass frame. In order to determine $\omega_{\text{CM}}$,
we first have to find the mean frequency at temperature $T$. Averaging
over the thermal probability distribution for the photons, we find the
mean value $\omega_T=\frac{\pi^4}{30\zeta(3)} T\simeq 2.701 T$.
According to relativistic kinematics, the average value for the
CM-frequency $\omega_{\text{CM}}$ is given by $\omega_{\text{CM}}=
\sqrt{\omega \omega_T/2}\simeq 1.16 \sqrt{T\omega}$, where we averaged
over the propagation direction of the thermal photons. Putting
everything together, we obtain for the absorption coefficient for
photon-photon scattering with the thermal background:
\begin{equation}
\frac{\kappa_{\gamma\gamma}}{m} =\frac{7\cdot 139}{2^5\cdot 3^7\cdot
5^6} \frac{\pi^9}{\zeta(3)^2} \alpha^4 \left(\frac{T}{m}\right)^6
\left(\frac{\omega}{m}\right)^3 \simeq 5.21\cdot 10^{-11}
\left(\frac{T}{m}\right)^6 \left(\frac{\omega}{m}\right)^3
. \label{gg4}
\end{equation}
Since the average frequency of the heat-bath photons is proportional
to the temperature, this formula becomes invalid for $T\sim m$ and
above, because we employed the low-frequency cross-section in
Eq. \re{gg1}.
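The decimal prefactor quoted in Eq. \re{gg4} and the mean thermal frequency used above follow directly from the exact rational coefficients; a few lines of Python (an illustrative sketch, not part of the derivation) confirm the rounding:

```python
import math

ALPHA = 1 / 137.036   # fine-structure constant
ZETA3 = 1.2020569     # Riemann zeta(3)

# mean frequency of thermal photons: omega_T = pi^4 / (30 zeta(3)) * T
omega_T_over_T = math.pi**4 / (30 * ZETA3)

# prefactor of (T/m)^6 (omega/m)^3 in the photon-photon absorption coefficient
prefactor = (7 * 139) / (2**5 * 3**7 * 5**6) * math.pi**9 / ZETA3**2 * ALPHA**4

print(f"omega_T/T = {omega_T_over_T:.4f}")  # ~ 2.701
print(f"prefactor = {prefactor:.3e}")       # ~ 5.21e-11
```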
It is already clear from a qualitative viewpoint that there must be a
domain where the two-loop splitting process at least exceeds the
one-loop and the Compton contributions due to the power-law dependence
on the temperature. But since $\kappa^{2T}\sim (T/m)^8$ and
$\kappa_{\gamma\gamma}\sim (T/m)^6$, the two-loop contribution will
eventually be surpassed by the photon-photon scattering for $T\to 0$.
\begin{figure}
\begin{flushleft}
\begin{picture}(145,70)
\put(0,0){
\epsfig{figure=figPS1.eps,width=7.8cm}
}
\put(0,70){(a): \qquad$\frac{eB}{m^2}=0.2$, $\frac{\omega}{m}=1=\sin
\theta_B$}
\put(83,0){
\epsfig{figure=figPS2.eps,width=7.8cm}
}
\put(85,70){(b):\qquad$\frac{eB}{m^2}=10^{-4}$, $\frac{\omega}{m}=1=\sin
\theta_B$}
\end{picture}
\end{flushleft}
\caption{Absorption coefficient $\kappa$ in units of the electron mass
versus temperature $T$ in units of the electron mass. In Fig. (a),
the various contributions are plotted for parameter values of a
realistic astrophysical system. In Fig. (b), the parameters are
chosen in such a way that the two-loop dominance over the one-loop
and the Compton process is revealed; the photon-photon scattering
  contribution cannot be overtaken in the low-temperature limit.}
\label{figPS1}
\end{figure}
However, quantitative results can only be revealed by numerical
studies. In fact, as shown in Fig. \ref{figPS1}(a), the two-loop
contribution is completely irrelevant for parameter values which may
be appropriate for a neutron star system and which are close to the
upper bound of validity of our approximation: $\frac{eB}{m^2}=0.2$,
$\omega/m=1$, $\sin \theta_B=1$, and $T/m=0.05\dots0.1$. Even the
one-loop contribution is small compared to the zero-temperature
result; but all are negligible compared to the Compton
process.
Concentrating on the relative strengths of the thermal splitting
processes, the one-loop contribution loses its major role for
$T/m\leq0.041$, where its exponential decrease is surpassed by the
two-loop power law.
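The quoted crossover temperature can be reproduced numerically by bisecting on the difference of the derivatives in Eqs. \re{PS23} and \re{PS21}, which enter the absorption coefficient quadratically (an illustrative sketch with the constants transcribed from those equations):

```python
import math

ALPHA = 1 / 137.036  # fine-structure constant

def d_one_loop(t):
    """|d^2 L^{1T}/dE dF| in units of 1/m^4, Eq. (PS23); t = T/m."""
    x = 1.0 / t
    return (8 * ALPHA**2 / 45 * x**2
            + 4 * math.pi * ALPHA**2 / 45 * x**3) * math.exp(-x)

def d_two_loop(t):
    """|d^2 L^{2T}/dE dF| in units of 1/m^4, Eq. (PS21); t = T/m."""
    c4 = 2**6 * 37 * ALPHA**3 * math.pi**3 / (3**4 * 5**2 * 7)
    c6 = 2**14 * ALPHA**3 * math.pi**5 / (3**5 * 5 * 7**2)
    return abs(-c4 * t**4 + c6 * t**6)

def crossover(lo=0.01, hi=0.1, steps=60):
    """Bisect for the T/m at which the exponentially damped one-loop
    term overtakes the two-loop power law (one-loop dominates at high T)."""
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if d_one_loop(mid) > d_two_loop(mid):
            hi = mid  # one-loop already dominates: crossover lies below
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(f"crossover at T/m = {crossover():.3f}")  # ~ 0.041
```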
In order to find a domain in which the two-loop splitting wins out
over the zero-temperature process, we have to look at smaller values
of the magnetic field strength; e.g., at values of temperature
$T/m=0.025$, the two-loop process exceeds the zero-temperature one for
$\frac{eB}{m^2} \leq 2.1\cdot 10^{-4}$. Since these are more moderate
field strengths, the absorption coefficient naturally becomes very
small: $\kappa/m\sim 10^{-34}\dots 10^{-33}$. Hence, in order to be
able to measure the splitting rate, the extension of the magnetic
field in which the photon propagates must be comparable to
galactic scales.
Finally, we have plotted the Compton and photon-photon absorption
coefficients, $\kappa_{\text{C}}$ and $\kappa_{\gamma\gamma}$, and the
two-loop coefficient $\kappa^{2T}$ for a
weak magnetic field $\frac{eB}{m^2}=10^{-4}$ at $T/m=0.001\dots 0.1$
in Fig. \ref{figPS1}(b). Obviously, the Compton process loses its
dominant role for $T/m\leq0.03$; below, the absorption coefficient
is ruled by the photon-photon scattering as long as the temperature
does not become so small that only the zero-temperature amplitude
remains. As is also made visible in Fig. \ref{figPS1}(b), the two-loop
contribution does not exceed the photon-photon process, due to the
weaker temperature dependence of the latter. Hence, we may summarize
that the photon absorption coefficient in the low-temperature domain
is either dominated by the zero-temperature contribution for strong
magnetic fields or by the photon-photon scattering with the thermal
background for weak fields. So the two-loop contribution always
belongs to the top flight but is never ranked first.
In order to account for realistic astrophysical systems, it is
compulsory to include a finite chemical potential. First estimates can
be found in \cite{elmf98} to one-loop order, where indications were
found that a finite chemical potential of $\mu\simeq m$ may induce an
increase of the thermal splitting amplitude at low temperatures. In
order to settle this question properly, the present paper shows that
an investigation of these systems should take the two-loop
contributions into account.
Let us conclude this section with the remark that in order to obtain
the sum of the zero-temperature and the thermal contributions to the
photon splitting absorption coefficient, the amplitudes must be summed
up coherently, since the final states of the processes coincide, and
the thermal vacuum with a constant background field does not provide
a mechanism of decoherence. While the zero-temperature amplitude
as well as the thermal one-loop amplitude are strictly positive, the
$T^4$-term in Eq. \re{PS21} contributes with a negative sign. Hence,
an exceptional curve in the parameter space of $\frac{eB}{m^2}$ and
$T/m$ exists where the thermal two-loop amplitude interferes with the
thermal one-loop and zero-temperature amplitudes destructively so that
photon splitting vanishes.
\section{Pair Production}
Thermally induced pair production in electric fields has been searched
for at the one-loop level for a long time
\cite{cox84,loew92,hall94,gang95,gang98} with highly contradictory
results. In our opinion, the final concordant judgement in the
real-time formalism \cite{elmf94}, the functional Schr\"odinger
approach \cite{hall94}, as well as the imaginary-time formalism
\cite{gies99a} is that there is no imaginary part in the thermal
contribution to the effective action to one loop, implying the absence
of thermally induced pair production to this order of calculation. As
already mentioned in the introduction, drawing conclusions from an
imaginary part of the thermal effective action to pair production is
not as immediate and straightforward as at zero-temperature, since the
presence of an electric pair-producing field and the thermal
equilibrium assumption which is inherent to our approach contradict
each other.
In the following, we simply assume that on the one hand, the time
scale of pair production is much shorter than the time scale of the
depletion of the electric field so that dynamical back-reactions can
be neglected (this assumption is familiar from the zero-temperature
Schwinger formula). On the other hand, we also assume that the state
of the plasma is appropriately approximated by a thermal equilibrium
although it is exposed to an electric field. Whether the assumption on
thermal equilibrium is justified in concrete experimental situations
such as, e.g., heavy ion collisions, is still under discussion.
Let us now turn to the computation of the imaginary part of the
two-loop thermal effective action for external electric fields. For
this, we concentrate on the $T^4$-contribution as given in
Eq. \re{90}. For purely electric fields, $a\to0$, $b\to E$, ${\cal
E}+{\cal F}\to\frac{1}{2} E^2$, this reads:
\begin{equation}
{\cal L}^{2T}(E)\biggl|_{T^4}=-\frac{\alpha\pi}{90}\, T^4
\int\limits_0^\infty \frac{dz}{z}\, \text{e}^{-\text{i} \frac{m^2}{eE} z} \left
[ \frac{1}{3} z \coth z + \frac{1-z\coth z}{\sinh^2 z} \right],
\label{79}
\end{equation}
where we substituted $z=eEs$. For convenience, we abbreviate
$\eta:=\frac{eE}{m^2}$, which denotes the dimensionless
ratio between the electric field and the critical field strength
$E_{\text{cr}}:= \frac{m^2}{e}$. Integrating the $1/\sinh^2z$-term by
parts leads us to:
\begin{equation}
{\cal L}^{2T}\!(E)\biggl|_{T^4}\!\!=\!-\frac{\alpha\pi}{90} T^4
\lim_{\epsilon\to 0} \Biggl\{ \frac{1}{2\epsilon^2} +\frac{1}{2} +
\frac{1}{4\eta^2} +\int\limits_\epsilon^\infty dz\, \text{e}^{-\text{i}
\frac{z}{\eta}} \left(\! \frac{1}{3}-\frac{\text{i}}{\eta z}
-\frac{1}{z^2} + \frac{1}{2\eta^2} \right) \coth z
\Biggr\}. \label{83}
\end{equation}
Here, it should be pointed out that the isolated pole in the first
term of the curly brackets does not signal a divergence, but simply
cancels the pole at the lower bound of the integral; the whole
expression is still finite. Our aim is to evaluate the imaginary part
of Eq. \re{83}; for this, the behavior of the integral at the lower
bound is of no interest. An imaginary part $\text{Im}\, {\cal
L}^{2T}(E)|_{T^4}$ arises from the poles of the $\coth z$-term on
the imaginary axis at $z=\pm \text{i} n\pi$, $n=1,2,\dots$.
Decomposing the exponential function into $\cos +\text{i} \sin$, it becomes
obvious that the imaginary parts of the integrand are even functions
in $z$, while the real parts are odd. Thus, extending the integration
interval from $-\infty$ to $\infty$ exactly cancels the real parts and
simply doubles the imaginary parts. We finally get:
\begin{equation}
\text{Im}\, {\cal L}^{2T}(E)\biggl|_{T^4} =-\frac{\alpha\pi}{90}
\frac{T^4}{2\text{i}} \int\limits_{-\infty}^\infty dz\, \text{e}^{-\text{i}
\frac{z}{\eta}} \left( \frac{1}{3}-\frac{\text{i}}{\eta z} -\frac{1}{z^2}
+ \frac{1}{2\eta^2} \right) \coth z. \label{84a}
\end{equation}
Now we can close the contour in the lower complex half plane, which is
in agreement with the causal prescription $m^2\to m^2-\text{i}
\epsilon$. The value of the integral is then simply given by the sum
of the residues of the $\coth z$-poles at $z=-\text{i}\pi n$,
$n=1,2,\dots$. Hence, we arrive at:
\begin{equation}
\text{Im}\, {\cal L}^{2T}(E)\biggl|_{T^4} =\frac{\alpha\pi^2}{90} \,
T^4 \sum_{n=1}^\infty \text{e}^{-\frac{n\pi}{\eta}} \left( \frac{1}{3}+
\frac{1}{n\pi\eta} + \frac{1}{n^2\pi^2} + \frac{1}{2\eta^2} \right),
\qquad \eta= \frac{eE}{m^2}, \label{84}
\end{equation}
which represents our final result for the imaginary part of the
thermal effective QED action at low temperature, and should be read
side by side with Schwinger's one-loop result:
\begin{equation}
\text{Im}\, {\cal L}^1(E)=\frac{m^4}{8\pi^3} \, \eta^2
\sum_{n=1}^\infty \frac{\text{e}^{-\frac{n\pi}{\eta}}}{n^2}.\label{84b}
\end{equation}
The sums in Eqs. \re{84} and \re{84b} can be carried out analytically;
here, however, it is sufficient to consider the limiting cases of
weak and strong electric fields.
In the weak-field limit, i.e., for small values of $\eta$, the sum
over $n$ in Eq. \re{84} is dominated by the first term $n=1$.
Furthermore, the last term in parentheses is the most important
one. These considerations then lead us to:
\begin{equation}
\text{Im}\, {\cal L}^{2T}(eE\ll m^2)\simeq \frac{\alpha\pi^2}{180} \,
T^4\, \frac{\text{e}^{-\pi/\eta}}{\eta^2}. \label{85}
\end{equation}
Combining this with the weak-field approximation of Eq. \re{84b}, we
get roughly for the total imaginary part of the effective Lagrangian:
\begin{equation}
\text{Im}\, {\cal L}(eE\ll m^2) = m^4 \text{e}^{-\pi/\eta} \left
(\! \frac{\eta^2}{8\pi^3} +\frac{\alpha\pi^2}{180} \frac{1}{\eta^2}
\frac{T^4}{m^4}\! \right) \simeq m^4 \text{e}^{-\pi/\eta} \left(\! 4\cdot
10^{-3} \eta^2 +4\cdot10^{-4} \frac{T^4/m^4}{\eta^2}\!
\right). \label{87}
\end{equation}
For example, at $T/m\simeq0.1$, where the present low-temperature
approximation should still be appropriate, the thermal contribution
can be neglected for $\eta\geq 0.1$; both contributions become roughly
equal for $\eta\simeq 0.056$ (and $T/m=0.1$). For weaker fields and
$T/m\simeq0.1$, the thermal contribution even becomes the dominant
one.
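The value of $\eta$ at which both contributions coincide follows from equating the two terms in parentheses in Eq. \re{87}; using the exact coefficients rather than the rounded ones (a one-line numerical check):

```python
import math

ALPHA = 1 / 137.036  # fine-structure constant

T_over_m = 0.1
# eta^2/(8 pi^3) = (alpha pi^2 / 180) (T/m)^4 / eta^2   =>   solve for eta
eta_eq = (8 * math.pi**3 * (ALPHA * math.pi**2 / 180) * T_over_m**4) ** 0.25

print(f"eta = {eta_eq:.3f}")  # ~ 0.056
```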
In the opposite limit, where $\eta\gg 1$, i.e., for strong electric
fields beyond the critical field strength, the 1/3 in parentheses
dominates the expression in Eq. \re{84}, which then gives:
\begin{equation}
\text{Im}\, {\cal L}^{2T}(eE\gg m^2) =\frac{\alpha\pi^2}{270} \, T^4
\sum_{n=1}^\infty \Bigl( \text{e}^{-\pi/\eta}\Bigr)^n
=\frac{\alpha\pi^2}{270}\, T^4 \frac{\text{e}^{-\pi/\eta}}{1-\text{e}^{-\pi/\eta}}
=\frac{\alpha\pi}{270}\, T^4\, \eta +{\cal O}(\eta^0). \label{88}
\end{equation}
Together with the strong-field approximation of the Schwinger formula,
this gives:
\begin{equation}
\text{Im}\, {\cal L}(eE\gg m^2) =m^4 \,\eta\left( \frac{\eta}{48\pi} +
\frac{\alpha\pi}{270} \frac{T^4}{m^4} \right) \simeq m^4\, \eta
\left( 6.6\cdot 10^{-3}\eta+8.5\cdot 10^{-5} \frac{T^4}{m^4}
\right). \label{90z}
\end{equation}
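The rounded decimal coefficients of Eq. \re{90z} can be verified directly (a trivial numerical check):

```python
import math

ALPHA = 1 / 137.036  # fine-structure constant

# exact coefficients behind the rounded decimals quoted in Eq. (90z)
schwinger_strong = 1 / (48 * math.pi)   # ~ 6.6e-3
thermal_strong = ALPHA * math.pi / 270  # ~ 8.5e-5

print(f"{schwinger_strong:.2e}  {thermal_strong:.2e}")
```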
Since Eq. \re{90z} is valid for $\eta\gg 1$ and $T/m\ll 1$, the
low-temperature contribution to $\text{Im}\, {\cal L}(E)$ can be
neglected for strong electric fields. Similarly to the case of strong
magnetic fields, we find that the non-linearities of pure (zero-$T$)
vacuum polarization exceed the polarizability of the thermally induced
real plasma by far in the strong field limit.
Nevertheless, in the limit of weak electric fields, thermal effects
can increase the pair-production probability $P=1-\exp(-2\text{Im}\,
{\cal L}(E))$ significantly, as was shown in Eq. \re{87}. Of course,
for these values of $\eta$, the total imaginary part is very small due
to the inverse power of $\eta$ in the exponential.
Since we did not consider thermalized fermions, our approach is not
capable of describing high-temperature pair production, which would be
desirable for forthcoming heavy-ion collision experiments. However, as
can be read off from our results for light propagation and photon
splitting, extrapolating the power-law behavior to higher
temperature scales of $T\sim m$ or even $T/m\gg 1$ overestimates a
possible two-loop contribution by far, since, for these values of
temperature, the one-loop contribution can be expected to be the
dominant one. The latter increases at most logarithmically with $T$.
Therefore, it is reasonable to assume that the pair-production
probability also increases at most logarithmically with $T$. In view
of these considerations, a power-law growth as suggested in
\cite{loew92,gang95,gang98} does not appear plausible. Of course, in
order to decide this question, the two-loop calculation has to be
carried out for arbitrary values of temperature.
\section{Discussion}
In the present work, we calculated the thermal two-loop contribution
to the effective QED action for arbitrary constant electromagnetic
fields in the low-temperature limit, $T/m\ll 1$. Contrary to the usual
loop hierarchy in a perturbation theory with small coupling, the
thermal two-loop part is found to dominate the thermal
one-loop part in the low-temperature limit, since the former exhibits
a power-law behavior in $T/m$, while the latter is exponentially
suppressed by factors of $\exp(-m/T)$. The physical reason behind this
is that the one-loop approximation does not involve virtual photons,
which, being massless, are more easily excited at
low temperatures than the massive fermions; thus, the one-loop
approximation should be rated as an inconsistent truncation of
finite-temperature QED for $T$ much below the electron mass.
The power-law dependence of the thermal effective action to two loop
starting with $T^4/m^4$ implies a {\em two-loop dominance} in the
low-energy domain of thermal QED, which holds roughly up to
$T/m\simeq0.05$.
For the subject of light propagation at finite temperature, this
two-loop dominance has been known for some time from studies of the
polarization tensor \cite{tarr83,lato95}. Moreover, for the subject of
QED in a Casimir vacuum like the parallel-plate configuration, the
two-loop dominance is very natural and well known, since the fermions
are not considered to be subject to the periodic boundary
conditions anyway. This gives rise to a non-trivial check of our results,
since Casimir and finite-temperature calculations closely resemble each
other. Replacing, as usual, $T$ by $1/(2a)$ in Eq. \re{85} for the
weak-field limit of the imaginary part of the effective Lagrangian,
where $a$ denotes the separation of the Casimir plates, we obtain:
\begin{equation}
\text{Im}\, {\cal L}^{2a}(E)\biggl|_{a^{-4}}
=\frac{\pi e^2}{2^8\cdot 45}
\frac{1}{a^4} \left( \frac{m^2}{eE}\right)^2 \text{e}^{-\pi m^2/eE},
\label{99}
\end{equation}
which agrees precisely with the findings of \cite{roba87} for the
Casimir corrections to the Schwinger formula\footnote{Actually,
Eq. \re{99} agrees with the findings of \cite{roba87} except for a
global sign; however, as was pointed out by one of the authors in a
footnote of \cite{scha90}, the expression in \cite{roba87} is wrong
by a minus sign, which saves the day.}.
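The substitution can be made explicit: inserting $T\to 1/(2a)$ into Eq. \re{85} and using $e^2=4\pi\alpha$ together with $2^8\cdot 45 = 4\cdot 2880$, one finds

```latex
\begin{equation*}
\frac{\alpha\pi^2}{180}\,\frac{1}{(2a)^4}\,
\frac{\text{e}^{-\pi/\eta}}{\eta^2}
=\frac{\alpha\pi^2}{2880}\,\frac{1}{a^4}
\left(\frac{m^2}{eE}\right)^2\text{e}^{-\pi m^2/eE}
=\frac{\pi e^2}{2^8\cdot 45}\,\frac{1}{a^4}
\left(\frac{m^2}{eE}\right)^2\text{e}^{-\pi m^2/eE}.
\end{equation*}
```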
In order to illustrate the two-loop dominance, we studied light
propagation and photon splitting in a weak magnetic background at
low temperature. Since we are dealing with the two-loop level, the
effects considered here are naturally very tiny, and a significant
influence on, e.g., photon physics near astrophysical compact objects
does not appear very probable. One should rather take a closer look at
photon physics on large galactic scales.
Furthermore, we calculated the imaginary part of the thermal two-loop
effective action for electric background fields at low temperature.
Under mild assumptions, this result can be related to a thermally
induced production probability of electron-positron pairs. Especially
in the weak-field limit, the thermal contribution has a significant
influence on the production rate. Since no thermal one-loop imaginary
part exists, any finite two-loop result automatically dominates at
any temperature scale.
For the subjects of light propagation and photon splitting, the loop
hierarchy is restored above $T/m\simeq 0.05$. Already at this
comparatively low value of temperature, the thermal excitation of the
fermions begins to compete with that of the virtual photon. Hence,
a calculation of the two-loop thermal Lagrangian at intermediate or
high temperatures would hardly seem worthwhile, were it not for the
high-temperature pair-production probability which is beyond the range
of the one-loop approximation and of great interest for, e.g.,
heavy-ion collisions.
\section*{Appendix}
\renewcommand{\thesection}{\mbox{\Alph{section}}}
\renewcommand{\theequation}{\mbox{\Alph{section}.\arabic{equation}}}
\setcounter{section}{0}
\setcounter{equation}{0}
\section{One-loop Polarization Tensor}
While the polarization tensor in an external magnetic field has been
considered by many authors (a comprehensive study can, e.g., be found
in \cite{tsai75}), a generalization to arbitrary constant
electromagnetic fields in a straightforward manner is associated with
a substantial increase in calculational difficulties. The problem was
first solved by Batalin and Shabad \cite{bata71}; their extensive
result was later brought into a practical form by Artimovich
\cite{arti90}. In the following, we will briefly sketch a simpler
derivation of the polarization tensor in arbitrary constant
electromagnetic fields; our approach is based on the findings of
Urrutia \cite{urru79}, who solved the problem for the special case of
parallel electric and magnetic fields.
\vspace{7mm}
\begin{figure}[h]
\begin{center}
\begin{picture}(125,20)
\put(27,0){
\begin{fmffile}{fmfpicPolTens}
\begin{fmfgraph*}(70,20)
\fmfleft{i1}
\fmfright{o1}
\fmf{phantom,tension=1}{i1,v1,v2,v3,v4,v5,v6,o1}
\fmffreeze
\fmf{photon}{i1,v1,v2}
\fmf{double,left,tension=0.1}{v2,v5}
\fmf{double,left,tension=0.1}{v5,v2}
\fmf{photon}{v5,v6,o1}
\fmfdot{v2,v5}
\end{fmfgraph*}
\end{fmffile}}
\put(25,10){$k^\mu$}
\end{picture}
\end{center}
\vspace{0.3cm}
\caption{Diagrammatic representation of the one-loop polarization
tensor. The fermionic double line represents the coupling to all
orders to the external electromagnetic field.}
\label{figpol}
\end{figure}
Assume that the $(-E)$-field and the $B$-field point along the
3-axis. 4-vectors like the external momentum (cf. Fig. \ref{figpol})
can then be decomposed into
\begin{equation}
k^\mu=k^\mu_\|+k^\mu_\bot, \qquad k^\mu_\|=(k^0,0,0,k^3), \qquad
k^\mu_\bot= (0,k^1,k^2,0). \label{11125}
\end{equation}
In the same manner, tensors can be decomposed, e.g.,
$g^{\mu\nu}=g^{\mu\nu}_\|+ g^{\mu\nu}_\bot$. With respect to each
subspace, we easily find the unique orthogonal vector to a given one:
\begin{equation}
\tilde{k}^\mu_\|=(k^3,0,0,k^0),\qquad\qquad
\tilde{k}^\mu_\bot=(0,k^2,-k^1,0). \label{11126}
\end{equation}
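The decompositions of Eqs. \re{11125} and \re{11126} are easily verified numerically; the following throwaway check (with an arbitrarily chosen test momentum) uses the metric signature $(+,-,-,-)$:

```python
def mdot(u, v):
    """Minkowski product with signature (+,-,-,-)."""
    return u[0]*v[0] - u[1]*v[1] - u[2]*v[2] - u[3]*v[3]

k0, k1, k2, k3 = 1.3, 0.7, -0.4, 2.1   # arbitrary test momentum

k_par, k_perp = (k0, 0, 0, k3), (0, k1, k2, 0)
kt_par, kt_perp = (k3, 0, 0, k0), (0, k2, -k1, 0)

# orthogonality within each subspace
assert mdot(k_par, kt_par) == 0.0
assert mdot(k_perp, kt_perp) == 0.0
# squared norms: (k~_par)^2 = -(k_par)^2 and (k~_perp)^2 = (k_perp)^2
assert mdot(kt_par, kt_par) == -mdot(k_par, k_par)
assert mdot(kt_perp, kt_perp) == mdot(k_perp, k_perp)
print("decomposition checks passed")
```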
Following Urrutia \cite{urru79}, the polarization tensor for the
special field configuration can be written as:
\begin{eqnarray}
\Pi^{\mu\nu}(k|A)\!=\! \frac{\alpha}{2\pi}\!\!
\int\limits_0^\infty \frac{ds}{s}\!\int\limits_{-1}^1\!
\frac{d\nu}{2}\Biggl\{&&\!\!\!\!\!\!\! \text{e}^{-\text{i} s\phi_0} \frac{z z'}{\sin
z \sinh z'}
\biggl[ \bigl(g^{\mu\nu} k^2 -k^\mu k^\nu \bigr)N_0
+\bigl(g^{\mu\nu}_\| k^2_\| -k^\mu_\| k^\nu_\| \bigr)N_1 \nonumber\\
&&+\bigl(g^{\mu\nu}_\bot k^2_\bot -k^\mu_\bot k^\nu_\bot \bigr)N_2
-\bigl( \tilde{k}^\mu_\bot \tilde{k}^\nu_\| +\tilde{k}^\mu_\|
\tilde{k}^\nu_\bot \bigr) N_3 \biggr]+\text{c.t.}\Biggr\}. \label{11127}
\end{eqnarray}
The electric and magnetic field strengths $E,B$ are contained in the
variables $z:=eBs$ and $z':=eEs$. The exponent $\phi_0$ is given
by\footnote{This formula has been misprinted in Ref. \cite{urru79}.}
\begin{equation}
\phi_0:=m^2+\frac{k^2_\|}{2} \frac{\cosh z'-\cosh \nu z'}{z' \sinh z'}
+\frac{k^2_\bot}{2} \frac{\cos \nu z -\cos z}{z \sin z}. \label{11128}
\end{equation}
The functions $N_i$ read:\footnote{$N_3$ differs from Urrutia's
findings by a minus sign, since he considers {\em parallel} $E$- and
$B$-fields.}
\begin{eqnarray}
N_0&=&\cosh \nu z'\, \cos \nu z-\sinh \nu z'\, \sin \nu z\, \cot z\, \coth z',
\nonumber\\
N_1&=&2\cos z \frac{\cosh z' -\cosh \nu z'}{\sinh^2 z'} -N_0\qquad\quad
=: \tilde{N}_1 -N_0, \nonumber\\
N_2&=&2\cosh z' \frac{\cos \nu z -\cos z}{\sin^2 z} -N_0\qquad\qquad =:
\tilde{N}_2 -N_0, \label{11129}\\
N_3&=&-\frac{1-\cos z\, \cos \nu z}{\sin z}\frac{\cosh \nu z'\, \cosh z'
-1}{\sinh z'} +\sin \nu z\, \sinh \nu z', \nonumber
\end{eqnarray}
where we have incidentally defined the functions $\tilde{N}_{1,2}$ for
later use. The determination of the contact term corresponds to a charge
and field strength renormalization and yields:
\begin{equation}
\text{c.t.}=-\text{e}^{-\text{i} m^2 s}(1-\nu^2) \bigl(g^{\mu\nu}k^2-k^\mu
k^\nu\bigr). \label{111210}
\end{equation}
Now, one can show \cite{gies99c} that the Lorentz invariant form of
the polarization tensor for arbitrary constant electromagnetic fields
can be completely reconstructed from the special form given above for
anti-parallel electric and magnetic fields. This is achieved by,
first, a one-to-one mapping between Urrutia's scalar variables
($k_\|^2,k_\bot^2,E,B$) and a set of invariants which reduce to
Urrutia's variables in the special system:
\begin{eqnarray}
a&\to&B, \qquad\qquad\qquad\qquad\quad b\,\,\to-E, \nonumber\\
z_k&\to&-E^2k^2_\| +B^2k^2_\bot, \qquad\quad
k^2\to k^2_\|+k^2_\bot. \label{111212}
\end{eqnarray}
The inverse map is obtained by a simple calculation; the non-trivial
relations are:
\begin{equation}
k^2_\|\to\frac{a^2k^2-z_k}{a^2+b^2}, \qquad\qquad k^2_\bot\to \frac{b^2k^2
+z_k}{a^2+b^2}. \label{111213}
\end{equation}
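Consistency of the map Eq. \re{111212} with its inverse Eq. \re{111213} can likewise be checked with arbitrary numbers (a disposable sketch):

```python
B, E = 0.9, 0.35            # arbitrary field strengths in the special frame
kpar2, kperp2 = 1.7, -0.6   # arbitrary values of k_par^2 and k_perp^2

# forward map, Eq. (111212): invariants expressed in the special frame
a, b = B, -E
zk = -E**2 * kpar2 + B**2 * kperp2
k2 = kpar2 + kperp2

# inverse map, Eq. (111213)
kpar2_back = (a**2 * k2 - zk) / (a**2 + b**2)
kperp2_back = (b**2 * k2 + zk) / (a**2 + b**2)

assert abs(kpar2_back - kpar2) < 1e-12
assert abs(kperp2_back - kperp2) < 1e-12
print("map/inverse consistent")
```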
Secondly, the reconstruction requires a one-to-one mapping between
Urrutia's tensor structures in Eq. \re{11127} and Lorentz covariant
tensors which reduce to Urrutia's in the special system. For this, we
need to introduce the following definitions. First, we employ a set of
four linearly independent 4-vectors:
\begin{equation}
k^\mu,\quad F\!k^\mu\equiv F^{\mu\alpha}k_\alpha,\quad
F\!{}^2\!k^\mu\equiv F^{\mu\alpha}F_{\alpha\beta}k^\beta,\quad
\text{and}\quad \sta{F\!k}^\mu\equiv
\sta{F}^{\mu\alpha}k_\alpha. \label{111214}
\end{equation}
From these, we construct the 4-vectors:
\begin{eqnarray}
v^\mu_\|&:=&\frac{1}{a^2+b^2}\Bigl(a\,\sta{F\!k}^\mu -b\, F\!k^\mu
\Bigr)\quad \to \quad \tilde{k}^\mu_\|,\nonumber\\
v^\mu_\bot&:=&\frac{1}{a^2+b^2}\Bigl(b\,\sta{F\!k}^\mu +a\, F\!k^\mu
\Bigr)\quad \to \quad \tilde{k}^\mu_\bot,\label{111219}
\end{eqnarray}
where the subscripts $\|$ and $\bot$ are to remind us of the meaning of
$v_\|$ and $v_\bot$ in the special Lorentz system (longitudinal and
transversal part of $\tilde{k}$). Incidentally, they obey the
relations (cf. Eq. \re{111213}):
\begin{equation}
v^2_\|=v^\mu_\|v_{\|\mu}=-\frac{a^2k^2-z_k}{a^2+b^2},\quad
v^2_\bot=v^\mu_\bot v_{\bot\mu}=\frac{b^2k^2+z_k}{a^2+b^2},\quad
v^\mu_\|\,v_{\bot\mu}=0. \label{111220}
\end{equation}
Finally introducing the projectors
\begin{eqnarray}
P^{\mu\nu}_0&:=& \frac{1}{k^2\left[2{\cal F}\frac{z_k}{k^2}
+{\cal G}^2-\left(\frac{z_k}{k^2}\right)^2\right]} \left(
F^2k^\mu+\frac{z_k}{k^2}\,
k^\mu\right)\left(F^2k^\nu+\frac{z_k}{k^2}\,
k^\nu\right),\nonumber\\
P^{\mu\nu}_\|&:=&\frac{v^\mu_\| v^\nu_\|}{v^2_\|},\qquad\qquad\qquad
P^{\mu\nu}_\bot:=\frac{v^\mu_\bot v^\nu_\bot}{v^2_\bot},
\label{111222}
\end{eqnarray}
which satisfy the usual projector identities,
$P_{0,\|,\bot}^2=P_{0,\|,\bot}$,
$P_{0,\|,\bot}{}^\mu{}_\mu=1$, we can establish the one-to-one
mapping:
\begin{eqnarray}
-v^\mu_\|\,v^\nu_\|&\to& \bigl(g^{\mu\nu}_\| k^2_\| -k^\mu_\|
k^\nu_\| \bigr),\nonumber\\
v^\mu_\bot\, v^\nu_\bot&\to& \bigl(g^{\mu\nu}_\bot k^2_\bot
-k^\mu_\bot k^\nu_\bot \bigr),\nonumber\\
Q^{\mu\nu}:= v^\mu_\bot\, v^\nu_\| +v^\mu_\| \, v^\nu_\bot&\to&\bigl
( \tilde{k}^\mu_\bot \tilde{k}^\nu_\| +\tilde{k}^\mu_\|
\tilde{k}^\nu_\bot \bigr), \label{111221}\\
k^2\Bigl[ P^{\mu\nu}_0
+P^{\mu\nu}_\|+P^{\mu\nu}_\bot\Bigr] &\to&\bigl(g^{\mu\nu} k^2
-k^\mu k^\nu \bigr). \nonumber
\end{eqnarray}
In the third line, we have defined the object $Q^{\mu\nu}$, which is
neither a projector nor orthogonal to the $P^{\mu\nu}_{\|,\bot}$'s but
orthogonal to $P^{\mu\nu}_0$.
We are finally in a position to transform the polarization tensor
for the parallel field configuration into its generalized form for
arbitrary constant electromagnetic fields:
\begin{equation}
\Pi^{\mu\nu}(k|A)=\Pi_0\, P_0^{\mu\nu} +\Pi_\|\, P_\|^{\mu\nu}
+\Pi_\bot\, P_\bot^{\mu\nu} +\Theta\, Q^{\mu\nu}, \label{111223}
\end{equation}
where $\Pi_{0,\|,\bot}$ and $\Theta$ are functions of the invariants
and read:
\begin{equation}
\left\{ \begin{array}{c} \Pi_0 \\ \Pi_\| \\ \Pi_\bot \\ \Theta
\end{array} \right\} =\frac{\alpha}{2\pi}\!\!
\int\limits_0^\infty \frac{ds}{s}\!\int\limits_{-1}^1\!
\frac{d\nu}{2}\Biggl[ \text{e}^{-\text{i} s\phi_0}\frac{eas\,ebs}{\sin eas \sinh
ebs} \left\{ \begin{array}{c} k^2N_0 \\ N_0v^2_\bot -\tilde{N}_1
v^2_\| \\ \tilde{N}_2 v^2_\bot-N_0v^2_\| \\ -N_3 \end{array}
\right\} + \text{c.t.} \Biggr]. \label{111224}
\end{equation}
Substituting the invariants into Eqs. \re{11128} and \re{11129}, the
functions $N_i$ and $\phi_0$ read:
\begin{eqnarray}
\phi_0&=&m^2-\frac{v^2_\|}{2} \frac{\cosh ebs-\cosh \nu ebs}{ebs \sinh
ebs} +\frac{v^2_\bot}{2} \frac{\cos \nu eas -\cos eas}{eas \sin eas},
\nonumber\\
N_0&=&\cosh \nu ebs\, \cos \nu eas-\sinh \nu ebs\, \sin \nu eas\, \cot
eas\, \coth ebs, \nonumber\\
\tilde{N}_1&=&2\cos eas \frac{\cosh ebs -\cosh \nu ebs}{\sinh^2 ebs},
\nonumber\\
\tilde{N}_2&=&2\cosh ebs \frac{\cos \nu eas -\cos eas}{\sin^2 eas},
\nonumber\\
N_3&=&-\frac{1-\cos eas\, \cos \nu eas}{\sin eas}\frac{\cosh \nu ebs\,
\cosh ebs -1}{\sinh ebs} +\sin \nu eas\, \sinh \nu ebs. \label{111225}
\end{eqnarray}
The scalars $v^2_{\|,\bot}$ are given by certain combinations of the
invariants and can be found in Eq. \re{111219}. The contact term given
in Eq. \re{111210} contributes equally to the $\Pi_i$'s,
\begin{equation}
\text{c.t.}=-\text{e}^{-\text{i} m^2 s} k^2(1-\nu^2), \label{111226}
\end{equation}
but does not modify the function $\Theta$, which is already finite.
Note that Eq. \re{111223} almost appears in a diagonalized form except
for the term $\Theta\, Q^{\mu\nu}$. While $P_0^{\mu\nu}$ indeed
projects onto an eigenspace of $\Pi^{\mu\nu}$ with eigenvalue $\Pi_0$,
this is generally not the case for the projectors
$P_{\|,\bot}^{\mu\nu}$, due to $\Theta\, Q^{\mu\nu}$. Although a
further diagonalization is straightforward, we will not bother to
write it down, since we only need the trace of $\Pi^{\mu\nu}$, which
is simply given by:
\begin{equation}
\Pi^\mu{}_\mu=\Pi_0+\Pi_\|+\Pi_\bot, \qquad Q^\mu{}_\mu=0. \label{7}
\end{equation}
In the actual two-loop calculation, the contact terms can be omitted
for two reasons: first, they do not contribute to the thermal part,
since the latter is finite; secondly, for the zero-temperature
Lagrangian, a renormalization procedure is required anyway and, in
particular, the mass renormalization would not be covered by an
inclusion of the contact terms.
Inserting Eq. \re{111224} into Eq. \re{7} brings us to the explicit
representation of the trace:
\begin{equation}
\Pi^\mu{}_\mu\!=\frac{\alpha}{2\pi}\!
\int\limits_0^\infty\!\frac{ds}{s}\!
\!\int\limits_{-1}^1\!\frac{d\nu}{2} \frac{\text{e}^{-\text{i}
s\phi_0}}{a^2\!+\!b^2} \frac{eas\,ebs}{\sin eas \sinh ebs}
\Bigl[z_k (\tilde{N}_2 -\tilde{N}_1) +k^2\bigl( 2N_0(a^2\!+\!b^2)
+b^2\tilde{N}_2 +a^2\tilde{N}_1\bigr)\!\Bigr]\!. \label{10}
\end{equation}
This is the desired expression which is required in Eq. \re{5}. For
reasons of convenience, it is useful to rewrite the function $\phi_0$
in terms of the variables $k^2$ and $z_k$. For this, we insert
Eq. \re{111219} into the first line of Eq. \re{111225}; a
reorganization yields:
\begin{equation}
\text{e}^{-\text{i} s\phi_0}=\text{e}^{-\text{i} m^2s}\, \text{e}^{-A_z \, z_k}\, \text{e}^{-A_k\, k^2},
\label{15}
\end{equation}
where we implicitly defined:
\begin{eqnarray}
A_z&:=& \frac{\text{i} s}{2}\, \frac{1}{a^2+b^2}\,
\left( \frac{\cos \nu eas -\cos eas}{eas \sin eas} + \frac{\cosh \nu
ebs -\cosh ebs}{ebs \sinh ebs} \right), \nonumber\\
A_k&:=& \frac{\text{i} s}{2}\, \frac{1}{a^2+b^2}\,
\left( b^2\frac{\cos \nu eas -\cos eas}{eas \sin eas} -a^2
\frac{\cosh \nu ebs -\cosh ebs}{ebs \sinh ebs}
\right). \label{16}
\end{eqnarray}
This provides us with the required necessities for the two-loop
calculation in Sec. 2.
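Before moving on, the algebra combining the integrand entries of Eq. \re{111224} with the scalars $v^2_{\|,\bot}$ of Eq. \re{111220} into the bracket of Eq. \re{10} can be confirmed symbolically; a sketch using sympy, with $k^2$ and $z_k$ treated as independent symbols:

```python
import sympy as sp

a, b, ksq, zk = sp.symbols('a b ksq z_k', real=True)
N0, N1t, N2t = sp.symbols('N_0 Nt_1 Nt_2', real=True)

vpar2 = -(a**2*ksq - zk)/(a**2 + b**2)    # Eq. (111220)
vperp2 = (b**2*ksq + zk)/(a**2 + b**2)

# sum of the Pi_0, Pi_par, Pi_perp integrand entries of Eq. (111224)
trace = ksq*N0 + (N0*vperp2 - N1t*vpar2) + (N2t*vperp2 - N0*vpar2)

# bracket of Eq. (10), divided by a^2 + b^2
target = (zk*(N2t - N1t)
          + ksq*(2*N0*(a**2 + b**2) + b**2*N2t + a**2*N1t))/(a**2 + b**2)

assert sp.simplify(trace - target) == 0
```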
\section{Finite-Temperature Coordinate Frame}
In order to make the paper self-contained, we briefly review the
construction of the finite-temperature coordinate frame as introduced
in \cite{gies99a}, and then apply it to the present problem.
First, we define the {\em vierbein} $e^{A\mu}$ which mediates between
the given system labelled by $\mu,\nu,\dots=0,1,2,3$ and the desired
system labelled by the (Lorentz) indices $A,B,\dots=0,1,2,3$ by:
\begin{eqnarray}
e_0{}^\mu&:=& u^\mu,\nonumber\\
e_1{}^\mu&:=& \frac{u_\alpha F^{\alpha\mu}}{\sqrt{{\cal E}}},
\nonumber\\
e_2{}^\mu&:=& \frac{1}{\sqrt{d}} \bigl( u^\alpha F_{\alpha\beta}
F^{\beta\mu} -{\cal E}\, e_0{}^{\mu} \bigr), \nonumber\\
e_3{}^\mu&:=&\epsilon^{\alpha\beta\gamma\mu}\, e_{0\alpha}\,
e_{1\beta}\, e_{2\gamma}, \label{2.3}
\end{eqnarray}
where the quantity $d$ abbreviates the combination of invariants:
\begin{equation}
d:=2{\cal F}{\cal E}-{\cal G}^2+{\cal E}^2. \label{2.4}
\end{equation}
The vierbein satisfies the identity
\begin{equation}
e_{A\mu}\, e_B{}^\mu =g_{AB}\equiv\text{diag}(-1,1,1,1), \label{2.5}
\end{equation}
where $g_{AB}\sim g^{AB}$ denotes the metric which raises and lowers
capital indices. By a direct computation, we can transform the field
strength tensors and the heat-bath vector:
\begin{eqnarray}
n^A&:=&g^{AB}e_{B}{}^\mu\,n_\mu=(T,0,0,0), \nonumber\\
F_{AB}&:=&e_{A\mu}F^{\mu\nu}e_{B\nu}
=\left(\begin{array}{cccc}
0 &\sqrt{{\cal E}} & 0 & 0 \\
-\sqrt{{\cal E}}& 0 &\sqrt{d/{\cal E}} & 0 \\
0 &-\sqrt{d/{\cal E}}& 0 &-{\cal G}/\sqrt{{\cal E}}\\
0 & 0 &{\cal G}/\sqrt{{\cal E}} & 0
\end{array}\right). \label{2.8}
\end{eqnarray}
Obviously, the new system corresponds to the heat-bath rest frame with
the spatial axes oriented along the electromagnetic field in some
sense. The components of the field strength tensor are now given by
combinations of the invariants.
In order to determine the form of $z_k=k_\mu F^{\mu\alpha} k_\nu
F^\nu{}_\alpha\equiv -k^A F_{AC} F^C{}_B k^B$, we need the square of
the field strength tensor:
\begin{equation}
F^2_{AB}\equiv F_{AC} F^C{}_B =
\left( \begin{array}{cccc}
-{\cal E} & 0 & \sqrt{d} & 0\\
0 &{\cal E}-\frac{d}{{\cal E}}&0 &-\frac{\sqrt{d}{\cal
G}}{{\cal E}}\\
\sqrt{d} & 0 &-\frac{{\cal G}^2+d}{{\cal E}}&0\\
0&-\frac{\sqrt{d}{\cal G}}{{\cal E}}&0&-\frac{{\cal G}^2}{{\cal E}}
\end{array}\right). \label{20}
\end{equation}
This allows us to write $z_k$ in the form:
\begin{equation}
z_k={\cal E}\, (k^0)^2 -2\sqrt{d}\, k^0k^2 +(2{\cal F}+{\cal E})\,
(k^2)^2 +\left(\!\frac{d}{{\cal E}}-{\cal E}\!\right) (k^1)^2
+2\frac{\sqrt{d}{\cal G}}{{\cal E}}\, k^1k^3 +\frac{{\cal G}^2}{{\cal
E}}\, (k^3)^2, \label{21a}
\end{equation}
where $k^0,k^1,k^2,k^3$ represent the components of the rotated
momentum vector $k^A=e^A{}_\mu k^\mu$.
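The quadratic form of Eq. \re{21a} follows from the matrix of Eq. \re{20} via $z_k=-k^A F^2_{AB}k^B$; this can be cross-checked symbolically (a sketch using sympy):

```python
import sympy as sp

E, F, G = sp.symbols('calE calF calG', positive=True)
k0, k1, k2, k3 = sp.symbols('k0 k1 k2 k3', real=True)

d = 2*F*E - G**2 + E**2                        # Eq. (2.4)
F2 = sp.Matrix([                                # Eq. (20): (F^2)_{AB}
    [-E,             0,               sp.sqrt(d),     0              ],
    [ 0,             E - d/E,         0,             -sp.sqrt(d)*G/E ],
    [ sp.sqrt(d),    0,              -(G**2 + d)/E,   0              ],
    [ 0,            -sp.sqrt(d)*G/E,  0,             -G**2/E         ],
])
k = sp.Matrix([k0, k1, k2, k3])
zk_matrix = -(k.T * F2 * k)[0]

# quadratic form of Eq. (21a)
zk_explicit = (E*k0**2 - 2*sp.sqrt(d)*k0*k2 + (2*F + E)*k2**2
               + (d/E - E)*k1**2 + 2*sp.sqrt(d)*G/E*k1*k3 + G**2/E*k3**2)

assert sp.simplify(zk_matrix - zk_explicit) == 0
```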
Now we can finally determine the desired form for the exponent in
Eq. \re{18} in terms of finite-temperature coordinates:
\begin{eqnarray}
A_z z_k +A_k k^2 \!\!&=&\!\! \bigl( A_k +(a^2\!-\!b^2\!+\!{\cal E})
A_z\bigr)\! \left(\! k^2\!-\!{\scriptstyle
\frac{A_z\sqrt{d}}{A_z(2{\cal F} +{\cal E}) +A_k}}\, k^0\!
\right)^2 -\frac{(A_k\!+\!a^2A_z)(A_k\!-\!b^2A_z)}{ A_k
+(a^2\!-\!b^2\!+\!{\cal E}) A_z} \, (k^0)^2 \nonumber\\
&&\!\! +\left(\! A_z\frac{a^2b^2}{{\cal E}} +A_k\!\right)\!\left(\!k^3
+{\scriptstyle \frac{A_z \frac{\sqrt{d}{\cal G}}{{\cal E}}}{A_z
\frac{{\cal G}^2}{{\cal E}} +A_k}}\, k^1\!\right)^2
+\frac{(A_k+a^2A_z)(A_k-b^2A_z)}{ A_z\frac{a^2b^2}{{\cal E}} + A_k}\,
(k^1)^2, \nonumber\\
&&\label{23}
\end{eqnarray}
where again, $k^0,k^1,k^2,k^3$ represent the components of $k^A$.
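The completing of the square can be cross-checked symbolically. The sketch below (using sympy; the relations $2{\cal F}=a^2-b^2$ and ${\cal G}=ab$ for the secular invariants are assumed, with the metric $\text{diag}(-1,1,1,1)$) verifies that $A_z z_k + A_k k^2$ equals a square in $k^2$ and a square in $k^3$ plus residual $(k^0)^2$ and $(k^1)^2$ terms with coefficients $\mp(A_k+a^2A_z)(A_k-b^2A_z)$ divided by the respective square prefactors:

```python
import sympy as sp

Az, Ak, a, b, E = sp.symbols('A_z A_k a b calE', positive=True)
k0, k1, k2, k3 = sp.symbols('k0 k1 k2 k3', real=True)

F, G = (a**2 - b**2)/2, a*b        # 2calF = a^2 - b^2, calG = ab
d = 2*F*E - G**2 + E**2            # Eq. (2.4)

# z_k and k^2 in finite-temperature coordinates
zk = (E*k0**2 - 2*sp.sqrt(d)*k0*k2 + (2*F + E)*k2**2
      + (d/E - E)*k1**2 + 2*sp.sqrt(d)*G/E*k1*k3 + G**2/E*k3**2)
ksq = -k0**2 + k1**2 + k2**2 + k3**2

c02 = Ak + (a**2 - b**2 + E)*Az    # prefactor of the square in k^2
c13 = Az*a**2*b**2/E + Ak          # prefactor of the square in k^3
q = (Ak + a**2*Az)*(Ak - b**2*Az)

rhs = (c02*(k2 - Az*sp.sqrt(d)/c02*k0)**2 - q/c02*k0**2
       + c13*(k3 + Az*sp.sqrt(d)*G/E/c13*k1)**2 + q/c13*k1**2)

assert sp.simplify(Az*zk + Ak*ksq - rhs) == 0
```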
\section{Two-loop Effective Action of QED at Zero Temperature}
We dedicate this appendix to the derivation of the zero-temperature
two-loop Lagrangian for two reasons: first, we want to make
contact with well-known results, which serve as a check for our
computations; secondly, our results represent a generalization of
the work of Dittrich and Reuter \cite{ditt85}, who considered purely
magnetic fields, to the case of constant electromagnetic
fields.
Here, we will only give a version of the unrenormalized effective
Lagrangian, since the renormalization program requires detailed
investigations which are beyond the scope of the present
work. In particular, the mass renormalization has to be treated with
great care \cite{flie97}, taking a correct matching of the imaginary parts
of the effective action into account.
In Eq. \re{29}, we achieved a separation of the thermal and the
zero-temperature parts in the integral $I_1$; concentrating on the
zero-$T$ case, we found:
\begin{equation}
I_1^{T=0}=\frac{1}{16\pi^2} \text{e}^{-\text{i} m^2 s}\,
\frac{1}{q_a\,q_b}, \label{A.1}
\end{equation}
where $q_a=(A_k+a^2A_z)$ and $q_b=(A_k-b^2A_z)$, and $A_k$ and $A_z$
are defined in Eq. \re{16}. We also need the second integral
$I_2^{T=0}$, which is related to $I_1^{T=0}$ by Eq. \re{17}, leading
us to:
\begin{equation}
I_2^{T=0}=\frac{\text{e}^{-\text{i} m^2s}}{16\pi^2} \int\limits_0^\infty ds'
\left[
\frac{a^2}{(s'+q_a)^2(s'+q_b)}
-\frac{b^2}{(s'+q_a)(s'+q_b)^2}
\right].\label{A.2}
\end{equation}
The integration over $s'$ can be carried out with elementary
techniques:
\begin{equation}
I_2^{T=0}=\frac{\text{e}^{-\text{i} m^2s}}{16\pi^2}\left[ a^2 \left
( \frac{1}{q_a(q_b-q_a)} -\frac{1}{(q_b-q_a)^2} \ln
\frac{q_b}{q_a}\right) -b^2\left( \frac{1}{q_b(q_a-q_b)}
+\frac{1}{(q_b-q_a)^2} \ln
\frac{q_b}{q_a}\right)\right]\!. \label{A.3}
\end{equation}
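The $s'$-integration can be double-checked numerically; the sketch below (using mpmath, at arbitrarily chosen real parameter values) compares the integral of Eq. \re{A.2} with the closed form of Eq. \re{A.3}:

```python
from mpmath import mp, quad, log, inf, mpf

mp.dps = 30
qa, qb, a, b = mpf(1), mpf('2.3'), mpf('1.7'), mpf('0.6')

# integrand of Eq. (A.2)
integrand = lambda t: (a**2/((t + qa)**2*(t + qb))
                       - b**2/((t + qa)*(t + qb)**2))
numeric = quad(integrand, [0, inf])

# bracket of Eq. (A.3)
closed = (a**2*(1/(qa*(qb - qa)) - log(qb/qa)/(qb - qa)**2)
          - b**2*(1/(qb*(qa - qb)) + log(qb/qa)/(qb - qa)**2))

assert abs(numeric - closed) < mpf('1e-15')
```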
Inserting these $T=0$-contributions \re{A.1} and \re{A.3} into Eq.
\re{11a} and reorganizing the result a bit, we finally arrive at:
\begin{eqnarray}
{\cal L}^2&=&-\frac{\alpha}{(4\pi)^3} \int\limits_0^\infty \frac{ds}{s}
\int\limits_{-1}^1 \frac{d\nu}{2}\,\text{e}^{-\text{i} m^2 s}\, \frac{eas\,
ebs}{\sin eas \sinh ebs} \label{A.7}\\
&&\qquad\quad \times \left[ \frac{2N_0+\tilde{N}_2}{q_a(q_b-q_a)} +
\frac{2N_0 +\tilde{N}_1}{q_b(q_a-q_b)}
-\frac{\tilde{N}_2-\tilde{N}_1}{(q_b-q_a)^2} \ln \frac{q_b}{q_a}
\right], \nonumber
\end{eqnarray}
where $N_0$, $\tilde{N}_1$, and $\tilde{N}_2$ are functions of the
integration variables and the invariants $a$ and $b$, and are defined
in Eq. \re{111225}, whereas $q_a$ and $q_b$ after insertion of Eq.
\re{16} can be written as:
\begin{equation}
q_a=\frac{\text{i} s}{2} \frac{\cos \nu eas -\cos eas}{eas \sin eas},\qquad
q_b=\frac{\text{i} s}{2} \frac{\cosh ebs -\cosh \nu ebs}{ebs \sinh
ebs}. \label{A.4}
\end{equation}
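That $q_a$ and $q_b$ indeed follow from Eq. \re{16} can be confirmed symbolically; a sketch using sympy, with the charge $e$ set to $1$ for brevity:

```python
import sympy as sp

s, nu, a, b = sp.symbols('s nu a b', positive=True)
x, y = a*s, b*s                       # shorthand for eas, ebs (e = 1)

Cx = (sp.cos(nu*x) - sp.cos(x))/(x*sp.sin(x))
Cy = (sp.cosh(nu*y) - sp.cosh(y))/(y*sp.sinh(y))

Az = sp.I*s/2/(a**2 + b**2)*(Cx + Cy)             # Eq. (16)
Ak = sp.I*s/2/(a**2 + b**2)*(b**2*Cx - a**2*Cy)

qa = sp.I*s/2*(sp.cos(nu*x) - sp.cos(x))/(x*sp.sin(x))        # Eq. (A.4)
qb = sp.I*s/2*(sp.cosh(y) - sp.cosh(nu*y))/(y*sp.sinh(y))

assert sp.simplify(Ak + a**2*Az - qa) == 0
assert sp.simplify(Ak - b**2*Az - qb) == 0
```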
Equation \re{A.7} represents our final result for the unrenormalized
two-loop effective Lagrangian of QED for arbitrary constant
electromagnetic fields. In the limit of vanishing electric fields,
we recover exactly the findings of \cite{ditt85}; hence our comparably
compact representation generalizes the results of \cite{ditt85} to
arbitrary constant electromagnetic fields.
\section*{Acknowledgments}
I would like to thank Professor W. Dittrich for helpful
discussions and for carefully reading the manuscript. I am also
grateful to Dr R. Shaisultanov for valuable comments and especially
for drawing my attention to photon-photon scattering.
A \textit{classical parking function} of length $n$ is a list $(a_1, a_2, \ldots, a_n)$ of positive integers whose nondecreasing rearrangement $b_1 \leq b_2 \leq \cdots \leq b_n$ satisfies $b_i \leq i$.
It is well-known that the number of classical parking functions of length $n$ is $(n+1)^{n-1}$; this number surfaces in a variety of places, for example, it counts the number of rooted labeled trees on $n+1$ vertices,
and the number of regions of a Shi arrangement (see \cite{Bon} for further discussion).
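The defining condition is easy to check by brute force; the following small illustrative script confirms the count $(n+1)^{n-1}$ for small $n$:

```python
from itertools import product

def is_parking_function(seq):
    # nondecreasing rearrangement b must satisfy b_i <= i (1-indexed)
    b = sorted(seq)
    return all(b[i] <= i + 1 for i in range(len(b)))

for n in range(1, 6):
    count = sum(1 for p in product(range(1, n + 1), repeat=n)
                if is_parking_function(p))
    assert count == (n + 1)**(n - 1)
```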
Let $\mathsf{PF}_n$ denote the convex hull of all parking functions of length $n$ in $\mathbb{R}^n$.
In 2020, Stanley \cite{Sta} asked for the number of vertices, the number of faces, the number of lattice points $ \mathsf{PF}_n \cap \mathbb{Z}^n$, and the volume of $\mathsf{PF}_n$.
These questions were first answered by Amanbayeva and Wang \cite{AW}.
In Section \ref{sec:classical}, we revisit the classical parking function polytope $\mathsf{PF}_n$.
We provide new results on the appearance of both lower-dimensional parking function polytopes and permutahedra as facets of $\mathsf{PF}_n$.
We connect the classical parking function polytope with the recent work of Heuer and Striker \cite{HS} and Behrend et al. \cite{BCC} on partial permutahedra, and show when they are integrally equivalent.
We collect different characterizations of the normalized volume of $\mathsf{PF}_n$ in Theorem \ref{thm:main_theorem} and give the first closed-form, i.e., non-recursive, answer to Stanley's original question on the volume of $\mathsf{PF}_n$.
A natural direction is to extend these results for generalizations of parking functions.
In this paper we consider $\mathbf{x}$-parking functions, where $\mathbf{x}$ is a vector of positive integers.
They have been explored from an enumerative perspective previously by Yan \cite{Yan2, Yan, Bon} and Pitman and Stanley \cite{PitmanStanley}.
In Section \ref{sec:x-parking}, we consider when $\mathbf{x} = (a,b,b,\ldots, b)$ and generalize results of Amanbayeva and Wang \cite{AW}.
We establish in Theorem \ref{thm:generalized_closed_form_volume} a closed-form normalized volume formula for all positive integers $a,b$.
\begin{theorem}\label{thm:generalized_closed_form_volume}
For any positive integers $a,b,n$, the normalized volume of the $\mathbf{x}$-parking function polytope $\nVol(\mathcal{X}_{n}(a,b))$ is given by
\begin{equation}
\nVol(\mathcal{X}_{n}(a,b)) = -n!\left(\frac{b}{2}\right)^n \sum_{i=0}^n \binom{n}{i} (2i-3)!! \left(2n-1 + \frac{2a-2}{b}\right)^{n-i}. \nonumber
\end{equation}
\end{theorem}
\noindent The proof of Theorem \ref{thm:generalized_closed_form_volume} uses tools from analytic combinatorics and analytic number theory, e.g., Ramanujan's Master Theorem.
By determining when partial permutahedra are integrally equivalent to $\mathbf{x}$-parking function polytopes, we also prove a conjecture of Behrend et al. \cite{BCC}, as Corollary \ref{cor:conjecture}.
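As a quick sanity check of Theorem \ref{thm:generalized_closed_form_volume}, the formula can be evaluated exactly in rational arithmetic; the sketch below assumes the standard extension $(-1)!!=1$ and $(-3)!!=-1$ of the double factorial, and recovers $\nVol(\mathsf{PF}_1)=0$ (a point) and $\nVol(\mathsf{PF}_2)=1$ (a unimodular triangle):

```python
from fractions import Fraction
from math import comb, factorial

def ddf(m):
    # odd double factorial with the standard extension (-1)!! = 1, (-3)!! = -1
    if m == -3:
        return -1
    result = 1
    while m > 1:
        result *= m
        m -= 2
    return result

def nvol(n, a, b):
    t = 2*n - 1 + Fraction(2*a - 2, b)
    s = sum(comb(n, i)*ddf(2*i - 3)*t**(n - i) for i in range(n + 1))
    return -factorial(n)*Fraction(b, 2)**n*s

assert nvol(1, 1, 1) == 0    # PF_1 is a point
assert nvol(2, 1, 1) == 1    # PF_2 is a triangle of normalized volume 1
print(nvol(3, 1, 1))         # -> 24
```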
In Section \ref{sec:weakly_increasing}, we introduce weakly increasing $\mathbf{x}$-parking functions, which are $\mathbf{x}$-parking functions that are in weakly increasing order $a_1 \leq a_2 \leq \cdots \leq a_n$ without any rearrangement.
We explore a subpolytope of the $\mathbf{x}$-parking function polytope which is constructed as the convex hull of weakly increasing $\mathbf{x}$-parking functions.
We show that the convex hull of weakly increasing $\mathbf{x}$-parking functions is integrally equivalent to certain Pitman-Stanley polytopes.
In the literature, the Pitman-Stanley polytope has been called the ``parking function polytope'' as its volume is related to parking functions.
However, we will call the convex hull of any generalization of parking functions a parking function polytope.
We conclude with Section \ref{sec:future} where we present some routes for further study on unimodular triangulations, Ehrhart theory, other generalizations where $\mathbf{x} \neq (a, b, \ldots, b)$, and rational parking functions.
\section{The classical parking function polytope}\label{sec:classical}
The classical parking function polytope was first studied in \cite{AW}. As mentioned in the introduction, a \textit{classical parking function} of length $n$ is a list $(a_1, a_2, \ldots, a_n)$ of positive integers whose nondecreasing rearrangement $b_1 \leq b_2 \leq \cdots \leq b_n$ satisfies $b_i \leq i$.
Let $\mathsf{PF}_n$ denote the convex hull of all parking functions of length $n$ in $\mathbb{R}^n$, which we call the \textit{classical parking function polytope}.
For example, the classical parking functions of length $3$ are $111$, $112$, $121$, $211$, $113$, $131$, $311$, $122$, $212$, $221$, $123$, $132$, $231$, $213$, $312$, and $321$.
For $\mathsf{PF}_3$, their convex hull in $\mathbb{R}^3$, see Figure \ref{fig:PF_3} (left).
\begin{figure}[h]
\centering
\begin{tikzpicture}%
[x={(-0.702073cm, -0.395494cm)},
y={(0.711985cm, -0.374594cm)},
z={(-0.013069cm, 0.838608cm)},
scale=1.9,
back/.style={loosely dotted, thin},
edge/.style={color=black, thick},
facet/.style={fill=andresblue,fill opacity=0.500000},
vertex/.style={inner sep=1pt,circle,draw=andrespink,fill=andrespink,thick}]
\coordinate (1.00000, 1.00000, 1.00000) at (1.00000, 1.00000, 1.00000);
\coordinate (3.00000, 2.00000, 1.00000) at (3.00000, 2.00000, 1.00000);
\coordinate (1.00000, 1.00000, 3.00000) at (1.00000, 1.00000, 3.00000);
\coordinate (3.00000, 1.00000, 2.00000) at (3.00000, 1.00000, 2.00000);
\coordinate (3.00000, 1.00000, 1.00000) at (3.00000, 1.00000, 1.00000);
\coordinate (1.00000, 2.00000, 3.00000) at (1.00000, 2.00000, 3.00000);
\coordinate (1.00000, 3.00000, 1.00000) at (1.00000, 3.00000, 1.00000);
\coordinate (1.00000, 3.00000, 2.00000) at (1.00000, 3.00000, 2.00000);
\coordinate (2.00000, 3.00000, 1.00000) at (2.00000, 3.00000, 1.00000);
\coordinate (2.00000, 1.00000, 3.00000) at (2.00000, 1.00000, 3.00000);
\draw[edge,back] (1.00000, 1.00000, 1.00000) -- (1.00000, 1.00000, 3.00000);
\draw[edge,back] (1.00000, 1.00000, 1.00000) -- (3.00000, 1.00000, 1.00000);
\draw[edge,back] (1.00000, 1.00000, 1.00000) -- (1.00000, 3.00000, 1.00000);
\node[vertex] at (1.00000, 1.00000, 1.00000) {};
\fill[facet] (1.00000, 3.00000, 2.00000) -- (1.00000, 2.00000, 3.00000) -- (2.00000, 1.00000, 3.00000) -- (3.00000, 1.00000, 2.00000) -- (3.00000, 2.00000, 1.00000) -- (2.00000, 3.00000, 1.00000) -- cycle {};
\fill[facet] (2.00000, 3.00000, 1.00000) -- (1.00000, 3.00000, 1.00000) -- (1.00000, 3.00000, 2.00000) -- cycle {};
\fill[facet] (2.00000, 1.00000, 3.00000) -- (1.00000, 1.00000, 3.00000) -- (1.00000, 2.00000, 3.00000) -- cycle {};
\fill[facet] (3.00000, 1.00000, 1.00000) -- (3.00000, 2.00000, 1.00000) -- (3.00000, 1.00000, 2.00000) -- cycle {};
\draw[edge] (3.00000, 2.00000, 1.00000) -- (3.00000, 1.00000, 2.00000);
\draw[edge] (3.00000, 2.00000, 1.00000) -- (3.00000, 1.00000, 1.00000);
\draw[edge] (3.00000, 2.00000, 1.00000) -- (2.00000, 3.00000, 1.00000);
\draw[edge] (1.00000, 1.00000, 3.00000) -- (1.00000, 2.00000, 3.00000);
\draw[edge] (1.00000, 1.00000, 3.00000) -- (2.00000, 1.00000, 3.00000);
\draw[edge] (3.00000, 1.00000, 2.00000) -- (3.00000, 1.00000, 1.00000);
\draw[edge] (3.00000, 1.00000, 2.00000) -- (2.00000, 1.00000, 3.00000);
\draw[edge] (1.00000, 2.00000, 3.00000) -- (1.00000, 3.00000, 2.00000);
\draw[edge] (1.00000, 2.00000, 3.00000) -- (2.00000, 1.00000, 3.00000);
\draw[edge] (1.00000, 3.00000, 1.00000) -- (1.00000, 3.00000, 2.00000);
\draw[edge] (1.00000, 3.00000, 1.00000) -- (2.00000, 3.00000, 1.00000);
\draw[edge] (1.00000, 3.00000, 2.00000) -- (2.00000, 3.00000, 1.00000);
\node[vertex] at (3.00000, 2.00000, 1.00000) {};
\node[vertex] at (1.00000, 1.00000, 3.00000) {};
\node[vertex] at (3.00000, 1.00000, 2.00000) {};
\node[vertex] at (3.00000, 1.00000, 1.00000) {};
\node[vertex] at (1.00000, 2.00000, 3.00000) {};
\node[vertex] at (1.00000, 3.00000, 1.00000) {};
\node[vertex] at (1.00000, 3.00000, 2.00000) {};
\node[vertex] at (2.00000, 3.00000, 1.00000) {};
\node[vertex] at (2.00000, 1.00000, 3.00000) {};
\end{tikzpicture}
\qquad\qquad
\begin{tikzpicture}[x = {(0.9cm,-0.076cm)},
y = {(-0.06cm,0.95cm)},
z = {(-0.44cm,-0.29cm)},
scale = 1.5,
]
\definecolor{pointcolor_p}{rgb}{ 1,0,1 }
\tikzstyle{pointstyle_p} = [fill=pointcolor_p]
\coordinate (v0_p) at (1, 1, 4);
\coordinate (v1_p) at (1, 4, 1);
\coordinate (v2_p) at (4, 1, 1);
\coordinate (v3_p) at (0.25, 0.25, 0.25);
\coordinate (v4_p) at (1, 3, 4);
\coordinate (v5_p) at (1, 4, 3);
\coordinate (v6_p) at (3, 1, 4);
\coordinate (v7_p) at (3, 4, 1);
\coordinate (v8_p) at (4, 1, 3);
\coordinate (v9_p) at (4, 3, 1);
\coordinate (v10_p) at (0.333333, 0.333333, 1.33333);
\coordinate (v11_p) at (0.333333, 1.33333, 0.333333);
\coordinate (v12_p) at (1.33333, 0.333333, 0.333333);
\coordinate (v13_p) at (0.25, 0.25, 0.75);
\coordinate (v14_p) at (0.25, 0.75, 0.25);
\coordinate (v15_p) at (0.75, 0.25, 0.25);
\coordinate (v16_p) at (2, 3, 4);
\coordinate (v17_p) at (2, 4, 3);
\coordinate (v18_p) at (3, 2, 4);
\coordinate (v19_p) at (3, 4, 2);
\coordinate (v20_p) at (4, 2, 3);
\coordinate (v21_p) at (4, 3, 2);
\coordinate (v22_p) at (0.5, 1.5, 2);
\coordinate (v23_p) at (0.5, 2, 1.5);
\coordinate (v24_p) at (1.5, 0.5, 2);
\coordinate (v25_p) at (1.5, 2, 0.5);
\coordinate (v26_p) at (2, 0.5, 1.5);
\coordinate (v27_p) at (2, 1.5, 0.5);
\coordinate (v28_p) at (0.333333, 0.666667, 1.33333);
\coordinate (v29_p) at (0.333333, 1.33333, 0.666667);
\coordinate (v30_p) at (0.666667, 0.333333, 1.33333);
\coordinate (v31_p) at (0.666667, 1.33333, 0.333333);
\coordinate (v32_p) at (1.33333, 0.333333, 0.666667);
\coordinate (v33_p) at (1.33333, 0.666667, 0.333333);
\coordinate (v34_p) at (0.25, 0.5, 0.75);
\coordinate (v35_p) at (0.25, 0.75, 0.5);
\coordinate (v36_p) at (0.5, 0.25, 0.75);
\coordinate (v37_p) at (0.5, 0.75, 0.25);
\coordinate (v38_p) at (0.75, 0.25, 0.5);
\coordinate (v39_p) at (0.75, 0.5, 0.25);
\definecolor{edgecolor_p}{rgb}{ 0,0,0 }
\tikzstyle{facestyle_p} = [fill=andresblue,fill opacity=0.500000, draw=edgecolor_p, line width=1 pt, line cap=round, line join=round]
\draw[facestyle_p] (v6_p) -- (v0_p) -- (v10_p) -- (v30_p) -- (v24_p) -- (v6_p) -- cycle;
\draw[facestyle_p] (v2_p) -- (v8_p) -- (v26_p) -- (v32_p) -- (v12_p) -- (v2_p) -- cycle;
\draw[facestyle_p] (v8_p) -- (v6_p) -- (v24_p) -- (v26_p) -- (v8_p) -- cycle;
\draw[facestyle_p] (v0_p) -- (v4_p) -- (v22_p) -- (v28_p) -- (v10_p) -- (v0_p) -- cycle;
\draw[facestyle_p] (v36_p) -- (v13_p) -- (v3_p) -- (v15_p) -- (v38_p) -- (v36_p) -- cycle;
\draw[facestyle_p] (v30_p) -- (v10_p) -- (v13_p) -- (v36_p) -- (v30_p) -- cycle;
\draw[facestyle_p] (v24_p) -- (v30_p) -- (v36_p) -- (v38_p) -- (v32_p) -- (v26_p) -- (v24_p) -- cycle;
\fill[pointcolor_p] (v24_p) circle (1 pt);
\node at (v24_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\fill[pointcolor_p] (v30_p) circle (1 pt);
\node at (v30_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\fill[pointcolor_p] (v36_p) circle (1 pt);
\node at (v36_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\fill[pointcolor_p] (v26_p) circle (1 pt);
\node at (v26_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\draw[facestyle_p] (v5_p) -- (v1_p) -- (v11_p) -- (v29_p) -- (v23_p) -- (v5_p) -- cycle;
\draw[facestyle_p] (v12_p) -- (v32_p) -- (v38_p) -- (v15_p) -- (v12_p) -- cycle;
\fill[pointcolor_p] (v32_p) circle (1 pt);
\node at (v32_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\fill[pointcolor_p] (v38_p) circle (1 pt);
\node at (v38_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\draw[facestyle_p] (v35_p) -- (v14_p) -- (v3_p) -- (v13_p) -- (v34_p) -- (v35_p) -- cycle;
\draw[facestyle_p] (v4_p) -- (v5_p) -- (v23_p) -- (v22_p) -- (v4_p) -- cycle;
\draw[facestyle_p] (v10_p) -- (v28_p) -- (v34_p) -- (v13_p) -- (v10_p) -- cycle;
\fill[pointcolor_p] (v10_p) circle (1 pt);
\node at (v10_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\fill[pointcolor_p] (v13_p) circle (1 pt);
\node at (v13_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\draw[facestyle_p] (v29_p) -- (v11_p) -- (v14_p) -- (v35_p) -- (v29_p) -- cycle;
\draw[facestyle_p] (v23_p) -- (v29_p) -- (v35_p) -- (v34_p) -- (v28_p) -- (v22_p) -- (v23_p) -- cycle;
\fill[pointcolor_p] (v23_p) circle (1 pt);
\node at (v23_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\fill[pointcolor_p] (v29_p) circle (1 pt);
\node at (v29_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\fill[pointcolor_p] (v35_p) circle (1 pt);
\node at (v35_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\fill[pointcolor_p] (v34_p) circle (1 pt);
\node at (v34_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\fill[pointcolor_p] (v28_p) circle (1 pt);
\node at (v28_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\fill[pointcolor_p] (v22_p) circle (1 pt);
\node at (v22_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\draw[facestyle_p] (v9_p) -- (v2_p) -- (v12_p) -- (v33_p) -- (v27_p) -- (v9_p) -- cycle;
\draw[facestyle_p] (v1_p) -- (v7_p) -- (v25_p) -- (v31_p) -- (v11_p) -- (v1_p) -- cycle;
\draw[facestyle_p] (v37_p) -- (v39_p) -- (v15_p) -- (v3_p) -- (v14_p) -- (v37_p) -- cycle;
\fill[pointcolor_p] (v3_p) circle (1 pt);
\node at (v3_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\draw[facestyle_p] (v7_p) -- (v9_p) -- (v27_p) -- (v25_p) -- (v7_p) -- cycle;
\draw[facestyle_p] (v33_p) -- (v12_p) -- (v15_p) -- (v39_p) -- (v33_p) -- cycle;
\fill[pointcolor_p] (v12_p) circle (1 pt);
\node at (v12_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\fill[pointcolor_p] (v15_p) circle (1 pt);
\node at (v15_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\draw[facestyle_p] (v11_p) -- (v31_p) -- (v37_p) -- (v14_p) -- (v11_p) -- cycle;
\fill[pointcolor_p] (v11_p) circle (1 pt);
\node at (v11_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\fill[pointcolor_p] (v14_p) circle (1 pt);
\node at (v14_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\draw[facestyle_p] (v25_p) -- (v27_p) -- (v33_p) -- (v39_p) -- (v37_p) -- (v31_p) -- (v25_p) -- cycle;
\fill[pointcolor_p] (v25_p) circle (1 pt);
\node at (v25_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\fill[pointcolor_p] (v27_p) circle (1 pt);
\node at (v27_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\fill[pointcolor_p] (v33_p) circle (1 pt);
\node at (v33_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\fill[pointcolor_p] (v39_p) circle (1 pt);
\node at (v39_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\fill[pointcolor_p] (v37_p) circle (1 pt);
\node at (v37_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\fill[pointcolor_p] (v31_p) circle (1 pt);
\node at (v31_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\draw[facestyle_p] (v17_p) -- (v19_p) -- (v7_p) -- (v1_p) -- (v5_p) -- (v17_p) -- cycle;
\fill[pointcolor_p] (v1_p) circle (1 pt);
\node at (v1_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\draw[facestyle_p] (v9_p) -- (v21_p) -- (v20_p) -- (v8_p) -- (v2_p) -- (v9_p) -- cycle;
\fill[pointcolor_p] (v2_p) circle (1 pt);
\node at (v2_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\draw[facestyle_p] (v19_p) -- (v21_p) -- (v9_p) -- (v7_p) -- (v19_p) -- cycle;
\fill[pointcolor_p] (v9_p) circle (1 pt);
\node at (v9_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\fill[pointcolor_p] (v7_p) circle (1 pt);
\node at (v7_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\draw[facestyle_p] (v18_p) -- (v16_p) -- (v4_p) -- (v0_p) -- (v6_p) -- (v18_p) -- cycle;
\fill[pointcolor_p] (v0_p) circle (1 pt);
\node at (v0_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\draw[facestyle_p] (v16_p) -- (v17_p) -- (v5_p) -- (v4_p) -- (v16_p) -- cycle;
\fill[pointcolor_p] (v5_p) circle (1 pt);
\node at (v5_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\fill[pointcolor_p] (v4_p) circle (1 pt);
\node at (v4_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\draw[facestyle_p] (v20_p) -- (v18_p) -- (v6_p) -- (v8_p) -- (v20_p) -- cycle;
\fill[pointcolor_p] (v6_p) circle (1 pt);
\node at (v6_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\fill[pointcolor_p] (v8_p) circle (1 pt);
\node at (v8_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\draw[facestyle_p] (v20_p) -- (v21_p) -- (v19_p) -- (v17_p) -- (v16_p) -- (v18_p) -- (v20_p) -- cycle;
\fill[pointcolor_p] (v20_p) circle (1 pt);
\node at (v20_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\fill[pointcolor_p] (v21_p) circle (1 pt);
\node at (v21_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\fill[pointcolor_p] (v19_p) circle (1 pt);
\node at (v19_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\fill[pointcolor_p] (v17_p) circle (1 pt);
\node at (v17_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\fill[pointcolor_p] (v16_p) circle (1 pt);
\node at (v16_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\fill[pointcolor_p] (v18_p) circle (1 pt);
\node at (v18_p) [text=black, inner sep=0.5pt, above right, draw=none, align=left] {};
\end{tikzpicture}
\caption{On the left, we have the classical parking function polytope $\mathsf{PF}_3$, where the hexagonal facet is the regular permutahedron $\Pi_3$ and the three triangular facets are copies of $\mathsf{PF}_2$.
On the right, we have the Schlegel diagram of $\mathsf{PF}_4$. }
\label{fig:PF_3}
\end{figure}
\subsection{Face structure}
We are able to say more about its face structure by using the regular permutahedron, which we now define.
\begin{definition}[Example 0.10, \cite{Ziegler}; Definition 2.1, \cite{Pos}]\label{def:permutahedron}
The \textit{regular permutahedron} $\Pi_{n} \subseteq \mathbb{R}^n$ is the $(n-1)$-dimensional polytope defined as the convex hull of all vectors obtained by permuting the coordinates of the vector $(1,2,\ldots,n)$.
Its vertices can be identified with the permutations in $S_n$ by associating with $(x_1, x_2, \ldots, x_n)$ the permutation that maps $x_i \mapsto i$; two vertices are then adjacent if and only if the corresponding permutations differ by an adjacent transposition.
More generally, for $\mathbf{r}:=(r_1, \ldots, r_n) \in \mathbb{R}^n$, a \textit{permutahedron} $\Pi_n(\mathbf{r})$ is the convex hull of all vectors that are obtained by permuting the coordinates of the vector $\mathbf{r}$.
The defining equality and inequalities for $\Pi_n$ (Proposition 2.5, \cite{Pos}; \cite{Rad}) are \[x_1 +\cdots +x_n = \frac{n(n+1)}{2},\] and, for all nonempty subsets $\{i_1, \ldots, i_k\} \subseteq \{1, \ldots, n\}$, \[x_{i_1} + \cdots +x_{i_k} \leq n + \cdots + (n-k+1) = \frac{n(n+1)}{2} - \frac{(n-k)(n-k+1)}{2}.\]
\end{definition}
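The hyperplane description above lends itself to a direct computational check for small $n$. The following Python sketch (an illustration, not part of the formal development) enumerates the $n!$ vertices of $\Pi_n$ and verifies the equality together with all subset-sum inequalities:

```python
from itertools import combinations, permutations

def permutahedron_vertices(n):
    """All n! vertices of the regular permutahedron Pi_n."""
    return set(permutations(range(1, n + 1)))

def satisfies_description(v, n):
    """Check the defining equality and, for every nonempty subset S,
    the inequality sum_{i in S} x_i <= n + (n-1) + ... + (n-|S|+1)."""
    if sum(v) != n * (n + 1) // 2:
        return False
    for k in range(1, n + 1):
        bound = sum(range(n - k + 1, n + 1))
        for S in combinations(range(n), k):
            if sum(v[i] for i in S) > bound:
                return False
    return True

n = 4
verts = permutahedron_vertices(n)
print(len(verts), all(satisfies_description(v, n) for v in verts))
```

For $n = 4$ this prints \texttt{24 True}.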
\begin{example}
Consider the classical parking function polytope $\mathsf{PF}_3$ and observe that one facet is the hexagon with vertices $(1,2,3)$, $(1,3,2)$, $(2,3,1)$, $(2,1,3)$, $(3,1,2)$, and $(3,2,1)$.
This aligns exactly with the description of the regular permutahedron $\Pi_3$.
Note that $\Pi_3$ is the intersection of $\mathsf{PF}_3$ with the supporting hyperplane defined by $x_1+x_2+x_3 = \frac{3(3+1)}{2}=6$. See Figure \ref{fig:PF_3} (left).
\end{example}
\begin{example}
For $\mathsf{PF}_4$, there is only one three-dimensional face that is the convex hull of 24 vertices, which are exactly the $4!$ permutations of $(1,2,3,4)$; it is $\Pi_4$.
Similarly, we can find that there are exactly 8 two-dimensional faces that are convex hulls of 6 vertices: these are the 8 hexagonal faces (and thus 2-dimensional permutahedra), which are facets of the three-dimensional permutahedron $\Pi_4$. See Figure \ref{fig:PF_3} (right) for the Schlegel diagram of $\mathsf{PF}_4$.
\end{example}
We have the following result on when the permutahedron appears as a facet of the classical parking function polytope.
\begin{proposition}\label{prop:permutahedron}
The regular permutahedron appears as a facet of the classical parking function polytope exactly once.
\end{proposition}
\begin{proof}
By the definition of the parking function polytope $\mathsf{PF}_n$, it is the convex hull of all vertices given by permutations of \[ (\underbrace{1, \ldots, 1}_k, \underbrace{k+1, k+2, \ldots, n-1, n}_{n-k}),\] for $1 \leq k \leq n$ (here, $k=0$ would be superfluous, as it gives the same vector as $k=1$).
Each permutation of $(1,2,\ldots,n)$ appears as a vertex.
Thus, the convex hull of these $n!$ vertices, which is exactly $\Pi_n$, is contained within $\mathsf{PF}_n$.
We now use the hyperplane description of $\mathsf{PF}_n$.
Consider the hyperplane $H$ defined by $x_{1}+ x_{2}+ \cdots + x_{n} = \frac{n(n+1)}{2} $.
We claim $H\cap \mathsf{PF}_n = \Pi_{n}$.
Since all permutations of $(1,2,\ldots, n)$ satisfy $1+2+\cdots+n = \frac{n(n+1)}{2}$, the vertices that give the vertex description of $\Pi_{n}$ are in $H \cap \mathsf{PF}_n$.
By taking their convex hull, it follows that $\Pi_n\subseteq H \cap \mathsf{PF}_n$.
Now, suppose $\mathbf{x}:=(x_1, x_2, \ldots, x_n)$ is a point in $H \cap \mathsf{PF}_n$.
As we are dealing with a subset of a polytope intersecting a hyperplane, $H \cap \mathsf{PF}_n$ is a polytope of dimension (at most) $n-1$.
Suppose towards a contradiction that $\mathbf{x}$ is not in $\Pi_{n}$.
This means that by the defining inequalities for $\Pi_n$,
\begin{enumerate}
\item $x_1 + \cdots + x_n \neq \frac{n(n+1)}{2}$, a contradiction, or
\item there exists some nonempty subset $\{i_1, \ldots, i_k\} \subseteq \{1, \ldots, n\}$ such that
\[x_{i_1} + \cdots + x_{i_k} > n + \cdots + (n-k+1) = \frac{n(n+1)}{2} - \frac{(n-k)(n-k+1)}{2}.\]
\end{enumerate}
But (2) is not allowed by the defining inequalities for a parking function polytope.
Hence, $\mathbf{x}$ is in $\Pi_n$, so $\Pi_n = H \cap \mathsf{PF}_n$ is a facet of $\mathsf{PF}_n$.
To show uniqueness, it suffices to show that the only hyperplane in the inequality description that corresponds to a facet with $n!$ vertices is $H$.
Assume a facet with $n!$ vertices corresponds to a hyperplane of the form \[x_{i_1}+\cdots + x_{i_k} = n + \cdots + (n+1-k), \text{ where } 1\leq k\leq n-2. \]
(A hyperplane of the form $x_i = 1$ is satisfied by only $\sum_{m=0}^{n-1}\frac{(n-1)!}{m!} < n!$ vertices when $n \geq 3$, so it cannot support such a facet; the case $n=2$ is checked directly.)
By the vertex description of $\mathsf{PF}_n$, the number of vertices that satisfy this equation is given by
$$k! \sum_{m=1}^{n-k}\frac{(n-k)!}{m!},$$
where $m$ is the number of coordinates in the vertex that have value 1.
Note that since $1\leq k \leq n-2$, it follows that $ n = \binom{n}{1} \leq \binom{n}{k}$.
Rearranging the inequality, we get that $n\cdot k!(n-k)!\leq n!$.
We can also see that $$\frac{1}{1}+\frac{1}{2!}+\cdots +\frac{1}{(n-k)!} < (n-k) < n,$$ using that $n-k \geq 2$.
Hence, $$k! (n-k)! \sum\limits_{m=1}^{n-k}\frac{1}{m!}< n\cdot k!(n-k)! \leq n!.$$
Therefore, $H$ is the only possible hyperplane in the inequality description of $\mathsf{PF}_n$ that corresponds to a facet with $n!$ many vertices.
\end{proof}
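The vertex counts used in the uniqueness argument can likewise be verified by brute force. The Python sketch below (illustrative only) builds the vertex set of $\mathsf{PF}_n$ from the permutations of $(1,\ldots,1,k+1,\ldots,n)$ and counts the vertices lying on a given supporting hyperplane:

```python
from itertools import permutations
from math import factorial

def pf_vertices(n):
    """Vertices of PF_n: permutations of (1, ..., 1, k+1, ..., n) with k ones."""
    verts = set()
    for k in range(1, n + 1):
        base = (1,) * k + tuple(range(k + 1, n + 1))
        verts.update(permutations(base))
    return verts

def tight_count(n, k):
    """Vertices of PF_n on x_1 + ... + x_k = n + ... + (n-k+1); by symmetry,
    any k-subset of coordinates gives the same count."""
    bound = sum(range(n - k + 1, n + 1))
    return sum(1 for v in pf_vertices(n) if sum(v[:k]) == bound)

n = 4
print(tight_count(n, n))  # the full-sum hyperplane: exactly n! = 24 vertices
for k in range(1, n - 1):
    # the proof predicts k! * sum_{m=1}^{n-k} (n-k)!/m! tight vertices
    predicted = factorial(k) * sum(factorial(n - k) // factorial(m)
                                   for m in range(1, n - k + 1))
    print(k, tight_count(n, k), predicted)
```

For $n = 4$, the counts $10$ and $6$ for $k = 1, 2$ match the formula, and only the full-sum hyperplane attains $n! = 24$ vertices.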
The following lemma allows us to establish a lower bound for the number of permutahedra of any dimension in the parking function polytope.
\begin{lemma}
The $(n-2)$-dimensional permutahedron $\Pi_{n-1}$ appears as a facet of $\Pi_{n}$ exactly $2n$ times.
\end{lemma}
\begin{proof}
Consider the defining inequalities of $\Pi_{n}$ given in Definition \ref{def:permutahedron}.
We claim that the facet-defining inequalities whose facets are copies of $\Pi_{n-1}$ are those of the form \[x_i \leq n \text{ and } x_{i_1} + \cdots + x_{i_{n-1}}\leq n + (n-1) + \cdots +2,\] of which there are $n$ of each type, giving $2n$ in total.
We can see that for any $i$, the vertices of $\Pi_n$ that satisfy $x_i = n$ are those where the $i$-th coordinate is $n$ and all the other coordinates can be written as a permutation of $(1,2,\dots, n-1)$.
This facet is exactly $\Pi_{n-1}$.
Similarly, we can see the vertices that satisfy \[x_{i_1} + \cdots + x_{i_{n-1}} = n + (n-1) + \cdots +2, \text{ where } \{i_1,\dots, i_{n-1}\} = \{1,\ldots, n\} \setminus \{k\}, \text{ for some } k,\] are vertices where the $k$-th coordinate is 1 and all the other coordinates can be written as a permutation of $(2,\dots, n)$.
Thus, facets that correspond to hyperplanes of these forms are $\Pi_{n-1}$.
To show that these are the only hyperplanes that can correspond to $\Pi_{n-1}$, we will show that the only hyperplanes that can correspond to a facet with $(n-1)!$ vertices are the ones mentioned above.
The proof is similar to the uniqueness proof of Proposition \ref{prop:permutahedron}.
For a hyperplane $x_{i_1} + \cdots +x_{i_k} = n + \cdots + (n-k+1)$ involving $k$ variables, we can see that there are $k!(n-k)!$ vertices of $\Pi_n$ that satisfy it. If $k$ is neither equal to $1$ nor $n-1$, it follows that $k!(n-k)!<(n-1)!$.
Hence, if a hyperplane with $k$ many variables corresponds to $\Pi_{n-1}$, which has $(n-1)!$ vertices, it follows that $k$ equals $1$ or $n-1$.
\end{proof}
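For small $n$, the count of $2n$ can also be confirmed directly by checking which facet-defining inequalities of $\Pi_n$ are tight on exactly $(n-1)!$ vertices. A Python sketch (illustrative only):

```python
from itertools import combinations, permutations
from math import factorial

def pi_vertices(n):
    """All n! vertices of the regular permutahedron Pi_n."""
    return list(permutations(range(1, n + 1)))

def facet_vertex_count(n, S):
    """Number of vertices of Pi_n tight on sum_{i in S} x_i <= n + ... + (n-|S|+1)."""
    bound = sum(range(n - len(S) + 1, n + 1))
    return sum(1 for v in pi_vertices(n) if sum(v[i] for i in S) == bound)

n = 4
copies = 0
for k in range(1, n):  # nonempty proper subsets index the facets
    for S in combinations(range(n), k):
        if facet_vertex_count(n, S) == factorial(n - 1):
            copies += 1
print(copies)  # the lemma predicts 2n copies of Pi_{n-1}
```

For $n = 4$ this prints \texttt{8}: the four singleton subsets and the four $(n-1)$-subsets each contribute a copy of $\Pi_3$.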
\begin{proposition}
The $(n-1)$-dimensional parking function polytope $\mathsf{PF}_{n-1}$ appears as a facet of the $n$-dimensional parking function polytope $\mathsf{PF}_n$ exactly $n$ times.
\end{proposition}
\begin{proof}
Consider the inequality description of $\mathsf{PF}_n$:
\begin{align}
1 \leq x_i &\leq n, &\text{ for } 1 \leq i \leq n,\label{eq:1}\\
x_i+x_j &\leq n + (n-1), &\text{ for } i < j,\label{eq:2}\\
x_i+x_j+x_k &\leq n + (n-1) + (n-2), &\text{ for } i < j < k,\label{eq:3}\\
&\vdots\nonumber\\
x_{i_1}+ x_{i_2}+ \cdots + x_{i_{n-2}} &\leq n + (n-1) + \cdots + 3, &\text{ for } i_1 < i_2 < \cdots < i_{n-2},\label{eq:n-2}\\
x_{1}+ x_{2}+ \cdots + x_{n} &\leq n + (n-1) + \cdots + 1.\label{eq:n}
\end{align}
Fix $x_i = n$ for some $i$; without loss of generality say $i=n$.
Then we proceed to reduce the system of inequalities.
For all $1 \leq i \leq n-1$, we still have $1 \leq x_i$, but if any \[x_i > n - 1, \text{ then } x_i + x_n > n + (n-1),\] which contradicts (\ref{eq:2}).
Thus, we have $1 \leq x_i \leq n-1$ for $1 \leq i \leq n-1$.
Next, for $i < j < n$, if \[x_i + x_j > (n-1) + (n-2), \text{ then } x_i + x_j + x_n > n + (n-1) + (n-2),\] which contradicts (\ref{eq:3}).
Thus, we have $x_i + x_j \leq (n-1) + (n-2)$ for $i < j < n$.
Continuing this process, we can refine the inequalities up through: for $i_1 < i_2 < \cdots < i_{n-3} < n$, if \[x_{i_1} + x_{i_2} + \cdots + x_{i_{n-3}} > (n-1) + (n-2) + \cdots + 3,\] then \[x_{i_1} + x_{i_2} + \cdots + x_{i_{n-3}} + x_n > n + (n-1) + (n-2) + \cdots +3,\] which contradicts (\ref{eq:n-2}).
Thus, we have \[x_{i_1} + x_{i_2} + \cdots + x_{i_{n-3}} \leq (n-1) + (n-2) + \cdots + 3.\]
Lastly, consider if $x_1 + x_2 + \cdots + x_{n-1} > (n-1) + \cdots +1$.
Then \[x_1 + x_2 + \cdots + x_{n-1} + x_n > n+ (n-1) + \cdots +1,\] which contradicts (\ref{eq:n}).
Thus, \[x_1 + x_2 + \cdots + x_{n-1} \leq (n-1) + \cdots +1.\]
Collecting these results gives the following inequality description.
\begin{align*}
1 \leq x_i &\leq n-1, &\text{ for } 1 \leq i \leq n-1,\\
x_i+x_j &\leq (n-1) + (n-2), &\text{ for } i < j < n,\\
x_i+x_j+x_k &\leq (n-1) + (n-2) + (n-3), &\text{ for } i < j < k < n,\\
&\vdots\\
x_{i_1}+ x_{i_2}+ \cdots + x_{i_{n-3}} &\leq (n-1) + (n-2) + \cdots + 3, &\text{ for } i_1 < i_2 < \cdots < i_{n-3} < n,\\
x_{1}+ x_{2}+ \cdots + x_{n-1} &\leq (n-1) + \cdots + 1.
\end{align*}
This is exactly the inequality description for $\mathsf{PF}_{n-1}$.
Hence, $\mathsf{PF}_{n-1}$ is a facet of $\mathsf{PF}_n$.
Now, as the choice of $i$ was arbitrary, there are $n$ choices for $i$, so there are $n$ copies of $\mathsf{PF}_{n-1}$ appearing as facets of $\mathsf{PF}_n$.
Now we will show that these are the only occurrences.
We know that $\mathsf{PF}_{n-1}$ has $(n-1)!\sum\limits_{m=1}^{n-1} \frac{1}{m!}$ many vertices.
Similarly to the proof of Proposition \ref{prop:permutahedron}, we can see that for a hyperplane with $k$ variables, there are $k!(n-k)!\sum\limits_{m=1}^{n-k} \frac{1}{m!}$ many vertices that satisfy it.
If $k$ is not equal to $1$ or $n$, we have $n\leq \binom{n}{k}$, thus $k!(n-k)!\leq (n-1)!$.
Then since $$k!(n-k)!\sum\limits_{m=1}^{n-k} \frac{1}{m!} < k!(n-k)!\sum\limits_{m=1}^{n-1} \frac{1}{m!} \leq (n-1)! \sum\limits_{m=1}^{n-1} \frac{1}{m!},$$ these hyperplanes cannot correspond to a facet with $(n-1)!\sum\limits_{m=1}^{n-1} \frac{1}{m!}$ many vertices.
If $k =n$, then the vertices that satisfy the equation are exactly all permutations of $(1,2,\dots, n)$, which is $n! > (n-1)! \sum\limits_{m=1}^{n-1} \frac{1}{m!}$ many vertices, hence this hyperplane cannot correspond to $\mathsf{PF}_{n-1}$.
For a hyperplane of the form $x_i = 1$, there are $\sum_{m=0}^{n-1}\frac{(n-1)!}{m!}$ many vertices that satisfy the equation; since this differs from $\sum_{m=1}^{n-1} \frac{(n-1)!}{m!}$, it also cannot correspond to $\mathsf{PF}_{n-1}$.
Thus, the only hyperplanes that support a facet of the form $\mathsf{PF}_{n-1}$ are the $n$ mentioned above.
\end{proof}
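The slicing argument in this proof is easy to test computationally: fixing $x_n = n$ in the vertex set of $\mathsf{PF}_n$ and dropping the last coordinate should recover the vertex set of $\mathsf{PF}_{n-1}$. A Python sketch (illustrative only):

```python
from itertools import permutations

def pf_vertices(n):
    """Vertices of PF_n: permutations of (1, ..., 1, k+1, ..., n) with k ones."""
    verts = set()
    for k in range(1, n + 1):
        base = (1,) * k + tuple(range(k + 1, n + 1))
        verts.update(permutations(base))
    return verts

n = 4
# fix x_n = n and drop that coordinate, as in the proof
sliced = {v[:-1] for v in pf_vertices(n) if v[-1] == n}
print(sliced == pf_vertices(n - 1))
```

For $n = 4$ this prints \texttt{True}: the slice consists of the $10$ vertices of $\mathsf{PF}_3$.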
\subsection{Volume}
Next, we consider a collection of related polytopes, called partial permutahedra, which were first introduced in \cite{HS} in terms of partial permutation matrices; a recursive volume formula was presented in \cite{BCC}.
For our purposes, it suffices to consider their vertex description.
\begin{definition}[Proposition 5.7, \cite{HS}; Proposition 2.6, \cite{BCC}]\label{def:pp-vertices}
The \emph{partial permutahedron} $\mathcal{P}(n,p)$ is the polytope with all permutations of the vectors
\[(\underbrace{0, \ldots, 0}_{n-k}, \underbrace{p-k+1, \ldots, p-1, p}_k ),\]
for all $0 \leq k \leq \min(n,p)$, as vertices.
\end{definition}
Two integral polytopes $P\subseteq \mathbb{R}^m$ and $Q\subseteq \mathbb{R}^n$ are \emph{integrally equivalent} if there exists an affine transformation $\Phi:\mathbb{R}^m \rightarrow \mathbb{R}^n$ whose restriction to $P$ preserves the lattice. We establish the following simple result relating partial permutahedra and classical parking functions.
\begin{proposition}\label{prop:classical-partial}
The classical parking function polytope $\mathsf{PF}_n$ is integrally equivalent to the partial permutahedron $\mathcal{P}(n, n-1)$.
\end{proposition}
\begin{proof}
Note that the translation $(x_1, x_2, \ldots, x_n) \mapsto (x_1 +1, x_2 +1, \ldots, x_n+1)$ maps the vertices of $\mathcal{P}(n,n-1)$ given in Definition \ref{def:pp-vertices} bijectively onto the vertices of $\mathsf{PF}_n$, and translation by an integer vector preserves the lattice.
\end{proof}
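The translation in this proof can be checked on vertex sets for small $n$; the following Python sketch (illustrative only) compares the shifted vertices of $\mathcal{P}(n,n-1)$ with those of $\mathsf{PF}_n$:

```python
from itertools import permutations

def partial_permutahedron_vertices(n, p):
    """Vertices of P(n, p): permutations of (0, ..., 0, p-k+1, ..., p),
    for 0 <= k <= min(n, p)."""
    verts = set()
    for k in range(min(n, p) + 1):
        base = (0,) * (n - k) + tuple(range(p - k + 1, p + 1))
        verts.update(permutations(base))
    return verts

def pf_vertices(n):
    """Vertices of PF_n: permutations of (1, ..., 1, k+1, ..., n) with k ones."""
    verts = set()
    for k in range(1, n + 1):
        base = (1,) * k + tuple(range(k + 1, n + 1))
        verts.update(permutations(base))
    return verts

n = 4
translated = {tuple(x + 1 for x in v)
              for v in partial_permutahedron_vertices(n, n - 1)}
print(translated == pf_vertices(n))  # True under translation by (1, ..., 1)
```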
We can now establish the equivalence of several different formulas throughout the literature, as they count the normalized volume of the same polytope.
Additionally, while \cite{AW} provided the first answer to Stanley's question \cite{Sta} of finding the volume of $\mathsf{PF}_n$, the following theorem gives the first closed-form answer.
\begin{theorem}\label{thm:main_theorem}
The following are equivalent normalized volume formulas for the classical parking function polytope, where $\nVol(\mathsf{PF}_n) := n!V_n$ denotes the normalized volume:
\begin{enumerate}[(i)]
\item From \cite{AW}, with $\nVol(\mathsf{PF}_{0})=1$ and $\nVol(\mathsf{PF}_{1})=0$, for $n \geq 2$ we have recursively, \[ \nVol(\mathsf{PF}_n) = (n-1)! \sum_{k=0}^{n-1} \binom{n}{k} \frac{(n-k)^{n-k-1}(n+k-1)}{2} \frac{\nVol(\mathsf{PF}_k)}{k!}.\]
\item From \cite{BCC}, for $\mathcal{P}(n,n-1)$, with $\nVol(\mathsf{PF}_{0})=1$ and $\nVol(\mathsf{PF}_{1})=0$, for $n \geq 2$ we have recursively,
\[ \nVol(\mathsf{PF}_n) = (n-1)! \sum_{k=1}^n k^{k-2} \frac{\nVol(\mathsf{PF}_{n-k})}{(n-k)!} \left(k(n-1)- \binom{k}{2}\right) \binom{n}{k}.\]
\item \[\nVol(\mathsf{PF}_n) = - \frac{n!}{2^n}\sum_{i=0}^n \binom{n}{i}(2i-3)!!(2n-1)^{n-i}.\]
\item From \cite{She}, equation (33), for $n \geq 2$, \[\nVol(\mathsf{PF}_n)=\frac{n!}{2^n}\sum_{i=0}^{n} (2i-1)(2i-1)!!\binom{n}{i}(2n-1)^{n-i-1}.\]
\item From \cite{She}, equation (23), for $n \geq 2$, \[\nVol(\mathsf{PF}_n) = n!\frac{n-1}{2^{n-1}}\sum_{i=0}^{n-2}(2i+1)!!\binom{n-2}{i}(2n-1)^{n-i-2}. \]
\item From \cite{She}, $\nVol(\mathsf{PF}_n)$ equals the number of $n \times n$ $(0,1)$-matrices with two 1's in each row that have positive permanent.
\end{enumerate}
\end{theorem}
\begin{proof}
Proposition \ref{prop:classical-partial} implies $(i) \iff (ii)$. In the next section, we show Theorem \ref{thm:generalized_closed_form_volume}, which by taking $a=1$ and $b=1$ implies $(i) \iff (iii)$.
Next, $(iv) \iff (v) \iff (vi)$ is given in \cite{She}.
We finish by showing that $(iii) \iff (iv)$. This is equivalent to showing that their difference is $0$. From $(iv)$, subtract $(iii)$, which gives
\begin{align*}
&\frac{n!}{2^n}\sum_{i=0}^{n} (2i-1)(2i-1)!!\binom{n}{i}(2n-1)^{n-i-1}+ \frac{n!}{2^n}\sum_{i=0}^n \binom{n}{i}(2i-3)!!(2n-1)^{n-i}\\
&= \frac{n!}{2^n}\sum_{i=0}^{n}\binom{n}{i}(2i-3)!!(2n-1)^{n-i-1}[(2i-1)^2 + (2n-1)]\\
&= \frac{n!}{2^n}\sum_{i=0}^{n}\binom{n}{i}\frac{(2i)!}{2^i i! (2i-1)}(2n-1)^{n-i-1}[(2i-1)^2 + (2n-1)].
\end{align*}
We now use Wilf-Zeilberger theory.
We need only show that $f(n) := \sum_{i=0}^n F(n,i)$ is $0$, where
\[ F(n,i) := \binom{n}{i}\frac{(2i)!}{2^i i! (2i-1)}(2n-1)^{n-i-1}[(2i-1)^2 + (2n-1)].\]
Note that as $F(n,i)$ contains a binomial coefficient, the sum $f(n)= \sum_{i \in \mathbb{Z}} F(n,i)$.
We use Zeilberger's creative telescoping algorithm \texttt{ct} in the package \texttt{EKHAD} as described in Chapter 6.5 of \cite{PWZ} and available from \cite{Zei}. Calling
\begin{center}
\begin{BVerbatim}
ct(binomial(n,i)*(2*i)!*(2*n-1)^(n-i-1)*
((2*i-1)^2+(2*n-1))/(2^i*i!*(2*i-1)),0,i,n,N);
\end{BVerbatim}
\end{center}
in Maple gives output $1 \cdot f(n) + 0 \cdot f(n+1)= 0$, that is $f(n) = 0$, with certificate $R(n,i) = (-2n+1)i/(2i^2-2i+n)$. If we wish to check our solution, let $G(n,i) := R(n,i)\cdot F(n,i)$.
Then one can verify that $1 = (G(n,i+1) - G(n,i))/F(n,i)$ via some algebra.
\end{proof}
The sequence for the normalized volume of $\mathsf{PF}_n$ begins \[0, 1, 24, 954, 59040, 5295150, 651354480, 105393619800, 21717404916480, \ldots\] and is OEIS sequence \href{http://oeis.org/A174586}{A174586} \cite{OEIS}.
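These values give a convenient numerical cross-check of Theorem \ref{thm:main_theorem}. The Python sketch below (illustrative only) evaluates formulas (i), (iii), (iv), and (v) with exact rational arithmetic and compares them against the sequence; the standard double-factorial convention $(-1)!! = 1$ and $(-3)!! = -1$ is assumed for the boundary terms of (iii) and (iv):

```python
from fractions import Fraction
from math import comb, factorial

def df(m):
    """Double factorial, with the extension (-1)!! = 1 and (-3)!! = -1."""
    if m == -1:
        return 1
    if m == -3:
        return -1
    r = 1
    while m > 1:
        r *= m
        m -= 2
    return r

def nvol_rec(n):
    """Formula (i), the recursion of Amanbayeva--Wang."""
    if n <= 1:
        return 1 - n  # nVol(PF_0) = 1, nVol(PF_1) = 0
    s = sum(comb(n, k) * (n - k) ** (n - k - 1)
            * Fraction(n + k - 1, 2 * factorial(k)) * nvol_rec(k)
            for k in range(n))
    return factorial(n - 1) * s

def nvol_iii(n):
    """Formula (iii), the closed form."""
    s = sum(comb(n, i) * df(2 * i - 3) * (2 * n - 1) ** (n - i)
            for i in range(n + 1))
    return -Fraction(factorial(n), 2 ** n) * s

def nvol_iv(n):
    """Formula (iv), from Shevelev's equation (33)."""
    s = sum((2 * i - 1) * df(2 * i - 1) * comb(n, i)
            * Fraction(2 * n - 1) ** (n - i - 1) for i in range(n + 1))
    return Fraction(factorial(n), 2 ** n) * s

def nvol_v(n):
    """Formula (v), from Shevelev's equation (23)."""
    s = sum(df(2 * i + 1) * comb(n - 2, i) * (2 * n - 1) ** (n - i - 2)
            for i in range(n - 1))
    return factorial(n) * (n - 1) * Fraction(s, 2 ** (n - 1))

expected = [1, 24, 954, 59040, 5295150, 651354480]
for n, e in zip(range(2, 8), expected):
    assert nvol_rec(n) == nvol_iii(n) == nvol_iv(n) == nvol_v(n) == e
print("formulas (i), (iii), (iv), (v) agree for n = 2..7")
```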
\section{The convex hull of \texorpdfstring{$\mathbf{x}$}{\textbf{x}}-parking functions}\label{sec:x-parking}
Next, we discuss a generalization of the classical parking functions.
Let $\mathbf{x}=(x_1,\dots,x_n)\in \mathbb{Z}_{>0}^n$.
Define an $\mathbf{x}$\emph{-parking function} to be a sequence $(a_1,\dots,a_n)$ of positive integers whose nondecreasing rearrangement $b_1\leq b_2\leq \cdots \leq b_n$ satisfies $b_i\leq x_1+\cdots + x_i$.
As mentioned in \cite{Yan}, from work of Pitman and Stanley \cite{PitmanStanley}, the number of $\mathbf{x}$-parking functions for $\mathbf{x}=(a,b,b,\ldots,b)$ is the following:
\begin{theorem}[Theorem 1, \cite{Yan}]\label{thm:number_x-parking}
For $\mathbf{x}=(a,b,b,\ldots,b) \in \mathbb{Z}_{>0}^n$, the number of $\mathbf{x}$-parking functions is given by $a(a+nb)^{n-1}$.
\end{theorem}
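Theorem \ref{thm:number_x-parking} can be verified by brute-force enumeration for small parameters; the following Python sketch (illustrative only) counts the $\mathbf{x}$-parking functions directly and compares with $a(a+nb)^{n-1}$:

```python
from itertools import product

def count_x_parking(n, a, b):
    """Brute-force count of x-parking functions of length n for x = (a, b, ..., b)."""
    bounds = [a + i * b for i in range(n)]  # nondecreasing rearrangement bound b_i
    total = 0
    for seq in product(range(1, bounds[-1] + 1), repeat=n):
        srt = sorted(seq)
        if all(srt[i] <= bounds[i] for i in range(n)):
            total += 1
    return total

for n, a, b in [(2, 1, 1), (3, 1, 1), (3, 2, 1), (3, 1, 2)]:
    assert count_x_parking(n, a, b) == a * (a + n * b) ** (n - 1)
print("counts match a(a+nb)^(n-1)")
```

In particular, for $n=3$, $a=2$, $b=1$ the enumeration returns $50 = 2\cdot 5^2$.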
The following definition introduces an $n$-dimensional polytope associated to $\mathbf{x}$-parking functions of length $n$ for the specific sequence $\mathbf{x}=(a,b,b,\dots,b)\in \mathbb{Z}_{>0}^n$, which is one of the main objects of study in this paper.
We remark that we focus on $\mathbf{x}=(a,b,b,\dots,b)$ following the work of Yan \cite{Yan2, Yan}.
\begin{definition}
Define the \emph{$\mathbf{x}$-parking function polytope} $\mathcal{X}_{n}(a,b)$ as the convex hull of all $\mathbf{x}$-parking functions of length $n$ in $\mathbb{R}^n$ for $\mathbf{x}=(a,b,b,\dots,b)\in \mathbb{Z}_{>0}^n$. See Figure \ref{fig:x-pfs} for examples.
\end{definition}
\begin{remark}\label{rem:n=1}
Note that if $n=1$, $\mathbf{x} = (a)$, so no $b$ is used.
As a result, we may denote the $\mathbf{x}$-parking function polytope $\mathcal{X}_{1}(a,b)$ alternatively by $\mathcal{X}_{1}(a)$.
\end{remark}
\begin{figure}[h]
\centering
\scalebox{.7}{
\begin{tikzpicture}%
[x={(-0.692569cm, -0.420596cm)},
y={(0.721351cm, -0.403777cm)},
z={(-0.000033cm, 0.812443cm)},
scale=1.000000,
back/.style={loosely dotted, thin},
edge/.style={color=black, thick},
facet/.style={fill=andresblue,fill opacity=0.500000},
vertex/.style={inner sep=1pt,circle,draw=andrespink,fill=andrespink,thick}]
\coordinate (1.00000, 1.00000, 1.00000) at (1.00000, 1.00000, 1.00000);
\coordinate (3.00000, 2.00000, 1.00000) at (3.00000, 2.00000, 1.00000);
\coordinate (1.00000, 1.00000, 3.00000) at (1.00000, 1.00000, 3.00000);
\coordinate (3.00000, 1.00000, 2.00000) at (3.00000, 1.00000, 2.00000);
\coordinate (3.00000, 1.00000, 1.00000) at (3.00000, 1.00000, 1.00000);
\coordinate (1.00000, 2.00000, 3.00000) at (1.00000, 2.00000, 3.00000);
\coordinate (1.00000, 3.00000, 1.00000) at (1.00000, 3.00000, 1.00000);
\coordinate (1.00000, 3.00000, 2.00000) at (1.00000, 3.00000, 2.00000);
\coordinate (2.00000, 3.00000, 1.00000) at (2.00000, 3.00000, 1.00000);
\coordinate (2.00000, 1.00000, 3.00000) at (2.00000, 1.00000, 3.00000);
\draw[edge,back] (1.00000, 1.00000, 1.00000) -- (1.00000, 1.00000, 3.00000);
\draw[edge,back] (1.00000, 1.00000, 1.00000) -- (3.00000, 1.00000, 1.00000);
\draw[edge,back] (1.00000, 1.00000, 1.00000) -- (1.00000, 3.00000, 1.00000);
\node[vertex] at (1.00000, 1.00000, 1.00000) {};
\fill[facet] (1.00000, 3.00000, 2.00000) -- (1.00000, 2.00000, 3.00000) -- (2.00000, 1.00000, 3.00000) -- (3.00000, 1.00000, 2.00000) -- (3.00000, 2.00000, 1.00000) -- (2.00000, 3.00000, 1.00000) -- cycle {};
\fill[facet] (2.00000, 3.00000, 1.00000) -- (1.00000, 3.00000, 1.00000) -- (1.00000, 3.00000, 2.00000) -- cycle {};
\fill[facet] (2.00000, 1.00000, 3.00000) -- (1.00000, 1.00000, 3.00000) -- (1.00000, 2.00000, 3.00000) -- cycle {};
\fill[facet] (3.00000, 1.00000, 1.00000) -- (3.00000, 2.00000, 1.00000) -- (3.00000, 1.00000, 2.00000) -- cycle {};
\draw[edge] (3.00000, 2.00000, 1.00000) -- (3.00000, 1.00000, 2.00000);
\draw[edge] (3.00000, 2.00000, 1.00000) -- (3.00000, 1.00000, 1.00000);
\draw[edge] (3.00000, 2.00000, 1.00000) -- (2.00000, 3.00000, 1.00000);
\draw[edge] (1.00000, 1.00000, 3.00000) -- (1.00000, 2.00000, 3.00000);
\draw[edge] (1.00000, 1.00000, 3.00000) -- (2.00000, 1.00000, 3.00000);
\draw[edge] (3.00000, 1.00000, 2.00000) -- (3.00000, 1.00000, 1.00000);
\draw[edge] (3.00000, 1.00000, 2.00000) -- (2.00000, 1.00000, 3.00000);
\draw[edge] (1.00000, 2.00000, 3.00000) -- (1.00000, 3.00000, 2.00000);
\draw[edge] (1.00000, 2.00000, 3.00000) -- (2.00000, 1.00000, 3.00000);
\draw[edge] (1.00000, 3.00000, 1.00000) -- (1.00000, 3.00000, 2.00000);
\draw[edge] (1.00000, 3.00000, 1.00000) -- (2.00000, 3.00000, 1.00000);
\draw[edge] (1.00000, 3.00000, 2.00000) -- (2.00000, 3.00000, 1.00000);
\node[vertex] at (3.00000, 2.00000, 1.00000) {};
\node[vertex] at (1.00000, 1.00000, 3.00000) {};
\node[vertex] at (3.00000, 1.00000, 2.00000) {};
\node[vertex] at (3.00000, 1.00000, 1.00000) {};
\node[vertex] at (1.00000, 2.00000, 3.00000) {};
\node[vertex] at (1.00000, 3.00000, 1.00000) {};
\node[vertex] at (1.00000, 3.00000, 2.00000) {};
\node[vertex] at (2.00000, 3.00000, 1.00000) {};
\node[vertex] at (2.00000, 1.00000, 3.00000) {};
\end{tikzpicture}
\qquad
\begin{tikzpicture}%
[x={(-0.707031cm, -0.408259cm)},
y={(0.707183cm, -0.408200cm)},
z={(0.000025cm, 0.816516cm)},
scale=1.000000,
back/.style={loosely dotted, thin},
edge/.style={color=black, thick},
facet/.style={fill=andresblue,fill opacity=0.500000},
vertex/.style={inner sep=1pt,circle,draw=andrespink,fill=andrespink,thick}]
\coordinate (1.00000, 1.00000, 1.00000) at (1.00000, 1.00000, 1.00000);
\coordinate (5.00000, 3.00000, 1.00000) at (5.00000, 3.00000, 1.00000);
\coordinate (5.00000, 1.00000, 3.00000) at (5.00000, 1.00000, 3.00000);
\coordinate (5.00000, 1.00000, 1.00000) at (5.00000, 1.00000, 1.00000);
\coordinate (1.00000, 1.00000, 5.00000) at (1.00000, 1.00000, 5.00000);
\coordinate (3.00000, 5.00000, 1.00000) at (3.00000, 5.00000, 1.00000);
\coordinate (3.00000, 1.00000, 5.00000) at (3.00000, 1.00000, 5.00000);
\coordinate (1.00000, 5.00000, 3.00000) at (1.00000, 5.00000, 3.00000);
\coordinate (1.00000, 5.00000, 1.00000) at (1.00000, 5.00000, 1.00000);
\coordinate (1.00000, 3.00000, 5.00000) at (1.00000, 3.00000, 5.00000);
\draw[edge,back] (1.00000, 1.00000, 1.00000) -- (5.00000, 1.00000, 1.00000);
\draw[edge,back] (1.00000, 1.00000, 1.00000) -- (1.00000, 1.00000, 5.00000);
\draw[edge,back] (1.00000, 1.00000, 1.00000) -- (1.00000, 5.00000, 1.00000);
\node[vertex] at (1.00000, 1.00000, 1.00000) {};
\fill[facet] (1.00000, 3.00000, 5.00000) -- (3.00000, 1.00000, 5.00000) -- (5.00000, 1.00000, 3.00000) -- (5.00000, 3.00000, 1.00000) -- (3.00000, 5.00000, 1.00000) -- (1.00000, 5.00000, 3.00000) -- cycle {};
\fill[facet] (5.00000, 1.00000, 1.00000) -- (5.00000, 3.00000, 1.00000) -- (5.00000, 1.00000, 3.00000) -- cycle {};
\fill[facet] (1.00000, 5.00000, 1.00000) -- (3.00000, 5.00000, 1.00000) -- (1.00000, 5.00000, 3.00000) -- cycle {};
\fill[facet] (1.00000, 3.00000, 5.00000) -- (1.00000, 1.00000, 5.00000) -- (3.00000, 1.00000, 5.00000) -- cycle {};
\draw[edge] (5.00000, 3.00000, 1.00000) -- (5.00000, 1.00000, 3.00000);
\draw[edge] (5.00000, 3.00000, 1.00000) -- (5.00000, 1.00000, 1.00000);
\draw[edge] (5.00000, 3.00000, 1.00000) -- (3.00000, 5.00000, 1.00000);
\draw[edge] (5.00000, 1.00000, 3.00000) -- (5.00000, 1.00000, 1.00000);
\draw[edge] (5.00000, 1.00000, 3.00000) -- (3.00000, 1.00000, 5.00000);
\draw[edge] (1.00000, 1.00000, 5.00000) -- (3.00000, 1.00000, 5.00000);
\draw[edge] (1.00000, 1.00000, 5.00000) -- (1.00000, 3.00000, 5.00000);
\draw[edge] (3.00000, 5.00000, 1.00000) -- (1.00000, 5.00000, 3.00000);
\draw[edge] (3.00000, 5.00000, 1.00000) -- (1.00000, 5.00000, 1.00000);
\draw[edge] (3.00000, 1.00000, 5.00000) -- (1.00000, 3.00000, 5.00000);
\draw[edge] (1.00000, 5.00000, 3.00000) -- (1.00000, 5.00000, 1.00000);
\draw[edge] (1.00000, 5.00000, 3.00000) -- (1.00000, 3.00000, 5.00000);
\node[vertex] at (5.00000, 3.00000, 1.00000) {};
\node[vertex] at (5.00000, 1.00000, 3.00000) {};
\node[vertex] at (5.00000, 1.00000, 1.00000) {};
\node[vertex] at (1.00000, 1.00000, 5.00000) {};
\node[vertex] at (3.00000, 5.00000, 1.00000) {};
\node[vertex] at (3.00000, 1.00000, 5.00000) {};
\node[vertex] at (1.00000, 5.00000, 3.00000) {};
\node[vertex] at (1.00000, 5.00000, 1.00000) {};
\node[vertex] at (1.00000, 3.00000, 5.00000) {};
\end{tikzpicture}
\qquad
\begin{tikzpicture}%
[x={(-0.707031cm, -0.408259cm)},
y={(0.707183cm, -0.408200cm)},
z={(0.000025cm, 0.816516cm)},
scale=1.000000,
back/.style={loosely dotted, thin},
edge/.style={color=black, thick},
facet/.style={fill=andresblue,fill opacity=0.500000},
vertex/.style={inner sep=1pt,circle,draw=andrespink,fill=andrespink,thick}]
\coordinate (1.00000, 1.00000, 1.00000) at (1.00000, 1.00000, 1.00000);
\coordinate (4.00000, 3.00000, 2.00000) at (4.00000, 3.00000, 2.00000);
\coordinate (4.00000, 3.00000, 1.00000) at (4.00000, 3.00000, 1.00000);
\coordinate (1.00000, 1.00000, 4.00000) at (1.00000, 1.00000, 4.00000);
\coordinate (4.00000, 2.00000, 3.00000) at (4.00000, 2.00000, 3.00000);
\coordinate (4.00000, 1.00000, 3.00000) at (4.00000, 1.00000, 3.00000);
\coordinate (4.00000, 1.00000, 1.00000) at (4.00000, 1.00000, 1.00000);
\coordinate (3.00000, 4.00000, 2.00000) at (3.00000, 4.00000, 2.00000);
\coordinate (3.00000, 4.00000, 1.00000) at (3.00000, 4.00000, 1.00000);
\coordinate (3.00000, 2.00000, 4.00000) at (3.00000, 2.00000, 4.00000);
\coordinate (3.00000, 1.00000, 4.00000) at (3.00000, 1.00000, 4.00000);
\coordinate (1.00000, 3.00000, 4.00000) at (1.00000, 3.00000, 4.00000);
\coordinate (1.00000, 4.00000, 1.00000) at (1.00000, 4.00000, 1.00000);
\coordinate (2.00000, 4.00000, 3.00000) at (2.00000, 4.00000, 3.00000);
\coordinate (1.00000, 4.00000, 3.00000) at (1.00000, 4.00000, 3.00000);
\coordinate (2.00000, 3.00000, 4.00000) at (2.00000, 3.00000, 4.00000);
\draw[edge,back] (1.00000, 1.00000, 1.00000) -- (1.00000, 1.00000, 4.00000);
\draw[edge,back] (1.00000, 1.00000, 1.00000) -- (4.00000, 1.00000, 1.00000);
\draw[edge,back] (1.00000, 1.00000, 1.00000) -- (1.00000, 4.00000, 1.00000);
\node[vertex] at (1.00000, 1.00000, 1.00000) {};
\fill[facet] (4.00000, 1.00000, 1.00000) -- (4.00000, 3.00000, 1.00000) -- (4.00000, 3.00000, 2.00000) -- (4.00000, 2.00000, 3.00000) -- (4.00000, 1.00000, 3.00000) -- cycle {};
\fill[facet] (2.00000, 3.00000, 4.00000) -- (3.00000, 2.00000, 4.00000) -- (4.00000, 2.00000, 3.00000) -- (4.00000, 3.00000, 2.00000) -- (3.00000, 4.00000, 2.00000) -- (2.00000, 4.00000, 3.00000) -- cycle {};
\fill[facet] (3.00000, 4.00000, 1.00000) -- (4.00000, 3.00000, 1.00000) -- (4.00000, 3.00000, 2.00000) -- (3.00000, 4.00000, 2.00000) -- cycle {};
\fill[facet] (3.00000, 1.00000, 4.00000) -- (4.00000, 1.00000, 3.00000) -- (4.00000, 2.00000, 3.00000) -- (3.00000, 2.00000, 4.00000) -- cycle {};
\fill[facet] (2.00000, 3.00000, 4.00000) -- (3.00000, 2.00000, 4.00000) -- (3.00000, 1.00000, 4.00000) -- (1.00000, 1.00000, 4.00000) -- (1.00000, 3.00000, 4.00000) -- cycle {};
\fill[facet] (2.00000, 4.00000, 3.00000) -- (2.00000, 3.00000, 4.00000) -- (1.00000, 3.00000, 4.00000) -- (1.00000, 4.00000, 3.00000) -- cycle {};
\fill[facet] (1.00000, 4.00000, 3.00000) -- (1.00000, 4.00000, 1.00000) -- (3.00000, 4.00000, 1.00000) -- (3.00000, 4.00000, 2.00000) -- (2.00000, 4.00000, 3.00000) -- cycle {};
\draw[edge] (4.00000, 3.00000, 2.00000) -- (4.00000, 3.00000, 1.00000);
\draw[edge] (4.00000, 3.00000, 2.00000) -- (4.00000, 2.00000, 3.00000);
\draw[edge] (4.00000, 3.00000, 2.00000) -- (3.00000, 4.00000, 2.00000);
\draw[edge] (4.00000, 3.00000, 1.00000) -- (4.00000, 1.00000, 1.00000);
\draw[edge] (4.00000, 3.00000, 1.00000) -- (3.00000, 4.00000, 1.00000);
\draw[edge] (1.00000, 1.00000, 4.00000) -- (3.00000, 1.00000, 4.00000);
\draw[edge] (1.00000, 1.00000, 4.00000) -- (1.00000, 3.00000, 4.00000);
\draw[edge] (4.00000, 2.00000, 3.00000) -- (4.00000, 1.00000, 3.00000);
\draw[edge] (4.00000, 2.00000, 3.00000) -- (3.00000, 2.00000, 4.00000);
\draw[edge] (4.00000, 1.00000, 3.00000) -- (4.00000, 1.00000, 1.00000);
\draw[edge] (4.00000, 1.00000, 3.00000) -- (3.00000, 1.00000, 4.00000);
\draw[edge] (3.00000, 4.00000, 2.00000) -- (3.00000, 4.00000, 1.00000);
\draw[edge] (3.00000, 4.00000, 2.00000) -- (2.00000, 4.00000, 3.00000);
\draw[edge] (3.00000, 4.00000, 1.00000) -- (1.00000, 4.00000, 1.00000);
\draw[edge] (3.00000, 2.00000, 4.00000) -- (3.00000, 1.00000, 4.00000);
\draw[edge] (3.00000, 2.00000, 4.00000) -- (2.00000, 3.00000, 4.00000);
\draw[edge] (1.00000, 3.00000, 4.00000) -- (1.00000, 4.00000, 3.00000);
\draw[edge] (1.00000, 3.00000, 4.00000) -- (2.00000, 3.00000, 4.00000);
\draw[edge] (1.00000, 4.00000, 1.00000) -- (1.00000, 4.00000, 3.00000);
\draw[edge] (2.00000, 4.00000, 3.00000) -- (1.00000, 4.00000, 3.00000);
\draw[edge] (2.00000, 4.00000, 3.00000) -- (2.00000, 3.00000, 4.00000);
\node[vertex] at (4.00000, 3.00000, 2.00000) {};
\node[vertex] at (4.00000, 3.00000, 1.00000) {};
\node[vertex] at (1.00000, 1.00000, 4.00000) {};
\node[vertex] at (4.00000, 2.00000, 3.00000) {};
\node[vertex] at (4.00000, 1.00000, 3.00000) {};
\node[vertex] at (4.00000, 1.00000, 1.00000) {};
\node[vertex] at (3.00000, 4.00000, 2.00000) {};
\node[vertex] at (3.00000, 4.00000, 1.00000) {};
\node[vertex] at (3.00000, 2.00000, 4.00000) {};
\node[vertex] at (3.00000, 1.00000, 4.00000) {};
\node[vertex] at (1.00000, 3.00000, 4.00000) {};
\node[vertex] at (1.00000, 4.00000, 1.00000) {};
\node[vertex] at (2.00000, 4.00000, 3.00000) {};
\node[vertex] at (1.00000, 4.00000, 3.00000) {};
\node[vertex] at (2.00000, 3.00000, 4.00000) {};
\end{tikzpicture}
\qquad
\begin{tikzpicture}%
[x={(-0.707031cm, -0.408259cm)},
y={(0.707183cm, -0.408200cm)},
z={(0.000025cm, 0.816516cm)},
scale=1.000000,
back/.style={loosely dotted, thin},
edge/.style={color=black, thick},
facet/.style={fill=andresblue,fill opacity=0.500000},
vertex/.style={inner sep=1pt,circle,draw=andrespink,fill=andrespink,thick}]
\coordinate (1.00000, 1.00000, 1.00000) at (1.00000, 1.00000, 1.00000);
\coordinate (6.00000, 4.00000, 2.00000) at (6.00000, 4.00000, 2.00000);
\coordinate (6.00000, 4.00000, 1.00000) at (6.00000, 4.00000, 1.00000);
\coordinate (6.00000, 2.00000, 4.00000) at (6.00000, 2.00000, 4.00000);
\coordinate (6.00000, 1.00000, 4.00000) at (6.00000, 1.00000, 4.00000);
\coordinate (1.00000, 1.00000, 6.00000) at (1.00000, 1.00000, 6.00000);
\coordinate (6.00000, 1.00000, 1.00000) at (6.00000, 1.00000, 1.00000);
\coordinate (4.00000, 6.00000, 2.00000) at (4.00000, 6.00000, 2.00000);
\coordinate (4.00000, 6.00000, 1.00000) at (4.00000, 6.00000, 1.00000);
\coordinate (4.00000, 2.00000, 6.00000) at (4.00000, 2.00000, 6.00000);
\coordinate (4.00000, 1.00000, 6.00000) at (4.00000, 1.00000, 6.00000);
\coordinate (2.00000, 6.00000, 4.00000) at (2.00000, 6.00000, 4.00000);
\coordinate (2.00000, 4.00000, 6.00000) at (2.00000, 4.00000, 6.00000);
\coordinate (1.00000, 6.00000, 4.00000) at (1.00000, 6.00000, 4.00000);
\coordinate (1.00000, 6.00000, 1.00000) at (1.00000, 6.00000, 1.00000);
\coordinate (1.00000, 4.00000, 6.00000) at (1.00000, 4.00000, 6.00000);
\draw[edge,back] (1.00000, 1.00000, 1.00000) -- (1.00000, 1.00000, 6.00000);
\draw[edge,back] (1.00000, 1.00000, 1.00000) -- (6.00000, 1.00000, 1.00000);
\draw[edge,back] (1.00000, 1.00000, 1.00000) -- (1.00000, 6.00000, 1.00000);
\node[vertex] at (1.00000, 1.00000, 1.00000) {};
\fill[facet] (6.00000, 1.00000, 1.00000) -- (6.00000, 4.00000, 1.00000) -- (6.00000, 4.00000, 2.00000) -- (6.00000, 2.00000, 4.00000) -- (6.00000, 1.00000, 4.00000) -- cycle {};
\fill[facet] (2.00000, 4.00000, 6.00000) -- (4.00000, 2.00000, 6.00000) -- (6.00000, 2.00000, 4.00000) -- (6.00000, 4.00000, 2.00000) -- (4.00000, 6.00000, 2.00000) -- (2.00000, 6.00000, 4.00000) -- cycle {};
\fill[facet] (4.00000, 6.00000, 1.00000) -- (6.00000, 4.00000, 1.00000) -- (6.00000, 4.00000, 2.00000) -- (4.00000, 6.00000, 2.00000) -- cycle {};
\fill[facet] (1.00000, 6.00000, 1.00000) -- (4.00000, 6.00000, 1.00000) -- (4.00000, 6.00000, 2.00000) -- (2.00000, 6.00000, 4.00000) -- (1.00000, 6.00000, 4.00000) -- cycle {};
\fill[facet] (1.00000, 4.00000, 6.00000) -- (2.00000, 4.00000, 6.00000) -- (2.00000, 6.00000, 4.00000) -- (1.00000, 6.00000, 4.00000) -- cycle {};
\fill[facet] (1.00000, 4.00000, 6.00000) -- (1.00000, 1.00000, 6.00000) -- (4.00000, 1.00000, 6.00000) -- (4.00000, 2.00000, 6.00000) -- (2.00000, 4.00000, 6.00000) -- cycle {};
\fill[facet] (4.00000, 1.00000, 6.00000) -- (6.00000, 1.00000, 4.00000) -- (6.00000, 2.00000, 4.00000) -- (4.00000, 2.00000, 6.00000) -- cycle {};
\draw[edge] (6.00000, 4.00000, 2.00000) -- (6.00000, 4.00000, 1.00000);
\draw[edge] (6.00000, 4.00000, 2.00000) -- (6.00000, 2.00000, 4.00000);
\draw[edge] (6.00000, 4.00000, 2.00000) -- (4.00000, 6.00000, 2.00000);
\draw[edge] (6.00000, 4.00000, 1.00000) -- (6.00000, 1.00000, 1.00000);
\draw[edge] (6.00000, 4.00000, 1.00000) -- (4.00000, 6.00000, 1.00000);
\draw[edge] (6.00000, 2.00000, 4.00000) -- (6.00000, 1.00000, 4.00000);
\draw[edge] (6.00000, 2.00000, 4.00000) -- (4.00000, 2.00000, 6.00000);
\draw[edge] (6.00000, 1.00000, 4.00000) -- (6.00000, 1.00000, 1.00000);
\draw[edge] (6.00000, 1.00000, 4.00000) -- (4.00000, 1.00000, 6.00000);
\draw[edge] (1.00000, 1.00000, 6.00000) -- (4.00000, 1.00000, 6.00000);
\draw[edge] (1.00000, 1.00000, 6.00000) -- (1.00000, 4.00000, 6.00000);
\draw[edge] (4.00000, 6.00000, 2.00000) -- (4.00000, 6.00000, 1.00000);
\draw[edge] (4.00000, 6.00000, 2.00000) -- (2.00000, 6.00000, 4.00000);
\draw[edge] (4.00000, 6.00000, 1.00000) -- (1.00000, 6.00000, 1.00000);
\draw[edge] (4.00000, 2.00000, 6.00000) -- (4.00000, 1.00000, 6.00000);
\draw[edge] (4.00000, 2.00000, 6.00000) -- (2.00000, 4.00000, 6.00000);
\draw[edge] (2.00000, 6.00000, 4.00000) -- (2.00000, 4.00000, 6.00000);
\draw[edge] (2.00000, 6.00000, 4.00000) -- (1.00000, 6.00000, 4.00000);
\draw[edge] (2.00000, 4.00000, 6.00000) -- (1.00000, 4.00000, 6.00000);
\draw[edge] (1.00000, 6.00000, 4.00000) -- (1.00000, 6.00000, 1.00000);
\draw[edge] (1.00000, 6.00000, 4.00000) -- (1.00000, 4.00000, 6.00000);
\node[vertex] at (6.00000, 4.00000, 2.00000) {};
\node[vertex] at (6.00000, 4.00000, 1.00000) {};
\node[vertex] at (6.00000, 2.00000, 4.00000) {};
\node[vertex] at (6.00000, 1.00000, 4.00000) {};
\node[vertex] at (1.00000, 1.00000, 6.00000) {};
\node[vertex] at (6.00000, 1.00000, 1.00000) {};
\node[vertex] at (4.00000, 6.00000, 2.00000) {};
\node[vertex] at (4.00000, 6.00000, 1.00000) {};
\node[vertex] at (4.00000, 2.00000, 6.00000) {};
\node[vertex] at (4.00000, 1.00000, 6.00000) {};
\node[vertex] at (2.00000, 6.00000, 4.00000) {};
\node[vertex] at (2.00000, 4.00000, 6.00000) {};
\node[vertex] at (1.00000, 6.00000, 4.00000) {};
\node[vertex] at (1.00000, 6.00000, 1.00000) {};
\node[vertex] at (1.00000, 4.00000, 6.00000) {};
\end{tikzpicture}
}
\caption{The $\mathbf{x}$-parking function polytopes, from left to right: $\mathcal{X}_3(1,1)$, $\mathcal{X}_3(1,2)$, $\mathcal{X}_3(2,1)$, and $\mathcal{X}_3(2,2)$. Observe that $\mathcal{X}_3(1,2)$ is a dilate of $\mathcal{X}_3(1,1)$. Note that when $a>1$, there are new facets that do not appear when $a=1$.}
\label{fig:x-pfs}
\end{figure}
\subsection{Face structure}
In this subsection we describe the face structure of the \emph{$\mathbf{x}$-parking function polytope} $\mathcal{X}_{n}(a,b)$.
From Theorem \ref{thm:number_x-parking}, we obtain an upper bound for the number of $\mathbf{x}$-parking functions which arise as vertices of $\mathcal{X}_{n}(a,b)$ since it is well-known that if a polytope can be written as the convex hull of a finite set of points, then the set contains all the vertices of the polytope (Proposition 2.2, \cite{Ziegler}).
We now give a vertex description of $\mathcal{X}_{n}(a,b)$.
\begin{proposition}\label{prop:abn_vertex}
The vertices of $\mathcal{X}_{n}(a,b)$ are all permutations of
\[ (\underbrace{1, \ldots, 1}_k, \underbrace{a+kb, a+(k+1)b, \ldots, a+(n-2)b, a+(n-1)b}_{n-k}),\] for all $0 \leq k \leq n$.
\end{proposition}
\begin{proof}
Consider an $\mathbf{x}$-parking function $x = (x_1, \ldots, x_n)$ for which there is a term $x_i > 1$ such that $(x_1, \ldots, x_{i-1}, x_{i} + 1, x_{i+1}, \ldots, x_n)$ is also an $\mathbf{x}$-parking function.
Then $x$ is a convex combination of two other $\mathbf{x}$-parking functions.
Second, if \[ x= (\underbrace{1, \ldots, 1}_k, \underbrace{a+kb, a+(k+1)b, \ldots, a+(n-2)b, a+(n-1)b}_{n-k})\] is a convex combination of $y,z \in \mathcal{X}_{n}(a,b)$, then $x=y=z$, as the first $k$ coordinates of $x$ are minimal at 1 and the last $(n-k)$ coordinates are maximized due to the condition on the nondecreasing rearrangement of $\mathbf{x}$-parking functions.
Thus, $x$ is a vertex of $\mathcal{X}_{n}(a,b)$.
\end{proof}
Next, we enumerate the vertices of $\mathcal{X}_{n}(a,b)$.
In the case that $a=1$, we have the same number of vertices as in the case of the classical parking function polytope $\mathsf{PF}_n$, as $b$ is a parameter that increases the lengths of the edges of the polytope.
However, when $a >1$, new vertices and edges come into play.
\begin{proposition}\label{thm:num_vertices}
The number of vertices of $\mathcal{X}_{n}(a,b)$ is
\[ \begin{cases}
n!\left( \frac{1}{1!} + \cdots + \frac{1}{n!} \right) & \text{if } a = 1\\
n!\left( \frac{1}{0!} + \frac{1}{1!} + \cdots + \frac{1}{n!} \right) & \text{if } a > 1.
\end{cases}\]
\end{proposition}
\begin{proof}
The vertices are the permutations of the following:
\begin{align*}
v_0 &= (\underbrace{1,\ldots,1}_n),\\
v_1 &= (\underbrace{1,\ldots,1}_{n-1}, a+(n-1)b),\\
v_2 &= (\underbrace{1,\ldots,1}_{n-2}, a+(n-2)b, a+(n-1)b),\\
&\vdots\\
v_{n-1} &= (1,\underbrace{a+b,\ldots, a+(n-1)b}_{n-1}),\\
v_{n} &= (a,\underbrace{a+b,\ldots, a+(n-1)b}_{n-1}).
\end{align*}
Observe that there is one permutation of $v_0$, $n$ permutations of $v_1$, $n(n-1)$ permutations of $v_2$, and in general, $n(n-1)\cdots(n-k+1)$ permutations of $v_k$.
If $a>1$, then the vertices $v_{n-1}$ and $v_n$ are distinct, so we count the contribution of both.
However, if $a=1$, then $v_{n-1}=v_n$, and we only count one.
For $a>1$, this gives
\begin{align*}
&1 + n+ n(n-1) + \cdots + \left(n(n-1)\cdots 2\right) + \left(n(n-1)\cdots 2\cdot1\right)\\ &= n!\left(\frac{1}{n!} + \frac{1}{(n-1)!} + \frac{1}{(n-2)!} + \cdots + \frac{1}{1!} + \frac{1}{0!}\right),
\end{align*}
and for $a=1$,
\begin{align*}
&1 + n+ n(n-1) + \cdots + \left(n(n-1)\cdots 2\right)\\ &= n!\left(\frac{1}{n!} + \frac{1}{(n-1)!} + \frac{1}{(n-2)!} + \cdots + \frac{1}{1!}\right).
\end{align*}
\end{proof}
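For small $n$ this count can be verified directly by enumerating the distinct permutations of $v_0, \ldots, v_n$. The following Python sketch (function names are ours, purely illustrative) does so and compares against the formula:

```python
from itertools import permutations
from math import factorial

def num_vertices(n, a, b):
    """Count the distinct permutations of the candidate vertices v_0, ..., v_n."""
    verts = set()
    for k in range(n + 1):  # k = number of leading 1s
        v = tuple([1] * k + [a + j * b for j in range(k, n)])
        verts.update(permutations(v))
    return len(verts)

def predicted_num_vertices(n, a):
    """n!(1/1! + ... + 1/n!) if a = 1; the extra 1/0! term appears when a > 1."""
    start = 1 if a == 1 else 0
    return sum(factorial(n) // factorial(m) for m in range(start, n + 1))
```

Note that the count is independent of $b$, consistent with the proposition.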
We continue by presenting an inequality description of $\mathcal{X}_{n}(a,b)$.
\begin{proposition}\label{prop:inequality}
The $\mathbf{x}$-parking function polytope $\mathcal{X}_{n}(a,b)$ is given by the following minimal inequality description:
\begin{align*}
\intertext{\emph{For all} $1 \leq i \leq n$,}
1 &\leq x_i \leq (n-1)b+a, \\
\intertext{\emph{for all} $1 \leq i < j \leq n$,}
x_i+x_j &\leq \left((n-2)b+a\right) + \left( (n-1)b+a\right),\\
\intertext{\emph{for all} $1 \leq i < j < k \leq n$,}
x_i+x_j+x_k &\leq \left((n-3)b+a\right) + \left((n-2)b+a\right) + \left( (n-1)b+a\right),\\
&\vdots\\
\intertext{\emph{for all} $1 \leq i_1 < i_2 < \cdots < i_{n-2} \leq n$,}
x_{i_1}+ x_{i_2}+ \cdots + x_{i_{n-2}} &\leq (2b+a) + \cdots + \left( (n-2)b+a\right) + \left((n-1)b+a\right),\\
\intertext{\emph{if $a > 1$, for all} $1 \leq i_1 < i_2 < \cdots < i_{n-1} \leq n$,}
x_{i_1}+ x_{i_2}+ \cdots + x_{i_{n-1}} &\leq (b+a) + \cdots + \left( (n-2)b+a\right) + \left((n-1)b+a\right),\\
\intertext{\emph{and (regardless of $a$),}}
x_{1}+ x_{2}+ \cdots + x_{n} &\leq a + \cdots + \left( (n-2)b+a\right) + \left((n-1)b+a\right).
\end{align*}
\end{proposition}
\begin{proof}
We use the vertex description given in Proposition \ref{prop:abn_vertex}, which states that the vertices of $\mathcal{X}_{n}(a,b)$ are all permutations of
\[ (\underbrace{1, \ldots, 1}_k, \underbrace{a+kb, a+(k+1)b, \ldots, a+(n-2)b, a+(n-1)b}_{n-k})\] for all $0 \leq k \leq n$.
First consider that since $a,b \geq 1$, all coordinates in a vertex of $\mathcal{X}_{n}(a,b)$ are at least $1$, i.e., $1 \leq x_i$ for all $1 \leq i \leq n$.
Now, we turn our attention to inequalities that are solely upper bounds, so we can assume that $k=0$ and only consider the largest possible coordinates.
The largest a single coordinate could be is $a+(n-1)b$, so $x_i \leq a+(n-1)b$ for all $1\leq i \leq n$.
Next, summing the largest two coordinates is at most $\left((n-2)b+a\right) + \left( (n-1)b+a\right)$, so \[x_i + x_j \leq \left((n-2)b+a\right) + \left( (n-1)b+a\right), \text{ for all } 1 \leq i < j \leq n.\]
Repeating the same process with summing the largest possible $3, \ldots, n$ variables, completes the inequality description.
Note that in the case that $a=1$, the inequality that bounds the sum of any $n-1$ variables is not needed in the minimal inequality description.
We justify this as follows.
Given that $a=1$, the inequality is that \[x_{i_1}+ x_{i_2}+ \cdots + x_{i_{n-1}} \leq \frac{n(n-1)}{2} b +n-1 \text{ for all } 1 \leq i_1 < i_2 < \cdots < i_{n-1} \leq n.\]
Also, we always need the final inequality $x_{1}+ x_{2}+ \cdots + x_{n} \leq \frac{n(n-1)}{2} b +n$.
If we sum the $n-1$ largest terms of \[ (a, a+b, a+2b, \ldots, a+(n-2)b, a+(n-1)b),\] the only value that is not used is $a$, which in this case equals $1$.
Hence, the final inequality \[x_{1}+ x_{2}+ \cdots + x_{n} \leq \frac{n(n-1)}{2} b +n\] is sufficient, because removing one coordinate from the sum on its left hand side subtracts $a=1$ from the right hand side, exactly yielding the inequality \[x_{i_1}+ x_{i_2}+ \cdots + x_{i_{n-1}} \leq \frac{n(n-1)}{2} b +n-1.\]
Note that this only happens in the case that $a=1$ because if $a>1$, we are able to get a tighter inequality when we subtract $a >1$ from the right hand side as before.
We now show that the inequality description is minimal.
Observe that by construction for each of these defining inequalities, there exists an extremal point of the polytope which gives equality.
It is clear that the inequalities $1\leq x_i$ in each coordinate are necessary, as they are the only lower bounds on the coordinates.
We only need to consider the minimality of the set of inequalities which are upper bounds on partial sums of the coordinates.
A general inequality on $1 \leq k \leq n$ variables says
\begin{align*}
x_{i_1} + x_{i_2} + \cdots + x_{i_k} &\leq b\left(nk - \frac{k^2+k}{2}\right) + ka
\end{align*}
for some variables $1 \leq i_1 < i_2 < \cdots < i_k \leq n$.
Observe that for each of the inequalities on $k$ variables (and of course excluding the case of $a=1$ and $k=n-1$), it is strictly stronger on at least one point than all inequalities on $1, \ldots, k-1$ variables.
Explicitly, a general inequality on $k-1$ variables (for $2 \leq k \leq n$) says
\begin{align}
x_{i_1} + x_{i_2} + \cdots + x_{i_{k-1}} &\leq b\left(n(k-1) - \frac{(k-1)^2+k-1}{2}\right) + (k-1)a\label{eq:proof_k-1}\\
&= b\left(nk - \frac{k^2+k}{2} +(k-n)\right) + ka -a\nonumber\\
&= b\left(nk - \frac{k^2+k}{2}\right) + ka +[b(k-n) -a].\nonumber
\end{align}
Note that adding one coordinate $x_k$ to the left hand side of (\ref{eq:proof_k-1}) and $a+(n-1)b$ to the right hand side gives
\begin{align*} x_{i_1} + x_{i_2} + \cdots + x_{i_k} &\leq b\left(nk - \frac{k^2+k}{2}\right) + ka +[b(k-n) -a] + a+(n-1)b\\
&= b\left(nk - \frac{k^2+k}{2}\right) + ka +b(k-1),
\end{align*}
which is a worse bound for all $k > 1$ due to the $b(k-1)$ term.
As we constructed this inequality description in increasing order on the number of variables, each one refining the previous, the inequality description is an irredundant system of inequalities and is therefore minimal.
\end{proof}
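The inequality description can be checked against the vertex description for small parameters: every vertex must satisfy each inequality, and each upper-bound inequality must be attained with equality at some vertex. A minimal Python sketch of such a check (illustrative, not part of the proof):

```python
from itertools import combinations, permutations

def vertices(n, a, b):
    """All vertices of X_n(a, b), per the vertex description above."""
    verts = set()
    for k in range(n + 1):
        verts.update(permutations(tuple([1] * k + [a + j * b for j in range(k, n)])))
    return verts

def check_inequality_description(n, a, b):
    """Every vertex satisfies each inequality; each upper bound is tight somewhere."""
    verts = vertices(n, a, b)

    def rhs(k):
        # sum of the k largest attainable coordinates a+(n-k)b, ..., a+(n-1)b
        return sum(a + j * b for j in range(n - k, n))

    # in the minimal description, the (n-1)-variable inequalities appear only for a > 1
    sizes = [k for k in range(1, n + 1) if not (a == 1 and k == n - 1)]
    for k in sizes:
        for S in combinations(range(n), k):
            assert all(sum(v[i] for i in S) <= rhs(k) for v in verts)
            assert any(sum(v[i] for i in S) == rhs(k) for v in verts)
    assert all(min(v) >= 1 for v in verts)  # lower bounds 1 <= x_i
    return True
```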
Since we have a minimal inequality description of $\mathcal{X}_{n}(a,b)$, by enumerating the number of defining inequalities we obtain a count for the number of facets of $\mathcal{X}_{n}(a,b)$.
\begin{corollary}\label{thm:x-facets}
The number of facets of $\mathcal{X}_{n}(a,b)$ is $2^n-1$ if $a=1$ and $2^n-1+n$ if $a > 1$.
\end{corollary}
\begin{proof}
It suffices to count the number of minimal defining inequalities given in Proposition \ref{prop:inequality}. Note that there are $\binom{n}{k}$ different inequalities with $1 < k < n-1$ or $k=n$ distinct variables. We have $2\binom{n}{1}$ many inequalities with one variable since we have $1\leq x_i$ and $x_i \leq (n-1)b + a$.
If $a=1$, there are no inequalities in $n-1$ variables. Then the number of inequalities is
\[ \binom{n}{1} + \binom{n}{1} + \binom{n}{2} + \cdots + \binom{n}{n-2} + \binom{n}{n} = 2^n-1.\]
If $a > 1$, then there are $\binom{n}{n-1}$ inequalities in $n-1$ variables, so the number of inequalities is
\[ \binom{n}{1} + \left[\binom{n}{1} + \binom{n}{2} + \cdots + \binom{n}{n-2} + \binom{n}{n-1}+ \binom{n}{n} \right] = n + 2^n-1.\]
\end{proof}
Next we study the edges of $\mathcal{X}_{n}(a,b)$.
\begin{definition}
Let $x$ be a vertex of $\mathcal{X}_{n}(a,b)$.
Then it is a permutation of
\[(1, \ldots, 1, a+kb, a+(k+1)b, \ldots, a+(n-2)b, a+(n-1)b),\]
for some unique $1 \leq k \leq n$ if $a=1$ or $0 \leq k \leq n$ if $a>1$. We say that $x$ is on \textit{layer} $n-k$. For $x = (1, \ldots, 1)$, we say that it is on layer 0.
\end{definition}
\begin{lemma}
If $v,u$ are two vertices of $\mathcal{X}_{n}(a,b)$ such that $vu$ is an edge, then the layers of $v$ and $u$ differ by at most 1.
\end{lemma}
\begin{proof}
The proof follows almost identically to that of Proposition 2.1 in \cite{AW} in the case that $a=1$.
In the case that $a >1$, there is just one more layer to address.
Let $c \cdot x = c_1x_1 + \cdots + c_nx_n$ be the dot product of vectors $c,x \in \mathbb{R}^n$.
If $vu$ is an edge, then there exists $c \in \mathbb{R}^n$ such that $c \cdot v = c \cdot u > c \cdot w$ for any vertex $w$ of $\mathcal{X}_{n}(a,b)$ such that $w \neq v,u$.
Note that $\mathcal{X}_{n}(a,b)$ is invariant under permutation of the coordinates, so without loss of generality, assume that $c_1 \leq c_2 \leq \cdots \leq c_n$.
Suppose for the sake of contradiction that $v,u$ are $t \geq 2$ layers apart.
Without loss of generality, suppose that $u$ is below $v$.
Hence, $v$ is (up to permutation)
$$(1, \ldots, 1, a+kb, a+(k+1)b, \ldots, a+(n-2)b, a+(n-1)b),$$
and $u$ is (up to permutation)
$$(1, \ldots, 1, a+(k+t)b, a+(k+t+1)b, \ldots, a+(n-2)b, a+(n-1)b),$$
where $1 \leq k < k+2 \leq k+t \leq n$ if $a=1$ and $0 \leq k < k+2 \leq k+t \leq n$ if $a > 1$.
Specifically, since $v,u$ uniquely maximizes $c \cdot x$, then by the rearrangement inequality, $v$ and $u$ are exactly
$$(1, \ldots, 1, a+kb, a+(k+1)b, \ldots, a+(n-2)b, a+(n-1)b),$$ and
$$(1, \ldots, 1, a+(k+t)b, a+(k+t+1)b, \ldots, a+(n-2)b, a+(n-1)b),$$
respectively (not just up to permutation), and $c_{k-1} < c_k < \cdots < c_n$.
Consider two cases.
First, if $c_{k+t-1} \geq 0$, then for $w = (1, \ldots, 1, a+(k+t-1)b, a+(k+t)b, \ldots, a+(n-2)b, a+(n-1)b) \in \mathcal{X}_{n}(a,b)$ which is not equal to $v,u$, we have that $c \cdot w \geq c \cdot u$, a contradiction. Otherwise, if $c_{k+t-1}<0$, we have $c_k < \cdots < c_{k+t-1}< 0$, so $c \cdot v - c \cdot u = c_k(a+kb - 1) + c_{k+1}(a+(k+1)b -1) + \cdots + c_{k+t-1}(a+(k+ t -1)b -1) <0$. This implies $c \cdot v < c \cdot u$, a contradiction.
\end{proof}
\begin{remark}
For fixed $n$, if $a=1$, then regardless of the value for $b$, there are the same number of layers.
If $a>1$, then regardless of the value for $b$, there are the same number of layers, which is one more than in the case where $a=1$.
\end{remark}
\begin{proposition}\label{2.2}
For each vertex $v$ of $\mathcal{X}_{n}(a,b)$, there are exactly $n$ edges of $\mathcal{X}_{n}(a,b)$ with $v$ as one of the vertices.
That is, $\mathcal{X}_{n}(a,b)$ is a simple polytope.
\end{proposition}
\begin{proof}[Proof sketch]
In the case that $a=1$, the proof is identical to the proof of Proposition 2.2 in \cite{AW}, where all instances of coordinates $(1, \ldots, 1, k+1, k+2, \ldots, n-1, n)$ are replaced by $(1, \ldots, 1, 1+kb, 1+(k+1)b, \ldots, 1+(n-2)b, 1+(n-1)b)$. The same proof works because $a=1$, so no new layers are introduced.
In the case that $a >1$, there is one more layer to address, as $(1, a+b, a+2b, \ldots, a+(n-2)b, a+(n-1)b) \neq (a, a+b, a+2b, \ldots, a+(n-2)b, a+(n-1)b)$ are distinct now. By modifying the proof from the case of $a=1$, one only needs to check that layers $n-2$, $n-1$ (the new layer, meaning the layer that causes new facets to appear), and $n$ have vertices with the correct number of edges.
\end{proof}
\begin{proposition}
The number of edges of $\mathcal{X}_{n}(a,b)$ is $\frac{n}{2}\Ver(\mathcal{X}_{n}(a,b))$, where $\Ver(\mathcal{X}_{n}(a,b))$ denotes the number of vertices of $\mathcal{X}_{n}(a,b)$.
\end{proposition}
\begin{proof}
By Proposition \ref{2.2}, the graph of $\mathcal{X}_{n}(a,b)$ is an $n$-regular graph. This gives the desired formula, where the number of vertices is given by Proposition \ref{thm:num_vertices}.
\end{proof}
Up to this point we have the number of $0$-dimensional faces (vertices), the $1$-dimensional faces (edges), and $(n-1)$-dimensional facets (facets).
In what comes next, we study the faces of higher dimension.
\begin{proposition}\label{thm:faces}
Let $f_{k}$ be the number of $k$-dimensional faces of $\mathcal{X}_{n}(a,b)$ for $k \in \{ 0, \ldots, n\}$. Then if $a=1$,
\[f_{k} = \sum_{m=0,\ m \neq 1}^{n-k} \binom{n}{m} \cdot (n-k-m)! \cdot S(n-m+1, n-k-m+1),\] and if $a > 1$,
\[f_{k} = \sum_{m=0}^{n-k} \binom{n}{m} \cdot (n-k-m)! \cdot S(n-m+1, n-k-m+1),\] where $S(n,k)$ are the Stirling numbers of the second kind.
\end{proposition}
\begin{proof}[Proof sketch]
In the case that $a=1$, the proof is identical to the proof of Theorem 3.1 in \cite{AW}, where all instances of coordinates $(1, \ldots, 1, k+1, k+2, \ldots, n-1, n)$ are replaced by $(1, \ldots, 1, 1+kb, 1+(k+1)b, \ldots, 1+(n-2)b, 1+(n-1)b)$. The same proof works because $a=1$, so no new layers are introduced, and thus no new faces are introduced.
In the case that $a >1$, there is one more layer to address, as $(1, a+b, a+2b, \ldots, a+(n-2)b, a+(n-1)b) \neq (a, a+b, a+2b, \ldots, a+(n-2)b, a+(n-1)b)$ are distinct now. By modifying the proof from the case of $a=1$, the restriction given in Lemma 3.1 in \cite{AW} which causes $m = 1$ to be excluded from the sum in the proof of Theorem 3.1 in \cite{AW} is removed due to the additional layer.
\end{proof}
The faces of a convex polytope $P$ form a lattice called its \emph{face lattice}, denoted by $\mathcal{F}(P)$, where the partial ordering is given by set inclusion of the faces.
Two polytopes whose face lattices are isomorphic (as unlabeled partially ordered sets) are \emph{combinatorially equivalent}.
We continue with a characterization of which $\mathcal{X}_{n}(a,b)$ are combinatorially equivalent.
\begin{proposition}
For fixed $n$ and for $a=1$, the $\mathcal{X}_{n}(1,b)$ are combinatorially equivalent for all $b \geq 1$.
Additionally, for fixed $n$, the $\mathcal{X}_{n}(a,b)$ are combinatorially equivalent for all $a > 1$ and $b \geq 1$.
\end{proposition}
\begin{proof}
By Proposition \ref{thm:faces}, for each $b$, the face lattice of $\mathcal{X}_{n}(1,b)$ has the same number of elements, and the same number of elements at each level corresponding to the dimension of the face.
For a face lattice, the join of two elements in the same layer will be in the layer above it, and the meet of two elements in the same layer will be in the layer below it, as face inclusions pass through faces of one dimension more/less (face inclusions of faces of the same dimension result in equality).
Thus, it only remains to construct the isomorphism.
Define $\varphi: \mathcal{F}(\mathcal{X}_{n}(1,b_1)) \to \mathcal{F}(\mathcal{X}_{n}(1,b_2))$ by replacing $b_1$ by $b_2$ in the vertex description of each face in the face lattice.
It is clear that this is an isomorphism, as the inverse is given by replacing $b_2$ by $b_1$.
Therefore, the $\mathcal{X}_{n}(1,b)$ are combinatorially equivalent for $a=1$ and for any $b\geq 1$.
Similarly as before, for all $a>1$, we define $\varphi: \mathcal{F}(\mathcal{X}_{n}(a_1,b_1)) \to \mathcal{F}(\mathcal{X}_{n}(a_2,b_2))$ by replacing $a_1$ by $a_2$ and $b_1$ by $b_2$ in the vertex description of each face in the face lattice.
Hence, the $\mathcal{X}_{n}(a,b)$ are combinatorially equivalent for any $a>1$ and for all $b\geq 1$.
\end{proof}
\subsection{Volume}
We turn our attention to calculating the volume of $\mathbf{x}$-parking function polytopes $\mathcal{X}_{n}(a,b)$.
We extend previous work of \cite{AW} to find a recursive volume formula in Theorem \ref{thm:generalized_recursive_volume}.
Then, by using exponential generating functions, we find a closed-form expression for the volume of $\mathcal{X}_{n}(a,b)$ in Theorem \ref{thm:generalized_closed_form_volume}.
We also consider the relationship between the $\mathbf{x}$-parking function polytopes and partial permutahedra, allowing us to expand known results on the volume of partial permutahedra.
The relationship between partial permutahedra and $\mathbf{x}$-parking function polytopes is given by the following proposition.
\begin{proposition}\label{prop:x-pfp_pp}
Up to integral equivalence, the intersection of the set of all partial permutahedra $\mathcal{P}(n,p)$ with the set of $\mathbf{x}$-parking function polytopes corresponds to those partial permutahedra $\mathcal{P}(n,p)$ where $p \geq n-1$ and those $\mathbf{x}$-parking function polytopes where $b=1$ (for $n>1$).
Specifically, if $n \geq 2$ and $p \geq n-1$, then $\mathcal{P}(n,p)$ is integrally equivalent to $\mathcal{X}_{n}(a,b)$ if and only if $b=1$ and $a = p - n +2$.
If $n = 1$ and $p \geq n-1$, then $\mathcal{P}(1,p)$ is integrally equivalent to $\mathcal{X}_{1}(a)$ if and only if $a = p + 1$.
\end{proposition}
\begin{proof}
First, recall from Remark \ref{rem:n=1} that when $n=1$ for an $\mathbf{x}$-parking function, $b$ is unused.
The case $n=1$ is clear, as these polytopes are just 1-dimensional line segments of length $a-1 = p$.
Thus, we reduce to the case that $n \geq 2$.
For two polytopes to be integrally equivalent, they must have the same dimension (relative to its affine span), which justifies using $n$ to denote the dimension for both.
We will show that the conditions $b=1, a=p-n+2$ are necessary for $\mathcal{P}(n,p)$ and $\mathcal{X}_{n}(a,b)$ to be integrally equivalent by comparing ``maximal'' vertices.
Apply the coordinate transformation to increase each coordinate by one in $\mathcal{P}(n, p)$ (as in the proof of Proposition \ref{prop:classical-partial}) to match the minimal vertices of the two polytopes.
Consider the ``maximal'' vertices \[v = (a,a+b,\ldots, a+(n-2)b, a+(n-1)b)\] of $ \mathcal{X}_{n}(a,b)$ and $u= (\max(p -n +2,1), \ldots, p, p+1)$ of the shifted $\mathcal{P}(n,p)$, where ``maximal'' means the maximal vertex whose coordinates are weakly increasing.
If the two polytopes are integrally equivalent, we must have $v = u$.
If $b\neq 1$, the difference between the last two coordinates of $v$ is $b\geq 2$ while the difference between the last two coordinates of $u$ is 1, thus $u\neq v$.
Hence, $b=1$ is a necessary condition.
Furthermore, consider $a \neq p -n +2$.
If $n \leq p+1$, the first coordinate of $u$ is $\max(p -n +2,1) = p -n +2$, which gives us $u\neq v$.
If $n > p+1$, we can see that $\max(p -n +2,1) = 1$.
Additionally, the second coordinate of $u$ is at most $1$, since $(p -n +2)+1 \leq 1$.
Thus the first two coordinates of $u$ must be 1.
However, the first two coordinates of $v$ are $a, a+b$, where $b$ is nonzero, thus $u\neq v$.
Hence, the conditions $b=1$, and $a = p - n +2$ are necessary for $\mathcal{P}(n,p) = \mathcal{X}_{n}(a,b)$.
We next show that they are sufficient. Consider $\mathcal{X}_{n}(a,b)$ where $b=1$, and $a = p - n +2$.
Note that this second condition implies $n\leq p+1$.
Using the vertex description of Proposition \ref{prop:abn_vertex}, we can consider $\mathcal{X}_{n}(a,b)$ to be the convex hull of all permutations of
\[ (\underbrace{1, \ldots, 1}_k, \underbrace{p -n +k+2, p -n +k+3, \ldots, p, p +1}_{n-k}),\] for all $0 \leq k \leq n$.
Since $n\leq p+1$ implies $\min(n,p) = n$, this is exactly the vertex description of shifted $\mathcal{P}(n, p)$.
\end{proof}
We now recall the recursive volume formula for the classical parking function polytope.
\begin{theorem}[Theorem 4.1, \cite{AW}]\label{thm:mit_volume}
Define a sequence $\{V_n\}_{n \geq 0}$ by $V_0=1$ and $V_n = \Vol(\mathsf{PF}_n)$ for all positive integers $n$.
Then $V_1 = 0$ and for all $n \geq 2$,
\begin{equation*}
V_n = \frac{1}{n} \sum_{k=0}^{n-1} \binom{n}{k} \frac{(n-k)^{n-k-1}(n+k-1)}{2} V_k.
\end{equation*}
\end{theorem}
We are able to obtain some immediate corollaries of these results by considering dilations of $\mathsf{PF}_n$.
\begin{definition}
For any positive integer $d$, the $d$-dilate of an $\mathbf{x}$-parking function polytope $\mathcal{X}_{n}(a,b)$ is given by the following map on points $\varphi_d: \mathbb{R}^n_{\geq 1} \to \mathbb{R}^n_{\geq 1}$ defined by
\[ (x_1, \ldots, x_n) \mapsto (d(x_1-1) + 1, \ldots, d(x_n-1) + 1).\]
In general, a $d$-dilate of any polytope is the image of a map which (up to translation) multiplies all coordinates by $d$.
\end{definition}
\begin{lemma}\label{lem:1,b,n-dilate+a,1,n-dilate}\
\begin{enumerate}
\item The $\mathbf{x}$-parking function polytope $\mathcal{X}_{n}(1,b)$ is a $b$-dilate of $\mathcal{X}_{n}(1,1)$.
\item Fix $ a \geq 1$. The
$\mathbf{x}$-parking function polytope $\mathcal{X}_{n}(a + (b-1)(a-1) ,b)$ is a $b$-dilate of $\mathcal{X}_{n}(a,1)$.
\end{enumerate}
\end{lemma}
\begin{proof}
This follows from applying the $b$-dilate map $\varphi_b$ onto all the vertices of $\mathcal{X}_{n}(1,1)$ and $\mathcal{X}_{n}(a,1)$ given by Proposition \ref{prop:abn_vertex}, which results in all the vertices of $\mathcal{X}_{n}(1,b)$ and $\mathcal{X}_{n}(a + (b-1)(a-1) ,b)$.
\end{proof}
Knowing that one polytope is a dilate of another allows for a more direct approach to finding the volumes of some of these parking function polytopes.
\begin{corollary}\label{cor:b^n+a,1,n-vol}\
\begin{enumerate}
\item Fix a positive integer $b$. Then $\Vol(\mathcal{X}_{n}(1,b)) = b^n\Vol(\mathcal{X}_{n}(1,1))$.
\item Fix positive integers $a,b$. Then $\Vol(\mathcal{X}_{n}(a+ (b-1)(a-1),b)) = b^n\Vol(\mathcal{X}_{n}(a,1))$.
\end{enumerate}
\end{corollary}
\begin{proof}
Note that the dilation of an $n$-dimensional polytope by a factor of $b$ increases the volume by a factor of $b^n$. The result then follows from Lemma \ref{lem:1,b,n-dilate+a,1,n-dilate}.
\end{proof}
Using the following result on partial permutahedra, we can then calculate the volume of more $\mathbf{x}$-parking function polytopes.
\begin{theorem}[Theorem 4.2, \cite{BCC}]
For any $n$ and $p$ with $p \geq n-1$, the normalized volume of $\mathcal{P}(n,p)$ is given recursively by
\[ \nVol(\mathcal{P}(n,p)) = (n-1)! \sum_{k=1}^n k^{k-2} \frac{\nVol(\mathcal{P}(n-k,p-k))}{(n-k)!}\left(kp-\binom{k}{2}\right) \binom{n}{k},\]
with the initial condition $\nVol(\mathcal{P}(0,p))=1$.
\end{theorem}
\begin{remark}
By Proposition \ref{prop:x-pfp_pp}, $\mathcal{X}_{n}(a,1)$ for $a>1$ is integrally equivalent to $\mathcal{P}(n,p)$ where $p = n + a - 2 >n-1$.
Thus, the volume $\Vol(\mathcal{X}_{n}(a+ (b-1)(a-1),b))$ can be calculated using Corollary \ref{cor:b^n+a,1,n-vol}(2), and converting between volume and normalized volume.
\end{remark}
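This correspondence can be cross-checked computationally: for $p \geq n-1$, the recursion above for $\nVol(\mathcal{P}(n,p))$ should agree with $n!\,V^{a,1}_n$ computed from the recursion of Theorem \ref{thm:generalized_recursive_volume} with $a = p-n+2$. A Python sketch in exact arithmetic (function names are ours):

```python
from fractions import Fraction
from functools import lru_cache
from math import comb, factorial

@lru_cache(maxsize=None)
def nvol_partial_perm(n, p):
    """Normalized volume of P(n, p) via the recursion of Theorem 4.2 in [BCC]."""
    if n == 0:
        return Fraction(1)
    total = Fraction(0)
    for k in range(1, n + 1):
        kk = k ** (k - 2) if k >= 2 else 1  # k^{k-2}; equals 1 for k = 1
        total += (kk * nvol_partial_perm(n - k, p - k) / factorial(n - k)
                  * (k * p - comb(k, 2)) * comb(n, k))
    return factorial(n - 1) * total

@lru_cache(maxsize=None)
def vol_x(n, a):
    """Euclidean volume of X_n(a, 1), via the recursion for V_n^{a,b} with b = 1."""
    if n == 0:
        return Fraction(1)
    if n == 1:
        return Fraction(a - 1)
    total = Fraction(0)
    for k in range(n):
        total += (comb(n, k)
                  * Fraction((n - k) ** (n - k - 1) * (n + k - 1 + 2 * a - 2), 2)
                  * vol_x(k, a))
    return total / n
```

For instance, $\nVol(\mathcal{P}(2,2)) = 7 = 2!\,\Vol(\mathcal{X}_2(2,1))$.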
These results, however, do not give $\Vol(\mathcal{X}_{n}(a,b))$ for all $a,b \geq 1$.
To do so, we need a more general construction, which is given by the following recursive volume formula.
\begin{theorem}\label{thm:generalized_recursive_volume}
Fix two positive integers $a,b$.
Define a sequence $\{V_n^{a,b}\}_{n \geq 0}$ by $V^{a,b}_0=1$ and $V^{a,b}_n = \Vol(\mathcal{X}_{n}(a,b))$ for all positive integers $n$.
Then $V^{a,b}_1 = a-1$ and for $n \geq 2$, $V^{a,b}_n$ is given recursively by
\[ V_n^{a,b} = \frac{1}{n}\sum_{k=0}^{n-1}\binom{n}{k} \frac{(b(n-k))^{n-k-1}(nb+kb-b+2a-2)}{2}V^{a,b}_{k}.\]
\end{theorem}
In the proof of this theorem, we will use the following decomposition lemma.
\begin{lemma}[Proposition 4.1, \cite{AW}; Proposition 2, \cite{DGH}; Section 19.4, \cite{BZ}]\label{lem:decomp}
Let $K_1, \ldots, K_n$ be some convex bodies of $\mathbb{R}^n$ and suppose that $K_{n-m+1}, \ldots, K_n$ are contained in some $m$-dimensional affine subspace $U$ of $\mathbb{R}^n$.
Let $MV_U$ denote the mixed volume with respect to the $m$-dimensional volume measure on $U$, and let $MV_{U^\perp}$ be defined similarly with respect to the orthogonal complement $U^\perp$ of $U$.
Then the mixed volume of $K_1, \ldots, K_n$
\begin{align*}
&MV(K_1, \ldots, K_{n-m}, K_{n-m+1}, \ldots, K_n)\\
&= \binom{n}{m}^{-1} MV_{U^\perp} (K_1', \ldots, K_{n-m}') MV_U(K_{n-m+1}, \ldots, K_n),
\end{align*}
where $K_1', \ldots, K_{n-m}'$ denote the orthogonal projections of $K_1, \ldots, K_{n-m}$ onto $U^\perp$, respectively.
\end{lemma}
We will also use the following fact.
\begin{lemma}[Lemma 4.1, \cite{BCC}]\label{lem:euclidean_volume}
The Euclidean volume of the regular permutahedron $\Pi_n \subseteq \mathbb{R}^n$ is $n^{n-2}\sqrt{n}$.
\end{lemma}
\begin{proof}[Proof of Theorem \ref{thm:generalized_recursive_volume}]
The case where $a=1$ follows from Theorem \ref{thm:mit_volume} and Corollary \ref{cor:b^n+a,1,n-vol}(1).
They imply that
\begin{align*}
V^{1,b}_n &= b^n V_n \\
&= \frac{b^n}{n} \sum_{k=0}^{n-1} \binom{n}{k} \frac{(n-k)^{n-k-1}(n+k-1)}{2} V_k\\
&= \frac{b^n}{n} \sum_{k=0}^{n-1} \binom{n}{k} \frac{(n-k)^{n-k-1}(n+k-1)}{2} \frac{V^{1,b}_k}{b^k}\\
&= \frac{1}{n} \sum_{k=0}^{n-1} \binom{n}{k} \frac{b^{n-k}(n-k)^{n-k-1}(n+k-1)}{2}V^{1,b}_k.
\end{align*}
For the case where $a>1$, we generalize the proof of Theorem 4.1 in \cite{AW}.
Divide $\mathcal{X}_{n}(a,b)$ into $n$-dimensional (full-dimensional) pyramids, each having a facet of $\mathcal{X}_{n}(a,b)$ not containing $I = (1, \ldots, 1)$ as its base and the point $I$ as its apex.
Recall from Corollary \ref{thm:x-facets} that there are $2^n-1+n$ facets since $a>1$.
Of those, exactly $n$ have $I$ as a vertex, so the number of pyramids is $2^n-1$.
Each pyramid has a base which is a facet $F$ with points of $\mathcal{X}_{n}(a,b)$ satisfying the equation
\[ x_{i_1} + x_{i_2} + \cdots + x_{i_k} = ((n-k)b+a) + \cdots + ((n-2)b + a) + ((n-1)b+a),\]
for some $k \in \{1,2,\ldots, n-1, n\}$ and distinct $i_1 < \cdots < i_k$, due to the defining inequalities of $\mathcal{X}_{n}(a,b)$.
Now let $\{j_1, j_2, \ldots, j_{n-k}\} = \{1,2,\ldots, n\} \setminus \{i_1, i_2, \ldots, i_k\}$. Let $\mathcal{X}'_{n-k}(a,b)$ be the polytope containing all points $x'$ such that $x_p' = 0$ for all $p \in \{i_1, i_2, \ldots, i_k\}$ and for some $x \in F$, $x_p' = x_p$ for all $p \in \{j_1, j_2, \ldots, j_{n-k}\}$.
Then $\mathcal{X}'_{n-k}(a,b)$ is an $(n-k)$-dimensional polytope, with the following defining inequalities:
\begin{align*}
\intertext{\emph{For all} $1 \leq p \leq n-k$,}
1 \leq x_{j_p}' &\leq ((n-k-1)b+a), \\
\intertext{\emph{for all} $1 \leq p < q \leq n-k$,}
x_{j_p}' + x_{j_q}' &\leq ((n-k-2)b+a) + ((n-k-1)b+a),\\
\intertext{\emph{for all} $1 \leq p < q < r \leq n-k$,}
x_{j_p}' + x_{j_q}' + x_{j_r}' &\leq ((n-k-3)b+a) + ((n-k-2)b+a) + ((n-k-1)b+a),\\
&\vdots\\
\intertext{\emph{for all} $1 \leq p_1 < p_2 < \cdots <p_{n-k-1} \leq n-k$,}
x_{j_{p_1}}' + x_{j_{p_2}}' + \cdots + x_{j_{p_{n-k-1}}}' &\leq (b+a) + (2b+a) + \cdots + ((n-k-1)b+a),\\
\intertext{\emph{and}}
x_{j_{p_1}}' + x_{j_{p_2}}' + \cdots + x_{j_{p_{n-k}}}' &\leq a + (b+a) + \cdots + ((n-k-1)b+a).
\end{align*}
By comparing with the inequality description given in Proposition \ref{prop:inequality}, we see that $\mathcal{X}'_{n-k}(a,b) \cong \mathcal{X}_{n-k}(a,b)$, as the defining inequalities are the same.
Hence the $(n-k)$-dimensional volume $\Vol_{n-k}(\mathcal{X}'_{n-k}(a,b)) = \Vol_{n-k}(\mathcal{X}_{n-k}(a,b)) = V_{n-k}^{a,b}$.
Let $Q^{a,b}_k$ be the polytope containing all points $x'$ such that for all $p \in \{j_1, j_2, \ldots, j_{n-k}\}$, we have $x_p'=0$, and for some $x \in F$, we have $x_p' = x_p$ for all $p \in \{i_1, i_2, \ldots, i_k\}$.
Then the coordinate values of $(x_{i_1}', x_{i_2}', \ldots, x_{i_k}')$ of the vertices of $Q^{a,b}_k$ are the permutations of $((n-k)b+a, \ldots, (n-2)b + a, (n-1)b+a)$.
Note that $Q^{a,b}_k$ is a translate of the polytope $Q^{1,b}_k$ (both are $(k-1)$-dimensional polytopes), so $Q^{a,b}_k$ and $Q^{1,b}_k$ have the same $(k-1)$-dimensional volume.
Furthermore, $Q^{1,b}_k$ is a $b$-dilate of $Q^{1,1}_k$, which is integrally equivalent to $\Pi(1, \ldots, k) = \Pi_{k-1}$, the regular permutahedron.
As $\Pi_{k-1}$ has volume $k^{k-2} \sqrt{k}$ by Lemma \ref{lem:euclidean_volume}, then $Q^{a,b}_k$ has volume $b^{k-1}k^{k-2} \sqrt{k}$.
Thus, $F$ is a Minkowski sum of two polytopes $\mathcal{X}'_{n-k}(a,b)$ and $Q^{a,b}_k$ which lie in two orthogonal subspaces of $\mathbb{R}^n$.
Therefore, by Lemma \ref{lem:decomp}, the volume of $F$ is
\[ \Vol(F) = \sum_{p_1, \ldots, p_n = 1}^2 MV(K_{p_1}, K_{p_2}, \ldots, K_{p_n}) = V^{a,b}_{n-k} \cdot b^{k-1}k^{k-2} \sqrt{k},\]
where $K_1 = \mathcal{X}'_{n-k}(a,b)$ and $K_2 = Q^{a,b}_k$.
Then the volume of $\Pyr(I,F)$, the pyramid with $I$ as the vertex over base $F$ is
\[ \Vol(\Pyr(I,F)) = \frac{1}{n}h_k\Vol(F) = \frac{1}{n}h_k V^{a,b}_{n-k} \cdot b^{k-1}k^{k-2} \sqrt{k},\]
where $h_k$ denotes the minimum distance from point $I$ to the face $F$.
We calculate that
\begin{align*}
h_k &= \frac{|1+ \cdots + 1 -(((n-k)b+a) + \cdots + ((n-2)b + a) + ((n-1)b+a))|}{\sqrt{1+\cdots +1}}\\
&= \frac{|k - k((2n-k-1)b+2a)/2|}{\sqrt{k}}\\
&= \frac{k(2nb-kb-b+2a-2)}{2\sqrt{k}}.
\end{align*}
Thus,
\begin{align*}
\Vol(\Pyr(I,F))
&=\frac{1}{n}\cdot \frac{(2nb-kb-b+2a-2)}{2}V^{a,b}_{n-k} \cdot b^{k-1}k^{k-1}.
\end{align*}
By the definition of the sequence, $V^{a,b}_0=1$.
Note that $V^{a,b}_1 = a-1$, as the 1-dimensional volume (in this case, length) of the convex hull of the collinear points $1, \ldots, a$ in $\mathbb{R}$ is $a-1$. Thus for $n \geq 2$,
\begin{align*}
V^{a,b}_n &= \sum_{F}\Vol(\Pyr(I,F))\\
&= \sum_{k=1}^{n}\binom{n}{k}\frac{1}{n}\cdot \frac{(2nb-kb-b+2a-2)}{2}V^{a,b}_{n-k} b^{k-1}k^{k-1}\\
&= \frac{1}{n}\sum_{n-k=1}^{n}\binom{n}{n-k} \frac{(2nb-(n-k)b-b+2a-2)}{2}V^{a,b}_{k} b^{n-k-1}(n-k)^{n-k-1}\\
&= \frac{1}{n}\sum_{k=0}^{n-1}\binom{n}{k} \frac{(b(n-k))^{n-k-1}(nb+kb-b+2a-2)}{2}V^{a,b}_{k}.
\end{align*}
\end{proof}
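The recursion is straightforward to check computationally. The following Python sketch (the function name \texttt{volume} is ours) evaluates $V^{a,b}_n$ in exact rational arithmetic; for instance, $\mathcal{X}_2(1,1)$ is the triangle with vertices $(1,1)$, $(1,2)$, $(2,1)$, of Euclidean volume $1/2$, and $\mathcal{X}_2(1,2)$ is the part of $[1,3]^2$ with $x_1+x_2 \leq 4$, of volume $2$.

```python
from fractions import Fraction
from functools import lru_cache
from math import comb

def volume(n, a, b):
    """Euclidean volume V^{a,b}_n of X_n(a,b) via the recursion above."""
    @lru_cache(maxsize=None)
    def V(m):
        if m == 0:
            return Fraction(1)
        # V^{a,b}_m = (1/m) * sum_{k=0}^{m-1} C(m,k) (b(m-k))^{m-k-1}
        #                     * (mb + kb - b + 2a - 2)/2 * V^{a,b}_k
        total = Fraction(0)
        for k in range(m):
            total += comb(m, k) * Fraction(
                (b * (m - k)) ** (m - k - 1)
                * (m * b + k * b - b + 2 * a - 2), 2) * V(k)
        return total / m
    return V(n)

assert volume(1, 7, 3) == 6                 # the segment [1, 7]
assert volume(2, 1, 1) == Fraction(1, 2)    # triangle (1,1), (1,2), (2,1)
assert volume(2, 1, 2) == 2                 # [1,3]^2 cut by x1 + x2 <= 4
```

Note that the recursion also reproduces the base case $V^{a,b}_1 = a-1$ when started from $V^{a,b}_0 = 1$ alone.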
\begin{reptheorem}{thm:generalized_closed_form_volume}
For any positive integers $a,b,n$, the normalized volume $\nVol(\mathcal{X}_{n}(a,b))$ is given by
\begin{equation}
\nVol(\mathcal{X}_{n}(a,b)) = -n!\left(\frac{b}{2}\right)^n \sum_{i=0}^n \binom{n}{i} (2i-3)!! \left(2n-1 + \frac{2a-2}{b}\right)^{n-i}. \nonumber
\end{equation}
\end{reptheorem}
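Before turning to the proof, the closed form can be sanity-checked against small cases whose volumes are computable by hand. The following Python sketch uses our own helper names (\texttt{nvol}, \texttt{df}) and the conventions $(-1)!! = 1$, $(-3)!! = -1$:

```python
from fractions import Fraction
from math import comb, factorial

def df(m):
    """Odd double factorial with (-1)!! = 1 and (-3)!! = -1."""
    if m == -3:
        return Fraction(-1)
    r = Fraction(1)
    while m > 1:
        r *= m
        m -= 2
    return r

def nvol(n, a, b):
    """Normalized volume of X_n(a,b) by the closed form above."""
    s = sum(comb(n, i) * df(2 * i - 3)
            * (2 * n - 1 + Fraction(2 * a - 2, b)) ** (n - i)
            for i in range(n + 1))
    return -factorial(n) * Fraction(b, 2) ** n * s

assert nvol(1, 5, 3) == 4    # X_1(a,b) is the segment [1, a]
assert nvol(2, 1, 1) == 1    # triangle (1,1), (1,2), (2,1): area 1/2
assert nvol(2, 1, 2) == 4    # [1,3]^2 cut by x1 + x2 <= 4: area 2
assert nvol(3, 1, 1) == 24   # classical PF_3, Euclidean volume 4
```

The last value agrees with a direct computation: after translating by $(1,1,1)$, $\mathsf{PF}_3$ is the simplex $y_1+y_2+y_3\leq 3$, $y_i\geq 0$ with the three corners $y_i > 2$ removed, of volume $\tfrac{27}{6} - 3\cdot\tfrac{1}{6} = 4$.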
To prove this, we use the following lemma.
\begin{lemma}\label{lemma:exp-g_b}
The exponential generating function for $\{(bn)^{n-1}\}_{n \geq 1}$
\begin{equation}
g_b(x) := \sum_{n \geq 1} \frac{(bn)^{n-1}}{n!} x^n
\end{equation}
satisfies
\begin{equation}\label{eq:b-Lambert}
g_b(x) = -\frac{1}{b}W_0(-bx),
\end{equation}
where $W_0$ denotes the principal branch of the Lambert $W$ function, and
\begin{equation}\label{eq:g_b(x)}
g_b(x) = xe^{bg_b(x)}.
\end{equation}
\end{lemma}
\begin{proof}
Recall that
\begin{equation}
W_0(z) = \sum_{n \geq 1} \frac{(-n)^{n-1}}{n!} z^n, \nonumber
\end{equation}
so substituting $z=-bx$ gives
\begin{equation}\label{eq:W_0(-bx)}
W_0(-bx) = \sum_{n \geq 1} \frac{(-n)^{n-1}}{n!} (-bx)^n = -b\sum_{n \geq 1} \frac{(bn)^{n-1}}{n!} x^n,
\end{equation}
which implies Equation (\ref{eq:b-Lambert}). A well-known property of the Lambert $W$ function is that
$W_0(z) = ze^{-W_0(z)}$.
Substituting $z=-bx$ gives
$W_0(-bx) = -bxe^{-W_0(-bx)}$,
which by Equation (\ref{eq:b-Lambert}) gives
$-bg_b(x) = -bxe^{bg_b(x)}$,
which implies Equation (\ref{eq:g_b(x)}).
\end{proof}
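Within the radius of convergence $|bx| < 1/e$, the functional equation of Lemma \ref{lemma:exp-g_b} can be confirmed numerically by truncating the series (a sketch; the truncation depth and sample point are our own choices):

```python
from fractions import Fraction
from math import exp, factorial

def g_series(b, x, terms=60):
    """Truncation of g_b(x) = sum_{n>=1} (bn)^{n-1} x^n / n!."""
    return sum(Fraction((b * n) ** (n - 1), factorial(n)) * x ** n
               for n in range(1, terms + 1))

b, x = 2, Fraction(1, 20)        # |bx| = 0.1 < 1/e, so the series converges
g = float(g_series(b, x))
assert abs(g - float(x) * exp(b * g)) < 1e-9    # g_b(x) = x e^{b g_b(x)}
```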
We also need the following exponential generating function for the (non-normalized) volume of $\mathcal{X}_{n}(a,b)$. This is a generalization of Proposition 4.2 in \cite{AW}.
\begin{proposition}\label{prop:general_exp_gen}
Let $V^{a,b}_n$ denote the Euclidean volume of $\mathcal{X}_{n}(a,b)$. Let
\begin{equation}
f_{a,b}(x) := \sum_{n\geq 0} \frac{V^{a,b}_n}{n!}x^n
\end{equation}
be its exponential generating function. Then
\begin{equation}
f_{a,b}(x) = e^{b^2 \int \frac{x}{2}(g_b'(x))^2 \, dx} e^{(a - 1)g_b(x)},
\end{equation}
where $g_b(x)$ is the exponential generating function given in Lemma \ref{lemma:exp-g_b}.
\end{proposition}
\begin{proof}
By Theorem \ref{thm:generalized_recursive_volume}, we have that
\begin{align*}
n \cdot \frac{V^{a,b}_n}{n!} &= \frac{1}{n!} \sum_{k=0}^{n-1} \binom{n}{k}\frac{(b(n-k))^{n-k-1}(nb+kb-b+2a-2)}{2} V^{a,b}_k\\
&= \sum_{k=0}^{n-1} \frac{(b(n-k))^{n-k-1}(b(n-k) + 2kb + (2a-b-2))}{2(n-k)!}\frac{V^{a,b}_k}{k!}\\
&= \sum_{k=0}^{n-1}\frac{1}{2} \frac{(b(n-k))^{n-k}}{(n-k)!}\frac{V^{a,b}_k}{k!} + \sum_{k=0}^{n-1}\frac{(b(n-k))^{n-k-1}bk}{(n-k)!}\frac{V^{a,b}_k}{k!}\\
&+ \sum_{k=0}^{n-1}\frac{2a-b-2}{2} \frac{(b(n-k))^{n-k-1}}{(n-k)!}\frac{V^{a,b}_k}{k!},
\end{align*}
which, after multiplying by $x^{n-1}$ and summing over $n \geq 1$, gives
\begin{equation}\label{eq:first}
f_{a,b}'(x) = \frac{b}{2}g_b'(x)f_{a,b}(x) + bg_b(x)f_{a,b}'(x) + \frac{2a-b-2}{2x}g_b(x)f_{a,b}(x).
\end{equation}
Note that by Equation (\ref{eq:g_b(x)}),
\begin{align*}
g_b'(x) &= (xe^{bg_b(x)})'\\
&= e^{bg_b(x)} + bxg_b'(x)e^{bg_b(x)}\\
&= \frac{g_b(x)}{x} + bg_b'(x)g_b(x),
\end{align*}
which implies that
\begin{equation}\label{eq:star_b}
xg_b'(x) - g_b(x) = bxg_b'(x)g_b(x),
\end{equation}
and
\begin{equation}\label{eq:star_2}
1 - bg_b(x) = \frac{g_b(x)}{xg_b'(x)}.
\end{equation}
Thus by Equation (\ref{eq:first}),
\begin{align*}
f_{a,b}'(x)(1- bg_b(x)) &= \frac{b}{2}g_b'(x)f_{a,b}(x) + \frac{2a-b-2}{2x}g_b(x)f_{a,b}(x)\\
&= \frac{1}{2x}(bxg_b'(x) + (2a-b-2)g_b(x))f_{a,b}(x)\\
&= \frac{1}{2x}(b^2xg_b'(x)g_b(x) + (2a-2)g_b(x))f_{a,b}(x),
\end{align*}
where the last line follows by Equation (\ref{eq:star_b}). Then by dividing both sides by Equation (\ref{eq:star_2}), we get
\begin{align*}
f_{a,b}'(x) &= \frac{xg_b'(x)}{g_b(x)}\frac{1}{2x}(b^2xg_b'(x)g_b(x) + (2a-2)g_b(x))f_{a,b}(x)\\
&= \left( b^2\frac{x}{2} (g_b'(x))^2 + (a-1)g_b'(x) \right) f_{a,b}(x).
\end{align*}
This differential equation has solution
\begin{align*}
f_{a,b}(x) &= e^{\int b^2\frac{x}{2} (g_b'(x))^2 + (a-1)g_b'(x)\, dx}\\
&= e^{b^2\int \frac{x}{2} (g_b'(x))^2 \, dx}e^{(a-1)g_b(x)}.
\end{align*}
\end{proof}
Our proof will also use the following theorem (see for example Chapter 4, Entry 11, \cite{Ram}).
\begin{theorem}[Ramanujan's Master Theorem]\label{thm:RMT} Let $\Gamma(s)$ denote the gamma function. If \begin{equation}\label{eq:RMT_f} f(x) = \sum_{n=0}^\infty \frac{\varphi(n)}{n!}(-x)^n,\end{equation}
then the Mellin transform of $f(x)$ is given by
\begin{equation}\label{eq:RMT_mellin} \int_0^\infty x^{s-1} f(x) \, dx = \Gamma(s)\varphi(-s).\end{equation}
\end{theorem}
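For instance, taking $\varphi(n) = 2^n$ in Equation (\ref{eq:RMT_f}) gives $f(x) = e^{-2x}$, and Equation (\ref{eq:RMT_mellin}) predicts $\int_0^\infty x^{s-1}e^{-2x}\,dx = \Gamma(s)2^{-s}$. A crude numerical check (a sketch; the step size and cutoff are our own choices):

```python
from math import exp, gamma

def mellin_exp2(s, h=1e-3, upper=30.0):
    """Trapezoidal approximation of int_0^inf x^(s-1) exp(-2x) dx."""
    n = int(upper / h)
    xs = [i * h for i in range(n + 1)]
    ys = [x ** (s - 1) * exp(-2 * x) if x > 0 else 0.0 for x in xs]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

for s in (2.0, 3.0, 4.0):
    # Ramanujan's Master Theorem: Gamma(s) * phi(-s) with phi(n) = 2^n
    assert abs(mellin_exp2(s) - gamma(s) * 2.0 ** (-s)) < 1e-5
```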
Now we are ready to prove our theorem.
\begin{proof}[Proof of Theorem \ref{thm:generalized_closed_form_volume}]
By Ramanujan's Master Theorem, for $\varphi(n) := (-1)^nV^{a,b}_n$,
\begin{equation}
\int_0^\infty x^{s-1} f_{a,b}(x) \, dx = \Gamma(s) \varphi(-s) = \Gamma(s) (-1)^{-s} V^{a,b}_{-s}, \nonumber
\end{equation}
where $\Gamma(s)$ denotes the gamma function, and taking $s = -n$, we get
\begin{equation}
\int_0^\infty x^{-n-1} f_{a,b}(x) \, dx= \Gamma(-n) (-1)^{n} V^{a,b}_{n}. \nonumber
\end{equation}
By Equation (\ref{eq:g_b(x)}), we have that
\begin{equation}\label{eq:newstar_1}
x=g_b(x)e^{-bg_b(x)} \qquad \text{and} \qquad dx = (1-bg_b )e^{-bg_b}\, dg_b.
\end{equation}
By Equations (\ref{eq:star_2}) and (\ref{eq:g_b(x)}), we have that
\begin{equation}
(g_b'(x))^2 = \left( \frac{g_b(x)}{x(1-bg_b(x))} \right)^2 = \left(\frac{xe^{bg_b(x)}}{x(1-bg_b(x))}\right)^2 = \left(\frac{e^{bg_b(x)}}{(1-bg_b(x))}\right)^2. \nonumber
\end{equation}
Thus
\begin{align*}
bx (g_b'(x))^2 \, dx &= b\left(g_be^{-bg_b}\right) \left(\frac{e^{bg_b}}{(1-bg_b)}\right)^2 (1-bg_b )e^{-bg_b}\, dg_b\nonumber\\
&= \frac{bg_b}{1-bg_b}\, dg_b.
\end{align*}
By Proposition \ref{prop:general_exp_gen}, we have
\begin{align}\label{eq:newstar_2}
f_{a,b}(x)
&= e^{\frac{b}{2}\int bx(g_b'(x))^2 \, dx}e^{(a-1)g_b(x)}\nonumber\\
&= e^{\frac{b}{2}\int \frac{bg_b}{1-bg_b}\, dg_b}e^{(a-1)g_b(x)}\nonumber\\
&= e^{\frac{b}{2}\int -1 + \frac{1}{1-bg_b}\, dg_b}e^{(a-1)g_b(x)}\nonumber\\
&= e^{-\frac{bg_b(x)}{2}} e^{-\frac{1}{2}\ln(1-bg_b(x))}e^{(a-1)g_b(x)}\nonumber\\
&= \frac{1}{\sqrt{1-bg_b(x)}}e^{(a-\frac{b}{2}-1)g_b(x)}.
\end{align}
Thus by Equations (\ref{eq:newstar_1}) and (\ref{eq:newstar_2}), we have that
\begin{align*}
\Gamma(-n) (-1)^n V^{a,b}_n &= \int_0^{-\infty} (g_b e^{-bg_b})^{-n-1} \frac{e^{(a-\frac{b}{2}-1)g_b}}{\sqrt{1-bg_b}} (1-bg_b )e^{-bg_b}\, dg_b\\
&= \int_0^{-\infty} g_b^{-n-1} \sqrt{1-bg_b} e^{(nb+a-\frac{b}{2}-1)g_b}\, dg_b
\intertext{and by replacing $g_b$ by $-t$ and $dg_b$ by $-dt$,}
&= \int_0^{\infty} (-1)^{-n}t^{-n-1} \sqrt{1+bt} e^{-(nb+a-\frac{b}{2}-1)t}\, dt,
\end{align*}
which implies that
\begin{equation}
V^{a,b}_n = \frac{1}{\Gamma(-n)} \int_0^\infty t^{-n-1}\sqrt{1+bt} e^{-(nb+a-\frac{b}{2}-1)t}\, dt. \nonumber
\end{equation}
Note that as a formal power series,
\begin{equation}
\sqrt{1+bt} = \sum_{\ell=0}^\infty \frac{(-1)^\ell (bt)^\ell (-\frac{1}{2})_\ell}{\ell!}, \nonumber
\end{equation}
where $(-\frac{1}{2})_\ell$ is the Pochhammer symbol, defined by $(\lambda)_\ell := \lambda (\lambda + 1) \cdots (\lambda + \ell -1)$, for any positive integer $\ell$, and by $(\lambda)_0:=1$. So,
\begin{align*}
V^{a,b}_n &= \frac{1}{\Gamma(-n)} \int_0^\infty t^{-n-1} e^{-(nb+a-\frac{b}{2}-1)t}\sum_{\ell=0}^\infty \frac{(-1)^\ell (bt)^\ell (-\frac{1}{2})_\ell}{\ell!}\, dt\\
&= \frac{1}{\Gamma(-n)} \sum_{\ell=0}^\infty \frac{(-1)^\ell b^\ell (-\frac{1}{2})_\ell}{\ell!}\int_0^\infty t^{\ell-n-1} e^{-(nb+a-\frac{b}{2}-1)t}\, dt,
\end{align*}
of which we consider the integral. Let $u = (nb+a-\frac{b}{2} - 1)t$ so $du = (nb+a-\frac{b}{2} - 1)dt$. Then
\begin{align*}
\int_0^\infty t^{\ell-n-1} e^{-(nb+a-\frac{b}{2}-1)t}\, dt &= \int_0^\infty \left(\frac{u}{nb+a-\frac{b}{2}-1}\right)^{\ell-n-1} \frac{e^{-u}}{nb+a-\frac{b}{2} - 1}\, du\\
&= \int_0^\infty e^{-u} \frac{u^{\ell-n-1}}{(nb+a-\frac{b}{2}-1)^{\ell-n}}\, du\\
&= \frac{\Gamma(\ell-n)}{(nb+a-\frac{b}{2}-1)^{\ell-n}}.
\end{align*}
Thus,
\begin{align*}
V^{a,b}_n &= \sum_{\ell=0}^\infty \frac{(-1)^\ell b^\ell (-\frac{1}{2})_\ell\Gamma(\ell-n)}{\ell! \Gamma(-n)(nb+a-\frac{b}{2}-1)^{\ell-n}}\\
&= \sum_{\ell=0}^\infty \frac{(-1)^\ell b^\ell (-\frac{1}{2})_\ell (-n)_\ell}{\ell! (nb+a-\frac{b}{2}-1)^{\ell-n}}\\
&= (nb+a - \frac{b}{2} -1)^n \sum_{\ell=0}^\infty \frac{(-\frac{1}{2})_\ell (-n)_\ell}{\ell!}\left( \frac{-1}{n+\frac{a}{b}-\frac{1}{2}-\frac{1}{b}}\right)^\ell,
\end{align*}
which is a multiple of a Poisson-Charlier polynomial. See for example \cite{OE} for the following facts about the Poisson-Charlier polynomial.
\begin{align*}
C_n\left(\frac{1}{2}; n + \frac{a}{b} - \frac{1}{2} - \frac{1}{b}\right)
&= \sum_{i=0}^n(-1)^i \binom{n}{i}\binom{\frac{1}{2}}{i}i!\left(n + \frac{a}{b} - \frac{1}{2} - \frac{1}{b}\right)^{-i}\\
&= {}_2F_0\left(-n, -\frac{1}{2}; - ; \frac{-1}{n + \frac{a}{b} - \frac{1}{2} - \frac{1}{b}}\right)\\
&= \sum_{\ell=0}^\infty \frac{ (-\frac{1}{2})_\ell(-n)_\ell}{\ell!} \left( \frac{-1}{n + \frac{a}{b} - \frac{1}{2} - \frac{1}{b}}\right)^\ell.
\end{align*}
Thus,
\begingroup
\allowdisplaybreaks
\begin{align*}
V^{a,b}_n &= (nb +a - \frac{b}{2}-1)^n C_n \left(\frac{1}{2}; n + \frac{a}{b} - \frac{1}{2} - \frac{1}{b}\right)\\
&= b^n\left(n +\frac{a}{b} - \frac{1}{2}-\frac{1}{b}\right)^n \sum_{i=0}^n(-1)^i \binom{n}{i}\binom{\frac{1}{2}}{i}i!\left(n + \frac{a}{b} - \frac{1}{2} - \frac{1}{b}\right)^{-i}\\
&= b^n \sum_{i=0}^n(-1)^i \binom{n}{i}\binom{2i}{i} \frac{(-1)^{i+1}}{2^{2i}(2i-1)}i!\left(n + \frac{a}{b} - \frac{1}{2} - \frac{1}{b}\right)^{n-i}\\
&= -b^n \sum_{i=0}^n \binom{n}{i}\frac{(2i)!}{i!2^{i}} \frac{1}{2^i(2i-1)}\left(n + \frac{a}{b} - \frac{1}{2} - \frac{1}{b}\right)^{n-i}\\
&= -b^n \sum_{i=0}^n \binom{n}{i}(2i-1)!!\frac{1}{2^i(2i-1)}2^{i-n}\left(2n - 1 + \frac{2a-2}{b}\right)^{n-i}\\
&= -\left(\frac{b}{2}\right)^n \sum_{i=0}^n \binom{n}{i}(2i-3)!!\left(2n - 1 + \frac{2a-2}{b}\right)^{n-i},
\end{align*}
\endgroup
and to get the normalized volume, we simply multiply by $n!$, which gives the desired formula.
\end{proof}
As a corollary, we prove the following conjecture of Behrend et al. \cite{BCC}.
\begin{corollary}[Conjecture 4.5 and Remark 4.6, \cite{BCC}]\label{cor:conjecture}
For any $n$ and $p$, with $p \geq n-1$,
\begin{equation}
\nVol(\mathcal{P}(n,p)) = n! \sum_{k=0}^n \binom{n}{k}\frac{c_k}{2^k}p^{n-k},
\end{equation}
where the sequence $c_k$ satisfies $c_k = 2(k-1)(c_{k-1}-c_{k-2})$ and has exponential generating function $\sqrt{1-2x}e^x$. Equivalently,
\begin{equation}\label{eq:red_conjecture}
\nVol(\mathcal{P}(n,p)) = - \frac{n!}{2^n}\sum_{i=0}^n \binom{n}{i}(2i-3)!!(2p+1)^{n-i}.
\end{equation}
\end{corollary}
\begin{proof}
The two statements are shown to be equivalent in \cite{BCC}, so we prove the latter.
By Proposition \ref{prop:x-pfp_pp}, for $p \geq n-1$, $\nVol(\mathcal{P}(n,p)) = \nVol(\mathcal{X}_{n}(p-n+2,1))$.
Then by Theorem \ref{thm:generalized_closed_form_volume},
\begin{align*}
\nVol(\mathcal{X}_{n}(p-n+2,1)) &= -n!\left(\frac{1}{2}\right)^n \sum_{i=0}^n \binom{n}{i} (2i-3)!! \left(2n-1 + 2(p-n+2)-2\right)^{n-i}\\
&= -\frac{n!}{2^n} \sum_{i=0}^n \binom{n}{i} (2i-3)!! (2p+1)^{n-i},
\end{align*}
as desired.
\end{proof}
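The equivalence of the two expressions in Corollary \ref{cor:conjecture} can also be checked directly (a sketch; the initial values $c_0 = 1$ and $c_1 = 0$ are read off from the stated generating function $\sqrt{1-2x}\,e^x$, and the helper names are ours):

```python
from fractions import Fraction
from math import comb, factorial

def c_seq(K):
    """c_k with c_0 = 1, c_1 = 0 and c_k = 2(k-1)(c_{k-1} - c_{k-2})."""
    c = [1, 0]
    for k in range(2, K + 1):
        c.append(2 * (k - 1) * (c[k - 1] - c[k - 2]))
    return c

def df(m):
    """Odd double factorial with (-1)!! = 1 and (-3)!! = -1."""
    if m == -3:
        return -1
    r = 1
    while m > 1:
        r, m = r * m, m - 2
    return r

def form_ck(n, p):
    c = c_seq(n)
    return factorial(n) * sum(comb(n, k) * Fraction(c[k], 2 ** k) * p ** (n - k)
                              for k in range(n + 1))

def form_dfact(n, p):
    return -Fraction(factorial(n), 2 ** n) * sum(
        comb(n, i) * df(2 * i - 3) * (2 * p + 1) ** (n - i) for i in range(n + 1))

for n in range(1, 7):
    for p in range(n - 1, n + 3):
        assert form_ck(n, p) == form_dfact(n, p)
assert form_dfact(2, 1) == 1    # nVol(PF_2), the triangle (1,1), (1,2), (2,1)
```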
\section{The convex hull of weakly increasing \texorpdfstring{$\mathbf{x}$}{\textbf{x}}-parking functions}\label{sec:weakly_increasing}
\begin{definition}
A weakly increasing $\mathbf{x}$-parking function associated to a positive integer vector of the form $(a,b,b,\ldots, b)$ is a weakly increasing sequence $(a_1, a_2, \ldots, a_n)$ of positive integers that satisfies $a_i \leq a+(i-1)b$ for all $i$.
\end{definition}
Denote the weakly increasing $\mathbf{x}$-parking function polytope associated to a positive integer vector of the form $(a,b,b,\ldots, b)$ by $\mathcal{X}_{n}^w(a,b)$, where $n$ is the length of the vector. Note that the weakly increasing $\mathbf{x}$-parking functions are just the subset of the $\mathbf{x}$-parking functions which are weakly increasing.
Note that $\mathcal{X}^w_n (1,b)$ is an $(n-1)$-dimensional polytope in $\mathbb{R}^n$ since the first coordinate of any weakly increasing parking function is $1$, that is $x_1=1$, which brings down the dimension of the polytope by 1. See Figure \ref{fig:weakly} for examples.
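For $(a,b) = (1,1)$, the weakly increasing parking functions are counted by the Catalan numbers, which is quick to confirm by brute force (a sketch; the enumeration helper is ours):

```python
from itertools import product
from math import comb

def weakly_increasing_pfs(n, a, b):
    """Weakly increasing sequences with 1 <= a_i <= a + (i-1)b."""
    bounds = [a + i * b for i in range(n)]
    return [s for s in product(*(range(1, m + 1) for m in bounds))
            if all(s[i] <= s[i + 1] for i in range(n - 1))]

catalan = lambda n: comb(2 * n, n) // (n + 1)
for n in range(1, 6):
    assert len(weakly_increasing_pfs(n, 1, 1)) == catalan(n)
```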
Next, we reveal a connection between the weakly increasing $\mathbf{x}$-parking function polytope and the Pitman-Stanley polytope.
The Pitman-Stanley polytope is a well-studied polytope, which has connections to flow polytopes, parking functions, and many other combinatorial objects.
\begin{definition}
For any $\mathbf{x}\in \mathbb{R}^n$, the Pitman-Stanley polytope $\mathsf{PS}_n(\mathbf{x}) $ is defined to be \[\{\mathbf{y} \in \mathbb{R}^n : y_i \geq 0 \text{ and } y_1+\cdots + y_i \leq x_1 + \cdots + x_i \text{ for all } 1\leq i \leq n\}.\]
\end{definition}
\begin{figure}[h]
\centering
\begin{tikzpicture}
[x={(0.003058cm, -0.089913cm)},
y={(0.999995cm, 0.000293cm)},
z={(-0.000018cm, 0.995950cm)},
scale=1.000000,
back/.style={loosely dotted, thin},
edge/.style={color=black, thick},
facet/.style={fill=andresblue,fill opacity=0.500000},
vertex/.style={inner sep=1pt,circle,draw=andrespink,fill=andrespink,thick}]
\coordinate (1.00000, 1.00000, 1.00000) at (1.00000, 1.00000, 1.00000);
\coordinate (1.00000, 2.00000, 3.00000) at (1.00000, 2.00000, 3.00000);
\coordinate (1.00000, 1.00000, 3.00000) at (1.00000, 1.00000, 3.00000);
\coordinate (1.00000, 2.00000, 2.00000) at (1.00000, 2.00000, 2.00000);
\fill[facet] (1.00000, 2.00000, 3.00000) -- (1.00000, 2.00000, 2.00000) -- (1.00000, 1.00000, 1.00000) -- (1.00000, 1.00000, 3.00000) -- cycle {};
\draw[edge] (1.00000, 1.00000, 1.00000) -- (1.00000, 1.00000, 3.00000);
\draw[edge] (1.00000, 1.00000, 1.00000) -- (1.00000, 2.00000, 2.00000);
\draw[edge] (1.00000, 2.00000, 3.00000) -- (1.00000, 1.00000, 3.00000);
\draw[edge] (1.00000, 2.00000, 3.00000) -- (1.00000, 2.00000, 2.00000);
\node[vertex] at (1.00000, 1.00000, 1.00000) {};
\node[vertex] at (1.00000, 2.00000, 3.00000) {};
\node[vertex] at (1.00000, 1.00000, 3.00000) {};
\node[vertex] at (1.00000, 2.00000, 2.00000) {};
\end{tikzpicture}
\qquad
\begin{tikzpicture}
[x={(-0.001611cm, -0.075149cm)},
y={(0.999999cm, -0.000130cm)},
z={(0.000009cm, 0.997172cm)},
scale=1.000000,
back/.style={loosely dotted, thin},
edge/.style={color=black, thick},
facet/.style={fill=andresblue,fill opacity=0.500000},
vertex/.style={inner sep=1pt,circle,draw=andrespink,fill=andrespink,thick}]
\coordinate (1.00000, 1.00000, 1.00000) at (1.00000, 1.00000, 1.00000);
\coordinate (1.00000, 3.00000, 5.00000) at (1.00000, 3.00000, 5.00000);
\coordinate (1.00000, 3.00000, 3.00000) at (1.00000, 3.00000, 3.00000);
\coordinate (1.00000, 1.00000, 5.00000) at (1.00000, 1.00000, 5.00000);
\fill[facet] (1.00000, 3.00000, 5.00000) -- (1.00000, 1.00000, 5.00000) -- (1.00000, 1.00000, 1.00000) -- (1.00000, 3.00000, 3.00000) -- cycle {};
\draw[edge] (1.00000, 1.00000, 1.00000) -- (1.00000, 3.00000, 3.00000);
\draw[edge] (1.00000, 1.00000, 1.00000) -- (1.00000, 1.00000, 5.00000);
\draw[edge] (1.00000, 3.00000, 5.00000) -- (1.00000, 3.00000, 3.00000);
\draw[edge] (1.00000, 3.00000, 5.00000) -- (1.00000, 1.00000, 5.00000);
\node[vertex] at (1.00000, 1.00000, 1.00000) {};
\node[vertex] at (1.00000, 3.00000, 5.00000) {};
\node[vertex] at (1.00000, 3.00000, 3.00000) {};
\node[vertex] at (1.00000, 1.00000, 5.00000) {};
\end{tikzpicture}
\qquad
\begin{tikzpicture}
[x={(-0.217826cm, -0.499310cm)},
y={(0.975988cm, -0.111370cm)},
z={(-0.000078cm, 0.859236cm)},
scale=1.000000,
back/.style={loosely dotted, thin},
edge/.style={color=black, thick},
facet/.style={fill=andresblue,fill opacity=0.500000},
vertex/.style={inner sep=1pt,circle,draw=andrespink,fill=andrespink,thick}]
\coordinate (1.00000, 1.00000, 1.00000) at (1.00000, 1.00000, 1.00000);
\coordinate (2.00000, 3.00000, 4.00000) at (2.00000, 3.00000, 4.00000);
\coordinate (2.00000, 3.00000, 3.00000) at (2.00000, 3.00000, 3.00000);
\coordinate (1.00000, 1.00000, 4.00000) at (1.00000, 1.00000, 4.00000);
\coordinate (2.00000, 2.00000, 4.00000) at (2.00000, 2.00000, 4.00000);
\coordinate (2.00000, 2.00000, 2.00000) at (2.00000, 2.00000, 2.00000);
\coordinate (1.00000, 3.00000, 4.00000) at (1.00000, 3.00000, 4.00000);
\coordinate (1.00000, 3.00000, 3.00000) at (1.00000, 3.00000, 3.00000);
\draw[edge,back] (1.00000, 1.00000, 1.00000) -- (1.00000, 3.00000, 3.00000);
\fill[facet] (1.00000, 1.00000, 4.00000) -- (1.00000, 3.00000, 4.00000) -- (2.00000, 3.00000, 4.00000) -- (2.00000, 2.00000, 4.00000) -- cycle {};
\fill[facet] (1.00000, 3.00000, 3.00000) -- (2.00000, 3.00000, 3.00000) -- (2.00000, 3.00000, 4.00000) -- (1.00000, 3.00000, 4.00000) -- cycle {};
\fill[facet] (2.00000, 2.00000, 2.00000) -- (2.00000, 3.00000, 3.00000) -- (2.00000, 3.00000, 4.00000) -- (2.00000, 2.00000, 4.00000) -- cycle {};
\fill[facet] (2.00000, 2.00000, 2.00000) -- (1.00000, 1.00000, 1.00000) -- (1.00000, 1.00000, 4.00000) -- (2.00000, 2.00000, 4.00000) -- cycle {};
\draw[edge] (1.00000, 1.00000, 1.00000) -- (1.00000, 1.00000, 4.00000);
\draw[edge] (1.00000, 1.00000, 1.00000) -- (2.00000, 2.00000, 2.00000);
\draw[edge] (2.00000, 3.00000, 4.00000) -- (2.00000, 3.00000, 3.00000);
\draw[edge] (2.00000, 3.00000, 4.00000) -- (2.00000, 2.00000, 4.00000);
\draw[edge] (2.00000, 3.00000, 4.00000) -- (1.00000, 3.00000, 4.00000);
\draw[edge] (2.00000, 3.00000, 3.00000) -- (2.00000, 2.00000, 2.00000);
\draw[edge] (2.00000, 3.00000, 3.00000) -- (1.00000, 3.00000, 3.00000);
\draw[edge] (1.00000, 1.00000, 4.00000) -- (2.00000, 2.00000, 4.00000);
\draw[edge] (1.00000, 1.00000, 4.00000) -- (1.00000, 3.00000, 4.00000);
\draw[edge] (2.00000, 2.00000, 4.00000) -- (2.00000, 2.00000, 2.00000);
\draw[edge] (1.00000, 3.00000, 4.00000) -- (1.00000, 3.00000, 3.00000);
\node[vertex] at (1.00000, 1.00000, 1.00000) {};
\node[vertex] at (2.00000, 3.00000, 4.00000) {};
\node[vertex] at (2.00000, 3.00000, 3.00000) {};
\node[vertex] at (1.00000, 1.00000, 4.00000) {};
\node[vertex] at (2.00000, 2.00000, 4.00000) {};
\node[vertex] at (2.00000, 2.00000, 2.00000) {};
\node[vertex] at (1.00000, 3.00000, 4.00000) {};
\node[vertex] at (1.00000, 3.00000, 3.00000) {};
\end{tikzpicture}
\qquad
\begin{tikzpicture}
[x={(-0.268062cm, -0.535732cm)},
y={(0.963402cm, -0.149036cm)},
z={(-0.000034cm, 0.831131cm)},
scale=1.000000,
back/.style={loosely dotted, thin},
edge/.style={color=black, thick},
facet/.style={fill=andresblue,fill opacity=0.500000},
vertex/.style={inner sep=1pt,circle,draw=andrespink,fill=andrespink,thick}]
\coordinate (1.00000, 1.00000, 1.00000) at (1.00000, 1.00000, 1.00000);
\coordinate (2.00000, 4.00000, 6.00000) at (2.00000, 4.00000, 6.00000);
\coordinate (2.00000, 4.00000, 4.00000) at (2.00000, 4.00000, 4.00000);
\coordinate (2.00000, 2.00000, 6.00000) at (2.00000, 2.00000, 6.00000);
\coordinate (2.00000, 2.00000, 2.00000) at (2.00000, 2.00000, 2.00000);
\coordinate (1.00000, 1.00000, 6.00000) at (1.00000, 1.00000, 6.00000);
\coordinate (1.00000, 4.00000, 6.00000) at (1.00000, 4.00000, 6.00000);
\coordinate (1.00000, 4.00000, 4.00000) at (1.00000, 4.00000, 4.00000);
\draw[edge,back] (1.00000, 1.00000, 1.00000) -- (1.00000, 4.00000, 4.00000);
\fill[facet] (2.00000, 2.00000, 2.00000) -- (2.00000, 4.00000, 4.00000) -- (2.00000, 4.00000, 6.00000) -- (2.00000, 2.00000, 6.00000) -- cycle {};
\fill[facet] (1.00000, 4.00000, 4.00000) -- (2.00000, 4.00000, 4.00000) -- (2.00000, 4.00000, 6.00000) -- (1.00000, 4.00000, 6.00000) -- cycle {};
\fill[facet] (1.00000, 4.00000, 6.00000) -- (2.00000, 4.00000, 6.00000) -- (2.00000, 2.00000, 6.00000) -- (1.00000, 1.00000, 6.00000) -- cycle {};
\fill[facet] (2.00000, 2.00000, 6.00000) -- (1.00000, 1.00000, 6.00000) -- (1.00000, 1.00000, 1.00000) -- (2.00000, 2.00000, 2.00000) -- cycle {};
\draw[edge] (1.00000, 1.00000, 1.00000) -- (2.00000, 2.00000, 2.00000);
\draw[edge] (1.00000, 1.00000, 1.00000) -- (1.00000, 1.00000, 6.00000);
\draw[edge] (2.00000, 4.00000, 6.00000) -- (2.00000, 4.00000, 4.00000);
\draw[edge] (2.00000, 4.00000, 6.00000) -- (2.00000, 2.00000, 6.00000);
\draw[edge] (2.00000, 4.00000, 6.00000) -- (1.00000, 4.00000, 6.00000);
\draw[edge] (2.00000, 4.00000, 4.00000) -- (2.00000, 2.00000, 2.00000);
\draw[edge] (2.00000, 4.00000, 4.00000) -- (1.00000, 4.00000, 4.00000);
\draw[edge] (2.00000, 2.00000, 6.00000) -- (2.00000, 2.00000, 2.00000);
\draw[edge] (2.00000, 2.00000, 6.00000) -- (1.00000, 1.00000, 6.00000);
\draw[edge] (1.00000, 1.00000, 6.00000) -- (1.00000, 4.00000, 6.00000);
\draw[edge] (1.00000, 4.00000, 6.00000) -- (1.00000, 4.00000, 4.00000);
\node[vertex] at (1.00000, 1.00000, 1.00000) {};
\node[vertex] at (2.00000, 4.00000, 6.00000) {};
\node[vertex] at (2.00000, 4.00000, 4.00000) {};
\node[vertex] at (2.00000, 2.00000, 6.00000) {};
\node[vertex] at (2.00000, 2.00000, 2.00000) {};
\node[vertex] at (1.00000, 1.00000, 6.00000) {};
\node[vertex] at (1.00000, 4.00000, 6.00000) {};
\node[vertex] at (1.00000, 4.00000, 4.00000) {};
\end{tikzpicture}
\caption{The weakly increasing $\mathbf{x}$-parking function polytopes, from left to right: $\mathcal{X}_3^w(1,1)$, $\mathcal{X}_3^w(1,2)$, $\mathcal{X}_3^w(2,1)$, $\mathcal{X}_3^w(2,2)$. Note that when $a=1$, they are two-dimensional, and when $a>1$, they are three-dimensional.}
\label{fig:weakly}
\end{figure}
\begin{proposition}\label{prop:integral_equivalence}
The weakly increasing $\mathbf{x}$-parking function polytope $\mathcal{X}^w_n(a,b)$ is integrally equivalent to the Pitman-Stanley polytope $\mathsf{PS}_n(a-1,b,\dots, b)$.
\end{proposition}
\begin{proof}
Let $T:\mathbb{R}^{n}\to\mathbb{R}^n$ be the affine transformation defined by $$T(\mathbf{x})=(x_1 - 1, x_2-x_1,x_3-x_2,\dots, x_{n}-x_{n-1}). $$
Note that if $\mathbf{x}\in \mathcal{X}_n^w(a,b)\cap \mathbb{Z}^{n}$, it follows that $\mathbf{x}$ is also a weakly increasing $\mathbf{x}$-parking function.
We can see $T(\mathbf{x}) = \mathbf{y} \in \mathsf{PS}_n(a-1,b,\dots, b)\cap \mathbb{Z}^n$ since $1\leq x_i \leq x_{i+1}$ implies $y_i \geq 0$, and $1\leq x_k \leq a+(k-1)b$ implies $\sum_{i=1}^k y_i = x_k - 1\leq (a-1)+(k-1)b$ for all $k$.
Next, define the affine transformation $S:\mathbb{R}^n\to \mathbb{R}^{n}$ by \[S(\mathbf{y})=(1+y_1,1+y_1+y_2,\dots, 1+y_1+\dots +y_n).\]
For $\mathbf{y}\in \mathsf{PS}_n(a-1,b,\dots, b)\cap \mathbb{Z}^n$, we have that $\mathbf{x} = S(\mathbf{y})$ satisfies $x_i\leq x_{i+1}$ and $$x_i = 1 + \sum_{k=1}^i y_k \leq 1+ (a-1) + (i-1)b = a +(i-1)b,$$
hence $\mathbf{x}\in \mathcal{X}^w_n(a,b)\cap \mathbb{Z}^{n}$.
By construction, $T$ and $S$ are mutually inverse and preserve the lattice $\mathbb{Z}^n$, so they give an integral equivalence.
\end{proof}
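The maps $T$ and $S$ in the proof can be checked on small cases (a sketch; the enumeration is ours): $S \circ T$ is the identity, and $T$ sends weakly increasing $\mathbf{x}$-parking functions into $\mathsf{PS}_n(a-1,b,\ldots,b)$.

```python
from itertools import product, accumulate

def T(x):
    """x |-> (x_1 - 1, x_2 - x_1, ..., x_n - x_{n-1})."""
    return (x[0] - 1,) + tuple(x[i] - x[i - 1] for i in range(1, len(x)))

def S(y):
    """y |-> (1 + y_1, 1 + y_1 + y_2, ...)."""
    return tuple(1 + s for s in accumulate(y))

n, a, b = 3, 2, 2
pfs = [s for s in product(*(range(1, a + i * b + 1) for i in range(n)))
       if all(s[i] <= s[i + 1] for i in range(n - 1))]
for x in pfs:
    y = T(x)
    assert S(y) == x                                  # S inverts T
    assert all(v >= 0 for v in y)                     # y_i >= 0
    assert all(sum(y[:k + 1]) <= (a - 1) + k * b      # partial-sum bounds
               for k in range(n))                     # of PS_n(a-1, b, ..., b)
```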
\begin{corollary}
Let $t\in \mathbb{Z}_{\geq 0}$.
The number of lattice points in the $t$-dilate of $\mathcal{X}^w_n(a,b)$ is given by
$$|t\mathcal{X}^w_n(a,b)\cap \mathbb{Z}^n|=\frac{1}{n!}(t(a-1)+1)(t(a-1+nb)+2)(t(a-1+nb)+3)\cdots (t(a-1+nb)+n).$$
\end{corollary}
\begin{proof}
By Proposition \ref{prop:integral_equivalence}, $\mathcal{X}^w_n(a,b)$ is integrally equivalent to $\mathsf{PS}_n(a-1,b,\dots, b)$.
By substituting $a-1$ for $a$ in the equation of Theorem 13 in \cite{PitmanStanley}, we obtain the result.
\end{proof}
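A brute-force check of the lattice-point formula for small parameters (a sketch; we use the inequality description $t \leq x_1 \leq \cdots \leq x_n$ and $x_i \leq t(a+(i-1)b)$ of the dilate, which follows from the integral equivalence above):

```python
from itertools import product
from math import factorial

def count_dilate(n, a, b, t):
    """Lattice points of t * X^w_n(a,b), by direct enumeration."""
    top = t * (a + (n - 1) * b)
    return sum(1 for x in product(range(t, top + 1), repeat=n)
               if all(x[i] <= x[i + 1] for i in range(n - 1))
               and all(x[i] <= t * (a + i * b) for i in range(n)))

def formula(n, a, b, t):
    v = 1
    for i in range(2, n + 1):
        v *= t * (a - 1 + n * b) + i
    return (t * (a - 1) + 1) * v // factorial(n)

for t in range(1, 4):
    for (n, a, b) in [(2, 1, 1), (2, 2, 1), (3, 2, 1), (3, 1, 2)]:
        assert count_dilate(n, a, b, t) == formula(n, a, b, t)
```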
Another consequence of Proposition \ref{prop:integral_equivalence} is the following:
\begin{corollary}
For the special case when $\mathbf{x}=(a,b,\dots,b)=(1,1,\dots,1)\in \mathbb{R}^n$, the weakly increasing classical parking function polytope $\mathcal{X}_{n}^w(1,1)$ has normalized volume $n^{n-2}$ and contains $C_{n}$ lattice points, where $C_n=\displaystyle{\frac{1}{n+1}\binom{2n}{n}}$ denotes the $n$-th Catalan number.
\end{corollary}
\begin{proof}
The weakly increasing classical parking function polytope $\mathcal{X}^w_n(1,1)$ is integrally equivalent to $\mathsf{PS}_n(0,1,\ldots,1)$, and hence to the Pitman-Stanley polytope $\mathsf{PS}_{n-1}(1,1,\ldots,1)$, since the constraints $0 \leq y_1 \leq 0$ force the first coordinate to be $0$.
It follows from work by Pitman and Stanley \cite{PitmanStanley} and Benedetti et al.~\cite{BGHHKMY} that the Pitman-Stanley polytope is integrally equivalent to a flow polytope arising from a graph consisting of a path $1\rightarrow 2 \rightarrow \cdots \rightarrow n$ and additional edges $(1,n),(2,n),\dots, (n-2,n)$, for which the volume and lattice point count are known.
\end{proof}
The following proposition is known and can be deduced from the existing literature detailed above, but we provide a proof for completeness.
\begin{proposition}
The weakly increasing classical parking function polytope $\mathcal{X}^w_n(1,1)$ is an $(n-1)$-dimensional polytope given by the following equality and inequalities.
\begin{align*}
x_1 &= 1,\\
x_i &\leq i, &\text{ \emph{for} } 2\leq i\leq n,\\
x_{i-1} &\leq x_i, & \text{ \emph{for} }2\leq i\leq n.
\end{align*}
Furthermore, $\mathcal{X}_n^w(1,1)$ has
\begin{enumerate}[\emph{(}i\emph{)}]
\item $2(n-1)$ facets,
\item $2^{n-1}$ vertices, and
\item $2^{n-2}(n-1)$ edges.
\end{enumerate}
\end{proposition}
\begin{proof}
The inequality description follows similarly to the proof of Proposition \ref{prop:inequality} restricting ourselves to the weakly-increasing $\mathbf{x}$-parking functions.
\begin{enumerate}[($i$)]
\item This follows from a straightforward enumeration of the inequalities in the description above.
\item All the vertices are of the form $$(\underbrace{1,\ldots,1}_k, v_{k+1}, \dots, v_n)$$ for some $1 \leq k \leq n$, where $v_{k+1} = k+1$ and for each $k+1 \leq i < n$ we have either $v_{i+1} = v_i$ (``min'') or $v_{i+1} =i+1$ (``max'').
Hence, each vertex corresponds to a sequence of $n-1$ binary choices.
\item We claim that vertices $v,u$ are connected by an edge if and only if their construction (given above by a sequence of binary choices) differs by exactly one choice.
We show this inductively.
Assume this is true for $\mathcal{X}_m^w (1,1)$ where $m<n$ and let $u,v$ be two vertices in $\mathcal{X}_n^w(1,1)$ that differ by the $k$-th choice.
If $k = n$, we have that $u' = v'$ where $v' = (v_1, \dots, v_{n-1})$, which is a vertex of $\mathcal{X}_{n-1}^w (1,1)$.
Hence, there exists a $c'\in \mathbb{R}^{n-1}$ that is uniquely maximized over $\mathcal{X}_{n-1}^w(1,1)$ at $v'$.
Let $c = (c',0)\in \mathbb{R}^n$; it is evident that $c\cdot u = c\cdot v > c \cdot w$ for any other vertex $w\in \mathcal{X}_n^w(1,1)$.
If $k<n$ and we continue to choose min for the rest of the choices after $k$, it follows that
\begin{align*}
u &= (u_1, \dots, u_{k-1}, \underbrace{u_{k-1}, \dots, u_{k-1}}_{n-k+1}), \\ v &= (u_1, \dots, u_{k-1}, \underbrace{k, \dots, k}_{n-k+1}).
\end{align*}
\end{enumerate}
\end{proof}
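The vertex and edge counts can be confirmed for small $n$ by generating the vertices from the min/max choice sequences described in the proof (a sketch; helper names are ours):

```python
from itertools import product

def vertices(n):
    """Vertices of X^w_n(1,1) from (n-1) binary min/max choices."""
    vs = {}
    for choices in product((0, 1), repeat=n - 1):
        v = [1]
        for i, c in enumerate(choices, start=1):
            v.append(v[-1] if c == 0 else i + 1)   # min: repeat; max: i+1
        vs[choices] = tuple(v)
    return vs

for n in range(2, 8):
    vs = vertices(n)
    assert len(set(vs.values())) == 2 ** (n - 1)        # distinct vertices
    # pairs of choice sequences differing in exactly one coordinate
    edges = sum(1 for c in vs for d in vs
                if sum(a != b for a, b in zip(c, d)) == 1) // 2
    assert edges == 2 ** (n - 2) * (n - 1)
```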
\begin{remark}
It follows from the inequality description of $\mathcal{X}_n^w(1,1)$ that all lattice points are weakly increasing parking functions.
\end{remark}
\section{Further Directions and Discussion}\label{sec:future}
We conclude this paper by providing some directions for future research.
\subsection{On the classical parking function polytope}
Given a term order $\prec$, every non-zero polynomial $f \in k[\mathbf{x}]$ has a unique initial monomial, denoted by $in_\prec(f)$.
If $I$ is an ideal in $k[\mathbf{x}]$, then its \textit{initial ideal} is the monomial ideal $in_\prec(I) := \langle in_\prec(f) : f \in I \rangle$.
Let $\mathcal{A} = \{ \mathbf{a}_1, \ldots, \mathbf{a}_k\} \subseteq \mathbb{Z}^n$ and denote the toric ideal of $\mathcal{A}$ by $I_\mathcal{A}$.
\begin{proposition}[Corollary 8.9, \cite{Stu}] The initial ideal $in_\prec(I_\mathcal{A})$ is square-free if and only if the corresponding regular triangulation $\Delta_\prec$ of $\mathcal{A}$ is unimodular.
\end{proposition}
Using the theory of Gr\"obner bases, one may approach proving or disproving the following conjecture that we pose.
\begin{conjecture}
The parking function polytope $\mathsf{PF}_n$ admits a regular unimodular triangulation.
\end{conjecture}
\begin{example}
Consider all lattice points of $\mathsf{PF}_3$: the $16$ parking functions of length $3$ together with the point $(2,2,2)$, which can be used for a triangulation.
Let $R = \mathbb{Q}[a,b, \ldots,p,q]$, where each variable corresponds to a lattice point of $\mathsf{PF}_3$, let $S = \mathbb{Q}[x,y,z,w]$, and take $f: R \to S$ to be defined by sending $a, b, \ldots, q$, in order, to the monomials
\[x^1 y^1 z^1 w, x^1 y^1 z^2 w, x^1 y^2 z^1 w, x^2 y^1 z^1 w, x^1 y^1 z^3 w, x^1 y^3 z^1 w, x^3 y^1 z^1 w, x^1 y^2 z^2 w, \]
\[x^2 y^1 z^2 w, x^2 y^2 z^1 w,x^1 y^2 z^3 w, x^1 y^3 z^2 w, x^2 y^1 z^3 w, x^3 y^1 z^2 w, x^3 y^2 z^1 w, x^2 y^3 z^1 w, x^2 y^2 z^2 w. \]
Then the initial ideal is
\[(ae, af, bf, ef, ag, bg, cg, eg, fg, ah, bh, gh, ai, bi, ci, fi, aj, bj, cj, ej, ak, bk, ck, dk, fk, gk, al, bl, cl, \]
\[ dl, el, gl, il, am, bm, cm, dm, fm, gm, hm, an, bn, cn, dn, en, fn, hn, kn, ln, ao, bo, co, do, eo, fo, \]
\[ho, io, ko, lo, mo, ap, bp, cp, dp, ep, gp, hp, ip, kp, mp, np, aq, bq, cq, dq, eq, fq, gq, hq, iq, kq).\]
Notice that the initial ideal is square-free; hence, there exists a unimodular triangulation of this parking function polytope using only the parking functions and an additional lattice point as vertices.
\end{example}
If this conjecture holds true, the following problem may be of interest.
\begin{problem}
Find a bijection between the simplices of a unimodular triangulation of $\mathsf{PF}_n$ and $(0,1)$-matrices with two 1's in each row with positive permanent, as discussed in Theorem \ref{thm:main_theorem}.
\end{problem}
The \emph{Ehrhart function} of a polytope $P\subset \mathbb{R}^n$ is $\operatorname{ehr}_P(t):=|tP\cap \mathbb{Z}^n|$, where $tP=\{t\mathbf{x}:\ \mathbf{x}\in P\}$.
When $P$ is a lattice polytope (its vertices have integer coordinates), the Ehrhart function is a polynomial in $t$, with degree equal to the dimension of $P$, leading coefficient equal to its volume, second-leading coefficient equal to half the surface area, and constant coefficient 1.
The \emph{Ehrhart polynomial} of a lattice polytope $P$ of dimension $n$ can always be written in the form $\operatorname{ehr}_P(t)=\sum_{i=0}^nh_i^*\binom{t+n-i}{n}$; the sequence $(h_0^*,\dots,h_n^*)$ is called the \emph{$h^*$-vector}.
Equivalently, $\sum_{t\geq 0}\operatorname{ehr}_P(t)z^t=\frac{h^*(P;z)}{(1-z)^{n+1}}$, where $h^*(P;z)=h_0^*+h_1^*z+\cdots+h_n^*z^n$.
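As a toy illustration, for the triangle $\mathsf{PF}_2 = \operatorname{conv}\{(1,1),(1,2),(2,1)\}$ one can count dilates directly and read off these features (a sketch; we use the inequality description $t \leq x_i \leq 2t$, $x_1+x_2 \leq 3t$ of the dilate, and the helper names are ours):

```python
from math import comb

def ehr_pf2(t):
    """Lattice points of t * PF_2: t <= x_i <= 2t and x_1 + x_2 <= 3t."""
    return sum(1 for x in range(t, 2 * t + 1) for y in range(t, 2 * t + 1)
               if x + y <= 3 * t)

vals = [ehr_pf2(t) for t in range(5)]
assert vals == [(t + 1) * (t + 2) // 2 for t in range(5)]
# ehr(t) = t^2/2 + 3t/2 + 1: leading coefficient = volume 1/2, constant = 1
# h*-vector via h*_j = sum_i (-1)^i C(3,i) ehr(j-i):
hstar = [sum((-1) ** i * comb(3, i) * vals[j - i] for i in range(j + 1))
         for j in range(3)]
assert hstar == [1, 0, 0]    # h*(PF_2; z) = 1, so nVol(PF_2) = 1
```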
\begin{problem}
Determine a formula for the Ehrhart polynomial (or equivalently, the $h^*$-polynomial) of $\mathsf{PF}_n$.
\end{problem}
If the conjecture above holds true, then it may be useful in studying the $h^*$-polynomial, by the following proposition due to Stanley \cite{StanleyDecompositions}, which would require us instead to study the $h$-polynomial of the triangulation.
\begin{proposition}
If $P$ is a lattice polytope that admits a unimodular triangulation, then the $h^*$-polynomial is given by the $h$-polynomial of the triangulation.
\end{proposition}
\subsection{On \texorpdfstring{$\mathbf{x}$}{\textbf{x}}-parking function polytopes}
The main object of study in this paper is the $\mathbf{x}$-parking function polytope for $\mathbf{x}=(a,b,\dots,b)$.
One can ask for the face structure, volume, and lattice-point enumeration for parking function polytopes where $\mathbf{x}\neq (a,b,\dots,b)$.
In the same spirit as Stanley's original problem \cite{Sta}, we pose the following:
\begin{problem}\label{problem_x} For $\mathbf{x}=(x_1,\dots,x_n)\in \mathbb{Z}_{>0}^n$, let $\mathfrak{X}_n$ be the convex hull of $\mathbf{x}$-parking functions of length $n$.
\begin{enumerate}
\item Find the number of $k$-dimensional faces of $\mathfrak{X}_n$ for $k \in \{ 0, \ldots, n\}$ and given $\mathbf{x}$.
\item Find the volume of $\mathfrak{X}_n$ for given $\mathbf{x}$.
\item Find the number of integer points in $\mathfrak{X}_n$ for given $\mathbf{x}$, i.e., the number of elements of $\mathfrak{X}_n \cap \mathbb{Z}^n$.
\item More generally, find a formula for the Ehrhart polynomial (or equivalently, the $h^*$-polynomial) of $\mathfrak{X}_n$ for given $\mathbf{x}$.
\end{enumerate}
\end{problem}
This problem is likely to be challenging in its full generality, as even finding explicit formulas for the number of $\mathbf{x}$-parking functions for arbitrary $\mathbf{x}$ is noted by Yan to be challenging \cite{Yan2}.
General enumerative results can be found in \cite{KY} in terms of Gon\v{c}arov polynomials and more on the history of enumerative results can be found in \cite{GH}.
A starting point for Problem \ref{problem_x} could be to consider some of those special cases from \cite{Yan2} and explore the connection to Gon\v{c}arov polynomials.
We note that the problem of determining the number of lattice points and the Ehrhart polynomial (or $h^*$-polynomial) of the $\mathbf{x}$-parking function polytope when $\mathbf{x}=(a,b,\dots,b)$ remains open.
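As a baseline for such enumerative questions, $\mathbf{x}$-parking functions can be counted by brute force for small $n$ directly from the defining inequalities on the increasing rearrangement. The Python sketch below (an illustration of ours, not part of the cited results) confirms the classical count $(n+1)^{n-1}$ and the formula $a(a+nb)^{n-1}$ for $\mathbf{x}=(a,b,\dots,b)$ in small cases:

```python
from itertools import product

def is_x_parking(seq, x):
    """seq is an x-parking function iff its weakly increasing rearrangement
    b satisfies b_i <= x_1 + ... + x_i for every i."""
    partial = 0
    for bi, xi in zip(sorted(seq), x):
        partial += xi
        if bi > partial:
            return False
    return True

def count_x_parking(x):
    """Brute-force count; entries of an x-parking function lie in [1, sum(x)]."""
    n, bound = len(x), sum(x)
    return sum(is_x_parking(seq, x)
               for seq in product(range(1, bound + 1), repeat=n))

# Classical case x = (1,...,1): there are (n+1)^(n-1) parking functions.
assert count_x_parking((1, 1, 1)) == 4 ** 2
# Case x = (a, b, ..., b): the count is a * (a + n*b)^(n-1).
assert count_x_parking((2, 3, 3)) == 2 * (2 + 3 * 3) ** 2
```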
\subsection{Other generalizations of parking functions and their convex hulls}
There is a plethora of generalizations of parking functions in the literature, and one can naturally pose analogues of Problem \ref{problem_x} for one's favorite generalization.
One such generalization is known as the $(a,b)$-\emph{rational parking functions}.
There is a well-known bijection between Dyck paths of length $n$ and all possible increasing rearrangements of parking functions of length $n$.
By labeling the North steps of the Dyck paths with elements in $[n] = \{1,2,\dots, n\}$ such that each element appears exactly once and the labels increase within each column going North, we can construct a bijection between the labeled Dyck paths of length $n$ and all parking functions of length $n$.
Now, let $a,b \in \mathbb{Z}_{>0}$.
An $(a,b)$-\emph{Dyck path} is a lattice path from $(0,0)$ to $(b,a)$ (that is, with $a$ North steps and $b$ East steps) which stays weakly above the diagonal line $y = \frac{a}{b}x$.
An $(n,n)$-Dyck path is just a standard Dyck path of length $n$.
There is a canonical bijection between $(n,n)$-Dyck paths and $(n,n+1)$-Dyck paths, since the last step of an $(n,n+1)$-Dyck path must be an East step.
If $a,b$ are coprime, the number of $(a,b)$-Dyck paths is given by $$\frac{1}{a+b}\binom{a+b}{a,b} = \frac{(a+b-1)!}{a!b!},$$ and is called the \emph{rational Catalan number}.
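The defining condition can be checked mechanically: the following Python sketch (ours) counts $(a,b)$-Dyck paths with a simple dynamic program over the lattice points weakly above the diagonal and compares the result with the closed formula:

```python
from math import comb, gcd

def count_ab_dyck(a, b):
    """Count lattice paths from (0,0) to (b,a) with N and E steps that stay
    weakly above y = (a/b) x, i.e. b*y >= a*x at every lattice point."""
    # ways[x][y] = number of admissible paths from (0,0) to (x,y);
    # forbidden points keep the value 0 and so contribute nothing.
    ways = [[0] * (a + 1) for _ in range(b + 1)]
    ways[0][0] = 1
    for x in range(b + 1):
        for y in range(a + 1):
            if b * y < a * x:            # strictly below the diagonal
                continue
            if x > 0:
                ways[x][y] += ways[x - 1][y]
            if y > 0:
                ways[x][y] += ways[x][y - 1]
    return ways[b][a]

# For coprime a, b this matches the rational Catalan number C(a+b, a)/(a+b).
for a, b in [(2, 3), (3, 4), (3, 5), (5, 7)]:
    assert gcd(a, b) == 1
    assert count_ab_dyck(a, b) == comb(a + b, a) // (a + b)
```

In particular $(a,b)=(n,n+1)$ recovers the ordinary Catalan numbers.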
\begin{definition}
Let $a,b$ be coprime.
An $(a,b)$-\emph{parking function} is an $(a,b)$-Dyck path together with a labeling of the North steps by the set $[a] = \{1,2,\dots, a\}$ such that the labels increase within each column going North.
Define the $(a,b)$-\emph{parking function polytope} $\mathcal{P}_{a,b}$ as the convex hull of all $(a,b)$-parking functions, viewed as points in $\mathbb{R}^a$.
\end{definition}
\begin{remark}
There are $b^{a-1}$ many $(a,b)$-parking functions for coprime $(a,b)$.
The classical parking functions are recovered when $(a,b) = (n,n+1)$.
\end{remark}
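The count $b^{a-1}$ can be verified in small cases by enumerating $(a,b)$-Dyck paths through the heights of their East steps and weighting each path by its number of admissible labelings, which is a multinomial coefficient since labels are forced to increase within each column. A brute-force Python sketch of ours:

```python
from itertools import combinations_with_replacement
from math import factorial

def count_ab_parking(a, b):
    """Count (a,b)-parking functions: sum, over (a,b)-Dyck paths, of the
    number of labelings of the North steps by [a] that increase in columns."""
    total = 0
    # y[j] = height of the East step from x=j to x=j+1; heights are
    # nondecreasing, and staying weakly above y=(a/b)x forces b*y[j] >= a*(j+1).
    for y in combinations_with_replacement(range(a + 1), b):
        if all(b * y[j] >= a * (j + 1) for j in range(b)):
            # North-run lengths in columns x = 0, 1, ..., b.
            runs = [y[0]] + [y[j] - y[j - 1] for j in range(1, b)] + [a - y[-1]]
            labelings = factorial(a)
            for c in runs:
                labelings //= factorial(c)
            total += labelings
    return total

# For coprime a, b there are b^(a-1) many (a,b)-parking functions.
assert count_ab_parking(2, 3) == 3 ** 1
assert count_ab_parking(3, 4) == 4 ** 2   # classical parking functions, n = 3
assert count_ab_parking(4, 3) == 3 ** 3
```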
The following are two propositions towards the study of the $(a,b)$-\emph{parking function polytope} (proofs omitted).
\begin{proposition}
Consider the $(a,b)$-parking function polytope $\mathcal{P}_{a,b}$.
\begin{enumerate}
\item $\mathcal{P}_{a,b}$ is an $a$-dimensional polytope where the vertices are permutations of
\[(\underbrace{1,\ldots, 1\,}_\text{$k$ many }, b_{k+1}, b_{k+2}, \ldots, b_a),\]
for $1 \leq k \leq a$ where $b_i = \lceil \frac{b}{a}(i-1)\rceil$ for $i>1$, $b_1 = 1$.
\item If $b>a$, then the number of vertices of $\mathcal{P}_{a,b}$ is
\[a! \left(\frac{1}{1!} + \frac{1}{2!} + \cdots + \frac{1}{a!}\right).\]
If $b<a$, for any $1\leq i\leq b$, let $m_i = | \{j \text{ such that } b_j = i\}|$. Then the number of vertices of $\mathcal{P}_{a,b}$ is
\[a! \left(\frac{1}{m_1!m_2!\cdots m_b!} + \sum\limits_{k=2}^b M_k\right),\]
where for $2\leq k \leq b$, $$M_k = \frac{1}{m_{k+1}!\cdots m_b!} \left(\sum\limits_{i=1}^{m_k} \frac{1}{(m_1+\cdots + m_{k-1} + i)! (m_k-i)!}\right).$$
\end{enumerate}
\end{proposition}
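Since the claimed vertices are explicit, the count in part (2) can be checked in small cases by enumerating the distinct permutations of the patterns from part (1); the Python sketch below (ours) does this for $b>a$:

```python
from itertools import permutations
from math import factorial

def vertices_by_enumeration(a, b):
    """All distinct permutations of the claimed vertex patterns
    (1,...,1, b_{k+1}, ..., b_a), where b_i = ceil(b(i-1)/a) and b_1 = 1."""
    heights = [1] + [(b * (i - 1) + a - 1) // a for i in range(2, a + 1)]
    verts = set()
    for k in range(1, a + 1):
        verts.update(permutations([1] * k + heights[k:]))
    return verts

def vertex_count_formula(a):
    """a! (1/1! + 1/2! + ... + 1/a!), the claimed count for b > a."""
    return sum(factorial(a) // factorial(k) for k in range(1, a + 1))

for a, b in [(2, 5), (3, 5), (4, 7)]:
    assert len(vertices_by_enumeration(a, b)) == vertex_count_formula(a)
```

For $b>a$ the heights $b_2<\cdots<b_a$ are distinct, so the pattern with $k$ ones contributes exactly $a!/k!$ permutations, which explains the formula.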
\begin{proposition}
The $(a,b)$-parking function polytope $\mathcal{P}_{a,b}$ is given by the following minimal inequality description:
\begin{itemize}
\item For $b>a$,
\begin{align*}
1&\leq x_i\leq b_a, &\text{ for } 1\leq i \leq a,\\
x_i+x_j &\leq b_{a-1} + b_a, &\text{ for } i<j,\\
&\vdots \\
x_{i_1} + x_{i_2} + \cdots + x_{i_{a-2}} &\leq b_3 + b_4 +\cdots + b_{a-1}+ b_a, &\text{ for } i_1 < i_2 < \cdots < i_{a-2}, \\
x_1+x_2+\cdots + x_a &\leq b_1 + b_2 + \cdots + b_{a}.
\end{align*}
Furthermore, the number of facets is equal to $2^a -1$.
\item If $b = a-1$, the minimal hyperplane description is
\begin{align*}
1&\leq x_i\leq b_a, &\text{ for } 1\leq i \leq a,\\
x_i+x_j &\leq b_{a-1} + b_a, &\text{ for } i<j,\\
&\vdots \\
x_{i_1} + x_{i_2} + \cdots + x_{i_{a-3}} &\leq b_4 + b_5+\cdots + b_{a-1}+ b_a, &\text{ for } i_1 < i_2 < \cdots < i_{a-3}, \\
x_1+x_2+\cdots + x_a &\leq b_1 + b_2 + \cdots + b_{a}.
\end{align*}
Furthermore, the number of facets is equal to $2^a -1 - \binom{a}{a-2} = 2^a - 1 - \frac{a(a-1)}{2}$.
\end{itemize}
\end{proposition}
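For $b>a$, the facet count can be read off the inequality description: $a$ lower bounds $1\leq x_i$, plus one upper bound for each coordinate subset of size $s\in\{1,\dots,a-2\}$, plus one for the full sum, giving $a+\sum_{s=1}^{a-2}\binom{a}{s}+1 = 2^a-1$. A quick Python check of this identity (ours):

```python
from math import comb

def facet_count_b_gt_a(a):
    """Inequalities in the minimal description for b > a: a lower bounds,
    one upper bound per subset of size s in {1,...,a-2}, and the full sum."""
    lower = a
    upper = sum(comb(a, s) for s in range(1, a - 1)) + 1
    return lower + upper

for a in range(3, 12):
    assert facet_count_b_gt_a(a) == 2 ** a - 1
```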
\section*{Acknowledgements}
The authors thank Esme Bajo and Jason Zhao for helpful conversations and Douglas Varela for his insight on the analytical tools used.
\bibliographystyle{amsplain}
\section{Programming with Equations}
\label{sec:progeq}
Functional programming has proved extremely useful in making the task of writing
correct software more abstract and thus less tied to the specific, and complex,
architecture of modern computers. This, is in a large part, due to its extensive
use of types as an abstraction mechanism, specifying in a crisp way the intended
behaviour of a program, but it also relies on its \emph{declarative} style, as
a mathematical approach to functions and data structures. However, the vast gain
in expressivity obtained through the development of \emph{dependent types} makes
the programming task more challenging, as it amounts to the question of proving
complex theorems --- as illustrated by the double nature of proof
assistants such as Coq~\cite{dowek:al:93:coq} and Agda~\cite{norell:phd}.
Keeping this task as simple as possible is then of the highest importance, and
it requires the use of a clear declarative style.
There are two main avenues for specifying a language of proofs, or programs,
that is abstract enough to support complex developments involving dependent
types. The first approach, chosen by the Coq project, is to have a language
of \emph{tactics} that partially automate the construction of proofs --- that
is, to mechanically construct complex programs based on the composition of a
few generic commands. While this takes the development task closer to the
usual idea of proving a mathematical theorem, the second approach is to take
the programming viewpoint: although Coq allows to directly write proof terms,
this is better illustrated by Agda, where a syntax inspired by Haskell
\cite{haskell:www} provides a clear \emph{equational} style.
Our goal here is to investigate the relations between the equational style
of dependently-typed functional programming as found in Agda to the
proof-theoretical description of intuitionistic logic given in the sequent
calculus. In particular, we claim that a \emph{focused} sequent calculus,
akin to the \textbf{{LJF}}\xspace system of Liang and Miller
\cite{liang:miller:09:focpol}, offers a logical foundation of choice for the
development of a practical dependently-typed language. We intend to support this
claim by showing how the equational syntax of Agda and the internal structure
of its implementation correspond to a computational interpretation of such
a calculus --- for an extended form of intuitionistic logic including
dependencies and (co)induction. As it turns out, the use of left rules
rather than eliminations for \emph{positive} connectives such as disjunction,
in sequent calculus, yields a simpler syntax. In general, beyond the use
of \emph{spines} in applications, as in \textbf{{LJT}}\xspace \cite{herbelin:94:chseq} and
quite common in the implementation of functional programming languages or
proof assistants, the structure of the sequent calculus is much closer to
the equational style of programming than natural deduction, the standard
formalism in which type theory is usually expressed \cite{martinloef:84:itt}.
Using~a focused system rather than a plain sequent calculus based on \textbf{{LJ}}\xspace
provides a stronger structure, and emphasizes the importance of
\emph{polarities}, already observed in type theory
\cite{abel:pientka:thibodeau:setzer:14:copat}.
Beyond the definition of a logical foundation for a functional language in
equational style, giving a proof-theoretical explanation for the way Agda
is implemented requires accommodating in the sequent calculus both dependent
types and a notion of inductive definition. This is not an easy task, although
there has been some work on dependent types in the sequent calculus
\cite{dyckhoff:lengrand:mckinna:11:focpts} and there is a number of
approaches to inductive definitions in proof theory, including focused
systems \cite{baelde:12:mumall}. For example, the system found in
\cite{dyckhoff:lengrand:mckinna:11:focpts} is based on \textbf{{LJT}}\xspace but is limited
to $\Pi$ and does not support $\Sigma$, while \cite{dyckhoff:pinto:98:seqdep}
has both, but requires an intricate mixture of natural deduction and sequent
calculus to handle $\Sigma$. Induction is even more complex
to handle, since there are several approaches, including
definitions \cite{schroeder-heister:93:defr} or direct least and
greatest fixpoints as found in \textbf{{$\mu$MALL}}\xspace \cite{baelde:12:mumall} and
\textbf{{$\mu$LJ}}\xspace \cite{baelde:phd}. From the viewpoint of proof-theory, the least
fixpoint operator $\mu$ seems to be well-suited, as it embodies
the essence of induction, while the greatest fixpoint $\nu$ allows one to
represent coinduction. However, these operators are not used the same way
as inductive definitions found in Agda or other languages or proof assistants
--- they seem more primitive, but the encoding of usual constructs in terms
of fixpoints is not obvious. Even more complicated is the question of using
fixpoints in the presence of dependent types, and this has only been studied
from the type-theoretic viewpoint in complex systems such as the \emph{Calculus
of Inductive Constructions} \cite{coquand:paulin:88:cic}. In the end, what
we would like to obtain is a proof-theoretical understanding of the
equational style of dependent and (co)inductive programming, related to the
goals of the Epigram project. In particular, we consider that the sequent
calculus, with its use of left rules, provides access to the
\emph{``left''} of equations in a sense similar to what is
described in \cite{mcbride:mckinna:04:left}.
Here, we will describe the fundamental ideas for using a variant of \textbf{{LJF}}\xspace
as the basis for the design of a dependently-typed programming language.
We start in Section \ref{sec:focpsc} by considering a propositional system
and show how the shape of sequent calculus rules allows one to type terms in
equational style.~This is made even more obvious by the use of patterns
in the binding structure of the calculus. Then, in Section \ref{sec:depind}
we discuss the extension of this system to support dependent types and
induction, problems related to patterns in this setting, as well as the
question of which proof-theoretical approach to induction and coinduction
is better suited for use in such a language. Finally, we conclude with
a review of some research problems opened by this investigation, and
an evaluation of the possible practical applications to languages and
proof assistants.
\section{Focusing and Polarities in the Sequent Calculus}
\label{sec:focpsc}
We start our investigation with a propositional intuitionistic system presented
as a focused sequent calculus. It is a variant of \textbf{{LJF}}\xspace \cite{liang:miller:09:focpol}
to which we assign a term language extending the $\lbar$-calculus of
Herbelin~\cite{herbelin:94:chseq}. Unlike the calculus based on \textbf{{LJT}}\xspace, this system
has positive disjunctions and conjunctions $\lor$ and $\times$, but it has
no positive atoms. We use the following grammar of formulas:
$$N,M ~\grdef~ a \,\:\mid\:\, \nfy P \,\:\mid\:\, P \imp N \,\:\mid\:\, N \land M
\qquad\qquad
P\,\!,Q ~\grdef~ \pfy N \,\:\mid\:\, P \lor Q \,\:\mid\:\, P \times Q$$
where $\nfy$ and $\pfy$ are called \emph{polarity shifts} and are meant to
maintain an explicit distinction between the two categories of formulas,
negatives and positives. This is not absolutely necessary, but it clarifies
the definition of a focused system by linking the \emph{focus} and \emph{blur}
rules to actual connectives. Note that this was also used in the presentation
of a computational interpretation of the full \textbf{{LJF}}\xspace system
\cite{brockn:guenot:gustafsson:15:ljfoc}.
\newpage
\begin{figure}[t]
\centerline{
$\begin{array}{|@{\quad}c@{\quad~}|}
\hline \\
\begin{array}{@{~}c@{~}}
\fseqaxrule{}{\Psi,[N]}{\trm{\els}:N} \qqquad
\irule{}{\gseq{\Psi \:\mid\: \cdot}{\trm{\don d}:\nfy P}}
{\fseq{\Psi}{\trm{d}:[P]}} \qqquad
\irule{}{\fseq{\Psi}{\trm{\nod t}:[\pfy N]}}
{\gseq{\Psi \:\mid\: \cdot}{\trm{t}:N}} \\
\\
\irule{}{\gseq{\Psi,\trm{x}:\pfy N \:\mid\: \cdot}{\trm{x\;k}:M}}
{\fseq{\Psi,\trm{x}:\pfy N,[N]}{\trm{k}:M}} \qquad
\irule{}{\gseq{\Psi \:\mid\: \Gamma,\trm{x}:\pfy N}{\trm{t}:M}}
{\gseq{\Psi,\trm{x}:\pfy N \:\mid\: \Gamma}{\trm{t}:M}} \qquad
\irule{}{\fseq{\Psi,[\nfy P]}{\trm{\kappa p.t}:N}}
{\gseq{\Psi \:\mid\: \trm{p}:P}{\trm{t}:N}} \\
\\
\begin{array}{c@{\qquad}c}
\gseqrule{}{\Psi \:\mid\: \Gamma}{\trm{\lambda p.\,t}:P \imp N}
{\Psi \:\mid\: \Gamma,\trm{p}:P}{\trm{t}:N} &
\fseqrule{}{\Psi,[N \land\, M]}{\trm{\prl k}:L}
{\Psi,[N]}{\trm{k}:L} \quad
\fseqrule{}{\Psi,[N \land\, M]}{\trm{\prr k}:L}
{\Psi,[M]}{\trm{k}:L} \\
\\
\iruule{}{\fseq{\Psi,[P \imp N]}{\trm{d::k}:M}}
{\fseq{\Psi}{\trm{d}:[P]} \quad}{\fseq{\Psi,[N]}{\trm{k}:M}} &
\gseqruule{}{\Psi \:\mid\: \Gamma}{\trm{\pr{t,u\,}}:N \land\, M}
{\Psi \:\mid\: \Gamma}{\trm{t}:N \quad}
{\Psi \:\mid\: \Gamma}{\trm{u}:M} \\
\end{array} \\
\\
\begin{array}{c@{\qquad}c}
\fseqruule{}{\Gamma}{\trm{(d,e)}:[P \times\, Q]}
{\Gamma}{\trm{d}:[P] \quad}{\Gamma}{\trm{e}:[Q]} &
\fseqrule{}{\Gamma}{\trm{\inl d}:[P \,\lor\, Q]}
{\Gamma}{\trm{d}:[P]} \quad
\fseqrule{}{\Gamma}{\trm{\inr d}:[P \,\lor\, Q]}
{\Gamma}{\trm{d}:[Q]} \\
\\
\gseqrule{}{\Psi \:\mid\: \Gamma,\trm{(p,q)}:P \times\, Q}{\trm{t}:N}
{\Psi \:\mid\: \Gamma,\trm{p}:P,\trm{q}:Q}{\trm{t}:N} &
\gseqruule{}{\Psi \:\mid\: \Gamma,\trm{\spt{x}{p\!}{q}}:P \,\lor\, Q}
{\trm{\spt{x}{t}{u}}:N}
{\Psi \:\mid\: \Gamma,\trm{p}:P}{\trm{t}:N \quad}
{\Psi \:\mid\: \Gamma,\trm{q}:Q}{\trm{u}:N} \\
\end{array} \\
\end{array} \\
\\
\hline
\end{array}$}
\caption{Typing rules for a pattern-based $\lambda$-calculus based on $\lbar$}
\label{figproplam}
\end{figure}
The rules we use in this system are shown in Figure \ref{figproplam}, where
the term assignment is indicated in red and several turnstiles are used to
distinguish an inversion phase $\vdash$ from a focused phase $\vDash$. In this
syntax, brackets are used to pinpoint the precise formula under focus. The
extended $\lambda$-calculus we use to represent proofs is based on the
following grammar:
$$\begin{array}{r@{~\grdef~}c@{~\:\mid\:~}c@{~\:\mid\:~}l}
t,u & \don d & \lambda p.t & x\;k \,\:\mid\:\, \pr{t,u}
\,\:\mid\:\, \spt{x}{t}{u} \\[0.1em]
p,q & x & (p,q) & \spt{x}{p}{q} \\[0.1em]
d,e & \nod t & (d,e) & \inl d \;\:\mid\:\; \inr d \\[0.1em]
k,m & \els & t::k & \hspace*{0.5pt}\prl k \;\:\mid\:\; \prr k \,\:\mid\:\, \kappa p.t \\
\end{array}$$
where $t$ denotes a \emph{term}, $p$ a binding \emph{pattern}, $d$ a \emph{data}
structure and $k$ an application \emph{context}. In terms of programming, terms
describe computation, mostly by means of functions, while data structures
implement pairs and constructors. Note that computations can use \emph{case
splittings} $\spt{x}{t}{u}$ to choose between the subterms $t$ or $u$ depending
on the contents of the data bound to $x$. The use of patterns rather than plain
variables to annotate formulas in the context of typing judgement is taken
from \cite{cerrito:kesner:99:patcut} and allows us to express more directly the
equational style found in Agda. For example, we could write:
$$\begin{array}{l}
\mathtt{f~:~(\mathbb{N}\;\times\;\mathbb{N})~\uplus~\mathbb{N}~\imp~\mathbb{N}} \\
\mathtt{f~(inl~(x,y))~=~x~+~y} \\
\mathtt{f~(inr~z)~=~z}
\end{array}$$
to define a function $f$ that uses pattern-matching on its argument and
computes the result based on the components of the data structure it
received. Such a function can be written in our calculus as the following
term: $\lambda \spt{w}{(x,y)}{z}.
\spt{w}{\mathrm{add}\;((x\;\els)::(y\;\els)::\els)}{z\;\els}$,
where $\mathrm{add}$ is the name of the addition function. This makes
the compilation of the code written above to the adequate representation
in our calculus relatively easy, since different parts of a definition
can be aggregated into a term with a pattern and a case splitting. This
is very much related to the question of compiling pattern-matching into
a specific \emph{splitting tree} where case constructs are used
\cite{augustsson:85:compat}.
The idea of the logical approach is that \emph{cut elimination} in this
system yields a reduction system implementing the dynamics of computation
in the corresponding calculus. In such a focused calculus, a number of
cut rules are needed to complete the proof of completeness of the cut-free
fragment, but only two of them really need to be considered as rules ---
the other cuts can simply be stated as principles, and their reduction will
correspond to a big step of computation. These two rules are:
$$\iruule{}{\gseq{\Psi \:\mid\: \Gamma}{\trm{p=d \din t}:N}}
{\fseq{\Psi}{\trm{d}:[P]} \quad}
{\gseq{\Psi \:\mid\: \Gamma,\trm{p}:P}{\trm{t}:N}}
\qquad\quad
\iruule{}{\gseq{\Psi \:\mid\: \Gamma}{\trm{t\;k}:M}}
{\gseq{\Psi \:\mid\: \Gamma}{\trm{t}:N} \quad}
{\fseq{\Psi,[N]}{\trm{k}:M}}$$
the first one being the binding of a data structure to a matching
pattern, and the second a simple application of a term to a list of
arguments. The latter is already part of the \textbf{{LJT}}\xspace system
\cite{herbelin:94:chseq}, but the former is specific to \textbf{{LJF}}\xspace in the sense
that it appears only when formulas can be focused on the right of a sequent.
The main reduction rule extracted from cut elimination is the $\lbar$
variant of $\beta$-reduction:
$$(\lambda p.t)\;(d :: k) ~~\rdc~~ (p=d \din t)\;k$$
but there are a number of other reduction rules generated by the use of
other connectives than implication. In particular, conjunction yields a
form of pairing where a term $\pr{t,u}$ has to be applied to a list
$\prl k$ to reduce to $t\;k$. The binding cut is simpler in a certain
sense, since its reduction corresponds to a decomposition of the data
structure $d$ according to the shape of the pattern $p$, and a simple
substitution when $p$ is just a variable. Moreover, other cuts encountered
during reduction usually amount to a form of substitution, except for the
one, already present in \textbf{{LJT}}\xspace, that yields lists concatenation in the argument
of an application.
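These reductions can be prototyped directly. The Python sketch below (a naive environment-based evaluator of ours, not the full cut-elimination procedure) iterates the step $(\lambda p.t)\;(d :: k) \rdc (p=d \din t)\;k$, with patterns that are either variables or pairs:

```python
def bind(pattern, data, env):
    """The binding cut p = d in t: decompose data along the pattern shape."""
    if isinstance(pattern, str):          # a variable pattern
        env[pattern] = data
    else:                                 # a pair pattern (p, q) against (d, e)
        bind(pattern[0], data[0], env)
        bind(pattern[1], data[1], env)
    return env

def apply_spine(term, spine, env=None):
    """Iterate (lam p. t) (d :: k)  ~>  (p = d in t) k until the spine is empty."""
    env = dict(env or {})
    while spine:
        tag, p, body = term               # term must be ('lam', pattern, body)
        assert tag == 'lam'
        bind(p, spine[0], env)
        term, spine = body, spine[1:]
    if term[0] == 'var':
        return env[term[1]]
    return term

# (lam (x, y). x) applied to the data structure (3, 4) yields 3:
assert apply_spine(('lam', ('x', 'y'), ('var', 'x')), [(3, 4)]) == 3
# Curried application along a spine: (lam x. lam y. y) 1 2 yields 2.
assert apply_spine(('lam', 'x', ('lam', 'y', ('var', 'y'))), [1, 2]) == 2
```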
Note that the $\don d$ construct is present in the internal language of Agda,
but the constructs $\nod t$ and $\kappa p.t$ are not, although they can be
obtained indirectly using a cut. While $\nod t$ should simply be understood
as a \emph{thunk}, which is a term made into data, the list $\kappa p.t$
is slightly more complex. This construct, already present in
\cite{barendregt:ghilezan:00:lamndseq}, is more a \emph{context} than a
list in the sense that it stops the application of a term to $\kappa p.t$
and enforces the execution of $t$, where the original term applied is bound
to $p$. This can be understood by considering the reduction extracted from
cut elimination:
$$(\don d)\;(\kappa p.t) ~~\rdc~~ p=d \din t$$
Finally, note that we could have an explicit contraction rule in the system,
that would appear in terms under the form of a pattern $p \!\pcp q$ indicating
that $p$ and $q$ will be the patterns associated to two copies of the same
assumption $P$. The associated typing rule is:
$$\gseqrule{}{\Psi \:\mid\: \Gamma,\trm{p \!\pcp q}:P}{\trm{t}:N}
{\Psi \:\mid\: \Gamma,\trm{p}:P,\trm{q}:P}{\trm{t}:N}$$
and it is reminiscent of the pattern using the same syntax in Haskell ---
which is meant to exist in Agda as well, but this not yet implemented. However,
in Haskell, this is restricted to the form $x \pcp p$ so that it can only
serve to name an assumption before decomposing it, and we could allow for
such a use by avoiding maximal inversion, which is not strictly necessary
in a focused system \cite{brockn:guenot:gustafsson:15:ljfoc}. This rule is
not necessary for the completeness of the calculus, and there are other ways
to obtain the same result. Of course, in a very similar way, the pattern
\raisebox{0.1em}{$\texttt{\_}$} can be associated to the weakening rule,
also admissible.
\newpage
\section{Adding Dependent Types and Induction}
\label{sec:depind}
\begin{figure}[t]
\centerline{
$\begin{array}{|@{\quad}c@{\quad~}|}
\hline \\
\begin{array}{@{}c@{}}
\begin{array}{@{}c@{~\quad}c@{}}
\fseqaxrule{}{\Psi,[N]}{\trm{\els}:N} \qqquad
\irule{}{\gseq{\Psi \:\mid\: \cdot}{\trm{\don d}:\nfy P}}
{\fseq{\Psi}{\trm{d}:[P]}} \qqquad
\irule{}{\fseq{\Psi}{\trm{\nod t}:[\pfy N]}}
{\gseq{\Psi \:\mid\: \cdot}{\trm{t}:N}} &
\fseqrule{}{\Psi,[N \land\, M]}{\trm{\prl k}:L}
{\Psi,[N]}{\trm{k}:L} \\
\\
\irule{}{\gseq{\Psi,\trm{x}:\pfy N \:\mid\: \cdot}{\trm{x\;k}:M}}
{\fseq{\Psi,\trm{x}:\pfy N,[N]}{\trm{k}:M}} \quad~
\irule{}{\gseq{\Psi \:\mid\: \Gamma,\trm{x}:\pfy N}{\trm{t}:M}}
{\gseq{\Psi,\trm{x}:\pfy N \:\mid\: \Gamma}{\trm{t}:M}} \quad~
\irule{}{\fseq{\Psi,[\nfy P]}{\trm{\kappa x.t}:N}}
{\gseq{\Psi \:\mid\: \trm{x}:P}{\trm{t}:N}} &
\fseqrule{}{\Psi,[N \land\, M]}{\trm{\prr k}:L}
{\Psi,[M]}{\trm{k}:L} \\
\end{array} \\
\\
\begin{array}{@{\!}c@{\!}}
\gseqrule{}{\Psi \:\mid\: \Gamma}{\trm{\lambda x.\,t}:\Pi(x:P). N}
{\Psi \:\mid\: \Gamma,\trm{x}:P}{\trm{t}:N} ~~
\iruule{}{\fseq{\Psi,[\Pi(x:P).N]}{\trm{d::k}:M}}
{\fseq{\Psi}{\trm{d}:[P]} \quad}{\fseq{\Psi,[N\msub{d/x}]}{\trm{k}:M}} ~~
\gseqruule{}{\Psi \:\mid\: \Gamma}{\trm{\pr{t,u\,}}:N \land\, M}
{\Psi \:\mid\: \Gamma}{\trm{t}:N \quad}
{\Psi \:\mid\: \Gamma}{\trm{u}:M} \\
\end{array} \\
\\
\begin{array}{@{}c@{\quad}c@{}}
\gseqrule{}{\Psi \:\mid\: \Gamma,\trm{x}:\Sigma(y:P).Q}{\trm{y,z=x \din t}:N}
{\Psi \:\mid\: \Gamma,\trm{y}:P,\trm{z}:Q}{\trm{t}:N\msub{(y,z)/x}} \quad
\fseqruule{}{\Gamma}{\trm{(d,e)}:[\Sigma(x:P). Q]}
{\Gamma}{\trm{d}:[P] \quad}{\Gamma}{\trm{e}:[Q\msub{d/x}]} &
\fseqrule{}{\Gamma}{\trm{\inl d}:[P \,\lor\, Q]}
{\Gamma}{\trm{d}:[P]} \\
\\
\gseqruule{}{\Psi \:\mid\: \Gamma,\trm{x}:P \,\lor\, Q}
{\trm{\spt{x}{y.t}{z.u}}:N}
{\Psi \:\mid\: \Gamma,\trm{y}:P}{\trm{t}:N\msub{\inl y/x} \quad}
{\Psi \:\mid\: \Gamma,\trm{z}:Q}{\trm{u}:N\msub{\inr z/x}} &
\fseqrule{}{\Gamma}{\trm{\inr d}:[P \,\lor\, Q]}
{\Gamma}{\trm{d}:[Q]} \\
\end{array} \vspace*{0.5em} \\
\dotfill \\[-0.3em]
\\
\iruule{}{\gseq{\Psi \:\mid\: \Gamma,\Delta\msub{d/x}}{\trm{x=d \din t}:B\msub{d/x}}}
{\fseq{\Psi}{\trm{d}:[A]} \quad}
{\gseq{\Psi \:\mid\: \Gamma,\trm{x}:A,\Delta}{\trm{t}:B}}
\qquad\quad
\iruule{}{\gseq{\Psi \:\mid\: \Gamma}{\trm{t\;k}:B}}
{\gseq{\Psi \:\mid\: \Gamma}{\trm{t}:A} \quad}
{\fseq{\Psi,[A]}{\trm{k}:B}}\\
\end{array} \\
\\
\hline
\end{array}$}
\caption{Typing rules for a dependent $\lambda$-calculus based on $\lbar$}
\label{figdeplam}
\end{figure}
We continue our investigation by adapting our variant of \textbf{{LJF}}\xspace to dependent
types, but this unveils some issues that we will now discuss. One problem
we immediately encounter is the adaptation of the pattern machinery to the
dependent setting, mostly due to the substitutions involved in the types,
where patterns should have appeared. For the dependent implication $\Pi(x:P).N$,
using a pattern $p$ rather than a binding variable $x$ yields the question of
substituting a data structure $d$ for $p$: this becomes a much more complicated
operation than the traditional substitution. Moreover, keeping the patterns
and variables synchronised between their use in terms and in types is a
challenging task, that would probably require heavy syntactic mechanisms.
For this reason, the system shown above in Figure \ref{figdeplam} has no patterns,
but rather falls back to the traditional style of typing using only variables to
label assumptions. The language used in this variant can still be related to the
equational approach to functional programming, but the translation between
equations and terms is more involved.
The generalisation of the implication into the dependent product
$\Pi(x:P).N$ is a straightforward operation, and the rules we use are
essentially the ones found in \cite{dyckhoff:lengrand:mckinna:11:focpts} ---
except that it involves a data structure, corresponding to a focus on the
right-hand side of a sequent. Now, the case of $\Sigma$ is more complicated,
as it is \textit{a priori} unclear whether it should be obtained as a
generalisation of the negative conjunction $\land$ or of the positive
product $\times$ and both solutions might even be possible. But a generalisation
of the negative conjunction seems to be problematic when it comes to the
specification of the second left rule, typing the $\prr\!\!\!\!$ operation.
Indeed, when focusing on $\Sigma(x:N).M$ we would need to plug a term of type
$N$ for $x$ in $M$, but this would require to maintain some \emph{``natural
deduction version''} of the term currently being transformed, and to plug
at adequate locations some translation between natural deduction style and
our sequent calculus syntax --- as done in \cite{dyckhoff:pinto:98:seqdep}.
This is quite unsatisfactory and will not help us build a proper understanding
of dependent types in a pure sequent calculus setting. The solution we adopt
here is to obtain $\Sigma(x:P).Q$ as a generalisation of the positive product
$\times$ and simply update the corresponding rules as shown in Figure
\ref{figdeplam}. The left rule is simple to define in this case, because
the decomposition of the $\Sigma$ in the context preserves the binding of
$y$ in the type $Q$.
There is a particularly interesting benefit to the use of the sequent calculus
to handle splitting as done in the left $\Sigma$ rule. Consider the elimination
rule in natural deduction:
$$\iruuuule{$\vee$e}
{\gseq{\Gamma}{\texttt{match}~{[x.C]}~(t\,;\,y.u\,;\,z.v):C\msub{t/x}}}
{\gseq{\Gamma,x:A\vee B}{C : \typ}}
{\gseq{\Gamma}{t : A \vee B}}
{\gseq{\Gamma,y:A}{u : C\msub{\inl\!\! y/x}}}
{\gseq{\Gamma,z:B}{v : C\msub{\inr\! z/x}}}$$
and observe that it is necessary to be explicit about the return type, since
obtaining $C$ from $C\msub{t/x}$ is a complicated process, that \emph{reverses}
a substitution. This makes the term syntax heavy, while the problem is avoided
in the sequent calculus, where no substitution is needed in the conclusion.
Note that in Coq, the natural deduction style is used for the proof language,
but tactics are written in a style that is much closer to the sequent calculus
--- as this is the framework of choice for proof search --- so that tactics
have to perform some kind of translation between the two formalisms.
At the level of dependent types, there is a number of tricks used in the
Agda implementation that diverge from the proof-theoretical viewpoint. For
example, substitutions in types are treated in a complex way and may be
grouped together. Although some of the design choices can be justified by
a similarity to the focused sequent calculus, there is probably a number of
implementation techniques that have no proof-theoretical foundation.
Moreover, we have chosen here a particularly precise framework where
formulas are explicitly polarised, but in practice types in a programming
language should not always require these annotations: the question of the
presence of specific terms corresponding to shifts is therefore not obvious,
as it depends if some interesting programming constructs require their
presence or their absence. One can observe, for example, that in the system
proposed here, dependencies are subject to the presence of delays, because
of the contraction present in the left focus rule and of the treatment of
names in the $\kappa x.t$ operation.
The problem of generalising the equational style of programming associated
to the focused sequent calculus at the propositional level to the level of
dependent types is parametrised by a choice: using patterns seems to require
a complex tracking mechanism, but provides a relatively direct logical
representation of equations, while using simple variables leads to a translation
overhead. Notice however that one could think of an implementation based on
variables in which equations are easily obtained, since the language would
already be expressed in the style of the sequent calculus --- this is the
approach suggested by Epigram, where equations are meant to clarify the
meaning of programs but are not their internal representation. But we now
turn to the most challenging task of our whole enterprise: the accommodation
of induction in the framework of a focused sequent calculus in a form that
can be exploited to design declarative programs.
Induction can be expressed in Agda in a concise manner and enjoys the
benefits of the equational presentation. Consider for example the
following inductive scheme for natural numbers:
$$\begin{array}{l}
\mathtt{ind_\mathbb{N}~:~P~zero~\imp~(\Pi(x:
\mathbb{N}).~P~x~\imp~P~(suc~x))~\imp~\Pi(n:\mathbb{N}).~P~n}\\
\mathtt{ind_\mathbb{N}~base~ih~zero~=~base} \\
\mathtt{ind_\mathbb{N}~base~ih~(suc~n)~=~ih~n~(ind_\mathbb{N}~base~ih~n)}
\end{array}$$
where the code essentially relies on the matching of a natural
number, that can be either zero or the successor of another number.
It is not obvious how to see through this program and select a particular
approach to induction that would be a good candidate for a proof-theoretical
description. The natural candidate for a representation of induction in
the sequent calculus would be the $\mu$ operator as studied in
\cite{baelde:phd} in the setting of intuitionistic logic. The unfocused
rules for this operator would be, from a purely logical viewpoint:
$$\gseqrule{}{\Gamma}{\mu a.B}{\Gamma}{B\msub{\mu a.B/a}}
\qquad\quad
\gseqrule{}{\Gamma,\mu a.B}{C}{B\msub{C/a}}{C}$$
but the presence of fixpoints has consequences for cut elimination,
as it prevents some cuts to be reduced. The usual technique applied
to avoid this problem is to build the cut rule into the left rule
for $\mu$ and to consider the result as cut free. This way, all the
cuts that cannot be reduced further are explicitly attached to the
blocking rule instance. However, the use of these rules in terms
of computation is not obvious to specify, in part because of the
complexity of the associated cut reduction, that involves the
creation of several other cuts and appeals to the functoriality
of the body $B$ of any fixpoint $\mu a.B$ --- ensured by a positivity
condition. In addition, these rules seem to interact poorly with
dependent types, as dealing with fixpoints will require a complex
handling of terms appearing inside types. It is unclear as of now
if fixpoints as expressed by $\mu$ --- and $\nu$ in the case of
induction --- can fit our scheme of explaining the implementation
of a language such as Agda, but at the same time there is no
obvious \emph{proof-theoretical} approach that accounts in a
straightforward way for the pervasive nature of inductive definitions
in the internal language of Agda, where they are handled by expansion
of names with the body of the definition.
\section{Conclusion and Future Work}
\label{sec:conc}
As we have seen here, the $\lbar$-calculus proposed by Herbelin as an
interpretation of the \textbf{{LJT}}\xspace focused sequent calculus can be extended beyond
its original scope to include positive connectives, leading to a full-fledged
intuitionistic system where we can focus on the right-hand side of sequents
to decompose positives. The language we obtain is well-suited to represent
programs written in the kind of equational style found in Haskell or Agda;
the relation to equations can be made even tighter by using patterns as
labels for assumptions in the type system. This opens up the possibility to
select focused sequent calculus as a logical framework of choice for the
implementation of such languages --- as evidenced by the current state of
the implementation of Agda, containing many elements that can be explained
as sequent calculus constructs. The benefit could not only be a simplification
of such an implementation, but possibly an improvement in terms of efficiency
if advanced techniques from proof theory are transferred and made practical.
Moreover, one of the strengths of the logical approach is that generalisations
and extensions of all kinds are usually made simpler by the strong principles
at work: any kind of progress made on the side of proof theory could translate
into more expressive languages using the clear equational style of Haskell
and Agda --- be it modalities, linearity or many other elements studied
in the field of computational logic.
The generalisation of this idea to handle dependent types has already
been partially investigated, but some questions are left unresolved as to
the specific rules used in such a system, and the possibility of making
the system more equational by exploiting patterns. But the most difficult task
at hand is the explanation of the various treatments of induction available
in languages and proof assistants in terms of the sequent calculus. As
observed previously \cite{abel:pientka:thibodeau:setzer:14:copat}, the notion
of polarity seems to be important in the understanding of this question, but
unfortunately the proper polarised handling of fixpoints in proof theory has
yet to be found --- a number of choices are left open when it comes to the
definition of a focused system using fixpoints \cite{baelde:12:mumall}.
Note that our enterprise also raises the question of the treatment of
the identity type in proof theory, as it makes dependent pattern matching
admit the axiom $\rnm{K}$. This axiom is undesirable in homotopy type theory,
and thus the restriction of dependent pattern matching has been studied
\cite{cockx:devriese:piessens:14:nok}. But this was achieved by restricting
unification in the splitting rules, and as Agda has no explicit calculus for
splitting, this was somewhat hidden. The framework we want to develop
provides a calculus and could thus help making this restriction simpler.
\vspace*{0.3em}
{\bf Acknowledgements}. This work was funded by the grant number 10-092309
from the Danish Council for Strategic Research to the \emph{Demtech} project.
\vspace*{-0.6em}
\begin{raggedright}
\bibliographystyle{eptcs/eptcs.bst}
\section{Introduction}
\label{intro}
The Perseus star forming region is one of the largest associations of young stars within a distance of 500\,pc. This complex encompasses two well-studied young clusters, IC\,348 and NGC\,1333, and a population of distributed young stars, all embedded in the Perseus molecular cloud at distances of around 300\,pc \citep{bally_2008}. The three dimensional structure of the region is still poorly understood. Thanks to \textit{Gaia} DR2, the distances to IC\,348 and NGC\,1333 are now much better constrained to 320$\pm$26\,pc and 293$\pm$22\,pc \citep{ortiz_leon_2018}.
Previous studies of the Perseus complex focus mostly on these two famous clusters, and in some cases, smaller clouds in between and around them. The canonical overview paper by \cite{bally_2008} discusses the two main clusters, the Barnard clouds B1 and B5 and the clouds L1448 and L1445, along with some individual sources, all located in an area approximately between and around the clusters (see their Figure 6). The Spitzer C2D survey, an extensive mid-infrared view of the region, covers predominantly the same regions \citep{jorgensen_2006}. In the submm, the James Clerk Maxwell Telescope (JCMT) survey by \cite{hatchell_2013} observed a restricted region around NGC\,1333, to construct dust temperature maps using Sub-millimetre Common-User Bolometer Array 2 (SCUBA-2) data.
The most recent census of the stellar/substellar population was provided by \cite{luhman_2016}, hereafter referred to as L16. They compile a list of members for both IC\,348 and NGC\,1333 using optical and near-infrared spectra. Based on various indicators of youth, IC\,348 is thought to be slightly older than NGC\,1333, with ages of 2-6\,Myr and 1\,Myr, respectively (L16), in agreement with the literature \citep[e.g.][]{scholz_2012,stelzer_2012}.
So far the areas beyond these inner parts of the complex have not been studied systematically at multiple wavelengths and with sufficient depth to identify young low-mass stars. In this paper we present the first results of a \textit{Gaia} DR2-based study of the entire Perseus star forming region. In particular, we report the discovery and characterisation of five new groups of young stars with ages 1-5\,Myr in this region and a few dozen to a few hundred potential members, which have not previously been reported in the literature.
We divide this paper as follows: In Section \ref{data} we introduce the data we use for our analysis, in Section \ref{selection} we select candidate members for the entire cloud of Perseus, in Section \ref{new_clusters} we present our selection method of the five new groups and the final lists of their candidate members. In Section \ref{youth} we demonstrate that these are groups of young stars and point out their relative age sequence. We discuss our results in Section \ref{discussion} and summarise in Section \ref{summary}.
\section{Data used in this paper}
\label{data}
Launched in 2013, \textit{Gaia} is considered the astrometric successor of \textit{Hipparcos} \citep{gaia_main_2016}. The second data release, \textit{Gaia} DR2, has enabled improvements in a large variety of astrophysical fields. The typical position, proper motion, and parallax uncertainties in DR2 are a few milli-arcsec for the faintest objects ($G> 21$ mag) and reduce to less than 0.1\,mas for bright stars \citep{gaia_dr2_2018}. The revolutionary precision in astrometry, in combination with the depth, allows us for the first time to study the three-dimensional distribution of the low-mass members of nearby star forming regions.
In order to identify objects with circumstellar discs we also use data from the Wide-field Infrared Survey Explorer (\textit{WISE}), observing at 4 bands centered at 3.4, 4.6, 12 and 22 $\mu$m and covering the entire sky \citep{wright_2010}. We use data from the AllWISE catalogue and combine it with Two Micron All Sky Survey \textit{2MASS} magnitudes \citep{skrutskie_2006} to search for excess emission in the infrared as an indicator of the presence of a disc.
Finally, we use the large-scale map of the dust emission measured at 353 GHz, as made available by the \textit{Planck} survey \citep{planck_2016}, to study the link between young stars and the presence of cold dust.
\section{Selecting young stars in Perseus}
\label{selection}
We use the \textit{Gaia} DR2 catalogue to identify young stellar objects (YSOs) in the Perseus star forming complex. We limit our survey to a $13\times 13$ degree area encompassing the two main clusters, IC\,348 and NGC\,1333. This survey area includes the entire previously known star forming complex \citep{bally_2008}.
To define the selection criteria in parallax and proper motion, we use the confirmed members of the two known clusters, as listed in the comprehensive photometric and spectroscopic survey paper by L16. These known young stars, as observed by \textit{Gaia}, are constrained in proper motion to $+1 \ldots +10$ mas/yr in right ascension, $\alpha$, and $-15 \ldots -2$ mas/yr in declination, $\delta$. We use these ranges in proper motion as limits for our initial sample.
To confine our initial sample further, we also apply a cut-off in the parallax angle. The likely distance to the Perseus cloud ranges from 294 to 350\,pc according to \cite{zucker_2018}. We intentionally choose a wider range to explore the full depth of the star forming complex. \textit{Gaia} sources at these distances come with high uncertainties in parallax, therefore adopting a wider range ensures that our initial cloud sample is complete, and no real members are discarded. We keep sources in a range between 1.8 and 4.3 mas in parallax which translates to distances between $\sim$230 and 550\,pc. We also impose a limit on the parallax error of $<0.7$ mas which is the typical error at G = 20 mags (see Table 3 in \cite{gaia_dr2_2018}).
Finally, we apply a conservative cut in the \textit{Gaia} (G-RP,G) colour-magnitude space (see Fig.~\ref{fig:age_cut}). To define this limit we use the 10\,Myr isochrone calculated by \citet{marigo_2013}. The cutoff implies that we keep all objects with an estimated age younger than 10\,Myr. The 10\,Myr isochrone is fitted here with a 5-degree polynomial.
The sample selected with these criteria in proper motion, parallax and age comprises in total 6202 objects and is hereafter called the {\it Gaia sample}. This is designed to be an inclusive sample, but it will contain contaminating stars in foreground and background of the star forming regions. Most of this population will require additional validation to be confirmed as YSOs and/or as members of Perseus. The full list of criteria used to select the {\it Gaia sample} is summarised in Table \ref{tab:perseus_conditions}.
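The astrometric part of this selection amounts to simple interval cuts; the following NumPy sketch illustrates the idea (the argument names are placeholders, not the actual \textit{Gaia} archive column names):

```python
import numpy as np

def gaia_sample_mask(pmra, pmdec, parallax, parallax_error):
    """Boolean mask implementing the astrometric cuts of Table 1.

    pmra, pmdec in mas/yr; parallax, parallax_error in mas.
    The colour-magnitude (age < 10 Myr) cut is applied separately.
    """
    return ((pmra > 1.0) & (pmra < 10.0) &
            (pmdec > -15.0) & (pmdec < -2.0) &
            (parallax > 1.8) & (parallax < 4.3) &
            (parallax_error < 0.7))

# Toy data: the second source fails the parallax cut.
pmra = np.array([5.0, 5.0]); pmdec = np.array([-8.0, -8.0])
plx = np.array([3.3, 1.0]);  plx_err = np.array([0.2, 0.2])
mask = gaia_sample_mask(pmra, pmdec, plx, plx_err)
```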
In Fig.~\ref{fig:gaia_sample} we show a map of the 6202 sources in the {\it Gaia sample}. The two known clusters NGC\,1333 and IC\,348 stand out as obvious dense populations around the central coordinates of ($52.3,31.3$) deg and ($56.1,32.2$) deg, respectively. Our basic search recovers 50\% and 78\% of the members listed in L16 for NGC\,1333 and IC\,348, respectively.
The main reason some known members are not recovered is that they are too faint to be detected by \textit{Gaia}.
\begin{table}
\centering
\caption{Perseus \textit{Gaia Sample} Selection Conditions}
\begin{tabular}{l c}
\hline
\hline
Property & Condition \\ [0.5ex]
\hline
Area (sqdeg) & $ 13 \times 13 $ \\
Proper motion $\alpha$ (mas\,yr$^{-1}$) & $1.0<\mu_{\alpha}<10.0$ \\
Proper motion $\delta$ (mas\,yr$^{-1}$) & $-15.0<\mu_{\delta}<-2.0$ \\
Parallax Angle (mas) & $ 1.8<\varpi<4.3$ \\
Approximate Age (Myr) & $\lesssim 10 $ \\
Number of Sources & 6202 \\
\hline
\hline
\end{tabular}
\label{tab:perseus_conditions}
\end{table}
\begin{figure}
\includegraphics[width=\columnwidth]{age_cut_10.png}
\caption{Colour-Magnitude Diagram from \textit{Gaia} photometry for our \textit{Gaia sample} (dark blue). Overplotted is a 10\,Myr isochrone constructed using models from \citet{marigo_2013}. The sources discarded are estimated to be older than 10\,Myr (light blue).}
\label{fig:age_cut}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{position_ellipses.png}
\caption{Spatial distribution of the 6202 sources in the \textit{Gaia sample}, satisfying the conditions of Table \ref{tab:perseus_conditions}. The ellipses show the 3$\sigma$ borders for each of the newly found groups.}
\label{fig:gaia_sample}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{heatmap.png}
\caption{Number density plot of the \textit{Gaia sample}. To construct this plot, we use a box size of $0.5 \times 0.5$ degrees and a stepsize of 0.25\,degrees. The orange labels identify the five new groups.}
\label{fig:density_map_per}
\end{figure}
\section{New groups of young stars}
\label{new_clusters}
\subsection{Identification of new groups}
Visual inspection of Fig.~\ref{fig:gaia_sample} shows several additional groupings of potential YSOs in the {\it Gaia sample}, outside the main clusters IC\,348 and NGC\,1333. We identify five groups, named Autochthe (PST1), Alcaeus (PST2), Mestor (PST3), Electryon (PST4) and Heleus (PST5) after five of the many children of King Perseus in Greek mythology; the designation PST (the last-name initials of the authors) 1 to 5 serves as an alternative identifier. The positions and names of these five groups are indicated in Fig.~\ref{fig:gaia_sample}.
To confirm the visual identification of these groups, we show a colour map of the number density of stars in the {\it Gaia sample} in Fig.~\ref{fig:density_map_per}. This figure was produced by counting the number of stars in each $0.5 \times 0.5$ degree box, while moving the centre of the box by a stepsize of 0.25\,degrees in $\alpha$ and $\delta$. Again, IC\,348 and NGC\,1333 clearly show up as regions of enhanced stellar density. In addition to IC\,348 and NGC\,1333, four of the five new groups of young stars mentioned above are distinctly visible as regions with a high density of stars and are labelled in the figure. Autochthe, located to the west of NGC\,1333, is not clearly distinguished from its neighbouring larger cluster.
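A number-density map of this kind can be produced with a simple sliding box count; a minimal sketch, assuming plain arrays of coordinates in degrees:

```python
import numpy as np

def density_map(ra, dec, box=0.5, step=0.25):
    """Count sources in a box of side `box` degrees, sliding the box
    centre by `step` degrees in both coordinates."""
    ra_c = np.arange(ra.min(), ra.max() + step, step)
    dec_c = np.arange(dec.min(), dec.max() + step, step)
    counts = np.zeros((dec_c.size, ra_c.size), dtype=int)
    for i, dc in enumerate(dec_c):
        for j, rc in enumerate(ra_c):
            counts[i, j] = np.sum((np.abs(ra - rc) < box / 2) &
                                  (np.abs(dec - dc) < box / 2))
    return ra_c, dec_c, counts

# Toy example: a tight clump of 5 sources plus one isolated source.
ra = np.array([50.0] * 5 + [55.0])
dec = np.array([31.0] * 5 + [33.0])
ra_c, dec_c, counts = density_map(ra, dec)
```

Note that neighbouring boxes overlap, so a real source can contribute to several cells; the map is a smoothed counts-in-cells estimate rather than a histogram.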
We determine the peaks of the density enhancements in Fig.~\ref{fig:density_map_per} to define the spatial centers of the new groups.
\subsection{Selection of members in spatial distribution}
\label{position_selection}
In the following, we define preliminary membership lists for the five new groups from the {\it Gaia sample}. Based on Fig.~\ref{fig:density_map_per} the groups have a diameter of around 1.0\,deg, with the exception of Autochthe, which is more compact. We define five square boxes, centered on the adopted central coordinates, with a sidelength chosen to encompass the visible density enhancement. Then we fit a Bivariate Gaussian to the $\alpha$-$\delta$ space of the objects contained in each box, using the tasks \texttt{fit\_bivariate\_normal} from the astroML.stats package in Python and \texttt{\_bivariate\_normal} from astroML.stats.random \citep{VanderPlas_2012}. The Gaussian fit gives updated central coordinates and a measure for the dispersion of the members, $\sigma$. We keep as candidate members the sources that are located within a radius of 3$\sigma$ from the new central coordinates. We choose the 3$\sigma$ level as this corresponds well with the visual clustering in spatial distribution. This threshold presents a good compromise between completeness and avoiding excessive contamination. This step gives us preliminary lists of candidate members for each group. These 3$\sigma$ borders are shown as ellipses in Fig.~\ref{fig:gaia_sample}.
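The same kind of $n\sigma$ selection can be sketched without astroML by fitting the sample mean and covariance and cutting on the Mahalanobis distance (astroML's robust fitter may return slightly different parameters; this is a numpy-only illustration of the idea):

```python
import numpy as np

def sigma_clip_2d(x, y, nsigma=3.0):
    """Keep points within `nsigma` (in Mahalanobis distance) of the
    mean of a bivariate Gaussian fitted via sample mean/covariance."""
    pts = np.column_stack([x, y])
    mu = pts.mean(axis=0)
    cov = np.cov(pts, rowvar=False)       # 2x2 covariance matrix
    inv = np.linalg.inv(cov)
    d = pts - mu
    # quadratic form d^T C^-1 d for every point at once
    maha = np.sqrt(np.einsum('ij,jk,ik->i', d, inv, d))
    return maha < nsigma

# Toy data: 20 clustered points plus one distant outlier.
x = np.array([1.0, -1.0, 1.0, -1.0] * 5 + [20.0])
y = np.array([1.0, 1.0, -1.0, -1.0] * 5 + [20.0])
mask = sigma_clip_2d(x, y)
```

Because the outlier inflates the fitted covariance, this single-pass cut is conservative; an iterated fit (refitting after each rejection) converges to a tighter selection.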
\begin{figure*}
\includegraphics[width=\textwidth]{5panel_pms.png}
\caption{\textit{Gaia} proper motions of the initial samples for each of the 5 new groups. The orange ellipses show the 5$\sigma$ level within which we keep sources as members in each group. In panels (b) and (d) we see two clusterings, the bottom one corresponding to Alcaeus and the top one to Electryon. The two groups overlap in the plane of the sky but are well separated in proper motion and parallax.}
\label{fig:pm_all}
\end{figure*}
\subsection{Selection of members using proper motion and parallax}
\label{pmotion_selection}
When plotted in proper motion space, these previously defined samples are clearly clustered around a well-defined proper motion vector, with a scatter much smaller than that of the {\it Gaia sample} (see Fig.~\ref{fig:pm_all}). This demonstrates that these five groups contain co-moving young stars within the wider Perseus star forming complex and hence constitute kinematically distinct groups.
To further confine the samples for each group, we define a box in proper motion space for each group and fit a Bivariate Gaussian, using the same method as for the spatial distribution. For all groups, we keep as members sources within 5$\sigma$ from the mean proper motion, shown as ellipses in Fig.~\ref{fig:pm_all}. This further confinement gives a new sample for each group.
Next we probe the distribution of parallaxes for each group. In Fig.~\ref{fig:plx_panels} we show the parallax errors against the parallax angles. Recall that all objects in the {\it Gaia sample} satisfy only a very wide parallax selection criterion (see Table \ref{tab:perseus_conditions}). From Fig.~\ref{fig:plx_panels} it is evident that the members of each of the five new groups are centered around a well-defined mean, i.e. they share a common distance. This provides a further argument in favour of these five groups being actual clusterings of young stars, as opposed to accidental density enhancements. Fig.~\ref{fig:plx_panels} also shows that candidate members of each group are mostly contained within 3$\sigma$ from the mean. We discard all sources outside 3$\sigma$ of the mean.
Discarding the few parallax outliers gives us the final number of members for each group. The final sample sizes are 170 for Alcaeus, 302 for Mestor, 329 for Electryon, 85 for Heleus, and 27 for Autochthe. We cross-checked the list for Autochthe with the list of members for NGC\,1333 published by L16, and did not find any sources in common. All objects should be considered candidates prior to spectroscopic confirmation, but given that the final sample for each group shows consistent parallax and proper motion, as well as clustering on the sky, the contamination from foreground and background objects should be low. The selection criteria and sample sizes for all five groups are listed in Table \ref{tab:cluster_conditions}. In Appendix \ref{appendixB}, Tables \ref{tab:autochthe_members}-\ref{tab:heleus_members} list the members of the five groups with their key \textit{Gaia} parameters and their \textit{AllWISE} designation.
\begin{figure*}
\includegraphics[width=\textwidth]{5panel_plx.png}
\caption{Parallax errors versus the parallax of the 5 new groups. Autochthe (PST1) and Alcaeus (PST2) are at parallax of $\sim3.3$ mas corresponding to a distance of around 300\,pc, similar to NGC\,1333. Mestor (PST3), Electryon (PST4) and Heleus (PST5) are all located further away at mean parallaxes of $\sim2.6$ mas ($\sim400$\,pc).}
\label{fig:plx_panels}
\end{figure*}
\begin{table*}
\centering
\caption{Selection criteria and sample properties for the five new groups.}
\label{tab:cluster_conditions}
\begin{tabular}{lccccc}
\hline
\hline
& Autochthe (PST1) & Alcaeus (PST2) & Mestor (PST3) & Electryon (PST4) & Heleus (PST5)\\
\hline
($\alpha$,$\delta$)$_\mathrm{mean}$ (deg) & 51.34, 30.95 & 58.53, 31.98 & 57.91, 35.09 & 60.35, 32.19 & 56.42, 29.83 \\
$\sigma(\alpha$,$\delta)$ (deg) & 0.40 & 1.25 & 1.21 & 1.19 & 0.71 \\
\textit{Initial sample size} & 47 & 718 & 704 & 739 & 184 \\
\hline
($\mu_{\alpha}^*$,$\mu_{\delta}$)$_\mathrm{mean}$ (mas\,yr$^{-1}$) & 7.7, -8.5 & 6.3, -9.8 & 3.3, -4.5 & 3.5, -5.5 & 2.8, -5.3 \\
$\sigma$($\mu_{\alpha}^*$,$\mu_{\delta})$ (mas\,yr$^{-1}$) & 1.1 & 0.89 & 0.73 & 0.80 & 0.68 \\
\textit{Sample size after proper motion cut} & 28 & 175 & 305 & 329 & 86 \\
\hline
$\varpi_\mathrm{mean}$ (mas) & 3.3 & 3.4 & 2.6 & 2.7 & 2.5 \\
$\sigma(\varpi)$ (mas) & 0.33 & 0.48 & 0.26 & 0.34 & 0.32 \\
Mean distance (pc) & 298$\pm24$ & 291$\pm13$ & 395$\pm16$ & 370$\pm15$ & 413$\pm31$ \\
\textit{Final sample size} & 27 & 170 & 302 & 329 & 85 \\
\hline
Number of WISE counterparts & 24 & 161 & 291 & 311 & 82 \\
Percentage of sources with discs & 56\% & 12\% & 23\% & 18\% & 24\%\\
\hline
\hline
\end{tabular}
\end{table*}
\section{Evidence for Youth}
\label{youth}
In this section, we aim to demonstrate that these are indeed groups of young stars, which will lead to an estimate of the ages.
\subsection{Colour-Magnitude Diagrams}
\label{cmds}
In Fig.~\ref{fig:cmds} we present the colour-magnitude diagrams in \textit{Gaia} photometry of the final samples of the five groups including 1\,Myr and 5\,Myr isochrones as calculated from \cite{marigo_2013}, using a solar metallicity, Z $\sim$ 0.0152. We shift the isochrones to the mean distance of each group as given in Table \ref{tab:cluster_conditions}. We note that the distance error will somewhat affect the position of the isochrones; for a distance of $\sim$300\,pc and an error of $\sim$20\,pc the isochrone position along the y-axis is uncertain by $\sim$ 0.3\,mags. We include a reddening vector in the diagrams.
All groups form a clear sequence in colour-magnitude space in line with the expected positions of young stars, confirming both their youth and their coevality. Comparing with the isochrones, most members of Autochthe and a few members of Heleus seem to be very young, with ages of 1\,Myr or less. Alcaeus, Mestor and Electryon show a wider range in age, up to 5\,Myr. In particular, most members of Alcaeus are located around the 5\,Myr isochrone, making Alcaeus the oldest of the five groups.
The masses for the isochrones range from $\sim0.1$ to a few $M_{\odot}$. As can be seen in Fig.~\ref{fig:cmds}, our samples do contain some objects with masses below the low-mass limit of the isochrones. The fact that the isochrones are close to the sequences of young stars indicates low extinction and a small range of extinctions. The only exception is Autochthe, which may be affected by up to 2\,mag of optical extinction.
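The quoted $\sim$0.3\,mag uncertainty follows directly from the distance modulus $m - M = 5\log_{10}(d/10\,\mathrm{pc})$; a quick numerical check:

```python
import math

def distance_modulus(d_pc):
    """Apparent minus absolute magnitude for a distance in parsec."""
    return 5.0 * math.log10(d_pc / 10.0)

# Shifting an isochrone between d = 280 pc and d = 320 pc
# (i.e. 300 +/- 20 pc) moves it vertically by ~0.3 mag.
shift = distance_modulus(320.0) - distance_modulus(280.0)  # ~0.29 mag
```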
\begin{figure*}
\includegraphics[width=\textwidth]{5panel_cmds_shifted_iso.png}
\caption{Colour-Magnitude Diagrams from \textit{Gaia} photometry for our final samples for each group. The red and orange solid lines correspond to isochrones of 1\,Myr and 5\,Myr respectively adopted from \citet{marigo_2013}. The isochrones were shifted to each group's distance as listed in Table \ref{tab:cluster_conditions}. The black arrow corresponds to an extinction of A$_\mathrm{v}$ = 2.0 mag, calculated for G and $G_{RP}$ \textit{Gaia} bands using information from \citet{wang_2019}.}
\label{fig:cmds}
\end{figure*}
\begin{figure}
\includegraphics[width=\columnwidth]{planck_labels_.png}
\caption{A dust emission map at 353\,Ghz from Planck data of the whole Perseus star forming complex \citep{planck_2016}. The samples of young stars in the five groups discussed in this paper, plus the two known clusters, are overplotted.}
\label{fig:planck}
\end{figure}
\subsection{Infrared Excess Emission}
\label{IR_excess}
To test the evolutionary state of the candidate members we also search for infrared excess. We use infrared data from the WISE mission since none of the five groups are covered by Spitzer \citep{jorgensen_2008}. We cross-match our samples for each group with the AllWISE catalogue using a position tolerance of 1.0\,arcsec. The overwhelming majority of the sources have a WISE counterpart. We list the number of counterpart sources found for each group in Table \ref{tab:cluster_conditions}.
In Fig.~\ref{fig:all_wise} we show a colour-colour plot for each group using 2MASS and WISE data. In this figure objects with excess emission due to circumstellar dusty discs will appear on the right hand side, with a colour of $K-W2>0.5$. We adopt this value from \cite{teixeira_2012}, Figure 7 (top panel), where stars with thick discs all have colours $K-[4.5 \mu m]$ larger than $\sim$0.5. According to this criterion, a subset of the members in our groups shows evidence for the presence of a disc. Autochthe hosts the largest percentage of objects with discs, with 15 out of its 27 members ($\sim$56\%). Heleus and Mestor have similar disc fractions, 20 out of 85 ($\sim$24\%) and 69 out of 302 ($\sim$23\%), respectively. For Electryon, 58 out of 329 ($\sim$18\%) stars show infrared excess as defined above. Alcaeus has the lowest disc fraction with 20 out of 170 ($\sim$12\%). Using the prevalence of discs as an indicator of age, Autochthe is the youngest of the groups, and Alcaeus the oldest, in line with the estimate from the colour-magnitude diagrams.
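These disc fractions reduce to a single colour cut; a minimal sketch (the magnitude arrays here are illustrative, not real photometry):

```python
import numpy as np

def disc_fraction(k_mag, w2_mag, threshold=0.5):
    """Fraction of sources with K - W2 colour above `threshold` mag,
    taken as evidence for a circumstellar disc."""
    excess = (k_mag - w2_mag) > threshold
    return excess.sum() / excess.size

# Illustrative case mimicking Autochthe: 15 disc-bearing sources
# out of 27 members -> ~56%.
k = np.array([10.0] * 27)
w2 = np.array([9.0] * 15 + [9.8] * 12)   # first 15 have K - W2 = 1.0
frac = disc_fraction(k, w2)
```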
\begin{figure*}
\includegraphics[width=\textwidth]{5panel_wise.png}
\caption{Infrared colour-colour plot for all five groups. Objects to the right of the vertical line at $K - W2 = 0.5$ mag are sources with infrared excess, likely due to the presence of a disc \citep[e.g.][]{teixeira_2020}.}
\label{fig:all_wise}
\end{figure*}
\subsection{Location within the molecular cloud}
\label{cloud_location}
In Fig.~\ref{fig:planck} we show the dust emission map of the Perseus molecular cloud at 353GHz available from the Planck survey, along with the samples of the five new groups in different colours. The two known clusters, IC\,348 and NGC\,1333, are located on the broad cloud emission band. Autochthe appears as a very compact clustering and is clearly located in a region of dust emission as well. The remaining groups sit outside the main band. However, Mestor's central region coincides with less pronounced dust emission. Heleus is comparatively close to the large dust clouds in the complex. Alcaeus and Electryon do not coincide with dust emission. We also note that Electryon, Alcaeus, and Mestor are large dispersed groups, while Heleus and Autochthe are more compact, similar to the known clusters. Overall, the links with the dust emission and the physical structure of the new groups broadly confirm the evolutionary sequence suggested by the analysis in Sect. \ref{cmds} and \ref{IR_excess}, with Autochthe as youngest, Alcaeus and Electryon as oldest groups. We note that the fact that we see groups of stars in the same position as a dust clump does not necessarily mean that the young stars are physically associated with the clump, since we do not know the distance to the clumps.
\begin{figure}
\includegraphics[width=\columnwidth]{relative_distances.png}
\caption{Three-dimensional distance of each of the five new groups from IC\,348 versus their distance from NGC\,1333. The size of the symbol is proportional to the percentage of sources with discs.}
\label{fig:rel_dist}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{Perseus_All_Clusters_pm.png}
\caption{The proper motions of all seven groups associated with Perseus. The IC\,348 and NGC\,1333 samples were cross-matched between the membership list of L16 and the \textit{Gaia} DR2 catalogue. We show the mean proper motion vector of each of the groups with diamonds.}
\label{fig:7_pms}
\end{figure}
\begin{figure*}
\includegraphics[width=\textwidth]{cluster_groups.png}
\caption{Distance versus Galactic Latitude for the seven groups associated with the Perseus star-forming complex.
The upward pointing triangle symbols represent groups with an age of about 1\,Myr, the downward pointing triangle symbols represent groups older than 3\,Myr, whilst the diamond symbol represents IC\,348 with an age spread of 2 to 6\,Myr (L16). The seven groups separate into two kinematic sets according to their proper motion values (the mean proper motion of each set is written in the corresponding ellipse). We also include a schematic representation of the dense molecular cloud based on Fig.~\ref{fig:planck}.}
\label{fig:cluster_groups}
\end{figure*}
\section{Discussion: links between centres of star formation}
\label{discussion}
The existence of five clustered groups of young stars in the Perseus complex, in addition to the known clusters NGC\,1333 and IC\,348, enables new insights into the star formation history of the region. In this section we explore the links between all seven groups. We base the discussion on the position, proper motion, distance and age of each group. We introduce membership samples for the two known clusters, by cross-matching the membership lists from L16, to our knowledge the best available census for these clusters, with the {\it Gaia} DR2 catalogue to end up with 101 and 375 members for NGC\,1333 and IC\,348, respectively.
We note that we refer to Autochthe, Alcaeus, Mestor, Electryon and Heleus as ``groups'' and to NGC\,1333 and IC\,348 as ``clusters'' throughout the paper. Across the literature (e.g. L16) NGC\,1333 and IC\,348 are referred to as clusters. Most of the five new groups we present are more spread out in spatial distribution than the clusters, with the exception of Autochthe, which harbours significantly fewer members.
\subsection{Trends in location}
\label{spatial}
In the $\alpha$-$\delta$ plane, Mestor, Heleus, Electryon, Alcaeus and IC\,348 are located on the eastern side of the cloud while Autochthe and NGC\,1333 are located further to the west. With respect to distances, NGC\,1333, Autochthe and Alcaeus sit at around 300\,pc, corresponding to the nearest portion of the cloud. Mestor, Heleus and Electryon are further away at distances between $\sim$ 370\,pc and $\sim$ 410\,pc. IC\,348 is located in-between these two sets, at $\sim$ 320\,pc \citep{ortiz_leon_2018}.
In Fig.~\ref{fig:rel_dist} we show the relative three-dimensional distances of all five groups from IC\,348 against their distances from NGC\,1333. The size of the crosses corresponds to the percentage content of stars with discs (see Sect. \ref{IR_excess}). Mestor, Heleus and Electryon are close to each other, but far away from the two known centres of star formation -- more than 80\,pc from NGC\,1333 and more than 40\,pc from IC\,348. All three also show low fractional content of stars with discs. On the other hand, Autochthe is the closest to both of the known clusters -- only $\sim 5$\,pc away from NGC\,1333 and $\sim 30$\,pc from IC\,348 -- and has the highest fraction of stars with discs. Finally, Alcaeus is equidistant from both NGC\,1333 and IC\,348 at $\sim 30$\,pc. Thus, in terms of spatial positioning, Mestor, Heleus and Electryon form one subset, and Autochthe, Alcaeus, NGC\,1333 and IC\,348 another.
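Such three-dimensional separations follow from converting ($\alpha$, $\delta$, $\varpi$) to Cartesian coordinates; a sketch that ignores the parallax zero-point and error propagation:

```python
import math

def cartesian(ra_deg, dec_deg, parallax_mas):
    """Heliocentric Cartesian position (pc) from sky coordinates and
    parallax, using d[pc] = 1000 / parallax[mas]."""
    d = 1000.0 / parallax_mas
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    return (d * math.cos(dec) * math.cos(ra),
            d * math.cos(dec) * math.sin(ra),
            d * math.sin(dec))

def separation(p, q):
    """3D distance in pc between two (ra, dec, parallax) tuples."""
    return math.dist(cartesian(*p), cartesian(*q))

# e.g. Autochthe (Table 2) vs the NGC 1333 centre, with the central
# coordinates and 293 pc distance quoted in the text.
autochthe = (51.34, 30.95, 3.3)
ngc1333 = (52.3, 31.3, 1000.0 / 293.0)
sep_pc = separation(autochthe, ngc1333)
```

Note that at these distances the parallax uncertainty dominates the line-of-sight term, so individual separations carry errors of order 10\,pc.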
\subsection{Trends in kinematics}
\label{kinematics}
In Fig.~\ref{fig:7_pms} we show all seven groups in proper motion space, together with their mean proper motions as diamonds. In terms of proper motion, the groups again clearly separate into two sets. Mestor, Heleus, and Electryon occupy the same region of proper motion space. NGC\,1333, Autochthe and Alcaeus are distinct in this diagram, but overlap with each other. In general, IC\,348 almost spans the whole proper motion space, and overlaps with Autochthe, but also with Electryon and Heleus.
\subsection{Ages}
\label{age}
We estimate the age sequence within the new five groups based on four different indicators (see Sect. \ref{youth}): trend in colour-magnitude diagrams, fraction of stars with discs, the groups' association with cold dust, and how compact the groups are. Taking into account these indicators together, there is a clear age sequence.
Autochthe is clearly the youngest of the five new groups, with a 56\% fraction of stars with discs and a strong association with dust (see Fig.~\ref{fig:planck}). Its brighter members also closely follow the 1\,Myr isochrone in the colour-magnitude diagram. This means Autochthe is likely coeval with NGC\,1333, which is also at $\sim$ 1\,Myr according to L16.
Mestor and Heleus also contain very young stars, according to the indicators listed above. About a fifth to a quarter of their members have discs. Some of their members appear close to or above the 1\,Myr isochrone in the colour-magnitude diagram. Their members form relatively compact groups as well, as seen from Fig.~\ref{fig:planck}. Electryon and Alcaeus are clearly older with the lowest disc fractions (18\% and 12\% respectively) and many members with magnitude and colours consistent with the 5\,Myr isochrone. They show no association with dust and are more spread out in spatial distribution.
Thus, the age sequence from youngest to oldest is: Autochthe and NGC\,1333, Heleus and Mestor, Electryon and Alcaeus. IC\,348 is assumed to host stars with a very wide range of ages, from 2 to 6\,Myr (L16), encompassing much of the age range of the new groups.
\subsection{Star formation history}
With the parameters analysed above in Sects. \ref{spatial}, \ref{kinematics} and \ref{age}, a few clear links between separate groups of young stars become apparent. The most obvious one is between Autochthe and NGC\,1333: these two show similarities across all parameters; in particular, they are located very close together and may be part of the same star formation event.
Apart from that, Mestor, Electryon and Heleus show similar proper motions, similar range of ages, and are all in the north-eastern part of the complex in terms of their spatial location. All three are off the main cloud. Alcaeus is a special case, kinematically and spatially close to NGC\,1333, but with an age range similar to Electryon. IC\,348 spans a wide range of proper motions and ages which almost covers that of all the rest of the groups. It is the richest cluster and clearly the most pronounced centre of star formation in the complex.
Only the pair NGC\,1333 and Autochthe, as well as IC\,348, show evidence for ongoing star formation. Thus, in the Perseus cloud the centres of current star formation are located on the near side of the cloud, at distances around 300\,pc, both on the eastern side (with IC\,348) and the western side (with NGC\,1333 and Autochthe). The more distant groups are all more evolved and have stopped actively forming stars. All groups, with the exception of Autochthe, show an age spread of at least 2\,Myr in the colour-magnitude diagrams, pointing to extended periods of star formation throughout the region.
In Fig.~\ref{fig:cluster_groups} we aim to visualise the set of properties for the seven groups of young stars in Perseus. The groups and clusters are plotted in distance vs. Galactic Latitude. The figure shows again that two sets of groups share a similar distance. The segregation in proper motions between the two sets is illustrated with the two ellipses of different colours. The three new groups in the orange ellipse, Electryon, Mestor and Heleus, also share a similar age -- triangles pointing down in that diagram represent ages of 3-5\,Myr. In the blue ellipse, Autochthe and NGC\,1333 located close to each other again share a similar and very young age of $\sim$1\,Myr (shown in triangles pointing up). Alcaeus, which is also in this ellipse due to its proper motion and distance, is an older group ($\sim$5\,Myr). IC\,348 lies in between these two sets and it is plotted as a black diamond symbol to represent the range of ages it spans. The current star forming cloud is indicated in a dashed-line ellipse containing NGC\,1333, Autochthe and IC\,348.
Fig.~\ref{fig:cluster_groups} illustrates that the older groups in Perseus are located closer to the Galactic Plane, at lower latitudes. Mestor, Electryon, and Alcaeus at latitudes of $-14$ to $-17$ degrees have formed first, about 5\,Myr ago, so has Heleus at $-19$ degrees. Star formation is finished in these regions. NGC\,1333 and Autochthe, at higher latitudes of $-20$ to $-22$ degrees are sites of active ongoing star formation, with stars at typical ages of 1\,Myr.
\section{Summary}
\label{summary}
We summarize below the main conclusions of our study:
\begin{enumerate}
\item We report the discovery of five new groups in the Perseus star-forming complex: Autochthe, Alcaeus, Mestor, Electryon and Heleus.
\item Four of the new groups are located off-cloud: Heleus, Electryon, Mestor (at distances between 380\,pc and 420\,pc) and Alcaeus (at a distance of 290\,pc);
\item All the off-cloud groups have ages between 3 and 5\,Myr, and disc fractions between 12 and 24\%. The on-cloud group, Autochthe, has an age of $\lesssim$1\,Myr and a disc fraction of 56\%;
\item Autochthe is located to the east of NGC\,1333, at a distance of 298\,pc, similar to NGC\,1333 at 293$\pm$22\,pc. The two share very similar proper motions too, and are both around 1\,Myr old. These similarities in spatial distribution, proper motion and age suggest that these two may have formed together.
\item The newly discovered groups appear to segregate into two kinematic sets of similar proper motion; the first set is composed of Heleus, Electryon and Mestor with a mean proper motion of $(\mu_\alpha^*, \mu_\delta)$ = (3.5, -5.5) mas\,yr$^{-1}$, and the second set is composed of Alcaeus, Autochthe, and NGC\,1333 with a mean proper motion of $(\mu_\alpha^*, \mu_\delta)$ = (6.4, -8.8) mas\,yr$^{-1}$;
\item We find a general sequence of star formation, where the older groups are located closer to the Galactic Plane, whereas the youngest groups are at higher Galactic latitudes.
\end{enumerate}
\section*{Acknowledgements}
We thank the referee for a constructive report that helped to improve the paper. This project was supported by STFC grant ST/R000824/1.
This work has made use of data from the European Space Agency (ESA) mission
{\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia}
Data Processing and Analysis Consortium (DPAC,
\url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC
has been provided by national institutions, in particular the institutions
participating in the {\it Gaia} Multilateral Agreement.
This research has made use of Python (\url{https://www.python.org}), NumPy \citep{vanderwalt11}, and Matplotlib \citep{hunter07}.
This research made use of APLpy, an open-source plotting package for Python hosted at \url{http://aplpy.github.com} \citep{robitaille12}. This research made use of Astropy, a community-developed core Python package for Astronomy \citep{robitaille13}. This research made use of TOPCAT (Tool for OPerations on Catalogues And Tables), an interactive graphical viewer and editor for tabular data \citep{taylor05}.
\bigskip
\textit{Note added in Proof:} Independently, \cite{kounkel_2019} have published a subsample of the young stars selected in this paper, as part of the populations numbered 21 and 77 in their catalogue (see also \cite{kounkel_2020}). These two populations cover a mix of members from various groups identified in our paper. Their population 21 contains 8 of the 27 members in our sample of Autochthe and 86 of the 170 members in our sample of Alcaeus. Very few members of our sample of Electryon (3) and of our sample of Heleus (2) are also part of this population. Their population 77 contains 186 of the 302 members in our sample of Mestor, 170 of the 329 members in our sample of Electryon and 37 of the 85 members in our sample of Heleus.
\section*{Data Availability}
The data underlying this article are available in the article and in its online supplementary material. The datasets were derived from the public domain of Gaia, at \url{https://gea.esac.esa.int/archive/}, WISE, at \url{https://irsa.ipac.caltech.edu/Missions/wise.html} and Planck at \url{https://irsa.ipac.caltech.edu/data/Planck/release_2/all-sky-maps/}.
\bibliographystyle{mnras}
\input{ms.bbl}
\section{Introduction}
Many assessments have already been made of the expected Gaia harvest of eclipsing binaries (EB) down to V$<$15, where both photometric and RV observations will be available (see Munari et al.~2001, Zwitter et al.~2003, Marrese et al.~2004 and others). Out of 50 million observed stars, roughly 100\,000 will be double-lined eclipsing binaries. However, based on experience from Hipparcos, out of 1 billion stars observed down to V$<$20, there will be $\sim$\emph{2 million} (Eyer, these proceedings) eclipsing binaries without spectroscopic observations, but with quite decent photometric accuracy. This study presents a new approach developed for automatic reduction of the observed data, along with an estimate of how much we may expect to obtain from them.
\section{The method}
Obtaining physical parameters from observations is an inverse problem solved numerically by a modeling program. The best established and most widely used is the WD code \citep{wilson1971}, which features the Differential Corrections method (DC) powered by the Method of Multiple Subsets (MMS) \citep{wilson1993}. The DC method has already been applied successfully to automatic parameter extraction (e.g.~Wyithe \& Wilson 2001, Wyithe \& Wilson 2002, Pr\v sa 2003 and others), but its original philosophy is based on interactive monitoring of each convergence step. The algorithm is very fast and works well if the discrepancy between the observed and computed curves is relatively small, but it tends to diverge or give physically meaningless results if the discrepancy is large. As part of an effort to create a reliable and powerful package for EB analysis, a complementing minimization scheme is proposed.
\subsection{Nelder \& Mead's Downhill Simplex}
There are two main deficiencies of the DC method that are especially striking. {\bf 1)} Once the DC method converges to a minimum, there is no way of telling whether that minimum is local or global; even if it is local, the method is stuck and cannot escape. {\bf 2)} The main source of divergence and loss of accuracy in the DC algorithm is the computation of numerical derivatives.
To circumvent these two problems, Nelder \& Mead's downhill Simplex\footnote{Nelder \& Mead's downhill simplex is not to be confused with linear or non-linear programming algorithms, which are also referred to as Simplex methods.} method \citep{nelder1965}, hereafter NMS, is implemented. Since NMS does not compute derivatives but relies only on function evaluations, it is a promising candidate for our purpose. A basic form of the NMS method along with a WD implementation was first proposed by \cite{kallrath1987}. We take a step further and adapt the method specifically to EBs.
The NMS method acts in $n$-dimensional parameter hyperspace. It constructs $n$ vectors $\mathbf p_i$ from the vector of initial parameter values $\mathbf x$ and the vector of step-sizes $\mathbf s$ as follows:
\begin{equation}
\mathbf p_i = (x_0, x_1, \dots, x_i + s_i, \dots, x_n)
\end{equation}
These vectors, together with $\mathbf x$ itself, form the $(n+1)$ vertices of an $n$-dimensional simplex. During each iteration step, the algorithm tries to improve the parameter vectors $\mathbf p_i$ by modifying the vertex with the highest function value by simple geometrical transformations: reflection, reflection followed by expansion, contraction and multiple contraction \citep{galassi2003}. Using these transformations, the simplex moves through parameter space towards the deepest minimum, where it contracts itself.
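As an illustration, the vertex construction and the four transformations above can be sketched in a few lines of Python. This is a minimal stand-alone toy with the standard textbook reflection/expansion/contraction coefficients, not the implementation used in WD or PHOEBE:

```python
def initial_simplex(x, s):
    """Build the (n+1) simplex vertices: the initial point x itself,
    plus n copies of x with the i-th component displaced by s_i."""
    verts = [list(x)]
    for i in range(len(x)):
        p = list(x)
        p[i] += s[i]
        verts.append(p)
    return verts

def nelder_mead(f, x0, steps, tol=1e-10, max_iter=1000):
    """Minimal downhill-simplex loop: reflection, expansion,
    contraction and multiple contraction of the worst vertex."""
    n = len(x0)
    simplex = initial_simplex(x0, steps)
    for _ in range(max_iter):
        simplex.sort(key=f)                       # best first, worst last
        best, worst = simplex[0], simplex[-1]
        if f(worst) - f(best) < tol:              # simplex has contracted
            break
        # centroid of all vertices except the worst one
        cen = [sum(v[i] for v in simplex[:-1]) / n for i in range(n)]
        refl = [2.0 * cen[i] - worst[i] for i in range(n)]            # reflection
        if f(refl) < f(best):
            expd = [3.0 * cen[i] - 2.0 * worst[i] for i in range(n)]  # expansion
            simplex[-1] = expd if f(expd) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            cont = [0.5 * (cen[i] + worst[i]) for i in range(n)]      # contraction
            if f(cont) < f(worst):
                simplex[-1] = cont
            else:                                 # multiple contraction
                simplex = [best] + [[0.5 * (best[i] + v[i]) for i in range(n)]
                                    for v in simplex[1:]]
    simplex.sort(key=f)
    return simplex[0]

# recover the minimum of a 2-D quadratic bowl from a displaced start;
# the result converges near [3.0, -1.0]
fit = nelder_mead(lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2,
                  [0.0, 0.0], [1.0, 1.0])
```

The robustness (and relative slowness) with respect to DC comes precisely from the fact that only function evaluations are used.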
PHOEBE ({\rm http://phoebe.fiz.uni-lj.si}) is a software package built on top of the WD code that extends its basic functionality to encompass, among other extensions summarized in \citet{prsa2005}, the NMS method. It is especially suited for EBs: powered by heuristic scans, parameter kicking and conditional constraining, the method is able to efficiently escape from local minima.
\subsection{Heuristic Scan}
NMS is a robust method that always converges, but it can converge to a local minimum, particularly since the parameter hyperspace in the vicinity of the global minimum is typically very flat, with lots of local minima. In addition, the global minimum may be shadowed by data noise and degeneracy.
Heuristic scan is a method by which a minimization algorithm selects a set of starting points in parameter hyperspace and starts the minimization from each such point. It then sorts all solutions by the cost function (the $\chi^2$, for example) and calculates parameter histograms and convergence tracers for given hyperspace cross-sections (specific examples are given in Section \ref{results}). The way in which the algorithm selects starting points is determined by the user: the points may be gridded, stochastically dispersed, distributed according to some probability distribution function (PDF) etc. The basic idea of the heuristic scan is to obtain decent statistics of the adjusted parameter values, from which a fair and realistic error estimate may be given.
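The idea can be sketched as follows (a one-dimensional toy; the crude step-halving descent is a hypothetical stand-in for a full NMS run):

```python
import random

def local_minimize(f, x0, step=0.1, tol=1e-7):
    """Crude derivative-free descent used as a stand-in for one NMS run:
    move downhill while possible, halving the step otherwise."""
    x, fx = x0, f(x0)
    while step > tol:
        for cand in (x + step, x - step):
            if f(cand) < fx:
                x, fx = cand, f(cand)
                break
        else:                      # no downhill move found: refine the step
            step *= 0.5
    return x, fx

def heuristic_scan(f, lo, hi, n_starts=50, seed=1):
    """Launch a local minimization from n_starts stochastically
    dispersed starting points and sort the solutions by cost."""
    rng = random.Random(seed)
    sols = [local_minimize(f, rng.uniform(lo, hi)) for _ in range(n_starts)]
    return sorted(sols, key=lambda s: s[1])

# double-well cost function with two equally deep minima at x = -1 and x = +1
solutions = heuristic_scan(lambda x: (x * x - 1.0) ** 2, -2.0, 2.0)
```

Sorting the returned solutions by cost and histogramming their positions is exactly the statistics referred to above: with a double-well cost function, the scan populates both basins.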
\subsection{Parameter Kicking} \label{parameter_kicking}
Another possible approach to detect and escape from local minima is to use some stochastic method like Simulated Annealing (SA). However, such methods are notoriously slow. Since the EB hyperspace is typically very flat, stochastic methods would be practical only in the vicinity of the global minimum. Thus instead of full-featured SA scan, a simple new procedure has been developed that achieves the same effect as stochastic methods, but in significantly shorter time.
The idea is as follows: whenever NMS reaches a minimum within a given accuracy, the algorithm runs a globality assessment on that minimum. If we presume that standard deviations $\sigma_k$ of observations are estimated properly and that they apply to all data points regardless of phase or nightly variations, we may use them for $\chi^2$ weighting:
\begin{equation}
\chi_k^2 = \sum_{i=1}^M w_k (x_i - \bar x)^2 = \frac 1{\sigma_k^2} \sum_{i=1}^M (x_i - \bar x)^2,
\end{equation}
where index $i$ runs over $M$ measurements within a single data-set and index $k$ runs over $N$ data-sets (different photometric curves). Since the variance is given by:
\begin{equation}
s_k^2 = \frac{1}{N_k-1} \sum_i (x_i - \bar x)^2,
\end{equation}
we may readily express $\chi_k^2$ as:
\begin{equation}
\chi_k^2 = (N_k - 1) \frac{s_k^2}{\sigma_k^2},
\end{equation}
and the overall $\chi^2$ value as:
\begin{equation}
\chi^2 = \sum_k (N_k - 1) \left( \frac {s_k}{\sigma_k} \right)^2.
\end{equation}
If the $\sigma_k$ are fair and all data-sets contain approximately the same number of observations, then the ratio $s_k / \sigma_k$ is of the order of unity and $\chi^2$ of the order of $N M$. We use this to parametrize the quality of the solution:
\begin{equation} \label{eq_lambda}
\chi_0^2 = N M, \qquad \lambda := \chi^2 / \chi_0^2.
\end{equation}
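In code, this bookkeeping reduces to a few lines (a sketch; the per-point deviations and the per-data-set errors $\sigma_k$ are assumed given):

```python
def chi2_dataset(devs, sigma):
    """Weighted chi^2 of one data-set: the sum of squared per-point
    deviations, weighted by the assumed standard error sigma_k."""
    return sum(d * d for d in devs) / (sigma * sigma)

def quality_lambda(datasets):
    """lambda = chi^2 / chi_0^2 with chi_0^2 = N*M, where N counts the
    data-sets and M the measurements per data-set."""
    chi2 = sum(chi2_dataset(devs, sigma) for devs, sigma in datasets)
    n_sets = len(datasets)
    m_points = len(datasets[0][0])
    return chi2 / (n_sets * m_points)

# two light curves whose scatter matches the assumed errors: lambda ~ 1
curves = [([0.02] * 82, 0.02), ([-0.03] * 82, 0.03)]
lam = quality_lambda(curves)
```

A solution with $\lambda \sim 1$ is thus compatible with the assumed observational errors, while $\lambda \gg 1$ flags a poor (likely local) minimum.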
Parameter kicking is a way of knocking the obtained parameter-set out of the minimum: using a Gaussian PDF, the method randomly picks an offset for each parameter. The strength of the kick is determined by the Gaussian dispersion $\sigma_{\mathrm{kick}}$, which depends on $\lambda$: if $\lambda$ is high, the kick should be strong, but if it is low, i.e.~around $\lambda \sim 1$, only subtle perturbations should be allowed. Experience shows that a simple expression such as:
\begin{equation}
\sigma_{\mathrm{kick}} = \frac{0.5 \lambda}{100}
\end{equation}
works very reliably. This causes $\sigma_{\mathrm{kick}}$ to assume a value of $0.5$ for $10 \sigma$ offsets and $0.005$ for $1 \sigma$ offsets, being linear in between. Note that this $\sigma_{\mathrm{kick}}$ is \emph{relative}, i.e.~given by:
\begin{equation}
\sigma_{\mathrm{kick}}^{\mathrm{abs}} = x \, \sigma_{\mathrm{kick}}^{\mathrm{rel}},
\end{equation}
where $x$ is the value of the given parameter.
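A sketch of a single kick (the function name is hypothetical; the Gaussian draw and the linear $\sigma_{\mathrm{kick}}(\lambda)$ rule follow the text above):

```python
import random

def kick_parameters(params, lam, seed=None):
    """Knock a converged parameter set out of its minimum.  The relative
    kick dispersion follows sigma_kick = 0.5*lambda/100, so a poor
    solution with lambda = 100 is kicked with sigma_kick = 0.5, while a
    good one (lambda = 1) only receives subtle 0.005 perturbations."""
    rng = random.Random(seed)
    sigma_rel = 0.5 * lam / 100.0
    # the absolute dispersion scales with the parameter value itself
    return [rng.gauss(p, abs(p) * sigma_rel) for p in params]
```

After the kick, the NMS is restarted; if it falls back into the same minimum the solution is likely global, otherwise the deeper of the two minima is kept.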
\subsection{Conditional Constraining}
From purely photometric (LC) observations alone, it is impossible to determine the absolute physical parameters of the observed binary. However, if the distance to the EB is measured independently, or if additional assumptions about the EB are made, even purely photometric observations can yield absolute values of physical parameters. If additional constraints are imposed on the model from the outside, the model is referred to as \emph{conditionally constrained} (CC'd).
Two CCs are immediately evident: {\bf 1)} astrometric measurements on-board Gaia may be used to measure the distance with the accuracy of $\sim$11\,$\mu$as at V=15 to $\sim$165\,$\mu$as at V=20 (\citet{redbook}4); {\bf 2)} since a substantial number of stars are main sequence stars, M--L, L--T and T--R relations may be adopted as constraints. See \citet{prsa2005} for details on CC implementation in PHOEBE.
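As a toy illustration of the second constraint, a main-sequence CC removes $L$, $M$ and $R$ as free parameters by tying them to $T_{\mathrm{eff}}$. The scaling relations and exponents below are illustrative textbook choices, not the calibrations used in PHOEBE:

```python
T_SUN = 5777.0  # K; assumed solar effective temperature

def main_sequence_tie(t_eff):
    """Illustrative main-sequence conditional constraint: given only
    T_eff, return (L, M, R) in solar units via assumed scaling laws:
        L ~ (T_eff/T_sun)^8       (approximate L--T relation)
        M ~ L^(1/3.5)             (textbook mass--luminosity relation)
        L = R^2 (T_eff/T_sun)^4   (Stefan--Boltzmann, fixes R)
    """
    t = t_eff / T_SUN
    lum = t ** 8
    mass = lum ** (1.0 / 3.5)
    radius = lum ** 0.5 / t ** 2
    return lum, mass, radius
```

For an F8\,V-like star with $T_{\mathrm{eff}} = 6200$\,K these rough relations give $L \approx 1.8$, $M \approx 1.2$, $R \approx 1.2$ in solar units; the point is only that each CC replaces adjustable parameters by implicit ties, shrinking the dimension of the hyperspace to be scanned.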
\section{Simulation} \label{simulation}
To test the suitability of NMS for EBs, we built a partially-eclipsing synthetic main-sequence F8\,V--G1\,V binary using PHOEBE. Table \ref{params} lists all of its principal parameters. The simulation presented in this paper is based exclusively on photometric data: two sets of observations corresponding to Johnson B and V filters are created, each with 82 points with Poissonian scatter $\sigma_{\mathrm{obs}} = 0.02$\,mag at $V=20$, values typical of those expected for Gaia (\citet{redbook}4).
The simulation flow is the following: all physical parameters set for adjustment ($a$, $i$, $T_1$, $T_2$, $\Omega_1$, $\Omega_2$, $L_1$ and $L_2$) were displaced by $\sim$50\%. The obtained set was used as the initial guess for the NMS. In the first part of the simulation the method converged to a solution using only the heuristic scan and parameter kicking, which yields \emph{relative} values of the parameters. In the second part the simulation was additionally CC'd by the main-sequence constraint to obtain \emph{absolute} values of the parameters.
\begin{table}[!ht]
\caption{Principal parameters of the simulated binary.}
\label{params}
\begin{center}
\leavevmode
\begin{tabular}{lrrr}
\hline \hline
Parameter [units] & & Binary & \\
& F8\,V & & G1\,V \\
\hline \\
$P_0$ [days] & & 1.000 & \\
$a\,\,\,[R_\odot]$ & & 5.524 & \\
$q=m_2/m_1$ & & 0.831 & \\
$i\,\,\,[{}^\circ]$ & & 85.000 & \\
$T_\mathrm{eff}\,\,\,[\mathrm K]$ & 6200 & & 5860 \\
$L\,\,\,[L_\odot]$ & 2.100 & & 1.100 \\
$M\,\,\,[M_\odot]$ & 1.236 & & 1.028 \\
$R\,\,\,[R_\odot]$ & 1.259 & & 1.020 \\
$\Omega\,\,\,[-]\,{}^{\mathrm{(a)}}$ & 5.244 & & 5.599 \\
\hline \\
\multicolumn{4}{p{6.4cm}}{${}^{\mathrm{(a)}}$~Unitless effective potentials defined by \citet{wilson1979}.} \\
\end{tabular}
\end{center}
\end{table}
\section{Results} \label{results}
A heuristic scan over all adjustable parameters has been performed to obtain accurate convergence statistics: over 2\,000 uniformly distributed starting points in parameter hyperspace were used during the simulation. We present the results of the overall minimization step-by-step.
{\bf a)\, Convergence assessment.} Depending on the bumpiness of the hyperspace, the heuristic scan will generally yield different solutions from different starting points; it is our hope that only a few of these solutions will account for most scans. To evaluate their quality, the globality assessment mechanism introduced in section \ref{parameter_kicking} is used to sort the solutions by the depth of the reached minimum. Tests show that the NMS method itself is all too often stuck in local minima and only $\sim$15\% of all runs end up within one percent of the ideal $\lambda$ ($\lambda$=1 in the case of Fig.~\ref{lambda_hist}). However, tests also show that parameter kicking \emph{significantly} improves this percentage ($\sim$50\% after the first kick, $\sim$63\% after the second and $\sim$75\% after the third kick); moreover, parameter kicking also enhances the convergence speed after each kick.
\begin{figure}[!t]
\begin{center}
\leavevmode
\centerline{\epsfig{file=figs/chi2hist_alt.eps,width=1.0\linewidth,height=0.5\linewidth}}
\end{center}
\caption{$\lambda$-histograms for 3 consecutive parameter kicks. Top-left figure demonstrates how numerous are local minima and how difficult it is for NMS to circumvent them. Other three figures show significant improvement by using parameter kicking. Histograms consist of 20 bins (single bin width is 0.01). The last bin encompasses all higher values of $\lambda$. Labels \,\emph{(a)}\, through \,\emph{(d)}\, are used consistently throughout the paper.}
\label{lambda_hist}
\end{figure}
\begin{figure}[!t]
\begin{center}
\leavevmode
\centerline{\epsfig{file=figs/tracer_alt.eps,width=1.0\linewidth}}
\end{center}
\caption{Convergence tracers for 3 consecutive parameter kicks. These plots trace convergence steps within 2D cross-sections of the hyperspace, revealing areas of minima and degeneracy. Cross-hairs mark the location of the global minimum.}
\label{tracers}
\end{figure}
Although the values of $\lambda$ may seem promising, they do not necessarily guarantee that the corresponding solution is optimal; rather, additional assessments should be made. Fig.~\ref{tracers} shows \emph{convergence tracers}: 2D cross-sections of the parameter hyperspace tracing the convergence from each starting point of the heuristic scan to the corresponding solution. Such tracers clearly show areas of minima and degeneracy. Since the location of the global minimum is not known in real life, extra care should be taken to \emph{never} blindly trust the statistics of such a degenerate problem.
{\bf b)\, Statistics of obtained parameters.}
The usual practice in the literature, when listing parameters obtained from the model, is to give their values with \emph{formal} errors, i.e.~the standard deviations reported by the numerical method used. These errors are often too optimistic, since degeneracy and noise contribute noticeably to the overall error. NMS powered by heuristic scan and parameter kicking has the advantage of obtaining parameter errors \emph{statistically}, independently of the method itself. Fig.~\ref{ihist} shows an example of the obtained histogram for the inclination $i$. It is evident that for the NMS without parameter kicking (and similarly for {\bf any} other numerical method that cannot escape from local minima) the solution for any eclipsing system is essentially a draw among local minima; for a fully minimized solution (bottom right plot in Fig.~\ref{ihist}) the error is simply the standard deviation of the Gaussian fitted over the histogram, yielding roughly $0.5^\circ$.
\begin{figure}[!t]
\begin{center}
\leavevmode
\centerline{\epsfig{file=figs/ihist_alt.eps,width=1.0\linewidth,height=0.5\linewidth}}
\end{center}
\caption{Histogram of the inclination for 3 consecutive parameter kicks. Histogram consists of 20 bins with $0.5^\circ$ each. The solution is symmetric to $i=90^\circ$, but we adopt $i<90^\circ$ by convention.}
\label{ihist}
\end{figure}
{\bf c)\, Conditional constraining.}
The inclination is the only intrinsic parameter that may be obtained in an absolute sense from photometric observations. Using CCs, this deficiency is removed: any particular CC adds one or more implicit parameter ties to the system. It essentially introduces an intersection plane through the otherwise degenerate part of the hyperspace, thus eliminating the degeneracy. Fig.~\ref{tracers_ms} demonstrates how a main-sequence constraint breaks the degeneracy for the gravity potentials $\Omega_{1,2}$.
\begin{figure}[!t]
\begin{center}
\leavevmode
\centerline{\epsfig{file=figs/tracer_ms_alt.eps,width=1.0\linewidth}}
\end{center}
\caption{Main-sequence constrained convergence tracers for 3 consecutive parameter kicks. Comparing this result to Fig.~\ref{tracers} clearly shows that the intersection of both areas indeed gives the right solution. This is expected, since our synthetic binary is in fact a main-sequence binary.}
\label{tracers_ms}
\end{figure}
\section{Discussion} \label{conclusion}
The idea behind the NMS implementation is \emph{not} to replace the DC method but to \emph{complement} it. DC was created for interactive usage and converges in discrete steps that need monitoring. NMS, on the other hand, aims to automate this process so that intermediate monitoring is no longer necessary, which is a key goal for Gaia. DC is one of the fastest methods (WD's {\tt DC} in particular, since it is optimized for EBs), but may easily diverge. At the expense of speed, NMS is one of the most robust algorithms for solving non-linear problems and never diverges. Finally, both the DC and NMS methods suffer from degeneracy and may get stuck in local minima. To overcome this, DC is complemented by the MMS, and NMS is complemented by the heuristic scan and parameter kicking. These differences in intent make a combination of both methods a powerful engine for solving the inverse problem.
\section{Introduction}
Heavy flavor physics has been one of the major forefronts
of high energy physics in the past two decades,
whose primary goal is to trace the origin of CP violation,
precisely pin down the Cabibbo-Kobayashi-Maskawa (CKM) matrix,
as well as search for possible footprints of new physics.
Much of the effort in this subject concentrates on the detailed study of exclusive $B$ meson decays,
which exhibit very rich phenomenology.
To reliably describe the countless decay channels, it is mandatory to develop a
systematic and thorough understanding of the underlying hadronization dynamics.
By invoking the hierarchy $m_b\gg \Lambda_{\rm QCD}$,
the so-called QCD factorization (QCDF) approach has become the dominant theoretical arsenal,
which derives from first principles~\cite{Beneke:1999br}.
The key concept is to express the decay amplitude as the convolution of the perturbatively calculable, yet process-dependent,
hard-scattering kernel and the nonperturbative, yet universal, $B$ meson light-cone distribution amplitude (LCDA).
It is important to emphasize that this factorization framework applies in the heavy quark limit,
corroborated by the fact that the $b$ quark field in the LCDA is defined in the heavy quark effective theory (HQET)
rather than in full QCD. The simplest application of the QCD factorization is the radiative decay $B\to\gamma \ell \nu$ ~\cite{Korchemsky:1999qb,DescotesGenon:2002mw,Lunghi:2002ju}. This factorization picture has also been extended to more sophisticated processes,
{\it e.g.}, $B\to\gamma\gamma$~\cite{Bosch:2002bv}, $B\to M_1 M_2$~\cite{Beneke:2000ry,Beneke:2001ev}, and so on.
In contrast to exclusive $B$ decays, the hard exclusive production of heavy-flavored hadrons
has received much less attention in the literature.
The main reason might be attributed to the highly suppressed production rate of
such processes in high energy collision experiments.
The $D_s$ meson exclusive production from $W$ decay, {\it e.g.}, $W^+\rightarrow D_{s}^+\gamma$, was among
the early explorations of this topic~\cite{Arnellos:1981gy,Keum:1993eb}, and an upper limit was placed by the
Fermilab \texttt{Tevatron} in the late 1990s~\cite{Abe:1998vm}.
Motivated by the gigantic number of $W^\pm$, $Z^0$ bosons produced at the LHC, {\it i.e.}
about ${\cal O}(10^{11})$ with the projected $3000\;{\rm fb}^{-1}$ integrated luminosity,
a number of exclusive channels of $W$, $Z$ radiative decays into heavy-light mesons have recently been investigated
at leading order (LO) in $\alpha_s$~\cite{Grossmann:2015lea}.
The theoretical basis underlying \cite{Grossmann:2015lea} is the standard collinear factorization (or light-cone
factorization) for hard exclusive reactions~\cite{Lepage:1980fj,Chernyak:1983ej}.
By exploiting the hierarchy $m_{W,Z}\gg m_{b,c}\sim \Lambda_{\rm QCD}$, one expresses the amplitude as the convolution between the
hard-scattering kernel and the universal LCDAs of heavy mesons.
However, it is worth stressing that, the nonperturbative LCDAs encountered in this case are of
the standard Brodsky-Lepage type~\cite{Lepage:1980fj}, where the $b$ quark field is
defined in full QCD rather than in HQET, in sharp contrast with those arising in $B$ exclusive decays.
Unfortunately, our current constraints on the heavy meson QCD LCDAs
are rather limited, which severely obstructs the predictive power of the standard collinear factorization approach.
A further drawback of this conventional formalism is that the LCDAs of heavy-flavor mesons cannot
be genuinely nonperturbative, where the hard scale $m_{b,c}$ is still entangled with the hadronic scale
$\Lambda_{\rm QCD}$.
It would therefore be desirable to invoke asymptotic freedom to separate such
short-distance effects from
the heavy meson QCD LCDAs.
Inspired by NRQCD factorization tailored for inclusive heavy quarkonium production~\cite{Bodwin:1994jh},
the {\it heavy-quark recombination} (HQR) mechanism~\cite{Braaten:2001bf}
was developed at the beginning of this century,
to supplement the single-parton fragmentation mechanism for the inclusive heavy-flavor hadron production
with contributions subleading in $1/p_\perp$.
The basic idea behind this mechanism is intuitively simple: after a hard scattering,
the heavy quark would have a significant chance to combine with a spectator quark which is {\it soft}
in its rest frame to form a heavy-light hadron. In the color-singlet channel,
where the inclusive and exclusive production of heavy hadrons practically coincide at lowest order,
the HQR formalism only involves a single nonperturbative factor, which is proportional to the first inverse
moment of the $B$ meson LCDA defined in HQET, $1/\lambda_B$, which has been intensively investigated in the field
of $B$ meson decays. A notable success of this mechanism is to economically account for the
charm/anticharm hadron production asymmetry (leading particle effect) observed at numerous
Fermilab fixed-target experiments~\cite{Braaten:2001uu,Braaten:2002yt,Braaten:2003vy}.
The key idea behind HQR is to formalize the production rate of heavy-flavored hadrons
in the heavy quark limit, separating the dynamics of order $m_b$ or higher from
the hadronic effects at ${\cal O}(\Lambda_{\rm QCD})$,
but no longer distinguishing the hard-scattering scale specific to the process, say, $Q$, and $m_b$.
In this sense, one can tackle hard exclusive heavy hadron production in a fashion very much like
exclusive $B$ decays, in principle guaranteed by a factorization theorem valid to all orders in $\alpha_s$.
In this work, we choose not to stick to the old jargon of the HQR mechanism;
rather, we term this factorization approach the {\it HQET factorization},
in close analogy with the NRQCD factorization
for heavy quarkonium production. To be specific, in this work we will take $W\to B(D_s)+\gamma$
as the prototype processes for heavy meson exclusive production.
To our knowledge, the HQR mechanism
has so far only been illustrated at leading order (LO) in $\alpha_s$.
It is the very goal of this work to verify the validity of
HQET factorization through next-to-leading order (NLO) in $\alpha_s$,
which provides a much more informative revelation about nontrivial QCD dynamics.
The rest of the paper is structured as follows.
In Sec.~\ref{sec:form:factors:kinematics}, we first decompose the $W^+\to B^++\gamma$ amplitude
in terms of two Lorentz-invariant form factors,
and introduce some light-cone kinematical variables.
In Sec.~\ref{sec:rev:B:meson:LCDA}, we recap some essential features about the leading-twist
$B$ meson LCDA, which
enters the factorization theorem in many $B$ exclusive decay processes.
In Sec.~\ref{sec:HQET:factorization}, in analogy with the NRQCD factorization for exclusive heavy
quarkonium production, we propose the HQET factorization formalism
for exclusive heavy-light hadron production, presumably valid to all orders in $\alpha_s$.
In Sec.~\ref{sec:hard:scatter:kernel}, we first determine the hard-scattering kernel for
$W\to B+\gamma$ at tree level.
We then proceed to compute the ${\cal O}(\alpha_s)$ correction to this process.
We explicitly verify that the resulting soft IR pole can be properly factorized into the $B$ meson LCDA, which
establishes the correctness of HQET factorization at the first nontrivial order.
The IR-finite hard-scattering kernel at ${\cal O}(\alpha_s)$ is also deduced.
In Sec.~\ref{sec:phenomenology}, we present a comprehensive numerical prediction for the processes
$W^+\rightarrow B^+(D_s^+)+\gamma$, accurate through next-to-leading order in $\alpha_s$.
Assuming a simple exponential parametrization for $B$ meson LCDA defined in some initial scale,
we study its evolution behavior with the factorization scale.
It is found that including the ${\cal O}(\alpha_s)$ correction would significantly
reduce the LO decay rate.
Finally, we present a summary and an outlook in Sec.~\ref{sec:summary}.
\section{Decomposition of amplitude and light-cone kinematics}
\label{sec:form:factors:kinematics}
Let us first specify the kinematics for the process $W^+ \to B^+ +\gamma $.
The momenta of $W^+$, $B^+$ and $\gamma$ are designated by $Q$, $P$ and $q$, respectively,
with $Q=P+q$. They are subject to the on-shell conditions: $Q^2=m_W^2$, $P^2=m_B^2$, and $q^2=0$,
with $m_W$, $m_B$ standing for the masses of the $W$ and $B$ meson, respectively.
For future usage, we also introduce a dimensionless four velocity $v^\mu$ via
$P^{\mu}=m_{B} v^{\mu}$, obviously obeying $v^{2}=1$.
The polarization vectors of the $W$ and $\gamma$ are denoted by $\varepsilon_{W}$ and
$\varepsilon_\gamma$.
In accordance with Lorentz invariance, the decay amplitude for
$W^+\to B^+\gamma$ can be decomposed as~\cite{Grossmann:2015lea}
\begin{align}
\mathcal {M}\left(W^+\rightarrow B^+\gamma\right) = {\frac { { e_u e^2 } V_{ub} } { 4\sqrt{2} \sin \theta_W}}\left( \epsilon _ { \mu \nu \alpha \beta } \frac { P ^ { \mu } q ^ { \nu } \varepsilon _ {W} ^ { \alpha } \varepsilon _ { \gamma } ^ { * \beta } } { P \cdot q } F_V +i \varepsilon _ { W } \cdot \varepsilon _ { \gamma } ^ { * } F_A \right),
\label{Ampl:Lorentz:decomp}
\end{align}
where $e$ is the electric coupling constant, $e_{u}={+}\frac{2}{3}$ is the electric charge of the
$u$ quark, $\theta_W$ is the weak mixing angle, and $V_{ub}$ denotes the CKM matrix element.
$F_{V}$ ($F_A$) represents the Lorentz-scalar form factor associated with the
vacuum-to-$B+\gamma$ matrix element mediated by the vector (axial-vector) weak current.
All the nontrivial QCD dynamics is encoded in these scalar form factors,
which are functions of $m_W$, $m_B$, and $\Lambda_{\rm QCD}$.
Note that, since the weak interaction violates parity, this exclusive process possesses
two independent helicity amplitudes, which can be expressed as linear
combinations of these two form factors.
Squaring \eqref{Ampl:Lorentz:decomp}, averaging over the polarizations of the $W$ and summing
over those of the $\gamma$, one expresses the unpolarized decay rate in the $W$ rest frame as
\begin{align}
\Gamma\left(W^+\rightarrow B^+\gamma\right)={\frac{e_u^2 \pi\alpha^2}{48 \sin^2 \theta_W m_W^3}}\left|V_{ub}\right|^2\left(m_W^2-m_B^2\right)\left(\left|F_V\right|^2+\left|F_A\right|^2\right),
\label{eq:dcy_wdth}
\end{align}
with $\alpha\equiv e^2/4\pi$ denoting the QED fine structure constant.
To facilitate the discussion in the following sections, it is convenient to
set up the light-cone representation for the kinematics.
We first introduce two light-like reference vectors
$n_{\pm}^{\mu}\equiv\frac{1}{\sqrt{2}}(1,0,0,\pm 1)$,
which obey $n_\pm^2=0$ and $n_+\cdot n_-=1$.
In this light-cone basis, any four vector $a^{\mu}=(a^0,a^1,a^2,a^3)$ can
then be decomposed as
\begin{equation}
a^{\mu}=(n_{-}\cdot a)
n_{+}^{\mu}+ (n_{+}\cdot a)n_{-}^{\mu}+a^{\mu}_{\perp}\equiv
a^+ n_{+}^{\mu}+a^- n_{-}^{\mu} + a^{\mu}_{\perp},
\end{equation}
where ${a}^{\mu}_{\perp}=(0,a^{1},a^{2},0)$ is the transverse component of the four vector.
The scalar product of two four vectors then becomes
\begin{equation}
a\cdot b = a^+ b^- + a^- b^+ + a_\perp\cdot b_\perp.
\end{equation}
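As a quick numerical cross-check of this light-cone algebra, the sketch below (plain Python, mostly-minus metric, with $n_\pm$ chosen so that the ``$+$'' component of a forward-moving momentum is the large one, in accordance with the frames used below) verifies $n_\pm^2=0$, $n_+\cdot n_-=1$, and the scalar-product identity for generic four vectors.

```python
import math

s2 = 1.0 / math.sqrt(2.0)
# light-like reference vectors, n_+- = (1, 0, 0, +-1)/sqrt(2)
n_plus  = (s2, 0.0, 0.0,  s2)
n_minus = (s2, 0.0, 0.0, -s2)

def mdot(a, b):
    """Minkowski product with the (+,-,-,-) metric."""
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

def lc(a):
    """Return (a^+, a^-, a_perp) with a^+ = n_- . a and a^- = n_+ . a."""
    return mdot(n_minus, a), mdot(n_plus, a), (0.0, a[1], a[2], 0.0)

# two generic four vectors
a = (3.0, 0.7, -1.2, 2.5)
b = (1.0, 0.3, 0.4, -0.8)
ap, am, aperp = lc(a)
bp, bm, bperp = lc(b)

assert abs(mdot(n_plus, n_plus)) < 1e-12 and abs(mdot(n_minus, n_minus)) < 1e-12
assert abs(mdot(n_plus, n_minus) - 1.0) < 1e-12
# a.b = a^+ b^- + a^- b^+ + a_perp . b_perp
assert abs(mdot(a, b) - (ap*bm + am*bp + mdot(aperp, bperp))) < 1e-12
```

With this convention the ``$+$'' component of a momentum moving along the positive $\hat z$ axis is the large one, which matches the frame assignments below.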
Were we interested in investigating the process $W\to B+\gamma$ within
the standard collinear factorization approach, it would be most natural to work in
the rest frame of the $W$ boson, where the $B$ meson is energetic owing to $m_W\gg m_B$.
In this reference frame, we presume that the $B$ meson moves along the positive $\hat{z}$ axis,
while the photon flies in the opposite direction.
The light-cone representations for the momenta of the $B^+$ and $\gamma$ then become
\begin{subequations}
\begin{align}
P^\mu \Big|_{W\;{\rm rest\;frame}} &=(P^{+},P^{-},\boldsymbol{P}_{\bot})=\frac{1}{\sqrt{2}}\left(m_W,\frac{m_B^2}{m_W},\boldsymbol{0}_{\bot}\right),
\\
q^\mu \Big|_{W\;{\rm rest\;frame}} &=\left( q^{+},q^{-},\boldsymbol{q}_{\bot} \right)
=\frac{1}{\sqrt{2}}\left( 0,\frac{m_W^2-m_B^2}{m_W}, \boldsymbol{0}_{\bot}\right),
\label{photon:momentum:W:rest:frame}
\end{align}
\label{momenta:W:rest:frame}\end{subequations}
where $P^-$ is suppressed by a factor $m_B^2/m_W^2$ relative to $P^+$.
Since the form factors $F_{V,A}$ themselves are Lorentz scalars, they can be computed in any reference frame.
In order to make the picture of HQET factorization more
transparent, as well as to connect closely with the exclusive $B$ decay channel $B\to \gamma (W^*\to) l\nu$,
it is most natural to boost this process to the $B$ meson rest frame.
The corresponding momenta of $B$ and photon in the light-cone basis then read
\begin{subequations}
\begin{align}
P^\mu\Big|_{B\;{\rm rest\;frame}} &=(P^{+},P^{-},\boldsymbol{P}_{\bot})=\frac{1}{\sqrt{2}}\left(m_B,m_B,\boldsymbol{0}_{\bot}\right),
\\
q^\mu \Big|_{B\;{\rm rest\;frame}} &=(q^{+},q^{-}, \boldsymbol{q}_{\bot})
=\frac{1}{\sqrt{2}}\left(0, {m_W^2-m_B^2\over m_B}, \boldsymbol{0}_{\bot}\right).
\label{photon:momenta:B:rest:frame}
\end{align}
\label{momenta:B:rest:frame}\end{subequations}
Note that the photon becomes enormously energetic in this frame, its energy being
enhanced relative to that in the $W$ rest frame by a factor of $m_W/m_B$.
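The two parametrizations \eqref{momenta:W:rest:frame} and \eqref{momenta:B:rest:frame} can be checked numerically: the on-shell conditions hold in both frames, and the large light-cone component of the photon momentum is boosted by $m_W/m_B$ between the frames. The mass values below are illustrative placeholders.

```python
import math

m_W, m_B = 80.4, 5.28  # GeV; illustrative values
s2 = 1.0 / math.sqrt(2.0)

# (a^+, a^-) light-cone components; perpendicular components vanish here
P_W = (s2 * m_W, s2 * m_B**2 / m_W)            # B momentum, W rest frame
q_W = (0.0, s2 * (m_W**2 - m_B**2) / m_W)      # photon momentum, W rest frame
P_B = (s2 * m_B, s2 * m_B)                     # B momentum, B rest frame
q_B = (0.0, s2 * (m_W**2 - m_B**2) / m_B)      # photon momentum, B rest frame

sq = lambda a: 2.0 * a[0] * a[1]               # a^2 = 2 a^+ a^- for a_perp = 0
assert abs(sq(P_W) - m_B**2) < 1e-9 and abs(sq(P_B) - m_B**2) < 1e-9
assert sq(q_W) == 0.0 and sq(q_B) == 0.0

# the large photon component is enhanced by m_W/m_B between the two frames
assert abs(q_B[1] / q_W[1] - m_W / m_B) < 1e-9
```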
\section{Review of $\bm{B}$ meson LCDA defined in HQET}
\label{sec:rev:B:meson:LCDA}
The $B$ meson LCDA is a crucial nonperturbative entity that enters numerous
exclusive $B$ decay processes. It is the same entity that enters
the exclusive $B$ production process in the HQET factorization framework.
In this section, we briefly recapitulate some of its essential features.
Let us consider the correlator composed of the light spectator quark and the $b$ quark separated by a light-like
distance, sandwiched between the vacuum and a $B$ meson with velocity $v^\mu$ (for simplicity, we will
work in the $B$ meson rest frame).
Its most general parametrization may be cast into the following form~\cite{Grozin:1996pq,Beneke:2000wa}~\footnote{Note
that we have intentionally put the $B$ meson in the bra rather than the ket,
since we are interested in $B$ production rather than decay in this work.}:
\begin{equation}
\langle B(v)|\bar
{u}_{\beta}(z)[z,0] h_{v,\alpha}(0)|0\rangle=\frac{i \hat
f_{B}m_{B}}{4}\left\{\left[2
\widetilde{\phi}_{B}^{+}(t)-\frac{z\!\!\!\slash}{t}
\Big(\widetilde{\phi}_{B}^{-}(t)-\widetilde{\phi}_{B}^{+}(t)\Big)\right]
\frac{1-v\!\!\!\slash}{2}\gamma_{5}\right\}_{\alpha\beta},
\label{eq:LCDA_twist_decomp}
\end{equation}
where $z^2 =0$, $t=v\cdot z$, and $\widetilde{\phi}_{B}^{\pm}$ are a pair of
nonperturbative functions of $t$. Here $u$ refers to the standard
QCD field for the $u$ quark, and $h_v$ signifies the $\bar{b}$ quark field
with velocity label $v$ introduced in HQET.
$\alpha$, $\beta$ are spinor indices.
$\hat{f}_{B}$ signifies the $B$ meson decay constant defined in HQET as
\begin{equation}
\langle B(v)|\bar
{u}\gamma^\mu \gamma_5 h_{v}|0\rangle=i\hat{f}_{B}m_{B}v^\mu\,,
\label{eq:decayconstant_HQET}
\end{equation}
which can be converted from the QCD decay constant $f_{B}$ through
perturbative series~\cite{Eichten:1989zv,Neubert:1993mb}:
\begin{align}
f_B = \hat{f}_B (\mu_F) \left[ 1 - {\alpha_s C_F \over 4 \pi} \left(3\ln \frac { \mu_F}{m_b} + 2
\right) \right] +\mathcal{O}\left(\alpha_s^2\right).
\label{hat:f:matching:fB}
\end{align}
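As a numerical illustration of \eqref{hat:f:matching:fB}, one may invert the relation at fixed one-loop order to obtain $\hat f_B(\mu_F)$ from $f_B$. All input numbers below are assumptions chosen for the sketch, not preferred values.

```python
import math

# illustrative inputs (assumed values, not fits)
f_B     = 0.190   # GeV, QCD decay constant
alpha_s = 0.22    # strong coupling at mu_F ~ m_b
m_b     = 4.8     # GeV
C_F     = 4.0 / 3.0

def fhat_B(mu_F):
    """Invert f_B = fhat_B(mu_F) [1 - alpha_s C_F/(4 pi) (3 ln(mu_F/m_b) + 2)]
    at fixed one-loop order."""
    corr = 1.0 - alpha_s * C_F / (4.0 * math.pi) * (3.0 * math.log(mu_F / m_b) + 2.0)
    return f_B / corr

# at mu_F = m_b the logarithm drops out and only the constant -2 survives,
# so the HQET decay constant comes out a few percent above f_B
print(fhat_B(m_b))
```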
The light-like gauge link,
\begin{equation}
[z, 0]=\mathcal{P} \exp \left[-i g_{s} \int_{0}^{z} d
\xi^{\mu} A_\mu^{a}(\xi) t^{a}\right],
\end{equation}
has been inserted in \eqref{eq:LCDA_twist_decomp} to ensure gauge
invariance of the nonlocal quark bilinear. Here $t^{a}(a=1, \cdots, 8)$ signify the $SU(3)$ generators in
fundamental representation, and $\mathcal{P}$ indicates the path
ordering.
The phenomenologically relevant $B$ meson LCDAs are usually defined in momentum space,
and can be obtained by Fourier transforming the coordinate-space correlator in
\eqref{eq:LCDA_twist_decomp}~\cite{Grozin:1996pq, Braaten:2001bf}:
\begin{equation}
\Phi_{B}^{\pm}(\omega) \equiv i\hat{f}_{B}m_{B}\phi^\pm_{B}(\omega)=\frac{1}{v^{\pm}}\displaystyle\int
{dt\over 2\pi}\, e^{i\omega t}\langle B(v)|\bar
{u}(z)[z,0]\slashed n_\mp \gamma_5 h_{v}(0)|0\rangle\Big|_{z^{+},z^{\perp}=0}\,,
\label{eq:LCDA_HQET}
\end{equation}
where a pair of $B$ meson LCDAs are defined through
\begin{equation}
{\phi}^{\pm}_{B}(\omega)=\int_{0}^{\infty} {d t\over 2\pi} e^{i
\omega t} \widetilde\phi^{\pm}_{B}(t).
\end{equation}
Here $\omega$ indicates the ``+''-momentum carried by the spectator
quark in the $B$ rest frame, whose typical value is $\sim \Lambda_{\rm QCD}$.
By construction, $\phi^{\pm}_{B}(\omega)$ has nonvanishing support only when $\omega \in (0,\infty)$.
General principles constrain the behavior as $\omega\to 0$:
$\phi^{+}_{B}(\omega)\propto \omega$, whereas $\phi^{-}_{B}(\omega)\propto 1$.
Note the light-cone correlator in \eqref{eq:LCDA_HQET} in general entails UV divergences,
and is subject to the renormalization involving the mixing among an infinite number of light-ray operators.
As a consequence, $\phi^{\pm}_{B}(\omega)$ become scale-dependent quantities.
Practically speaking, provided that we are only interested in the leading-power contribution
in the $1/m_b$ expansion, we are justified in concentrating on the $B$ meson LCDA $\phi_B^{+}(\omega)$
and discarding $\phi_B^{-}(\omega)$.
The evolution equation governing $\phi_B^{+}(\omega,\mu)$ was first correctly
written down by Lange and Neubert in 2003~\cite{Lange:2003ff}:
\begin{align}
\notag \frac { d } { d \ln \mu } \phi _ { B } ^ { + } ( \omega ,\mu ) =& - \frac { \alpha _ { s } C _ { F } } { 4 \pi } \int _ { 0 } ^ { \infty } d \omega ^ { \prime }\left\{\left( 4 \ln \frac { \mu} { \omega } - 2 \right) \delta \left( \omega - \omega ^ { \prime } \right) - 4 \omega \left[ \frac { \theta \left( \omega ^ { \prime } - \omega \right) } { \omega^{ \prime } \left( \omega ^ { \prime } - \omega \right) }\right.\right.
\\
& \left.\left.+ \frac { \theta \left( \omega - \omega ^ { \prime } \right) }
{ \omega \left( \omega - \omega ^ { \prime } \right) } \right] _ { + }\right\}\phi_{ B }^{+}
\left( \omega ^ { \prime } , \mu \right),
\label{LN:evolution:eq}
\end{align}
with $\mu$ the renormalization scale.
Note this renormalization group equation looks rather different from the
celebrated Efremov-Radyushkin-Brodsky-Lepage (ERBL) equation~\cite{earlyBL,Efremov:1979qk,earlyCZ},
which controls
the scale dependence of the meson LCDA defined in full QCD.
Once the profile of $\phi^{+}_{B}(\omega)$ is determined at some initial scale (say, $\mu_0 = 1$ GeV),
one can utilize \eqref{LN:evolution:eq} to obtain its profile at any other scale (usually $1\;{\rm GeV}\le \mu \le m_b$).
There exist model-dependent studies of the properties of the
$B$ meson LCDAs using QCD sum rules~\cite{Braun:2003wx}.
Model-independent features of $\phi^{+}_{B}(\omega)$
have also been extracted by applying the
operator-product-expansion technique~\cite{Lee:2005gza}.
It turns out that for hard exclusive heavy hadron production,
only $\phi^{+}_{B}(\omega)$ survives in the factorization theorem
in the heavy quark limit.
As was mentioned before, of central phenomenological relevance is
the first inverse moment of $\phi^{+}_{B}(\omega)$, usually referred to as $\lambda_B^{-1}$:
\begin{equation}
\lambda_B^{-1}(\mu) \equiv \int_0^\infty \frac{d\omega}{\omega} \phi^+_B(\omega,\mu).
\label{first:inv:moment}
\end{equation}
Intuitively, one expects $\lambda_B^{-1}\sim \Lambda_{\rm QCD}^{-1}$. Note this inverse moment
is also scale-dependent.
We also plan to study the NLO perturbative corrections to $W\to B+\gamma$. For this purpose, it is
also necessary to introduce the first and second logarithmic inverse moments via
\begin{align}
\lambda_B^{-1}\sigma_{B,n} (\mu) &\equiv
-\int_0^\infty \frac{d\omega}{\omega} \ln^n\frac{\omega}{\mu} \phi^+_B(\omega,\mu),
\qquad n=1,2
\label{Log:first:inverse:moments}
\end{align}
which are scale dependent as well.
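As an illustration of \eqref{first:inv:moment} and \eqref{Log:first:inverse:moments}, consider the frequently used exponential model $\phi_B^+(\omega) = (\omega/\omega_0^2)\,e^{-\omega/\omega_0}$, for which $\lambda_B^{-1} = 1/\omega_0$ and $\sigma_{B,1} = \ln(\mu/\omega_0) + \gamma_E$ follow analytically. The Python sketch below checks both numerically; the value $\omega_0 = 0.35$ GeV is an assumption for illustration only.

```python
import math

omega0 = 0.35   # GeV; illustrative model parameter
mu     = 1.0    # GeV
gammaE = 0.5772156649015329

def phi_plus(w):
    # exponential model for the leading-twist B meson LCDA
    return (w / omega0**2) * math.exp(-w / omega0)

def integrate(f, a=1e-7, b=40.0, n=200000):
    # simple midpoint rule; adequate for these integrands
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

lam_inv = integrate(lambda w: phi_plus(w) / w)
sigma1  = -integrate(lambda w: math.log(w / mu) * phi_plus(w) / w) / lam_inv

# analytic results: lambda_B^{-1} = 1/omega0, sigma_{B,1} = ln(mu/omega0) + gamma_E
assert abs(lam_inv - 1.0 / omega0) < 1e-3
assert abs(sigma1 - (math.log(mu / omega0) + gammaE)) < 2e-3
```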
As a side remark, we finally mention that the positive Mellin moments of $\phi^+_B(\omega)$, that is,
$\int_0^\infty \! d\omega \,\omega^{N-1} \phi^+_B(\omega)$ ($N > 0$), are generally
UV divergent and thus ill-defined. Usually one imposes a hard UV cutoff at the upper end of the integral
to regularize the UV divergence~\cite{Lee:2005gza}.
\section{HQET factorization for exclusive heavy-light meson production}
\label{sec:HQET:factorization}
In this section we shall state the exact form of HQET factorization for
heavy-light meson exclusive production.
Since this factorization picture naturally evolves from the previously developed heavy quark recombination (HQR) mechanism,
especially for the color-singlet channel, it might be beneficial to
first elaborate on the underlying physical picture, taking the $W\to B+\gamma$ channel as a concrete example.
Viewed in the rest frame of the $B^+$ meson, after a hard scattering the $\bar{b}$ quark
has a considerable chance to pick up the {\it soft} spectator $u$ quark and hadronize into the $B^+$ meson,
with the recombination probability proportional to the square of the nonperturbative
factor $\lambda_B^{-1}$ defined in \eqref{first:inv:moment}.
By emitting a highly energetic photon, the soft $u$ quark is necessarily transformed into a hard-collinear one,
which justifies the use of the $B$ meson LCDA.
Let $k^\mu$ signify the momentum carried by the spectator $u$ quark inside the $B^+$ meson.
According to the recipe of the HQR mechanism~\cite{Beneke:2000wa},
one may obtain the $W^+\to B^+ +\gamma$ amplitude by making the following substitution
in the quark amplitude $W^+\to [\bar{b}(P-k) u(k)]^{(1)} + \gamma$~\footnote{This form
may look superficially different from the projectors adopted in \cite{Braaten:2001bf}.
Nevertheless, once the equation of motion and
Wandzura-Wilczek approximation are invoked, they can be proven to be equivalent~\cite{Beneke:2000wa}.}:
\begin{equation}
v_i (P\!-\!k) \bar{u}_j (k)\to {\delta_{ij}\over N_c} \frac{i \hat{f}_{B}
m_b}{4}\!\left\{\frac{1\!-\!\slashed v}{2}\!\left[\phi_{B}^{+}(\omega)
{\slashed n_{+}\over\sqrt{2}}\!+\!\phi_{B}^{-}(\omega) {\slashed
n_{-}\over \sqrt{2}}-\omega \phi_{B}^{-}(\omega) \gamma^{\mu}_{\perp}
\frac{\partial}{\partial k_{\perp\mu}}\right]\!\!
\gamma_{5}\right\}\Bigg|_{k=\omega v},
\label{HQR:spin:projector:B}
\end{equation}
where $i,j=1,2, \cdots, N_c$ are color indices and $N_c=3$.
The Kronecker symbol $\delta_{ij}$ serves as the color-singlet projector.
After taking the momentum derivative of the quark amplitude, one makes the substitution
$k\to \omega v$ and retains the most singular piece in the $\omega\to 0$ limit,
which is usually $\propto 1/\omega$.
Curiously, in the heavy quark limit the $\phi_{B}^{-}(\omega)$ turns out not to contribute,
and only $\phi_{B}^{+}(\omega)$ yields a nonvanishing contribution,
whose effect is simply encoded in the first inverse moment $\lambda_B^{-1}$.
A shortcut can be invoked to reproduce the
heavy meson production amplitude from the HQR mechanism with much less effort~\cite{Beneke:2000wa}.
Rather than consider $B^+$ meson production directly, one simply starts with production of the flavored quarkonium $B_c$.
Assume the $B_c$ momentum is partitioned between its two constituents as
$p_c= \kappa P$ and $p_b = (1-\kappa) P$, where $P$ is the $B_c$ momentum
and $\kappa=m_c/(m_c+m_b)$.
One can then employ the familiar covariant spin projector for
quarkonium production at LO in the velocity expansion:
\begin{equation}
v_i (p_b) \bar{u}_j (p_c)\to \delta_{ij} {f_{B_c}\over 12}
\left(\slashed P - m_{B_c}\right) \gamma_5.
\label{HQR:spin:projector:Bc}
\end{equation}
Consequently, the amplitude for $B^+$ production through the $[\bar{b} u]({}^1S_0^{(1)})$
channel can be deduced by taking the $\kappa \to 0$ limit of that for $B_c$
production and replacing $f_{B_c}/(4\kappa)\to {\hat{f}_B}/{(4\lambda_B)}$.
In general, many Feynman diagrams do not contribute because they lack the
$1/\kappa$ singularity. Notice that this shortcut has been utilized to ascertain
the NLO perturbative corrections to the $B$ electromagnetic form factor and to $W\to B+\gamma$
from their $B_c$ counterparts in NRQCD factorization~\cite{Jia:2010fw,Feng:2019meh}.
To the best of our knowledge, the HQR mechanism thus far has not yet been extended to NLO in $\alpha_s$.
It is not straightforward to achieve this goal from the projector approach specified in
\eqref{HQR:spin:projector:B}, or from the shortcut of extracting the hard-scattering kernel from $B_c$ production,
as described in \eqref{HQR:spin:projector:Bc}.
One reason might be that $\phi_B^+$ develops a UV divergence at NLO
in $\alpha_s$, so that the hard-scattering kernel also acquires an explicit factorization-scale dependence.
This is in sharp contrast with the one-loop correction to exclusive quarkonium production,
since the local NRQCD bilinear operator is UV finite at one loop.
It is not {\it a priori} obvious why the aforementioned projector approach
should lead to an IR-finite hard-scattering kernel at ${\cal O}(\alpha_s)$ and beyond.
Since we treat $m_W$ as being of the same order as $m_b$, the argument that leads to the factorization theorem for
$B\to\gamma l\nu$ can be transplanted to $W\to B+\gamma$ without modification. We thus propose a
factorization theorem for $W\to B+\gamma$, valid in the heavy quark limit
and to all orders in $\alpha_s$:
\begin{align}
{\mathcal M}(W^+\to B^+\gamma) = \hat{f}_B(\mu_F) \int_0^\infty \!\! d\omega\, T(\omega, m_b, \mu_F)
\phi^+_B(\omega,\mu_F)+\mathcal{O}\left(m_b^{-1}\right),
\label{HQET:factorization:theorem}
\end{align}
where $T(\omega,m_b, \mu_F)$ is referred to as the hard-scattering kernel, which can be computed in perturbation theory.
Note that the hard-scattering kernel depends explicitly on $m_W$ and $m_b$ as well as on the factorization scale $\mu_F$,
and that only the leading-twist $B$ meson LCDA, $\phi^+_B(\omega)$, enters the formula.
The $\mu_F$ dependence of the hard-scattering kernel must counteract that of $\hat{f}_B$ and $\phi_B^+$,
so that the physical amplitude is insensitive to the artificial scale $\mu_F$.
An important characteristic of this process is that $T(\omega)\propto 1/\omega$,
so the convolution integral is UV finite and well defined.
Equation~\eqref{HQET:factorization:theorem} lays down the foundation for the HQET factorization.
In some sense, \eqref{HQET:factorization:theorem} offers a systematic realization of
the HQR mechanism. The virtue of this factorization framework is to allow us to systematically investigate the
higher-order perturbative corrections for the hard-scattering kernel,
to ensure its IR finiteness.
\section{Form factors for $W\to B+\gamma$ through NLO in $\alpha_s$}
\label{sec:hard:scatter:kernel}
This section reports the central results of this work, where the hard-scattering kernel is computed
through NLO in $\alpha_s$.
Rather than employ the projector approach given in \eqref{HQR:spin:projector:B},
we choose the perturbative matching method to determine the hard-scattering kernel.
The calculation presented in this section closely follows the analogous NLO calculation
for $B\to\gamma l\nu$~\cite{DescotesGenon:2002mw}.
Unless otherwise specified, the calculation of the form factors $F_{V,A}$ is performed
in the rest frame of the $B$ meson.
The hard-scattering kernel is insensitive to long-distance physics.
To extract it using perturbation theory, it is therefore legitimate to replace the physical $B^+$ meson in
\eqref{HQET:factorization:theorem} with a fictitious $B^+$ meson
composed of a pair of free quarks $[\bar{b}(P-k) u(k)]$.
One can act the following projector on the quark amplitude:
\begin{equation}
v_i (P-k) \bar{u}_j (k)\to {\delta_{ij} \over N_c}
\frac{1-\slashed v}{4} \gamma_5,
\label{eq:projector:bbaru}
\end{equation}
in analogy with the projector (\ref{HQR:spin:projector:Bc}), which guarantees that
the fictitious ``$B^+$ meson'' is a color and spin singlet.
The momentum of the spectator $u$ quark is assumed to be soft, {\it i.e.}, it
scales as $k^\mu\sim \Lambda_{\rm QCD}$.
The LCDA in \eqref{eq:LCDA_HQET} for such a fictitious $B^+$ meson
then becomes
\begin{equation}
\Phi_{[\bar b u]}^{\pm}(\omega)=\frac{1}{v^{\pm}}\displaystyle\int
{dt\over 2\pi}\, e^{i\omega t}\langle [\bar b u](P)|\bar
{u}(z)[z,0]\slashed n_\mp \gamma_5 h_{v}(0)|0\rangle\Big|_{z^{+},z^{\perp}=0}.
\label{eq:LCDA_bbaru}
\end{equation}
Both ingredients in \eqref{HQET:factorization:theorem}
can then be expanded in perturbation theory:
\begin{equation}
\Phi_{[\bar b u]}^+ = \Phi_{[\bar b u]}^{+(0)} + \Phi_{[\bar b u]}^{+(1)}+{\cal O}(\alpha_s^2),\qquad
\qquad
T = T^{(0)} + T^{(1)}+{\cal O}(\alpha_s^2),
\label{Expand:Phi:T}
\end{equation}
with the superscript indicating the powers of $\alpha_s$.
At LO, the LCDAs of the fictitious $B^+$ meson, \eqref{eq:LCDA_bbaru}, look exceedingly simple:
\begin{equation}
\Phi_{[\bar b u]}^{\pm (0)}(\omega)=\frac{1}{v^{\pm}}\delta\left(k^{+}/v^{+}-\omega\right)\text{Tr}\Big[ \frac{1-\slashed v}{4} \gamma_5 \slashed n_\mp\gamma_5\Big]=\delta\left(k^{+}/v^{+}-\omega\right).
\label{eq:LCDA_tree}
\end{equation}
The QCD amplitude on the left side of \eqref{HQET:factorization:theorem} can also be computed
perturbatively. Through NLO in $\alpha_s$, it reads
\begin{equation}
\mathcal{M} =\mathcal{M}^{(0)}+ \mathcal{M}^{(1)}+{\cal O}(\alpha_s^2),
\label{Expanding:fact:ampl:NLO:alphas}
\end{equation}
where
\begin{subequations}
\begin{eqnarray}
\mathcal{M}^{(0)} &=& \Phi_{[\bar b u]}^{+(0)}\otimes T^{(0)},
\label{Def:M0:LO:alphas}
\\
\mathcal{M}^{(1)} &=& \Phi_{[\bar b u]}^{+(0)}\otimes T^{(1)} + \Phi_{[\bar b u]}^{+(1)}\otimes T^{(0)},
\label{Def:M1:NLO:alphas}
\end{eqnarray}
\label{M0:M1:pert:ampl}
\end{subequations}
with $\otimes$ signifying the convolution integral in $\omega$.
Since both ${\cal M}$ and $\Phi_{[\bar b u]}^+$ are perturbatively calculable
for such a fictitious $B^+$ meson, one can solve \eqref{M0:M1:pert:ampl} iteratively,
to ascertain $T$ order by order in $\alpha_s$.
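The iterative structure of \eqref{M0:M1:pert:ampl} can be mimicked with a toy discretization, in which the tree-level LCDA $\Phi^{(0)}$ (a delta function, cf.\ \eqref{eq:LCDA_tree}) becomes the identity matrix and the convolution a matrix-vector product. All entries below are placeholder numbers, purely to exhibit the order-by-order solve.

```python
# discretize omega on a grid; distributions become matrices acting on
# hard-kernel vectors, and the convolution is a matrix-vector product
n = 4
I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # Phi^(0): delta function
Phi1 = [[0.1 / (1 + i + j) for j in range(n)] for i in range(n)]    # placeholder Phi^(1)

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]

# pretend the "true" kernels are known and build the amplitudes ...
T0_true = [1.0 / (k + 1) for k in range(n)]
T1_true = [0.3 / (k + 1) for k in range(n)]
M0 = matvec(I, T0_true)                                  # M^(0) = Phi^(0) x T^(0)
M1 = [a + b for a, b in zip(matvec(I, T1_true), matvec(Phi1, T0_true))]

# ... then solve iteratively: T^(0) = M^(0); T^(1) = M^(1) - Phi^(1) x T^(0)
T0 = M0
T1 = [a - b for a, b in zip(M1, matvec(Phi1, T0))]
assert all(abs(x - y) < 1e-12 for x, y in zip(T0, T0_true))
assert all(abs(x - y) < 1e-12 for x, y in zip(T1, T1_true))
```

The subtraction succeeds because $\Phi^{(0)}$ is trivially invertible, which is precisely what makes the order-by-order extraction of $T$ well defined.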
\subsection{Tree level}
\label{sec:tree:level:hard:kernel}
\begin{figure}
\centering
\includegraphics[clip,width=0.8\textwidth]{Tree.eps}
\caption{The Feynman diagrams for $W^{+}\rightarrow [\bar{b} u]+\gamma$ at tree level.
The bold line represents the $\bar b$ quark.
}
\label{fig:tree}
\end{figure}
As depicted in Fig.~\ref{fig:tree}, there arise three electroweak diagrams at lowest order
that contribute to the quark-level process $W\to [\bar{b}(P-k) u(k)]^{(1)}+\gamma$.
Recall the momentum of the outgoing photon in \eqref{photon:momenta:B:rest:frame}
scales as $\left(q^+,q^-,|{\bf q}_\perp|\right)\sim(0,m_b, 0)$.
The $u$ propagator in Fig.~\ref{fig:tree}$a)$ becomes hard-collinear and contributes a
$1/q\cdot k\sim 1/k^+q^-$ singularity to the amplitude. One can readily convince oneself that the other two diagrams,
the one with the photon emitted from the $\bar{b}$ quark (Fig.~\ref{fig:tree}$b$) and the one with emission via the $WW\gamma$ vertex
(Fig.~\ref{fig:tree}$c$), do not possess such a $1/k^+$ enhancement and can thus be safely dropped.
Therefore, Fig.~\ref{fig:tree}$a)$ yields the following tree-level QCD amplitude
in the heavy quark limit:
\begin{eqnarray}
&&\mathcal{M}^{(0)}(W^{+}\rightarrow [\bar{b} u] + \gamma)
=\frac{e V_{ub} }{2\sqrt{2}\sin \theta_W} \left\langle[\bar b u] (P)\gamma (q,\varepsilon_\gamma)\left|\overline{u}\slashed\varepsilon_W(1-\gamma_{5})b\right|0\right\rangle\nonumber\\
&\approx&{\frac{e_{u} e^2 V_{ub} }{4\sqrt{2}\sin\theta_W q^-
k^+}}\text{Tr}\Big[ \frac{1-\slashed v}{4} \gamma_5 \varepsilon\!\!\!\slash_\gamma^{\ast}q\!\!\!\slash\varepsilon\!\!\!\slash_W(1-\gamma_{5})\Big]
\nonumber\\
&=&{\frac{e_{u} e^2 V_{ub} }{4\sqrt{2}\sin\theta_W
}}\left( -i\frac{\epsilon _ { \mu \nu \alpha \beta } v ^ { \mu } n_- ^ { \nu } \varepsilon _ {W} ^ { \alpha } \varepsilon _ { \gamma } ^ { * \beta } }{v^+} + \varepsilon _ { W } \cdot \varepsilon _ { \gamma } ^ { * } \right)\int_0^\infty {d\omega\over\omega} \delta\left(k^+/v^+-\omega\right).
\label{eq:M_LO}
\end{eqnarray}
With the aid of \eqref{eq:LCDA_tree} and \eqref{Def:M0:LO:alphas},
it is then straightforward to solve for
the tree-level hard-scattering kernel:
\begin{equation}
T^{(0)}(\omega)={\frac{e_{u} e^2 V_{ub} }{4\sqrt{2}\sin\theta_W
}}\left( -i\frac{\epsilon _ { \mu \nu \alpha \beta } P^ { \mu }q ^ { \nu } \varepsilon _ {W} ^ { \alpha } \varepsilon _ { \gamma } ^ { * \beta } }{P\cdot q} + \varepsilon _ { W } \cdot \varepsilon _ { \gamma } ^ { * } \right)\frac{1}{\omega}\,.
\label{eq:T_0}
\end{equation}
Comparing with the Lorentz decomposition specified in \eqref{Ampl:Lorentz:decomp},
one can deduce the final expressions for the vector/axial-vector form factors at tree level:
\begin{align}
F_V^{(0)}=F_A^{(0)}&={\hat{f}_B m_B} \int_0^\infty \frac{d\omega}{\omega} \phi_B^+\big(\omega\big)
={\frac{\hat{f}_B m_B}{\lambda_B}}.
\label{FVA:LO:expression}
\end{align}
We stress that $\Phi_{[\bar{b}u]}^-(\omega)$ indeed does not enter the factorization formula.
Starting from \eqref{HQR:spin:projector:B} and inspecting the spinor structure of \eqref{eq:M_LO},
one can show that the $\Phi_{[\bar{b}u]}^-$-dependent terms vanish due to identities such
as $n_-^2=0$ and $\gamma_\perp^\mu\varepsilon\!\!\!\slash_\gamma^{\ast}\gamma_{\perp\mu}=(4-d)\varepsilon\!\!\!\slash_\gamma^{\ast}$,
with $d=4$ signifying the spacetime dimension.
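Combining the tree-level result \eqref{FVA:LO:expression} with the rate formula \eqref{eq:dcy_wdth} yields a quick order-of-magnitude estimate. All numerical inputs below are rough illustrative assumptions, not the values adopted in Sec.~\ref{sec:phenomenology}.

```python
import math

# illustrative inputs (assumptions for this sketch only)
m_W, m_B = 80.4, 5.28      # GeV
alpha    = 1.0 / 128.0     # QED coupling near the W scale
sin2_thW = 0.231
V_ub     = 3.7e-3
fhat_B   = 0.19            # GeV, HQET decay constant
lam_B    = 0.35            # GeV, inverse-moment parameter lambda_B
Gamma_W  = 2.085           # GeV, total W width
e_u      = 2.0 / 3.0

# tree level: F_V = F_A = fhat_B m_B / lambda_B
F = fhat_B * m_B / lam_B

Gamma = (e_u**2 * math.pi * alpha**2 / (48.0 * sin2_thW * m_W**3)
         * V_ub**2 * (m_W**2 - m_B**2) * 2.0 * F**2)
BR = Gamma / Gamma_W
print(f"Gamma = {Gamma:.2e} GeV, BR = {BR:.1e}")
```

With these placeholder inputs the branching fraction comes out at the $10^{-11}$ level, underscoring how rare this exclusive channel is.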
\subsection{One-loop level}
\label{sec:NLO:alphas:hard:kernel}
\begin{figure}
\centering
\includegraphics[clip,width=0.85\textwidth]{NLO_LCDA.eps}
\caption{One-loop QCD correction to LCDA for a fictitious $B$ meson.
The double line represents the $\bar{b}$ field in HQET, dashed line represents the gauge link.}
\label{Fig:One:loop:LCDA}
\end{figure}
\begin{figure}
\centering
\includegraphics[clip,width=0.8\textwidth]{NLO.eps}
\caption{One-loop QCD correction to the amplitude for $W^{+}\rightarrow
[\bar{b}u]^{(1)}+\gamma$. We just retain those diagrams with photon emitted from the spectator $u$ quark,
which yield leading contribution in the heavy quark limit.}
\label{Fig:One:loop:Amplitude}
\end{figure}
In this subsection, we proceed to extract the hard-scattering kernel through order-$\alpha_s$.
Following the ansatz given in \eqref{Def:M1:NLO:alphas},
the one-loop hard-scattering kernel $T^{(1)}$ can be extracted via
\begin{equation}
\Phi^{(0)}\otimes T^{(1)}=\mathcal{M}^{(1)}-\Phi^{(1)}\otimes T^{(0)}.
\label{Extracting:T1}
\end{equation}
The one-loop diagrams for $\Phi^{(1)}$ and $\mathcal{M}^{(1)}$ are depicted in Fig.~\ref{Fig:One:loop:LCDA}
and Fig.~\ref{Fig:One:loop:Amplitude}, respectively.
By the general principles of effective field theory, both entities on the right-hand side of \eqref{Extracting:T1}
must possess {\it identical} IR divergences, so that upon subtraction $T^{(1)}$ is infrared finite.
It is instructive to compute the difference in \eqref{Extracting:T1}
on a diagram-by-diagram basis.
We employ dimensional regularization (DR), with spacetime dimension
$d=4-2 \epsilon$, to regularize UV divergences.
We treat $\gamma_5$ in the naive dimensional regularization (NDR) scheme,
in which $\gamma_5$ anticommutes with $\gamma^\mu$ ($\mu=0,1,\ldots,d-1$).
We adopt the 't Hooft unit mass $\mu_R$ when calculating
the QCD amplitude $\mathcal{M}^{(1)}$,
and a different 't Hooft unit mass $\mu_F$
when computing $\Phi^{(1)}$.
When evaluating loop integrals, we have redefined the 't Hooft unit mass through
$\mu^2\to \mu^2 {e^{-\gamma_E}\over 4\pi}$, which expedites the renormalization of
the LCDA according to the $\overline{\rm MS}$ subtraction scheme.
Moreover, a nonzero mass $m_u$ is retained for the spectator $u$ quark to regularize the
mass (collinear) singularity. For simplicity we adopt the Feynman gauge.
The Feynman rules for the eikonal vertex and propagator
are given by $-ig_sT^a n_+^\mu$ and $1/p^+$, respectively,
with $p$ denoting the momentum flowing into the gauge link~\cite{Collins:2011zzd}.
The one-loop contributions to the perturbative ``$B^+$ meson'' LCDA,
indicated in Fig.~\ref{Fig:One:loop:LCDA}$a)$, $b)$ and $e)$,
can also be extracted from the {\it soft} loop region of
the electromagnetic vertex correction, the weak vertex correction,
and the light quark propagator correction of their QCD counterparts shown
in Fig.~\ref{Fig:One:loop:Amplitude}.
The contributions from these one-loop bare diagrams
turn out to be~\cite{DescotesGenon:2002mw}
\begin{subequations}
\begin{align}
\Phi_{+\mathrm{em}\!}^{(1)}\otimes T^{(0)}= &
\frac{\alpha_s C_F}{4\pi}\left(\frac{2}{\epsilon}\!-\!4\ln\frac{m_u}{\mu_{F}}\!+\!4\right)\mathcal{M}^{(0)},
\\
\Phi_{+\mathrm{wk}\!}^{(1)}\otimes T^{(0)}=&
\frac{\alpha_s C_F}{4\pi}\left(\frac{1}{\epsilon^{2}}\!-\!\frac{2}{\epsilon}\ln\frac{k^{+}}{v^+\mu_{F}}
+2\ln^{2}\frac{k^{+}}{v^+\mu_{F}}\!+\!\frac{3\pi^{2}}{4}\right)\mathcal{M}^{(0)},
\label{Weak:vertex:LCDA:one:loop}
\\
\Phi^{(1)}_{+\delta Z_u}\!\otimes T^{(0)} =&
\frac{1}{2}\delta Z_u(\mu_F)\mathcal{M}^{(0)},
\end{align}
\end{subequations}
where $\delta Z_u$ is the standard one-loop quark wave function renormalization constant in QCD.
Note that the occurrence of the double UV pole in \eqref{Weak:vertex:LCDA:one:loop} is
a peculiar trait of the HQET LCDA, from which one can infer the cusp anomalous dimension.
The gauge link self-energy diagram Fig.~\ref{Fig:One:loop:LCDA}$d)$ does not contribute to $\Phi_{+}^{(1)}\otimes T^{(0)}$ because its contribution is proportional to $n^2_+=0$.
To extract $T^{(1)}(\omega)$ following the recipe outlined in \eqref{Extracting:T1},
we also need to calculate $\mathcal{M}^{(1)}$, represented by the one-loop QCD diagrams in Fig.~\ref{Fig:One:loop:Amplitude}.
At leading power in $1/m_b$, we are only interested in retaining the contribution to
$\mathcal{M}^{(1)}$ of order $\Lambda_{\rm QCD}^{-1}$.
It is for this reason that we have excluded those diagrams where the photon is emitted
off the $\bar b$ quark or the $W^+$ boson.
It is straightforward to calculate the electromagnetic vertex correction, the weak vertex correction,
and the internal quark self-energy QCD diagrams,
which are depicted in Fig.~\ref{Fig:One:loop:Amplitude} $a)$, $b)$ and $d)$, respectively.
(We have also tacitly included the quark mass counterterm diagrams in order to obtain UV-finite
results.)
All three diagrams possess the same Lorentz structure as $\mathcal{M}^{(0)}$,
hence the corresponding contributions to $T^{(1)}(\omega)$ can be readily extracted by
subtracting the respective contributions of type $\Phi_{+}^{(1)}\otimes T^{(0)}$ in Fig.~\ref{Fig:One:loop:LCDA}.
We then obtain~\footnote{Note the matching procedure specified in \eqref{Extracting:T1} involves
the difference between two {\it renormalized} quantities. Including the quark wave-function and mass renormalization,
the one-loop QCD amplitude becomes UV finite and free from $\mu_R$ dependence; for the HQET LCDA,
one tacitly utilizes the $\overline{\rm MS}$ subtraction scheme to render it finite. For simplicity,
in the following we will drop the UV poles in the hard-scattering
kernel associated with each individual Feynman diagram.}
\begin{subequations}
\begin{align}
T_{\mathrm{em}}^{(1)} (\omega)= &\frac{\alpha_s C_F}{4\pi}T^{(0)}(\omega)\left(\ln\frac{2q^{-}v^{+} \omega}{\mu_F^2}+2\ln\frac{\mu_R}{\mu_F}-4-i\pi\right),
\\
\notag T_{\mathrm{wk}}^{(1)} (\omega) =& \frac{\alpha _s
C_F}{4\pi}T^{(0)}(\omega)\left\{\ln \frac{2 q^{-}
v^{+}\omega}{\left(1-r\right) m_W^2}
\left[ \ln \frac{2 \left(1-r\right) m_W^2 q^{-} v^{+}\omega}{\mu _F^4}-2\right]\right.
\\
\notag &+2\left(\ln^2\frac{m_b}{\mu _F}- \ln \frac{m_b}{\mu _R}\right)+2\mathrm{Li}_2(r)+\ln^2(1-r)
+\left[r+2 \ln \left(1-r\right)-1\right]\ln\frac{r}{1-r}
\\
&\left.+\frac{\pi ^2}{12}-i\pi \left(r+2\ln \frac{2 q^{-}
v^{+}\omega}{ m_W^2}-1\right)\right\},
\\
T_{\Sigma}^{(1)}(\omega) = &\frac{\alpha_s C_F}{4\pi}T^{(0)}(\omega)
\left(\ln\frac{2q^{-}v^{+}\omega}{\mu_R^2}-1-i\pi\right),
\end{align}
\label{T:one:bulk:of:diagrams}
\end{subequations}
where we have introduced a dimensionless ratio for convenience:
\begin{equation}
r \equiv m^2_b/m^2_W.
\label{def:r:ratio}
\end{equation}
The LSZ reduction formula also requires us to consider the
wave function correction to the external $u$ quark leg,
represented by Fig.~\ref{Fig:One:loop:Amplitude}$e)$ and Fig.~\ref{Fig:One:loop:LCDA}$e)$.
They contribute to $T^{(1)}$ through
\begin{align}
\Phi_{+}^{(0)}\big(\omega\big)\otimes T_{\delta Z_u}^{(1)}
\big(\omega\big)= \mathcal{M}_{\delta Z_u}^{(1)} -
T^{(0)}\big(\omega\big)\otimes\Phi_{+\delta Z_u}^{(1)}\big(\omega\big)\,.
\label{T1:delta:Zu}
\end{align}
Obviously, the $u$ quark self-energy diagrams
are identical between Fig.~\ref{Fig:One:loop:Amplitude}$e)$ and Fig.~\ref{Fig:One:loop:LCDA}$e)$.
Substituting $\mathcal{M}_{\delta Z_u}^{(1)} =
\tfrac{1}{2}\delta Z_u(\mu_R)\mathcal{M}^{(0)}$ and $\Phi_{+\delta Z_u}^{(1)} =
\tfrac{1}{2}\delta Z_u(\mu_F)\Phi_{+}^{(0)}$ in \eqref{T1:delta:Zu}, one finds
\begin{equation}
T_{\delta Z_u}^{(1)}
(\omega)=\frac{1}{2}\left[\delta Z_u(\mu_R)-\delta Z_u(\mu_F)\right]T^{(0)}(\omega)
=\frac{\alpha_sC_F}{4\pi}\ln\frac{\mu_F}{\mu_R}T^{(0)}(\omega),
\label{T:one-loop:Zu}
\end{equation}
which simply vanishes once we set $\mu_R=\mu_F$.
In a similar vein, one also needs to consider the external $\bar{b}$ leg correction
in the QCD amplitude and the HQET LCDA, as depicted in Fig.~\ref{Fig:One:loop:Amplitude}$f)$ and Fig.~\ref{Fig:One:loop:LCDA}$f)$.
The on-shell $b$-quark wave function renormalization constant in QCD reads
\begin{equation}
\delta Z_b\left(\mu\right) = {\alpha_s C_F\over 4\pi}\left( -{1\over \epsilon_{\rm UV}}-
{2\over \epsilon_{\rm IR}} - 3 \ln {\mu^2 \over m_b^2} - 4\right).
\end{equation}
When computing the $\bar{b}$ quark self-energy contribution to $\Phi_{+}^{(1)}\otimes T^{(0)}$, one encounters a vanishing result
in DR. This is because the self-energy diagram of an on-shell $b$ quark in HQET yields a scaleless integral.
If we insist on distinguishing the UV and IR poles,
the $b$ quark on-shell wave function renormalization constant in HQET then reads
\begin{equation}
\delta \hat{Z}_{b}= {\alpha_s C_F\over 4 \pi}\left({2\over \epsilon_{\text{UV}}}-{2\over \epsilon_{\text{IR}}}\right),
\end{equation}
which obviously has the same IR pole as $\delta Z_b$ in full QCD.
Discarding the UV poles, the external $\bar{b}$ leg corrections yield the following
contribution to $T^{(1)}$:
\begin{equation}
T_{\delta Z_b}^{(1)} (\omega)=\frac{1}{2}\left[\delta Z_b(\mu_R)-\delta \hat
Z_b(\mu_F)\right]T^{(0)}(\omega)=\frac{\alpha_sC_F}{4\pi}
\left(2\ln\frac{m_b}{\mu_F}+\ln\frac{m_b}{\mu_R}-2\right)T^{(0)}(\omega).
\label{T:one-loop:Zb}
\end{equation}
At leading power in $1/m_b$, it turns out that, reassuringly,
the box diagram in Fig.~\ref{Fig:One:loop:Amplitude}$c)$
makes a vanishing contribution to the hard-scattering kernel.
The reason is that the loop momentum generates the leading contribution to $\mathcal{M}^{(1)}_{\mathrm{box}}$
only in the {\it soft} region ($l^\mu \sim \Lambda_{\rm QCD}$),
which is already fully captured by $\Phi^{(1)}_{\mathrm{box}}\otimes T^{(0)}$. Therefore no contribution to
$T^{(1)}_{\mathrm{box}}$ arises, which exhibits the same pattern as what is observed
in the case of $B\to \gamma l\nu$~\cite{DescotesGenon:2002mw}.
Piecing all the relevant one-loop contributions in \eqref{T:one:bulk:of:diagrams},
\eqref{T:one-loop:Zu} and
\eqref{T:one-loop:Zb} together, we
can deduce the complete hard-scattering kernel at ${\cal O}(\alpha_s)$:
\begin{align}
\label{T1:full:expression}
T^{(1)}(\omega,m_b, \mu_F) =&\frac{\alpha_sC_F}{4\pi}\Bigg\{\ln
^2\frac{2 q^{-} v^{+}\omega}{\mu_F^2}-2
\ln^2\frac{m_{b}}{\mu_F}+\left(5-4 \ln\frac{1-r}{r}\right)
\ln\frac{m_{b}}{\mu_F}
\\
\notag &+2\mathrm{Li}_2(r)+\ln^2r-\left(2\ln\frac{1-r}{r}-3+r\right)
\ln\frac{1-r}{r}+\frac{\pi ^2}{12}-7
\nonumber\\
&- i\pi \left[2 \ln \frac{2 q^{-}
v^{+}\omega}{\mu_F^2}
-4\ln\frac{m_{b}}{\mu_F}-r-4\ln{\left(1-r\right)}+2\ln r+3\right]\Bigg\} T^{(0)}(\omega).\nonumber
\end{align}
It should be noted that the renormalization scale $\mu_R$ has disappeared in the final answer,
as a consequence of the conservation of the (axial-)vector current in QCD.
Nevertheless, $T^{(1)}$ still explicitly depends on
the factorization scale $\mu_F$. One readily checks that the
$\mu_F$ dependence of $T^{(1)}$ reads
\begin{align}
\mu_F\frac{d}{d\mu_F} T^{(1)}(\omega,\mu_F)=-\frac{\alpha_s C_F}{4\pi}\left(4\ln\frac{\omega}{\mu_F}+5\right)T^{(0)}(\omega)+\mathcal{O}\left( \alpha_s^2 \right).
\end{align}
Hearteningly, this specific $\mu_F$ dependence is exactly what we need.
In conjunction with the $\mu_F$ dependence of $\hat{f}_B(\mu_F)$ specified
in \eqref{hat:f:matching:fB} and the scale dependence of $\phi_B^+(\omega,\mu_F)$ governed by \eqref{LN:evolution:eq},
such a scale dependence of the hard-scattering kernel
guarantees that the physical amplitude $\mathcal{M}^{(1)}$
in \eqref{HQET:factorization:theorem} is independent of the artificial scale $\mu_F$.
We have formally assigned $m_W\sim m_b$ in order to justify the HQET factorization.
Nevertheless, we should admit that $m_W\gg m_b$ in the realistic world. This
practical hierarchy manifests itself in the large logarithm of $m_W/m_b$ in the hard-scattering kernel.
To see this more transparently, let us expand $ T^{(1)}(\omega,m_b, \mu_F)$ in \eqref{T1:full:expression}
to the zeroth order of $r$:
\begin{align}
\label{T1:omega:expanded}
\notag T^{(1)}(\omega,m_b, \mu_F) \Big\vert_{\rm expd}= &\frac{\alpha_s C_F}{4\pi}
\Bigg[\ln^{2}\frac{\omega}{\mu_F}-\ln^{2}\frac{m_b}{\mu_F}+\ln\frac{m_b}{\mu_F}
\left(5+2\ln\frac{\omega}{\mu_F}\right)
+\ln\frac{m_W^2}{m_b^2}\left(3+2\ln\frac{\omega}{m_b}\right)
\\
&+\frac{\pi^2}{12}-7-i\pi\left(3+2\ln\frac{\omega}{m_b}\right)\Bigg] T^{(0)}(\omega)+\mathcal{O}(r).
\end{align}
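As a quick consistency check (our own, based on simple numerical differentiation of the real part of the expanded kernel above, with sample inputs of our choosing), one can verify that the $\mu_F$-derivative of \eqref{T1:omega:expanded} reproduces the evolution behavior quoted earlier:

```python
import math

def T1(w, muF, mb, mW):
    """Real part of the r -> 0 expanded kernel, with the overall
    factor alpha_s C_F/(4 pi) T^(0)(omega) stripped off."""
    return (math.log(w/muF)**2 - math.log(mb/muF)**2
            + math.log(mb/muF)*(5 + 2*math.log(w/muF))
            + math.log(mW**2/mb**2)*(3 + 2*math.log(w/mb))
            + math.pi**2/12 - 7)

# Sample point (illustrative values); expect mu_F dT1/dmu_F = -(4 ln(w/mu_F) + 5)
w, mb, mW, muF = 0.4, 4.6, 80.379, 3.0
h = 1e-6
deriv = muF*(T1(w, muF + h, mb, mW) - T1(w, muF - h, mb, mW))/(2*h)
assert abs(deriv + 4*math.log(w/muF) + 5) < 1e-6
```

The imaginary part of the expanded kernel is $\mu_F$-independent, so the real part suffices for this check.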
Interestingly, we notice the occurrence of the single collinear logarithm ${\alpha_s C_F\over 4\pi}\ln{m_W\over m_b}$,
which might be large and potentially spoil the convergence of the perturbative expansion.
We admit this is an inevitable drawback of the fixed-order calculation in the HQET factorization.
In future work, we will illustrate how to apply the ERBL equation to effectively resum these types of
large logarithms through a refactorization program. The theoretical framework that renders such a
resummation program feasible is based upon a recently proposed factorization theorem
that links the LCDAs of $B$ meson defined in full QCD
and HQET~\cite{Ishaq:2019dst}.
With the ${\mathcal O}(\alpha_s)$ hard-scattering kernel at hand,
we are then able to present the NLO perturbative corrections to the form factors $F_{V/A}$:
\begin{align}
&F_V^{(1)} = F_A^{(1)}= F_{V/A}^{(0)} \int_0^\infty
\frac{d\omega}{\omega}
\frac{T^{(1)}\big(\omega\big)}{T^{(0)}\big(\omega\big)}
\phi_B^+\big(\omega\big)
\nonumber\\
& = F_{V/A}^{(0)} {\alpha_s C_F\over 4\pi}
\bigg\{-\ln^2{m_b\over \mu_F} -\ln{m_b\over \mu_F}
\left(2 \ln {1-r\over r}- 2\right)+ 2 {\rm Li}_2 (r)-\ln^2(1-r)
\nonumber\\
&+2\ln r\ln(1-r)+\left(3-r\right)\ln\frac{1-r}{r}+\frac{\pi^{2}}{12}-5-
2 \sigma_{B,1} \left(\ln\frac{1-r}{r}+\ln\frac{m_b}{\mu_F}\right)
\nonumber\\
&-\sigma_{B,2}+{i \pi}\left[2\ln\frac{m_b}{\mu_F}-3+r+2\ln\big(1-r\big)+2\sigma_{B,1}\right]\bigg\},
\label{eq:F_VA_01}
\end{align}
where the first inverse moment $\lambda_B^{-1}$ and the $n$-th logarithmic moments $\sigma_{B,n}\lambda_B^{-1}$ of the
$B$ meson LCDA have been defined in (\ref{first:inv:moment}) and (\ref{Log:first:inverse:moments}).
In the final expression, we have also utilized \eqref{hat:f:matching:fB} to
trade $\hat{f}_B$ in favor of the QCD decay constant $f_B$.
One can check that the NLO prediction $F_{V/A}^{(0)}+F_{V/A}^{(1)}$ becomes independent of
the factorization scale $\mu_F$ up to errors of $\mathcal{O}(\alpha_s^2)$.
Equation~\eqref{eq:F_VA_01} constitutes the most important result of this work.
The equality between $F_V$ and $F_A$ at NLO in $\alpha_s$ looks peculiar, and
should not simply be viewed as a coincidence.
The underlying reason is likely the heavy-quark spin symmetry, which
is an exact symmetry at the lowest order in $1/m_b$.
It is instructive to compare \eqref{eq:F_VA_01}, which entails the NLO corrections rigorously
derived from HQET factorization, with the corresponding expressions of the form factors
for a similar process with $B^+$ meson replaced with the flavored quarkonium
$B_c^+$, {\it e.g.}, $W^+\rightarrow B_{c}^+\gamma$~\cite{Feng:2019meh}.
This exclusive quarkonium production process has recently been investigated
through ${\cal O}(\alpha_s)$ within NRQCD factorization framework~\cite{Feng:2019meh}.
There the NRQCD short-distance coefficient for such a case contains three distinct scales:
$m_W$, $m_b$ and $m_c$. Besides the standard light-cone limit $m_W\gg m_b\sim m_c$,
the authors of \cite{Feng:2019meh} have also investigated the so-called ``heavy quark limit'',
where the NRQCD short-distance coefficient is expanded according to the scale hierarchy $m_W\sim m_b\gg m_c$.
In such a limit, the expanded NRQCD short-distance coefficient reads~\cite{Feng:2019meh}
\begin{align}
& F_{A}^{(1)} = {F_{A}^{(0)}} \, {\alpha_s C_F\over 4\pi} \bigg\{
-\ln^2 x_0+ \ln x_0 \left(2 \ln \frac{1-r}{r}-5\right) + 2\mathrm{Li}_2(r) - \ln ^{2}(1-r)
\nonumber\\
& + 2 \ln(1-r)\ln r + (3-r) \ln \frac{1-r}{r} -{2\pi^2\over 3}-9 + i \pi\left[ -2\ln x_0 -3+r+ 2\ln (1-r) \right]\bigg\},
\label{Bc:NRQCD:short-distance-coeff:Heavy-Quark-Limit}
\end{align}
where $x_0\equiv \tfrac{m_c}{m_b+m_c}\approx \tfrac{m_c}{m_b}$. It is amazing that \eqref{Bc:NRQCD:short-distance-coeff:Heavy-Quark-Limit} resembles
\eqref{eq:F_VA_01} in many aspects, once the substitution $\mu_F = \omega \rightarrow m_c$ is
made in \eqref{eq:F_VA_01}. A simplification of this scale-fixing is that
the logarithmic moments $\sigma_{B,n}$ ($n=1,2$) drop out of \eqref{eq:F_VA_01}.
It looks peculiar that these two expressions agree on the bulk
of (di-)logarithmic functions of $r$, but differ only
in the constant terms~\footnote{To ``perfectly'' match
\eqref{Bc:NRQCD:short-distance-coeff:Heavy-Quark-Limit} with \eqref{eq:F_VA_01},
one may attempt to substitute the analytic $B^+$ meson LCDA defined in HQET~\cite{Bell:2008er},
where the $B^+$ meson is modelled as a free nonrelativistic $\bar{b} u$ pair.
The $\mu_F$ dependence in \eqref{eq:F_VA_01} would cancel explicitly,
after adding the two convolution integrals $\Phi_{[\bar b u]}^{+(0)}\otimes T^{(1)}$ and
$\Phi_{[\bar b u]}^{+(1)}\otimes T^{(0)}$.
After the substitutions $\lambda_B\rightarrow m_c$, $\sigma_{B,1}\rightarrow -\ln m_c /\mu_F$ and $\sigma_{B,2}\rightarrow -\ln^2 m_c /\mu_F$ are made, \eqref{eq:F_VA_01} becomes almost identical to
\eqref{Bc:NRQCD:short-distance-coeff:Heavy-Quark-Limit}.}.
Similar to \eqref{T1:omega:expanded}, we can also expand the form factors $F_{V/A}^{(1)}$ to
the zeroth order in $r$:
\begin{align}
F_{V/A}^{(1)}\Big\vert_{\rm expd} = & {F_{V/A}^{(0)}} \frac{\alpha_s C_F}{4\pi}
\Bigg[\ln{m^2_b\over m^2_W}\left(2\ln{m_b \over \mu_F}+2\sigma_{B,1}-3\right)
-\ln^{2}\frac{m_b}{\mu_F}+2\left(1-\sigma_{B,1}\right)\ln\frac{m_b}{\mu_F}
\nonumber\\
&-\sigma_{B,2}+\frac{\pi^2}{12}-5+i\pi\left(2\ln\frac{m_b}{\mu_F}
+2\sigma_{B,1}-3\right)\Bigg]+\mathcal{O}(r).
\label{Form:factors:expanded:r}
\end{align}
Given the huge hierarchy between $m_W$ and $m_b$, we expect that this approximate
expression should be numerically quite close to the exact one in \eqref{eq:F_VA_01}.
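This closeness is easy to check numerically. In the sketch below (the sample values of $m_b$, $\mu_F$, $\sigma_{B,1}$, $\sigma_{B,2}$ are our own illustrative choices), we evaluate the curly bracket of \eqref{eq:F_VA_01} and the square bracket of \eqref{Form:factors:expanded:r} at an artificially tiny value of $r$ and confirm that they agree up to ${\cal O}(r\ln^2 r)$ corrections:

```python
import math

# Illustrative inputs (our own sample choices):
mb, muF = 4.6, 3.0                 # GeV
s1, s2 = 0.5, 1.0                  # sigma_{B,1}, sigma_{B,2}
r = 1.0e-6                         # r = m_b^2/m_W^2, artificially tiny

def Li2(x):                        # dilogarithm via its Taylor series (small |x|)
    return sum(x**k / k**2 for k in range(1, 40))

L = math.log(mb/muF)
lr = math.log((1 - r)/r)

# Curly bracket of eq. (eq:F_VA_01), i.e. F^{(1)}/F^{(0)} without alpha_s C_F/(4 pi)
full = (-L**2 - L*(2*lr - 2) + 2*Li2(r) - math.log(1 - r)**2
        + 2*math.log(r)*math.log(1 - r) + (3 - r)*lr
        + math.pi**2/12 - 5 - 2*s1*(lr + L) - s2
        + 1j*math.pi*(2*L - 3 + r + 2*math.log(1 - r) + 2*s1))

# Square bracket of the r -> 0 expansion; note log(m_b^2/m_W^2) = log(r)
expd = (math.log(r)*(2*L + 2*s1 - 3) - L**2 + 2*(1 - s1)*L
        - s2 + math.pi**2/12 - 5 + 1j*math.pi*(2*L + 2*s1 - 3))

assert abs(full - expd) < 1e-3
```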
\section{Numerical results}
\label{sec:phenomenology}
\begin{figure}[!htb]
\centering
\includegraphics[width=\textwidth]{LCDA_Evltn.eps}
\caption{Profiles of the LCDAs $\phi^{+}_{B/D_s}(\omega,\mu_F)$ at some typical values of the
factorization scale.} \label{fg:LCDA_Evltn}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=\textwidth]{InvMmntsFgv1.eps}
\caption{Scale dependence of the first inverse moments $\lambda^{-1}_{M}$ and
the logarithmic inverse moments $\lambda^{-1}_{M} \sigma_{M,1/2}$ for $M= B^+, D_s^+$.
The factorization scale ranges from 1 GeV to twice the meson mass.}\label{Fig:Inverse:Moments:evol}
\end{figure}
In this section, we carry out detailed numerical predictions for vector/axial-vector form factors related to $W^+$
radiative decay into $B^+$ and $D_s^+$ mesons, as well as the corresponding partial widths and branching fractions.
The impact of the NLO perturbative corrections is also investigated.
We specify various input parameters as follows~\cite{Tanabashi:2018oca,Jegerlehner:2011mw}:
\begin{table}[H]
\begin{centering}
\begin{tabular}{llll}
$\sin\theta_{W}=0.481,\;\;$ & $\alpha\left(m_{W}/2\right)=1/130,\;\;$ & $m_{W}=80.379\;\mathrm{GeV},\;\;$ & $f_{B}=0.187\;\mathrm{GeV},$ \tabularnewline
$\left|V_{ub}\right|=3.65\times10^{-3},\;\;$ & $m_{b}=4.6\;\mathrm{GeV},\;\;$ & $m_{B}=5.279\;\mathrm{GeV},\;\;$& $f_{D_s}=0.249\;\mathrm{GeV},$\tabularnewline
$\left|V_{cs}\right|=0.997,\;\;$ & $m_{c}=1.4\;\mathrm{GeV},\;\;$ & $m_{D_s}=1.968\;\mathrm{GeV}.$\tabularnewline
\end{tabular}
\par\end{centering}
\end{table}
We utilize the automated package
\texttt{HOPPET}~\cite{Salam:2008qg} to evaluate the QCD running coupling $\alpha_s$ at one-loop accuracy,
which appropriately incorporates the effects associated with crossing flavor thresholds.
For the LCDAs of the $B^+$ and $D_s^+$ mesons defined at the initial scale $\mu_{F\,0}=1$ GeV,
we employ the simple exponential ansatz, first introduced by Grozin and Neubert~\cite{Grozin:1996pq}:
\begin{align}
\phi_{M}^{+}\big(\omega\big)=&\frac{\omega}{\lambda_M^2}\exp\left({-\frac{\omega}{\lambda_M}}\right),
\end{align}
with $\lambda_B=0.360$ GeV~\cite{Gelb:2018end} and $\lambda_{D_s}=0.294$ GeV~\cite{Yang:2016wtm}.
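A virtue of this ansatz is that $\lambda_M$ is literally the first inverse moment: assuming the standard convention $\lambda_M^{-1}=\int_0^\infty d\omega\,\phi_M^+(\omega)/\omega$ for \eqref{first:inv:moment} (our reading, since the definition is given elsewhere), a quick numerical quadrature confirms this, together with the unit normalization of the LCDA:

```python
import math

lam = 0.360    # lambda_B in GeV, the value adopted in the text

def phi(w):    # Grozin-Neubert exponential ansatz
    return w/lam**2 * math.exp(-w/lam)

# Midpoint-rule quadrature on (0, 40*lam], ample for this decaying integrand
N = 200_000
h = 40*lam/N
grid = [(k + 0.5)*h for k in range(N)]
inv_moment = sum(phi(w)/w for w in grid)*h   # should equal 1/lam
norm = sum(phi(w) for w in grid)*h           # should equal 1

assert abs(inv_moment - 1/lam) < 1e-4
assert abs(norm - 1.0) < 1e-5
```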
We would like to understand how the predicted decay rates depend on
the artificial factorization scale $\mu_F$.
To this end, we first need to know how the LCDAs, and the corresponding (logarithmic) inverse moments, vary
with $\mu_F$.
Analytic solutions of the Lange-Neubert evolution equation in \eqref{LN:evolution:eq} have
recently become available~\cite{Lee:2005gza,Bell:2013tfa}.
Nevertheless in this work, we are content with numerically solving the Lange-Neubert equation via the
fourth order Runge-Kutta method, with the help of the package \texttt{GNU Scientific Library}~\cite{GSL}.
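The fourth-order Runge-Kutta stepper itself is entirely standard; the following minimal sketch (our own toy illustration on $y'=-y$, not the actual Lange-Neubert kernel, which additionally involves a convolution in $\omega$) shows the scheme that underlies the evolution:

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2*k1)
    k3 = f(t + h/2, y + h/2*k2)
    k4 = f(t + h, y + h*k3)
    return y + h/6*(k1 + 2*k2 + 2*k3 + k4)

# Toy problem y' = -y, y(0) = 1, whose exact solution is exp(-t)
y, t, h = 1.0, 0.0, 0.01
while t < 1.0 - 1e-12:
    y = rk4_step(lambda t, y: -y, t, y, h)
    t += h
assert abs(y - math.exp(-1.0)) < 1e-9
```

In practice, one discretizes the LCDA on a grid in $\omega$ and evolves the resulting coupled system in $\ln\mu_F$ with steps of this kind.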
The profiles of the LCDAs for $B$ and $D_s$ at several factorization scales
are depicted in Fig.~\ref{fg:LCDA_Evltn}.
In Fig.~\ref{Fig:Inverse:Moments:evol}, we plot the scale dependence of the first inverse moment
and the logarithmic inverse moments for both $B^+$ and $D^+_s$ mesons. We observe that the moments $\lambda_M^{-1}$
and $\lambda^{-1}_{M} \sigma_{M,1}$ exhibit a mild $\mu_F$ dependence between 1 GeV and twice the meson mass,
whereas the logarithmic inverse moment $\lambda^{-1}_{M} \sigma_{M,2}$ develops a relatively
stronger $\mu_F$ dependence.
\begin{figure}[!htb]
\centering
\includegraphics[width=\textwidth]{FVAFgp.eps}
\caption{Factorization scale dependence of the vector/axial-vector form factors $F_{V/A}$ at LO and NLO in $\alpha_s$,
for the processes $W^+\to B^+\gamma$ and
$W^+\to D^+_s\gamma$, respectively. The range of $\mu_F$ lies between 1 GeV and twice the meson mass.}
\label{Fig:FromFactor:scale:dep}
\end{figure}
We define the LO and NLO predictions for the form factors as
$F^{\rm LO}_{V/A}\equiv F^{(0)}_{V/A}$ and $F^{\rm NLO}_{V/A}\equiv F^{(0)}_{V/A}+F^{(1)}_{V/A}$.
At a given $\mu_F$, we evaluate the form factors through ${\cal O}(\alpha_s)$
in accordance with \eqref{FVA:LO:expression} and \eqref{eq:F_VA_01}.
In Fig.~\ref{Fig:FromFactor:scale:dep}, we plot the variation of the form factors with $\mu_F$,
at both LO and NLO accuracy. Clearly the ${\mathcal O}(\alpha_s)$ correction is negative and significant for
both $W^+\to B^+\gamma$ and $W^+\to D_s^+\gamma$, especially at relatively small $\mu_F$.
Notice that the scale dependence of the form factors is significantly reduced after incorporating the ${\mathcal O}(\alpha_s)$ correction, in particular at relatively large $\mu_F$.
One can analytically prove that $F^{\mathrm{NLO}}_{V/A}$ in \eqref{eq:F_VA_01} is independent of
$\mu_F$ through $\mathcal{O}(\alpha_s)$. Notwithstanding the considerable reduction of the $\mu_F$ dependence relative to the
LO prediction, $F^{\mathrm{NLO}}_{V/A}$ in Fig.~\ref{Fig:FromFactor:scale:dep} still bears notable
factorization scale dependence, particularly at small $\mu_F$.
The residual scale dependence is clearly caused by neglected higher-order corrections.
At small $\mu_F$, the scale dependence may be amplified either by the large $\alpha_s$ or by the large prefactors
accompanying $\ln \mu_F$.
One source causing the residual $\mu_F$ dependence can be readily identified, that is,
the $\ln m^2_b/m^2_W \ln m_b/\mu_F$ term in \eqref{Form:factors:expanded:r}.
Numerically, the collinear logarithm is sizable, $\ln m_b^2/m_W^2\approx -5.7$,
which is largely responsible for the fall of the NLO prediction of the form factors with decreasing $\mu_F$.
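With the input values adopted in this section ($m_b=4.6$ GeV, $m_W=80.379$ GeV), this estimate is simple to confirm:

```python
import math

mb, mW = 4.6, 80.379               # GeV, as quoted in the numerical section
log_collinear = math.log(mb**2 / mW**2)
assert abs(log_collinear - (-5.7)) < 0.05   # ln(m_b^2/m_W^2) is about -5.72
```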
In order to quantify our reasoning about the major source governing the residual scale dependence,
in Fig.~\ref{Fig:FromFactor:mW:dep} we artificially tune the value of $m_W$ from $80.38$ GeV down to $10$ GeV
for $W^+\to B^+\gamma$, and from the physical mass down to $5~\mathrm{GeV}$ for $W^+\to D^+_s\gamma$.
As can be clearly visualized from Fig.~\ref{Fig:FromFactor:mW:dep}, the smaller $m_W$ is chosen, the weaker the scale dependence of $F^{\mathrm{NLO}}_{V/A}$ at small $\mu_F$.
One might be tempted to conclude that, in order to reduce the $\mu_F$ dependence in HQET factorization,
it appears desirable to resum the large collinear logarithm $\ln m_W^2/m_b^2$ appearing in the hard-scattering kernel to all orders
in $\alpha_s$. Note that the LN equation can only serve to sum the large logarithm $\ln m_b/\Lambda_{\rm QCD}$, which has already been included
in our numerical analysis. The occurrence of the large logarithm $\ln m_W^2/m_b^2$ is a weakness of the
HQET factorization, since the two distinct scales $m_W$ and $m_b$ have not been disentangled in this approach.
Resumming collinear logarithms of this type can only be accomplished by
appealing to the ERBL equation in the collinear factorization framework.
We recall that, for the analogous hard exclusive heavy quarkonium
production processes, exemplified by $\gamma^*\to \eta_c+\gamma$ and $H\to J/\psi+\gamma$, the leading collinear logarithm of type $\ln Q/m_c$ has been identified and resummed to all orders in $\alpha_s$,
but the numerical effect turns out to be modest~\cite{Jia:2008ep}.
It is desirable if the similar goal can be achieved for exclusive heavy-light hadron production.
Very recently, we have proposed a novel factorization theorem, which attempts to
refactorize the $B$ meson LCDA in full QCD into the LCDA defined in HQET,
convoluted with a perturbatively calculable $Z$ function~\cite{Ishaq:2019dst}.
The underlying motivation is to separate the short-distance effect of order $m_b$ out of the QCD LCDA,
which cannot be a genuinely perturbative object.
Starting from the standard collinear factorization approach, armed with this factorization formula, we may
readily make an optimized prediction which combines the virtues of both collinear and HQET factorization approaches.
This improved approach will greatly facilitate the resummation of both types of logarithms, $\ln m_b/\Lambda_{\rm QCD}$
and $\ln m_W^2/m_b^2$. We hope to present the optimized prediction for this process in a future publication.
\begin{figure}[!htb]
\centering
\includegraphics[width=\textwidth]{FVA_mW_v1.eps}
\caption{Dependence of ${\rm Re}[F^{\rm NLO}_{V/A}]$ on $\mu_F$, for several fictitious values of the $W$ mass.
The factorization scale ranges from 1 GeV to twice the meson mass.} \label{Fig:FromFactor:mW:dep}
\end{figure}
\begin{table}[!htb]
\begin{centering}
\begin{tabular}{|c|c|c|c|}
\hline
& $\vphantom{\frac{L^L}{L^L}}\Gamma^{\mathrm{LO}}$ (GeV) & $\Gamma^{\mathrm{NLO}}$ (GeV) & $\mathrm{Br}^{\mathrm{NLO}}$\tabularnewline
\hline
$\vphantom{\frac{L^L}{L^L}}W^{+}\rightarrow B^{+}\gamma$ & $\left(0.75\sim1.9\right)\times10^{-11}$ & $\left(3.1\sim7.7\right)\times10^{-12}$ & $\left(1.5\sim3.7\right)\times10^{-12}$\tabularnewline
$\vphantom{\frac{L^L}{L^L}}W^{+}\rightarrow D_{s}^{+}\gamma$ & $\left(0.72\sim1.3\right)\times10^{-7}$ & $\left(4.9\sim8.4\right)\times10^{-8}$ & $\left(2.3\sim4.0\right)\times10^{-8}$\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\caption{Numerical predictions for the partial widths and branching ratios of the processes
$W^+\rightarrow B^+(D^+_s)\gamma$. The uncertainty is estimated by sliding $\mu_F$ from 1 GeV to twice the meson mass.}
\label{Tab.nmrcl_prdctn}
\end{table}
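As a plausibility check on Table~\ref{Tab.nmrcl_prdctn}, the quoted branching fractions should follow from dividing the NLO partial widths by the total $W$ width; assuming the PDG value $\Gamma_W \approx 2.085$ GeV (our assumption, not stated explicitly in the text), the endpoints of the quoted ranges are indeed reproduced:

```python
Gamma_W = 2.085   # total W width in GeV (PDG value; our own assumption here)

# (NLO partial width in GeV, quoted branching fraction) pairs from the table
checks = [
    (3.1e-12, 1.5e-12), (7.7e-12, 3.7e-12),   # W+ -> B+ gamma
    (4.9e-8,  2.3e-8),  (8.4e-8,  4.0e-8),    # W+ -> Ds+ gamma
]
for width, br in checks:
    assert abs(width/Gamma_W - br)/br < 0.05  # agrees within rounding
```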
\begin{figure}[!htb]
\centering
\includegraphics[width=\textwidth]{BR_muF_Fg.eps}
\caption{Branching fractions of $W^+\rightarrow B^+\gamma$ and $W^+\rightarrow D_s^+\gamma$ as a function of
$\mu_F$, which ranges from 1 GeV to twice the meson mass. Our predictions are juxtaposed with the existing ones obtained
from the collinear factorization~\cite{Grossmann:2015lea}, which are represented by the green bands.
}\label{Fig:BR}
\end{figure}
We employ \eqref{eq:dcy_wdth} to compute the partial decay widths.
For the NLO prediction, we take the absolute square of the form factors, including their imaginary part in
$F_{V/A}^{(1)}$, without truncating the partial width strictly at ${\cal O}(\alpha_s)$.
We present the LO and NLO predictions for the partial widths and branching ratios of $W^+\to B^+\gamma$ and $W^+\to D_s^+\gamma$
in Table~\ref{Tab.nmrcl_prdctn} and Fig.~\ref{Fig:BR}, respectively.
The difference of several orders of magnitude between $W^+\rightarrow B^+\gamma$ and $W^+\rightarrow D_s^+\gamma$ is primarily due
to $|V_{cs}|\gg |V_{ub}|$.
The NLO corrections turn out to be sizable, shifting the LO prediction by $-83\%\sim -2\%$ for $W^+\rightarrow B^+\gamma$ and by
$-62\%\sim +18\%$ for $W^+\rightarrow D^+_s\gamma$, respectively.
Recall for a similar decay process $W\to B_c+\gamma$, the ${\cal O}(\alpha_s)$ correction has also been found to
considerably reduce the LO result~\cite{Feng:2019meh}.
Our state-of-the-art prediction for the branching fraction of $W^+\rightarrow B^+\gamma$ lies between
$(1.5-3.7)\times 10^{-12}$. It is difficult to observe such an extremely rare decay channel in the foreseeable future,
even after the integrated luminosity of LHC reaches 3000 ${\rm fb}^{-1}$.
On the other hand, the branching fraction for $W^+\rightarrow D_s^+\gamma$ is predicted to be within
$(2.3-4.0)\times 10^{-8}$, which may have bright observation prospect in the future LHC experiments.
Lastly, it is also interesting to remark that our NLO predictions in HQET factorization are
somewhat greater than those obtained from the standard collinear factorization approach for $W^+\rightarrow B^+\gamma$,
but compatible with the latter for $W^+\rightarrow D_s^+\gamma$,
albeit within large errors~\cite{Grossmann:2015lea}.
\section{Summary and outlook}
\label{sec:summary}
In this work, inspired by the NRQCD factorization for hard exclusive heavy quarkonium production,
we have formulated the HQET factorization approach, tailored for describing the
hard exclusive production of heavy-flavor mesons. This approach is in spirit quite different from
the standard collinear factorization for hard exclusive production.
We have taken $W^+\rightarrow B^+(D^+_s)+\gamma$ as the prototype processes to
illustrate our theoretical framework, especially including the complete
NLO perturbative correction.
By verifying that the NLO hard-scattering kernel is IR finite,
we have explicitly confirmed that the HQET factorization formula in \eqref{HQET:factorization:theorem} indeed
holds at the first nontrivial order in $\alpha_s$, at the lowest order in $1/m_b$.
It is conceivable that this factorization formula may hold to all orders in $\alpha_s$.
Interestingly, the vector and axial-vector form factors $F_{V/A}$ remain identical through NLO in $\alpha_s$, which may
be attributed to the heavy-quark spin symmetry.
The NLO perturbative corrections turn out to be substantial and negative. Our predictions for
$W^+\rightarrow B^+(D_s^+)+\gamma$ are compared with the existing ones obtained within the collinear factorization approach.
While the $W^+\rightarrow B^+ +\gamma$ process is perhaps too suppressed to tag experimentally, the process
$W^+\rightarrow D_s^+ +\gamma$ may have a fair chance to be observed in future LHC experiments.
A nuisance of the fixed-order calculation in the HQET factorization approach is that the hard-scattering kernel
is inevitably plagued with large collinear logarithms of $m_W/m_b$, which may potentially ruin the
convergence of the perturbative expansion. This reflects the fact that the two hard scales $m_W$ and $m_b$ are not yet
disentangled in this approach.
Recently a new factorization theorem that links the $B$ meson
LCDAs defined in QCD and in HQET has been discovered~\cite{Ishaq:2019dst}.
This factorization formula opens the gate for effectively merging both HQET factorization and collinear factorization
to make the optimized predictions. Within this new scheme, both large logarithms of
type $\ln m_b/\Lambda_{\rm QCD}$ and $\ln m_W/m_b$ can be efficiently resummed with the aid of LN equation and
ERBL equation. We hope to illustrate this improved theoretical framework in the future publication.
\begin{acknowledgments}
We thank Feng Feng and Wen-Long Sang for valuable discussions. The Feynman diagrams in this manuscript
were prepared by using JaxoDraw~\cite{Binosi:2003yf}.
The work of S.~I. and Y.~J. is supported in part by the National Natural Science Foundation of China under Grants No.~11875263,
No.~11621131001 (CRC110 by DFG and NSFC). S.~I. also wishes to acknowledge the financial support
from the CAS-TWAS President's Fellowship Programme.
The work of X.-N.~X. is supported in part by the Deutsche Forschungsgemeinschaft (Sino-German CRC 110).
The work of D.-S.~Y. is supported in part by the National Natural Science Foundation of China under Grants No.~11275263 and 11635009.
\end{acknowledgments}
\subsection{Preliminaries}
A polynomial homotopy is a family of polynomial systems
which depend on one parameter. Numerical continuation methods
to track solution paths defined by a homotopy are classical;
see, e.g., \cite{AG03} and~\cite{Mor87}.
Studies of deformation methods in symbolic computation
appeared in~\cite{BMWW04}, \cite{CPHM01}, and~\cite{HKPSW00}.
In particular, the application of Pad{\'{e}} approximants
in~\cite{JMSW09} stimulated our development of methods to
compute power series.
\noindent {\bf Problem statement.} We want to define an efficient,
numerically stable, and robust algorithm to compute a power series
expansion for a solution curve of a polynomial homotopy.
The input is a list of polynomials in several variables,
where one of the variables is a parameter denoted by $t$,
and a value of $t$ near which information is desired.
The output of the algorithm is a tuple of series in $t$ that vanish up to a certain
degree when plugged in to either the original equations or, in special cases, a
transformation of the original equations.
A power series for a solution curve forms the input to the computation
of a Pad{\'{e}} approximant for the solution curve, which will then
provide a more accurate predictor in numerical path trackers.
Polynomial homotopies define deformations of polynomial systems
starting at generic instances and moving to specific instances.
Tracking solution paths that start at singular solutions is not
supported by current numerical polynomial homotopy software systems.
At singular points we encounter series with fractional powers,
Puiseux series.
\noindent {\bf Background and related work.} As pointed out in~\cite{BM01},
polynomials, power series, and Toeplitz matrices are closely related. A direct
method to solve block banded Toeplitz systems is presented in~\cite{CV10}. The
book~\cite{BG96} is a general reference for methods related to approximations
and power series. We found inspiration for the relationship between
higher-order Newton-Raphson iterations and Hermite interpolation in~\cite{KT74}.
The computation of power series is a classical topic in computer
algebra~\cite{GCL92}. In~\cite{ACGS04}, new algorithms are proposed
to manipulate polynomials by values via Lagrange interpolation.
The Puiseux series field is one of the building blocks of tropical algebraic
geometry~\cite{MS15}. For the leading terms of the Puiseux series,
we rely on tropical methods~\cite{BJSST07}, and in particular on the
constructive proof of the fundamental theorem of tropical algebraic
geometry~\cite{JMM08}, see also~\cite{Kat08} and~\cite{Pay09}.
Computer algebra methods for Puiseux series in two dimensions
can be found in~\cite{PR12}.
\noindent {\bf Our contributions.} Via linearization, rewriting matrices of
series into series with matrix coefficients, we formulate the problem of
computing the updates in Newton's method as a block structured linear algebra
problem. For matrix series where the leading coefficient is regular, the
solution of the block linear system satisfies the Hermite interpolation problem.
For general matrix series, where several of the leading matrix coefficients may
be rank deficient, Hermite-Laurent interpolation applies. We characterize when
these cases occur using the algebraic variety of an augmented system. To solve the block
diagonal linear system, we propose to reduce the coefficient matrix to a lower
triangular echelon form, and we provide a brief analysis of its cost.
The source code for the algorithm presented in this paper is
archived at github via our accounts {\tt nbliss} and {\tt janverschelde}.
\noindent {\bf Acknowledgments.} We thank the organizers of the ILAS 2016
minisymposium on Multivariate Polynomial Computations and Polynomial Systems,
Bernard Mourrain, Vanni Noferini, and Marc Van Barel, for giving the second
author the opportunity to present this work. In addition, we are grateful to
the anonymous referee who supplied many helpful remarks.
\subsection{Motivating Example: Pad\'e Approximant}\label{introPade}
One motivation for finding a series solution is that
once it is obtained,
one can directly compute the associated Pad\'e
approximant, which often has much better convergence properties.
Pad{\'e} approximants~\cite{BG96} are applied in
symbolic deformation algorithms~\cite{JMSW09}.
In this section we reproduce~\cite[Figure~1.1.1]{BG96}
in the context of polynomial homotopy continuation.
Consider the homotopy
\begin{equation}
(1-t) (x^2 - 1) + t (3 x^2 - 3/2) = 0.
\end{equation}
The function
$\displaystyle x(t) = \left( \frac{1+t/2}{1+2t} \right)^{1/2}$
is a solution of this homotopy.
Its second-order Taylor series at $t = 0$ is
$s(t) = 1 - 3 t/4 + 39 t^2/32 + O(t^3)$.
The Pad\'{e} approximant of degree one in numerator and
denominator is $\displaystyle q(t) = \frac{1 + 7t/8}{1 + 13t/8}$.
In Figure~\ref{figex4pade} we see that the series approximates
the function only in a small interval and then diverges,
whereas the Pad{\'{e}} approximant is more accurate.
\begin{figure}[hbt]
\begin{center}
\includegraphics[scale=0.7]{ex4pade.png}
\caption{Comparing a Pad{\'{e}} approximant to a series approximation
shows the promise of applying Pad{\'{e}} approximants as predictors
in numerical continuation methods.}
\label{figex4pade}
\end{center}
\end{figure}
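The $[1/1]$ Pad\'e approximant above is obtained directly from the Taylor coefficients by matching $q(t) = (a_0 + a_1 t)/(1 + b_1 t)$ against the series through order $t^2$, which gives $a_0 = c_0$, $a_1 = c_1 + b_1 c_0$, and $b_1 = -c_2/c_1$. A short sketch in exact rational arithmetic (our own illustration):

```python
from fractions import Fraction

# Taylor coefficients of x(t) at t = 0, as quoted in the text
c0, c1, c2 = Fraction(1), Fraction(-3, 4), Fraction(39, 32)

# [1/1] Pade approximant q(t) = (a0 + a1*t)/(1 + b1*t):
# matching q(t) = c0 + c1*t + c2*t^2 + O(t^3) gives the relations below
b1 = -c2/c1
a0, a1 = c0, c1 + b1*c0
assert (a0, a1, b1) == (1, Fraction(7, 8), Fraction(13, 8))
```

The result reproduces $q(t) = (1 + 7t/8)/(1 + 13t/8)$ from the text.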
\subsection{Motivating Example: Viviani's Curve}\label{introViviani}
\begin{figure}[hbtp]
\begin{center}
\includegraphics[scale=.24]{viviani.png}
\caption{Viviani's curve as the intersection of a sphere with a cylinder.}
\label{vivianicurve}
\end{center}
\end{figure}
Viviani's curve is defined as the
intersection of the sphere $(x_1+2)^2 + x_2^2 + x_3^2 = 4$ and the cylinder
$(x_1+1)^2 + x_2^2 = 1$ such that the surfaces are tangent at a single point;
see Figure~\ref{vivianicurve}.
Our methods will allow us to find the Taylor series expansion around
any point on a 1-dimensional variety,
assuming we have suitable starting information. For example, the origin
$(0,0,0)$ satisfies both equations of Viviani's curve. This is the point where
the curve intersects itself, so the curve
is {\em singular}\footnote{Definition~\ref{def:singular}
makes this precise for general curves.}
there, meaning algebraically that the Jacobian drops rank, and
geometrically that the tangent space does not have the expected dimension.
If we apply our methods at this point, we obtain the following
series solution for $x_1,x_2,x_3$:
\begin{equation}\label{vivianiSeries}
\left\{\begin{array}{l}
-2t^{2} \\
2t - t^{3} - \frac{1}{4}t^{5} - \frac{1}{8}t^{7} - \frac{5}{64}t^{9} -
\frac{7}{128}t^{11} - \frac{21}{512}t^{13} - \frac{33}{1024}t^{15} \\
2t
\end{array}\right.
\end{equation}
This solution is plotted in Figure~\ref{vivianiApprox} for a varying number of
terms. To check the correctness, we can substitute (\ref{vivianiSeries}) into
the original equations, obtaining residuals that are $O(t^{18})$. The vanishing of the
lower-order terms confirms that we have indeed found an approximate series
solution. Such a solution, possibly transformed into an associated Pad\'e
approximant, would allow for path tracking starting at the origin.
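This substitution check can be scripted. The following sympy sketch (for illustration only) substitutes the truncated series~(\ref{vivianiSeries}) into the two defining equations and verifies that all terms of degree below 18 cancel:

```python
import sympy as sp

t = sp.symbols('t')
# The truncated series solution (vivianiSeries):
x1 = -2*t**2
x2 = (2*t - t**3 - sp.Rational(1, 4)*t**5 - sp.Rational(1, 8)*t**7
      - sp.Rational(5, 64)*t**9 - sp.Rational(7, 128)*t**11
      - sp.Rational(21, 512)*t**13 - sp.Rational(33, 1024)*t**15)
x3 = 2*t

# Residuals of the sphere and the cylinder equations:
sphere = sp.expand((x1 + 2)**2 + x2**2 + x3**2 - 4)
cylinder = sp.expand((x1 + 1)**2 + x2**2 - 1)

# All terms of degree below 18 cancel in both residuals:
print(all(sphere.coeff(t, k) == 0 for k in range(18)),
      all(cylinder.coeff(t, k) == 0 for k in range(18)))
```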
\begin{figure}[hbtp]
\begin{center}
\includegraphics[scale=.24]{vivianiAtSingularity3.png}
\caption{Viviani's curve, with improving series approximations
and thus more accurate predictions for points on the curve.}
\label{vivianiApprox}
\end{center}
\end{figure}
\section{The Problem and Our Solution}
\subsection{Problem Setup}\label{sec:problemSetup}
For a polynomial system $\mathbf{f} = (f_1,f_2,\ldots,f_m)$ where each $f_i \in {\mathbb C}[t,x_1,\ldots,x_n]$,
the solution variety ${\mathbb V}(\mathbf{f})$ is the set of points $\mathbf{p} \in {\mathbb C}^{n+1}$ such
that $f_1(\mathbf{p})=\cdots=f_m(\mathbf{p})=0$. Let $\mathbf{f}$ be a system such that
the solution variety is 1-dimensional over ${\mathbb C}$ and is not contained in the
$t=0$ coordinate hyperplane. We seek to understand
${\mathbb V}(\mathbf{f})$ by treating the $f_i$'s as elements of ${\mathbb C}((t))[x_1,\ldots,x_n]$, or in
other words, polynomials in $x_1,\ldots,x_n$ with coefficients in the ring of
formal Laurent series ${\mathbb C}((t))$. In this context we will denote the system by
$\tilde{\mathbf{f}}$.
Our approach is to use Newton iteration on the system
$\tilde{\mathbf{f}}$. Namely, we find some starting $\mathbf{z}\in {\mathbb C}((t))^n$ and
repeatedly solve
\begin{equation} \label{eqnewton}
J_{\tilde{\mathbf{f}}}(\mathbf{z}) \Delta \mathbf{z} = - \tilde{\mathbf{f}}(\mathbf{z})
\end{equation}
for the update $\Delta \mathbf{z}$ to $\mathbf{z}$, where $J_{\tilde{\mathbf{f}}}$ is the Jacobian
matrix of $\tilde{\mathbf{f}}$ with respect to $x_1,\ldots,x_n$. This is a system of
equations that is linear over ${\mathbb C}((t))$, so the problem is well-posed.
Computationally speaking, one approach to solving it would be to overload the
operators on (truncated) power series
and apply basic linear algebra techniques.
A main point of our paper is that this method can be improved upon.
Of course, applying Newton's method requires a starting guess; here we must
define what it means to be singular:
\begin{definition}\label{def:singular}{\rm
A point $\mathbf{p}$ on a $d$-dimensional component of a variety ${\mathbb V}(\mathbf{f}) \subset
{\mathbb C}^n$ is {\em regular} if the Jacobian of $\mathbf{f}$ evaluated at $\mathbf{p}$ has rank
$n-d$. Points that are not regular are called {\em singular}.}
\end{definition}
In most cases the starting guess for Newton's method can just be a point
$\tilde{\mathbf{p}} = (p_1,\ldots,p_n)$ such that $\mathbf{p} = (0,p_1,\ldots,p_n)$ is in ${\mathbb V}(\mathbf{f})$.
However, if $\mathbf{p}$ is a singular point, this is insufficient. In addition, $\mathbf{p}$
could be a branch point (which we define later), in which case it is also
not enough to use as the starting guess for Newton's method.
We solve two problems in this paper. First, we find an effective way to perform
the Newton step; the framework is established in
Section~\ref{sec:newtonStep}, and our solution is laid out in
Section~\ref{sec:echelonForm}. And second, we discuss the prelude to
Newton's method in Section~\ref{sec:startingGuess}, characterizing when
techniques from tropical geometry are needed to transform the problem and obtain
the starting guess.
\subsection{The Newton Step}\label{sec:newtonStep}
Solving the Newton step~(\ref{eqnewton}) amounts to solving a linear system
\begin{equation}\label{basiclinear}
\mathbf{A}\mathbf{x} = \mathbf{b}
\end{equation}
over the field ${\mathbb C}((t))$.
Our first step is linearization, which turns a vector of
series into a series of vectors, and likewise for a matrix series.
In other words, we refactor the problem and think of $\mathbf{x}$ and $\mathbf{b}$
as in ${\mathbb C}^n((t))$
instead of ${\mathbb C}((t))^n$, and $\mathbf{A}$ as in ${\mathbb C}^{n\times n}((t))$
instead of ${\mathbb C}((t))^{n\times n}$.
Suppose that $a$ is the lowest order of a term in $\mathbf{A}$, and $b$ the lowest
order of a term in $\mathbf{b}$. Then we can write the linearized quantities as
\begin{align}
\mathbf{A} &= A_0t^a + A_1t^{a+1}+\ldots,\\
\mathbf{b} &= \mathbf{b}_0t^b + \mathbf{b}_1t^{b+1} + \ldots, \text{ and}\\
\mathbf{x} &= \mathbf{x}_0t^{b-a} + \mathbf{x}_1t^{b-a+1}+\ldots
\end{align}
where $A_i\in{\mathbb C}^{n\times n}$ and
$\mathbf{b}_i,\mathbf{x}_i\in{\mathbb C}^n$. Expanding and equating powers of $t$, the linearized
version of (\ref{basiclinear}) is therefore equivalent to solving
\begin{eqnarray}\label{staggeredSystem}
A_0 \mathbf{x}_0 & = & \mathbf{b}_0 \nonumber \\
A_0 \mathbf{x}_1 & = & \mathbf{b}_1 - A_1 \mathbf{x}_0 \nonumber \\
A_0 \mathbf{x}_2 & = & \mathbf{b}_2 - A_1 \mathbf{x}_1 - A_2 \mathbf{x}_0 \\
& \vdots & \nonumber \\
A_0 \mathbf{x}_d & = & \mathbf{b}_d - A_1 \mathbf{x}_{d-1} - A_2 \mathbf{x}_{d-2} - \cdots - A_d \mathbf{x}_0 \nonumber
\end{eqnarray}
for some $d$. This can be written in block matrix form as
\begin{equation} \label{eqbiglinsys}
\left[
\begin{array}{ccccc}
A_0 & & & & \\
A_1 & A_0 & & & \\
A_2 & A_1 & A_0 & & \\
\vdots & \vdots & \vdots & \ddots & \\
A_d & A_{d-1} & A_{d-2} & \cdots & A_0 \\
\end{array}
\right]
\left[
\begin{array}{c}
\mathbf{x}_0 \\
\mathbf{x}_1 \\
\mathbf{x}_2 \\ \vdots \\
\mathbf{x}_d
\end{array}
\right]
=
\left[
\begin{array}{c}
\mathbf{b}_0 \\
\mathbf{b}_1 \\
\mathbf{b}_2 \\ \vdots \\
\mathbf{b}_d
\end{array}
\right].
\end{equation}
For the remainder of this paper, we will use $\mathbf{z}$ and $\Delta \mathbf{z}$ to denote
vectors of series, while $\mathbf{x}$ and $\Delta \mathbf{x}$ will denote their linearized
counterparts, that is, series which have vectors for coefficients.
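When $A_0$ is regular, the staggered system~(\ref{staggeredSystem}) is straightforward to implement. The following numpy sketch (with randomly generated data, for illustration only; in practice one would factor $A_0$ once and reuse the factorization) solves for the coefficient vectors and verifies the convolution identity:

```python
import numpy as np

def staggered_solve(A_blocks, b_blocks):
    """Solve the staggered system: A_blocks[k] is A_k, b_blocks[k] is b_k,
    and the returned xs[k] is x_k. Assumes A_blocks[0] is regular."""
    A0 = A_blocks[0]
    xs = []
    for k, bk in enumerate(b_blocks):
        rhs = np.array(bk, dtype=float)
        for j in range(1, k + 1):
            if j < len(A_blocks):     # missing blocks are treated as zero
                rhs -= A_blocks[j] @ xs[k - j]
        xs.append(np.linalg.solve(A0, rhs))
    return xs

# A small random instance (not from the paper):
rng = np.random.default_rng(0)
A = [rng.standard_normal((3, 3)) for _ in range(3)]
b = [rng.standard_normal(3) for _ in range(4)]
x = staggered_solve(A, b)

# Check: the coefficient of t^k in A(t) x(t) equals b_k for k = 0,...,3.
for k in range(4):
    conv = sum(A[j] @ x[k - j] for j in range(min(k, 2) + 1))
    print(np.allclose(conv, b[k]))
```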
\begin{example}
{\rm Let
\begin{equation}
\mathbf{f}=(2t^2 + tx_1 - x_2 + 1, x_1^3 - 4t^2 + tx_2 + 2t - 1).
\end{equation}
Starting with $\mathbf{z} = (1,1)$, the
first Newton step $J_{\tilde{\mathbf{f}}}(\mathbf{z}) \Delta \mathbf{z} = -
\tilde{\mathbf{f}}(\mathbf{z})$ can be written:
\begin{equation}
\left[
\begin{array}{rr}
t & -1 \\
3 & t
\end{array}
\right]
\Delta\mathbf{z}
=
-
\left[
\begin{array}{r}
t + 2 t^{2} \\
3 t - 4 t^{2}
\end{array}
\right].
\end{equation}
To put this in linearized form, we have $a=0$, $b=1$,
\begin{equation}
A_0 = \left[\begin{array}{rr}
0 & -1 \\
3 & 0
\end{array}\right],
A_1 = \left[\begin{array}{rr}
1 & 0 \\
0 & 1
\end{array}\right],
\end{equation}
\begin{equation}
\mathbf{b}_0 = \left[\begin{array}{r}
-1 \\
-3
\end{array}\right],\text{ and }
\mathbf{b}_1 = \left[\begin{array}{r}
-2 \\
4
\end{array}\right].
\end{equation}
Since $A_0$ is regular, we can solve in staggered form as
in~(\ref{staggeredSystem}), which yields the next term:
\begin{equation}
\Delta\mathbf{x} = \left[\begin{array}{r}
- 1 \\
1
\end{array}\right]t.
\end{equation}
After another iteration, our series solution is
\begin{equation}\label{regularSol}
\left[\begin{array}{r}
1 - t \\
1 + t + t^{2}
\end{array}\right].
\end{equation}
In fact this is the entire series solution for
$\mathbf{f}$ --- substituting~(\ref{regularSol})
into $\mathbf{f}$ causes both polynomials to vanish completely.
}
\end{example}
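The iterations of this example can be reproduced with a few lines of sympy (a sketch for illustration; our actual implementation is in PHCpack), truncating each update to twice the number of correct terms, in accordance with the quadratic convergence of Newton's method:

```python
import sympy as sp

t, x1, x2 = sp.symbols('t x1 x2')
f = sp.Matrix([2*t**2 + t*x1 - x2 + 1,
               x1**3 - 4*t**2 + t*x2 + 2*t - 1])
J = f.jacobian([x1, x2])

z = sp.Matrix([1, 1])   # starting guess: the solution at t = 0
order = 1               # number of correct terms so far
for _ in range(2):
    sub = {x1: z[0], x2: z[1]}
    dz = J.subs(sub).LUsolve(-f.subs(sub))
    # quadratic convergence: each step doubles the number of correct terms
    dz = dz.applyfunc(lambda e: sp.series(e, t, 0, 2*order).removeO())
    z = (z + dz).applyfunc(sp.expand)
    order *= 2
print(z.T)   # recovers the solution (1 - t, 1 + t + t^2)
```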
\begin{remark}{\rm
We constructed the example above so its solution is a series with
finitely many terms, a polynomial. The solution of~(\ref{basiclinear})
can be interpreted as the solution obtained via Hermite interpolation.
Observe that for a series
\begin{equation}
\mathbf{x}(t) = \mathbf{x}_0 + \mathbf{x}_1 t + \mathbf{x}_2 t^2 + \mathbf{x}_3 t^3 + \cdots + \mathbf{x}_k t^k + \cdots
\end{equation}
its Maclaurin expansion is
\begin{equation}
\mathbf{x}(t) = \mathbf{x}(0) + \mathbf{x}'(0) t + \frac{1}{2} \mathbf{x}''(0) t^2
+ \frac{1}{3!} \mathbf{x}'''(0) t^3 + \cdots
+ \frac{1}{k!} \mathbf{x}^{(k)}(0) t^k + \cdots
\end{equation}
where $\mathbf{x}^{(k)}(0)$ denotes
the $k$-th derivative of $\mathbf{x}(t)$ evaluated at zero. Then:
\begin{equation}
\mathbf{x}_k = \frac{1}{k!} \mathbf{x}^{(k)}(0), \quad k=0, 1, \ldots
\end{equation}
Solving~(\ref{basiclinear}) up to degree~$d$ implies that all
derivatives up to degree~$d$ of $\mathbf{x}(t)$ at $t = 0$ match the solution.
If the solution is a polynomial, then this polynomial will be obtained
if~(\ref{basiclinear}) is solved up to the degree of the polynomial.}
\end{remark}
\subsection{The Starting Guess, and Related Considerations}
\label{sec:startingGuess}
Our hope is that a solution $\mathbf{z}(t)$ of $\tilde{\mathbf{f}}$ parameterizes the curve
in some neighborhood of a point $\mathbf{p} \in {\mathbb V}(\mathbf{f})$. In other words, if $\pi$
is the projection map of ${\mathbb V}(\mathbf{f})$ onto the $t$-coordinate axis, then
$\mathbf{z}(t)$ should be a branch of $\pi^{-1}$.
It is natural to think that there are two scenarios for the starting point $\mathbf{p}
\in {\mathbb V}(\mathbf{f})$, namely that it is a regular point or it is singular. And indeed,
when $\mathbf{p}$ is singular, tropical methods are required. Intuitively,
at a singular point, knowing just the point itself is insufficient to
determine the series; higher-derivative information is required. Observe the
second frame of Figure~\ref{fig:nodalCurves}.
The point $\mathbf{p}$ being regular, however, is not enough. Consider the third
frame of Figure~\ref{fig:nodalCurves}. Here $x=0$ cannot be lifted because the
origin is a {\em branch point} of the curve. In other words, the derivative at
$\mathbf{p}$ in terms of $t$ is undefined, so a Taylor series in $t$ is impossible
without a transformation of the problem.
The proper way to check if Newton's method can be applied directly to
$\mathbf{p}$, or whether tropical methods are needed, is by checking if $\mathbf{p}$ is a
singular point of ${\mathbb V}(\mathbf{f})\cap {\mathbb V}(t)$. Setting $\mathbf{f}_{{\rm aug}}=
(t,f_1,\ldots,f_n)$, we have ${\mathbb V}(\mathbf{f}_{{\rm aug}}) = {\mathbb V}(\mathbf{f})\cap {\mathbb V}(t)$.
We can thus use ${\mathbb V}(\mathbf{f}_{{\rm aug}})$ to distinguish the first frame of
Figure~\ref{fig:nodalCurves} from the latter two. This is summarized and proven
in the following.
\begin{figure}
\captionsetup[subfigure]{labelformat=empty}
\centering
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[scale=.15]{nodal1.png}
\caption{a general point}
\end{subfigure}%
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[scale=.15]{nodal2.png}
\caption{a singularity}
\end{subfigure}
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[scale=.15]{nodal3.png}
\caption{a branch point}
\end{subfigure}
\captionsetup{font=scriptsize}
\caption{\label{fig:nodalCurves}
Lifting $x=0$ to three different types of point.
In general, the line $x=0$ intersects the curve at regular points.
If the curve intersects itself for $x=0$, we are at a singular point.
The curve turns at a branch point.}
\end{figure}
\begin{proposition} \label{regularStartProp}
Let $\mathbf{p} = (0,p_1,\ldots,p_n) \in {\mathbb V}(\mathbf{f})$, and set $\tilde{\mathbf{p}} =
(p_1,\ldots,p_n)$. Then $\mathbf{p}$ is a regular point of ${\mathbb V}(\mathbf{f}_{{\rm aug}})$ if and
only if for every step of Newton's method applied to $\mathbf{x}(t):=\tilde{\mathbf{p}}$, $a=0$ and
$A_0$ has full rank.
\end{proposition}
\begin{proof}
$(\Rightarrow)$ By definition, $\mathbf{p}$ is a regular point of $\mathbf{f}_{{\rm aug}}$ if and
only if $J_{\mathbf{f}_{{\rm aug}}}(\mathbf{p})$ has full rank. But note that $J_{\mathbf{f}_{{\rm aug}}}$ is
\begin{equation}\label{proofJac1}
\left[
\begin{array}{cccc}
1 & 0 & \cdots & 0 \\
df_1/dt & df_1/dx_1 & \cdots & df_1/dx_n \\
df_2/dt & df_2/dx_1 & \cdots & df_2/dx_n \\
\vdots & \vdots & & \vdots \\
df_m/dt & df_m/dx_1 & \cdots & df_m/dx_n
\end{array}
\right].
\end{equation}
and $J_{\tilde{\mathbf{f}}}$ is
\begin{equation}\label{proofJac2}
\left[
\begin{array}{ccc}
df_1/dx_1 & \cdots & df_1/dx_n \\
df_2/dx_1 & \cdots & df_2/dx_n \\
\vdots & & \vdots \\
df_m/dx_1 & \cdots & df_m/dx_n
\end{array}
\right].
\end{equation}
So $J_{\mathbf{f}_{{\rm aug}}}$ has full rank at $\mathbf{p}$
if and only if $J_{\tilde{\mathbf{f}}}|_{t=0}$ has full rank at $\tilde{\mathbf{p}}$. Thus it suffices to show that
after each Newton step, $a=0$ and $\mathbf{x}(0)=\tilde{\mathbf{p}}$ remain true, so that $A_0 =
J_{\tilde{\mathbf{f}}}(\mathbf{x}(0)) = J_{\tilde{\mathbf{f}}}(\tilde{\mathbf{p}})|_{t=0}$ continues to
have full rank.
We clearly have $a\geq 0$ at every step, since the Newton iteration cannot
introduce negative exponents. At the beginning, $a=0$ and $\mathbf{x}(0)=\tilde{\mathbf{p}}$ hold
trivially. Inducting on the Newton steps,
if $a=0$ and $\mathbf{x}(0)=\tilde{\mathbf{p}}$ at some point
in the algorithm, then the next $A_0$,
namely $J_{\tilde{\mathbf{f}}}(\mathbf{x}(0)) = J_{\tilde{\mathbf{f}}}(\tilde{\mathbf{p}})|_{t=0}$, is the same matrix as
in the last step, hence it is again regular and $a$ is 0.
Since $\tilde{\mathbf{f}}(\mathbf{x}(0)) = \tilde{\mathbf{f}}(\tilde{\mathbf{p}})|_{t=0}=0$, $b$ must be strictly
greater than~0. Thus the next Newton
update $\Delta \mathbf{x}$ must have positive degree in all components,
leaving $\mathbf{x}(0)= \tilde{\mathbf{p}}$ unchanged.
$(\Leftarrow)$ If $\mathbf{p}$ is a singular point of ${\mathbb V}(\mathbf{f}_{{\rm aug}})$,
then on the first Newton step $A_0=J_{\tilde{\mathbf{f}}}(\tilde{\mathbf{p}})|_{t=0}$ must drop rank by the
same argument as above comparing~(\ref{proofJac1}) and~(\ref{proofJac2}).
\end{proof}
To summarize the cases:
\begin{lemma}\label{startingScenarios}
There are three possible scenarios for ${\mathbb V}(\mathbf{f}_{{\rm aug}})$:
\begingroup \addtolength{\jot}{0.5em}
\begin{align}
&\text{1.~~} \exists \mathbf{p} \in {\mathbb V}(\mathbf{f}_{{\rm aug}})\text{ regular,}\nonumber \\
&\text{2.~~} \exists \mathbf{p} \in {\mathbb V}(\mathbf{f}_{{\rm aug}})\text{ singular, or} \nonumber\\
&\text{3.~~} \nexists \mathbf{p} \in {\mathbb V}(\mathbf{f}_{{\rm aug}})\nonumber
\end{align} \endgroup
\end{lemma}
In the first case, we can simply use $\tilde{\mathbf{p}} = (p_1,p_2,\ldots,p_n)$
to start the Newton iteration. In the
second, we must defer to tropical methods in order to obtain the necessary
starting $\mathbf{z}$, which will lie in ${\mathbb C}[[t]]^n$.
In the final case, we also defer to tropical methods,
which provide a starting $\mathbf{z}$ that will have negative exponents.
A change of coordinates brings the problem back into one of
the first two cases, and we can apply our method directly.
It is important to reiterate that $\mathbf{p}$ may be a
regular point of ${\mathbb V}(\mathbf{f})$ but a singular point of
${\mathbb V}(\mathbf{f}_{{\rm aug}})$, as is the case in the third frame of
Figure~\ref{fig:nodalCurves}. The following example also demonstrates this
behavior.
\begin{example}[Viviani, continued] {\rm
In Section~\ref{introViviani} we introduced the example of Viviani's curve. If
we translate by a substitution so that setting $x_1=0$ gives not the singular point at
the origin, but instead the highest and lowest points on the curve, the system
becomes
\begin{equation}\label{vivianiEqs}
\mathbf{f} = (x_1^2 + x_2^2 + x_3^2 - 4, (x_1-1)^2 + x_2^2 - 1).
\end{equation}
When $x_1=0$ we obtain the two points $(0,0,2)$ and $(0,0,-2)$, which are both
regular points.
For the augmented system $\mathbf{f}_{{\rm aug}}$, the Jacobian $J_{\mathbf{f}_{{\rm aug}}}$ is
\begin{equation}
\left[\begin{array}{rrr}
1 & 0 & 0 \\
2 x_1 & 2 x_2 & 2 x_3 \\
2 x_1 - 2 & 2 x_2 & 0
\end{array}\right]
\end{equation}
which at the point $\mathbf{p} = (0,0,2)$ becomes
\begin{equation}
\left[\begin{array}{rrr}
1 & 0 & 0 \\
0 & 0 & 4 \\
-2 & 0 & 0
\end{array}\right].
\end{equation}
This matrix drops rank, hence $\mathbf{p}$ is a singular point of $\mathbf{f}_{{\rm aug}}$ and we
are in the second case of Lemma~\ref{startingScenarios}.
Following Lemma~\ref{startingScenarios},
we defer to tropical methods to begin, obtaining
the transformation $x_1\rightarrow 2t^2$ and the starting term
$\mathbf{z}=(2t,2)$. Now the first Newton step can be written:
\begin{equation}\label{vivianiNewton}
\left[
\begin{array}{rr}
4 t & 4 \\
4 t & 0
\end{array}
\right]
\Delta\mathbf{z}
=
-
\left[
\begin{array}{r}
4 t^{2} + 4 t^{4} \\
4 t^{4}
\end{array}
\right].
\end{equation}
Note that $J_{\tilde{\mathbf{f}}}(\mathbf{z})$ is now invertible over ${\mathbb C}((t))$.
Its inverse begins with negative exponents of~$t$:
\begin{equation}
\left[
\begin{array}{cc}
0 & 1/4 \\
1/4~t^{-1} & -1/4~t^{-1}
\end{array}
\right].
\end{equation}
To linearize, we first observe that $a=0$ and $b=2$,
so $\Delta\mathbf{x}$ will have order at least $b-a=2$.
The linearized block form of (\ref{vivianiNewton}) is then
\begin{equation}\label{vivianiNewtonLinearized}
\left[
\begin{array}{rr|rr|rr}
0 & 4 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
\hline
4 & 0 & 0 & 4 & 0 & 0 \\
4 & 0 & 0 & 0 & 0 & 0 \\
\hline
0 & 0 & 4 & 0 & 0 & 4 \\
0 & 0 & 4 & 0 & 0 & 0
\end{array}
\right]
\Delta \mathbf{x}
=
\left[
\begin{array}{r}
-4 \\
0 \\
0 \\
0 \\
-4 \\
-4
\end{array}
\right].
\end{equation}
Whether we solve (\ref{vivianiNewton}) over ${\mathbb C}((t))$ or solve
(\ref{vivianiNewtonLinearized}) in the least squares sense,
we obtain the same Newton update
\begin{equation}
\Delta\mathbf{x}=
\left[\begin{array}{r}
0 \\
-1
\end{array}\right]t^2 +
\left[\begin{array}{r}
-1 \\
0
\end{array}\right]t^3,
\end{equation}
or in non-linearized form,
\begin{equation}
\Delta\mathbf{z}=
\left[\begin{array}{r}
-t^{3} \\
-t^{2}
\end{array}\right].
\end{equation}
Substituting $\mathbf{z}+\Delta\mathbf{z} = (2t-t^3,2-t^2)$ into (\ref{vivianiEqs}) produces
$(t^{6} + t^{4}, t^{6})$, and we have obtained the desired cancellation of
lower-order terms.
}
\end{example}
We call the matrix in~(\ref{vivianiNewtonLinearized})
a Hermite-Laurent matrix, because of its correspondence
with Hermite-Laurent interpolation.
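The least squares solve of~(\ref{vivianiNewtonLinearized}) is easy to check with numpy (an illustration; the minimum-norm least squares solution reproduces the update computed above):

```python
import numpy as np

# The Hermite-Laurent matrix and right hand side of (vivianiNewtonLinearized):
M = np.array([
    [0, 4, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [4, 0, 0, 4, 0, 0],
    [4, 0, 0, 0, 0, 0],
    [0, 0, 4, 0, 0, 4],
    [0, 0, 4, 0, 0, 0]], dtype=float)
rhs = np.array([-4, 0, 0, 0, -4, -4], dtype=float)

# Minimum-norm least squares solution:
dx, *_ = np.linalg.lstsq(M, rhs, rcond=None)
print(dx.reshape(3, 2))   # rows: coefficients of t^2, t^3, t^4
```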
\subsection{A Lower Triangular Echelon Form}\label{sec:echelonForm}
When we are in the regular case of Lemma~\ref{startingScenarios} and the
condition number of $A_0$ is low, we can
simply solve the staggered system~(\ref{staggeredSystem}). When this is not
possible, we are forced to solve~(\ref{eqbiglinsys}).
Figure~\ref{figechelon} shows the structure of
the coefficient matrix~(\ref{eqbiglinsys}) for the regular case,
when $A_0$ is regular and all block matrices are dense.
The essence of this section is that we can use column operations to reduce the
block matrix to a lower triangular
echelon form as shown at the right of Figure~\ref{figechelon},
solving~(\ref{eqbiglinsys}) in the same time as~(\ref{staggeredSystem}).
\begin{figure}[hbt]
\begin{center}
\begin{picture}(400,100)(0,0)
\put(0,0){\includegraphics[scale=.30]{fighermite5.png}}
\put(170,0){\includegraphics[scale=.30]{figechelon5.png}}
\end{picture}
\caption{The banded block structure of a generic Hermite-Laurent matrix
for $n=5$ at the left, with at the right its lower triangular echelon form.}
\label{figechelon}
\end{center}
\end{figure}
The lower triangular echelon form of a matrix is a lower triangular
matrix with zero elements above the diagonal.
If the matrix is regular, then all diagonal elements are nonzero.
For a singular matrix, the zero rows of its echelon form are on top
(have the lowest row index) and the zero columns are at the right
(have the highest column index).
Every nonzero column has one pivot element, which is the
nonzero element with the smallest row index in the column.
All elements to the right of a pivot are zero.
Columns may need to be swapped so that the row indices of the
pivots of columns with increasing column indices are sorted
in increasing order.
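One simple way to compute such a form is by Gaussian elimination on the columns. The following numpy sketch (a simplified variant, for illustration: it omits the row permutation $P$ and the zero-row bookkeeping) reduces a matrix to lower triangular form by column swaps and column operations, returning $T$ with $L = AT$, analogous to the accumulated $Q_k U_k$ factors below:

```python
import numpy as np

def lower_triangular_echelon(A):
    """Reduce A to a lower triangular form L by column swaps and
    column operations; returns (L, T) with L = A @ T."""
    L = A.astype(float).copy()
    n, m = L.shape
    T = np.eye(m)
    col = 0
    for row in range(n):
        # pivot: first usable nonzero entry in this row
        piv = next((j for j in range(col, m) if abs(L[row, j]) > 1e-12), None)
        if piv is None:
            continue                          # nothing to eliminate in this row
        L[:, [col, piv]] = L[:, [piv, col]]   # column swap
        T[:, [col, piv]] = T[:, [piv, col]]
        for j in range(col + 1, m):           # zero out to the right of the pivot
            mult = L[row, j] / L[row, col]
            L[:, j] -= mult * L[:, col]
            T[:, j] -= mult * T[:, col]
        col += 1
    return L, T

A = np.array([[1., 2.], [3., 4.]])
L, T = lower_triangular_echelon(A)
print(L)                      # lower triangular
print(np.allclose(A @ T, L))
```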
\begin{example} {\rm (Viviani, continued).
For the matrix series in~(\ref{vivianiNewtonLinearized}),
we have the following reduction:
\begin{equation}
\left[
\begin{array}{cccccc}
0 & 4 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
4 & 0 & 0 & 4 & 0 & 0 \\
4 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 4 & 0 & 0 & 4 \\
0 & 0 & 4 & 0 & 0 & 0
\end{array}
\right]
\rightarrow
\left[
\begin{array}{cccccc}
0 & 0 & 0 & 0 & 0 & 0 \\
4 & 0 & 0 & 0 & 0 & 0 \\
0 & 4 & 0 & 0 & 0 & 0 \\
0 & 4 & 4 & 0 & 0 & 0 \\
0 & 0 & 0 & 4 & 0 & 0 \\
0 & 0 & 0 & 4 & 4 & 0
\end{array}
\right].
\end{equation}
Because of the singular matrix coefficients in the series,
we find zeros on the diagonal in the echelon form.
}
\end{example}
Given a general $n$-by-$m$ dimensional matrix~$A$,
the lower triangular echelon form $L$
can be described by one $n$-by-$n$ row permutation matrix $P$ which
swaps the zero rows of~$A$ and a sequence of $m$
column permutation matrices $Q_k$ (of dimension $m$) and
multiplier matrices $U_k$ (also of dimension $m$).
The matrices $Q_k$ define the column swaps to bring the pivots
with lowest row indices to the lowest column indices.
The matrices $U_k$ contain the multipliers to reduce what
is at the right of the pivots to zero.
Then the construction of the lower triangular echelon form
can be summarized in the following matrix equation:
\begin{equation}
L = P A Q_1 U_1 Q_2 U_2 \cdots Q_m U_m.
\end{equation}
Similar to solving a linear system with an LU factorization,
the multipliers are applied to the solution of the lower
triangular system which has $L$ as its coefficient matrix.
\section{Some Preliminary Cost Estimates}
Working with truncated power series is somewhat similar to working
with extended precision arithmetic.
In this section we make some observations regarding the cost overhead.
\subsection{Cost of one step}
First we compare the cost of computing a single Newton step using the various
methods introduced. We let $d$ denote the degree of the truncated series in $\mathbf{A}(t)$,
and $n$ the dimension of the matrix coefficients in $\mathbf{A}(t)$ as before.
\noindent {\bf The staggered system.} In the case that
$a\geq 0$ and the leading coefficient $A_0$ of the matrix series $\mathbf{A}(t)$ is
regular, the equations in~{\rm (\ref{staggeredSystem})} can be solved with
$O(n^3) + O(d n^2)$ operations. The cost is $O(n^3)$ for the decomposition of
the matrix $A_0$, and $O(d n^2)$ for the back substitutions using the
decomposition of $A_0$ and the convolutions to compute the right hand sides.
\noindent {\bf The big block matrix.} Ignoring the
triangular matrix structure, the cost of solving the larger linear
system~(\ref{eqbiglinsys}) is $O((dn)^3)$.
\noindent {\bf The lower triangular echelon version.}
If the leading coefficient $A_0$ in the matrix series is regular
(as illustrated by Figure~\ref{figechelon}), we may copy the
lower triangular echelon form $L_0 = A_0 Q_0 U_0$ of $A_0$ to all blocks
on the diagonal and apply the permutation $Q_0$ and column operations
as defined by $U_0$ to all other column blocks in~$\mathbf{A}$.
The regularity of~$A_0$
implies that we may use the lower triangular echelon form $L_0$
to solve~(\ref{eqbiglinsys}) with substitution.
Thus with this quick optimization we obtain
the same cost as solving the staggered system~(\ref{staggeredSystem}).
In general, $A_0$ and several other matrix coefficients
may be rank deficient, and the diagonal of nonzero pivot elements will
shift towards the bottom of~$L$. We then find as solutions vectors
in the null space of the upper portion of the matrix~$\mathbf{A}$.
\subsection{Cost of computing $D$ terms}
Assume that $D=2^k$. In the regular case, assuming quadratic convergence, it
will take $k$ steps to compute $2^k$ terms. We can reuse the factorization of
$A_0$ at each step, so we have $O(n^3)$ for the decomposition plus
\begin{equation}
O(2n^2 + 4n^2 + 8n^2 + \cdots + 2^{k-1}n^2) = O(2^kn^2)
\end{equation}
for the back substitutions. Putting these together,
we find the cost of computing $D$ terms to be $O(n^3) + O(D n^2)$.
\section{Computational Experiments}
Our power series methods have been implemented in PHCpack~\cite{Ver99}
and are available to the Python programmer via phcpy~\cite{Ver14}.
To set up the problems we used the computer algebra system Sage~\cite{Sage}, and
for tropical computations we used Gfan~\cite{BHS08} and Singular~\cite{DGPS} via
the Sage interface.
\subsection{The Problem of Apollonius}
\begin{figure}[hbt]
\begin{center}
\includegraphics[scale=.30]{appolfig.png}
\caption{Singular configuration of Apollonius circles.
The input circles are filled in, the solution circles are dark gray.
Because the input circles mutually touch each other,
three of the solution circles coincide with the input circles. }
\label{appolfig}
\end{center}
\end{figure}
The classical problem of Apollonius consists in finding all circles that are
simultaneously tangent to three given circles.
A special case is when the three
circles are mutually tangent and have the same radius; see
Figure~\ref{appolfig}. Here the solution variety is singular -- the circles
themselves are double solutions. In this figure, all have radius 3, and
centers $(0,0)$, $(2,0)$, and $(1,\sqrt{3})$. We can study this
configuration with power series techniques by introducing a parameter $t$ to
represent a vertical shift of the upper circle. We then examine the solutions
as we vary $t$. This is represented algebraically as a solution to
\begin{equation} \label{appollo}
\left\{
\begin{array}{rcl}
x_1^2 + x_2^2 - r^2 - 2r - 1 & = & 0 \\
x_1^2 + x_2^2 - r^2 - 4x_1 - 2r + 3 & = & 0 \\
t^2 + x_1^2 - 2tx_2 + x_2^2 - r^2
+ 2\sqrt{3}t - 2x_1 - 2\sqrt{3}x_2 + 2r + 3 & = & 0.
\end{array}
\right.
\end{equation}
Because we are interested in power series solutions of~(\ref{appollo})
near $t=0$, we use $t$ as our free variable.
To simplify away the $\sqrt{3}$, we substitute
$t\rightarrow \sqrt{3}t$, $x_2 \rightarrow \sqrt{3} x_2$,
and the system becomes
\begin{equation} \label{appollo2}
\left\{
\begin{array}{rcl}
x_1^2 + 3x_2^2 - r^2 - 2r - 1 & = & 0 \\
x_1^2 + 3x_2^2 - r^2 - 4x_1 - 2r + 3 & = & 0 \\
3t^2 + x_1^2 - 6tx_2 + 3x_2^2
- r^2 + 6t - 2x_1 - 6x_2 + 2r + 3 & = & 0.
\end{array}
\right.
\end{equation}
Call this system $\mathbf{f}$.
Now we examine the system at $(t,x_1,x_2,r) = (0,1,1,1) = \mathbf{p}$.
The Jacobian $J_{\mathbf{f}}$ at $\mathbf{p}$ is
\begin{equation}
\left[\begin{array}{rrrr}
0 & 2 & 6 & -4 \\
0 & -2 & 6 & -4 \\
0 & 0 & 0 & 0
\end{array}\right],
\end{equation}
so $\mathbf{f}$ --- and by extension $\mathbf{f}_{{\rm aug}}$ --- is singular at $\mathbf{p}$,
and we are in the second case of Lemma~\ref{startingScenarios}.
Tropical methods give two possible starting solutions,
which rounded for readability are
$(t,1,1+0.536t,1+0.804t)$ and $(t,1,1 + 7.464t,1 + 11.196t)$.
We will continue with the second; call it $\mathbf{z}$.
For the first step of Newton's method, $\mathbf{A}$ is
\begin{equation}
\left[\begin{array}{rrr}
2 & 6 & -4 \\
-2 & 6 & -4 \\
0 & 0 & 0
\end{array}\right] +
\left[\begin{array}{rrr}
0 & 44.785 & -22.392 \\
0 & 44.785 & -22.392 \\
0 & 38.785 & -22.392
\end{array}\right]t
\end{equation}
and $\mathbf{b}$ is
\begin{equation}
\left[\begin{array}{r}
41.785 \\
41.785 \\
0
\end{array}\right]t^2.
\end{equation}
From these we can construct the linearized system
\begin{equation}
\left[\begin{array}{rrr}
A_0 & & \\
A_1 & A_0 & \\
& A_1 & A_0 \\
\end{array}\right]\Delta \mathbf{x} =
\left[\begin{array}{c}
\mathbf{b}_0 \\
0 \\
0
\end{array}\right].
\end{equation}
Solving in the least squares sense, we obtain two more terms of the series,
so in total we have
\begin{equation} \label{appolloSol2}
\left\{
\begin{array}{rcl}
x_1 & = &1 \\
x_2 & = & 1 + 7.464t + 45.017t^2 + 290.992t^3 \\
r & = & 1 + 11.196t + 77.971t^2 + 504.013t^3.
\end{array}
\right.
\end{equation}
By comparison, the series we obtain from the other possible starting solution is
\begin{equation} \label{appolloSol3}
\left\{
\begin{array}{rcl}
x_1 & = &1 \\
x_2 & = & 1 + 0.536t - 0.017t^2 + 0.0077t^3\\
r & = & 1 + 0.804t + 0.029t^2 - 0.013t^3.
\end{array}
\right.
\end{equation}
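As a numerical sanity check (an illustration using the rounded coefficients above), substituting the truncated series~(\ref{appolloSol3}) into~(\ref{appollo2}) at a small value of $t$ yields small residuals:

```python
# Residuals of the shifted Apollonius system (appollo2):
def residuals(t, x1, x2, r):
    return (x1**2 + 3*x2**2 - r**2 - 2*r - 1,
            x1**2 + 3*x2**2 - r**2 - 4*x1 - 2*r + 3,
            3*t**2 + x1**2 - 6*t*x2 + 3*x2**2 - r**2
            + 6*t - 2*x1 - 6*x2 + 2*r + 3)

# The truncated series solution (appolloSol3), evaluated at t = 0.01:
t = 0.01
x1 = 1.0
x2 = 1 + 0.536*t - 0.017*t**2 + 0.0077*t**3
r = 1 + 0.804*t + 0.029*t**2 - 0.013*t**3
print([abs(v) for v in residuals(t, x1, x2, r)])
```

All three residuals are of the order of the truncation and rounding errors.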
From these, we get a good idea of what happens near $t=0$:
the first solution circle grows rapidly
(corresponding to the larger coefficients in~(\ref{appolloSol2})),
while the other stays small
(corresponding to the smaller coefficient in~(\ref{appolloSol3})).
This is illustrated in Figure~\ref{shiftedAppol},
which shows the solutions of the system at $t=0.13$.
\begin{figure}[hbt]
\begin{center}
\includegraphics[scale=.55]{shiftedAppol2.png}
\caption{Solution to (\ref{appollo}) for $t=0.13$.
The largest circles correspond to power series solutions with larger
coefficients than the coefficients of the power series solutions
for the smaller circles.}
\label{shiftedAppol}
\end{center}
\end{figure}
This example demonstrates the application of power series solutions
in polynomial homotopies.
Current numerical continuation methods cannot be applied to track
the solution paths defined by the homotopy in~(\ref{appollo}),
because at $t=0$, the start solutions are double solutions.
The power series solutions provide reliable predictions
to start tracking the solution paths defined by~(\ref{appollo}).
\subsection{Tangents to Four Spheres}
Our next example is that of finding all lines mutually tangent to four
spheres in $\mathbb{R}^3$; see~\cite{Dur98}, \cite{MPT01}, \cite{Sot11},
and~\cite{ST08}.
If a sphere $S$ has center $\mathbf{c}$ and
radius $r$, the condition that a line in $\mathbb{R}^3$ is tangent to
$S$ is given by
\begin{equation}
\|\mathbf{m} - \mathbf{c} \times \mathbf{t}\|^2 - r^2 = 0,
\end{equation}
where $\mathbf{m}=(x_0,x_1,x_2)$ and $\mathbf{t}=(x_3,x_4,x_5)$ are the moment and
tangent vectors of the line, respectively. For four spheres, this
gives rise to four polynomial equations; if we add the equation
$x_0x_3 + x_1x_4 + x_2x_5 = 0$ to require that $\mathbf{t}$ and $\mathbf{m}$ are
perpendicular and $x_3^2 + x_4^2 + x_5^2 = 1$ to require that
$\|\mathbf{t}\| = 1$, we have a system of 6 equations in 6 unknowns which we
expect to be 0-dimensional.
\begin{figure}[hbtp]
\begin{center}
\includegraphics[scale=.30]{singularSpheres}
\caption{A singular configuration of four spheres.
The input spheres mutually touch each other
and the tangent lines common to all four
input spheres occur with multiplicity.}
\label{singularSpheres}
\end{center}
\end{figure}
If we choose the centers to be $(+1,+1,+1)$, $(+1,-1,-1)$,
$(-1,+1,-1)$, and $(-1,-1,+1)$ and the radii to all be $\sqrt{2}$, the
spheres all mutually touch and the configuration is singular; see
Figure~\ref{singularSpheres}. In this case, the number of solutions
drops to three, each of multiplicity 4.
Next we introduce an extra parameter $t$ into the equations so that the radii of
the spheres are $\sqrt{2}+t$. This results in a system $F$ whose solution variety
is 1-dimensional; we omit $F$ for succinctness. $F$ is singular at $t=0$, so we
are once again in the second case of Lemma~\ref{startingScenarios}.
Tropical and algebraic techniques --- in particular, the tropical
basis~\cite{BHS08} in Gfan~\cite{Jen08} and the primary decomposition
in Singular~\cite{DGPS} ---
decompose $F$ into three systems, one of which is
\begin{equation} \label{fourSpheresEq}
\mathbf{f}=
\left\{
\begin{array}{rcl}
x_0 &= &0 \\
x_3 &= &0 \\
x_4^2 + x_2x_5 + x_5^2 &= &0 \\
x_1x_4 + x_2x_5 &= &0 \\
x_1x_2 - x_2x_4 + x_1x_5 &= &0 \\
x_1^2 + x_2^2 - 1 &= &0 \\
2t^4 + 4t^2 + x_2x_5 &= &0 \\
x_2^2x_4 - x_2x_4x_5 + x_1x_5^2 - x_4 &= &0 \\
x_2^3 - x_2 - x_5 &= &0.
\end{array}
\right.
\end{equation}
Using our methods we can find several solutions to this, one of which is
\begin{equation}\nonumber
\left\{
\begin{array}{l}
x_0 = 0 \\
x_1 = 2t + 4.5t^3 + 30.9375t^5 + 299.3906t^7 + 3335.0889t^9 + 40316.851t^{11} \\
x_2 = 1 - 2t^2 - 11t^4 - 94t^6 - 986.5t^8 - 11503t^{10} \\
x_3 = 0 \\
x_4 = 2t - 3.5t^3 - 23.0625t^5 - 193.3594t^7 - 2019.3486t^9 - 23493.535t^{11} \\
x_5 = -4t^2 - 10t^4 - 64t^6 - 614t^8 - 6818t^{10} - 82283t^{12}
\end{array}.
\right.
\end{equation}
Substituting the series back into $\mathbf{f}$ yields residuals of order $O(t^{12})$, confirming
the calculations. This solution could be used as the initial predictor
in a homotopy beginning at the singular configuration.
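Such a residual check is easy to reproduce. The sketch below (plain Python, with the printed coefficients copied by hand) multiplies the truncated series for $x_2$ and $x_5$ and verifies that the equation $2t^4 + 4t^2 + x_2x_5 = 0$ of $\mathbf{f}$ holds through order $t^{11}$; it is an illustration of the verification, not the code used in the paper.

```python
# Truncated power-series check of the four-spheres solution:
# verify 2t^4 + 4t^2 + x2*x5 = O(t^12) using the printed coefficients.

ORDER = 12  # work modulo t^12

def mul(a, b, order=ORDER):
    """Multiply two series given as coefficient lists, truncated at `order`."""
    c = [0.0] * order
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < order:
                c[i + j] += ai * bj
    return c

def add(*series):
    """Add series given as coefficient lists."""
    order = max(len(s) for s in series)
    c = [0.0] * order
    for s in series:
        for i, si in enumerate(s):
            c[i] += si
    return c

# coefficient of t^k stored at index k (copied from the solution above)
x2 = [1, 0, -2, 0, -11, 0, -94, 0, -986.5, 0, -11503, 0]
x5 = [0, 0, -4, 0, -10, 0, -64, 0, -614, 0, -6818, 0]

poly = [0, 0, 4, 0, 2] + [0] * (ORDER - 5)   # 4t^2 + 2t^4
residual = add(poly, mul(x2, x5))

assert all(abs(c) < 1e-9 for c in residual), residual
print("2t^4 + 4t^2 + x2*x5 = O(t^%d)  OK" % ORDER)
```

Every coefficient of the residual through $t^{11}$ cancels exactly, as expected from the $O(t^{12})$ accuracy of the series.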
In contrast to the small Apollonius circle problem,
this example is computationally more challenging;
see~\cite{Dur98}, \cite{MPT01}, \cite{Sot11}, and~\cite{ST08}.
It illustrates the combination of tropical methods in
computer algebra with symbolic-numeric power series computations
to define a polynomial homotopy to track solution paths starting
at multiple solutions.
\subsection{Series Developments for Cyclic 8-Roots}
A vector $\mathbf{u} \in {\mathbb C}^n$ is a biunimodular vector
of a unitary matrix $A$ if $|u_k| = 1$ and $|v_k| = 1$
for $k=1, 2, \ldots, n$, where $\mathbf{v} = A \mathbf{u}$.
The following system arises in the study~\cite{FR15}
of biunimodular vectors:
\begin{equation}
\mathbf{f}(\mathbf{x}) =
\left\{
\begin{array}{c}
x_{0}+x_{1}+ \cdots +x_{n-1}=0 \\
i = 2, 3, 4, \ldots, n-1:
\displaystyle\sum_{j=0}^{n-1} ~ \prod_{k=j}^{j+i-1}
x_{k~{\rm mod}~n}=0 \\
x_{0}x_{1}x_{2} \cdots x_{n-1} - 1 = 0. \\
\end{array}
\right.
\end{equation}
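As a quick sanity check of this formulation, the sketch below builds the cyclic $n$-roots residuals directly from the definition and evaluates them on the well-known one-parameter solution family $(a, 1/a, -a, -1/a)$ of the cyclic 4-roots system (the family is standard background, not taken from the text above):

```python
# Residuals of the cyclic n-roots system, following the definition:
# sum of all x_k, the i-fold cyclic-adjacent products for i = 2..n-1,
# and the product equation x_0 x_1 ... x_{n-1} - 1 = 0.

from math import prod

def cyclic_residuals(x):
    """Evaluate all cyclic n-roots equations at the point x."""
    n = len(x)
    res = [sum(x)]                                   # linear equation
    for i in range(2, n):                            # i = 2, ..., n-1
        res.append(sum(prod(x[k % n] for k in range(j, j + i))
                       for j in range(n)))
    res.append(prod(x) - 1)                          # product equation
    return res

a = 2.0
x = [a, 1 / a, -a, -1 / a]      # point on a known cyclic-4 solution curve
res = cyclic_residuals(x)
assert all(abs(r) < 1e-12 for r in res), res
print("cyclic-4 residuals:", res)
```

The residuals vanish for every $a \neq 0$, confirming that cyclic 4-roots is positive dimensional; the series developments below play the analogous role for the curves of cyclic 8-roots.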
The cyclic 8-roots system has solution curves that were not reported by Backelin~\cite{Bac89}.
Note that because of the last equation, the system has no
solution for $x_0=0$, or in other words ${\mathbb V}(\mathbf{f}_{{\rm aug}})=\emptyset$. Thus
we are in the third case of Lemma~\ref{startingScenarios}.
In~\cite{AV12,AV13}, the vector $\mathbf{v} = (1, -1, 0, 1, 0, 0, -1, 0)$
gives the leading exponents of the series.
The corresponding unimodular coordinate transformation $\mathbf{x} = \mathbf{z}^M$ is
\begin{equation}
M =
\left[
\begin{array}{rrrrrrrr}
1 & -1 & 0 & 1 & 0 & 0 & -1 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
\end{array}
\right]
\quad
\begin{array}{l}
x_{0} \rightarrow z_{0} \\
x_{1} \rightarrow z_{1} z_{0}^{-1} \\
x_{2} \rightarrow z_{2} \\
x_{3} \rightarrow z_{3} z_{0} \\
x_{4} \rightarrow z_{4} \\
x_{5} \rightarrow z_{5} \\
x_{6} \rightarrow z_{6} z_{0}^{-1} \\
x_{7} \rightarrow z_{7}.
\end{array}
\end{equation}
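The monomial map $\mathbf{x} = \mathbf{z}^M$ can be applied mechanically: column $i$ of $M$ holds the exponents of $\mathbf{z}$ in $x_i$. The following sketch (hypothetical helper names; not PHCpack code) implements this convention and checks it against the substitutions listed above at a random point:

```python
# Apply the unimodular coordinate change x = z^M and compare against
# the substitutions x0 -> z0, x1 -> z1/z0, x3 -> z3*z0, x6 -> z6/z0.

import random

M = [
    [1, -1, 0, 1, 0, 0, -1, 0],
    [0,  1, 0, 0, 0, 0,  0, 0],
    [0,  0, 1, 0, 0, 0,  0, 0],
    [0,  0, 0, 1, 0, 0,  0, 0],
    [0,  0, 0, 0, 1, 0,  0, 0],
    [0,  0, 0, 0, 0, 1,  0, 0],
    [0,  0, 0, 0, 0, 0,  1, 0],
    [0,  0, 0, 0, 0, 0,  0, 1],
]

def monomial_map(z, M):
    """x_i = prod_k z_k^{M[k][i]}, i.e. column i of M gives the exponents."""
    n = len(z)
    return [prodpow(z, [M[k][i] for k in range(n)]) for i in range(n)]

def prodpow(z, exps):
    v = 1.0
    for zk, e in zip(z, exps):
        v *= zk ** e
    return v

random.seed(1)
z = [random.uniform(0.5, 2.0) for _ in range(8)]
x = monomial_map(z, M)

assert abs(x[0] - z[0]) < 1e-12
assert abs(x[1] - z[1] / z[0]) < 1e-9
assert abs(x[3] - z[3] * z[0]) < 1e-9
assert abs(x[6] - z[6] / z[0]) < 1e-9
print("transformation matches the listed substitutions")
```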
Solving the transformed system with $z_0$ set to $0$ gives
the leading coefficient of the series.
After 2 Newton steps, invoked in PHCpack with {\tt phc -u},
the series for $z_1$ is
{\small
\begin{verbatim}
(-1.25000000000000E+00 + 1.25000000000000E+00*i)*z0^2
+( 5.00000000000000E-01 - 2.37676980513323E-17*i)*z0
+(-5.00000000000000E-01 - 5.00000000000000E-01*i);
\end{verbatim}
}
After a third step, the series for $z_1$ is
{\small
\begin{verbatim}
( 7.12500000000000E+00 + 7.12500000000000E+00*i)*z0^4
+(-1.52745512076048E-16 - 4.25000000000000E+00*i)*z0^3
+(-1.25000000000000E+00 + 1.25000000000000E+00*i)*z0^2
+( 5.00000000000000E-01 - 1.45255178343636E-17*i)*z0
+(-5.00000000000000E-01 - 5.00000000000000E-01*i);
\end{verbatim}
}
Bounds on the degree of the Puiseux series expansion
to decide whether a point is isolated are derived in~\cite{HSJ16}.
While the explicit bounds (which can be computed without prior
knowledge of the degrees of the solution curves) are large,
the test of whether a point is isolated can still be
performed efficiently with our quadratically convergent Newton's method.
In a future work, we plan to apply the power series methods to
the cyclic 16-roots problem, the 16-dimensional version of this
polynomial system, for which the tropical prevariety was computed
recently~\cite{JSV17}.
\section{INTRODUCTION}
\label{intro}
The concept of \emph{churn} is as old as the customer--service relationships themselves. Churn occurs when a certain user stops using a service, i.e.\ when the relationship between the customer and the service provider ends \citep{mozer2000churn}. This term is widely used in a variety of industries including retail banking \citep{mutanen2006customer}, telecommunications \citep{hwang2004ltv} and gaming \citep{runge2014,perianez2016churn}.
Churn remains one of the most important metrics to evaluate a business, as it is directly linked to user loyalty \citep{hwang2004ltv}. High retention (i.e.\ low churn) points to a healthy business, and increases in user retention usually translate into higher revenues. In free-to-play games retention is crucial, since many of them have in-app purchases as their main source of revenue and, moreover, gaining new users through marketing and promotion campaigns is typically much costlier than retaining existing players \citep{Monetization}.
If there is a contractual relationship with the customer (as is normally the case in sectors such as telecommunications, see \citealt{mozer2000churn}), the definition of churn is unambiguous: it happens when the customer cancels the contract or unsubscribes from the service. On the other hand, when there is no contract (or equivalent relationship) it is more difficult to assess whether a user has really churned or not.
The appropriate way of defining churn in this kind of commercial activity must be carefully studied in light of its particularities and needs, and also of the purpose of the definition itself. This is the situation that applies to online games \citep{perianez2016churn,GameBigData,cig2018competition,chen2019competition}, where most users stop playing without deleting their account. Additionally, free-to-play gamers who are active but make no purchases are of little or no economic value, and this allows us to introduce another type of churn definition within the video game context: {\it purchase churn}, which refers to paying users who cease to spend money on the title, and which is as tricky to define as conventional churn. (We will occasionally refer to the latter as \emph{login churn}, for clarity.)
The usual strategy is to consider that a player has churned after a certain number of days of inactivity~\citep{runge2014,perianez2016churn}. Here we begin by examining how to choose a suitable (login/purchase) churn definition (in terms of days without activity/purchases). The goal of classifying players as active or churned is twofold: on the one hand, to have an accurate measure of the current health of the game; on the other, to label players in an appropriate way to successfully train churn prediction models.
Accurately predicting churn is of paramount importance for any business. In video games, the early detection of potential (login or purchase) churners may give studios the chance to target players individually---with personalized discounts, presents or contents---in an attempt to re-engage them. Previous works addressing churn prediction in video games have treated churn either as a classification \citep{sifa2015predicting,chen2019competition} or survival problem \citep{perianez2016churn,chen2019competition}, with the latter approach being especially well-suited due to the censored nature of churn. Other related works used churn predictions to compute the lifetime value of individual players \citep{chen2018ltv}.
Player profiling (i.e.\ grouping users based on their behavior) is another noteworthy problem \citep{bauckhage2014clustering,drachen2012guns,drachen2014comparison,saas2016discovering}, which we also address here from a churn perspective. Our main goal is to characterize players who are identified as churners but eventually start playing again, namely {\it false churners}. Some of them are \emph{genuine false churners}: in spite of meeting the corresponding churn definition, they never left the game, but just remained inactive for a relatively long time. Others (those who had a lengthier period of inactivity before returning to the game and thus can be considered to actually have churned) are more rightly regarded as \emph{resurrected} players.
In contrast, we will regard \emph{all} players who start purchasing again after a prolonged lapse without spending any money as \emph{purchase resurrected}.
There is yet another group of interest in connection with churn: players whose activity is so sporadic that---regardless of whether or not they have been tagged as churners in the past based on the particular churn definition used---they can hardly be deemed as active users; we will refer to them as \emph{zombies}. Such a classification of players according to their churn behavior is interesting on many levels, but in this work we focus on assessing its impact on the accuracy of churn prediction models.
The remainder of the paper is organized as follows: First we introduce the two main standard approaches used to define churn in video games, as well as the specific dataset and definitions adopted in our experiments. Then, we describe the churn prediction models analyzed in this study. Finally, after presenting and discussing the prediction results obtained by discarding different types of churners, we provide a brief summary of our findings and deliver our conclusions.
\subsection{Our Contribution}
To the best of our knowledge, this is the first work to simultaneously address login and purchase churn prediction, compare the classification and survival approaches and study the effect of excluding different kinds of churners from the training on the accuracy of the results.
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.45\textwidth]{images/percentageMissedSales.png}
\includegraphics[width=0.45\textwidth]{images/percentageFalseChurners.png}
\caption{Determination of the login churn definition for VIP players (blue) and all paying users (PUs, red) based on two indicators: the percentages of missed sales (left) and false churners (right) during the first two months of data. By imposing these percentages to remain below 1\% and 5\%, respectively, we obtain 9 days (for VIP players) and 13 days (for PUs) as the inactivity period after which a player is considered to have churned.}
\label{churnDef}
\end{figure*}
\section{DEFINITIONS AND DATASET}
\label{defAndDataset}
\subsection{Defining Churn}
\label{defs}
Two main approaches to define churn in terms of player inactivity can be found in the literature:
1) Using a \emph{fixed time window} for all players \citep{cig2018competition}. For example, we could consider players who logged in during the previous month but not within the current one to be churners. This kind of strategy can be useful for some purposes---such as tracking game retention over long time scales---but it is not without shortcomings. In particular, it is fairly insensitive to specific player connection patterns, something especially problematic for churn prediction.
2) Using a \emph{moving time window} for each player. To overcome the limitations of the above approach, most works measure the churn-defining inactivity period through a moving window, i.e., relative to each player's own timeline rather than to calendar time~\citep{runge2014,perianez2016churn}. While this method is computationally more demanding, it is also much better suited to model churn risk, and is thus the one followed in this paper.
The length of the optimal moving time window is highly game dependent. While in very casual titles a few days of inactivity typically signal a real user disengagement, in massively multiplayer online role-playing games time between sessions is usually much longer, and so longer time windows are required to correctly identify churners. The situation is analogous for purchase churn, as the typical purchase frequency may also vary greatly from game to game.
In this work, window lengths are selected so as to minimize two quantities: the \emph{percentage of false churners} (number of churners who eventually return to the game over total number of churners) and the \emph{percentage of missed sales} (sales from false churners after they return to the game over total sales). Considering long enough time windows can make both of these quantities vanish. However, our aim is to detect churn as soon as possible, both to have an accurate picture of player engagement at any given time and to have sufficient room for manoeuvre to try and re-engage potential churners. In particular, for our churn definitions we consider the shortest period of inactivity that keeps false churners under 5\% and missed sales under 1\% (although these figures can be fine-tuned according to the specific requirements of the analysis). Further details are given below.
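A minimal version of this window-selection procedure might look as follows. The sketch scans candidate inactivity windows in increasing order and returns the shortest one whose percentages of false churners and missed sales stay below the chosen thresholds; all player names, dates and the toy logs below are illustrative, not from the \emph{Age of Ishtaria} data.

```python
# Choose the shortest churn-defining inactivity window (in days) such
# that false churners stay below 5% and missed sales below 1%.

import datetime as dt

def choose_window(logins, purchases, horizon_end,
                  max_false=5.0, max_missed=1.0, candidates=range(3, 91)):
    """logins: {player: sorted login dates}; purchases: {player: [(date, amount)]}."""
    total_sales = sum(a for ps in purchases.values() for _, a in ps)
    for w in candidates:
        churners = false = missed = 0
        for p, days in logins.items():
            gaps = [(b - a).days for a, b in zip(days, days[1:])]
            final_gap = (horizon_end - days[-1]).days  # last login to data end
            if any(g > w for g in gaps) or final_gap > w:
                churners += 1
                if any(g > w for g in gaps):           # came back after a gap > w
                    false += 1
                    first_ret = days[next(i for i, g in enumerate(gaps)
                                          if g > w) + 1]
                    missed += sum(a for d, a in purchases.get(p, [])
                                  if d >= first_ret)
        pf = 100.0 * false / churners if churners else 0.0
        pm = 100.0 * missed / total_sales if total_sales else 0.0
        if pf < max_false and pm < max_missed:
            return w, pf, pm
    return None

d = dt.date  # tiny toy dataset
logins = {
    "a": [d(2021, 1, 1), d(2021, 1, 3), d(2021, 1, 20)],   # 17-day gap
    "b": [d(2021, 1, 1), d(2021, 1, 2), d(2021, 1, 4)],
    "c": [d(2021, 1, 1)],
}
purchases = {"a": [(d(2021, 1, 20), 10.0)], "b": [(d(2021, 1, 2), 90.0)]}
w, pf, pm = choose_window(logins, purchases, horizon_end=d(2021, 3, 1))
print("window:", w, "days; false churners:", pf, "%; missed sales:", pm, "%")
```

On real data the same scan would be run over the first two months of logs, as described in the next subsections.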
\subsection{Dataset}
\label{dataset}
We used game data from the Japanese title \emph{Age of Ishtaria} (a free-to-play, role-playing mobile card game developed by Silicon Studio), collected between 2014-10-02 and 2017-05-01. The data contains detailed daily information about each player, including level-ups, playtime, purchases and sessions.
Only top spenders (\emph{VIP players} or \emph{whales}) were considered, as they are the most valuable users.
We define VIP players as those with total outlay above a certain threshold (computed from the first two months of data so that whales provide at least 50\% of the total revenue). There were around 6000 such players in the studied dataset.
Data from other mobile games were also evaluated following the same methodology, and we obtained equivalent results, which shows the applicability of the proposed concepts to online games. These results are not included in the paper due to space limitations.
In the case of non-online games, similar principles could be applied. However, as the purpose of this work is to give a solution that can be used in an operational environment, we focused on studying online games, where actions can be actively performed on the players and player information is continuously updated.
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.3\textwidth]{images/NumDaysToChurn_1.png}
\includegraphics[width=0.3\textwidth]{images/LevelsToChurn_1.png}
\includegraphics[width=0.3\textwidth]{images/PlaytimeToChurn_1.png} \\
\includegraphics[width=0.3\textwidth]{images/NumDaysToPurchaseChurn_1.png}
\includegraphics[width=0.3\textwidth]{images/LevelsToPurchaseChurn_1.png}
\includegraphics[width=0.3\textwidth]{images/PlaytimeToPurchaseChurn_1.png} \\
\caption{Cumulative survival probability (Kaplan--Meier estimates) as a function of time since first login (left), game level (center) and cumulative playtime (right) for VIP players. Top/bottom panels refer to login/purchase churn. Curves are stratified by churner type: \emph{normal}, \emph{zombie}, \emph{resurrected} and \emph{purchase resurrected} players. Shaded areas represent 95\% confidence intervals. }
\label{kaplanMeier_playerType}
\end{figure*}
\subsection{\emph{Age of Ishtaria}'s Churn Definition}
\label{churndefcomputation}
Figure \ref{churnDef} shows graphically how the \emph{login churn} definition was inferred from the first two months of data. The percentage of missed sales (left) and percentage of false churners (right) were evaluated for different churn definitions---letting the inactivity period after which a player is considered to have churned vary between 3 and 90 days---when considering all paying users (PUs, red curves) or just VIP players (blue curves). As already discussed, we require these percentages to be less than 1\% and 5\%, respectively, which yields an inactivity period of 13 days for all PUs and 9 days for VIP players only. Since our analysis is restricted to top spenders, we will use the latter time window as our churn definition. Note that the percentage of false churners will tend to increase when considering extended data periods (longer than two months) but, for practical reasons, it is desirable to set the churn definition as soon as possible.
In any case, we checked that such increase was not very significant---the percentage remained well below 10\% even for longer windows of 6 months, taken at different dates across the full dataset---which means that the two-month data are representative of the overall churning behavior and supports our strategy. (Note that our real aim here is to restrict the number of \emph{genuine} false churners. Thus the percentage of false churners can be higher when considering the whole dataset, due to the increase in the number of resurrected players.)
Following a similar approach we found that \emph{purchase churn} should be defined as 50 days without any spending for VIP users. This inactivity period is much longer than in the previous (login) case, and thus a much larger (roughly by a factor 5) sample is needed to properly determine it. In practice, for a new title, it is possible to obtain a first working definition by using other games as reference, and then revisit it when a large enough data sample is available.
\subsection{Churner Profiling}
\label{typesOfPlayers}
Three different groups of players with a particularly interesting churn-related behavior will be considered, and the impact of excluding them from the model training examined. These are
1) \emph{Resurrected} players: Those who return to the game after churning and remaining inactive for a prolonged period of time. When churn is defined as less than 10 days of inactivity (as in our case), we require that period to be of at least 30 days.
Users who return to the game before 30 days of inactivity are considered to be genuine false churners (i.e., to have been mistakenly marked as churners) rather than resurrected players.
2) \emph{Purchase resurrected} players:
In this study we identify all false purchase churners as purchase resurrected once they start spending again. (We thus disregard genuine false purchase churners.)
3) \emph{Zombies}:
Players whose behavior is too disengaged for them to be considered active users (but who are not churners at that moment). In this study, players with less than 3 hours of playtime, no level-ups and no purchases in the previous 30 days were labeled as zombies.
Players who do not fall into any of the previous three groups will be referred to as \emph{normal}.
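The taxonomy above can be summarized as a simple labeling rule. The sketch below uses the thresholds stated in the text (9 days of inactivity for login churn, at least 30 days of past absence for resurrected players, and the 30-day zombie window); the function name and the precedence of the checks are our own simplification.

```python
# Label a VIP player as churned, zombie, resurrected or normal from a
# few per-player summary statistics (thresholds as defined in the text).

def label_player(days_since_last_login, longest_past_absence,
                 playtime_h_30d, levelups_30d, purchases_30d):
    if days_since_last_login > 9:          # login churn definition (VIPs)
        return "churned"
    if playtime_h_30d < 3 and levelups_30d == 0 and purchases_30d == 0:
        return "zombie"                    # active but barely engaged
    if longest_past_absence >= 30:
        return "resurrected"               # returned after a long absence
    return "normal"

assert label_player(12, 0, 10, 2, 1) == "churned"
assert label_player(1, 0, 1, 0, 0) == "zombie"
assert label_player(2, 45, 20, 3, 0) == "resurrected"
assert label_player(2, 5, 20, 3, 0) == "normal"
print("labels OK")
```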
In the sample considered, 21\% of the players had churned and 5\% had purchase churned by the end of the data period. Around 10\% of all players were labeled as zombies, nearly 30\% as resurrected and 23\% as purchase resurrected at some point throughout their lifetime.
Although the high percentage of resurrected players could suggest that our churn definition was not restrictive enough, we should recall that its aim is to limit the presence of \emph{genuine} false churners rather than resurrected players (who typically churn for good shortly after returning to the game and thus do not increase the percentage of false churners in the long run).
Figure \ref{kaplanMeier_playerType} shows Kaplan--Meier survival curves for VIP players---as a function of playtime, lifetime (time since first login) and game level---stratified by user type. Purchase resurrected players have the highest survival probabilities against both churn and purchase churn. (The only exception could be purchase survival for very high game levels or playtime, where normal players seemingly have higher probabilities, although it is not possible to ascertain that due to the large uncertainties.) On the other hand, zombies have the lowest survival and purchase survival probabilities across all variables. Interestingly, for small lifetime (though not level or playtime) values, resurrected players present higher survival rates (against login churn) than normal players. After more than a year the trend is inverted, as the survival probability for normal players stabilizes whereas that of resurrected players continues decreasing at the same pace.
Note also that, in general, purchase survival curves are steeper than the corresponding (login) survival curves. This highlights the fact that all churners are also purchase churners (while the opposite is not true).
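For concreteness, the Kaplan--Meier product-limit estimator underlying these curves can be written in a few lines of plain Python (toy durations, not the game data; a real analysis would use a survival library that also provides the confidence intervals shown in the figure):

```python
# Kaplan--Meier product-limit estimator: at each distinct event time,
# multiply the running survival probability by (1 - deaths / at-risk).

def kaplan_meier(times, events):
    """times: observed durations; events: 1 = churn observed, 0 = censored.
    Returns (distinct event times, survival probabilities)."""
    pts = sorted(set(t for t, e in zip(times, events) if e == 1))
    surv, s = [], 1.0
    for t in pts:
        at_risk = sum(1 for ti in times if ti >= t)
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        s *= 1.0 - d / at_risk
        surv.append(s)
    return pts, surv

times  = [2, 3, 3, 5, 8, 8, 9, 12]
events = [1, 1, 0, 1, 1, 0, 0, 1]
ts, S = kaplan_meier(times, events)
print(list(zip(ts, [round(s, 3) for s in S])))
# -> [(2, 0.875), (3, 0.75), (5, 0.6), (8, 0.45), (12, 0.0)]
```

Censored players (those still active at the end of the data period) contribute to the at-risk counts without forcing a drop in the curve, which is why the survival approach handles churn data so naturally.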
\section{MODELING}
\label{modeling}
We analyzed both binary and survival churn prediction models, exploring the effect of removing zombies, resurrected players, purchase resurrected players and combinations of them from the model training. The aim is to elucidate whether the presence of these players might be introducing noise that prevents the models from efficiently learning the {\it typical} VIP churn behavior.
To get the results shown in this paper, data until 2018-03-01 was used for training and the remaining data until 2018-05-01, for validation. Nonetheless, we also evaluated the impact of varying the training and validation data ranges, obtaining similar results in all cases.
\begin{table*}
\centering
\caption{Login and purchase churn prediction results for the binary and survival models, measured through the area under the curve (AUC) and the integrated Brier score (IBS), respectively. Survival results are given in terms of different predictor variables: lifetime, level and cumulative playtime. We consider different situations with regard to the training sample: including all users (\emph{none}) vs.~excluding zombie, resurrected or purchase resurrected players (or combinations of them). The best results for each model and variable are highlighted in bold.}
\small
\begin{tabular}{@{}ccccccccc@{}}
\toprule
CHURN & \multicolumn{2}{c}{Binary models (AUC)} & \multicolumn{6}{c}{Survival models (IBS)} \\
\cmidrule(lr){2-3}\cmidrule(l){4-9}
excluding from training & by login & by purchase & \multicolumn{3}{c}{by login} & \multicolumn{3}{c}{by purchase} \\
\cmidrule(lr){4-6} \cmidrule(lr){7-9}
& & & lifetime & level & playtime & lifetime & level & playtime \\ \midrule
none & 0.95 & 0.69 & 0.072 & 0.069 & 0.060 & 0.070 & 0.080 & 0.077\\
zombie & 0.93 & 0.69 & 0.034 & 0.047 & \textbf{0.035} & 0.055 & 0.067 & 0.086\\
resurrected & 0.90 & 0.68 & 0.043 & 0.048 & 0.041 & 0.070 & 0.080 & 0.080\\
p.~resurrected & 0.95 & 0.72 & 0.104 & 0.084 & 0.060 & 0.065 & 0.076 & 0.062\\
zombie, resurrected & 0.94 & 0.69 & \textbf{0.029} & \textbf{0.041} & \textbf{0.035} & 0.055 & \textbf{0.057} & 0.086\\
zombie, p.~resurrected & 0.93 & 0.72 & 0.057 & 0.068 & 0.049 & \textbf{0.053} & 0.067 & \textbf{0.050}\\
resurrected, p.~resurrected & 0.92 & 0.73 & 0.071 & 0.068 & 0.057 & 0.065 & 0.068 & 0.057\\
zombie, resurrected, p.~resurrected & 0.94 & 0.73 & 0.053 & 0.059 & 0.050 & \textbf{0.053} & \textbf{0.056} & \textbf{0.051}\\
\bottomrule
\end{tabular}
\label{resultsTable}
\end{table*}
\subsection{Model Specification}
Specifically, we investigated the performance of a \emph{conditional inference survival ensemble} model \citep{Hothorn06unbiasedrecursive}, described in detail in previous churn prediction studies \citep{perianez2016churn,GameBigData} of which the present work constitutes an extension. Player survival was described in terms of three different variables: playtime, lifetime (time since first login) and game level reached. On the other hand, binary classification was explored through \emph{conditional inference trees} \citep{Hothorn06unbiasedrecursive}. Ensembles of size 1000 were used in all cases.
\subsection{Feature Selection}
Feature selection was also based on previous studies \citep{perianez2016churn,GameBigData} that constructed game-independent features measurable in most titles, such as playtime, purchases or number of actions of each player.
We evaluated the best feature combination as a function of the model (binary or survival) and survival variable (lifetime, level, playtime). The possibility of adding a flag to identify the type of user (e.g.\ 1 for zombies and 0 for normal players) was also investigated.
However, these variables proved to bias the models towards the behavior of the special users (affecting the accuracy of the predictions for normal players) and were discarded in the end.
\subsection{Model Validation}
For conditional inference ensembles, model validation was performed through specific survival analysis error curves and the integrated Brier score (IBS) \citep{graf1999assessment}, in the way described by \cite{perianez2016churn}. The binary models' performance was assessed using the area under the receiver operating characteristic curve (AUC); see e.g.~\cite{bradley1997}.
The set of players used for validation was the same in all cases (excluding zombies, resurrected and purchase resurrected players) so that we can fully assess the impact that training on different groups of users has on the predictions for the same group of players. This strategy was adopted to avoid massaging the data, which may lead to biased results.
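Both metrics are easy to state from scratch. The sketch below computes the AUC via its rank (Mann--Whitney) formulation and a plain Brier score at a fixed horizon; note that the IBS used in this paper additionally integrates the Brier score over time with censoring weights \citep{graf1999assessment}, which this simplified, uncensored version omits.

```python
# AUC as the probability that a random positive outranks a random
# negative, and the (uncensored, fixed-horizon) Brier score as the mean
# squared error of the predicted churn probabilities.

def auc(scores, labels):
    """Rank (Mann--Whitney) formulation of the AUC; ties count 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def brier(probs, labels):
    """Mean squared error between predicted probabilities and outcomes."""
    return sum((p - y) ** 2 for p, y in zip(probs, labels)) / len(probs)

labels = [1, 0, 1, 1, 0, 0]              # 1 = churned, 0 = retained
scores = [0.9, 0.2, 0.8, 0.25, 0.3, 0.1]  # predicted churn probabilities
print("AUC   =", round(auc(scores, labels), 3))   # -> 0.889
print("Brier =", round(brier(scores, labels), 4))
```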
\section{RESULTS}
\label{results}
\begin{figure*}[ht!]
\centering
\hfill
\includegraphics[width=0.44\textwidth]{images/lifetime1_IBS.png}\hfill
\includegraphics[width=0.44\textwidth]{images/lifetime1_IBS_2.png}\hfill\null\\
\hfill
\includegraphics[width=0.44\textwidth]{images/level1_IBS.png}\hfill
\includegraphics[width=0.44\textwidth]{images/level1_IBS_2.png}\hfill\null\\
\hfill
\includegraphics[width=0.44\textwidth]{images/efftime1_IBS.png}\hfill
\includegraphics[width=0.44\textwidth]{images/efftime1_IBS_2.png}\hfill\null\\
\caption{Prediction error curves for \emph{login} churn as a function of lifetime (top), game level (center) and playtime (bottom). They have been computed using a conditional inference survival ensemble model, upon excluding zombie, resurrected or purchase resurrected players (left) and combinations thereof (right) from the training sample.}
\label{IBS}
\end{figure*}
\begin{figure*}[ht!]
\centering
\hfill
\includegraphics[width=0.44\textwidth]{images/lifetime1_purchase_IBS.png}\hfill
\includegraphics[width=0.44\textwidth]{images/lifetime1_purchase_IBS_2.png}\hfill\null\\
\hfill
\includegraphics[width=0.44\textwidth]{images/level1_purchase_IBS.png}\hfill
\includegraphics[width=0.44\textwidth]{images/level1_purchase_IBS_2.png}\hfill\null\\
\hfill
\includegraphics[width=0.44\textwidth]{images/efftime1_purchase_IBS.png}\hfill
\includegraphics[width=0.44\textwidth]{images/efftime1_purchase_IBS_2.png}\hfill\null
\caption{Prediction error curves for \emph{purchase} churn as a function of lifetime (top), game level (center) and playtime (bottom). They have been computed using a conditional inference survival ensemble model, upon excluding zombies, resurrected or purchase resurrected players (left) and combinations thereof (right) from the training sample.}
\label{IBS_purchase}
\end{figure*}
The login and purchase churn prediction results for the different models (binary and survival) and survival variables (lifetime, level and playtime) are summarized in Table~\ref{resultsTable}. Prediction error curves from the survival analysis of churn and purchase churn are shown in Figures \ref{IBS} and \ref{IBS_purchase}, respectively.
Both the table and the figures explore how the prediction accuracy of the models varies when we exclude one or several of the previously described player groups (zombies, resurrected, purchase resurrected) from the training sample.
The impact of including or excluding these groups is large in the survival analysis, but small to non-existent in the binary classification (where the only action that seems to have a relatively noticeable effect is removing purchase resurrected players when predicting purchase churn). This seems reasonable, as the former method relies on learning probabilities throughout the whole lifetime of each player and is thus much more sensitive to the noise introduced by erratic churn behaviors.
Remarkably, the choices that optimize the survival results (discussed in detail in what follows) have a negligible to slightly positive impact on the binary models, and thus the same approach could be safely taken for both the classification and survival problems.
Focusing on the left column of Figure~\ref{IBS} (where only individual groups of players have been excluded from the training) we see that, for small lifetime, level and playtime values, the most significant error reduction in login churn predictions is achieved by removing zombies (although there is no such reduction for very short lifetimes), which is also reflected in the IBS scores in Table~\ref{resultsTable}. The improvement is further enhanced as lifetime increases; for high playtime and level, however, the trend is reversed and errors are lower (albeit not significantly) when considering all players.
Removal of (only) resurrected players exhibits similar patterns, but with a generally lower impact. Curiously, discarding purchase resurrected players has almost the opposite effect: it affects very negatively the performance for small values of all three survival variables, but improves it at large scales---to the point of yielding the best results for high level and playtime. However, the IBS values in Table~\ref{resultsTable} clearly indicate that the overall performance is degraded when removing these users.
As suggested by the previous discussion, the best overall results for login churn prediction are obtained by excluding both zombies and resurrected (but not purchase resurrected) players from the training sample; see Table~\ref{resultsTable}. On the other hand, the overall negative impact of removing purchase resurrected players seems reasonable: despite going for long periods without any spending, these players can maintain typical activity levels in terms of session frequency, session duration and in-game progression, and thus provide the models with valuable additional information to learn from.
Turning now to purchase churn, the effects of excluding only zombies or only resurrected players (see Figure~\ref{IBS_purchase}, left column) are qualitatively similar to the ones discussed for login churn.
Purchase resurrected players could have been anticipated to play a major role in understanding purchase churn,
and indeed their exclusion does provide an overall improvement in all variables (lifetime, level and playtime) as shown by the IBS values in Table~\ref{resultsTable}.
Interestingly, discarding these players has a negative impact for short lifetimes---an effect compensated for by the great gains at large values.
This could be suggesting that a more restrictive definition of purchase resurrected players
(by requiring them to start purchasing again after periods not just longer but \emph{much longer} than the purchase churn definition, namely the approach followed for login churn) might be needed.
As in the case of login churn, excluding both resurrected and zombie players yields good results in terms of lifetime and level; however, for playtime it is better to consider all players.
The highest overall accuracy is achieved by discarding zombies and purchase resurrected players (it being almost irrelevant whether or not resurrected players are also discarded).
\section{SUMMARY AND CONCLUSION}
\label{conclusion}
This study shows that excluding certain types of players (with a particular behavior regarding churn) from the training sample can lead to better churn predictions in the context of video games. Both binary classification and survival models were evaluated. Even though both approaches yield accurate results, the latter seems better suited for churn prediction, since (as discussed in \citealt{perianez2016churn}) it takes into account the censored nature of the problem and provides a much richer output.
Our results show that, in general, removing active players with very limited activity (zombies) and those who return to the game after a long period of inactivity (resurrected players) leads to more accurate churn and purchase churn forecasts.
(In the latter case, optimal results are obtained by removing also players who start purchasing again after a long period without spending.)
Moreover, excluding certain players from the modeling might be helpful from an operational perspective, as it would reduce the size of game datasets.
This work proposes three new types of players based on their churn behavior and aims to establish a basic framework for further related studies (in a similar vein e.g.\ to the already extended use of the ``VIP player'' concept). It also opens new questions in game data science research, such as whether it could be possible to foresee if a certain player will resurrect and how many times she will do so, or to get an accurate time-to-resurrection prediction. Finally, it represents a first step towards finding better and increasingly complex ways to characterize churn behavior that will improve our understanding of the phenomenon and the performance of churn prediction models.
\section*{ACKNOWLEDGEMENT}
\label{acknowledgements}
We thank Javier Grande for reviewing the manuscript.
\section{Introduction}
\label{sec1}
Understanding quantum many body physics, especially in the regime where the degrees of freedom are strongly correlated, is one of the outstanding areas of research in both condensed matter and nuclear physics today. Many physical phenomena ranging from the properties of nuclei \cite{Bedaque:2002mn}, phases of dense nuclear matter \cite{Schmitt:2010pn}, properties of high $T_c$ materials \cite{And97}, heavy fermion systems \cite{HFSys}, topological superconductors and insulators \cite{RevModPhys.83.1057}, etc., contain strongly correlated regimes of interest. While approximate methods can help uncover exotic features that can emerge in such systems, reliable quantitative predictions usually require numerical approaches such as the quantum Monte Carlo method \cite{Fehske08,Lee:2008fa,Drut:2012md,RevModPhys.87.1067}. Unfortunately these methods suffer from sign problems in many interesting cases, and their solutions are exciting research directions in the field of computational quantum many body theory today \cite{Gattringer:2016kco}.
The challenge is to rewrite the quantum problem as a classical statistical mechanics problem with positive Boltzmann weights that are computable in polynomial time. While the Feynman path integral is one way to proceed, the presence of fermions and/or frustration means there is no guarantee that the goal can be achieved. Although a generic solution that solves all sign problems most likely does not exist \cite{Tro05}, solutions to many specific sign problems have been discovered recently \cite{PhysRevLett.83.3116,Alford:2001ug,PhysRevLett.99.250201,PhysRevD.85.091502,Bloch:2011jx,PhysRevB.77.125101,Bloch:2015iha,PhysRevLett.115.266802,Chandrasekharan:2012fk,Huf14,Li15,Wan15a,PhysRevLett.116.250601,Li16,Chandrasekharan:2013rpa,Huffman:2016nrx,Alet16alm,PhysRevB.93.054408}. It would be nice to establish general criteria for the solvability of sign problems.
Discovering an appropriate basis to formulate the quantum problem is an important step in the solution to the sign problem in a given system. For example, the sign problem that exists in a class of frustrated quantum spin systems when formulated in the local spin-half basis can be eliminated by going to a local spin-one basis \cite{Alet16alm, PhysRevB.93.054408}. It would be exciting if a systematic approach could be developed to construct such a basis for each problem of interest. Every new solution expands the class of solvable problems and thus takes us a step closer towards this goal. In this work we discover a solution to a simple frustrated model involving a single triangular anti-ferromagnetic interaction. In order to make the problem non-trivial, each spin in the triangle is coupled to its own bath of spins in the form of a one-dimensional chain. Our model was considered earlier as a toy model to explore if a basis change could help alleviate the sign problem \cite{PhysRevB.92.195126}. It was shown that even a small change in the basis can have a significant effect on alleviating the sign problem. In this work we explore if the sign problem can in fact be completely eliminated in certain cases. Although our model is geometrically different from those considered in \cite{Alet16alm, PhysRevB.93.054408}, our final solution is similar and emerges when the system is formulated in a local spin-one basis on half the system. Interestingly, we can go a step further and show that the solution is based on fermion pairing in a related fermion model.
Our paper is organized as follows. In section \ref{sec2} we explain the details of our model and map it into a fermion model which plays an important role in uncovering our solution. In section \ref{sec3} we transform the model into an inhomogeneous one dimensional model by identifying new fermion degrees of freedom on half the lattice. We then expand the partition function using fermion worldlines and identify the origin of the sign problem. In section \ref{sec4} we show that the sign problem is absent in a model that contains only paired fermion worldlines. Using this insight, in section \ref{sec5} we define a new local basis for the original spin model that is free of sign problems. Section \ref{sec6} contains our conclusions.
\begin{figure}[b]
\includegraphics[width=0.8\textwidth]{fig1.pdf}
\caption{\label{fig1} The lattice structure of the frustrated quantum impurity model. }
\end{figure}
\section{Frustrated Quantum Impurity Model}
\label{sec2}
In this work we consider a model consisting of three quantum spin chains constructed with spin half operators ${\mathbf{S}}_{a,i}$, where $a=1,2,3$ labels the three chains and $i=0,...,N$ labels the sites in each chain. Frustration is introduced through an anti-ferromagnetic interaction among the three quantum spins at the $i=0$ site (see Fig.~\ref{fig1}). The Hamiltonian of the system is given by
\begin{equation}
H \ =\ \lambda_s \sum_{a=1,2,3} \sum_{i=0}^{N-1} {\mathbf{S}}_{a,i} \cdot {\mathbf{S}}_{a,i+1}\ +
\alpha_s \big({\mathbf{S}}_{1,0}\cdot {\mathbf{S}}_{2,0} + ({\mathbf{S}}_{1,0}+{\mathbf{S}}_{2,0})\cdot {\mathbf{S}}_{3,0}\big)
\label{smodel}
\end{equation}
where $\lambda_s$ and $\alpha_s$ are two independent couplings. The spin-half operators satisfy the usual commutation relations $[S^x_{a,i},S^y_{b,j}] = i S^z_{a,i}\delta_{ab} \delta_{ij}$. Our goal is to write the partition function
\begin{equation}
Z \ =\ \mathrm{Tr}\Big(\ \mathrm{e}^{-\beta H}\ \Big)
\end{equation}
as a sum over configurations with positive weights, such that each weight is computable in polynomial time as the system size (i.e., $N$) and $\beta$ grow. Construction of such an expansion is referred to as a solution to the sign problem for the quantum system described by $H$. For general quantum systems the existence of such an expansion is not guaranteed, especially in the presence of frustrating interactions. The idea is to explore solutions to sign problems in simple models as a step towards finding similar solutions in more complex models. Our model contains only a simple local frustration due to the triangular anti-ferromagnetic coupling $\alpha_s \neq 0$, but it is sufficient to introduce sign problems with conventional methods. In contrast, in this work we wish to solve the sign problem completely. Our model does not naturally fall in the class of frustrated models solved recently in~\cite{Alet16alm,PhysRevB.93.054408}, but we will show that the final solution, which relies on the mapping to an inhomogeneous one dimensional system, is similar.
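For small systems any proposed positive-weight expansion of $Z$ can be cross-checked against brute-force exact diagonalization. The following sketch (ours, not part of the original analysis; the couplings $\lambda_s=\alpha_s=1$ and $\beta=2$ are arbitrary illustrative values) computes $Z$ directly for $N=1$, i.e.\ three chains of two spin-half sites each:

```python
# Brute-force Z = Tr exp(-beta H) for the frustrated impurity model with N = 1
# (three chains of two spin-1/2 sites each, a 2^6 = 64 dimensional Hilbert space).
# This serves only as a sanity-check baseline for any worldline expansion of Z.
import numpy as np

sx = np.array([[0, 1], [1, 0]]) / 2.0
sy = np.array([[0, -1j], [1j, 0]]) / 2.0
sz = np.array([[1, 0], [0, -1]]) / 2.0

def spin_op(op, site, nsites=6):
    """Embed a single-site spin operator at `site` in the full tensor product."""
    mats = [np.eye(2)] * nsites
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def SdotS(i, j, nsites=6):
    """Heisenberg coupling S_i . S_j on the full Hilbert space."""
    return sum(spin_op(s, i, nsites) @ spin_op(s, j, nsites) for s in (sx, sy, sz))

# site labels: (a, i) -> 2*(a-1) + i, chains a = 1, 2, 3, sites i = 0, 1
idx = lambda a, i: 2 * (a - 1) + i
lam_s, alpha_s, beta = 1.0, 1.0, 2.0

H = lam_s * sum(SdotS(idx(a, 0), idx(a, 1)) for a in (1, 2, 3))
H += alpha_s * (SdotS(idx(1, 0), idx(2, 0))
                + SdotS(idx(1, 0), idx(3, 0))
                + SdotS(idx(2, 0), idx(3, 0)))

E = np.linalg.eigvalsh(H)
Z = np.exp(-beta * E).sum()
print(Z)
```

Any sign-problem-free representation of the partition function must reproduce these numbers configuration sum by configuration sum.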
In our approach, we first map the spin model into a fermion model whose creation and annihilation operators are denoted as $c^\dagger_{\sigma,a,i}, c_{\sigma,a,i}$ where, as before, the index $a$ denotes the chain, the index $i$ denotes the lattice site on each chain, and $\sigma=\uparrow,\downarrow$ are the spin degrees of freedom. The site $i=0$ is special and is viewed as an impurity site that couples the three different lattices. At this site fermions can hop between the three different lattices. It is easy to show that the Hamiltonian
\begin{equation}
H= - \lambda \sum_{a,\sigma}\ \sum_{i=0}^{N-1}
(c^\dagger_{\sigma,a,i} c_{\sigma,a,i+1} +c^\dagger_{\sigma,a,i+1} c_{\sigma,a,i})
- \alpha \sum_{\sigma,a,b}\ c^\dagger_{\sigma,a,0} M_{ab}\ c_{\sigma,b,0} - U \sum_{a,i} Q_{a,i},
\label{fmodel}
\end{equation}
where
\begin{equation}
M = \left(\begin{array}{ccc} 0 & 1 & 1 \cr 1 & 0 & 1 \cr 1 & 1 & 0 \end{array}\right),
\end{equation}
and
\begin{equation}
Q_{a,i} \ =\
\big(c^\dagger_{\uparrow,a,i}c_{\uparrow,a,i}-c^\dagger_{\downarrow,a,i}c_{\downarrow,a,i}\big)^2,
\label{qdef}
\end{equation}
will reproduce the physics of (\ref{smodel}) with $\lambda_s = 2 \lambda^2/U$ and $\alpha_s = 2 \alpha^2/U$ in the limit of $U \rightarrow \infty$. In this limit the system is forced to contain a single fermion degree of freedom at every lattice site. Our initial hope was that if we could solve the sign problem in this fermionic model, it would be more general and exciting. While we have not yet found a full solution, the sign problem in the fermion model is eliminated if we impose additional pairing in the fermion worldline configurations. In the large $U$ limit this pairing leads to a class of quantum spin models in which our original impurity model is included. Using insight from this pairing solution we can construct a basis in the original spin model that solves its sign problem.
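The quoted relations $\lambda_s = 2\lambda^2/U$ and $\alpha_s = 2\alpha^2/U$ follow from second order degenerate perturbation theory in the singly occupied sector. As an independent numerical check (a sketch of ours, with fermion operators built via a standard Jordan-Wigner construction), one can exactly diagonalize a single two-site bond of the fermion model and compare the singlet-triplet splitting with $2\lambda^2/U$:

```python
import numpy as np

def fermion_ops(nmodes):
    """Jordan-Wigner construction of fermionic creation operators."""
    sp = np.array([[0., 1.], [0., 0.]])   # sigma^+
    sz = np.array([[1., 0.], [0., -1.]])  # Jordan-Wigner string
    ops = []
    for k in range(nmodes):
        mats = [sz] * k + [sp] + [np.eye(2)] * (nmodes - k - 1)
        out = mats[0]
        for m in mats[1:]:
            out = np.kron(out, m)
        ops.append(out)
    return ops

lam, U = 1.0, 50.0
cdag = fermion_ops(4)          # modes: (up, site 1), (dn, site 1), (up, site 2), (dn, site 2)
c = [op.T for op in cdag]      # real matrices, so dagger = transpose
n = [cdag[k] @ c[k] for k in range(4)]

# two-site bond of the fermion model: hopping plus -U (Q_1 + Q_2)
hop = -lam * (cdag[0] @ c[2] + cdag[2] @ c[0] + cdag[1] @ c[3] + cdag[3] @ c[1])
Q1 = (n[0] - n[1]) @ (n[0] - n[1])
Q2 = (n[2] - n[3]) @ (n[2] - n[3])
H = hop - U * (Q1 + Q2)

E = np.sort(np.linalg.eigvalsh(H))
# lowest levels are the Heisenberg singlet and triplet, split by J ~ 2 lam^2 / U
J_eff = E[1] - E[0]
print(J_eff, 2 * lam**2 / U)
```

For $U=50$ the measured splitting agrees with $2\lambda^2/U$ up to the expected $O(\lambda^4/U^3)$ corrections; the triplet sits exactly at $-2U$ since same-spin hopping is Pauli blocked.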
\begin{figure}[t]
\includegraphics[width=0.5\textwidth]{fig2.pdf}
\caption{\label{fig2} A fermion worldline configuration showing permutation of two fermions.}
\end{figure}
\section{Transforming to $S$ and $A$ type fermions}
\label{sec3}
Solutions to sign problems in Fermi systems are difficult since sign problems can arise through fermion permutations. For example, model (\ref{fmodel}) contains a sign problem in the fermion occupation number basis even when $U=0$. Indeed, worldlines of fermions can permute with each other by hopping around the triangle as illustrated in Fig.~\ref{fig2}. Hence, solutions to fermion sign problems usually require resummations of worldline configurations which lead to fermion determinants. Such solutions to sign problems are available for half filled systems with repulsive interactions such as ours, but only on bi-partite lattices. Unfortunately, this conventional solution no longer works for model (\ref{fmodel}) due to the triangular hopping term proportional to $\alpha$. Thus, even our simple system offers an opportunity to explore solutions to sign problems that go beyond conventional methods.
Open one dimensional fermion systems with nearest neighbor hopping have no sign problems in the occupation number basis since fermion worldlines of the same type cannot cross each other due to the Pauli principle. In this work we wish to explore if this idea can be used to solve the sign problem in our model. For this purpose let us define a new set of fermion creation operators using the following unitary transformation:
\begin{equation}
\left(\begin{array}{c} c_{\sigma,1,i}^\dagger \cr c_{\sigma,2,i}^\dagger \cr c^\dagger_{\sigma,3,i}\end{array}\right)
\ =\
\left(\begin{array}{ccc} 1/\sqrt{2} & 1/\sqrt{2} & 0 \cr -1/\sqrt{2} & 1/\sqrt{2} & 0 \cr 0 & 0 & 1 \end{array}\right)\
\left(\begin{array}{c} d_{\sigma,1,i}^\dagger \cr d_{\sigma,2,i}^\dagger\cr d^\dagger_{\sigma,3,i} \end{array}\right).
\end{equation}
Such a transformation has already been demonstrated to minimize the severity of the sign problem in frustrated impurity models~\cite{PhysRevB.92.195126}. The annihilation operators are defined using the Hermitian conjugate of the above expression. In terms of $d^\dagger_{\sigma,a,i}$ and $d_{\sigma,a,i}$ the free part of (\ref{fmodel}) can be written as
\begin{equation}
H_0 = -\lambda \sum_{a,\sigma}\ \sum_{i=0}^{N-1}
(d^\dagger_{\sigma,a,i} d_{\sigma,a,i+1} +d^\dagger_{\sigma,a,i+1} d_{\sigma,a,i})
- \alpha \sum_{\sigma,a,b}\ d^\dagger_{\sigma,a,0} K_{ab}\ d_{\sigma,b,0}
\label{1dch}
\end{equation}
where
\begin{equation}
K = \left(\begin{array}{ccc} -1 & 0 & 0 \cr 0 & 1 & \sqrt{2} \cr 0 & \sqrt{2} & 0 \end{array}\right).
\end{equation}
Thus, the above transformation helps convert the free part of the three coupled chains into two disconnected open chains, one involving nearest neighbor hops of the $S$-type fermions (created by $d^\dagger_{\sigma,2,i}$ and $d^\dagger_{\sigma,3,i}$) and the other involving nearest neighbor hops of the $A$-type particle (created by $d^\dagger_{\sigma,1,i}$). We can view the transformed basis as though we have combined the Hilbert spaces on each site in the $a=1$ and $a=2$ chains into a single {\em composite} site, while each site in the $a=3$ chain remains as a {\em fundamental} site. Thus, the system looks one dimensional (see Fig.~\ref{fig3}) but inhomogeneous. The composite sites (shown as squares) contain a sixteen dimensional Hilbert space as compared to fundamental sites (shown as circles), which have a four dimensional Hilbert space.
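As a quick numerical sanity check (ours, not part of the paper), one can verify that the real orthogonal rotation defined above indeed maps the impurity hopping matrix $M$ to $K$:

```python
# Check that the rotation to S/A-type fermions block-diagonalizes the
# impurity hopping matrix M into K.
import numpy as np

s = 1.0 / np.sqrt(2.0)
V = np.array([[ s,   s,   0.0],
              [-s,   s,   0.0],
              [0.0, 0.0, 1.0]])          # c^dagger = V d^dagger
M = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
K = V.T @ M @ V                          # real orthogonal V, so V^T = V^{-1}

print(np.round(K, 10))
# expected: [[-1, 0, 0], [0, 1, sqrt(2)], [0, sqrt(2), 0]]
```

The $A$-type fermion decouples with on-site energy $-1$ (times $\alpha$), while the two $S$-type modes are connected by the $\sqrt{2}$ element, exactly as in the $K$ matrix above.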
\begin{figure}[h]
\includegraphics[width=0.8\textwidth]{fig3.pdf}
\caption{\label{fig3} Combining the Hilbert spaces at the site $i$ in the a=1 and a=2 chains into a single composite site one obtains a system that looks one-dimensional but inhomogeneous. The lattice sites have been relabeled as explained in the text.}
\end{figure}
The $S$-type particles move on the full one dimensional chain, while the $A$-type particles are restricted to the half chain consisting of composite sites. To reflect the one dimensional nature of the new lattice, we relabel the lattice sites to a one dimensional chain $-N \leq i \leq N+1$, where $i \leq 0$ label composite sites and $ i \geq 1$ label fundamental sites. We also define new $A$-type and $S$-type fermion operators with the one-dimensional site index as follows:
\begin{equation}
b^\dagger_{\sigma,A,-i} \ =\ d^\dagger_{\sigma,1,i},\quad
b^\dagger_{\sigma,S,-i} \ = \ d^\dagger_{\sigma,2,i}, \quad
b^\dagger_{\sigma,S,i+1} \ = \ d^\dagger_{\sigma,3,i}
\end{equation}
where $i=0,1,2...,N$.
Using the new fermion creation and annihilation operators, (\ref{fmodel}) can be rewritten as
\begin{equation}
H = H_0^A + H_0^S + H_0^{LR}+ H_\mu+H_{\rm int}
\end{equation}
where
\begin{subequations}
\begin{eqnarray}
H_0^A &=& -\lambda \sum_{\sigma,i=-N}^{-1} \ \big(b^\dagger_{\sigma,A,i}b_{\sigma,A,i+1} + b^\dagger_{\sigma,A,i+1}b_{\sigma,A,i}\big), \\
H_0^S &=& -\lambda \Big\{\sum_{\sigma,i=-N}^{-1} \ + \sum_{\sigma,i=1}^{N} \Big\}\
\big(b^\dagger_{\sigma,S,i}b_{\sigma,S,i+1} + b^\dagger_{\sigma,S,i+1}b_{\sigma,S,i}\big), \\
H_0^{LR} &=& - \sqrt{2}\alpha\sum_\sigma\big(b^\dagger_{\sigma,S,0}b_{\sigma,S,1} + b^\dagger_{\sigma,S,1}b_{\sigma,S,0}\big),
\\
H_\mu &=& -\alpha \sum_\sigma\ (b^\dagger_{\sigma,S,0}b_{\sigma,S,0} - b^\dagger_{\sigma,A,0}b_{\sigma,A,0}),
\\
H_{\rm int} &=& - U\sum_{n}\ \sum_{i=-N}^{N+1} Q^{n}_i. \label{Hint}
\end{eqnarray}
\label{model1}
\end{subequations}
Note that $H^{LR}$ represents hopping between the composites on the left and fundamental sites on the right and only involves $S$-type particles. The interaction operators $Q^{n}_i$ can be constructed using (\ref{qdef}) and will differ between fundamental and composite sites due to the difference in the Hilbert space. On the composite sites we will write them as a sum over five different operators labeled $Q^{1}_i,Q^{2,+}_i,Q^{2,-}_i,Q^{2,0}_i,Q^{3}_i$, while on the fundamental sites we will need only one such operator $Q^{1}_i$. We will construct these individual operators explicitly after choosing a basis to expand the partition function.
We expand the partition function of the model in the fermion occupation number basis, defined by ordering the creation of fermions on the Fock vacuum. We first create fermions on the rightmost lattice site (i.e., $i=N+1$) and then move to the left. For fundamental sites $(i\geq 1)$, we define the four dimensional basis states as
\begin{equation}
|n_1n_2\rangle = (b^\dagger_{{\uparrow},S,i})^{n_1}(-b^\dagger_{{\downarrow},S,i})^{n_2}|0\rangle,
\label{basisf}
\end{equation}
where, for later convenience, we have introduced an extra negative sign when particles of spin down are created. Similarly, for composite sites $(i\leq 0)$ we choose the occupation number basis as
\begin{equation}
|n_1n_2n_3n_4\rangle = (b^\dagger_{{\uparrow},A,i})^{n_1}(b^\dagger_{{\uparrow},S,i})^{n_2} (b^\dagger_{{\downarrow},A,i})^{n_3}(-b^\dagger_{{\downarrow},S,i})^{n_4}|0\rangle,
\label{basisc}
\end{equation}
i.e., create up spins after down spins and, if there are two particles with the same spin, create $S$ type particles before $A$ type ones. Again note the extra negative sign in front of $b^\dagger_{{\downarrow},S,i}$ as before.
\begin{table}[h]
\begin{tabular}{|l||l|}
\hline
$H_\mu|0001\rangle = -\alpha|0001\rangle$ & $H_\mu|0010\rangle = \alpha|0010\rangle$ \\
$H_\mu|0100\rangle = -\alpha|0100\rangle$ & $H_\mu|1000\rangle = \alpha|1000\rangle$ \\
$H_\mu|0111\rangle = -\alpha|0111\rangle$ & $H_\mu|1011\rangle = \alpha|1011\rangle$ \\
$H_\mu|1101\rangle = -\alpha|1101\rangle$ & $H_\mu|1110\rangle = \alpha|1110\rangle$ \\
$H_\mu|1010\rangle = 2\alpha|1010\rangle$ & $H_\mu|0101\rangle = -2\alpha|0101\rangle$ \\
\hline
\end{tabular}
\caption{\label{tab1} The operator $H_\mu$ is diagonal in the fermion occupation number basis. The above table gives the non-zero eigenvectors and the corresponding eigenvalues.}
\end{table}
\begin{table}[h]
\begin{tabular}{|l||l|}
\hline
$(Q_{1,i}+Q_{2,i})|0000\rangle = 0$ & $(Q_{1,i}+Q_{2,i})|1111\rangle = 0$ \\
$(Q_{1,i}+Q_{2,i})|0001\rangle = |0001\rangle$ & $(Q_{1,i}+Q_{2,i})|0010\rangle = |0010\rangle$ \\
$(Q_{1,i}+Q_{2,i})|0100\rangle = |0100\rangle$ & $(Q_{1,i}+Q_{2,i})|1000\rangle = |1000\rangle$ \\
$(Q_{1,i}+Q_{2,i})|0111\rangle = |0111\rangle$ & $(Q_{1,i}+Q_{2,i})|1011\rangle = |1011\rangle$ \\
$(Q_{1,i}+Q_{2,i})|1101\rangle = |1101\rangle$ & $(Q_{1,i}+Q_{2,i})|1110\rangle = |1110\rangle$ \\
$(Q_{1,i}+Q_{2,i})|1100\rangle = 2 |1100\rangle$ & $(Q_{1,i}+Q_{2,i})|0011\rangle = 2|0011\rangle$ \\
$(Q_{1,i}+Q_{2,i})|1010\rangle = (|1010\rangle+|0101\rangle)$ & $(Q_{1,i}+Q_{2,i})|0101\rangle = (|0101\rangle+|1010\rangle)$ \\
$(Q_{1,i}+Q_{2,i})|1001\rangle = (|1001\rangle+|0110\rangle)$ & $(Q_{1,i}+Q_{2,i})|0110\rangle = (|0110\rangle+|1001\rangle)$ \\
\hline
\end{tabular}
\caption{\label{tab2} Action of the interaction term $(Q_{1,i}+Q_{2,i})$ on the sixteen dimensional Hilbert space on a composite site. Most of the Hilbert space states are eigenstates of the interaction operator. Only two sets of doublets mix within each doublet.}
\end{table}
In the above basis we can compute the matrix elements of various terms that appear in the Hamiltonian (\ref{model1}). First note that $H_\mu$ is diagonal in the occupation number basis, where the non-zero diagonal elements are given in Table~\ref{tab1}. Next we consider the interaction term and write it as $H_{\rm int} = H^C_{\rm int} + H^F_{\rm int}$, where $H^C_{\rm int} \ =\ -U\ \sum_i (Q_{1,i}+Q_{2,i})$ and $H^F_{\rm int} \ =\ -U\ \sum_i Q_{3,i}$ represent the interactions in the composite and fundamental parts of the chain. For the fundamental chain $Q_{3,i}$ is diagonal in our chosen basis:
\begin{equation}
Q_{3,i}|00\rangle = Q_{3,i}|11\rangle = 0, \ \ Q_{3,i}|01\rangle = |01\rangle,\ \
Q_{3,i}|10\rangle = |10\rangle.
\end{equation}
If we define $Q_i^1$, which appears in (\ref{Hint}), on the fundamental sites to be a projector on the one particle space, such that $Q_i^1|n_1 n_2\rangle = \delta_{n_1+n_2,1}|n_1 n_2\rangle$, then $Q_i^1 \equiv Q_{3,i}$. For the composite sites, the action of the interaction term $(Q_{1,i}+Q_{2,i})$ on the sixteen dimensional basis states is given in Table~\ref{tab2}. We define $Q_i^n |n_1n_2n_3n_4\rangle = \delta_{n,n_1+n_2+n_3+n_4}|n_1n_2n_3n_4\rangle$ for $n=1,3$. For $n=2$ we define three different operators with the property that
\begin{equation}
Q_i^{2,+} |1100\rangle = 2|1100\rangle,\quad Q_i^{2,-} |0011\rangle = 2|0011\rangle,
\end{equation}
\begin{subequations}
\vspace{-1.15cm}
\begin{eqnarray}
Q_i^{2,0}|1010\rangle = (|1010\rangle+|0101\rangle), &&
Q_i^{2,0}|0101\rangle = (|1010\rangle+|0101\rangle) \\
Q_i^{2,0}|1001\rangle = (|1001\rangle+|0110\rangle), &&
Q_i^{2,0}|0110\rangle = (|1001\rangle+|0110\rangle)
\end{eqnarray}
\label{q20}
\end{subequations}
Using these definitions we see that $(Q_{1,i}+Q_{2,i}) = Q_i^1 + Q_i^{2,+} + Q_i^{2,-} +Q_i^{2,0} + Q_i^3$. Finally we turn to the action of each of the nearest neighbor hopping terms contained in $H_0^A$, $H_0^S$, $H_0^{LR}$. We denote these hops generically as $H^{a,\sigma}_{ij} = -t_{ij} b^\dagger_{\sigma,a,i}b_{\sigma,a,j}$, which hops an $a=A,S$ type fermion of spin $\sigma$ from site $j$ to $i$. Here $t_{ij}$ is $\lambda$ or $\sqrt{2}\alpha$ depending on the bond $ij$.
\begin{figure}[h]
\includegraphics[width=0.7\textwidth]{fig4.pdf}
\caption{\label{fig4} The eight non-zero matrix elements of the operator $Q_i^{2,0}$, which can induce a transition between an $S$ type to an $A$ type fermion of the same spin. These matrix elements are all positive as can be seen from Eq.~(\ref{q20}).}
\end{figure}
Having computed all the matrix elements we can expand the partition function in the CT-INT representation given by \cite{Rub05},
\begin{equation}
Z = \sum_k \int_T dt_1 dt_2\cdots dt_k\ (-1)^k\ \mathrm{Tr}\Bigg(\mathrm{e}^{-(\beta-t_1)H_\mu}H_1\mathrm{e}^{-(t_1-t_2)H_\mu}H_2\cdots H_k\mathrm{e}^{-t_kH_\mu}\Bigg)
\label{ctint}
\end{equation}
where the integral is over the time ordered domain $T \equiv t_1 \geq t_2 \geq \cdots \geq t_k$, and $H_p$ stands for one of the many possible local terms in the Hamiltonian like the fermion hop $H^{a,\sigma}_{ij}$ or $-U Q_i^{n}$. We then compute the trace by introducing a sum over the fermionic occupation basis between the various $H_p$ insertions. While the hopping terms move the fermions around, all other terms except $Q_i^{2,0}$ are diagonal and do not change the state in our chosen basis of fermion occupation numbers. The action of $Q_i^{2,0}$ on sites $i\leq 0$ is shown pictorially in Fig.~\ref{fig4}. We see that there are eight non-zero matrix elements, four of them diagonal while the other four flip an $A$-type particle into an $S$-type particle and vice versa without changing the spin. Hence, while we can follow the worldline of a particular spin, $S$-type and $A$-type fermions mix among themselves due to $Q_i^{2,0}$. Worldlines of a specific spin are still well defined, and pictorially the trace becomes a sum over these worldline configurations $[\ell]$:
\begin{equation}
Z = \sum_k \int_T dt_1 dt_2\cdots dt_k \sum_{[\ell]} \mathrm{Sign}([\ell])\ W([\ell])
\end{equation}
where
\begin{equation}
W([\ell]) = \lambda^{k_1}\ (\sqrt{2}\alpha)^{k_2}\ (2U)^{k_3}\ U^{k_4}\ \Omega_\mu
\end{equation}
Here $k_1$ stands for the number of fermion hops coming from $H_0^A$ or $H_0^S$ terms, $k_2$ for the number of fermion hops coming from $H_0^{LR}$, $k_3$ for the number of insertions of $Q_i^{2,+}$ and $Q_i^{2,-}$, and $k_4$ for the number of insertions of the other $Q_i^{n}$. The factor $\Omega_\mu$ is a positive number that depends on the $H_\mu$ term: whenever an $S$ type particle exists for a time $\tau$ it contributes a factor $\exp(\alpha \tau)$, while an $A$ type particle contributes $\exp(-\alpha\tau)$.
Fermion worldline configurations $[\ell]$ are a set of closed loops that can wrap over the temporal boundary. The sign of a configuration, $\mathrm{Sign}([\ell])$, comes from the number of crossings of fermion world lines. In one dimension any two different loops will cross an even number of times. Hence, negative factors from crossings of different loops always give a positive sign. This means the sign of a configuration is determined by the number of self crossings of each loop, which we can denote by $N_{\rm self}$. Then
\begin{equation}
\mathrm{Sign}([\ell]) = (-1)^{N_{\rm self}}.
\end{equation}
It is easy to build configurations with negative weight, as shown in Fig.~\ref{fig5}. Thus, although in the absence of interactions there is no sign problem, interactions do introduce configurations with negative signs. It would be interesting to see if this sign problem can be solved using methods like the meron cluster approach \cite{PhysRevLett.83.3116} or the fermion bag approach. While we have made much progress in this direction, the complete solution is still missing. Hence, we postpone this discussion to a later publication. Instead, in this work we identify the configurations that arise in the limit $U \rightarrow \infty$ and show that their weights are guaranteed to be positive, which solves the sign problem in the original quantum spin model.
\begin{figure}[h]
\includegraphics[width=0.7\textwidth]{fig5.pdf}
\caption{\label{fig5} Example of a negative weight fermion worldline configuration. The configuration contains one spin-up ($\uparrow$) loop that self intersects once and one spin-down ($\downarrow$) loop that does not self intersect. $S$ type and $A$ type particles are indicated by dashed and solid lines respectively, while the blue boxes are instances of the matrix elements shown in Fig.~\ref{fig4}.}
\end{figure}
\section{Paired Fermion Model}
\label{sec4}
Fermion sign problems can often be solved by pairing fermion worldlines or when fermions are one dimensional. A combination of both these features is required to solve the sign problem in (\ref{smodel}). In the limit of $U\rightarrow \infty$, the $H_{\rm int}$ term in (\ref{fmodel}) dominates and forces every composite site to have exactly two fermions and every fundamental site to have a single particle. Further, out of the six possible two particle states involving paired fermions, the operator $(Q_{1,i}+Q_{2,i})$ projects onto the four states $t^+$, $t^-$, $t^0$ and $s$ defined by
\begin{equation}
|t^+\rangle = |1100\rangle,\ \
|t^-\rangle = |0011\rangle,\ \
|t^0\rangle = \frac{1}{\sqrt{2}}\big(|0110\rangle+|1001\rangle\big),\ \
|s\rangle = \frac{1}{\sqrt{2}}\big(|1010\rangle+|0101\rangle\big),\ \
\label{4states}
\end{equation}
as can be seen from Table~\ref{tab2}. We will show that restricting the Hilbert space on the composite sites to these paired states eliminates all negative weight configurations due to the one dimensional nature of the full problem. Interestingly, the weights remain positive even if we allow the composite sites to be empty without any fermions. Thus, the fermion model can be modified by adding the additional interaction term
\begin{equation}
H'_{\rm int} = H_{\rm int} - U\sum_{i} Q_i^{0},
\end{equation}
where $Q_i^{0} |n_1,n_2\rangle = \delta_{n_1+n_2,0} |n_1,n_2\rangle$ on fundamental sites and $Q_i^{0} |n_1,n_2,n_3,n_4\rangle = 2\delta_{n_1+n_2+n_3+n_4,0} |n_1,n_2,n_3,n_4\rangle$ on composite sites. With the additional term, the $U\rightarrow \infty$ limit defines a paired fermion model which is similar to the original frustrated quantum spin model (\ref{smodel}) but now allows composite sites and fundamental sites to be empty. In this model fermion worldline configurations only contain paired fermions on the composite sites and unpaired fermions on the fundamental sites, along with empty sites. In fact we will be more general and define the model through these worldline configurations and their weights. Such models are difficult to write down explicitly in terms of a Hamiltonian, so we do not attempt it here. We now focus on this more general model, bearing in mind that the original quantum spin model is what results when all empty sites are eliminated.
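That the four states in (\ref{4states}) are exactly the ones kept can be checked on the mixing doublets: from Table~\ref{tab2}, the operator $(Q_{1,i}+Q_{2,i})$ acts on each doublet as a $2\times 2$ matrix with all entries equal to $1$, whose symmetric eigenvector (eigenvalue $2$) is the retained $|s\rangle$ or $|t^0\rangle$ combination, while the antisymmetric one (eigenvalue $0$) is projected out as $U\rightarrow\infty$. A two-line numerical check (ours, not from the paper):

```python
import numpy as np

# 2x2 block of (Q_1 + Q_2) on the doublet {|1010>, |0101>} (identical for
# {|1001>, |0110>}), read off from Table 2.
block = np.array([[1.0, 1.0],
                  [1.0, 1.0]])
vals, vecs = np.linalg.eigh(block)
print(vals)
# the symmetric combination (the |s> or |t0> state) survives the U -> infinity limit
sym = vecs[:, np.argmax(vals)]
print(np.abs(sym))
```

The eigenvalues come out as $0$ and $2$, with the eigenvalue-$2$ eigenvector proportional to $(1,1)/\sqrt{2}$, matching (\ref{4states}).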
In the paired fermion model, a state on the composite site can only change through hops or exchanges of two fermions between neighboring sites. For example when one of the four states in (\ref{4states}) on a composite site moves to its neighboring empty site, both of its fermionic components must hop together. Since no fermion worldlines cross during such bosonic hops, there is no negative sign introduced during the hop. In contrast, when neighboring states exchange two fermions between them, there is usually a sign change since fermion worldlines cross each other. However, due to the definition of the bosonic state this is not always the case. One can work out the sign change during the exchange pictorially as we illustrate by considering six different examples of exchanges in Fig.~\ref{fig6}. The complete list of fermion exchange processes involving two boson exchanges and the corresponding sign of the matrix element is given in table \ref{tab3}. In the last row we also show the rules through which paired fermions on the composite chain interact with unpaired fermions on the fundamental chain. Note that the $s$ type particles do not interact with the fermion on the fundamental site, but the $t$ type particles do. Finally, we note that $H_\mu$ annihilates the $t$-type particles, but not the $s$-type particles. Since $H_\mu^2$ acts like a chemical potential for the $s$-type particles, it does not introduce negative signs in the worldline representation.
\begin{figure}[h]
\includegraphics[width=0.7\textwidth]{fig6.pdf}
\caption{\label{fig6} Examples of fermion exchange diagrams and the sign change associated with the processes listed in Table \ref{tab3}. The sign of each exchange is determined by multiplying together the factors of $-1$ shown in the diagram with an additional factor of $-1$ from the crossing of fermion worldlines. }
\end{figure}
\begin{table}[h]
\begin{tabular}{cc||cc}
\hline
Process & Sign & Process & Sign \\
$t^+ \ t^-\ \longleftrightarrow s\ s$ & - &
$t^+ \ t^-\ \longleftrightarrow t^0\ t^0$ & + \\
$t^+ \ t^0\ \longleftrightarrow t^0\ t^+$ & - &
$t^+ \ s\ \longleftrightarrow s\ t^+$ & - \\
$t^- \ t^0\ \longleftrightarrow t^0\ t^-$ & - &
$t^- \ s\ \longleftrightarrow s\ t^-$ & - \\
$t^0 \ t^0\ \longleftrightarrow s\ s$ & - &
$t^0\ s\ \longleftrightarrow s\ t^0$ & - \\ \hline
$t^+ \ {\downarrow} \ \longleftrightarrow t^0 \ {\uparrow}$ & - &
$t^- \ {\uparrow} \ \longleftrightarrow t^0\ {\downarrow}$ & + \\
${\uparrow} \ \ {\downarrow} \ \longleftrightarrow {\downarrow} \ \ {\uparrow}$ & - \\
\hline
\end{tabular}
\caption{\label{tab3} Signs of transfer matrix elements associated with off diagonal processes in the worldline representation. While every worldline crossing produces a negative sign, pair creation and annihilation of $t$-type particles are also negative. }
\end{table}
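The positivity argument can be phrased as simple parity bookkeeping. The sketch below (ours; the function name and counting interface are hypothetical, not from the paper) encodes the rule that each worldline crossing and each $t$-type pair creation/annihilation event contributes a factor $-1$, while $s$-$s$ pair events are positive; on the two dimensional worldsheet both kinds of negative events occur an even number of times per configuration:

```python
# Hypothetical bookkeeping of the sign rules in Table 3: each crossing of two
# worldlines and each pair creation/annihilation of t-type particles carries a
# factor (-1); s-s pair events carry (+1) and need not be counted.
def config_sign(n_crossings, n_t_pair_events):
    return (-1) ** (n_crossings + n_t_pair_events)

# Crossings of distinct loops pair up on the worldsheet, and pair creation is
# matched by pair annihilation along any closed loop, so admissible counts are even:
for crossings in (0, 2, 4):
    for pair_events in (0, 2):
        assert config_sign(crossings, pair_events) == +1
print("all admissible configurations are positive")
```

An odd count of either kind would give a negative sign, but such counts cannot occur for closed loops in one spatial dimension, which is the content of the argument below.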
\begin{figure}[h]
\includegraphics[width=0.6\textwidth]{fig7.pdf}
\caption{\label{fig7} An illustration of a world line configuration in the paired fermion model with a single crossing of $s$ and $t$ type particles. As explained in the text, such configurations are still positive because we can map the $\uparrow$ spin on the fundamental sites with a $t^+$ particle and $\downarrow$ with a $t^0$ particle. With this mapping there are always an even number of crossings and even number of pair creation events with negative signs.}
\end{figure}
The sign of a worldline configuration can be computed using the information in Table~\ref{tab3}. In order to define worldlines in the presence of interactions between paired fermions in the composite chain and unpaired fermions in the fundamental chain, we map a spin-up $\uparrow$ (spin-down $\downarrow$) fermion on the fundamental chain into a $t^+$ ($t^0$) particle. In other words, when drawing worldline configurations pictorially we follow each of the four types of particles $t^+,t^-,t^0$ and $s$, but only $t^+$ and $t^0$ particles travel to the fundamental chain from the composite chain, transforming into $\uparrow$ and $\downarrow$ fermions respectively. We also view $t^+$ and $t^-$ as particles and anti-particles, and represent their worldlines as directed lines with an arrow pointing forward in time for $t^+$ and backwards in time for $t^-$. In contrast, $t^0$ and $s$ are indicated through undirected worldlines. Using this pictorial representation, every worldline configuration contains three types of closed loops: $s$-loops, $t^0$-loops and directed $t^+$-$t^-$ loops. Particles of each type cannot cross themselves, but can cross other types of particles. Note that in our model we do not allow $t^+$ and $t^-$ to cross each other, since we forbid sites with four fermions. Thus every loop can only cross another loop of a different type. But every crossing of worldlines produces a negative sign, as seen from Table~\ref{tab3}. Since these lines are drawn on a two dimensional worldsheet, such crossings always occur in pairs. The one-dimensional nature of the problem is important here. In addition to particle crossings, particles can be created or annihilated in pairs. Such events occur when worldlines turn around in time. Interestingly, we can associate negative signs with pair creation/annihilation of $t^0$-$t^0$ and $t^+$-$t^-$ particles, while keeping similar events for $s$-$s$ particles positive. These assignments are also consistent with the rules in Table~\ref{tab3}.
Since pair-creation and annihilation events also come in pairs in every loop, they too cancel. Thus, all worldline configurations are positive in the paired fermion model and the sign problem is absent. An illustration of a worldline configuration in the paired fermion model is shown in Fig.~\ref{fig7}.
\section{The Quantum Spin Limit}
\label{sec5}
In the paired fermion model we allowed composite sites and fundamental sites to be empty. If we eliminate all empty sites we recover our original frustrated quantum spin model. This means we could have chosen an appropriate local basis in our original spin model to avoid all sign problems. In this section we construct this basis directly. Taking clues from the paired fermion model, we combine the chains $a=1,2$ into a single chain with a four dimensional Hilbert space. We then choose the eigenstates of the total spin operator ${\mathbf{S}}_i = {\mathbf{S}}_{1,i}+{\mathbf{S}}_{2,i}$ as the complete basis to expand the partition function. The lattice structure of the model is the same as Fig.~\ref{fig3}, where the fundamental sites are denoted by circles and contain spin-half states, while the composite sites are denoted by squares and contain the $s$ and the three $t$ states.
Given two independent quantum spin-half operators ${\mathbf{S}}_1$ and ${\mathbf{S}}_2$, we can label the four-dimensional Hilbert space on a composite site either through the eigenstates of $S_1^z$ and $S_2^z$ as $\left|\uparrow\uparrow\right\rangle$, $\left|\uparrow\downarrow\right\rangle$, $\left|\downarrow\uparrow\right\rangle$, $\left|\downarrow\downarrow\right\rangle$, or through the eigenstates of the total spin ${\mathbf{S}}_i ={\mathbf{S}}_{1,i}+{\mathbf{S}}_{2,i}$ states as the spin singlet $\left|s\right\rangle$ and the three spin triplets $|t^0\rangle, \ |t^+\rangle,\ |t^-\rangle$. Using the insight from the paired fermion model, we define these states through the expressions
\begin{equation}
|s\rangle = \frac{1}{\sqrt{2}} (\left|\uparrow\downarrow\right\rangle - \left|\downarrow\uparrow\right\rangle), \ \
|t^0\rangle = \frac{1}{\sqrt{2}} (\left|\uparrow\downarrow\right\rangle + \left|\downarrow\uparrow\right\rangle), \ \
|t^+\rangle = \left|\uparrow \uparrow\right\rangle,\ \ |t^-\rangle = -\left|\downarrow \downarrow\right\rangle.\ \
\label{sbasis}
\end{equation}
The extra negative sign in the definition of $|t^-\rangle$ is the remnant of the negative sign introduced in (\ref{basisc}) and will be useful in mapping the signs in the quantum spin model to the signs in the paired fermion model.
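The relations in (\ref{sbasis}) are easy to verify directly. The following pure-Python sketch (our own illustration, not part of the model) encodes the four composite-site states in the $(\uparrow\uparrow,\uparrow\downarrow,\downarrow\uparrow,\downarrow\downarrow)$ ordering and checks orthonormality and the total-spin eigenvalues, using the identity $\mathbf{S}^2=1+P$ for two spin-halves, where $P$ is the swap operator.

```python
import math

# Basis ordering on a composite site: (uu, ud, du, dd).
# States from eq. (sbasis), including the extra sign on |t^->.
r2 = 1.0 / math.sqrt(2.0)
s  = (0.0,  r2, -r2, 0.0)
t0 = (0.0,  r2,  r2, 0.0)
tp = (1.0, 0.0, 0.0, 0.0)
tm = (0.0, 0.0, 0.0, -1.0)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def swap(v):
    # Permutation P exchanging the two spin-1/2 factors: (uu, du, ud, dd).
    return (v[0], v[2], v[1], v[3])

def S2(v):
    # Total spin: S^2 = S_1^2 + S_2^2 + 2 S_1.S_2 = 1 + P for two spin-1/2's.
    return tuple(x + y for x, y in zip(v, swap(v)))

# Orthonormality, and S(S+1) eigenvalues: 0 for the singlet, 2 for triplets.
assert abs(dot(s, s) - 1.0) < 1e-12 and abs(dot(s, t0)) < 1e-12
for t in (t0, tp, tm):
    assert all(abs(a - 2.0 * b) < 1e-12 for a, b in zip(S2(t), t))
assert all(abs(a) < 1e-12 for a in S2(s))
```

The extra sign on $|t^-\rangle$ drops out of both checks, as it must, since it only fixes phases of off-diagonal matrix elements.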
Let us now construct all the matrix elements of the Hamilton operator (\ref{smodel}) in the above basis. We divide the operator into three parts $H = H_L+H_R+H_I$ for convenience, such that
\begin{eqnarray}
H_L \ &=& \ \lambda_s \sum_{a=1,2} \sum_{i=0}^{N-1} {\mathbf{S}}_{a,i} \cdot {\mathbf{S}}_{a,i+1},\ \nonumber \\
H_R \ &=&\ \lambda_s \sum_{i=0}^{N-1} {\mathbf{S}}_{3,i} \cdot {\mathbf{S}}_{3,i+1},\ \nonumber \\
H_I\ &=&\ \alpha_s \big({\mathbf{S}}_{1,0}\cdot {\mathbf{S}}_{2,0} + ({\mathbf{S}}_{1,0}+{\mathbf{S}}_{2,0})\cdot {\mathbf{S}}_{3,0}\big).
\label{shamop}
\end{eqnarray}
The action of each term of the Hamiltonian on the nearest neighbor states in the basis (\ref{sbasis}) is given in table \ref{tab4}. From this information it is easy to read off the matrix elements. We then collect all the diagonal terms in an operator defined as $H_D$ and all the off diagonal terms in $H_O$. We can then expand the partition function in the CT-INT formulation as before (see (\ref{ctint}))
\begin{equation}
Z = \sum_k \int_T dt_1 dt_2 \cdots dt_k \mathrm{Tr}\Bigg(\mathrm{e}^{-(\beta-t_1)H_D}(-H_O)\mathrm{e}^{-(t_1-t_2)H_D}(-H_O)\cdots(-H_O)\mathrm{e}^{-t_kH_D}\Bigg).
\end{equation}
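As a sanity check of this expansion, one can resum it numerically for a toy two-state system (with hypothetical parameters $g$ and $\Delta$, unrelated to our spin model): approximating each interaction factor by $1-\varepsilon H_O$ on a fine imaginary-time grid and multiplying the slices reproduces $\mathrm{Tr}\,\mathrm{e}^{-\beta(H_D+H_O)}$ as the grid is refined.

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Toy two-state system: H = H_D + H_O, with diagonal H_D = diag(0, delta)
# and off-diagonal H_O with entries -g (illustrative numbers only).
beta, g, delta = 2.0, 0.3, 1.0
N = 4000                      # number of imaginary-time slices
eps = beta / N

# One slice: e^{-eps*H_D} * (1 - eps*H_O); note -H_O has +g entries.
slice_ = matmul([[1.0, 0.0], [0.0, math.exp(-eps * delta)]],
                [[1.0, eps * g], [eps * g, 1.0]])
M = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(N):
    M = matmul(M, slice_)
Z_series = M[0][0] + M[1][1]   # resummed interaction expansion

# Exact partition function by diagonalizing the 2x2 Hamiltonian.
tr, det = delta, -g * g
disc = math.sqrt(tr * tr - 4.0 * det)
e1, e2 = 0.5 * (tr + disc), 0.5 * (tr - disc)
Z_exact = math.exp(-beta * e1) + math.exp(-beta * e2)

assert abs(Z_series - Z_exact) / Z_exact < 1e-3
```

In a Monte Carlo computation the nested time integrals are of course sampled stochastically rather than resummed on a grid; the grid here only verifies that the expansion converges to the correct trace.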
If we expand the trace in the basis (\ref{sbasis}), the potential negative signs can only come from off diagonal terms. Since the chosen basis naturally defines a worldline configuration involving $s$ and $t$-type particles on composite sites and spins on fundamental sites, the resulting configurations are identical to those of the paired fermion model. Further, due to our choice of the basis, even the local negative signs from the off diagonal terms are exactly the same as those given in table \ref{tab3}. Hence, for the same reasons as already discussed in the paired fermion model, all configuration weights are positive and the sign problem is absent.
\begin{table}
\begin{tabular}{|c|c|}
\hline
$H_{L,ij}\ (|t^+\rangle \otimes |t^+\rangle) = \frac{\lambda_s}{2}\ (|t^+\rangle \otimes |t^+\rangle)\ \ \ $
&
$\ \ \ H_{L,ij}\ (|t^-\rangle \otimes |t^-\rangle) = \frac{\lambda_s}{2}\ (|t^-\rangle \otimes |t^-\rangle)$
\\
\hline
\multicolumn{2}{|c|}{$H_{L,ij}\ (|t^+\rangle \otimes |t^-\rangle) = -\frac{\lambda_s}{2}\ (|t^+\rangle \otimes |t^-\rangle)
- \frac{\lambda_s}{2}\ [(|t^0\rangle \otimes |t^0\rangle) - (|s\rangle \otimes |s\rangle)]$} \\
\hline
\multicolumn{2}{|c|}{$H_{L,ij}\ (|t^-\rangle \otimes |t^+\rangle) = -\frac{\lambda_s}{2}\ (|t^-\rangle \otimes |t^+\rangle) - \frac{\lambda_s}{2}\
[(|t^0\rangle \otimes |t^0\rangle) - (|s\rangle \otimes |s\rangle)]$} \\
\hline
\multicolumn{2}{|c|}{$
H_{L,ij}\ (|t^0\rangle \otimes |t^0\rangle) = \frac{\lambda_s}{2} (|s\rangle \otimes |s\rangle)
- \frac{\lambda_s}{2}\ [(|t^+\rangle \otimes |t^-\rangle) \ +\ (|t^-\rangle \otimes |t^+\rangle)]$} \\
\hline
\multicolumn{2}{|c|}{$
H_{L,ij}\ (|s\rangle \otimes |s\rangle) = \frac{\lambda_s}{2}\ (|t^0\rangle \otimes |t^0\rangle) + \frac{\lambda_s}{2}\ [(|t^+\rangle \otimes |t^-\rangle) \ +\
(|t^-\rangle \otimes |t^+\rangle)]$} \\
\hline
$H_{L,ij}\ (|t^+\rangle \otimes |t^0\rangle) = \frac{\lambda_s}{2}\ (|t^0\rangle \otimes |t^+\rangle)\ \ \ $
&
$\ \ \ H_{L,ij}\ (|t^+\rangle \otimes |s\rangle) = \frac{\lambda_s}{2}\ (|s\rangle \otimes |t^+\rangle)$\\
\hline
$H_{L,ij}\ (|t^-\rangle \otimes |t^0\rangle) = \frac{\lambda_s}{2}\ (|t^0\rangle \otimes |t^-\rangle)\ \ \ $
&
$\ \ \ H_{L,ij}\ (|t^-\rangle \otimes |s\rangle) = \frac{\lambda_s}{2}\ (|s\rangle \otimes |t^-\rangle)$ \\
\hline
$H_{L,ij}\ (|t^0\rangle \otimes |t^+\rangle) = \frac{\lambda_s}{2}\ (|t^+\rangle \otimes |t^0\rangle)\ \ \ $
&
$\ \ \ H_{L,ij}\ (|t^0\rangle \otimes |t^-\rangle) = \frac{\lambda_s}{2}\ (|t^-\rangle \otimes |t^0\rangle)$ \\
\hline
$H_{L,ij}\ (|t^0\rangle \otimes |s\rangle) = \frac{\lambda_s}{2}\ (|s\rangle \otimes |t^0\rangle)\ \ \ $
&
$\ \ \ H_{L,ij}\ (|s\rangle \otimes |t^0\rangle) = \frac{\lambda_s}{2}\ (|t^0\rangle \otimes |s\rangle)$ \\
\hline
$H_{L,ij}\ (|s\rangle \otimes |t^+\rangle) = \frac{\lambda_s}{2}\ (|t^+\rangle \otimes |s\rangle)\ \ \ $
&
$\ \ \ H_{L,ij}\ (|s\rangle \otimes |t^-\rangle) = \frac{\lambda_s}{2}\ (|t^-\rangle \otimes |s\rangle)$
\\
\hline \hline
$H_I\ (|t^+\rangle \otimes \left|\uparrow\right\rangle) = \frac{3\alpha_s}{4} (|t^+\rangle \otimes \left|\uparrow\right\rangle)\ \ \ $
&
$\ \ \ H_I\ (|t^-\rangle \otimes \left|\downarrow\right\rangle) = \frac{3\alpha_s}{4} (|t^-\rangle \otimes \left|\downarrow\right\rangle)$ \\
\hline
$H_I\ (|s\rangle \otimes \left|\uparrow\right\rangle) = -\frac{3\alpha_s}{4} (|s\rangle \otimes \left|\uparrow\right\rangle)\ \ \ $
&
$\ \ \ H_I\ (|s\rangle \otimes \left|\downarrow\right\rangle) = -\frac{3\alpha_s}{4} (|s\rangle \otimes \left|\downarrow\right\rangle)$ \\
\hline
\multicolumn{2}{|c|}{$
H_I\ (|t^+\rangle \otimes \left|\downarrow\right\rangle) = -\frac{\alpha_s}{4} (|t^+\rangle \otimes \left|\downarrow\right\rangle)
+ \frac{\alpha_s}{\sqrt{2}} |t^0\rangle \otimes \left|\uparrow\right\rangle$}
\\
\hline
\multicolumn{2}{|c|}{$
H_I\ (|t^-\rangle \otimes \left|\uparrow\right\rangle) = -\frac{\alpha_s}{4} (|t^-\rangle \otimes \left|\uparrow\right\rangle)
- \frac{\alpha_s}{\sqrt{2}} |t^0\rangle \otimes \left|\downarrow\right\rangle$}
\\
\hline
\multicolumn{2}{|c|}{$
H_I\ (|t^0\rangle \otimes \left|\uparrow\right\rangle) = \frac{\alpha_s}{4} (|t^0\rangle \otimes \left|\uparrow\right\rangle) + \frac{\alpha_s}{\sqrt{2}} |t^+\rangle \otimes \left|\downarrow\right\rangle$}
\\
\hline
\multicolumn{2}{|c|}{$H_I\ (|t^0\rangle \otimes \left|\downarrow\right\rangle) = \frac{\alpha_s}{4} (|t^0\rangle \otimes \left|\downarrow\right\rangle) - \frac{\alpha_s}{\sqrt{2}} |t^-\rangle \otimes \left|\uparrow\right\rangle$}
\\
\hline \hline
$H_{R,ij}\ (\left|\uparrow\right\rangle \otimes \left|\uparrow\right\rangle) = \frac{\lambda_s}{4} (\left|\uparrow\right\rangle \otimes \left|\uparrow\right\rangle)$ &
$H_{R,ij}\ (\left|\downarrow\right\rangle \otimes \left|\downarrow\right\rangle) = \frac{\lambda_s}{4} (\left|\downarrow\right\rangle \otimes \left|\downarrow\right\rangle)$ \\
\hline
\multicolumn{2}{|c|}{$
H_{R,ij}\ (\left|\uparrow\right\rangle \otimes \left|\downarrow\right\rangle) = -\frac{\lambda_s}{4} (\left|\uparrow\right\rangle \otimes \left|\downarrow\right\rangle) + \frac{\lambda_s}{2}\ (\left|\downarrow\right\rangle \otimes \left|\uparrow\right\rangle)$}
\\
\hline
\multicolumn{2}{|c|}{$
H_{R,ij}\ (\left|\downarrow\right\rangle \otimes \left|\uparrow\right\rangle) = -\frac{\lambda_s}{4} (\left|\downarrow\right\rangle \otimes \left|\uparrow\right\rangle) + \frac{\lambda_s}{2}\ (\left|\uparrow\right\rangle \otimes \left|\downarrow\right\rangle)$} \\
\hline
\end{tabular}
\caption{\label{tab4} Action of the various terms in the quantum spin Hamiltonian (\ref{shamop}) on the basis states (\ref{sbasis}). }
\end{table}
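A few entries of table \ref{tab4} can be cross-checked numerically. The sketch below (our own bit-encoding of three spin-halves, with $\alpha_s=1$; not taken from the model's implementation) applies $H_I$ of (\ref{shamop}) to basis states and compares with the table.

```python
import math

# Three spin-1/2's: bit i of a basis index is spin i (0 = up, 1 = down).
# Spins 0,1 form the composite site; spin 2 is the fundamental spin.
# A state is a dict {basis index: amplitude}.

def Sz(i, v):
    return {k: a * (0.5 if not (k >> i) & 1 else -0.5) for k, a in v.items()}

def Sp(i, v):   # raising operator: flips spin i from down to up
    return {k ^ (1 << i): a for k, a in v.items() if (k >> i) & 1}

def Sm(i, v):   # lowering operator: flips spin i from up to down
    return {k ^ (1 << i): a for k, a in v.items() if not (k >> i) & 1}

def add(*vs):
    out = {}
    for v in vs:
        for k, a in v.items():
            out[k] = out.get(k, 0.0) + a
    return {k: a for k, a in out.items() if abs(a) > 1e-14}

def scal(c, v):
    return {k: c * a for k, a in v.items()}

def SdotS(i, j, v):
    # S_i.S_j = Sz_i Sz_j + (1/2)(S+_i S-_j + S-_i S+_j)
    return add(Sz(i, Sz(j, v)),
               scal(0.5, Sp(i, Sm(j, v))),
               scal(0.5, Sm(i, Sp(j, v))))

def H_I(v):
    # H_I = alpha_s (S1.S2 + (S1+S2).S3), with alpha_s = 1 here.
    return add(SdotS(0, 1, v), SdotS(0, 2, v), SdotS(1, 2, v))

def close(u, v, eps=1e-12):
    return all(abs(u.get(k, 0.0) - v.get(k, 0.0)) < eps
               for k in set(u) | set(v))

r2 = 1.0 / math.sqrt(2.0)
s_up = {2: r2, 1: -r2}       # |s> x |up>   = (|udu> - |duu>)/sqrt(2)
tp_dn = {4: 1.0}             # |t+> x |down> = |uud>

assert close(H_I({0: 1.0}), {0: 0.75})             # (3/4)|t+,up>
assert close(H_I(s_up), scal(-0.75, s_up))         # -(3/4)|s,up>
assert close(H_I(tp_dn), {4: -0.25, 1: 0.5, 2: 0.5})  # -(1/4)|t+,dn> + (1/sqrt2)|t0,up>
```

The last assertion matches the table entry $H_I(|t^+\rangle\otimes|\downarrow\rangle)=-\frac{\alpha_s}{4}(|t^+\rangle\otimes|\downarrow\rangle)+\frac{\alpha_s}{\sqrt{2}}\,|t^0\rangle\otimes|\uparrow\rangle$ once $|t^0\rangle\otimes|\uparrow\rangle$ is expanded in the product basis.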
\section{Conclusions}
\label{sec6}
In this work we solved the sign problem of a simple frustrated quantum spin model in which three quantum spin chains interact through a frustrated antiferromagnetic coupling. The solution required the use of a modified spin basis involving more than one lattice site of the original model. The idea behind the solution was obtained by starting from a fermion model that reproduced the spin model in the strong coupling limit. We then converted the fermion model into a non-homogeneous one dimensional problem. The solution emerged in the limit where fermions remain paired (into $s-$ and $t$-type bosons) on half of the space but remain unpaired in the other half. The frustrating interaction involved exchanging bosons and fermions at the impurity site. We showed that worldline configurations of bosons and fermions remain positive even in the presence of such an exchange. The solution remains valid even when the bosons and fermions are not present on every site as required in the original spin model. Thus, in a way we have found a solution to an extended class of problems that go beyond the original spin model. Another important lesson we learned is that although the frustration that produces the sign problem is localized, it seemed necessary to change the basis in a macroscopic region in order to solve the sign problem completely. All local reformulations of the basis we tried on a few sites close to the frustration did not yield a solution. This seems consistent with earlier findings \cite{PhysRevB.92.195126}.
Our solution should be extendable to other models as long as they can be reduced to a series of one dimensional problems that may be coupled through interactions and that are diagonal in the worldline representation. It is also likely that other simple frustrated models can be solved using similar ideas. Unfortunately worldline configurations in our model with unpaired fermions can still have negative weights. The solution will need resummation of fermion worldlines, which we did not attempt in this work. Perhaps ideas like the meron cluster solution \cite{PhysRevLett.83.3116} and the fermion bag approach \cite{Chandrasekharan:2013rpa} would be helpful in this regard. It is also likely that solutions to fermionic models on ladder geometries could emerge from our work. Finally, it would also be interesting to explore if partition functions of more complex models in higher dimensions could be reduced to a sum of partition functions of simpler models of the type encountered here.
\section*{Acknowledgments}
We would like to thank Kedar Damle, Diptiman Sen and Uwe-Jens Wiese for helpful conversations. SC would like to thank the Center for High Energy Physics at the Indian Institute of Science for hospitality, where part of this work was done. The material presented here is based upon work supported by the U.S. Department of Energy, Office of Science, Nuclear Physics program under Award Number DE-FG02-05ER41368. EH is also supported by a National Physical Science Consortium fellowship.
\section{Introduction}
Thermalization of a vacuum due to acceleration has been first suggested
by Fulling \cite{Fulling1973}. Unruh \cite{Unruh1976} and DeWitt
\cite{dewitt1979general} have shown that particles with Planckian
distribution are actually observed by an accelerated detector in an
inertial vacuum (Minkowski vacuum); such a detector is called an Unruh-DeWitt
detector. Since then, numerous papers have been published on this
topic (see \cite{takagi1986vacuum,Fulling2005,Crispino2008}
and references therein). Compared to the well established effect of
linear acceleration, the particle detection due to acceleration of
rotational motion is still controversial.
Letaw and Pfautsch \cite{Letaw1981} investigated vacua corresponding
to various Killing flows in a flat spacetime. They have categorized
Killing flows into six classes, and examined the vacuum in each class.
Their conclusion is that there are only two inequivalent vacua in
a flat spacetime, which they called Minkowski vacuum and Fulling vacuum.
The vacuum with the Killing flow of circular rotation is the Minkowski
vacuum, which means an observer in rotational motion does not see
particles in a Minkowski vacuum. However, analysis using Unruh-DeWitt
detector indicates that a rotating detector will register particles
\cite{Letaw1981a,Kim1987} (see also \cite{Bell1983,Bell1987}).
Davies \textit{et al}.\ \cite{Davies1996} found that the existence
of a static limit is necessary for the particle detection in a rotating
orbit. They have shown that a detector is not excited in the Minkowski
vacuum confined by a boundary inside the light cylinder. From this
result they suggested the possibility of detector excitation by negative
energy. It was Korsbakken and Leinaas \cite{Korsbakken2004} (referred
to as Paper 1 hereafter) who revealed the actual process by which a rotating
detector observes particles; they found the detector is excited
by the emission of negative energy quanta.
It was argued in Paper 1 that some wave modes can have negative generalized
energy when the flow has a static limit. Here let us use more specific
words {}``Killing energy'' for what termed {}``generalized energy''
in Paper 1; it is the energy-momentum along the Killing flow in interest.
Two specific Killing flows are investigated in detail in Paper 1:
one corresponds to the spatial circular rotation and the other is
the flow with drift motion superposed on linear acceleration. The
excitation of detector due to the absorption of negative Killing energy
was found in both cases.
\bigskip{}
The present paper is stimulated by the results in Paper 1 and explores
the response of accelerating detectors. Our point is that the detector's
response depends on the choice of surface (three-volume) used to define
it, and the detector excitation by negative Killing energy does not
occur when we choose the surface appropriately.
The Unruh-DeWitt detector is a hypothetical monopole interacting with
the field at one volumeless point. The orbit of the detector is externally
given, and its internal energy is the energy measured in the frame
comoving with the orbit; it becomes Killing energy when the orbit
is along the Killing flow. The detector's interaction is represented
by a small interaction term added to the Hamiltonian of the whole
system. In general, a Hamiltonian is defined by the integration of
the energy-momentum tensor over a surface of constant time. Therefore,
what a detector observes is the Killing energy integrated over the
surface of Hamiltonian.
The Hamiltonian used in Paper 1 is on a surface of a constant Minkowski
time, which is not normal to the detector's orbit. The measured Killing
energy becomes a combination of energy and momentum integrated over
the surface of constant Minkowski time. This can be negative for waves
with large negative momentum, and the detector is excited by the emission
of such waves.
In contrast, the Killing energy is always positive when integrated
over a surface normal to the Killing flow, just like the pressure
(three dimensional momentum flow) is always positive. Therefore, a
detector does not perceive negative Killing energy when we define
it on the normal surface.
These two results do not contradict each other; the two detectors observe
different physical quantities, which do not have to agree. A similar situation
takes place for the acceleration superposed on the drift motion.
\bigskip{}
In the present paper we first investigate why and how the negative
Killing energy modes can exist as a result of the surface choice. As
we will see, negative Killing energy occurs when a wave with
phase speed slower than the detector crosses a surface oblique to
the Killing flow.
Though the wave phase speed is usually faster than the speed of light
for planar waves, it can be locally slower in the cylindrical modes
for the rotational vacuum as in our case. However, it is not intuitively
easy to understand the underlying mechanism with the cylindrical modes
expressed with Bessel functions. Therefore, we firstly mimic the slower
phase speed using hypothetical planar waves with imaginary mass in
Section II. The detector is moving with inertial motion there. This
is fake but not entirely unrealistic; we can clarify the mechanism
of negative energy mode with it.
Then we move on to the accelerating detectors with realistic models.
The response of a rotating detector to the Minkowski vacuum is examined
in Section III; we find the detector will not be excited with an appropriate
choice of surface. In Section IV another similar detector motion,
namely drift superposed on linear acceleration, is investigated.
Again a properly defined detector has no excitation due to the negative
Killing energy; it detects only the particles with Planckian distribution
of the ordinary Unruh effect with Doppler shift. Section V is for
brief concluding remarks.
\section{Negative Killing Energy Mode}
Here in this section we examine how the negative Killing energy modes
occur with a simple and somewhat unrealistic model. To begin with,
let us clarify the definition of Killing energy. Let $\zeta_{\mu}$
be the unit vector in the direction of a Killing flow and $T_{\nu}^{\mu}$
be the energy-momentum tensor. What we call Killing energy in the
present paper is the energy-momentum along the Killing flow; its flux
$j_{\nu}$ is defined as \begin{equation}
j_{\nu}=\zeta_{\mu}T_{\nu}^{\mu}\,.\end{equation}
The gross Killing energy is obtained by integrating the above flux
over a specific surface.
In the following we examine the Killing energy of a real valued two
dimensional field with wave equation \begin{equation}
\phi_{,tt}-\phi_{,xx}-m^{2}\phi=0\,\label{eq:wave}\end{equation}
where we write $\partial\phi/\partial t=\phi_{,t}$, etc., in shorthand.
We use the unit system of $c=\hbar=1$ (speed of light = Planck constant
= unity) throughout the present paper.
Suppose a hypothetical monopole detector with internal degree of freedom
$\mu$ is coupled with the above scalar field $\phi$ by a small coupling
constant $c$; the detector is moving along a fixed trajectory $(t,x)=(T(\tau),X(\tau))$,
where $\tau$ is the detector's proper time. The total Hamiltonian
of the system is given as \begin{equation}
H(t)=\int_{t=\textnormal{const}}\left[h_{\phi}+c\left(\frac{dT}{d\tau}\right)^{-1}{\mu}(\tau)\phi(t,x)\delta(x-X)\right]dx+\left(\frac{dT}{d\tau}\right)^{-1}H_{\mu}(\tau(t))\label{detector}\end{equation}
where $h_{\phi}=\frac{1}{2}(\phi_{,t}^{2}+\phi_{,x}^{2}+m^{2}\phi^{2})$
is the Hamiltonian density of the field and $H_{\mu}(\tau)$ is the
internal Hamiltonian of the detector, which is independent of the
proper time $\tau$. The time evolution of the detector's internal
dynamics is along its proper time $\tau$, which can be inversely written
as a function of $t$; $\tau$ and $X$ in the above expression should
be understood as $\tau=\tau(t)$ and $X=X(\tau(t))$. The factor $(dT/d\tau)^{-1}$
in the coupling term comes from the same reason, i.e., the interaction
takes place along the detector's proper time $\tau$, while the Hamiltonian
is defined along the Minkowski time $t$. It should be noted that
the Hamiltonian is defined by the integration over a Cauchy surface
of $t=\textnormal{constant}$ which depends on the choice of $(t,x)$
coordinates.
The total Hamiltonian $H(t)$ is time dependent since $X$ depends
on time $t$. This means $\partial H/\partial t\ne0$. On the other
hand, the Killing energy is a conserved quantity since the detector
moves along the Killing flow. It is conserved locally due to Noether's
theorem, so its integration over any Cauchy surface is also conserved.
Thus we can write \begin{equation}
\frac{\partial}{\partial t}\int_{t=\textnormal{const}}dx[\zeta_{t}(t,x)h_{\phi}-\zeta_{x}(t,x)p_{\phi}]=\frac{\partial}{\partial\tau}H_{\mu}\,,\label{eq:conserve}\end{equation}
where $p_{\phi}=\phi_{,t}\phi_{,x}$ is the momentum flux across
the surface of constant $t$. We neglected in the above expression
the interaction energy, which vanishes under long-time averaging. The above
expression means what the detector measures is the Killing energy
integrated over a surface of constant $t$.
\bigskip{}
Now let us suppose the wave field is in the vacuum and the detector
is in the state with lowest energy $E_{0}$; note that the detector's
energy $E$, i.e., the eigenvalue of $H_{\mu}$, is the energy in
the detector's frame since the time evolution with $H_{\mu}$ is determined
by the proper time. The detector moves along the Killing flow, thus
$E$ is the Killing energy. The state vector of the total system is
decomposed as \begin{equation}
\left|E,\Psi\right\rangle =\left|E_{0}\right\rangle \left|0\right\rangle \,.\end{equation}
The transition by the coupling occurs only for $\left|0\right\rangle \rightarrow\left|1_{k}\right\rangle $
($1_{k}$ denotes the state with one particle in mode $k$) when the
coupling constant $c$ is small enough (see, e.g., \cite{DeWitt2003}):
\begin{align}
A(E,1;E_{0},0) & =ic\left\langle E,1_{k}\right|\int_{-\infty}^{\infty}dt\,\int dx\,\left(\frac{dT}{d\tau}\right)^{-1}\mu(\tau)\phi(t,x)\delta(x-X)\left|E_{0},0\right\rangle \nonumber \\
\, & =ic\left\langle E,1_{k}\right|\int_{-\infty}^{\infty}d\tau\,\mu(\tau)\phi(T(\tau),X(\tau))\left|E_{0},0\right\rangle \nonumber \\
\, & =ic\left\langle E\right|\mu(0)\left|E_{0}\right\rangle \int_{-\infty}^{\infty}d\tau\, e^{i\Delta E\tau}\left\langle 1_{k}\right|\phi(T(\tau),X(\tau))\left|0\right\rangle \,,\label{eq:defA-1}\end{align}
In the past literature, the transition probability $|A|^{2}$ expressed
with the Wightman function is often used to reach the same conclusion.
However, we use the above expression because emission/absorption of
quanta is more transparent in this form; one can obtain the same result
with the Wightman function.
The field can be expanded as \begin{equation}
\phi=\sum_{k}\left(a_{k}^{+}e^{-i\omega t}+a_{k}^{-}e^{i\omega t}\right)\, e^{ikx}\,,\end{equation}
where $\omega>0$ and $a_{k}^{+}$ {[}$a_{k}^{-}${]} are the annihilation
{[}creation{]} operators for the particles in mode $k$. This definition
of annihilation and creation operators is based on the Hamiltonian
on the surface of constant $t$, therefore, the detector's excitation
calculated by these operators is in response to the Killing energy
integrated over that surface as in (\ref{eq:conserve}).
Since the terms with the annihilation operator $a_{k}^{+}$ vanish for
the vacuum state $|0\rangle$, the transition coefficient in (\ref{eq:defA-1})
reduces to \begin{equation}
\left\langle 1_{k}\right|\phi(T(\tau),X(\tau))\left|0\right\rangle =\exp i(\omega T(\tau)+kX(\tau))\,.\label{eq:creation}\end{equation}
Suppose the detector is moving with a constant velocity $(\zeta_{0t},\zeta_{0x})$,
i.e., $(T,X)=(\zeta_{0t}\tau,\zeta_{0x}\tau)$. Then we have \begin{equation}
\int_{-\infty}^{\infty}d\tau\, e^{i\Delta E\tau}\left\langle 1_{k}\right|\phi(T(\tau),X(\tau))\left|0\right\rangle =\delta(\Delta E+\zeta_{0t}\omega+\zeta_{0x}k)\label{transition}\end{equation}
The above expression vanishes for ordinary plane waves since $\omega\ge|k|$
and the detector's speed must be less than unity (=speed of light).
Nevertheless, it can survive for mode functions with Bessel (or Macdonald)
functions, for which $\omega$ can be smaller than $|k|$ locally,
as we will see in the following sections.
However, calculations with Bessel functions in a four dimensional
space is complicated and not easy to understand what is happening
intuitively. Therefore in this section we artificially assume the
mass $m$ is imaginary, i.e., $m^{2}<0$ so that $\omega<|k|$. Although
this is somewhat unrealistic, it can demonstrate how a detector is
excited by negative Killing-energy.
When $\omega<|k|$ the argument of the $\delta$ function in (\ref{transition})
can vanish for a large $|k|$ even though $\Delta E>0$. This means the detector is excited
by the emission of negative Killing energy, since the term in (\ref{eq:creation})
comes from a creation operator.
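This mechanism can be made quantitative in a small numerical sketch (illustrative numbers of our choosing): with $m^{2}<0$ the dispersion relation gives $\omega<|k|$, and modes whose phase speed $\omega/|k|$ is below the drift speed $v$ carry negative Killing energy $\zeta_{0t}\omega+\zeta_{0x}k$, while for $m^{2}>0$ this quantity is positive for every $k$.

```python
import math

def killing_energy(k, m2, v):
    # zeta_{0t}*omega + zeta_{0x}*k for a detector drifting with speed v,
    # with the dispersion omega = sqrt(k^2 + m2).  m2 < 0 is the artificial
    # imaginary mass of this section (modes need k^2 + m2 >= 0).
    g = 1.0 / math.sqrt(1.0 - v * v)
    omega = math.sqrt(k * k + m2)
    return g * (omega + v * k)

v = 0.8
# Ordinary massive modes (m^2 = +1): Killing energy positive for every k.
assert all(killing_energy(0.05 * n, 1.0, v) > 0.0 for n in range(-200, 201))
# Imaginary mass (m^2 = -1): modes moving against the drift with phase
# speed omega/|k| < v, i.e. |m| <= |k| < gamma*|m|, have negative energy.
assert killing_energy(-1.2, -1.0, v) < 0.0   # slow mode: negative
assert killing_energy(-3.0, -1.0, v) > 0.0   # fast mode: positive again
```

The window $|m|\le|k|<\gamma|m|$ read off from the code follows directly from $\sqrt{k^{2}-|m|^{2}}<v|k|$.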
\bigskip{}
In contrast, such negative Killing energy emission does not occur
when we perform the same calculation in the detector's rest frame.
Let us introduce a new frame $\Sigma'$ which is moving with a velocity
$(u_{t},u_{x})=(\zeta_{0t},\zeta_{0x})$ relative to the original
frame (let's call the original frame $\Sigma$); the coordinates in
the frame $\Sigma'$ becomes \begin{equation}
t'=\zeta_{0t}t-\zeta_{0x}x\,,\;\; x'=\zeta_{0t}x-\zeta_{0x}t\end{equation}
The trajectory of the detector is $(T'(\tau),X'(\tau))=(t',0)$.
When we do the same calculation as above, (\ref{transition}) becomes
\begin{equation}
\int_{-\infty}^{\infty}d\tau\, e^{i\Delta E\tau}\left\langle 1_{k}\right|\phi(T'(\tau),X'(\tau))\left|0\right\rangle =\delta(\Delta E+\omega')\,,\end{equation}
where $\omega'=|\zeta_{0t}\omega+\zeta_{0x}k|$ is the frequency in $\Sigma'$
of the mode attached to the creation operator. The above expression always
vanishes since $\Delta E>0$ and $\omega'>0$.
This discrepancy occurs because the energy-momentum tensor is
a flux density and its sign depends on the flow direction across the
surface. This situation is illustrated in Figure 1. Two dashed lines
indicate the constant time surface in $\Sigma$ and $\Sigma'$ respectively,
and the hollow arrow is the phase speed of the negative Killing energy
mode in $\Sigma$. The energy-momentum carried by this wave crosses
the constant time surfaces of $\Sigma$ and $\Sigma'$ from the opposite
side, and the flux has opposite sign correspondingly.
\begin{figure}
\begin{centering}
\includegraphics{fig1}
\par\end{centering}
\caption{The wave direction crossing the surfaces of constant time. Dashed
lines are the constant-time surfaces of $\Sigma$ (constant $t$)
and $\Sigma'$ (constant $t'$) respectively. In $\Sigma'$ the wave
crosses the surface in the increasing $t'$ direction, which is decreasing
$t$ in $\Sigma$. }
\end{figure}
From the above consideration, we understand the negative Killing energy
in $\Sigma$ becomes positive with the same absolute value in $\Sigma'$.
Therefore, if we wish to express the Killing energy in $\Sigma'$
with the parameters defined on $\Sigma$, a straightforward Lorentz
transform gives the wrong sign. We can obtain the correct answer by replacing
$\zeta_{0t}\omega+\zeta_{0x}k$ with $|\zeta_{0t}\omega+\zeta_{0x}k|$.
\section{Rotating Detector}
In this section we examine the response of a rotating detector. Without
loss of generality we can express the detector's orbit using the proper
time $\tau$ as \begin{equation}
(T(\tau),R(\tau),\Theta(\tau),Z(\tau))=(\gamma\tau,r_{0},\gamma\Omega\tau,0)\end{equation}
in the cylindrical coordinates $(t,r,\theta,z)$; the radial distance
$r_{0}$ and the angular velocity $\Omega$ are constants, and $\gamma=1/\sqrt{1-\Omega^{2}r_{0}^{2}}$.
The detector cannot move faster than the speed of light, thus $1>\Omega r_{0}$.
The massless Klein-Gordon equation in the cylindrical coordinates
may be written as\begin{equation}
\frac{\partial^{2}}{\partial t^{2}}\phi-\left(\frac{1}{r}\frac{\partial}{\partial r}r\frac{\partial}{\partial r}+\frac{1}{r^{2}}\frac{\partial^{2}}{\partial\theta^{2}}+\frac{\partial^{2}}{\partial z^{2}}\right)\phi=0\,.\end{equation}
The mode functions to the above wave equation is \begin{equation}
\psi_{hmk}^{\pm}=\frac{J_{m}(hr)}{\sqrt{8\omega\pi^{2}}}\,\exp[-i(\pm\omega t-m\theta-kz)]\,,\label{eq:bessel}\end{equation}
where $J_{m}$ is a Bessel function of order $m$ and $\omega=\sqrt{h^{2}+k^{2}}>0$.
The unit vector in the Killing flow direction is expressed as $(\zeta_{t},\zeta_{r},\zeta_{\theta},\zeta_{z})=(\gamma,0,\gamma\Omega r_{0},0)$.
Then the Killing energy of the mode function across the surface of
constant $t$ (let us call this surface $S$) becomes \begin{equation}
E_{S}=\zeta_{t}T_{t}^{t}-\zeta_{\theta}T_{t}^{\theta}=\gamma(\omega+m\Omega)J_{m}(hr_{0})^{2}\,,\end{equation}
at the detector's orbit. This can be negative for large negative
$m\Omega$, which may seem peculiar because it means the local frequency
is smaller than the wave number. Suppose we introduce a WKB-like approximation
in a small region around $(r,\theta)\sim(r_{0},\theta_{0})$ as \begin{equation}
\psi_{p}^{+}\propto\exp[-i(\omega t-k_{r}(r_{0})(r-r_{0})-k_{\theta}r_{0}(\theta-\theta_{0})-k_{z}z)]\,.\end{equation}
The above expression is not a quantitative approximation, but just
a rough sketch to illustrate what is happening. The negative $E_{S}$
means $\omega(r_{0})<|k_{\theta}|$ which is not true for ordinary
planar waves since $\omega=\sqrt{k_{r}^{2}+k_{\theta}^{2}+k_{z}^{2}}$.
In this case, however, it can be true because the Bessel function
$J_{m}(hr)$ is exponentially small for $hr\ll|m|$ at large $|m|$. Then
$k_{r}(r_{0})$ in the above approximation becomes imaginary to make
$\omega$ smaller than $|k_{\theta}|$. This situation is well mimicked
by the imaginary mass we introduced in the previous section, and the
details we examined there are also valid here.
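The evanescent behaviour invoked above can be checked with a stdlib-only evaluation of $J_{m}$ through its integral representation $J_{m}(x)=\pi^{-1}\int_{0}^{\pi}\cos(mt-x\sin t)\,dt$ (the parameters below are illustrative, and we take $k=0$ so that $h=\omega$): inside the light cylinder, a mode with $\omega<|m|\Omega$ has $hr_{0}<|m|$ and hence an exponentially small amplitude at the orbit.

```python
import math

def bessel_J(m, x, n=4000):
    # J_m(x) = (1/pi) * integral_0^pi cos(m*t - x*sin(t)) dt,
    # composite trapezoid rule (very accurate here: the integrand's odd
    # derivatives vanish at both endpoints).
    h = math.pi / n
    s = 0.5 * (1.0 + math.cos(m * math.pi))
    for i in range(1, n):
        t = i * h
        s += math.cos(m * t - x * math.sin(t))
    return s * h / math.pi

# Rotating detector inside the light cylinder: Omega*r0 = 0.5 < 1.
Omega, r0 = 0.5, 1.0
m = -20
# Negative Killing energy needs omega < |m|*Omega = 10; any such mode has
# h*r0 = omega*r0 < |m|, where J_|m| is exponentially small.
omega = 0.9 * abs(m) * Omega           # = 9, inside the forbidden window
amp = bessel_J(abs(m), omega * r0)
assert abs(amp) < 1e-4                 # evanescent: tiny coupling amplitude
# For comparison, a mode with h*r0 > |m| is not suppressed.
assert abs(bessel_J(5, 10.0)) > 0.1    # J_5(10) is of order 0.1
```

This is the quantitative counterpart of the imaginary $k_r(r_0)$ in the rough WKB sketch above.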
On the other hand, the Killing energy across the surface normal to
the Killing flow (denoted by $S'$) is \begin{equation}
E_{S'}=\zeta_{t}T_{t}^{t}\zeta^{t}-\zeta_{\theta}T_{t}^{\theta}\zeta^{t}-\zeta_{t}T_{\theta}^{t}\zeta^{\theta}+\zeta_{\theta}T_{\theta}^{\theta}\zeta^{\theta}=\gamma^{2}\omega^{-1}(\omega+m\Omega)^{2}J_{m}(hr_{0})^{2}\,,\end{equation}
which is always positive. Note that the above Killing energy density
is the one per unit volume. However, energy-momentum of a wave as
a physical entity should be a density per wave length because the
wave length changes due to the Lorentz transform. The Killing energy
density per wave length can be obtained by multiplying the above expression
by a factor $\omega\gamma^{-1}/(\omega+m\Omega)$, which yields the
energy density $\gamma|\omega+m\Omega|J_{m}^{2}$, consistent with
the result in the previous section.
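The opposite signs on the two surfaces can be displayed numerically (illustrative parameters of our choosing, with $k=0$ so that $h=\omega$, and $J_{-m}^{2}=J_{m}^{2}$): for a mode with $\omega+m\Omega<0$, $E_{S}=\gamma(\omega+m\Omega)J_{m}(hr_{0})^{2}$ is negative while $E_{S'}=\gamma^{2}\omega^{-1}(\omega+m\Omega)^{2}J_{m}(hr_{0})^{2}$ stays positive.

```python
import math

def bessel_J(m, x, n=4000):
    # J_m(x) = (1/pi) * integral_0^pi cos(m*t - x*sin(t)) dt (trapezoid rule)
    h = math.pi / n
    s = 0.5 * (1.0 + math.cos(m * math.pi))
    for i in range(1, n):
        t = i * h
        s += math.cos(m * t - x * math.sin(t))
    return s * h / math.pi

Omega, r0 = 0.5, 1.0                  # inside the light cylinder
gamma = 1.0 / math.sqrt(1.0 - (Omega * r0) ** 2)

def E_S(omega, m):
    # Killing energy density across the constant-t surface S at the orbit
    return gamma * (omega + m * Omega) * bessel_J(abs(m), omega * r0) ** 2

def E_Sprime(omega, m):
    # ... across the surface S' normal to the Killing flow
    return (gamma ** 2 / omega) * (omega + m * Omega) ** 2 \
        * bessel_J(abs(m), omega * r0) ** 2

omega, m = 8.0, -20                   # omega + m*Omega = -2 < 0
assert E_S(omega, m) < 0.0            # negative on S ...
assert E_Sprime(omega, m) > 0.0       # ... positive on the normal surface S'
```

Both densities carry the same (exponentially small) Bessel factor, so the difference is purely in the sign of the kinematic prefactor.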
\bigskip{}
Now we define the detector in the same way as (\ref{detector}) on
$S$. The coefficient for the transition from the Minkowski state
to the one particle state is evaluated as \begin{eqnarray}
A(E,1_{hmk};E_{0},0_{M}) & = & ic\left\langle E,1_{hmk}\right|\int_{-\infty}^{\infty}d\tau\,\mu(\tau)\phi(T(\tau),X_{i}(\tau))\left|E_{0},0\right\rangle \nonumber \\
\, & = & ic\left\langle E\right|\mu(0)\left|E_{0}\right\rangle \int_{-\infty}^{\infty}d\tau\, e^{i\Delta E\tau}\left\langle 1_{hmk}\right|\phi(T(\tau),X_{i}(\tau))\left|0\right\rangle \,,\label{eq:defA-2}\end{eqnarray}
where $\Delta E=E-E_{0}$ and $X_{i}(\tau)=(R(\tau),\Theta(\tau),Z(\tau))$
denotes the detector's spatial position.
We expand the field $\phi$ with the mode functions in (\ref{eq:bessel})
as \begin{equation}
\phi=\sum_{hmk}(a_{hmk}^{+}\psi_{hmk}^{+}+a_{hmk}^{-}\psi_{hmk}^{-})\,.\end{equation}
The mode expansion above is based on the Hamiltonian
on $S$; therefore, the flux of Killing energy calculated with this
expansion is the one across the surface $S$ as in the previous section.
The terms with annihilation operators vanish for the transition of
$\left|0\right\rangle \rightarrow\left|1_{hmk}\right\rangle $, thus
we have \begin{align}
\int_{-\infty}^{\infty}d\tau\, e^{i\Delta E\tau} & \left\langle 1_{hmk}\right|\phi(T(\tau),X_{i}(\tau))\left|0\right\rangle \nonumber \\
= & \frac{J_{m}(hr_{0})}{\sqrt{8\omega\pi^{2}}}\,\int_{-\infty}^{\infty}d\tau\, e^{i\Delta E\tau}\,\exp(i\gamma(\omega+m\Omega)\tau)\label{eq:delta}\\
= & \frac{J_{m}(hr_{0})}{\sqrt{8\omega\pi^{2}}}\delta(\Delta E+\gamma(\omega+m\Omega))\,.\nonumber \end{align}
This means the excitation of the detector takes place when $\omega+m\Omega$
is negative; the rotating detector observes particles due to the emission
of negative Killing energy. The amplitude of the above coefficient
is small because the Bessel function becomes exponentially small within
the static limit when $\omega<|m\Omega|$.
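This exponential suppression of $J_{m}$ when its argument lies well below its order can be illustrated with a short numerical sketch (pure Python, using the standard power series for integer-order Bessel functions; the order and arguments below are arbitrary examples, not values from the text):

```python
import math

def bessel_j(m, x, terms=60):
    """Integer-order Bessel function J_m(x) from its power series:
    J_m(x) = sum_k (-1)^k / (k! (k+m)!) * (x/2)**(2k+m)."""
    return sum((-1)**k / (math.factorial(k) * math.factorial(k + m))
               * (x / 2.0)**(2 * k + m) for k in range(terms))

# Order m = 20: the function is exponentially small for arguments x << m,
# the analogue of omega < |m Omega| inside the static limit.
deep_inside = bessel_j(20, 5.0)    # argument far below the order
near_limit = bessel_j(20, 19.0)    # argument close to the order

assert deep_inside < 1e-9          # many orders of magnitude suppressed
assert near_limit > 1e-2           # no longer suppressed
```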
\bigskip{}
This excitation by negative Killing energy does not occur when we
choose the surface $S'$ to define the detector as we discussed in
the previous section. Let us introduce new coordinates $(s,\varphi,r,z)$
as \begin{equation}
t=s-r^{2}\Omega\varphi\,,~\theta=\varphi+\Omega s\,,\label{eq:varphi}\end{equation}
with $r$ and $z$ unchanged. The surface $S'$ is specified by a
constant $s$, which is normal to the $s$ axis.
The mode functions then become \begin{equation}
\psi_{hm'k}^{\prime\pm}=\frac{1}{\sqrt{8\omega\pi^{2}}}\, e^{\mp\omega's}J_{m}(hr)\, e^{im'\varphi}e^{ikz}\,.\end{equation}
where $\omega'=|\omega-m\Omega|>0$. The field can be expanded with
these mode functions as
\begin{equation}
\phi=\sum_{hmk}(a_{hm'k}^{\prime+}\psi_{hm'k}^{\prime+}+a_{hm'k}^{\prime-}\psi_{hm'k}^{\prime-})\,.\end{equation}
Since $\varphi$ is constant along the orbit, the same calculation
as in (\ref{eq:delta}) yields \begin{align}
\int_{-\infty}^{\infty}d\tau\, e^{i\Delta E\tau} & \left\langle 1_{hmk}\right|\phi(T(\tau),X_{i}(\tau))\left|0\right\rangle \nonumber \\
= & \frac{J_{m}(hr_{0})}{\sqrt{8\omega\pi^{2}}}\delta(\Delta E+\gamma\omega')\,.\end{align}
The above expression vanishes since the argument of the $\delta$
function is always positive, which means no excitation of the detector.
\section{Accelerating Detector with Drift}
Another class of vacua was investigated in Paper 1. The Killing flow
$K^{\mu}$ to define it is expressed in rectangular coordinates $(t,x,y,z)$
as\begin{equation}
(K^{t},K^{x},K^{y},K^{z})\propto(\Gamma x,\Gamma t,1,0)\,,\label{eq:killing-1}\end{equation}
which is a flow accelerating in the $xt$ plane superposed with a
constant drift in the $y$ direction. This flow becomes spacelike
when $\Gamma\xi<1$ and therefore, the surface of $\Gamma\xi=1$ is
the static limit. Readers are referred to Paper 1 for more details;
the above expression is identical to the equation (37) in Paper 1
with $\Gamma=(a^{2}-\omega^{2})/\omega$.
It is reported in Paper 1 that the detector excitation due to the
negative Killing energy takes place here in the same way as for the
rotating detector. We will see that the excitation can be avoided
again when we design the detector appropriately as in the previous
section.
A detector's orbit along the Killing flow (\ref{eq:killing-1}) is
expressed as\begin{align}
T & =\xi_{0}\sinh(\zeta_{0\eta}\tau/\xi_{0})\,,\nonumber \\
X & =\xi_{0}\cosh(\zeta_{0\eta}\tau/\xi_{0})\,,\\
Y & =\zeta_{0y}\tau\,,\; Z=0\,.\nonumber \end{align}
The parameters $\zeta_{0\eta}$ and $\zeta_{0y}$ are constants corresponding
to the detector's four-velocity: $\zeta_{0\eta}=g^{-1}\Gamma\xi_{0}$
and $\zeta_{0y}=g^{-1}$ with $g=\sqrt{\Gamma^{2}\xi_{0}^{2}-1}$.
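As a sanity check on these constants (a sketch with hypothetical values of $\Gamma$ and $\xi_{0}$, requiring $\Gamma\xi_{0}>1$), one can verify numerically that the orbit's tangent is a properly normalized four-velocity, $-\dot{T}^{2}+\dot{X}^{2}+\dot{Y}^{2}=-1$ in the $(-,+,+,+)$ signature:

```python
import math

# Hypothetical parameters (not from the text); Gamma * xi0 > 1 is required
Gamma, xi0 = 1.3, 2.0
g = math.sqrt(Gamma**2 * xi0**2 - 1.0)
zeta_eta = Gamma * xi0 / g   # zeta_{0 eta}
zeta_y = 1.0 / g             # zeta_{0 y}

def norm_squared(tau):
    """Minkowski norm of the tangent (dT/dtau, dX/dtau, dY/dtau) along the orbit."""
    arg = zeta_eta * tau / xi0
    dT = zeta_eta * math.cosh(arg)   # d/dtau of xi0 sinh(zeta_eta tau / xi0)
    dX = zeta_eta * math.sinh(arg)   # d/dtau of xi0 cosh(zeta_eta tau / xi0)
    dY = zeta_y
    return -dT**2 + dX**2 + dY**2

for tau in (0.0, 0.7, 2.5):
    assert math.isclose(norm_squared(tau), -1.0)
```

The check reduces to $-\zeta_{0\eta}^{2}+\zeta_{0y}^{2}=-(\Gamma^{2}\xi_{0}^{2}-1)/g^{2}=-1$, independent of $\tau$.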
We can calculate the transition amplitude of the process $\left|E_{0},0\right\rangle \rightarrow\left|E,1_{k_{i}}\right\rangle $
in the same way as the previous section as\begin{eqnarray}
A(E,1_{k_{i}};E_{0},0) & = & ic\left\langle E,1_{k_{i}}\right|\int_{-\infty}^{\infty}d\tau\, m(\tau)\phi(T(\tau),X_{i}(\tau))\left|E_{0},0\right\rangle \nonumber \\
\, & = & ic\left\langle E\right|m(0)\left|E_{0}\right\rangle \int_{-\infty}^{\infty}d\tau\, e^{i\Delta E\tau}\left\langle 1_{k_{i}}\right|{\phi}(T,X_{i})\left|0\right\rangle \,,\label{eq:defA}\end{eqnarray}
where $\Delta E=E-E_{0}$ and $(t,x,y,z)=(T(\tau),X(\tau),Y(\tau),Z(\tau))=(T(\tau),X_{i}(\tau))$
is the detector's trajectory; $\left|1_{k_{i}}\right\rangle $ is
the one-particle state with mode $k_{i}$. The field $\phi$ is expanded
as\begin{equation}
\phi=\left(a_{k_{i}}^{+}\, e^{-i\omega t}+a_{k_{i}}^{-}\, e^{i\omega t}\right)e^{ik_{i}x^{i}}\,.\end{equation}
Again this expression means that the Killing energy is the one across
the surface of constant $t$ as in the previous section. With the
above expansion we obtain\begin{align}
\int_{-\infty}^{\infty}d\tau & \, e^{i\Delta E\tau}\left\langle 1_{k}\right|\phi(T(\tau),X(\tau),Y(\tau),Z(\tau))\left|0_{M}\right\rangle \nonumber \\
= & \frac{1}{\sqrt{2\pi\omega}}\int_{-\infty}^{\infty}d\tau\, e^{i\Delta E\tau}\exp i(\omega T(\tau)+k_{i}X^{i}(\tau))\end{align}
To simplify the detector's orbit we introduce the Rindler coordinates
for the $tx$ plane as\begin{equation}
t=\xi\sinh\kappa\eta,\; x=\xi\cosh\kappa\eta\;,\label{eq:rindler-1}\end{equation}
where $\kappa$ is an arbitrary constant to make the arguments of
hyperbolic functions dimensionless so that $\eta$ has the unit of
length. The detector's orbit is expressed as $(\eta,\xi,y,z)=(\kappa^{-1}\xi_{0}^{-1}\zeta_{0\eta}\tau,\xi_{0},\zeta_{0y}\tau,0)$
with these coordinates. The mode functions in this coordinate system
are expressed as \begin{equation}
\psi_{p}^{\pm}=\frac{\sqrt{\sinh\sigma\kappa^{-1}}}{2\pi^{2}}\, e^{\mp\sigma\eta}K_{ip}(h\xi)\exp i(k_{y}y+k_{z}z)\,.\end{equation}
where $K_{ip}$ is the Macdonald function (modified Bessel function)
of imaginary order and $\sigma=|p|,\, h=\sqrt{k_{y}^{2}+k_{z}^{2}}$.
The Minkowski modes can be expanded by the above mode functions as\begin{equation}
\frac{1}{\sqrt{2\pi\omega}}\exp-i(\pm\omega t-k_{i}x^{i})=\sum_{p}[\alpha(p,k_{i})\psi_{p}^{\pm}+\beta(p,k_{i})\psi_{p}^{\mp}]\,.\end{equation}
where $\alpha$ and $\beta$ are the Bogolubov coefficients conventionally
used to calculate the Unruh effect. With the above expansion we obtain
\begin{align}
\int_{-\infty}^{\infty}d\tau & \, e^{i\Delta E\tau}\exp i(\omega T(\tau)+k_{i}X^{i}(\tau))\nonumber \\
\, & =\int_{-\infty}^{\infty}d\tau\int_{-\infty}^{\infty}dp\, e^{i\Delta E\tau}e^{ik_{y}\zeta_{0y}\tau}\left[\alpha(p,k_{i})\,\exp\left(\frac{i\sigma\zeta_{0\eta}}{\kappa\xi_{0}}\tau\right)\right.\nonumber \\
& ~~~~~~~~~~~~~~~~~~~~~~~~~~~\left.+\beta(p,k_{i})\,\exp\left(-\frac{i\sigma\zeta_{0\eta}}{\kappa\xi_{0}}\tau\right)\right]K_{ip}(h\xi_{0})\,\nonumber \\
\, & =\int_{-\infty}^{\infty}dp\,[\alpha(p,k_{i})\,\delta(\Delta E+g(\kappa^{-1}\Gamma\sigma+k_{y}))\nonumber \\
& ~~~~~~~~~~~~~+\beta(p,k_{i})\,\delta(\Delta E-g(\kappa^{-1}\Gamma\sigma-k_{y}))]K_{ip}(h\xi_{0})\,\,.\end{align}
The terms with Bogolubov coefficients $\beta$ are the result of annihilation
operators, which means the absorption of quanta excites the detector
as in the usual Unruh effect. The excitation by negative Killing energy
is expressed by the terms with coefficients $\alpha$ as in the previous
section.
These terms with $\alpha$ again vanish when we choose the surface
normal to the Killing flow of the detector's orbit. Actual calculation
is similar to the one in the previous section. Or, we can obtain the
same result simply by replacing $\kappa^{-1}\Gamma\sigma\pm k_{y}$
with $|\kappa^{-1}\Gamma\sigma\pm k_{y}|$ following the prescription
in Section II. The result shows that the coefficients with $\alpha$ vanish
while those with $\beta$ survive. This means the detector responds
not by the negative Killing energy emission, but by the absorption
of positive Killing energy only, as in the usual Unruh effect. Further
calculation leads to a Doppler-shifted Planckian particle
distribution, as expected.
\section{Concluding Remarks}
In the present paper we have investigated the vacuum observed by a
circularly rotating Unruh-DeWitt detector. The response of a detector
depends on the choice of the surface (three volume) for the Hamiltonian
to define it. Consequently detectors defined on different surfaces
may perceive different states of particles. The particle detection
reported in the past literature is due to the choice of
the surface with constant Minkowski time. A detector will not observe
particles when we define it on a surface normal to the detector's
orbit.
It has been puzzling that a rotating detector observed particles in
a Minkowski vacuum because a global analysis shows the rotating vacuum
is identical to the Minkowski vacuum. Korsbakken and Leinaas \cite{Korsbakken2004}
clarified the reason for this discrepancy. They found the detector
responds to the negative Killing energy wave; the ground state detector
can get excited by the emission of negative Killing energy mode. In
the present paper it was shown that their result is due to the choice
of surface to define the detector; their choice was the surface of
constant Minkowski time. Here in the present paper we introduced a
detector defined on a surface normal to the detector's orbit. It was
found such a detector does not perceive negative Killing energy, and
thus particles are not detected. A similar situation was also found
for an accelerating detector with perpendicular drift.
Using a hypothetical negative mass, we demonstrated how and why negative
Killing energy occurs. When the phase speed of some waves is slower
than the detector, such waves cross the surface of constant
time from the {}``flip side'' of the surface. Consequently, the
energy-momentum flux has the opposite sign, since the sign of the flux is
determined by the direction in which the surface is crossed. The detector sees
negative Killing energy for those waves, and can be excited by the
absorption of negative Killing energy.
A remark should be made that the definition of the Hamiltonian in
the present paper is, in a sense, not rigorous. Precisely speaking,
the surface for a Hamiltonian in field quantization must be everywhere spacelike;
however, the surface we introduced here becomes timelike beyond the
static limit. There are attempts to generalize field quantization
to accommodate such a partially timelike surface (see \cite{Oeckl2006}
and references therein), but discussing them is beyond the scope of the
present paper; we simply assume its validity here. There is also a subtle
point in defining the constant-time surface with the coordinates
(\ref{eq:varphi}), which have a discontinuity at $\theta=2\pi$. We
will leave a detailed examination of this point for future work.
\thanks{ The author is grateful to MANZANA for productive research environment. }
\bibliographystyle{apsrev4-1}
\section{Introduction}
\noindent
According to the prescriptions of the original quark model proposed by
Gell-Mann~\cite{gellmann} and Zweig~\cite{zweig}
in 1964, mesons are comprised of quark-antiquark pairs and baryons are
three-quark triplets. In the 1970's, this simple model was superseded by
Quantum Chromodynamics (QCD), which identified the reason for these rules
was that $q\bar{q}$ pairs and $qqq$ combinations can be color singlet
representations of the color $SU(3)$ group that is fundamental to the theory.
Somewhat surprisingly, the ``mesons are $q\bar{q}$ and baryons are $qqq$''
prescription still adequately describes the hadronic particle spectrum
despite the existence of a number of other color-singlet quark and gluon
combinations that are possible in QCD~\cite{strottman}.
Considerable experimental efforts
at searching for the predicted color-singlet $qqq\bar{q}q$ ``pentaquark''
baryons~\cite{diakonov}
and the doubly strange $udsuds$ $H$-dibaryon~\cite{jaffe_H} have failed to come up with
any unambiguous candidates for either state~\cite{trilling}.
Although a few candidates for non-$q\bar{q}$ light hadron resonances
have been reported~\cite{e852} none have been generally accepted as
established by the hadron physics community~\cite{barnes_1}.
In recent years, however, the situation changed, beginning with the observation of
the $X(3872)$ meson by Belle~\cite{skchoi_x3872}, the
discovery of the $Y(4260)$ meson by BaBar~\cite{babar_y4260}, and the
subsequent observation of a number of other candidate charmonium-like meson states,
the so-called $XYZ$ mesons, that are not well matched to expectations for the
quark-antiquark meson picture~\cite{godfrey_olsen}. Here I give a brief report
on why we think the observed states may be exotic and describe some recent
observations of charged quarkonium-like meson states that
necessarily must have a minimal four-quark structure
by Belle~\cite{skchoi_z4430, mizuk_z4050,bondar_zb}
and BESIII~\cite{bes_z3900}.
In 1977, Jaffe
predicted the existence of
the $H$-dibaryon, a doubly strange,
six-quark structure ($uuddss$) with quantum numbers
$I=0$ and $J^P=0^+$ and a mass $\simeq 80$~MeV
below $2m_{\Lambda}$~\cite{jaffe_H}. An $S=-2$, baryon-number $B=2$
particle with mass below $2m_{\Lambda}$ would decay via weak
interactions and, thus, be long-lived with a lifetime comparable
to that of the $\Lambda$ and negligible natural width.
Jaffe's specific prediction was ruled out by the observation
of double-$\Lambda$ hypernuclei events~\cite{double-lambda,nagara,e176},
especially the famous ``Nagara'' event that has a relatively unambiguous
signature as a $_{\Lambda\Lambda}^6$He hypernucleus produced via
$\Xi^{-}$ capture in emulsion~\cite{nagara}.
The measured $\Lambda\Lambda$ binding energy,
$B_{\Lambda\Lambda}=7.13\pm 0.87$~MeV, establishes, with a 90\% confidence level
(CL), a lower limit of $M_{H}>2223.7$~MeV, severely narrowing the window
for a stable $H$ to the binding energy range
$B_H\equiv 2m_{\Lambda}-M_H < 7.9$~MeV.\footnote{In this report I have
taken the liberty of averaging asymmetric errors
and combining statistical and systematic errors
in quadrature. For actual measured values, please refer
to the original papers.}
Although Jaffe's original prediction for a binding energy of $\simeq 81$ MeV
has been ruled out, the theoretical case for
an $H$-dibaryon with a mass near $2m_{\Lambda}$ continues to be
strong and has been recently strengthened by lattice QCD
calculations (LQCD) by the NPLQCD~\cite{NPLQCD,NPLQCD_2} and
HALQCD~\cite{HALQCD} collaborations that both find
a bound $H$-dibaryon, albeit for non-physical values for
the pion mass.
NPLQCD's linear (quadratic) extrapolation to
the physical pion mass gives $B_H= -0.2\pm 8.0$~MeV
($7.4\pm 6.2$~MeV)~\cite{NPLQCD_2}. Carames and
Valcarce~\cite{Carames} recently studied the $H$ with a
chiral constituent model constrained by $\Lambda N$,
$\Sigma N$, $\Xi N$ and $\Lambda\lm$ cross section data
and find $B_H$ values that are similar to the NPLQCD
extrapolated values.
Numerous experimental searches have been made for an
$H$-dibaryon-like state with mass near (above or below)
the $2m_{\Lambda}$ threshold. Although some hints
of a virtual $\Lambda\lm$ state were reported by a
KEK experiment~\cite{e522}, other searches produced
negative results~\cite{e836,KTeV,e910,H-searches}.
\section{The Quarkonium Spectra}
\noindent
Quarkonium mesons, {\it i.e.,} mesons that contain a $Q$ and $\bar{Q}$ quark pair,
where $Q$ is used to denote either the $c$- or $b$-quark, have proven
to be useful probes for multiquark meson systems. That is because these mesons are well understood;
their constituent quarks are non-relativistic and potential models can be applied. Most of the low-lying
$Q\bar{Q}$ meson states have been discovered and found to have properties that agree
reasonably well with potential model predictions. More complex states would likely have properties that
deviate from model predictions and, thus, be identifiable as such.
Figure~\ref{fig:charmonium} shows a level diagram for the $c\bar{c}$ (``charmonium'') system,
where established states are indicated by solid lines, and the masses predicted by the
Godfrey-Isgur (GI) relativized potential model in 1985~\cite{GI} are shown as dash-dot lines.
All of the states below the $M=2m_D$ open-charmed-meson threshold have been identified and
have masses that agree reasonably well with GI predictions. Moreover, all of
the above-threshold $J^{PC}=1^{--}$ states below $M\simeq 4.45$~GeV have been assigned and,
here too, there is reasonable agreement with predicted masses. In addition to the $1^{--}$
states, the $\chi_{c2}^{\prime}$, the $2^{++}$ radially excited $2\ ^3P_2$ state,
has been assigned~\cite{uehara_chi3929} and Belle recently reported strong evidence
for the $\psi_2$, the $2^{--}$ $1^{3}D_{2}$ state~\cite{belle_psi2}.
Any meson state with prominent decays to final states containing a $c$- and
a $\bar{c}$-quark, that does not fit into one of the remaining unassigned $c\bar{c}$ states
has to be considered exotic.\footnote{
The large value of the $c$-quark mass precludes any substantial production
of $c\bar{c}$ pairs via fragmentation processes.}
\begin{figure}[t]
\begin{minipage}{70mm}
\centering
\includegraphics[height=0.9\textwidth,width=1.1\textwidth]{charmonium1.eps}
\caption{ The charmonium meson spectrum. The solid bars indicate the established
charmonium states and the dash-dot bars indicate the mass levels that were predicted
in 1985.}
\label{fig:charmonium}
\end{minipage}
\hspace{\fill}
\begin{minipage}{70mm}
\centering
\includegraphics[height=0.9\textwidth,width=1.1\textwidth]{bottomonium.eps}
\caption{The bottomonium level diagram. The solid bars indicate well established
states, in addition, candidates for the $h_b(1P)$, $h_b(2P)$ and $\eta_b(2S)$
states have recently been seen. }
\label{fig:bottomonium}
\end{minipage}
\end{figure}
Figure~\ref{fig:bottomonium} shows a level diagram for the $b\bar{b}$ (``bottomonium'')
system. Here all the levels indicated by solid bars are well established. In
addition, the $h_b(1P)$, $h_b(2P)$ and
$\eta_b(2S)$ have recently been reported~\cite{belle_hb,belle_etab2s}, as well as evidence for some
of the (unshown) $D$-wave and $\chi_b (3P)$ states~\cite{dwaves}. Three
$1^{--}$ states above the $2m_B$ open-bottom threshold have been tentatively
identified as the $\Upsilon(4S)$, $\Upsilon(5S)$ and $\Upsilon(6S)$, and
these are their commonly used names. The arrows in Fig.~\ref{fig:bottomonium}
indicate transitions between the states accompanied by either light-hadron
emission (vertical arrows) and $E1$ electromagnetic transitions (diagonal
arrows). Not shown are the $\Upsilon(3S)\rightarrow \gamma \eta_b(1S)$ $M1$-transition
that was used by BaBar to discover the $\eta_b(1S)$~\cite{babar_etab},
the $\Upsilon(5S)\rightarrow\pi^{+}\pi^{-} h_b(1P)$
and $\pi^{+}\pi^{-} h_b(2P)$ transitions used by Belle to discover the $h_b(1P,2P)$
states~\cite{belle_hb},
or the $h_b(2P)\rightarrow \gamma \eta_b(2S)$ $E1$ transition that led to the
discovery of the $\eta_b(2S)$~\cite{belle_etab2s}. With the notable exception of the
$\Upsilon(5S)\rightarrow \pi^{+}\pi^{-} \Upsilon(1S,2S,3S)$ and $\Upsilon(5S)\rightarrow \pi^{+}\pi^{-} h_b(1P,2P)$
transitions, which are anomalously strong and discussed below, all of the other
transitions have measured strengths that are consistent with theoretical expectations.
\section{The $X(3872)$}
\noindent
The first $XYZ$ meson that was observed is the $X(3872)$, which was seen as a
pronounced peak in the $\pipiJ/\psi$ invariant mass spectrum in exclusive
$B^+\rightarrow K^+\pipiJ/\psi$ decays~\cite{skchoi_x3872,conj}. Decays to
$\gammaJ/\psi$~\cite{belle_gammajp,babar_gammajp,belle_gammajp_1},
$\pi^{+}\pi^{-}\pi^0 J/\psi$~\cite{belle_gammajp,babar_3pi}, and $D^0\bar{D^{*0}}$~\cite{x3872_ddbar}
have also been seen. The $\pi^{+}\pi^{-}$ invariant mass distribution
in $X(3872)\rightarrow\pipiJ/\psi$ decays is well described by the hypothesis that the pions
originate from $\rho^0\rightarrow\pi^{+}\pi^{-}$ decays~\cite{CDF_pipi,belle_jpc}. A CDF analysis
of angular correlations among final state particles in $X(3872)\rightarrow\pipiJ/\psi$ ruled
out all possible $J^{PC}$ assignments (for $J\le3$) other than $1^{++}$ and
$2^{-+}$~\cite{CDF_jpc}. A Belle analysis of angular correlations in
$B\rightarrow K X(3872)$; $X(3872)\rightarrow\pipiJ/\psi$ decays found good agreement with the
$1^{++}$ hypotheses with no free parameters; for $2^{-+}$ there is one free
complex parameter and a value for this was found that produces acceptable
agreement with the measured data~\cite{belle_jpc}. Recently, a
comprehensive analysis of the five-dimensional angular correlations
in the $B^+\rightarrow K^+ X(3872)$, $X(3872)\rightarrow\pipiJ/\psi$, $J/\psi\rightarrow\mu^{+}\mu^{-}$ decay
chain conclusively ruled out the $2^{-+}$ assignment and established, once and
for all, that the $J^{PC}$ of the $X(3872)$ is $1^{++}$~\cite{LHCb_jpc}.
The only unassigned $1^{++}$ charmonium level with a predicted mass near 3872~MeV
is the $\chi_{c1}^{\prime}$, the first radial excitation of the $\chi_{c1}$.
The assignment of the $X(3872)$ to this level has some problems.
First, the mass is too low.
Potential models predict the mass of the $\chi_{c1}^{\prime}$ to be
around 3905~MeV, where this is pegged to the measured mass of the
multiplet-partner state $M_{\chi_{c2}^{\prime}}=3929\pm 5$~MeV~\cite{uehara_chi3929}.
If the $\chi_{c1}^{\prime}$ mass is $\simeq$3872~MeV, the
$\chi_{c2}^{\prime}$-$\chi_{c1}^{\prime}$ mass splitting would be
$\simeq 57$~MeV, which is higher than the $\chi_{c2}$-$\chi_{c1}$ mass splitting
of $45.5\pm 1.1 $~MeV. In potential models this splitting decreases with increasing
radial quantum numbers~\cite{splitting}. Second, the decay
$\chi_{c1}^{\prime}\rightarrow \gamma \psi(2S)$
is a favored $E1$ transition and expected to be more than an order-of-magnitude
stronger than the ``hindered'' $E1$ transition $\chi_{c1}^{\prime}\rightarrow\gammaJ/\psi$~\cite{NR}.
The Belle experiment recently
reported a 90\% CL limit on $\Gamma_{X(3872)\rightarrow \gamma \psi(2S)}$ that
is less than $2.1\times\Gamma_{X(3872)\rightarrow \gamma J/\psi }$~\cite{belle_gammajp}
and in contradiction with potential model expectations for the
$X(3872)=\chi_{c1}^{\prime}$ assignment. Third, the transition
$\chi_{c1}^{\prime}\rightarrow\pipiJ/\psi$, the $X(3872)$ discovery mode, violates
isospin and is expected to be strongly suppressed.
Two features of the $X(3872)$ that have attracted considerable attention are its
narrow natural width, $\Gamma_{X(3872)}<1.2$~MeV at the 90\% CL~\cite{belle_jpc},
and its mass, for which (my) world average value is $M_{X(3872)}=3871.67\pm 0.17$~MeV,
which is equal, to about a part in $\sim 10^4$, to the $D^0\bar{D^{*0}}$ mass
threshold: $m_{D^0}+m_{D^{*0}}= 3871.79\pm 0.30$~MeV~\cite{pdg}. The close proximity
of the $M_{X(3872)}$ to the $D^0\bar{D^{*0}}$ threshold has led to speculation that
the $X(3872)$ is a molecule-like $D^0$-$\bar{D^{*0}}$ bound state held together
by nuclear-like $\pi$- and $\omega$-meson exchange forces~\cite{molecule}.
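The two numerical arguments above (the anomalously large $2P$ splitting and the proximity of $M_{X(3872)}$ to the $D^0\bar{D^{*0}}$ threshold) reduce to simple arithmetic on the quoted values; a sketch:

```python
import math

# Masses in MeV, as quoted in the text
m_chic2p = 3929.0                     # measured chi_c2' mass
m_X = 3871.67; err_X = 0.17           # X(3872) world-average mass
split_1P = 45.5                       # measured chi_c2 - chi_c1 splitting

# If X(3872) were the chi_c1', the 2P splitting would exceed the 1P one,
# opposite to the potential-model trend
split_2P = m_chic2p - m_X             # ~57 MeV
assert split_2P > split_1P

# Proximity of M_X to the D0 D*0bar threshold
m_thr = 3871.79; err_thr = 0.30
diff = m_thr - m_X                        # 0.12 MeV
sigma = math.sqrt(err_X**2 + err_thr**2)  # ~0.34 MeV combined uncertainty
assert diff < sigma                       # consistent with zero
assert diff / m_X < 1e-4                  # agreement to better than a part in 1e4
```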
\subsection{The $Y(4260)$}
\noindent
The $Y(4260)$ was seen by BaBar as a peak in the $M(\pipiJ/\psi)$
distribution in the initial-state-radiation (ISR) process
$e^{+}e^{-}\rightarrow\gamma_{ISR}\pipiJ/\psi$~\cite{babar_y4260}, an observation
that was confirmed by CLEO and Belle~\cite{belle_y4260}. Since it
is produced via the ISR process, its $J^{PC}$ must be $1^{--}$. In
contrast to the $X(3872)$, the peak is relatively wide; the
weighted average of the BaBar and Belle peak width measurements is
$\Gamma_{Y(4260)}=99\pm 17$~MeV.
A striking feature of the $Y(4260)$ is that its peak mass is
not near that of any of the established $1^{--}$ charmonium states. Moreover,
since all $1^{--}$ charmonium states with mass below $4.45$~GeV have
been identified, the $Y(4260)$ cannot be a standard $c\bar{c}$ meson.
Moreover, it does not seem to have a strong coupling to open-charm
mesons; measurements of $e^{+}e^{-}$ annihilation into charmed mesons in the
vicinity of $\sqrt{s}\sim 4260$~MeV show indications of a dip in the
total cross section at the location of the $Y(4260)$
peak~\cite{bes2_rscan}. This motivated a detailed analysis~\cite{xhmo}
that established a lower limit on the partial width
$\Gamma_{Y(4260)\rightarrow\pipiJ/\psi}$ that is greater than 1~MeV,
which is huge for charmonium. The Belle group did a
comprehensive search for $Y(4260)$ decays to all possible final
states containing open charmed meson pairs and found no sign of
a $Y(4260)$ signal in any of them~\cite{galina}. Thus, it seems likely that
the $\Gamma_{Y(4260)\rightarrow\pipiJ/\psi}$ is
substantially greater than 1~MeV. If this is the case, it
would be a strong indication that some new, previously unanticipated,
mechanism is involved.
Subsequent studies of the $e^{+}e^{-}\rightarrow\gamma_{ISR}\pi^{+}\pi^{-}\psi(2S) $ ISR
process led to discoveries of states with similar characteristics
decaying to the $\pi^{+}\pi^{-}\psi(2S)$ final state: the $Y(4360)$ by
BaBar~\cite{babar_y4360}, and the $Y(4660)$ by
Belle~\cite{belle_y4660}. There is no evidence for open-charmed
meson decays for either of these states. Moreover, there is
no sign of them in the $\pipiJ/\psi$ spectrum, nor is there
evidence for $Y(4260)\rightarrow \pi^{+}\pi^{-}\psi(2S)$.
\section{Searches in the $b$-quark sector}
\noindent
The existence of the $Y(4260)$ and other $1^{--}$ hidden charm states
with large partial widths to $\pipiJ/\psi$ and $\pi^{+}\pi^{-}\psi(2S) $
led to speculation that there may be counterparts in the $b$-quark
sector~\cite{wshou}. This prompted a Belle measurement of the
partial widths for $\Upsilon(5S)\rightarrow\pi^{+}\pi^{-}\Upsilon(nS)$
($n=1,2,3$). The expected branching fraction for these
decays~\cite{wshou} is $\sim 10^{-5}$
and, with the data sample that was available at the time, the
expectation was that no signal would be seen.
(The measured
branching fractions for the nearby $\Upsilon(4S)$ to
$\pi^{+}\pi^{-}\Upsilon(nS)$ are less than $10^{-4}$~\cite{pdg}.) Rather remarkably,
very strong signals were observed for all three decays modes, with
branching fractions of nearly one percent --- more than
two orders of magnitude larger than expectations~\cite{kfchen_1}.
In an attempt to determine whether or
not the anomalous events were coming from decays of $\Upsilon(5S)$
or from some other, $b$-quark sector equivalent of the $Y(4260)$ lurking nearby, Belle
did a cross section scan of $e^{+}e^{-} \rightarrow $~hadrons and
$e^{+}e^{-}\rightarrow\pi^{+}\pi^{-}\Upsilon(nS)$~\cite{kfchen_2}. This scan showed
some indication that the $e^{+}e^{-}\rightarrow\pi^{+}\pi^{-}\Upsilon(nS)$ yield peaks
at a mass distinct from that for $\Upsilon(5S)\rightarrow$~hadrons
but with limited statistical significance
($10888\pm 3$~MeV for the three $\pi^{+}\pi^{-}\Upsilon(nS)$
channels {\it vs.} $10879\pm 3$~MeV for inclusive hadrons).
\subsection{Study of inclusive $\Upsilon(5S)\rightarrow\pi^{+}\pi^{-}$ plus {\rm anything}}
\noindent
Motivated by the curious phenomena described in the preceding section,
Belle made a study of the inclusive process
$\Upsilon(5S)\rightarrow\pi^{+}\pi^{-}$~plus~{\it anything}~\cite{belle_hb}.
Figure~\ref{fig:ups5S_incl} shows the missing mass recoiling against
every $\pi^{+}\pi^{-}$ pair in events in a 121~fb$^{-1}$ data sample collected at an $e^{+}e^{-}$
c.m. energy in the vicinity of the $\Upsilon(5S)$ resonance.
In this plot there are a huge number of entries, on the order of a million
in each of the 1~MeV bins; the relative statistical error on each point is of order $0.1\%$.
The distribution is fitted piecewise to a polynomial background shape
plus signal peaks for all of the bottomonium states (and reflections)
that are expected to be produced via this process.
\begin{wrapfigure}{l}{6.6cm}
\centerline{\includegraphics[width=6.6 cm,height=5.5 cm]
{Y5S_2_pipi_incl.eps}}
\caption{The mass of the system recoiling against the
$\pi^+$ and $\pi^-$ in inclusive $\Upsilon(5S)\rightarrow\pi^{+}\pi^{-}$~X
decays. The dashed lines indicate the positions
of the $\Upsilon(1S)$, $h_b(1P)$, $\Upsilon(2S)$, $h_b(2P)$ and
$\Upsilon(3S)$.}
\label{fig:ups5S_incl}
\end{wrapfigure}
Figure~\ref{fig:h_bnP} shows the results of the fit
with the background component subtracted. There, in addition
to peaks corresponding to $\pi^{+}\pi^{-}\Upsilon(nS)$ ($n=1,2,3$) and
reflections from the ISR processes $e^{+}e^{-}\rightarrow\gamma_{ISR}\Upsilon(mS)$,
$\Upsilon(mS)\rightarrow\pi^{+}\pi^{-}\Upsilon(1S)$ ($m=1, 2$), are distinct
signals for $\Upsilon(5S)\rightarrow\pi^{+}\pi^{-} h_b(1P)$ and $\pi^{+}\pi^{-} h_b(2P)$
with $ 5.5\sigma$ and $11.2\sigma$ significance, respectively, and a hint
of $\Upsilon(5S)\rightarrow\pi^{+}\pi^{-}\Upsilon(1D)$.
This is the first observation of the $h_b(1P)$ and $h_b(2P)$ bottomonium
states. The prominent $h_b$ signals -- similar in strength to
the $\Upsilon(nS)$ signals -- are somewhat surprising because the
$\Upsilon(5S)\rightarrow\pi^{+}\pi^{-} h_b$ process requires a $b$-quark spin-flip
and is expected to be suppressed.
\begin{figure}[b]
\centerline{\includegraphics[width=9 cm, height=5.5 cm]
{h_b1P_and_h_b2P.eps}}
\caption{The background-subtracted recoil mass distribution
with the signal component from the fit superimposed. The
vertical lines indicate the boundaries used for the piecewise
fit.}
\label{fig:h_bnP}
\end{figure}
\subsection{$M(\pi^{\pm}h_b(mP))$ distributions}
\noindent
The huge number of events in the $h_b(1P)$ and $h_b(2P)$ signal
peaks in Fig.~\ref{fig:h_bnP} ($\simeq50$\ K and $\simeq 84$\ K
events, respectively) permitted an investigation of the
resonant substructure in $\Upsilon(5S)\rightarrow \pi^{+}\pi^{-} h_b(mP)$ decays~\cite{bondar_zb}.
Figure~\ref{fig:h_bpi}(a) shows the $h_b(1P)$ yield
as a function of the $\pi^{\pm}h_b(1P)$ mass, determined from fits to the
$h_b$ signal in the $\pi^{+}\pi^{-}$ recoil mass spectrum in bins of
the mass recoiling against one of
the pions. Figure~\ref{fig:h_bpi}(b) shows the corresponding
$\pi^{\pm}h_b(2P)$ mass distribution.
\begin{figure}[t]
\begin{minipage}{70mm}
\centering
\includegraphics[height=0.6\textwidth,width=0.8\textwidth]{hb1_vs_mmp_fit.eps}
\end{minipage}
\hspace{\fill}
\begin{minipage}{70mm}
\centering
\includegraphics[height=0.6\textwidth,width=0.8\textwidth]{hb2_vs_mmp_fit.eps}
\end{minipage}
\caption{(a) $h_b(1P)$ and (b) $h_b(2P)$ yields
{\it vs.} $M_{\rm miss}(\pi)$. The histograms are the fit results.}
\label{fig:h_bpi}
\end{figure}
As is evident from the figures, the $h_b(1P)$ and $h_b(2P)$ signals
are entirely due to two structures in $M(\pi^{\pm} h_b(mP))$, one with
peak mass near $10610$~MeV and the other
with peak mass near $10650$~MeV. In the
following, these structures are referred to as the
$Z_b(10610)$ and $Z_b(10650)$, respectively. The
histograms in each figure show the results of fits to the mass
spectra using two Breit-Wigner (BW) amplitudes to represent the $Z_b$
peaks plus a phase-space component. The fitted results for the BW
parameters for the two $Z_b$ peaks, which are consistent with being the
same for both decay channels, are listed below in Table~\ref{tbl:fits}.
For both spectra, the
fitted strengths of the phase space term are consistent with being zero.
\subsection{$M(\pi^{\pm}\Upsilon(nS))$ distributions in
$\Upsilon(5S)\rightarrow\pi^{+}\pi^{-}\Upsilon(nS)$ decays}
\begin{wrapfigure}{r}{6.6cm}
\centerline{\includegraphics[width=6.0 cm,height=5.0 cm]
{y2spp-s-dp.eps}}
\caption{$M^2(\Upsilon(2S)\pi)$ {\it vs.}
$M^2(\pi^{+}\pi^{-})$ Dalitz plot for
$\Upsilon(5S)\rightarrow\pi^{+}\pi^{-}\Upsilon(2S)$ decays.}
\label{fig:ups2S-pipi-dp}
\end{wrapfigure}
\noindent
Belle also made an investigation of possible resonant
substructure in fully reconstructed
$\Upsilon(5S)\rightarrow\pi^{+}\pi^{-}\Upsilon(nS)$ decays ($n=1,2,3$)~\cite{bondar_zb}.
Figure~\ref{fig:ups2S-pipi-dp} shows the
$M^2(\Upsilon(2S)\pi)$ (vertical) {\it vs.}
$M^2(\pi^{+}\pi^{-})$ (horizontal) Dalitz plot for
$\Upsilon(5S)\rightarrow\pi^{+}\pi^{-}\Upsilon(2S)$ decays.
Here, to avoid double counting, only the
highest mass $\Upsilon(2S)\pi$ combination
is plotted. In the figure there is
a sharp vertical band at small $\pi^{+}\pi^{-}$ masses
caused by background from converted photons,
and two distinct horizontal
clusters near $M^2(\Upsilon(2S)\pi)=112.6$~GeV$^2$
and $113.3$~GeV$^2$, near the locations
expected for the $Z_b(10610)$ and $Z_b(10650)$.
The $\pi^{+}\pi^{-}\Upsilon(1S)$ and $\pi^{+}\pi^{-}\Upsilon(3S)$
Dalitz plots show similar structures.
The Dalitz plots are fitted with a model that includes BW
amplitudes to represent the two $Z_b$ states, terms that
account for possible contributions in the $\pi^{+}\pi^{-}$ system
from the $f_0(980)$ and $f_2(1270)$ resonances, and a
non-resonant amplitude using a form suggested by
Voloshin~\cite{voloshin}. The regions of the Dalitz plots
contaminated by photon conversion background ({\it i.e.} to the left
of the vertical line in Fig.~\ref{fig:ups2S-pipi-dp}) are
excluded from the fits.
$M(\Upsilon(nS)\pi)$ and $M(\pi^{+}\pi^{-})$
projections with the results of the fits superimposed are
shown in Fig.~\ref{fig:dp-fits} and included in Table~\ref{tbl:fits}.
The $Z_b(10610)$ and $Z_b(10650)$ mass and width measurements from the
five different channels agree within their errors. The averages of the
five mass and width measurements for the $Z_b(10610)$ are
$M=10607.2\pm 2.0$~MeV and $\Gamma= 18.4\pm 2.4$~MeV; for the
$Z_b(10650)$, the averages are
$M=10652.2\pm 1.5$~MeV and $\Gamma= 11.5\pm 2.2$~MeV. These
are very near the $m_B + m_{B^*}=10604.3\pm 0.6$~MeV and
$2m_{B^*}=10650.2\pm 1.0$~MeV\cite{pdg} mass thresholds, respectively,
which is suggestive of virtual molecule-like structures~\cite{bondar},
although other interpretations have been proposed~\cite{other_zb}.
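This proximity can be restated numerically using only the central values and uncertainties quoted above (a sketch; errors are treated as uncorrelated, which is an assumption here):

```python
import math

# Averaged Z_b masses and the quoted thresholds, all in MeV (numbers from the text).
m_zb1, e_zb1 = 10607.2, 2.0
m_zb2, e_zb2 = 10652.2, 1.5
thr1, e_thr1 = 10604.3, 0.6   # m_B + m_B* threshold
thr2, e_thr2 = 10650.2, 1.0   # 2 m_B* threshold

d1 = m_zb1 - thr1                      # 2.9 MeV above the B Bbar* threshold
d2 = m_zb2 - thr2                      # 2.0 MeV above the B* Bbar* threshold
s1 = d1 / math.hypot(e_zb1, e_thr1)    # roughly 1.4 sigma
s2 = d2 / math.hypot(e_zb2, e_thr2)    # roughly 1.1 sigma
```

In other words, neither central value is significantly displaced from its nearby threshold, consistent with the molecule-like interpretation mentioned in the text.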
\begin{figure}[h]
\begin{minipage}{70mm}
\centering
\includegraphics[height=0.6\textwidth,width=0.8\textwidth]{dpfit-a.eps}
\end{minipage}
\hspace{\fill}
\begin{minipage}{70mm}
\centering
\includegraphics[height=0.6\textwidth,width=0.8\textwidth]{dpfit-b.eps}
\end{minipage}
\begin{minipage}{70mm}
\centering
\includegraphics[height=0.6\textwidth,width=0.8\textwidth]{dpfit-c.eps}
\end{minipage}
\hspace{\fill}
\begin{minipage}{70mm}
\centering
\includegraphics[height=0.6\textwidth,width=0.8\textwidth]{dpfit-d.eps}
\end{minipage}
\begin{minipage}{70mm}
\centering
\includegraphics[height=0.6\textwidth,width=0.8\textwidth]{dpfit-e.eps}
\end{minipage}
\hspace{\fill}
\begin{minipage}{70mm}
\centering
\includegraphics[height=0.6\textwidth,width=0.8\textwidth]{dpfit-f.eps}
\end{minipage}
\caption{$M(\Upsilon(nS)\pi)$ and $M(\pi^{+}\pi^{-})$ projections with fit results superimposed
for the $\Upsilon(1S)$ (a,b), $\Upsilon(2S)$ (c,d) and $\Upsilon(3S)$ (e,f) signals. The hatched
histograms are sideband-determined backgrounds.}
\label{fig:dp-fits}
\end{figure}
\begin{table}[h!]
\caption{Results for the $Z_b(10610)$ and $Z_b(10650)$ parameters
obtained from $\Upsilon(5S)\rightarrow\pi^{+}\pi^{-}\Upsilon(nS)$ ($n=1,2,3$) and
$\Upsilon(5S)\rightarrow h_b(mP)\pi^{+}\pi^{-}$ ($m=1,2$) analyses. }
\medskip
\label{tbl:fits}
\centering
\begin{tabular}{lccccc} \hline \hline
Final state & $\Upsilon(1S)\pi^+\pi^-$ &
$\Upsilon(2S)\pi^+\pi^-$ &
$\Upsilon(3S)\pi^+\pi^-$ &
$h_b(1P)\pi^+\pi^-$ &
$h_b(2P)\pi^+\pi^-$ \\ \hline
$M[Z_b(10610)]$, MeV &
$10611\pm4\pm3$ &
$10609\pm2\pm3$ &
$10608\pm2\pm3$ &
$10605\pm2^{+3}_{-1}$ &
$10599{^{+6+5}_{-3-4}}$
\\
$\Gamma[Z_b(10610)]$, MeV &
$22.3\pm7.7^{+3.0}_{-4.0}$ &
$24.2\pm3.1^{+2.0}_{-3.0}$ &
$17.6\pm3.0\pm3.0$ &
$11.4\,^{+4.5+2.1}_{-3.9-1.2}$ &
$13\,^{+10+9}_{-8-7}$
\\
$M[Z_b(10650)]$, MeV &
$10657\pm6\pm3$ &
$10651\pm2\pm3$ &
$10652\pm1\pm2$ &
$10654\pm3\,{^{+1}_{-2}}$ &
$10651{^{+2+3}_{-3-2}}$
\\
$\Gamma[Z_b(10650)]$, MeV &
$16.3\pm9.8^{+6.0}_{-2.0}$~ &
$13.3\pm3.3^{+4.0}_{-3.0}$ &
$8.4\pm2.0\pm2.0$ &
$20.9\,^{+5.4+2.1}_{-4.7-5.7}$ &
$19\pm7\,^{+11}_{-7}$
\\
\hline \hline
\end{tabular}
\end{table}
\subsection{The transitions $h_b(1P,2P)\rightarrow \gamma\eta_b(1S,2S)$ and the discovery of the $\eta_b(2S)$}
\noindent
In studies of bottomonium physics, the $\Upsilon(1S)$-$\eta_b(1S)$ mass difference has special importance
since it determines the scale of the spin-spin hyperfine interaction term in the $b\bar{b}$ potential.
This is accessible to Lattice QCD calculations, which give values that range from
$\Delta_{\rm hfs}(1S)=47$~MeV to $59$~MeV~\cite{meinel}.
The $\eta_b(1S)$ was discovered by the BaBar collaboration in the $M1$ radiative
process $\Upsilon(3S)\rightarrow \gamma \eta_b(1S)$~\cite{babar_etab}. BaBar produced the first measurement
of the splitting to be $\Delta_{\rm hfs}(1S)=71.4\pm 4.1$~MeV, which is outside the theoretical range. This
measurement was an experimental {\it tour-de-force} because the $\Upsilon(3S)\rightarrow\gamma\eta_b$
environment is very difficult, with a weak signal and substantial backgrounds that make the
extraction of a precise mass value difficult.
\begin{wrapfigure}{r}{5.6cm}
\centerline{\includegraphics[width=5.0 cm,height=12.0 cm]
{hb_2_etab.eps}}
\caption{{\bf a)} The $h_b(1P)$ yield {\it vs.} $M_{\rm miss}(\pi^{+}\pi^{-}\gamma)$
and {\bf b)} the $h_b(2P)$ yield {\it vs.} $M_{\rm miss}(\pi^{+}\pi^{-}\gamma)$
in the $\eta_b(1S)$ mass region. {\bf c)} The $h_b(2P)$ yield {\it vs.} $M_{\rm miss}(\pi^{+}\pi^{-}\gamma)$
in the $\eta_b(2S)$ mass region.}
\label{fig:hb_2_etab}
\end{wrapfigure}
The Belle observation of strong signals for the $h_b(1P)$
and $h_b(2P)$ in inclusive $\Upsilon(5S)\rightarrow \pi^{+}\pi^{-} X$ decays provides another way to access
the $\eta_b(1S)$, and that is via the $E1$ transitions, $h_b(1P,2P)\rightarrow \gamma\eta_b(1S)$.
Figures~\ref{fig:hb_2_etab}(a) and (b) show the $h_b(1P)$ and $h_b(2P)$ signal yields determined from
fitting the $\pi^{+}\pi^{-}$-recoil mass spectra, but this time in bins
of $\pi^{+}\pi^{-}\gamma$ missing mass. In this measurement, $\gamma$s that are
not associated with a $\pi^0\rightarrow\gamma\gamma$ decay are combined with the $\pi^{+}\pi^{-}$ pairs
to determine $M_{\rm miss}(\pi^{+}\pi^{-}\gamma)$, which is plotted on the horizontal axis~\cite{belle_etab}.
Distinct peaks near $9.4$~GeV corresponding to the $\eta_b(1S)$
are evident in both distributions. These data are used to determine the hyperfine splitting
$\Delta_{\rm hfs}(1S)=57.9\pm 2.3$~MeV and total width $\Gamma_{\eta_b(1S)}=10.8\pm 5.8$~MeV. This
measurement of $\Delta_{\rm hfs}(1S)$ has improved precision and
has a central value that is about $2.9\sigma$ lower
than the BaBar measurement and within the range of theoretical expectations.
Figure~\ref{fig:hb_2_etab}(c) shows the $h_b(2P)$ signal yield determined from
fitting the $\pi^{+}\pi^{-}$ recoil spectrum in $\pi^{+}\pi^{-}\gamma$ bins in the mass range expected for
the $\eta_b(2S)$, where a prominent peak can be seen near $10$~GeV. Belle identifies this
as the first observation of the $\eta_b(2S)$ and measures $\Delta_{\rm hfs}(2S)=24.3\pm 4.3$~MeV.
As mentioned above, the LQCD calculations of $\Delta_{\rm hfs}(nS)$ produce a range of values that
reflect the different approximations that are necessary for manageable lattice calculations.
On the other hand, in ratios of the splittings between different radial states, many of
these uncertainties cancel. Thus, at least for the time being, measurements of these ratios
present the strongest challenges for theory. The Belle measurement of the ratio
$\Delta_{\rm hfs}(2S)/\Delta_{\rm hfs}(1S) = 0.42\pm0.08$ is in agreement with a LQCD
prediction of $0.40\pm 0.06$~\cite{meinel}.
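The quoted ratio and its uncertainty follow directly from the two splittings above (a sketch assuming uncorrelated errors, which is our assumption rather than a statement from the analysis):

```python
import math

# Hyperfine splittings in MeV, as quoted in the text (Belle measurements).
d1s, e1s = 57.9, 2.3   # Delta_hfs(1S)
d2s, e2s = 24.3, 4.3   # Delta_hfs(2S)

ratio = d2s / d1s                                  # ~0.42
err = ratio * math.hypot(e2s / d2s, e1s / d1s)     # ~0.08, uncorrelated errors
```

Reproducing $0.42\pm0.08$, in agreement with the LQCD prediction of $0.40\pm0.06$.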
\section{Search for the $H$-dibaryon in $\Upsilon(1S)$ and $\Upsilon(2S)$ decays.}
\noindent
As mentioned in the introduction above,
recent theoretical results motivate searches for the $H$
with mass near the $M_H = 2m_{\Lambda}$ threshold. This mass
region is especially interesting, because very general theoretical
arguments ensure that for masses
approaching the $2m_{\Lambda}$ threshold from below, the
$H$ would behave more and more like a $\Lambda\lm$
analog of the deuteron, and for masses approaching $2m_{\Lambda}$ from
above, the $H$ would look more and more like a virtual dineutron resonance,
independently of its dynamical origin~\cite{braaten}. If its mass is below
$2m_{\Lambda}$, the $H$ would predominantly decay via
$\Delta S=+1$ weak transitions to $\Lambda n$,
$\Sigma^- p$, $\Sigma^0 n$ or $\Lambda p \pi^-$ final states.
If its mass is above $2m_{\Lambda}$, but below
$m_{\Xi^0} + m_n~(=2m_{\Lambda}+ 23.1$~MeV), the $H$ would decay
via strong interactions to $\Lambda\Lambda$ 100\% of the time.
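The width of the strong-decay window can be checked directly (a sketch using standard PDG baryon masses, which are not taken from this report):

```python
# Decay-window arithmetic with standard PDG baryon masses in MeV
# (assumed values; see PDG listings for current numbers).
m_lambda = 1115.683
m_xi0 = 1314.86
m_n = 939.565

thr_low = 2 * m_lambda        # Lambda Lambda threshold
thr_high = m_xi0 + m_n        # Xi^0 n threshold
window = thr_high - thr_low   # ~23.1 MeV wide strong-decay window
```

Between these thresholds the only open strong channel is $\Lambda\Lambda$, as stated above.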
Decays of narrow $\Upsilon(nS)$ ($n=1,2,3$) bottomonium
($b\bar{b}$) resonances are
particularly well suited for searches for deuteron-like
multiquark states with
non-zero strangeness. The $\Upsilon(nS)$ states are flavor-$SU(3)$
singlets and primarily decay via the
three-gluon annihilation process ({\it e.g.},
$Bf(\Upsilon(1S)\rightarrow ggg) = 81.7\pm 0.7$\%~\cite{pdg}).
The gluons materialize into $u\bar{u}$, $d\bar{d}$ and $s\bar{s}$
pairs in roughly equal numbers.
The high density of quarks and antiquarks in the limited final-state phase
space is conducive to the production of
multi-quark systems, as demonstrated by large branching
fractions for inclusive antideuteron ($\bar{d}$) production:
$Bf(\Upsilon(1S)\rightarrow \bar{d}\, X)=
(2.9\pm 0.3)\times 10^{-5}$ and
$Bf(\Upsilon(2S)\rightarrow \bar{d}\, X)=
(3.4\pm 0.6)\times 10^{-5}$~\cite{CLEO_dbar}. An upper limit for
the production of a six-quark $S=-2$ state in $\Upsilon(nS)$ decays that is
substantially below that for the six-quark antideuteron would
be strong evidence against its existence.
Belle recently completed a search for
$H$-dibaryon production in the inclusive decay chains
$\Upsilon(1S,2S) \rightarrow H \, X$; $H\rightarrow \Lambda p \pi^-$ and
$H\rightarrow \Lambda\Lambda$~\cite{belle_H},
using data samples containing 102 million $\Upsilon(1S)$ and
158 million $\Upsilon(2S)$ decays. The search strategy
assumed equal $\Upsilon(1S)$ and $\Upsilon(2S)$ branching fractions:
{\it i.e.,} $Bf(\Upsilon(1S)\rightarrow H\, X)=Bf(\Upsilon(2S)\rightarrow H\, X)
\equiv Bf(\Upsilon(1S,2S)\rightarrow H\, X)$.
The resulting continuum-subtracted $M(\Lambda p \pi^-)$ ($M(\bar{\Lambda} \bar{p} \pi^+)$)
distribution for the combined $\Upsilon(1S)$ and $\Upsilon(2S)$ samples,
shown in the top (bottom) left-hand panels of Fig.~\ref{fig:data-lppi}, has no evident
$H\rightarrow\Lambda p \pi^-$ ($\bar{H}\rightarrow\bar{\Lambda} \bar{p} \pi^+$) signal. The curve in the figure is the result of a fit using
a threshold function to model the background; fit residuals are also shown.
The dashed curves in the figures show the expected $H$ signal for a $\Upsilon(1S,2S)\rightarrow H X$ branching
fraction that is $1/20^{\rm th}$ that for antideuterons.
\begin{figure}[htb]
\begin{minipage}{70mm}
\centering
\includegraphics[height=0.6\textwidth,width=0.8\textwidth]{lppimode_lppionly_bg_1Mevbin_withsignal.eps}
\end{minipage}
\hspace{\fill}
\begin{minipage}{70mm}
\centering
\includegraphics[height=0.6\textwidth,width=0.8\textwidth]{llmode_llonly_bg_1Mevbin_withsignal.eps}
\end{minipage}
\begin{minipage}{70mm}
\centering
\includegraphics[height=0.6\textwidth,width=0.8\textwidth]{lppimode_alppionly_bg_1Mevbin_withsignal.eps}
\end{minipage}
\hspace{\fill}
\begin{minipage}{70mm}
\centering
\includegraphics[height=0.6\textwidth,width=0.8\textwidth]{llmode_allonly_bg_1Mevbin_withsignal.eps}
\end{minipage}
\caption{
{\bf Top:} The continuum-subtracted $M(\Lambda p \pi^-)$ (left) and $M(\Lambda\lm)$ (right) distributions
with the residuals from a background-only fit shown below. Here the $\Upsilon(1S)$ and $\Upsilon(2S)$ data samples
are combined. The curve shows the
results of the background-only fit described in the text. The dashed curve
shows the expected $H$ signal for a $\Upsilon(1S,2S)\rightarrow H X$ branching
fraction that is $1/20^{\rm th}$ that for antideuterons.
{\bf Bottom:} The corresponding $M(\bar{\Lambda} \bar{p} \pi^+ ) $ (left) and $M(\bar{\Lambda}\lmb)$ distributions.
}
\label{fig:data-lppi}
\end{figure}
The panels on the right of Fig.~\ref{fig:data-lppi} show the $M(\Lambda\lm)$ (above)
and $M(\bar{\Lambda}\lmb)$ (below) distributions for events that satisfy
the selection requirements. Here there is
no sign of a near-threshold enhancement similar to that reported by the
E522 collaboration~\cite{e522} nor any other evident signal for
$H\rightarrow\Lambda\lm$ ($\bar{H}\rightarrow\bar{\Lambda}\lmb$). The curve is the result of a
background-only fit using the functional form described above; fit
residuals are also shown. Expectations for a signal branching fraction
that is $1/20^{\rm th}$ that for the antideuterons is indicated with a dashed curve.
\begin{figure}[htb]
\begin{center}
\includegraphics[height=0.25\textwidth,width=0.5\textwidth]
{likelihood_newlum_sigma.eps}
\end{center}
\caption{
Upper limits (at 90\% CL) for
$Bf(\Upsilon(1S,2S)\rightarrow H~X)\cdot Bf(H \rightarrow f_i)$
for a narrow ($\Gamma=0$) $H$-dibaryon {\it vs.} $M_H-2m_{\Lambda}$ are shown as
solid horizontal bars. The one (two) sigma bands are shown as the
darker (lighter) bands.
The vertical dotted line indicates the $M_H=2m_{\Lambda}$ threshold. The
limits below (above) the $2m_{\Lambda}$ threshold are for $f_1= \Lambda p \pi^-$
($f_2=\Lambda\lm$). The horizontal dotted line indicates the
average PDG value for $Bf(\Upsilon(1S,2S)\rightarrow \bar{d}~X)$.
}
\label{fig:limits}
\end{figure}
In the absence of any sign of an $H$-dibaryon in either the
$\Lambda p\pi$ or the $\Lambda\Lambda$ mode, we set the 90\% CL
($M_H - 2m_{\Lambda}$)-dependent branching fraction upper limits for the $\Lambda p \pi^-$ and
$\Lambda\lm$ (for $\Gamma=0$) modes shown in
Figure~\ref{fig:limits}. These limits are all more than an order of magnitude
lower than the average of measured values of
$Bf(\Upsilon(1S,2S)\rightarrow \bar{d}~X)$, shown
in Fig.~\ref{fig:limits} as a horizontal dotted line.
These new Belle results are some of the most stringent constraints to date on
the existence of an $H$-dibaryon with mass near the $2m_{\Lambda}$ threshold~\cite{limits}.
Since $\Upsilon\rightarrow\ $hadrons decays produce final states that are flavor-$SU(3)$
symmetric, this suggests that if an $H$-dibaryon exists in this mass
range, it must have very different dynamical properties than the deuteron,
or, in the case of $M_H<2m_{\Lambda}$, a strongly suppressed $H\rightarrow \Lambda p \pi^-$ decay mode.
\section{Comments}
\noindent
After years of theoretical and experimental work, a large assortment of particles,
the $XYZ$ mesons, has been found that cannot be accommodated by the standard picture
of mesons as quark-antiquark pairs. There are now
almost twenty candidates, a number that continues to grow rapidly.
Many of these new mesons are close to particle-antiparticle thresholds and look
very much like molecular structures of color-singlet mesons; however, others
are far from thresholds, which makes molecular assignments less compelling.
One common feature of these states is their strong decays to hidden-flavor quarkonium states.
In cases where partial width measurements have been possible, the results are
usually much larger than those measured for conventional quarkonium states.
Likewise, decays to open-flavor states seem to be suppressed compared to
those for quarkonium mesons.
Few of the observed states were predicted in advance by theorists, while
some predicted states, such as charged partners of the $X(3872)$,
have been searched for but not seen~\cite{belle_jpc}. Moreover, none
of the new particles make compelling matches to any of the states
that are predicted by the QCD-motivated models that theorists seem
to really like. The experimental limits on pentaquarks and the
$H$-dibaryon keep getting more stringent with no compelling signs
for either of them. Attempts have been made to attribute some of
the $XYZ$ states to diquark-diantiquark color bound states~\cite{maiani};
however, these models predict that these structures should form
flavor-$SU(3)$ multiplets and, so far at least, no multiplet
partners of the observed states have been found.
This remains very much an experiment-dominated field of research.
Hopefully as the list of $XYZ$ states expands and the properties of
the established states are better known, some pattern will emerge that
will allow someone to make sense of it all.
\section{Acknowledgements}
\noindent
I thank the organizers of this meeting for inviting me to present
these results. In addition I compliment them on their well organized and
interesting meeting.
This work is supported by the Korean National Research Foundation
via Grant No. 2011-0029457 and WCU Grant No. R32-10155.
\section{\bf Introduction} In varying dimensions the $t$-$J$ model continues to
attract attention owing to its relevance in cuprates and other important
strongly interacting electronic systems. The model embodies very strong
correlations, which lie outside the regime of validity of perturbation
theory, and thus pose a challenging problem. Our main goal in this work is to obtain an understanding of the properties in
one dimension (1-d), {\em over a wide energy range}.
At low energies
the bosonization technique has been widely applied to the
(closely related) Hubbard model \cite{Giamarchi,1dhubbard,bosonization,Meden,CI}.
For large U several non-perturbative methods have been devised to study the
$t$-$J$ model for general dimensions, including the study of finite clusters
\cite{ed,Prelovsek} and large-N based slave particle mean-field theories
\cite{Slave-boson}. In 1-d we also have exact results using Bethe's ansatz
\cite{BA1,BA2,BA3,BA4,BA5,BA6} at special values of the parameters of the
model, and also for long-ranged versions \cite{LRtJ} of the $t$-$J$ model, using
techniques developed in the Haldane-Shastry models. Photoemission experiments \cite{Dardel} have been carried
out to study the spectral properties of several quasi 1-d metals, relevant to the $t$-$J$model.
To study a wider energy range, including the low to intermediate and high energy regimes, we employ and compare the results from two complementary techniques.
In 1-d, the density matrix renormalization group (DMRG) \cite{DMRG}
provides nearly exact results for the ground state, and can also be used
for finite temperature and spectral properties.
Ground state DMRG has been used to give the phase diagram of the $t$-$J$ model over a broad range of parameters
in \cite{DMRG2}.
Here we study dynamics using the time dependent density
matrix renormalization group (tDMRG). tDMRG \cite{DMRG,tDMRG} has been used to
obtain virtually exact spectral functions for spin chains, but only a few times
for doped Fermi systems. One example is a tDMRG treatment of the $t$-$J$ model, which obtained
spectral functions for the system at finite temperature \cite{DMRG1}.
In this work we use tDMRG only at $T=0$, but we have pushed much farther in terms of system size,
accuracy, and frequency resolution than in \cite{DMRG1}. This accuracy is needed to resolve the detailed
features of the self-energy, which has not been done before with tDMRG.
The other technique used is the extremely correlated
Fermi liquid (ECFL) theory \cite{ECFL}. This analytical theory, which can treat a large
class of large $U$ problems, including the $t$-$J$model, uses Schwinger's functional
differential equations for the electron Green's function. These equations are
systematically expanded in a parameter $\lambda \in [0,1]$, representing
partial Gutzwiller projection. The ${\cal O}(\lambda^2)$ theory leads to a
closed set of coupled equations \cite{ECFL,Pathintegrals} for the Green's
function. This treatment has been benchmarked in high dimensions and in 2-d.
In infinite dimensions, dynamical mean field theory (DMFT) \cite{infinited}
provides a solution to the Hubbard model, and ECFL has been
benchmarked recently \cite{Sriram-Edward,WXD} against
exact results from the single impurity Anderson model, and DMFT in $d=\infty$
\cite{badmetal,HFL}. The limiting case $U=\infty$ has been explored in detail in \cite{ECFL-DMFT}.
The agreement at low energies is good enough to yield
accurate results for the low T resistivity, a highly sensitive variable.
In 2-d, ECFL has been applied recently to cuprate superconductors \cite{SP,PS}.
It is therefore interesting to
see how well this scheme deals with the physics of 1-d. The equations used here have the character of a skeleton graph series. We have checked that the second order skeleton graphs for the Hubbard model in 1-d already display characteristics of spin-charge separation and non-Fermi liquid spectral functions, while the non-skeleton, i.e. bare, perturbation theory does not.
Understanding the extent of {\em momentum dependence} of the Dysonian
self-energy $\Sigma$ in various dimensions is one of the goals of the present
work. While the $d=\infty$ models have a momentum {\em independent}
self-energy, momentum dependence of $\Sigma$ is inevitable in lower
dimensions. However there is a scarcity of reliable information on its extent
and location. In most published work, the self-energy in 1-d is rarely
presented \cite{Veljko-1d}, or even calculated, since standard solutions
directly deal with the Green's function. In contrast we focus on unraveling
the $(\vec{k},\omega)$ dependence of the Dysonian self-energy in 1-d and
comparing with its higher dimensional counterparts.
\begin{figure*}[t]
\subfigure[\;\; n=0.7, t'=0, J=0.3]{\includegraphics[width=.7\columnwidth]{nktp0n07J3plot.pdf}}
\subfigure[\;\; n=0.7, t'=0, J=0.6]{\includegraphics[width=.7\columnwidth]{nktp0n07J6plot.pdf}}
\subfigure[\;\; n=0.7, t'=0.2, J=0.3]{\includegraphics[width=.7\columnwidth]{nktpp2n07J3plot.pdf}}
\subfigure[\;\; n=0.7, t'=0.2, J=0.6]{\includegraphics[width=.7\columnwidth]{nktpp2n07J6plot.pdf}}
\caption{ \label{nk} Momentum distribution $n_k$ for ECFL (yellow) at T=0.005 and tDMRG (blue) at T=0 with n=0.7, J=0.3, 0.6 and t'=0, 0.2. In all cases the two methods agree well, especially in the occupied region, and both give a power-law singularity at $k_F$. The small discrepancy in the unoccupied region corresponds to the $3k_F$ feature in the exact solutions discussed in \cite{BA1}. This subtle singularity is missed by the ${\cal O}(\lambda^2)$ equations.
}
\end{figure*}
\section{\bf Overview}
In the present work we solve the $d=1$ $t$-$t'$-$J$ model for generic parameters using {\em the same set of ECFL equations} as in higher dimensions. We calculate from the two theories the momentum distribution function, self-energy, spectral function and excitation dispersion over a broad energy scale.
In the low $k,\omega$ regime, which exhibits non-Fermi liquid behavior, reasonable agreement is found between the two methods and the exact diagonalization (ED) data for the velocities of spinons and holons \cite{ed}, as well as with the Tomonaga-Luttinger liquid (TLL) theory for the anomalous exponent \cite{DMRG2}.
Extending the ${\cal O}(\lambda^2)$ ECFL equations to higher orders holds promise of a better agreement. At higher energies, where few studies exist, the agreement between the two theories is quite good already. A valuable insight gained at low energies is the close relationship between a momentum dependent ridge in the $\Im \, \Sigma(k,\omega)$ and the spin-charge separation.
\section{\bf Model and Parameters used}
The Hamiltonian of the 1-d $t$-$t'$-$J$ model is
\begin{eqnarray}
\begin{split}
H_{tJ} &= - t \sum_{\langle ij\rangle} \X{i}{\sigma 0}\X{j}{0\sigma} - t' \sum_{\langle\langle ij\rangle\rangle} \X{i}{\sigma 0}\X{j}{0\sigma} - \bm {\mu} \sum_i \X{i}{\sigma \sigma},
\\&\ + J \sum_{\langle ij\rangle} \left( \vec{S}_i . \vec{S}_j - \frac{1}{4} \X{i}{\sigma \sigma} \X{j}{\sigma' \sigma'} \right), \label{hamiltonian}
\end{split}
\end{eqnarray}
where repeated spin indices are summed, $\X{i}{\sigma 0}= P_G C^\dagger_{i \sigma} P_G$, $\X{i}{0 \sigma }= P_G C_{i \sigma} P_G$, $\X{i}{\sigma \sigma'}= P_G C^\dagger_{i \sigma} C_{i \sigma'} P_G$ with $P_G= \Pi_i (1-n_{i \uparrow} n_{i \downarrow})$ as the Gutzwiller projection operator. $\langle ij\rangle$ and $\langle\langle ij\rangle\rangle$ refer to sums over first and second neighbor pairs, respectively.
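As a toy illustration of the Gutzwiller projection that defines the Hubbard operators above (a counting sketch only; the state encoding and chain length are ours, not part of the model definition):

```python
from itertools import product

# On a 2-site chain, P_G removes every Fock state with a doubly occupied
# site, reducing the local Hilbert space from 4 states to 3 per site.
site_states = [(0, 0), (1, 0), (0, 1), (1, 1)]      # (n_up, n_dn) per site
all_states = list(product(site_states, repeat=2))   # 4^2 = 16 Fock states
kept = [s for s in all_states
        if all(nu * nd == 0 for nu, nd in s)]       # no double occupancy
dim_projected = len(kept)                           # 3^2 = 9 states survive
```

The same counting gives $3^L$ projected states for a chain of $L$ sites.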
For this model \cite{ECFL,SP}
we compute the results from the two theories at density $n=0.7$,
second nearest neighbor hopping
$t'/t=0,0.2$ and
$J/t=0.3,0.6$.
We avoid the special cases of $t'=0=J$ since this leads to a degenerate
spectrum, with a charge sector that is isomorphic to the spinless Fermi gas.
The ECFL results are shown at various $T$ while the tDMRG results are at $T=0$
where the most reliable calculations are possible. We take $t=1$ as the energy unit and suppress it below.
\begin{figure*}[t]
\centering
\subfigure[\;\; ECFL, T=0.005]{\includegraphics[width=.99\columnwidth]{ECFLsigmakomega.pdf}}
\subfigure[\;\; tDMRG, T=0]{\includegraphics[width=.99\columnwidth]{DMRGsigmakomega.pdf}}
\caption{ \label{sigma3D} n=0.7, J=0.3, t'=0: Imaginary self-energy $\rho_{\Sigma}(k,\omega)$ at low $\omega$ and $k-k_F$ from both methods. Both give a dominant $(k, \omega)$ dependent ridge running from left to right, and a less prominent feature running from top-left to bottom-right. Both pass through the $k=k_F$, $\omega=0$ region. The dominant ridge is responsible for the appearance of the twin-peak structure in the spectral functions that represents the spin-charge separation. The peaks for $k<k_F, \omega<0$ are seen in the left half of the electronic spectral function in \figdisp{EDC} panels (a,b), while the peaks for $k>k_F,\omega>0$ are seen in the right half of the same figures. As seen in \figdisp{fig5} panel (c), the peak in the self-energy $\rho_\Sigma$ directly leads to a dip in the electronic spectral function $\rho_G$, provided the real part is small.
}
\end{figure*}
\begin{figure}[htbp]
\subfigure[\;\; ECFL, T=0.005]{\includegraphics[width=.49\columnwidth]{ECFLsigmak.pdf}}
\subfigure[\;\; tDMRG, T=0]{\includegraphics[width=.49\columnwidth]{DMRGsigmak.pdf}}
\caption{ \label{sigma} $n=0.7, J=0.3$: $\rho_{\Sigma}(k,\omega)$ vs $\omega$ at the marked values of $k/k_F$,
from ECFL at $T=0.005$ (a) and tDMRG at $T=0$ (b), on a large scale.
The two sets of results are similar on a broad energy scale, and are comparable to higher dimensional results.
The low energy behavior is discussed below.
}
\end{figure}
{The tDMRG methods used are very similar to those used in
\refdisp{WSKPRL}. We start by obtaining the ground state $|0\rangle$ using DMRG on
a rather long but finite chain, with $L=400$, and then apply $\hat c_0$ or
$\hat c^\dagger_0$ to a site 0 near the center, forming $|\psi(t=0)\rangle$. We
use a Trotter based time evolution algorithm, with
fermionic swap gates to handle next-nearest neighbor terms. We specify a density matrix eigenvalue
truncation cutoff of $3\times10^{-8}$ during the evolution, subject to a constraint on the maximum number
of states kept of $m=3000$. (Results were checked by comparing to $m=2000$.)
We evolve out to a time $t=50$. At $t=50$, the normalization of $|\psi(t)\rangle$ had decreased by
a few percent, a small error affecting primarily the widths of any sharp peaks.
The space and time dependent Green's function is obtained by sandwiching $\hat c_i$ or $\hat c^\dagger_i$
between the ground state and $|\psi(t)\rangle$ for all $i$. Linear prediction is used
to extend the time dependent Green's function out to $t=100$, after which the data is windowed
and Fourier transformed.
This calculation
represents the most accurate and detailed study to date of the spectral
properties of the model at $T=0$.}
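The post-processing chain described above (linear prediction to extend $G(t)$, a decaying window, then a Fourier transform) can be sketched on a toy signal; the single damped mode, prediction order, and window choice here are ours, not the actual tDMRG data or settings:

```python
import numpy as np

def linear_predict(x, order, n_extra):
    """Extend x by n_extra points with least-squares linear prediction."""
    # Regression x[n] ~ sum_k a[k] * x[n-1-k], most recent sample first.
    rows = np.array([x[i:i + order][::-1] for i in range(len(x) - order)])
    rhs = x[order:]
    a, *_ = np.linalg.lstsq(rows, rhs, rcond=None)
    y = list(x)
    for _ in range(n_extra):
        y.append(np.dot(a, y[-1:-order - 1:-1]))   # recurse forward in time
    return np.array(y)

dt = 0.1
t = np.arange(0.0, 50.0, dt)
w0, gamma = 1.3, 0.05
g_t = np.exp((-1j * w0 - gamma) * t)                  # toy G(t): one damped mode

g_ext = linear_predict(g_t, order=4, n_extra=len(t))  # extend t = 50 -> 100
window = np.hanning(2 * len(g_ext))[len(g_ext):]      # half-window decaying to 0
spec = np.abs(np.fft.fftshift(np.fft.fft(g_ext * window)))
freq = np.fft.fftshift(np.fft.fftfreq(len(g_ext), d=dt)) * 2 * np.pi
```

For this single-mode toy the prediction is essentially exact, and the windowed spectrum peaks at the mode frequency with a width set by the damping and the window.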
\section{\bf Momentum distribution function}
In the 1-d $t$-$J$ model, $n_k$ shows a power-law singularity at $k_F$ \cite{1dhubbard, CI}, a signature of the TLL, unlike the jump characteristic of Fermi liquid behavior in higher dimensions. This feature is observed with both methods in \figdisp{nk} for different $t'$ and $J$. Owing to the second order approximation, the weak $3k_F$ singularity related to the shadow band \cite{BA1,BA4} is not observed in the ECFL results. Apart from this weak effect, $n_k$ from the two methods agrees well, especially on the occupied side, showing that ECFL describes the correct $t'$ and $J$ dependent behavior.
\begin{figure*}
\begin{minipage}[c][10cm][t]{.5\textwidth}
\vspace*{\fill}
\centering
\includegraphics[width=10cm,height=7.5cm]{sigmaTscalinga.pdf}
\end{minipage}%
\begin{minipage}[c][4.5cm][t]{.45\textwidth}
\vspace*{\fill}
\centering
\includegraphics[width=5cm,height=3.5cm]{sigmaTscalingb.pdf}
\label{}\par\vfill
\includegraphics[width=5cm,height=3.5cm]{sigmaTscalingc.pdf}
\label{}
\end{minipage}
\caption{ \label{sigmaTscaling} $\rho_{\Sigma}(k_F,\omega)$ from ECFL is shown in (a) for several T at $J=0.3, t'=0$. The central peak $\rho_{\Sigma}(k_F,0)$ scales as $T^{1.1}$, in contrast to Fermi liquid behavior $T^2$.
Extrapolating to $T=0$ the double minimum structure disappears, leaving behind a $\sim|\omega|^{1.3}$ dependence.
(b) displays the self-energy on a larger scale, where changing $T$ barely makes a difference.
(c) shows the
spectral function
softened by warming. }
\end{figure*}
\section{\bf Self-energy}
Next we present the Dysonian self-energy in terms of its spectral function $\rho_{\Sigma}$ defined as
\begin{equation}
\rho_{\Sigma}(k,\omega)=-\frac{1}{\pi}\Im \, \Sigma(k,\omega).
\end{equation}
It is derived separately from the Green's functions of the ECFL and tDMRG methods. In tDMRG, $\Sigma$ can be found from $G$ by inverting the Dyson relation $G^{-1}= G_0^{-1}- \Sigma$. The ECFL theory produces two (non-Dysonian) self-energies $\Phi, \Psi$ \cite{ECFL}, and the resulting $G$ can again be inverted to find the standard Dysonian $\Sigma$. Both the ECFL $(T=0.005)$ and tDMRG $(T=0)$ self-energies are shown in \figdisp{sigma3D} for comparison.
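The Dyson inversion used here can be checked on a toy Green's function with a known self-energy (a consistency sketch with illustrative parameter values; not ECFL or tDMRG output):

```python
import numpy as np

eta = 0.05                       # small positive broadening
w = np.linspace(-4.0, 4.0, 801)  # frequency grid
eps_k = -0.3                     # band energy at a fixed k (toy value)
sigma_in = 0.2 - 0.1j            # known self-energy to be recovered

g0 = 1.0 / (w + 1j * eta - eps_k)             # free Green's function
g = 1.0 / (w + 1j * eta - eps_k - sigma_in)   # full Green's function

sigma_out = 1.0 / g0 - 1.0 / g                # Dyson: Sigma = G0^{-1} - G^{-1}
rho_sigma = -sigma_out.imag / np.pi           # spectral function of Sigma
```

On real data the same inversion is applied point by point on the $(k,\omega)$ grid; the numerical subtlety is that $\Sigma$ becomes sensitive to noise wherever $|G|$ is small.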
In \figdisp{sigma3D}, the two theories show a similar pattern of $k$ dependence: a dominant ridge running from left to right, and a less prominent feature running from top-left to bottom-right, both passing through the $k=k_F$, $\omega=0$ region. The ridge leads to the appearance of twin peaks in the spectral functions, representing spin-charge separation. In the higher energy region in \figdisp{sigma}, the two theories agree well and are similar to their higher dimensional counterparts.
A powerful feature of the ECFL theory is that it allows us to vary the temperature without extra effort, at least in the low to intermediate temperature region. In \figdisp{sigmaTscaling},
$\rho_{\Sigma}$ at $k_F$ is presented at several temperatures. The bump becomes higher with increasing temperature, though there is no obvious change on a larger scale (panel (b)). This is expected because warming softens the peak of the spectral function at $k_F$, $\rho_G(k_F,0)=1/(\pi^2\rho_{\Sigma}(k_F,0))$, shown in panel (c). The central peak height $\rho_{\Sigma}(k_F,0)$ scales as $T^{\alpha}$ with $\alpha\approx1.1$, as opposed to the $\alpha=2$ expected for a Fermi liquid.
Although $T=0.005$ is the lowest temperature accessible in the current numerical scheme for the second order ECFL, owing to the finite lattice size (up to $L=2417$ and $N_\omega=2^{17}$), we extrapolate the curve to $T=0$.
The peak at $k_F$ disappears at zero $T$, and is replaced by a minimum at the origin corresponding to a singular peak in the spectral function, consistent with earlier studies \cite{1dhubbard, BA4}. The self-energy approaches zero as $\big|\omega\big|^\gamma$, where $\gamma\approx 1.3$. This behavior is difficult to observe in our present tDMRG implementation, because the finite time cut-off leads to a broadening. The peak and its $k$ dependence are recovered on moving away from $k_F$, producing spin-charge separated peaks at $T=0$.
\begin{figure*}
\subfigure[\;\; ECFL T=0.005, J=0.3]{\includegraphics[width=.72\columnwidth]{EDCECFLa.pdf}}
\subfigure[\;\; tDMRG T=0, J=0.3]{\includegraphics[width=.7\columnwidth]{EDCDMRG.pdf}}
\subfigure[\;\; ECFL T=0.005, J=0.3]{\includegraphics[width=.73\columnwidth]{EDCECFLb.pdf}}
\subfigure[\;\; ECFL T=0.005, J=0.3]{\includegraphics[width=.7\columnwidth]{EDCECFLc.pdf}}
\caption{\label{fig5} Energy distribution curves (EDCs) at t'=0, J=0.3: (a) and (b) (same legends marking $k/k_F$) display the spinon and the holon for $k \neq k_F$. Panel (c) at $k=.9 k_F$ shows that the peak in $(\pi \rho_{\Sigma})^2$ (dashed black) coincides with the dip in the spectral function $\rho_G(\omega)$ (solid gold), while $(\omega+\mu-\varepsilon_k - \Re \, \Sigma)^2$ (magenta dots) is small everywhere. This implies that the twin peaks originate in the intervening peak of the self-energy.
Panel (d), also at $k=.9 k_F$, shows the fitting procedure for finding the anomalous exponent $\zeta'\equiv \zeta-\frac{1}{2}$ for the spinon \cite{Giamarchi,Meden}: we fit to $.59(\omega- \omega_{peak})^{\zeta'}$ (dashed blue);
the best-fit value is $\zeta' \sim -0.44$, close to the TLL result $-0.45$ \cite{DMRG2}.
}
\end{figure*}
\begin{figure*}
\subfigure[\;\; tDMRG]{\includegraphics[width=.95\columnwidth]{EDC3DDMRG.pdf}}
\subfigure[\;\; ECFL with window]{\includegraphics[width=.95\columnwidth]{EDC3DECFLwindow.pdf}}
\subfigure[\;\; ECFL without window]{\includegraphics[width=.95\columnwidth]{EDC3DECFL.pdf}}
\caption{\label{EDC} $J=0.6, t'=0$. The spectral function of the tDMRG ($T=0$) with an intrinsic time window (a) and the ECFL ($T=.005$) with (b) and without (c) a comparable time window. The introduction of a time window brings the two theories to the same scale. The central peak and the spinon peaks are of comparable height, while the holon peak of the ECFL is less prominent due to the second-order approximation.
}
\end{figure*}
\begin{figure*}
\centering
\subfigure[\;\; n=0.7, t'=0, J=0.3]{\includegraphics[width=.7\columnwidth]{dispersion1.pdf}}
\subfigure[\;\; n=0.7, t'=0, J=0.6]{\includegraphics[width=.7\columnwidth]{dispersion2.pdf}}
\subfigure[\;\; n=0.7, t'=0.2, J=0.3]{\includegraphics[width=.7\columnwidth]{dispersion3.pdf}}
\subfigure[\;\; n=0.7, t'=0.2, J=0.6]{\includegraphics[width=.7\columnwidth]{dispersion4.pdf}}
\caption{ \label{spectra} Dispersion of excitations from both ECFL at T=0.005 (gold dots) and tDMRG at T=0 (blue dots), and the available ED data (red) \cite{ed}.
The error bars in the tDMRG estimates are from the time window broadening. The tDMRG results are consistent with the ED results, while the ECFL holon dispersion deviates somewhat.}
\end{figure*}
\section{\bf Spectral function}
We also compare the spectral functions from the two methods. In \figdisp{fig5}, panels (a,b) both show a single peak at $k_F$ and double peaks away from $k_F$, representing spinons and holons respectively. Panel (c) puts together the spectral function away from $k_F$ and the different parts of its formula:
\begin{equation}
\rho_G(k,\omega)= \frac{\rho_\Sigma(k,\omega)}{[\omega+ \mu - \varepsilon_k - \Re \Sigma(k, \omega)]^2+ \pi^2 \rho^2_\Sigma(k,\omega)}. \label{formula}
\end{equation}
It shows that $\omega+ \mu - \varepsilon_k - \Re \Sigma(k, \omega) $ is very small in the frequency range that spans the two peaks, and confirms that the visible twin peaks result from a peak in $\rho_\Sigma$ in the middle.
Thus the ridge lies at the minimum between the spinon and holon peaks in the spectral function in panels (a,b); in fact, the ridge causes the twin peaks.
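The mechanism can be illustrated with a toy model of Eq.~\eqref{formula}: placing a single bump in $\rho_\Sigma$ where the real part of the denominator is small produces twin peaks in $\rho_G$. The Lorentzian-like shape used below is purely an illustrative assumption, not the actual ECFL self-energy.

```python
import math

# Toy stand-in for rho_Sigma(omega): small background plus a central bump
# located where the real part of the denominator vanishes.
def rho_sigma(w):
    return 0.05 + 1.0 * math.exp(-w * w / 0.01)

# Eq. (formula) with (w + mu - eps_k - Re Sigma) replaced by w for simplicity.
def rho_g(w):
    return rho_sigma(w) / (w ** 2 + math.pi ** 2 * rho_sigma(w) ** 2)

center = rho_g(0.0)   # dip: large rho_Sigma suppresses rho_G here
side = rho_g(0.3)     # flanking maximum
print(side > center)  # the central self-energy peak yields twin peaks
```

The dip at $\omega=0$ flanked by two maxima is exactly the twin-peak structure discussed above.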
The exponents in panel (d) match reasonably well with those from the TLL at $J=0.3$, and also at $J=0.6$ (where $\zeta'\sim -.49$ versus $\zeta'\sim -0.46$ from \refdisp{DMRG2}). We take the Luttinger parameter $K_{\rho}\approx 0.53$ at $J=0.3, t'=0$ from Fig. (4) in \refdisp{DMRG2}, and calculate $\zeta=\gamma_{\rho}=(K_{\rho}+K_{\rho}^{-1}-2)/8\approx 0.05$ \cite{Giamarchi,Meden}; the anomalous exponent is therefore $\zeta'= \zeta-\frac{1}{2}=-0.45$. The calculation is similar for $J=0.6$, with $K_{\rho}\approx 0.56$ from the same figure. The tDMRG spectral function in panel (b) is too soft to extract the anomalous exponent, because its finite time cutoff leads to a broadening of the spectral peaks in the low-$\omega$ region.
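The exponent arithmetic quoted above can be checked directly; the helper name below is ours, not from the cited works.

```python
# zeta = gamma_rho = (K_rho + 1/K_rho - 2)/8 and zeta' = zeta - 1/2,
# using the TLL relations cited in the text.
def anomalous_exponent(K_rho):
    """Return the anomalous spinon exponent zeta' = zeta - 1/2."""
    zeta = (K_rho + 1.0 / K_rho - 2.0) / 8.0
    return zeta - 0.5

# K_rho ~ 0.53 at J=0.3 and ~ 0.56 at J=0.6 (from Fig. (4) of the tDMRG work)
print(round(anomalous_exponent(0.53), 2))  # -> -0.45
print(round(anomalous_exponent(0.56), 2))  # -> -0.46
```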
In \figdisp{EDC} we compare the spectral function of the tDMRG with that of the ECFL theory. The latter is presented both with and without Gaussian windowing by a suitable time constant, comparable to that in our tDMRG work. As one might expect, the scales of the two theories differ if we compare the raw (un-windowed) figures, but become very close upon windowing.
\section{\bf Dispersion relation of spinons and holons}
We extract the excitation dispersion relation from the spectral function in \figdisp{spectra}. According to \refdisp{ed}, in the selected parameter region $n=0.7$, $J=0.3, 0.6$ and $t'=0, 0.2$, the holon velocity $v_c$ is larger than the spinon velocity $v_s$. The error bars in the tDMRG originate from the broadening of the lines due to the finite time window. Within the error bars, the tDMRG agrees with the available ED data \cite{ed}. We expect that the higher order terms neglected in the ECFL theory would play a role in improving the holon velocity and also the intensities.
\section {\bf Conclusion and Discussion}
In this paper, we have presented the self-energy of the 1-d $t$-$t'$-$J$ model from both ECFL and tDMRG, and identified its characteristic low-energy, strongly momentum-dependent ridge, qualitatively different from the higher dimensional cases, which is responsible for the spin-charge separation in the spectral function. This perspective is different from those of earlier studies of this model in 1-d \cite{CI,BA1,BA2,BA3,BA4,BA5,BA6,ed,DMRG1,DMRG2,QMC}. The existence of a ridge structure in the imaginary self-energy represents a non-trivial exact statement about the momentum dependence of the 1-d model.
We have also compared the spectral function, the excitation dispersion and the momentum distribution function between the two methods. They agree qualitatively in the low energy region, both capturing clear signatures of the TLL, and more quantitatively at larger energy scales, where the system behaves as it does in higher dimensions.
In summary we have shown in this work that the ECFL equations capture the essential physics of 1-d systems, namely spin-charge separation and non-Fermi liquid Green's functions in parallel to the behavior displayed by the tDMRG solution. A remarkable conclusion of this work is that ECFL theory works in the widely different regimes of infinite dimensions \cite{Sriram-Edward}, two dimensions \cite{SP,PS} and 1-d. This observation lends support to the overall scheme in general dimensions as well.
\section {\bf Acknowledgement} We thank Rok \v{Z}itko for helpful comments on the manuscript.
The work at UCSC was supported by the US Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES), under Award No. DE-FG02-06ER46319. The work at UCI was supported by National Science Foundation (NSF) grant DMR-1505406. The ECFL Computations used the XSEDE Environment \cite{xsede} (TG-DMR170044) supported by National Science Foundation grant number ACI-1053575.
\titlespacing\section{0pt}{12pt plus 4pt minus 2pt}{4pt plus 2pt minus 2pt}
\titlespacing\subsection{0pt}{12pt plus 4pt minus 2pt}{2pt plus 0.5pt minus 0.5pt}
\titlespacing\subsubsection{0pt}{12pt plus 4pt minus 2pt}{2pt plus 0.5pt minus 0.5pt}
\makeatletter
\let\cat@comma@active\@empty
\makeatother
\begin{document}
\title{Internal friction can be measured with the Jarzynski equality}
\date{\today}
\author{R. Kailasham}
\affiliation{IITB-Monash Research Academy, Indian Institute of Technology Bombay, Mumbai, Maharashtra - 400076, India}
\affiliation{Department of Chemistry, Indian Institute of Technology Bombay, Mumbai, Maharashtra - 400076, India}
\affiliation{Department of Chemical Engineering, Monash University,
Melbourne, VIC 3800, Australia}
\author{Rajarshi Chakrabarti}
\email{[email protected]}
\affiliation{Department of Chemistry, Indian Institute of Technology Bombay, Mumbai, Maharashtra - 400076, India}
\author{J. Ravi Prakash}
\email[Electronic mail: ]{[email protected]}
\affiliation{Department of Chemical Engineering, Monash University,
Melbourne, VIC 3800, Australia}
\begin{abstract}
A simple protocol for the extraction of the internal friction coefficient of polymers is presented. The proposed scheme necessitates repeatedly stretching the polymer molecule, and measuring the average work dissipated in the process by applying the Jarzynski equality. The internal friction coefficient is then estimated from the average dissipated work in the hypothetical limit of zero solvent viscosity. The validity of the protocol is established through Brownian dynamics simulations of a single-mode spring-dashpot model for a polymer. Well-established single-molecule manipulation techniques, such as optical tweezer-based pulling, can be used to implement the suggested protocol experimentally.
\end{abstract}
\keywords{Internal friction $|$ Jarzynski's equality $|$ Force spectroscopy $|$ Brownian dynamics $|$ Spring-dashpot dumbbell}
\maketitle
Conformational transitions in polymer molecules are impeded by solvent molecules and by intramolecular interactions within the molecule~\cite{kuhn1945,degennes,Manke1985,Sagnella2000,Hagen2010385}. The resistive effect from various intramolecular interactions~\cite{Khatri20071825,Murayama2007,Alexander-Katz2009,Schulz20154565,Echeverria2014,DeSancho2014,Ameseder2018,Jas2001,Soranno2017} is collectively called ``internal friction''. Internal friction modulates the kinetics of spatial reorganization in a number of biological contexts, such as damping the process of protein folding~\cite{Ansari1992,Qiu2004,Wensley2010,Soranno2017}, and affecting stretching transitions in polysaccharides~\cite{Khatri20071825}. A commonly accepted operational definition for internal friction~\cite{Qiu2004,Avdoshenko2017,Soranno2017} is the folding (reconfiguration) time for a protein in the extrapolated limit of zero solvent viscosity ($\eta_{\textrm{s}}\to 0$). However, such an indirect definition does not ascribe a friction coefficient to the phenomenon. On the other hand, stretch-relaxation experiments on condensed DNA by Murayama et al.~\cite{Murayama2007}, and molecular dynamics simulations of polypeptide-stretching by Schulz, Miettinen and Netz~\cite{Schulz20154565} provide an estimate for the internal friction coefficient. We are motivated by the protocol proposed by Netz and coworkers~\cite{Alexander-Katz2009,Schulz20154565}, wherein the work done in stretching a polymer molecule is partitioned into a reversible component, and an irreversible, dissipative component. The former goes into reversibly increasing the free energy, and is stored in the extension of the molecule. The latter represents the work expended in overcoming the rate-dependent restoring force offered by the solvent degrees of freedom, and sources of friction internal to the molecule.
In the model system examined in ref.~\citealp{Schulz20154565}, Netz and coworkers consider intramolecular hydrogen bonds as the source of internal friction. In the hypothetical limit of zero solvent viscosity, the only source of dissipation in the system is assumed to be the internal friction in the molecule. The average dissipated work in the limit of zero solvent viscosity, is then used to estimate the internal friction coefficient, and is found to scale with the number of intramolecular hydrogen bonds in the system. In this paper, we propose a methodology by which single molecule stretching experiments using optical tweezers could be employed in conjunction with a novel application of the Jarzynski equality~\cite{Jarzynski1997} to measure the dissipation associated with the stretching process and consequently extract the internal friction coefficient of the molecule. The Jarzynski equality provides a conceptual framework for obtaining the free-energy difference associated with a process from a non-equilibrium protocol. Since its formulation in 1997, the Jarzynski equality has been routinely employed for reconstructing the free energy landscape of biomolecules from finite-rate pulling experiments~\cite{Liphardt2002,Harris2007,Gupta2011} and simulations~\cite{Hummer2010,Hodges2016}. The dissipation arising due to the non-equilibrium protocols employed in these applications has largely been ignored, except in the context of estimating the accuracy~\cite{Ritort2002,Jarzynski2006,YungerHalpern2016} of the free-energy difference obtained using the Jarzynski equality. In this study, we leverage the dissipated work to our advantage, and show that internal friction can be measured with the Jarzynski equality.
It is worth noting that prior studies on the dissipative effects of internal friction~\cite{Murayama2007,Alexander-Katz2009,Schulz20154565} calculate the reversible and irreversible components of the total stretching work \textit{separately}: the free energy difference is obtained as the work done in the quasi-static limit~\cite{Callen1985}, and the dissipated work at a given pulling rate is obtained by subtracting the free energy difference from the total work done. Here we propose that multiple realizations of the pulling experiment be performed, which yield a distribution of work values due to thermal fluctuations in the system. From this distribution of work values, the Jarzynski equality allows one to extract the free-energy difference and the average dissipated work \textit{simultaneously}, at \textit{finite} pulling rates. At a fixed value of the pulling velocity, the average dissipated work is then calculated at various values of the solvent viscosity, $\eta_{\textrm{s}}$, and its value in the hypothetical limit of zero solvent viscosity, $\left<W_{\text{dis}}\right>_{\eta_{\!_{\, \text{s}}}\to\,0}$, is obtained by extrapolation. This exercise is to be repeated for multiple values of the pulling velocity. The dissipative force due to internal friction is given by the ratio of $\left<W_{\text{dis}}\right>_{\eta_{\!_{\, \text{s}}}\to\,0}$ to the distance, $d$, over which the molecule is stretched. The internal friction coefficient is then evaluated as the slope of the plot between the dissipative force and the pulling velocity.
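The extraction step of the protocol can be sketched on synthetic data. We \textit{assume} the separable form $\left<W_{\text{dis}}\right> = (c\,\eta_{\text{s}} + K)\,v\,d$ with a solvent contribution $c\,\eta_{\text{s}}$ (with $c\sim 6\pi a$), generate noisy ``measurements'', extrapolate to $\eta_{\text{s}}\to0$ at each velocity, and read off $K$ as the slope of the dissipative force versus velocity. All numerical values and helper names here are illustrative, not from the paper's BD code.

```python
import random

def lsq_line(xs, ys):
    """Least-squares slope and intercept of y = slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

random.seed(0)
K_true = 5.0e-9                 # internal friction coefficient, kg/s
c = 6.0e-7                      # ~ 6*pi*a, converts viscosity to drag, m
d = 1.0e-6                      # stretching distance, m
etas = [0.001 * m for m in (1, 2, 3, 4)]       # solvent viscosities, kg/(m s)
velocities = [1e-7 * m for m in (1, 2, 3, 4)]  # pulling velocities, m/s

forces = []
for v in velocities:
    # synthetic <W_dis> with 1% multiplicative measurement noise
    wdis = [(c * eta + K_true) * v * d * random.gauss(1.0, 0.01)
            for eta in etas]
    _, w0 = lsq_line(etas, wdis)   # extrapolated <W_dis> at eta_s -> 0
    forces.append(w0 / d)          # dissipative force ~ K*v
K_est, _ = lsq_line(velocities, forces)
print(abs(K_est - K_true) / K_true < 0.1)
```

The same two linear fits (first in $\eta_{\text{s}}$, then in $v$) are what an experimental implementation of the protocol would perform.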
The validity of this protocol is established through Brownian dynamics (BD) simulations. A spring-dashpot model for a polymer is considered, where the molecule is represented as two massless beads connected by a spring in parallel with a dashpot (see Fig.~\ref{fig:scheme}~(a)), subjected to fluctuating random forces from the bath of solvent molecules it is immersed in. This model has been widely invoked in rheological~\cite{kuhn1945,manke1992stress,Hua1995307,Kailasham2018}, as well as biophysical contexts~\cite{Khatri20071825,Samanta2016165,Ameseder2018}, to capture the effects of internal friction in polymers. Within this model, the spring accounts for the entropic elasticity in the polymer molecule, whereas the dissipative effect from the myriad sources of internal friction is captured by the dashpot. The solvent drag on the beads accounts for the friction caused by the motion of the polymer in the solvent. The solvent-mediated propagation of momentum between the beads is accounted for by the inclusion of fluctuating hydrodynamic interactions (HI), since it is well established that fluctuations in HI affect the dynamic response of polymer molecules~\cite{Kailasham2018,Prakash2019}. The present study is the first derivation and numerical solution of the system of stochastic differential equations necessary for carrying out force spectroscopy simulations of the spring-dashpot model. Using this model, we demonstrate that the internal friction coefficient estimated from the average dissipated work in the hypothetical limit of zero solvent viscosity is identical to the damping coefficient of the dashpot, which is a model input parameter. The successful recovery of the damping coefficient suggests that the proposed protocol could be used to experimentally obtain an estimate of the magnitude of the internal friction coefficient in polymers.
While the proposed ansatz rests on an assumption that awaits rigorous proof, namely, that the total friction experienced by the polymer molecule may be separable into a ``dry'' component (independent of solvent viscosity) and a ``wet'' component (arising from solvent drag), a recent study by Daldrop et al.~\cite{Daldrop2018} in which they calculate the total friction using a generalized Langevin equation that incorporates both inertial and solvent memory effects, lends substantial credence to this empiricism. In the present treatment, an overdamped Langevin equation is considered that ignores both inertial and memory effects. Furthermore, it is believed that force-induced unfolding in general follows a different mechanistic pathway in comparison to those induced by chemical agents~\cite{Matouschek2003} or temperature~\cite{Hyeon2005}. Consequently, the internal friction coefficient determined from stretching experiments might not necessarily represent the friction associated with the global unfolding of the molecule initiated by chemical or thermal means. Nevertheless, in contexts where it is applicable, the proposed protocol offers a means of quantifying the magnitude of internal friction directly, unlike the majority of the approaches used so far.
The rest of the paper is organized as follows. After introducing the model and the pulling scheme used in our simulations, a brief description of the BD simulation procedure is provided. The parameters used in the simulations, and the rationale behind their choice is then explained. The protocol for the extraction of the internal friction coefficient, and the results are then presented. Finally, we offer a few concluding remarks.
\section*{\label{sec:model} Model description}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{fig1.pdf}
\caption{\textbf{Simulation schematic/Proposed experiment}. (a) Schematic diagram of the coarse-grained polymer model entrapped between two optical tweezers. The spring connecting the two beads is finitely extensible, upto a length $Q_0$. Internal friction is modelled using the dashpot, whose damping coefficient is $K$. The Hookean spring constant associated with the spring is $H$. The strengths of the two traps, modelled as harmonic potential wells, are $H_1=c_1H$ and $H_2=c_2H$ respectively. (b) The one-dimensional pulling protocol : the position of the first trap is taken to be the origin, and remains stationary throughout the experiment. The second trap is moved from its initial position, $\chi^{\textrm{(i)}}_{2x}$ to its final position, $\chi^{\textrm{(f)}}_{2x}$, over a time-interval $\tau$, stretching the spring-dashpot setup in the process. The difference between the initial and final positions of the mobile trap is $d$, and the velocity of pulling is $v_x$.}
\label{fig:scheme}
\end{figure}
As shown in Fig.~\ref{fig:scheme}~(a), we consider a dumbbell model for a polymer with fluctuating internal friction and hydrodynamic interactions. The beads (each of radius $a$) are joined by a spring, with maximum stretchability $Q_0$ and a Hookean spring constant $H$, in parallel with a dashpot of damping coefficient $K$. The dumbbell is suspended in an incompressible, Newtonian solvent of viscosity $\eta_{\text{s}}$. The Marko-Siggia force expression~\cite{Marko1995}, widely employed to model the force-extension relationship in synthetic polymer molecules~\cite{Black2017}, as well as biopolymers~\cite{Latinwo2014,Raman2014,Sunthar2005,Sasmal2016}, is used to describe the entropic elasticity in the dumbbell. The connector vector joining the two beads is denoted by $\bm{Q}\equiv\bm{r}_2-\bm{r}_1$. The centre-of-mass coordinates are given by $\bm{R}\equiv\left(1/2\right)\left(\bm{r}_1+\bm{r}_2\right)$. The positions of the two beads can be manipulated using optical traps, modelled here as harmonic potential wells. The trap stiffnesses are denoted by $H_1=c_1H$, and $H_2=c_2H$ (in units of the dumbbell spring constant), and the co-ordinates of the minimum of the wells are represented by $\bm{\chi}_1$ and $\bm{\chi}_2$, respectively. A temperature of $T=300\,K$ is considered in all our simulations, as a matter of convenience. The viscosity of the solvent at this temperature is taken to be $\eta_{\textrm{s},0}=0.001$ kg/m s, which is close to the viscosity of water at room temperature. In this protocol, values of solvent viscosity which are multiples of $\eta_{\textrm{s},0}$ will be considered. Within this model, the conservative spring potential acting between the beads of the polymer determines the equilibrium properties of the molecule. The dashpot does not affect equilibrium properties, and only modulates the dynamics of the polymer molecule~\cite{Kailasham2018}. 
A dashpot of damping coefficient $K$ pulled with a constant velocity $v$ over a distance $d$, offers a resistive force of magnitude $Kv$~\cite{kuhn1945,Booij1970}. The work done against this restoring force is then simply given by the product of the restoring force and the stretching distance,
\begin{equation}
\left<W_{\text{dis}}\right>_{\eta_{\!_{\, \text{s}}}\to\,0}=Kvd
\end{equation}
The temperature and the spring stiffness together define the length scale of the problem, $l_{\text{H}}\equiv\sqrt{k_BT/H}$, where $k_B$ is the Boltzmann constant. Using this definition of the length-scale, a finite extensibility parameter, $b$, is introduced as $b\equiv Q^2_0/l^2_{\textrm{H}}$. The molecular time scale is given by $\lambda_{\textrm{H}}=\zeta/4H$, where $\zeta(\coloneqq 6\pi\eta_{\textrm{s}}a)$ is the bead friction coefficient. The internal friction parameter, $\epsilon\,(\coloneqq 2K/\zeta)$, is defined as the ratio of the internal friction coefficient to the bead friction coefficient. Quantities with an asterisk as superscript are understood to be dimensionless.
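As an illustrative evaluation of these scales, consider representative values quoted later in the paper ($a=30\,$nm, $\eta_{\textrm{s},0}=0.001\,$kg/m\,s, $H=1.657\times10^{-5}\,$pN/nm, and $K=10^{-9}\,$kg/s); the numbers below are examples for orientation, not new results.

```python
import math

eta_s = 1.0e-3      # solvent viscosity, kg/(m s)
a = 30.0e-9         # bead radius, m
H = 1.657e-8        # spring constant, N/m  (= 1.657e-5 pN/nm)
K = 1.0e-9          # dashpot damping coefficient, kg/s

zeta = 6.0 * math.pi * eta_s * a   # bead friction coefficient, kg/s
lambda_H = zeta / (4.0 * H)        # molecular time scale, s
epsilon = 2.0 * K / zeta           # internal friction parameter (dimensionless)

print(f"zeta = {zeta:.3e} kg/s, lambda_H = {lambda_H:.3e} s, eps = {epsilon:.2f}")
```

For these values $\zeta\approx5.65\times10^{-10}\,$kg/s, $\lambda_{\textrm{H}}\approx8.5\,$ms, and $\epsilon\approx3.5$.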
In Fig.~\ref{fig:scheme}~(b), the pulling protocol employed in this study is depicted. Without any loss of generality, $\bm{\chi}_1$ is chosen as the origin of our frame of reference. In all pulling simulations throughout this work, the first trap is held stationary, and the second trap is moved from its initial position, $\bm{\chi}_2^{\text{(i)}}\equiv(\chi^{\textrm{(i)}}_{2x},0,0)$, to its final position, $\bm{\chi}_2^{\text{(f)}}\equiv(\chi^{\textrm{(f)}}_{2x},0,0)$. The notation ``$\chi^{\textrm{(i)}}_{2x}\to\chi^{\textrm{(f)}}_{2x}$'' represents such a pulling event. The stretching distance is denoted by $d\equiv \left[\chi^{\textrm{(f)}}_{2x}-\chi^{\textrm{(i)}}_{2x}\right]$, the time interval for stretching by $\tau$, and the pulling velocity by $\bm{v}\equiv(v_x,0,0)$, where $v_x=d/\tau$. Note that the bead co-ordinates are allowed to sample the entirety of the three-dimensional coordinate space, but the pulling is restricted to the $x$-axis alone. One could implement an alternative protocol in which the pulling direction also lies in general three-dimensional space. However, such a change would not alter the analysis and arguments presented in this study.
For any value of the trap position, $\bm{\chi}_2$, the Hamiltonian, $\mathcal{H}$, of the system is given by
\begin{align}\label{eq:ham_def}
\mathcal{H}&=U_{\text{MS}}(\bm{Q})+\frac{H_1}{2}{r}_1^2+\frac{H_2}{2}\left(\bm{r}_2-\bm{\chi}_2\right)^2
\end{align}
where $U_{\text{MS}}(\bm{Q})$ represents the potential energy in the spring.
The generalized Jarzynski work~\cite{Jarzynski1997e} corresponding to the pulling protocol discussed above is given by
\begin{align}\label{eq:jarz_work}
W&\equiv\int_{0}^{\tau}\left(\frac{\partial \mathcal{H}}{\partial t}\right)dt=\int_{0}^{\tau}\left(\frac{\partial \mathcal{H}}{\partial \bm{\chi}_2}\right)\cdot \bm{v}\,dt
\end{align}
where $\bm{v}=d\boldsymbol{\chi}_2/dt$.
The work done during each pulling event is different due to the thermal fluctuations in the system. The Jarzynski equality~\cite{Jarzynski1997} is used to evaluate the free energy difference, $\Delta A$, corresponding to the transition shown in Fig.~\ref{fig:scheme}, as
\begin{equation}\label{eq:je_use}
\left<\exp[-W/k_BT]\right>=\exp[-\Delta A/k_BT]
\end{equation}
where $\left<...\right>$ denotes an average over all possible realizations of the pulling protocol.
The average work dissipated in the process is given by $\left<W_{\textrm{dis}}\right>=\left<W\right>-\Delta A$.
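A minimal numerical sketch of this step (not the BD code of this paper) estimates $\Delta A$ and $\left<W_{\textrm{dis}}\right>$ from an ensemble of work values via Eq.~\ref{eq:je_use}. For a Gaussian work distribution the exact answer, $\Delta A=\left<W\right>-\sigma^2/2k_BT$, is known and serves as a check; the Gaussian ensemble is an assumption of this sketch.

```python
import math
import random

random.seed(1)
kBT = 1.0
mean_W, sigma = 3.0, 1.0   # assumed Gaussian work distribution (units of kBT)
works = [random.gauss(mean_W, sigma) for _ in range(200_000)]

# Jarzynski estimator: Delta A = -kBT ln < exp(-W/kBT) >
avg_exp = sum(math.exp(-w / kBT) for w in works) / len(works)
delta_A = -kBT * math.log(avg_exp)

# average dissipated work: <W_dis> = <W> - Delta A
W_dis = sum(works) / len(works) - delta_A

print(f"DeltaA = {delta_A:.3f} kBT, <W_dis> = {W_dis:.3f} kBT")
```

For these parameters the exact values are $\Delta A=2.5\,k_BT$ and $\left<W_{\textrm{dis}}\right>=0.5\,k_BT$, which the estimator reproduces to within a few percent at this ensemble size.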
\section*{\label{sec:bd_simu}Brownian dynamics simulations}
We use Brownian dynamics to simulate the pulling depicted in Fig.~\ref{fig:scheme} (see Sec.~I of the SI for details of the governing equations and numerical scheme). Fluctuating hydrodynamic interactions are modeled using the Rotne-Prager-Yamakawa tensor~\cite{Rotne1969,Yamakawa}. The hydrodynamic interaction parameter is given by $h^*=a/\sqrt{\pi}l_{\textrm{H}}$, and $h^*=0$ corresponds to the free-draining case. The governing system of stochastic differential equations in $\bm{Q^*}$ and $\bm{R^*}$ is solved in its dimensionless form, using a semi-implicit predictor corrector method. The corresponding dimensional quantities are obtained by a suitable multiplication with the scaling factors.
A dimensionless time-step width of $\Delta t^*=1\times 10^{-3}$ is used for all the parameter sets considered in this work. This value corresponds to the highest time-step width that gives converged results. Higher values of the internal friction parameter, smaller values of the finite extensibility parameter, or stiffer optical traps would necessitate the use of smaller $\Delta t^*$, as elaborated in Sec.~I of the SI.
The dimensionless equivalent of the work done during one realization of the pulling event is calculated by evaluating the integral in Eq.~\ref{eq:jarz_work} using a simple rectangular quadrature (see Sec.~I of the SI for details).
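The quadrature can be sketched as follows for the one-dimensional protocol, where $\partial\mathcal{H}/\partial\chi_{2x}=H_2(\chi_{2x}-r_{2x})$ for the harmonic trap. The bead trajectory below (a constant lag $\delta$ behind the trap) is a hypothetical stand-in for an actual BD trajectory, chosen so the integral has the closed form $W=H_2\,\delta\,v_x\,\tau$.

```python
def jarzynski_work(H2, chi2_of_t, r2x_of_t, v_x, tau, n_steps):
    """Rectangular quadrature of Eq. (3): W = sum_n H2*(chi2 - r2x)*v_x*dt."""
    dt = tau / n_steps
    work = 0.0
    for n in range(n_steps):
        t = n * dt
        work += H2 * (chi2_of_t(t) - r2x_of_t(t)) * v_x * dt
    return work

H2, v_x, tau, delta = 2.0, 0.5, 10.0, 0.1
chi2 = lambda t: v_x * t           # mobile trap position
r2x = lambda t: chi2(t) - delta    # bead lags the trap by a constant delta
W = jarzynski_work(H2, chi2, r2x, v_x, tau, 100_000)
print(round(W, 6))  # constant lag: W = H2 * delta * v_x * tau = 1.0
```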
The free-energy difference between the initial and final states can be computed exactly, since the Hamiltonian of the system is known [Eq.~\ref{eq:ham_def}]. There is no closed-form analytical solution for the partition function, as a consequence of the non-linearity introduced by the Marko-Siggia spring potential, and therefore, the free energy difference is evaluated using numerical quadrature. The free energy difference computed in this manner is compared with the result obtained from the implementation of Jarzynski's equality using BD simulations, for two sample cases. Excellent agreement between the free energy differences computed using the two approaches establishes the validity of the code. Additional details pertaining to the validation studies are presented in Sec. II of the SI.
\section*{\label{sec:para_space}Parameter-space specification}
The parameters used in the present work are broadly classified into molecular and control parameters. Molecular parameters pertain to the polymer that is being stretched, whereas control parameters are set by the experiments or the simulations used in the study of stretching the molecule.
\subsection*{\label{sec:mol_param}Molecular parameters}
The molecular parameters relevant to this study are the finite extensibility parameter, $b$, the length scale, $l_{\textrm{H}}$, the bead radius, $a$, and the damping coefficient of the dashpot, $K$. The dumbbell parameters are chosen such that they model the $\lambda$-phage DNA (contour length: $16.5\mu\textrm{m}$, Kuhn segment length: 88 nm) used by Murayama et al.~\cite{Murayama2007} in their stretch-relaxation experiments. As shown in Sec. 3A of the SI, the appropriate values in this case are $b=800$ and $l_{\textrm{H}}=500\,\textrm{nm}$. Alexander-Katz et al.~\cite{Alexander-Katz2009} suggest that the monomeric radius may be taken as the persistence length of the molecule. A bead radius of $a=30\,\textrm{nm}$ is chosen as a representative value. Additional values of $b$, $l_{\textrm{H}}$, and $a$, of about the same order-of-magnitude, have also been used so as to test the protocol for a range of parameter values. The internal friction coefficient associated with the molecule used by Murayama et al.~\cite{Murayama2007} is $K \approx 10^{-7}$ kg/s. Molecular dynamics simulations of pulling experiments on polypeptides by Netz and coworkers~\cite{Schulz20154565} estimate the internal friction coefficient to be in the order of $\sim10^{-10}$ kg/s. In this work, internal friction coefficients that fall within the range mentioned above are chosen, by picking values for $K$ between $1.0\times 10^{-9}\,\textrm{kg/s}$ and $1.0\times 10^{-8}\,\textrm{kg/s}$. Sec. 3A of the SI presents a detailed discussion on the molecular parameters selected for this study.
\subsection*{\label{sec:control_param}Control parameters}
The control parameters in the current study are: the stiffness of the optical traps, the initial extension of the molecule subjected to the stretching protocol, and the pulling velocity. The rationale behind the choice of these parameters is briefly discussed below.
\subsubsection*{\label{sec:trap_stiffness}Trap stiffness}
The strength of the optical trap determines its ability to confine the bead to positions close to its minimum. Higher trap stiffnesses ensure that the beads track the positions of the trap minima (Fig. S2 of the SI), but also necessitate the use of smaller time-step widths in our simulations. For a given set of molecular parameters, terminal states, pulling velocity, and stationary trap stiffness $c_1=1000$, the dissipated work increases with an increase in the mobile trap stiffness, and saturates to a constant value at around $c_2\approx100$ (see Fig. S3 of the SI). Since it is desirable to operate in a regime where the dissipated work is independent of trap stiffness, $c_1=c_2=1000$ is used in all our simulations.
\subsubsection*{\label{sec:f_ex}Initial stretch of molecule}
The restoring force in the spring increases linearly at low values of the fractional extension ($\Lambda\equiv Q^*/\sqrt{b}$), and diverges as $\Lambda\sim1$ (Fig. S5 of the SI). As expected in a non-linear spring, stretching becomes increasingly harder at higher fractional extensions, necessitating the use of stiffer traps in order that the beads track the position of the traps. As shown in Sec. 3B of the SI, for a fixed value of the trap stiffness, increasing the ensemble size decreases the error in the recovered internal friction coefficient when the stretching is restricted to the linear regime. However, when the protocol is extended to the non-linear regime, higher values of the trap stiffness are required to recover the internal friction coefficient with the same accuracy. Throughout this paper, initial states of the spring such that $\Lambda\leq0.2$ have been chosen, but, as shown in Table S5 of the SI, the analysis is transferable to springs that start at $\Lambda\sim0.7$, provided that stiffer traps are used.
\subsubsection*{\label{sec:vel_linear}Pulling velocity and ensemble size}
In Fig.~\ref{fig:vel_linear}, the average dissipated work (scaled by the thermal energy $k_BT$) calculated for a variety of molecular and control parameters is plotted against the magnitude of the dimensionless pulling velocity, $v^*$. It is seen that the average dissipated work varies linearly over the entire range of the pulling velocity, $v^*=0.001-0.1$, except for the dataset with the highest value of the internal friction parameter, which deviates from linearity at higher pulling velocities. The velocity range in dimensional units would depend on the molecular parameters.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{fig2.pdf}
\caption{\textbf{Linearity between dissipation and pulling velocity determines regime of operation}. Average dissipated work as a function of the dimensionless pulling velocity, for various molecular and control parameters. Except when mentioned otherwise, an ensemble size of $N=1\times10^{4}$ is used for all the data points. Symbols indicating datasets with fluctuating hydrodynamic interactions have been enlarged for the sake of clarity. Boxed region indicates the regime of operation throughout this paper. Error bars, which represent a statistical uncertainty of one standard error of the mean (s.e.m), are smaller than the symbol size.}
\label{fig:vel_linear}
\end{figure}
The experimental feasibility of the proposed protocol can be discussed in the context of the molecular parameters used for the dataset represented by filled circles in Fig.~\ref{fig:vel_linear}. For this set of parameters, the pulling velocities explored in Fig.~\ref{fig:vel_linear} vary from $v=29.3\,\textrm{nm/s}\,(v^*=0.001)$ to $v=2.93\,\mu\textrm{m/s}\,(v^*=0.1)$. The molecule is stretched over a distance of $1\,\mu\textrm{m}$. The stiffness of this molecule is $H=1.657\times10^{-5}$ pN/nm. In order to operate in a regime where the dissipated work is independent of the trap strength, as discussed earlier, the stiffness of the trap must be at least a hundred times that of the molecule, which implies $H_{\textrm{trap,min}}=1.657\times10^{-3}$ pN/nm. These values of $v$, $d$, and $H_{\textrm{trap,min}}$ lie well within the range of values explored experimentally, as discussed in greater detail in Sec. 3E of the SI.
Ritort et al.~\cite{Ritort2002} have established from computer simulations of mechanical unfolding that the number of trajectories required to obtain estimates of the free energy difference within an error of $\mathcal{O}\left(k_BT\right)$ increases exponentially with the average dissipation associated with the unfolding process. They predict that for dissipation less than $4k_BT$, around 100 trajectories would suffice, and for a dissipation of $5k_BT$, about 1000 trajectories would be required. These predictions agree well with the average dissipation and ensemble sizes encountered in optical-tweezer-based pulling experiments on RNA~\cite{Liphardt2002} and DNA hairpins~\cite{Gupta2011}. In the present work, the error in the free-energy difference is maintained at $\sim \mathcal{O}\left(0.01k_BT\right)$, in order to obtain a sufficiently accurate estimate of the average dissipated work that enables the internal friction coefficient to be extracted reliably. By restricting the regime of operation to the boxed region in Fig.~\ref{fig:vel_linear}, with $v^*\leq0.02$ and $\left<W_{\textrm{dis}}\right>\sim k_BT$, it is found that $N=1\times10^{4}$ trajectories are sufficient to ensure convergence of the free energy difference within the desired error-bounds. It is possible to operate at higher values of dissipation, outside the boxed regime, provided that the ensemble size is suitably increased. A detailed account of the choice of ensemble size is provided in Sec. 3D of the SI.
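The free-energy difference and average dissipation referred to above follow from the Jarzynski equality, $\Delta F = -k_BT\ln\left<e^{-W/k_BT}\right>$, so that $\left<W_{\textrm{dis}}\right> = \left<W\right> - \Delta F$. A minimal Python/NumPy sketch of this estimator, applied to a synthetic Gaussian work distribution (an illustrative stand-in, for which $\left<W_{\textrm{dis}}\right> = \sigma_W^2/2k_BT$ exactly), shows that $N\sim10^{5}$ trajectories comfortably resolve a dissipation of order $k_BT$:

```python
import numpy as np

def jarzynski(work, kBT=1.0):
    """Free-energy difference from the Jarzynski equality,
    dF = -kBT ln <exp(-W/kBT)>, and the average dissipated work
    <W_dis> = <W> - dF, for an ensemble of work values."""
    work = np.asarray(work, dtype=float)
    dF = -kBT * np.log(np.mean(np.exp(-work / kBT)))
    return dF, np.mean(work) - dF

# Synthetic Gaussian work distribution (illustrative): for Gaussian W,
# dF = <W> - var(W)/(2 kBT) exactly, so here dF ~ 4 kBT, <W_dis> ~ 1 kBT.
rng = np.random.default_rng(0)
W = rng.normal(loc=5.0, scale=np.sqrt(2.0), size=100_000)
dF, W_dis = jarzynski(W)
print(dF, W_dis)   # ~4.0, ~1.0
```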
\begin{figure}[th]
\centering
\includegraphics[width=0.9\linewidth]{fig3.pdf}
\caption{\textbf{Protocol for the extraction of the internal friction coefficient.} Average dissipated work as a function of the solvent viscosity, for molecules with the parameters: $\left\{b=800, l_{\mathrm{H}}=500\,\mathrm{nm}, K=3.0\times10^{-9}\,\text{kg/s}\right\}$, subjected to pulling denoted by $5 \, l_{\textrm{H}}\to7 \, l_{\textrm{H}}$ at various values of the pulling velocity, $v$, for an ensemble size, $N=1\times10^{4}$. (Inset) The extrapolated values of $\left<W_{\textrm{dis}}\right>$ in the hypothetical limit of zero solvent viscosity, divided by the stretching distance, as a function of the pulling velocity. The slope of the graph, $K_{\textrm{BD}}$, is an estimate of the internal friction coefficient. Error bars represent a statistical uncertainty of one standard error of the mean (s.e.m). Error bars on the extrapolated values are smaller than the symbol size.}
\label{fig:extract_iv}
\end{figure}
Netz and coworkers~\cite{Alexander-Katz2009,Schulz20154565} report a non-linear relationship between the dissipated work and the pulling velocity at high values of the latter quantity, and restrict their analyses to the linear regime for the extraction of the internal friction coefficient. Along similar lines, the current protocol relies on the linear relationship between dissipated work and pulling velocity in order to meaningfully define the internal friction coefficient.
Speck and coworkers~\cite{Speck2008,Speck2017} have shown, in the context of colloidal particles and a dumbbell model for a polymer, that the inclusion of hydrodynamic interactions does not alter the dissipation along a single trajectory. It is observed here as well that the inclusion of fluctuating hydrodynamic interactions does not affect the dissipated work values, as seen from Fig.~\ref{fig:vel_linear}.
\begin{table*}[t]
\setlength{\tabcolsep}{3pt}
\centering
\caption{\label{k_bd} Internal friction coefficients estimated using the protocol described in Fig.~\ref{fig:extract_iv}, for various values of the molecular and control parameters. The error associated with the protocol is calculated as $\%\,\textrm{error}=100\times\left[\left(K_{\textrm{BD}}-K\right)/K\right]$.}
\begin{center}
\begin{tabular}{c| c | c c | c c c}
\hline& & $\qquad \qquad \qquad h^*=0.0$ & & & $h^*>0.0$\\ \cline{3-7} Parameters & Input, $K\,[\times 10^9$ kg/s]& $K_{\text{BD}} [\times 10^9$ kg/s] & $\%$ error& $h^*$ & $K_{\text{BD}} [\times 10^9$ kg/s] & $\%$ error\\
\hline
$b=200,\,l_{\mathrm{H}}=150\,\mathrm{nm},\,2\,l_{\mathrm{H}}\to3\,l_{\mathrm{H}}$ & $1.0$& $1.04\pm0.01$&$4.42$&$0.3$& $0.97\pm0.02$ & $-3.11$\\
$b=400,\,l_{\mathrm{H}}=350\,\mathrm{nm},\,4\,l_{\mathrm{H}}\to6\,l_{\mathrm{H}}$ & $1.0$& $0.99\pm0.03$&$-1.36$& $0.16$&$1.04\pm0.02$ & $4.29$\\
$b=200,\,l_{\mathrm{H}}=150\,\mathrm{nm},\,2\,l_{\mathrm{H}}\to3\,l_{\mathrm{H}}$ & $10.0$& $10.04\pm0.05$&$0.47$&$0.3$& $9.87\pm0.09$ & $-1.24$\\
$b=400,\,l_{\mathrm{H}}=350\,\mathrm{nm},\,4\,l_{\mathrm{H}}\to6\,l_{\mathrm{H}}$ & $10.0$& $9.922\pm0.008$&$-0.78$& $0.16$& $9.95\pm0.09$ & $-0.52$\\
$b=800,\,l_{\mathrm{H}}=500\,\mathrm{nm},\,5\,l_{\mathrm{H}}\to7\,l_{\mathrm{H}}$ & $3.0$& $2.95\pm0.02$&$-1.74$& $-$&$-$&$-$\\
$b=800,\,l_{\mathrm{H}}=500\,\mathrm{nm},\,5\,l_{\mathrm{H}}\to7\,l_{\mathrm{H}}$ & $6.0$& $6.00\pm0.07$&$0.11$& $-$ & $-$ & $-$\\
\hline
\end{tabular}
\end{center}
\end{table*}
\section*{\label{sec:extract_iv}Internal friction in equals internal friction out}
The methodology to extract the internal friction coefficient is illustrated using a molecule with parameters: $\left\{b=800, l_{\mathrm{H}}=500\,\mathrm{nm}, K=3.0\times10^{-9}\,\text{kg/s}, h^*=0.0\right\}$ as an example. An ensemble of such molecules is pulled from an initial trap position of ${\chi}_{2x}^{\text{(i)}}=5\,l_{\text{H}}$ to a final trap position of ${\chi}_{2x}^{\text{(f)}}=7\,l_{\text{H}}$ at different pulling velocities. At each value of the pulling velocity, the average dissipated work is calculated at several values of the solvent viscosity in the range, $\eta_{\textrm{s}}=\eta_{\textrm{s},0}$ to $\eta_{\textrm{s}}=10\,\eta_{\textrm{s},0}$. In an experimental setting with water as the solvent, suitable viscogens, such as glucose or sucrose, may be added to the solvent in order to realize an approximately four-fold increase in its viscosity~\cite{Jas2001,Qiu2004}. In experiments that study the kinetics of intrachain contact formation in polypeptides~\cite{Bieri1999} suspended in a solvent mixture of ethanol and glycerol, the solvent viscosity was varied over two orders of magnitude by adjusting the proportion of glycerol in the mixture.
As shown in Fig.~\ref{fig:extract_iv}, for each value of the pulling velocity used, the average dissipated work in the hypothetical limit of zero solvent viscosity, $\left<W_{\text{dis}}\right>_{\eta_{\!_{\, \text{s}}}\to\,0}$, is obtained from a linear fit to the average dissipated work at finite solvent viscosities. In the inset of Fig.~\ref{fig:extract_iv}, the extrapolated values of the average dissipated work in the limit of zero solvent viscosity (divided by the stretching distance $d$) are plotted against the pulling velocity. The slope of the graph $(K_{\textrm{BD}})$ represents the internal friction coefficient extracted from simulations.
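The two-stage fitting procedure described above can be sketched as follows in Python/NumPy. The assumed form of the dissipation, $\left<W_{\textrm{dis}}\right> = (c\,\eta_{\textrm{s}} + K)\,v\,d$, and all numerical values are illustrative stand-ins for the simulation output, not results from the paper:

```python
import numpy as np

# Stand-in for the simulation output: assume <W_dis> = (c*eta_s + K)*v*d,
# i.e. linear in both solvent viscosity and pulling velocity.  K is the
# internal friction coefficient to be recovered; c, d and all numerical
# values below are illustrative assumptions.
K_true = 3.0e-9                              # kg/s
c = 5.0e-6                                   # m (assumed solvent-friction factor)
d = 1.0e-6                                   # m, stretching distance
etas = np.linspace(1.0e-3, 1.0e-2, 5)        # Pa s, solvent viscosities
vels = np.array([0.5e-6, 1.0e-6, 2.0e-6])    # m/s, pulling velocities

W0 = []                                      # <W_dis> extrapolated to eta_s -> 0
for v in vels:
    W = (c * etas + K_true) * v * d          # "measured" dissipation vs viscosity
    W0.append(np.polyfit(etas, W, 1)[1])     # intercept of the linear fit

# Slope of (<W_dis>_{eta_s -> 0} / d) versus v recovers the coefficient
K_BD = np.polyfit(vels, np.array(W0) / d, 1)[0]
print(K_BD)   # ~3.0e-9 kg/s
```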
Table~\ref{k_bd} shows a comparison between the value of the internal friction coefficient used as an input parameter in the Brownian dynamics simulations, and the corresponding value extracted from the dissipated work using the protocol proposed here, for various molecular and control parameters. As seen from the table, our protocol recovers the input internal friction coefficient to within $5\%$ accuracy. Furthermore, values of the extracted internal friction coefficient, for models with and without fluctuating hydrodynamic interactions, lie close to each other, supporting the conclusion that hydrodynamic interactions do not affect the dissipated work in the system.
\section*{\label{sec:disc}Conclusion}
A simple protocol for extracting the internal friction coefficient of polymer molecules has been proposed, which can be implemented experimentally by pulling the molecule multiple times using optical tweezers. The work done during the stretching process is employed to evaluate the average dissipation using the Jarzynski equality, and the dissipated work in the limit of zero solvent viscosity is then used to obtain the internal friction coefficient. Using Brownian dynamics simulations on a spring-dashpot dumbbell model for a polymer, we establish proof-of-principle by recovering the internal friction coefficient which is used as a model input.
While optical tweezers offer a wide range of values from which the control parameters may be chosen, it is essential to operate in a regime with dissipation limited to a few $k_BT$, so that: (a) the ensemble size is practically realizable, and (b) the average dissipated work is linear in the pulling velocity. We envisage that the scheme proposed here may be applicable to a variety of polymer molecules, and would enable a succinct characterisation of the dissipative properties of the molecule.
The demonstration of the proposed methodology has been made in the context of a single-mode spring-dashpot model. Though the dumbbell model offers much qualitative insight into the behavior of polymeric molecules~\cite{Kailasham2018}, bead-spring-chain models with multiple springs are needed to quantitatively describe experimental features such as the viscoelastic properties of polymer solutions~\cite{Prakash2019} and the dynamics of single molecules~\cite{Schroeder2018}. Indeed, the effect of internal friction on the rheology and dynamics of polymers has been studied with bead-spring-chain models, in which a dashpot with a common value of the damping coefficient is included in parallel with each of the connecting springs~\cite{Dasbach19924118,Khatri20076770,Samanta2016165}. It would be interesting to explore the connection between the internal friction coefficient estimated from the average dissipated work (in the limit of zero solvent viscosity) from such a model, and the damping coefficient of each dashpot in the model.
A comparison with biophysical experiments that determine the reconfiguration time of proteins, or the energy landscape of polysaccharides, would necessitate the use of a multi-bead spring chain that incorporates internal viscosity (IV) and hydrodynamic interactions (HI). \citet{Dasbach19924118} have obtained an approximate analytical solution for such a chain model. The use of BD simulations to solve the bead-spring-dashpot chain model exactly is rendered difficult by the fact that formulating the correct Fokker-Planck equation for such a system, and finding the equivalent set of stochastic differential equations, is non-trivial; this is the subject of our future study.
\section*{\label{sec:Ak}Acknowledgements}
We thank Burkhard D\"unweg, Subhashish Chaki and Ranjith Padinhateeri for enlightening discussions. The work was supported by the MonARCH and SpaceTime computational facilities of Monash University and IIT Bombay, respectively. We also acknowledge the funding and general support received from the IITB-Monash Research Academy.
\bibliographystyle{aipnum4-1}
\section{Introduction}
In a richly scattered high-speed wireless environment, the transmitted signal arrives at the destination after propagating through a number of independent subpaths, with different delays and different angle of arrival (AoA) related Doppler frequency offsets (DFOs). Superposition of these time and frequency shifted versions of transmitted signal at the receiver not only results in the inter-symbol interference (frequency-selective fading), but also leads to a fast time-varying channel, namely time-selective fading. This doubly selective channel fading imposes a significant challenge for high-speed wireless communication~\cite{J_Wu2016Access, F_Hasegawa2017CSCN}. Besides, the transceiver oscillator frequency offset (OFO) naturally exists due to the mismatch of local oscillators. Though orthogonal frequency division multiplexing (OFDM)~\cite{T_Huang2009TVT} is immune to frequency-selective fading, its performance relies heavily on perfect orthogonality among subcarriers, which is quite vulnerable to the carrier frequency offset (CFO, general designation of DFO and OFO)~\cite{S_Ahmed2005TC}. The existence of multiple CFOs will destroy subcarrier orthogonality and cause inter-carrier interference (ICI).
To prevent OFDM from experiencing significant performance degradation, it is crucial to address these multiple CFOs, or the resulting fast time-varying channel.
The most commonly used approach in the current literature to characterize the time variations of channel is basis expansion model (BEM)~\cite{MF_Rabbi2010IET_C, F_Qu2010TWC, H_Hijazi2009TVT, QT_Zhang2006ICC}. The time-varying channel is approximated by the combination of a few basis functions, which greatly reduces the parameters to be estimated. Various candidates of basis functions have been developed, such as complex exponential BEM (CE-BEM)~\cite{F_Qu2010TWC}, polynomial BEM (P-BEM)~\cite{H_Hijazi2009TVT} and Karhunen-Loeve BEM (KL-BEM)~\cite{QT_Zhang2006ICC}. However, the computational burden of BEM methods is still very heavy. Moreover, accurate maximum DFO is necessary so as to determine the minimum order of basis functions, not to mention that KL-BEM further requires accurate channel statistics to compute the basis functions.
Another frequently adopted approach is based on the autocorrelation of the time-varying channel, which could be approximated as the weighted summation of two monochromatic plane waves~\cite{M_Souden2009TSP, F_Bellili2013GLOBECOM, YR_Tsai2009SPL}. In~\cite{M_Souden2009TSP}, channel covariances at different time lags are expressed as a function of Doppler spread factor and OFO, from which a closed-form estimator of Doppler spread factor and OFO is derived. In~\cite{F_Bellili2013GLOBECOM}, the maximum likelihood (ML) estimator is derived and the Doppler spread factor could be estimated via one-dimensional low-cost search.
However, the studies in~\cite{M_Souden2009TSP, F_Bellili2013GLOBECOM, YR_Tsai2009SPL} are restricted to flat fading channels and circumvent the issues of channel equalization and subsequent data detection.
The doubly selective fading feature of channel is difficult to deal with simply from time or frequency domain. Since this feature originates from multipath propagation associated to different AoAs, it should be more reasonable to resort to angle domain. Some earlier works could be found in~\cite{Y_Zhang2011ICST} and~\cite{W_Guo2013ICSPCC}, where the small-scale uniform circular antenna-array (UCA)~\cite{Y_Zhang2011ICST} and uniform linear antenna-array (ULA)~\cite{W_Guo2013ICSPCC} are adopted to separate multiple DFOs and eliminate ICI via array beamforming, respectively. However, their work only applies to scenarios with very sparse channels.
Recently, large-scale antenna array has gained growing interest from researchers~\cite{EG_Larsson2014CM, W_Zhang2018TWC, H_Xie2016JSAC}. Profiting from its high spatial resolution, the authors in~\cite{W_Guo2017TVT} made the first attempt to address in angle domain the problem of multiple CFOs under richly scattered high-mobility scenarios. However, the approach in~\cite{W_Guo2017TVT} must exploit a pilot sequence composed of two identical halves in the time domain, which limits its applicability to systems with other pilot sequences, and may not be optimal for channel estimation and subsequent data detection due to sparsity in the frequency domain~\cite{H_Minn2006TC}.
Neither have the authors of~\cite{W_Guo2017TVT} taken into account the fact that it is quite challenging and may not be possible in practice to establish a fully and entirely calibrated large-scale antenna array. Due to various uncontrollable factors such as imperfect time synchronization or communication devices aging, etc., gain and phase mismatches inevitably appear among multiple receive antennas. Thus, for many circumstances involving array signal processing, array calibration is unavoidable prior to exploiting the probable benefits of large-scale antenna array.
Meanwhile, each subarray in a large subarray-based system can be well calibrated, though the calibration of the whole array is quite difficult. A class of partly calibrated subarray-based antenna array has received considerable attention especially in the traditional array signal processing domain~\cite{F_Gao2005SPL, M_Pesavento2002TSP, CMS_See2004TSP}, e.g., rank reduction estimator (RARE) for direction of arrival (DoA) estimation in~\cite{M_Pesavento2002TSP, CMS_See2004TSP}. One of the benefits of dividing the whole uncalibrated antenna array into perfectly calibrated subarrays is that in the case of one or several subarrays being damaged, it is possible to remove or replace the damaged subarrays without affecting the whole antenna array.
Motivated by the above discussions, this paper aims to combat the doubly selective fading channel by exploiting the high spatial resolution provided by a fully/partly calibrated large-scale ULA. First, we design a high-resolution beamforming network to separate the received signal with multiple CFOs into a set of parallel signal branches with a single CFO each, and develop an estimation algorithm in the case of a fully calibrated ULA to jointly acquire the maximum DFO, OFO and channel. After that, the conventional CFO compensation and maximum-ratio-combining (MRC)-based data detection are performed. Next, the CFO estimation mean square error (MSE) performance analysis reveals its inability to deal with inter-subarray mismatches. In view of this, the above algorithm is further modified by introducing a calibration-oriented beamforming parameter (COBP), making it applicable to the partly calibrated ULA.
In summary, the main contribution of this paper can be described as follows:
\begin{itemize}
\item The frequency synchronization and channel estimation problem in high-mobility scenarios with both DFO and OFO is addressed, whether the ULA at the receiver is fully or partly calibrated. By taking into account the inter-subarray gain and phase mismatches, our system model represents a more realistic scenario and the introduction of COBP effectively remedies the detrimental effects of those array mismatches.
\item By eliminating the necessity of exploiting the two-halves pilot as in~\cite{W_Guo2017TVT}, the proposed joint estimation algorithms can be implemented in many practical communication systems without incurring compatibility problems.
Moreover, as the proposed algorithms can be applied to systems with optimized pilot sequences, their performance can be superior to that of~\cite{W_Guo2017TVT}.
\item An alternative solution based on Taylor series expansion is developed to address the extremely time-consuming two-dimensional grid-search, thereby substantially reducing the computational complexity of the proposed joint estimation algorithms.
\item The MSE performance analysis justifies the necessity of introducing COBP in the presence of array mismatches, and reveals the twofold relationship between the estimation MSE of DFO and that of OFO in the case of fully calibrated ULA. In addition, the Cram\'{e}r-Rao Bound (CRB) for joint DFO and OFO estimation is derived in order to theoretically assess the performance of the proposed algorithms.
\end{itemize}
The remainder of this paper is organized as follows. The system model is described in Section II. Section III presents the joint estimation algorithm designed for the fully calibrated ULA. Section IV first applies the joint estimation algorithm in Section III to the partly calibrated case and analyzes the MSE performance, and then extends the joint estimation algorithm to the partly calibrated ULA. The CRB is derived in Section V. Simulation results are provided in Section VI. Section VII draws the conclusion of this paper. Part of this paper has appeared in a conference paper~\cite{Y_GE2017VTCSpring}.
\textit{Notations:} Superscripts $(\cdot)^*$, $(\cdot)^T$, $(\cdot)^H$, $(\cdot)^\dag$ and $E{\{\cdot\}}$
represent conjugate, transpose, conjugated transpose, pseudo-inverse and expectation, respectively; ${\rm j}=\sqrt{-1}$ is the imaginary unit; $\mathbf{\Re} \{\cdot\}$ and $\mathbf{\Im} \{\cdot\}$ denote the real and imaginary part, and $\|\cdot\|_{2}$ denotes the Euclidean norm of a vector or Frobenius norm of a matrix. $\operatorname{diag}(\bf x)$ is a diagonal matrix with the vector $\mathbf{x}$ as its diagonal elements, and $\operatorname{blkdiag}(\cdot)$ represents a block diagonal matrix; $\operatorname{tr}(\cdot)$ denotes the trace operator, $\lambda_{\textrm{min}}(\cdot)$ and ${\mathbf{v}}_{\min }(\cdot)$ return respectively the minimum eigenvalue and corresponding eigenvector of a positive semi-definite matrix.
$\otimes$ and $\odot$ stand for the Kronecker product and Schur-Hadamard product (element-wise product), respectively. ${\mathbf{I}_N}$ is the $N\times N$ identity matrix, ${\bf 1}$ and ${\bf 0}$ represent respectively an all-one and all-zero matrix with appropriate dimension.
\vspace{-0.6em}
\section{System Model}
Consider the OFDM downlink transmission in a high-mobility scenario where the signal transmitted from the base station (BS) arrives at the high-speed train (HST) along a number of independent subpaths. The HST is equipped with a fully or partly calibrated massive ULA for decoding data from the BS and then delivering it to target users. We will describe in this section the complete system model with the partly calibrated ULA, while the fully calibrated ULA can be regarded as a special case.
\vspace{-0.8em}
\subsection{Actual Steering Vector for the Partly Calibrated Antenna Array}
Consider that the terminal HST is equipped with a massive ULA deployed along the direction of HST motion. The ULA consists of $M$ antennas which can be evenly decomposed into $K$ subarrays, each with $J = M/K$ elements. We suppose that each subarray is well calibrated while the calibration of the whole array is imperfect due to AoA-independent inter-subarray gain and phase mismatches~\cite{CMS_See2004TSP}.
Denote $\boldsymbol{\varepsilon}$ as the calibration error parameter. Then, the actual steering vector towards direction $\theta$ could be written as~\cite{CMS_See2004TSP}
\begin{align}
\mathbf{a}(\theta, \boldsymbol \varepsilon)=\mathbf{V}(\theta)\boldsymbol \alpha(\boldsymbol \varepsilon),
\end{align}
where $\mathbf{V}(\theta) \!=\! \operatorname{blkdiag}\left({{\mathbf{v}}_{1}}(\theta), \!\ {{\mathbf{v}}_{2}}(\theta), \!\ \cdots\!, \!\ {{\mathbf{v}}_{K}}(\theta)\right) $ and $\boldsymbol{\alpha}(\boldsymbol{\varepsilon}) \!=\! {{\left[ \begin{matrix}
{{\alpha}_{1}},\!\! & {{\alpha }_{2}},\!\! & \cdots\!,\!\! & {{\alpha }_{K}} \\
\end{matrix}\right]}^{T}}$. Here, $\mathbf{V}(\theta)$ is an $M\times K$ block-diagonal matrix whose non-zero elements ${{\left[ \begin{matrix}
\mathbf{v}_{1}^{T}(\theta), & \mathbf{v}_{2}^{T}(\theta), & \cdots\!, & \mathbf{v}_{K}^{T}(\theta) \\
\end{matrix}\right]}^{T}}$ correspond to the array response vector of an $M$-elements fully calibrated ULA $\mathbf{a}(\theta)$, and $\boldsymbol{\alpha}(\boldsymbol{\varepsilon})$ is the $K\times 1$ complex vector characterizing inter-subarray gain and phase mismatches, with $\alpha_k$ being the gain and phase mismatch of the $k\rm{th}$ subarray.
Further denote $d$ as the antenna spacing and $\lambda$ as the carrier wavelength. Then, the $r$th element of $\mathbf{a}(\theta, \boldsymbol \varepsilon)$ is given by ${{a}_{r}}\left( {{\theta }}, \boldsymbol{\varepsilon } \right) = {{\alpha }_{k}}{{e}^{\text{j}2\chi\left( r-1 \right)\cos {{\theta }}}}$, where $\chi = \pi \tilde{d} $, $\tilde{d} = \frac{d}{\lambda}$ and $k$ denotes the subarray index to which the $r$th antenna belongs.
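As an illustration, the steering vector of (1) can be assembled numerically. The following Python/NumPy sketch uses illustrative values of $M$, $K$, $\tilde{d}$, the directions, and the mismatch vector (none taken from the paper), and also checks the near-orthogonality of steering vectors of a large fully calibrated ULA that is exploited later:

```python
import numpy as np

def steering(theta, M, K, alpha, d_norm=0.5):
    """Actual steering vector a(theta, eps) of an M-element ULA split
    into K equal subarrays; alpha[k] is the complex gain/phase mismatch
    of subarray k (alpha = ones(K) gives the fully calibrated case).
    Element r:  alpha_k * exp(j * 2*pi * d_norm * (r-1) * cos(theta))."""
    phase = np.exp(1j * 2 * np.pi * d_norm * np.arange(M) * np.cos(theta))
    return np.repeat(np.asarray(alpha), M // K) * phase

# Near-orthogonality of distinct directions for a large calibrated ULA
M, K = 128, 8
a1 = steering(0.3, M, K, np.ones(K))
a2 = steering(1.2, M, K, np.ones(K))
print(abs(np.vdot(a1, a2)) / M)   # close to 0
print(abs(np.vdot(a1, a1)) / M)   # 1 (up to roundoff)
```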
\vspace{-0.9em}
\subsection{Fast Time-varying Channel Model and Received Signal at Partly Calibrated Antenna Array}
\begin{figure}[t]
\setlength{\abovecaptionskip}{-0.5cm}
\setlength{\belowcaptionskip}{-1.0cm}
\begin{center}
\includegraphics[width=100mm]{fig1.eps}
\end{center}
\caption{Illustration of the richly scattered HST scenario with multi-branch beamforming towards pre-fixed directions (only a few subpaths are shown as examples).}
\end{figure}
The considered scenario where the fast-moving terminal is surrounded by plentiful scatterers, as illustrated in Fig. 1, could be fairly characterized by an established Jakes' channel model~\cite{bWC_Jakes1994, YR_Zheng2003TC}. The channel between the BS and the $r$th antenna is modeled as $L$ taps $l\!=\!1,2,\cdots\!,L $, each tap composed of $P \!\gg\! 1$ separable subpaths with index $p\!=\!1,2,\cdots\!,P $, which could be identified by its unique AoA $\theta_{l,p} \sim U\left(0,2\pi \right)$ and associated complex gain $g_{l,p} \sim \mathcal{CN}\left( 0, {\sigma_{l}^{2}}/P \right)$. Here, $\{\sigma_{l}^{2}, l\!=\!1,2,\cdots\!, L\}$ models the channel power delay profile (PDP). We assume $\sum\nolimits_{l=1}^{L}{\sigma_{l}^{2}} = 1$ such that the total average channel gain per receive antenna is normalized. The channel impulse response of the $p$th subpath at the $l$th tap can be written as $\mathbf{h}\left( {{\theta }_{l,p}} \right)\in {{\mathbb{C}}^{L\times 1}}$, whose $l'$th element is given by ${{g}_{l, p}}\delta \left( l'-{l} \right)$.
Denote $\xi$ as the normalized OFO (nOFO) relative to the subcarrier spacing ${{f}_{s}}$. Denote $f_d$ as the normalized maximum DFO (nDFOmax) defined as the ratio of maximum Doppler shift $\frac{\upsilon }{\lambda }$ and ${{f}_{s}}$, where $\upsilon$ refers to HST velocity. Thus, the normalized DFO (nDFO) and the effective superimposed CFO for the $p$th subpath at the $l$th tap are determined by ${{f}_{l, p}}={{f}_{d}}\cos {{\theta }_{l, p}}$ and ${{\varphi }_{l,p}}={{f}_{l,p}}+\xi $, respectively.
Consider one OFDM frame consisting of $N_b$ OFDM blocks, where the first block serves as pilot and the rest is reserved for data transmission. Note that for Jakes' channel model, each subpath has independent attenuation, phase and AoA (thus different DFO) and we assume that $g_{l,p}$, $\theta_{l,p}$ and $f_{l,p}$ may differ among different frames but remain constant over one OFDM frame.
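A minimal sketch of drawing one realization of this channel model is given below (Python/NumPy); the uniform PDP and the values of $f_d$ and $\xi$ are illustrative assumptions:

```python
import numpy as np

L_taps, P = 4, 256
pdp = np.full(L_taps, 1.0 / L_taps)   # uniform PDP (assumed), sums to 1
fd, xi = 0.1, 0.02                    # nDFOmax and nOFO (illustrative)

rng = np.random.default_rng(1)
theta = rng.uniform(0.0, 2.0 * np.pi, size=(L_taps, P))      # AoAs ~ U(0, 2pi)
g = (rng.normal(size=(L_taps, P)) + 1j * rng.normal(size=(L_taps, P))) \
    * np.sqrt(pdp[:, None] / (2.0 * P))                      # g ~ CN(0, pdp_l / P)
phi = fd * np.cos(theta) + xi         # effective superimposed CFO per subpath

print(np.sum(np.abs(g) ** 2))         # ~1: normalized average channel gain
```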
Denote ${{\mathbf{x}}_{m}} \!=\! {{\left[ \begin{matrix}
{{x}_{m, 0}}, & {{x}_{m, 1}}, & \cdots\!, & {{x}_{m, N-1}} \\
\end{matrix} \right]}^{T}}$ as the frequency domain pilot ($m\!=\!1$) or data symbols ($m\!>\!1$) in the $m$th block, with $N$ being the number of subcarriers. Define $\mathbf{F}$ as the $N\!\times\! N$ unitary Discrete Fourier Transform (DFT) matrix, with $\frac{1}{\sqrt{N}}{{e}^{-\text{j}2{\rm\pi} \frac{\left( k-1 \right)\left( n-1 \right)}{N}}}$ as its $(k,n)$-th entry and ${{\mathbf{F}}_{L}}$ the submatrix consisting of its first $L$ columns. Then, the time domain samples can be obtained by applying an $N$-point inverse DFT on ${\mathbf{x}}_{m}$, i.e., ${{\mathbf{s}}_{m}} \!=\! {\mathbf{F}}^H {\mathbf{x}}_{m}$. Appending the cyclic prefix (CP) of length $N_{\mathrm{cp}}$ to the time domain samples ${{\mathbf{s}}_{m}}$ yields ${{\mathbf{s}}_{m, \mathrm{cp}}}$. The existence of CP turns the linear convolution between ${{\mathbf{s}}_{m, \mathrm{cp}}}$ and $\mathbf{h}\left( {{\theta }_{l,p}} \right)$ into circular convolution between ${{\mathbf{s}}_{m}}$ and $\mathbf{h}\left( {{\theta }_{l,p}} \right)$, i.e., $\mathbf{s}_m \circledast \mathbf{h}(\theta_{l,p})$.
Therefore, the signal in the $m$th block (after CP removal) received from the $p$th subpath at the $l$th tap can be expressed as the following $N\times M$ matrix
\begin{align}
{{\mathbf{Y}}_{m}(\theta_{l,p})} = {{{\eta }_{m}}\left( {{\varphi }_{l,p}} \right) \mathbf{E}\left( {{\varphi }_{l,p}} \right) \left(\mathbf{s}_m \circledast \mathbf{h}(\theta_{l,p})\right) {{\mathbf{a}}^{T}}\left( {{\theta }_{l, p}}, \boldsymbol{\varepsilon } \right)},
\end{align}
where ${{\eta }_{m}}\left( {{\varphi }_{l,p}} \right)={{e}^{\text{j}2{\rm\pi} {{\varphi }_{l,p}}\frac{\left( m-1 \right)\left( N+{{N}_{\mathrm{cp}}} \right)}{N}}}$ is the accumulative phase shift of the $m$th block induced by ${\varphi }_{l,p}$, and $\mathbf{E}\left( {{\varphi }_{l,p}} \right)=\operatorname{diag}\left( 1,\ {{e}^{\text{j}2{\rm\pi} {{\varphi }_{l,p}}\frac{1}{N}}},\ \cdots\!, \ {{e}^{\text{j}2{\rm\pi} {{\varphi }_{l,p}}\frac{N-1}{N}}} \right)$ represents the phase rotation inside one OFDM block. Note that we assume perfect time synchronization between the transceivers.
Considering that the circular convolution in the time domain corresponds to the pointwise multiplication in the frequency domain, we have
\begin{align}
\mathbf{s}_m \circledast \mathbf{h}(\theta_{l,p}) &= \mathbf{F}^H \mathbf{F} \left(\mathbf{s}_m \circledast \mathbf{h}(\theta_{l,p})\right) = \mathbf{F}^H \operatorname{diag}(\mathbf{F} \mathbf{s}_m) \sqrt{N} \mathbf{F}_L \mathbf{h}(\theta_{l,p}) = {{\mathbf{B}}_{m}}\mathbf{h}\left( {{\theta }_{l,p}} \right),
\end{align}
where ${{\mathbf{B}}_{m}} \!=\! \sqrt{N}{{\mathbf{F}}^{H}} \operatorname{diag} \left( {{\mathbf{x}}_{m}} \right) {{\mathbf{F}}_{L}}$.
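The identity $\mathbf{s}_m \circledast \mathbf{h} = \mathbf{B}_m\mathbf{h}$ can be verified numerically. The Python/NumPy sketch below (with an assumed BPSK pilot and random taps, chosen only for illustration) builds $\mathbf{B}_m$ exactly as above and compares it against a direct circular convolution:

```python
import numpy as np

rng = np.random.default_rng(2)
N, L_taps = 64, 4
F = np.fft.fft(np.eye(N)) / np.sqrt(N)        # unitary DFT matrix
x = rng.choice([1.0, -1.0], size=N)           # frequency-domain pilot (assumed BPSK)
s = F.conj().T @ x                            # time-domain samples, s = F^H x
h = rng.normal(size=L_taps) + 1j * rng.normal(size=L_taps)   # random channel taps

# B = sqrt(N) F^H diag(x) F_L, with F_L the first L columns of F
B = np.sqrt(N) * F.conj().T @ np.diag(x) @ F[:, :L_taps]
# Direct circular convolution of s with the zero-padded channel h
circ = np.array([sum(s[(n - l) % N] * h[l] for l in range(L_taps))
                 for n in range(N)])
print(np.max(np.abs(B @ h - circ)))           # ~0 (machine precision)
```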
As a result, the total received signal in the $m$th block can be finally expressed as
\begin{align} \label{ReceivedSignal}
{{\mathbf{Y}}_{m}} = \sum\limits_{l=1}^{L}{\sum\limits_{p=1}^{P}{{{\eta }_{m}}\left( {{\varphi }_{l,p}} \right)\mathbf{E}\left( {{\varphi }_{l,p}} \right){{\mathbf{B}}_{m}}\mathbf{h}\left( {{\theta }_{l,p}} \right){{\mathbf{a}}^{T}}\left( {{\theta }_{l, p}}, \boldsymbol{\varepsilon } \right)}} + {{\mathbf{W}}_{m}},
\end{align}
where ${{\mathbf{W}}_{m}}$ is the zero-mean complex additive white Gaussian noise (AWGN) in the $m$th block at the receive antenna array with $E\{{{\mathbf{W}}_{m}}{{\mathbf{W}}_{m}^H}\} \!=\! M{\sigma_{\mathrm{n}}^{2}}{\mathbf{I}_N}$. Here, $\sigma_{\mathrm{n}}^{2}$ denotes the noise power.
\vspace{-0.6em}
\section{Proposed joint estimation algorithm for fully calibrated ULA}
The integrated receiving procedure with fully calibrated massive ULA will be elaborated in this section, and the diagram of this procedure is illustrated in Fig. 2. First, a high-resolution beamforming network is designed to separate the received signal into $Q$ parallel branches, each of which is mainly affected by a single CFO. Then, the CFO and channel are jointly estimated, using all the $Q$ beamforming branches. Next, conventional CFO compensation techniques could be performed for each branch. Finally, MRC is utilized to recover the transmitted data symbols.
\begin{figure}[t]
\setlength{\abovecaptionskip}{-0.5cm}
\setlength{\belowcaptionskip}{-1cm}
\begin{center}
\includegraphics[width=100mm]{fig2.eps}
\end{center}
\caption{ Diagram of the receiver design in the case of fully calibrated ULA.}
\end{figure}
\vspace{-0.6em}
\subsection{Beamforming Network}
From (\ref{ReceivedSignal}), the received pilot signal without considering inter-subarray mismatches is given by (the subscript $m=1$ denoting the pilot block is omitted for brevity hereafter)
\begin{align}
\mathbf{Y}=\sum\limits_{l=1}^{L}{\sum\limits_{p=1}^{P}{\mathbf{E}\left( {{\varphi }_{l,p}} \right)\mathbf{Bh}\left( {{\theta }_{l,p}} \right){{\mathbf{a}}^{T}}\left( {{\theta }_{l, p}} \right)}}+\mathbf{W}.
\end{align}
Since the multiple CFOs are related to different AoAs, the difficulty can be alleviated if we can separate signals of different AoAs through a high-resolution beamforming network. Owing to the sufficient number of antennas, the steering vectors of a fully calibrated massive ULA pointing to any two different directions are nearly orthogonal, i.e., ${{\mathbf{a}}^{H}}\left( {{\theta }_{1}} \right)\mathbf{a}\left( {{\theta }_{2}} \right) \approx 0, {{\theta }_{1}}\ne {{\theta }_{2}}$. Such orthogonality helps eliminate the inter-direction interference, thereby making the steering vector a very simple yet effective candidate beamformer.
Since signals may come from any direction in space due to the rich scatterers around the moving HST, it is more reasonable to directly perform beamforming over a range of pre-fixed $\theta$ without estimating the AoAs beforehand. How to determine the range of $\theta$ will be discussed later.
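The quasi-orthogonality that underpins this beamforming network is easy to check numerically. The sketch below is illustrative only: the phase convention of the steering vector, the array size $M$, and the normalized spacing $\tilde{d}$ are our assumptions, not values from the text.

```python
import numpy as np

M = 256          # number of antennas (illustrative)
d_tilde = 0.45   # antenna spacing in wavelengths (assumed, < 0.5)

def steering(theta_deg):
    # ULA steering vector a(theta); this phase convention is an assumption
    n = np.arange(M)
    return np.exp(1j * 2 * np.pi * d_tilde * n * np.cos(np.radians(theta_deg)))

a1, a2 = steering(60.0), steering(75.0)
auto = np.abs(a1.conj() @ a1) / M    # equals 1 by construction
cross = np.abs(a1.conj() @ a2) / M   # shrinks toward 0 as M grows
```

For a massive array the normalized cross-correlation `cross` is already at the percent level, which is what makes a plain steering vector an effective beamformer here.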
Define the beamformer $\mathbf{b}\left( \theta \right) = \frac{1}{M} \mathbf{a}\left( \theta \right) $. The received signal $\mathbf{z}\left( \theta \right) = \mathbf{Y}{{\mathbf{b}}^{\text{*}}}\left( \theta \right)$ is expressed as
\begin{align} \label{SIN}
\mathbf{z}\left( \theta \right) = \underbrace{\sum\limits_{l,p, {{\theta }_{l,p}}
=\theta }{\mathbf{E}\left( {{\varphi }_{l,p}} \right)\mathbf{Bh}\left( {{\theta }_{l,p}} \right)}}_{\mathrm{desired \ signal}} +\underbrace{\sum\limits_{l', p', {{\theta }_{l',p'}}\ne \theta }{\mathbf{E}\left( {{\varphi }_{l',p'}} \right)\mathbf{Bh}\left( {{\theta }_{l',p'}} \right){{\mathbf{a}}^{T}}\left( {{\theta }_{l', p'}} \right){{\mathbf{b}}^{\text{*}}}\left( \theta \right)}}_{\mathrm{interference}} +\underbrace{\mathbf{W}{{\mathbf{b}}^{\text{*}}}\left( \theta \right)}_{\mathrm{noise}},
\end{align}
where the first term is the desired signal from the direction of interest $\theta$, while the second and third terms represent the interference from other directions and the noise after beamforming, respectively. The sufficient spatial dimension provided by the massive ULA creates a high-resolution beamformer, which only allows signals arriving from the direction of interest $\theta$ to pass through and significantly mitigates the interference from other directions. This makes the second term negligible. By ignoring the interference, we arrive at
\begin{align}
\mathbf{z}\left( \theta \right) = \mathbf{E}\left( {{f}_{d}}\cos \theta +\xi \right)\mathbf{B}\mathbbm{h} \left( \theta \right)+\mathbf{\tilde{w}}\left( \theta \right),
\end{align}
where $\mathbbm{h}\left( \theta \right)=\sum\nolimits_{{{\theta }_{l,p}}=\theta }{\mathbf{h}\left( {{\theta }_{l,p}} \right)}$
and $\mathbf{\tilde{w}}\left( \theta \right) =\mathbf{W}{{\mathbf{b}}^{\text{*}}}\left( \theta \right)$.
Note that in the case of no signals arriving from direction $\theta$, $\mathbbm{h}\left( \theta \right)$ equals $\mathbf{0}$ and thus $\mathbf{z}\left( \theta \right)$ only comprises noise and weak interference.
Now, we will discuss how to determine some critical parameters for the beamforming network, such as the range of direction $\theta$ and antenna spacing $d$.
First, in the considered richly scattered scenario, the signals may come from all directions in the entire space. The cone-shaped beam pattern of the ULA (as shown in Fig. 3 of~\cite{W_Guo2017TVT}) guarantees that all the signals captured by the beam towards $\theta$ are mainly affected by the same CFO ${{f}_{d}}\cos \theta +\xi $. Besides, the adoption of a ULA only requires designing the beamforming network along the dimension of AoA to receive all signals dispersed in the whole space. Therefore, the ULA is well suited to the considered scenario.
Moreover, the ULA cannot differentiate two symmetric AoAs $\theta$ and $360^\circ\!-\!\theta$, making it sufficient to perform beamforming within the range $(0^\circ,180^\circ)$.
Second, the antenna spacing $d$ that optimizes beamforming resolution without incurring aliasing is $d=\frac{\lambda}{2}$. However, this cannot avoid the aliasing between $0^\circ$ (corresponding nDFO $f_d$) and $180^\circ$ (corresponding nDFO $-f_d$), which brings inconvenience for CFO compensation. Hence, a tradeoff between beamforming resolution and aliasing avoidance needs to be struck.
Third, since an $M$-element ULA provides at most $M$ degrees of freedom (DoF), it is sufficient to perform beamforming towards $M$ distinct directions, which could either be evenly selected between $(0^\circ,180^\circ)$, or drawn from ${\mathcal{D}}_{{\rm{IFFT}}} \!=\! \left\{ {{\vartheta}_{1}}, {{\vartheta}_{2}}, \cdots\!, {{\vartheta}_{M}} \right\}$ with ${{\vartheta}_{r}} \!=\! {\rm{arccos}}\left( \frac{\rm{\pi}}{\chi} \left(\frac{r-1}{M} \!-\! \frac{1}{2} \right)\right)$~\cite{L_You2015TWC}. Here, $\frac{1}{\sqrt{M}} {{\left[ \begin{matrix}
\mathbf{a}(\vartheta_1), & \mathbf{a}(\vartheta_2), & \cdots \!, & \mathbf{a}(\vartheta_M) \\
\end{matrix} \right]}}$ in fact constitutes the column-permuted normalized inverse DFT matrix and thus the beamforming can be efficiently achieved via the FFT operation. However, when $\tilde{d}<0.5$, we have $\left| \frac{\rm{\pi}}{\chi} \left(\frac{r-1}{M} \!-\! \frac{1}{2} \right) \right| > 1$ for $r \!<\! \left(\frac{1}{2} \!-\! \tilde{d} \right)M\!+\!1$ or $r \!>\! \left(\frac{1}{2} \!+\! \tilde{d} \right)M\!+\!1$, and thus no corresponding real physical angles ${{\vartheta}_{r}}$ can be found. That is to say, the beamforming can only be performed towards $Q\!<\!M$ directions when $\tilde{d}\!<\!0.5$. As a result, by exploiting the FFT operation, we benefit from computational efficiency and perfect orthogonality among different beamformers at the cost of slightly sacrificing some DoF.
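To make the direction set ${\mathcal{D}}_{{\rm{IFFT}}}$ concrete, the snippet below enumerates which FFT bins map to physical angles. The definition $\chi = \pi\tilde{d}$ and the parameter values are our assumptions for illustration, chosen to be consistent with the bound stated above.

```python
import numpy as np

M, d_tilde = 64, 0.4     # antennas and normalized spacing (illustrative)
chi = np.pi * d_tilde    # assumed definition of chi, consistent with the stated bound

r = np.arange(1, M + 1)
x = (np.pi / chi) * ((r - 1) / M - 0.5)
valid = np.abs(x) <= 1                        # only these bins map to real angles
angles_deg = np.degrees(np.arccos(x[valid]))  # the usable directions in D_IFFT
Q = int(valid.sum())                          # Q < M usable beamforming branches
```

With these values $Q$ is roughly $2\tilde{d}M$, i.e., the DoF loss grows as the spacing shrinks below half a wavelength.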
\vspace{-0.6em}
\subsection{Joint Estimation Algorithm with Fully Calibrated ULA}
The beamforming decomposes the received signal into $Q$ parallel branches, each of which is affected by a single CFO. Assuming perfect interference elimination, we can find the estimates of nDFOmax ${\hat{f}}_{d}$, nOFO $\hat{\xi }$ and channel $\mathbbm{\hat{h}}\left( \theta_{q} \right)$ for the $q$th branch by solving the minimization problem below
\begin{align} \label{OptimPb0}
\left\{{\hat{f}}_{d}, \hat{\xi}, \mathbbm{\hat{h}}\left( \theta_{q} \right)\right\} = \arg \underset{\left\{{\tilde{f}}_{d}, \tilde{\xi}, \mathbbm{\tilde{h}}\left( \theta_{q} \right)\right\}}{\mathop{\min }}\,\left\| \mathbf{z}\left( \theta_{q} \right)-\mathbf{E}\left( \tilde{\varphi}_{q} \right)\mathbf{B}\mathbbm{\tilde{h}} \left( \theta_{q} \right) \right\|_{2}^{2},
\end{align}
where $\tilde{\varphi}_{q} = {{\tilde{f}}_{d}}\cos \theta_{q} + \tilde{\xi}$. For the given trial value pair ${{\tilde{f}}_{d}}$ and $\tilde{\xi }$, the ML estimator of $\mathbbm{\hat{h}}(\theta_{q})$ minimizing (\ref{OptimPb0}) is given by $\mathbbm{\hat{h}}\left( \theta_{q} \right)={{\mathbf{B}}^{\dagger }}{{\mathbf{E}}^{H}}\left( \tilde{\varphi}_{q} \right)\mathbf{z}\left( \theta_{q} \right)$. Let ${{\mathbf{P}}_{\mathbf{B}}}=\mathbf{B}{{\mathbf{B}}^{\dagger }}$ represent the orthogonal projection operator onto the subspace spanned by the columns of $\mathbf{B}$ and $\mathbf{P}_{\mathbf{B}}^{\bot }={{\mathbf{I}}_{N}}-{{\mathbf{P}}_{\mathbf{B}}}$. Then by substituting $\mathbbm{\tilde{h}}\left( \theta_{q} \right)$ with its ML estimator, (\ref{OptimPb0}) is reduced to
\begin{align}\label{OptimPb_SingleBeam}
\left\{{\hat{f}}_{d}, \hat{\xi}\right\} = \arg \underset{\left\{{{\tilde{f}}}_{d}, \tilde{\xi }\right\}}{\mathop{\min }}\,{g\left( {{{\tilde{f}}}_{d}}, \tilde{\xi }, {{\theta }_{q}} \right)},
\end{align}
where $g\left( {{{\tilde{f}}}_{d}}, \tilde{\xi }, {{\theta }_{q}} \right) = \left\| \mathbf{P}_{\mathbf{B}}^{\bot }{{\mathbf{E}}^{H}}\left( \tilde{\varphi}_{q} \right)\mathbf{z}\left( \theta_{q} \right) \right\|_{2}^{2}$.
It should be pointed out that an estimate of the superimposed CFO $\hat{\varphi}_q$ can be acquired by solving (\ref{OptimPb_SingleBeam}). However, there are infinitely many combinations of $\hat{f}_d$ and $\hat{\xi}$ satisfying $\hat{\varphi}_q \!=\! \hat{f}_d \cos\theta_q \!+\! \hat{\xi}$. In other words, ambiguity exists between the DFO and OFO estimates if only one beamforming branch is used. Since the nDFOmax and nOFO are the same for all branches~\cite{C_Tepedelenlioglu2001TVT}, we can employ all the beamforming branches simultaneously to eliminate such estimation ambiguity. Namely, the nDFOmax and nOFO can be jointly estimated from
\begin{align}\label{OptimPb1}
\left\{{{\hat{f}}}_{d}, \hat{\xi }\right\}=\arg \underset{\left\{{{\tilde{f}}}_{d}, \tilde{\xi}\right\}}{\mathop{\min }}\, {g\left( {{{\tilde{f}}}_{d}}, \tilde{\xi } \right)},
\end{align}
where $g\left( {{{\tilde{f}}}_{d}}, \tilde{\xi } \right) = \sum\nolimits_{q=1}^{Q}{g\left( {{{\tilde{f}}}_{d}}, \tilde{\xi }, {{\theta }_{q}} \right)}$.
The equivalent channel of the $q$th beamforming branch is given by $\mathbbm{\hat{h}}\left( {{\theta }_{q}} \right)={{\mathbf{B}}^{\dagger }}{{\mathbf{E}}^{H}}\left( {{{\hat{f}}}_{d}}\cos {{\theta }_{q}}+\hat{\xi } \right)\mathbf{z}\left( {{\theta }_{q}} \right)$.
Directly solving (\ref{OptimPb1}) necessitates an exhaustive two-dimensional grid search. Instead, we solve (\ref{OptimPb1}) efficiently and iteratively via Newton's method~\cite{S_Boyd2004}.
Let $\hat{f}_{d}^{\left( i-1 \right)}$ and $\hat{\xi}^{\left( i-1 \right)}$ represent the estimates of ${f}_{d}$ and $\xi$ in the $\left( i-1 \right)$th iteration, respectively. Moreover, define $\hat{\varphi}_{q}^{\left( i-1 \right)} \!=\! \hat{f}_{d}^{\left( i-1 \right)}\cos{{\theta }_{q}} + \hat{\xi}^{\left( i-1 \right)}$, $\tilde{\varphi}_{q}^{\left( i \right)} \!=\! \tilde{f}_{d}^{\left( i \right)}\cos{{\theta }_{q}} + \tilde{\xi}^{\left( i \right)}$ and $\Delta {\tilde{\varphi}^{\left( i\right)}_{q}} \!=\! \Delta {\tilde{f}^{\left( i\right)}_{d}}\cos{{\theta }_{q}} + \Delta \tilde{\xi}^{\left( i\right)}$, where $\Delta {\tilde{f}_{d}^{\left( i \right)}} \!=\! {\tilde{f}^{\left( i\right)}_{d}} \!-\! \hat{f}_{d}^{\left( i-1 \right)}$ and $\Delta \tilde{\xi}^{\left( i \right)} \!=\! \tilde{\xi}^{\left( i\right)} \!-\! \hat{\xi}^{\left( i-1 \right)}$ denote the trial value pair of the residual nDFOmax and residual nOFO in the $i$th iteration, respectively.
Define $\mathbf{\hat{z}}^{\left( i \right)}\left( {{\theta }_{q}} \right)={{\mathbf{E}}^{H}}\left( \hat{\varphi}_{q}^{\left( i-1 \right)} \right)\mathbf{z}\left( {{\theta }_{q}} \right)$ and $\mathbf{D}={\mathrm{j}}\frac{2{\rm \pi}}{N} \operatorname{diag}\left( 0, 1, \cdots\!, N-1 \right)$. Then, with Taylor series expansion $\mathbf{E}\left( {{\tilde{\varphi}}}^{\left( i\right)}_{q} \right) \approx \mathbf{E}\left( \hat{\varphi}_{q}^{\left( i-1 \right)} \right) \left( {{\mathbf{I}}_{N}}+ \Delta {{\tilde{\varphi}}}^{\left( i\right)}_{q} \mathbf{D}+\frac{1}{2}{{\left( \Delta {{\tilde{\varphi}}}^{\left( i\right)}_{q} \right)}^{2}}{{\mathbf{D}}^{2}} \right)$,
$g\left( {{{\tilde{f}}}_{d}^{(i)}}, {\tilde{\xi}}^{(i)}, {{\theta }_{q}} \right)= \left\| \mathbf{P}_{\mathbf{B}}^{\bot } {\mathbf{E}}^{H}\left( {{\tilde{\varphi}}}^{\left( i\right)}_{q} \right) \mathbf{z}\left( {{\theta }_{q}} \right) \right\|_{2}^{2}$ could be approximated as
\begin{align}
g\left( {{{\tilde{f}}}_{d}^{(i)}}, {\tilde{\xi}}^{(i)}, {{\theta }_{q}} \right)
\approx {{T}_{0}^{\left( i \right)}}\left( {{\theta }_{q}} \right)+ \Delta \tilde{\varphi}_{q}^{\left( i \right)} {{T}_{1}^{\left( i \right)}}\left( {{\theta }_{q}} \right) +{{\left( \Delta \tilde{\varphi}_{q}^{\left( i \right)} \right)}^{2}}{{T}_{2}^{\left( i \right)}}\left( {{\theta }_{q}} \right),
\end{align}
where
\begin{align*}
{{T}_{0}^{\left( i \right)}}\left( {{\theta }_{q}} \right)&={{{\mathbf{\hat{z}}}}^{{\left( i \right)}H}}\left( {{\theta }_{q}} \right)\mathbf{P}_{\mathbf{B}}^{\bot }\mathbf{\hat{z}}^{\left( i \right)}\left( {{\theta }_{q}} \right), \
{{T}_{1}^{\left( i \right)}}\left( {{\theta }_{q}} \right) = 2\Re \left\{ {{{\mathbf{\hat{z}}}}^{{\left( i \right)}H}}\left( {{\theta }_{q}} \right)\mathbf{DP}_{\mathbf{B}}^{\bot }\mathbf{\hat{z}}^{\left( i \right)}\left( {{\theta }_{q}} \right) \right\}, \nonumber \\
{{T}_{2}^{\left( i \right)}}\left( {{\theta }_{q}} \right) &= \Re \left\{ {{{\mathbf{\hat{z}}}}^{{\left( i \right)}H}}\left( {{\theta }_{q}} \right) {{\mathbf{D}}^{2}}\mathbf{P}_{\mathbf{B}}^{\bot } \mathbf{\hat{z}}^{\left( i \right)}\left( {{\theta }_{q}} \right) \right\} + {{{\mathbf{\hat{z}}}}^{{\left( i \right)}H}}\left( {{\theta }_{q}} \right) \mathbf{DP}_{\mathbf{B}}^{\bot }{{\mathbf{D}}^{H}} \mathbf{\hat{z}}^{\left( i \right)}\left( {{\theta }_{q}} \right).
\end{align*}
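As a sanity check on these expressions, the per-branch terms can be computed directly and compared with finite differences of the exact cost. The sketch below is minimal and assumes $\mathbf{D}$ and $\mathbf{P}_{\mathbf{B}}^{\bot}$ as defined above; all variable names are ours.

```python
import numpy as np

def branch_terms(z_hat, P_perp, D):
    # T0, T1, T2 of one beamforming branch, following the expressions above
    T0 = np.real(z_hat.conj() @ P_perp @ z_hat)
    T1 = 2.0 * np.real(z_hat.conj() @ D @ P_perp @ z_hat)
    T2 = (np.real(z_hat.conj() @ D @ D @ P_perp @ z_hat)
          + np.real(z_hat.conj() @ D @ P_perp @ D.conj().T @ z_hat))
    return T0, T1, T2
```

Since $g$ is exactly $\left\| \mathbf{P}_{\mathbf{B}}^{\bot }{{\mathbf{E}}^{H}}\left( \Delta\varphi \right)\mathbf{\hat{z}} \right\|_{2}^{2}$ as a function of the residual CFO, its first and second derivatives at zero should match $T_1$ and $2T_2$, which is a quick way to validate the signs above.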
Therefore, we obtain
\begin{align} \label{QuadraticFun1}
g\left( {{{\tilde{f}}}_{d}^{(i)}}, {\tilde{\xi}}^{(i)} \right)
\approx {{t}_{0}^{\left( i \right)}}+{{t}_{11}^{\left( i \right)}} \Delta {{\tilde f}_{d}^{\left( i \right)}} + {{t}_{12}^{\left( i \right)}} \Delta \tilde\xi^{\left( i \right)} + {{t}_{21}^{\left( i \right)}} {{\left( \Delta {{\tilde f}_{d}^{\left( i \right)}} \right)}^{2}} + {{t}_{22}^{\left( i \right)}} \Delta {{\tilde f}_{d}^{\left( i \right)}}\Delta \tilde \xi^{\left( i \right)} + {{t}_{23}^{\left( i \right)}} {{\left( \Delta \tilde \xi^{\left( i \right)} \right)}^{2}},
\end{align}
where
\begin{align*}
& {{t}_{0}^{\left( i \right)}}=\sum\nolimits_{q=1}^{Q}{{{T}_{0}^{\left( i \right)}}\left( {{\theta }_{q}} \right)},\ \ {{t}_{11}^{\left( i \right)}}=\sum\nolimits_{q=1}^{Q}{\cos{{\theta }_{q}}{{T}_{1}^{\left( i \right)}}\left( {{\theta }_{q}} \right)}, \ \ {{t}_{12}^{\left( i \right)}}=\sum\nolimits_{q=1}^{Q}{{{T}_{1}^{\left( i \right)}}\left( {{\theta }_{q}} \right)}, \nonumber \\
& {{t}_{21}^{\left( i \right)}}=\sum\nolimits_{q=1}^{Q}{{\cos}^{2}{{\theta }_{q}}{{T}_{2}^{\left( i \right)}}\left( {{\theta }_{q}} \right)}, \ \ {{t}_{22}^{\left( i \right)}}=\sum\nolimits_{q=1}^{Q}{2\cos{{\theta }_{q}}{{T}_{2}^{\left( i \right)}}\left( {{\theta }_{q}} \right)},\ \ {{t}_{23}^{\left( i \right)}}=\sum\nolimits_{q=1}^{Q}{{{T}_{2}^{\left( i \right)}}\left( {{\theta }_{q}} \right)}.
\end{align*}
By setting the first-order gradients of (\ref{QuadraticFun1}) with respect to $\Delta {\tilde{f}_{d}^{\left( i \right)}}$ and $\Delta \tilde{\xi}^{\left( i \right)}$ equal to zero, the optimal residual nDFOmax and residual nOFO in the $i$th iteration are given by
\begin{align} \label{CFOSolution}
\left[ \begin{matrix}
\Delta \hat{f}_{d}^{\left( i \right)} \\
\Delta \hat{\xi}^{\left( i \right)} \\
\end{matrix} \right]=-{{\left[ \begin{matrix}
2{{t}_{21}^{\left( i \right)}} & {{t}_{22}^{\left( i \right)}} \\
{{t}_{22}^{\left( i \right)}} & 2{{t}_{23}^{\left( i \right)}} \\
\end{matrix} \right]}^{-1}}\left[ \begin{matrix}
{{t}_{11}^{\left( i \right)}} \\
{{t}_{12}^{\left( i \right)}} \\
\end{matrix} \right].
\end{align}
Thus, the CFO estimates in the $i$th iteration are updated accordingly as $\hat{f}_{d}^{\left( i \right)} \!=\! \hat{f}_{d}^{\left( i-1 \right)} \!+\! \Delta \hat{f}_{d}^{\left( i \right)}$ and $\hat{\xi}^{\left( i \right)} \!=\! \hat{\xi}^{\left( i-1 \right)} \!+\! \Delta \hat{\xi}^{\left( i \right)}$.
Note that $\hat{f}_{d}^{\left( 0 \right)} \!=\! 0$ and $\hat{\xi}^{\left( 0 \right)} \!=\! 0$ are used for initialization, and the convergence of Newton's method to a local optimum is well established in~\cite{S_Boyd2004}.
\vspace{-0.6em}
\subsection{Post-Processing After Beamforming}
After the high-resolution beamforming network, the beamforming branch towards $\theta_q$ is mainly affected by a single dominant CFO ${\hat{\varphi}_{q}} ={{\hat{f}}_{d}}\cos {{\theta }_{q}}+\hat{\xi }$. For the signal in the $m$th block of the $q$th branch, CFO compensation can be performed as
\begin{align}
{{\mathbf{\hat{z}}}_{m}}\left( {{\theta }_{q}} \right) ={{e}^{-\text{j}2{\rm \pi} \hat{\varphi}_{q} \frac{\left( m-1 \right)\left( N+{{N}_{\mathrm{cp}}} \right)}{N}}} \operatorname{diag}\left( 1,\ {{e}^{-\text{j}2{\rm \pi} \hat{\varphi}_{q}\frac{1}{N}}},\ \cdots \!,\ {{e}^{-\text{j}2{\rm \pi} \hat{\varphi}_{q}\frac{N-1}{N}}} \right){{\mathbf{z}}_{m}}\left( {{\theta }_{q}} \right).
\end{align}
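This compensation step amounts to a simple per-sample derotation; a minimal sketch (variable names are ours):

```python
import numpy as np

def compensate(z_m, phi_hat, m, N, N_cp):
    # Remove the block-accumulated phase and the per-sample rotation of branch CFO phi_hat
    block_phase = np.exp(-1j * 2.0 * np.pi * phi_hat * (m - 1) * (N + N_cp) / N)
    per_sample = np.exp(-1j * 2.0 * np.pi * phi_hat * np.arange(N) / N)
    return block_phase * per_sample * z_m
```

Rotating a clean block forward by the same CFO model and then compensating recovers it exactly, which is a quick way to validate the sign conventions.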
Then, the equivalent channel in each branch can be regarded as frequency-selective but time-invariant. The OFDM modulation further decomposes this frequency-selective channel into parallel flat-fading subcarrier channels. Finally, the transmitted data symbols are readily recovered through MRC over all the beamforming branches.
\section{Proposed joint estimation algorithm for partly calibrated ULA}
From (\ref{ReceivedSignal}), the received pilot signal in the case of partly calibrated ULA could be written as
\begin{align}{\label{ReceivedSignalUncalibrated}}
\mathbf{Y}=\sum\limits_{l=1}^{L}{\sum\limits_{p=1}^{P}{\mathbf{E}\left( {{\varphi }_{l,p}} \right)\mathbf{Bh}\left( {{\theta }_{l,p}} \right) {\boldsymbol{\alpha}}^{T}\left( \boldsymbol{\varepsilon} \right) {\mathbf{V}}^{T}\left( \theta_{l,p} \right) }}+\mathbf{W}.
\end{align}
Imperfect inter-subarray calibration destroys the quasi-orthogonality between steering vectors pointing to two distinct directions, since ${{\mathbf{a}}^{H}}\left( {{\theta }_{1}} \right)\mathbf{a}\left( {{\theta }_{2}}, \boldsymbol{\varepsilon} \right)={{\mathbf{a}}^{H}}\left( {{\theta }_{1}} \right) {\mathbf{V}}\left( \theta_{2} \right) {\boldsymbol{\alpha}}\left( \boldsymbol{\varepsilon} \right) \approx 0$ may no longer hold for ${{\theta }_{1}}\ne {{\theta }_{2}}$. Therefore, the ULA response vector may fail to eliminate the inter-direction interference and cannot serve as an efficient beamformer. A new algorithm specifically designed for the partly calibrated ULA is thereby needed.
\vspace{-0.6em}
\subsection{MSE Performance Analysis of the Joint Estimation Algorithm for Fully Calibrated ULA in Partly Calibrated Case}
In this subsection, we will examine the MSE performance loss if we directly apply the joint estimation algorithm developed for fully calibrated ULA to partly calibrated case.
For ease of derivation, we assume that the channel at each delay shares the same uniform incident AoA set as in~\cite{YR_Zheng2003TC}, i.e., the AoA associated with the $p$th subpath of the $l$th delay is $\theta_{l,p} = 2\pi \frac{p}{P}$ for $p=1,2,\cdots\!,P$.
By denoting ${\theta_{p}} = {\theta_{l,p}}, l=1,2,\cdots\!,L$ and ${{\varphi }_{p}}={{f}_{d}}\cos {{\theta }_{p}}+\xi $, the received pilot signal (\ref{ReceivedSignalUncalibrated}) can be re-expressed as
\begin{align}
{{\mathbf{Y}}} = \sum\limits_{l=1}^{L}{\sum\limits_{{p}=1}^{P}{\mathbf{E}\left( {{\varphi }_{l,{p}}} \right){{\mathbf{B}}}\mathbf{h}\left( {{\theta }_{l,{p}}} \right) {\boldsymbol{\alpha}}^{T}\left( \boldsymbol{\varepsilon} \right) {\mathbf{V}}^{T}\left( \theta_{l,p} \right) }} + {{\mathbf{W}}} = \underbrace{\sum\limits_{p=1}^{P}{\mathbf{E}\left( {{\varphi }_{p}} \right)\mathbf{B}{{\mathbf{h}}_{p}}{{\boldsymbol{\alpha }}^{T}}{{\mathbf{V}}^{T}}\left( {{\theta }_{p}} \right)}}_{{{\mathbf{Y}}_{0}}}+\mathbf{W},
\end{align}
where ${{\mathbf{h}}_{p}} = \sum\nolimits_{l=1}^{L}{\mathbf{h}\left( {{\theta }_{l,{p}}} \right)}$ is an $L\times 1$ vector whose $l$th element is $g_{l,p}$ and ${\boldsymbol{\alpha }} \left( \boldsymbol{\varepsilon} \right)$ is simplified as ${\boldsymbol{\alpha }}$ for conciseness. Irrespective of the inter-subarray mismatches, we still adopt the joint estimation algorithm in Section III-B to estimate the CFO. The beamforming direction ${\tilde{\theta}}_{q}$ is chosen from the set ${{\mathcal{D}}_{\text{IFFT}}}$. Define ${\tilde{\varphi}}_{q}={{\tilde{f}}_{d}}\cos {\tilde{\theta}}_{q}+\tilde{\xi }$ and $\tilde{\varphi }={{\tilde{f}}_{d}}\cos \tilde{\theta }+\tilde{\xi }$. With the impact of inter-subarray mismatches, the cost function $g\left( {{{\tilde{f}}}_{d}}, \tilde{\xi } \right)$ in (\ref{OptimPb1}) could be equivalently expressed as
\begin{align}
g\left( {{{\tilde{f}}}_{d}}, \tilde{\xi } \right)= & \sum\limits_{q=1}^{Q} {\left\| \mathbf{P}_{\mathbf{B}}^{\bot }{{\mathbf{E}}^{H}}\left( {\tilde{\varphi }}_{q} \right)\left( {{\mathbf{Y}}_{0}}+\mathbf{W} \right){{\mathbf{a}}^{*}}\left( {\tilde{\theta }}_{q} \right) \right\|_{2}^{2}}, \ {\tilde{\theta}}_{q} \in {{{\mathcal{D}}}_{\text{IFFT}}}, \nonumber \\
\propto & \int_{\rm{\pi}}^{0} \left\| \mathbf{P}_{\mathbf{B}}^{\bot }{{\mathbf{E}}^{H}}\left( {\tilde{\varphi }} \right)\left( {{\mathbf{Y}}_{0}}+\mathbf{W} \right){{\mathbf{a}}^{*}}\left( {\tilde{\theta }} \right) \right\|_{2}^{2} d\cos \tilde{\theta } \nonumber \\
\approx & \underbrace{\int_{0}^{\rm{\pi}}{ {{\mathbf{a}}^{T}}\left( {\tilde{\theta }} \right)\mathbf{Y}_{0}^{H}\mathbf{E}\left( {\tilde{\varphi }} \right)\mathbf{P}_{\mathbf{B}}^{\bot }{{\mathbf{E}}^{H}}\left( {\tilde{\varphi }} \right){{\mathbf{Y}}_{0}}{{\mathbf{a}}^{*}}\left( {\tilde{\theta }} \right) \sin \tilde{\theta }d\tilde{\theta }}}_{{{g}_{0}}} \nonumber \\
+ & \underbrace{2\Re \left\{ \int_{0}^{\rm{\pi}}{ {{\mathbf{a}}^{T}}\left( {\tilde{\theta }} \right)\mathbf{Y}_{0}^{H}\mathbf{E}\left( {\tilde{\varphi }} \right)\mathbf{P}_{\mathbf{B}}^{\bot }{{\mathbf{E}}^{H}}\left( {\tilde{\varphi }} \right)\mathbf{W}{{\mathbf{a}}^{*}}\left( {\tilde{\theta }} \right) \sin \tilde{\theta }d\tilde{\theta }} \right\}}_{{{g}_{\mathrm{n}}}},
\end{align}
where ${{g}_{0}}$ and ${{g}_{\mathrm{n}}}$ represent the contribution of inter-direction interference and that of noise, respectively. Note that all the first-order moments of $\mathbf{W}$ are zero, which leads to $E\left\{ \frac{\partial {{g}_{0}}}{\partial {{{\tilde{f}}}_{d}}}\frac{\partial {{g}_{\mathrm{n}}}}{\partial {{{\tilde{f}}}_{d}}} \right\}=E\left\{ \frac{\partial {{g}_{0}}}{\partial \tilde{\xi }}\frac{\partial {{g}_{\mathrm{n}}}}{\partial \tilde{\xi }} \right\}=E\left\{ \frac{{{\partial }^{2}}{{g}_{\mathrm{n}}}}{\partial {{{\tilde{f}}}_{d}}^{2}} \right\}=E\left\{ \frac{{{\partial }^{2}}{{g}_{\mathrm{n}}}}{\partial {{{\tilde{f}}}_{d}}\partial \tilde{\xi }} \right\}=E\left\{ \frac{{{\partial }^{2}}{{g}_{\mathrm{n}}}}{\partial {{{\tilde{\xi }}}^{2}}} \right\}=0$.
As a result, we derive $E\left\{ {{\left( \frac{\partial g}{\partial {{{\tilde{f}}}_{d}}} \right)}^{2}} \right\} =E\left\{ {{\left( \frac{\partial {{g}_{\mathrm{n}}}}{\partial {{{\tilde{f}}}_{d}}} \right)}^{2}} \right\}+E\left\{ {{\left( \frac{\partial {{g}_{0}}}{\partial {{{\tilde{f}}}_{d}}} \right)}^{2}} \right\} \gtrapprox E\left\{ {{\left( \frac{\partial {{g}_{\mathrm{n}}}}{\partial {{{\tilde{f}}}_{d}}} \right)}^{2}} \right\}+\left( E\left\{ {{\frac{\partial {{g}_{0}}}{\partial {{{\tilde{f}}}_{d}}} }} \right\}\right)^{2}$, $E\left\{ {{\left( \frac{\partial g}{\partial \tilde{\xi }} \right)}^{2}} \right\} =E\left\{ {{\left( \frac{\partial {{g}_{\mathrm{n}}}}{\partial \tilde{\xi }} \right)}^{2}} \right\}+E\left\{ {{\left( \frac{\partial {{g}_{0}}}{\partial \tilde{\xi }} \right)}^{2}} \right\} \gtrapprox E\left\{ {{\left( \frac{\partial {{g}_{\mathrm{n}}}}{\partial {{{\tilde{\xi}}}}} \right)}^{2}} \right\}+\left( E\left\{ {{\frac{\partial {{g}_{0}}}{\partial {{{\tilde{\xi}}}}} }} \right\}\right)^{2}$, $E\left\{ \frac{{{\partial }^{2}}g}{\partial {{{\tilde{f}}}_{d}}^{2}} \right\} =E\left\{ \frac{{{\partial }^{2}}{{g}_{0}}}{\partial {{{\tilde{f}}}_{d}}^{2}} \right\}$, $E\left\{ \frac{{{\partial }^{2}}g}{\partial {{{\tilde{f}}}_{d}}\partial \tilde{\xi }} \right\} =E\left\{ \frac{{{\partial }^{2}}{{g}_{0}}}{\partial {{{\tilde{f}}}_{d}}\partial \tilde{\xi }} \right\}$ and $E\left\{ \frac{{{\partial }^{2}}g}{\partial {{{\tilde{\xi }}}^{2}}} \right\} =E\left\{ \frac{{{\partial }^{2}}{{g}_{0}}}{\partial {{{\tilde{\xi }}}^{2}}} \right\}$.
Denote $\boldsymbol{\tilde{\phi}} \!=\! \left[ {\tilde{f}}_{d}, \tilde{\xi} \right]^{T}$ and $\boldsymbol{\phi} \!=\! \left[ {f}_{d}, \xi \right]^{T}$. Define $a_{11}^{0} \!=\! {E{\left. \left\{ \frac{\partial {{g}_{0}}}{\partial \tilde{f}_d} \right\} \right|}_{ \boldsymbol{\tilde{\phi}} = \boldsymbol{\phi} }}$, $a_{12}^{0} \!=\! {E{\left. \left\{ \frac{\partial {{g}_{0}}}{\partial \tilde{\xi }} \right\} \right|}_{ \boldsymbol{\tilde{\phi}} = \boldsymbol{\phi} }}$, $a_{11}^{\mathrm{n}} \!=\! E{{\left. \left\{ {{\left( \frac{\partial {{g}_{\mathrm{n}}}}{\partial {{{\tilde{f}}}_{d}}} \right)}^{2}} \right\} \right|}_{\boldsymbol{\tilde{\phi}} = \boldsymbol{\phi} }}$, $a_{12}^{\mathrm{n}} \!=\! E{{\left. \left\{ {{\left( \frac{\partial {{g}_{\mathrm{n}}}}{\partial {{{\tilde{\xi}}}}} \right)}^{2}} \right\} \right|}_{\boldsymbol{\tilde{\phi}} = \boldsymbol{\phi} }}$, ${{a}_{21}} \!=\! {{\left. E\left\{ \frac{{{\partial }^{2}}{{g}_{0}}}{\partial \tilde{f}_{d}^{2}} \right\} \right|}_{\boldsymbol{\tilde{\phi}} = \boldsymbol{\phi} }}$, ${{a}_{22}} \!=\! {{\left. E\left\{ \frac{{{\partial }^{2}}{{g}_{0}}}{\partial {{{\tilde{f}}}_{d}}\partial \tilde{\xi }} \right\} \right|}_{\boldsymbol{\tilde{\phi}} = \boldsymbol{\phi} }}$, ${{a}_{23}} \!=\! {{\left. E\left\{ \frac{{{\partial }^{2}}{{g}_{0}}}{\partial {{{\tilde{\xi }}}^{2}}} \right\} \right|}_{\boldsymbol{\tilde{\phi}} = \boldsymbol{\phi} }}$. The detailed derivation for the expression of $a_{11}^{0}, a_{12}^{0}, a_{11}^{\mathrm{n}}, a_{12}^{\mathrm{n}}, a_{21}, a_{22}, a_{23}$ can be found in Appendix~\ref{MSEDerivation}.
With some tedious derivation in Appendix~\ref{CrossTerm}, we can further prove ${{a}_{23}}\approx 2{{a}_{21}}\gg {{a}_{22}}$, which suggests that ${{a}_{22}}$ is negligible and that DFO and OFO estimations are quasi-independent of each other. As a result, the MSE of CFO estimation could be finally expressed as~\cite{W_Zhang2013TSP, G_Wang2011TWC}
\begin{align}
\text{MSE}\left\{ {{{\tilde{f}}}_{d}} \right\} & ={{\left. \frac{E\left\{ {{\left( \frac{\partial {{g}_{0}}}{\partial {{{\tilde{f}}}_{d}}} \right)}^{2}}+{{\left( \frac{\partial {{g}_{\mathrm{n}}}}{\partial {{{\tilde{f}}}_{d}}} \right)}^{2}} \right\}}{{{\left( E\left\{ \frac{{{\partial }^{2}}{{g}_{0}}}{\partial \tilde{f}_{d}^{2}} \right\} \right)}^{2}}} \right|}_{\boldsymbol{\tilde{\phi}} = \boldsymbol{\phi} }} \gtrapprox \frac{\left( a_{11}^{0} \right)^{2} + a_{11}^{\mathrm{n}}}{\left( a_{21} \right)^{2}}, \\
\text{MSE}\left\{ {\tilde{\xi }} \right\} & = {{\left. \frac{E\left\{ {{\left( \frac{\partial {{g}_{0}}}{\partial \tilde{\xi }} \right)}^{2}}+{{\left( \frac{\partial {{g}_{\mathrm{n}}}}{\partial \tilde{\xi }} \right)}^{2}} \right\}}{{{\left( E\left\{ \frac{{{\partial }^{2}}{{g}_{0}}}{\partial {{{\tilde{\xi }}}^{2}}} \right\} \right)}^{2}}} \right|}_{\boldsymbol{\tilde{\phi}} = \boldsymbol{\phi} }} \gtrapprox \frac{\left( a_{12}^{0} \right)^{2} + a_{12}^{\mathrm{n}}}{\left( a_{23} \right)^{2}},
\end{align}
where
\begin{align}
a_{11}^{0} & \approx N \int_{0}^{ \rm{\pi} }\int_{0}^{ \rm{\pi} }\left( \frac{1}{3}-\frac{{{\left( { \rm{\pi} } {{f}_{d}}\tilde{x} \right)}^{2}}}{30} \right) \sin \left( { \rm{\pi} }{{f}_{d}}\tilde{x} \right){{\boldsymbol{\alpha }}^{T}}{{\mathbf{A}}_{b}}{{\boldsymbol{\alpha }}^{\text{*}}}\sin 2\tilde{\theta }d\tilde{\theta }d{{\theta }_{p}}, \label{a110} \\
a_{12}^{0} & \approx 2N \int_{0}^{ \rm{\pi} }\int_{0}^{ \rm{\pi} }\left( \frac{1}{3}-\frac{{{\left( { \rm{\pi} } {{f}_{d}}\tilde{x} \right)}^{2}}}{30} \right) \sin \left( { \rm{\pi} }{{f}_{d}}\tilde{x} \right){{\boldsymbol{\alpha }}^{T}}{{\mathbf{A}}_{b}}{{\boldsymbol{\alpha }}^{\text{*}}}\sin \tilde{\theta }d\tilde{\theta }d{{\theta }_{p}}, \label{a120} \\
a_{11}^{\mathrm{n}} & \approx \frac{2{\rm{\pi}}N \sigma_{\mathrm{n}}^{2}} {3\tilde{d}} \int_{0}^{ \rm{\pi} } {\int_{0}^{ \rm{\pi} } {{{\cos }^{2}}\tilde{\theta } {{\boldsymbol{\alpha }}^{T}} {{\mathbf{A}}_{b}}{{\boldsymbol{\alpha }}^{*}}\sin \tilde{\theta } d\tilde{\theta }}d{{\theta }_{p}}}, \label{a11n} \\
a_{12}^{\mathrm{n}} & \approx \frac{2{\rm{\pi}}N \sigma_{\mathrm{n}}^{2}}{3\tilde{d}} \int_{0}^{ \rm{\pi} }{\int_{0}^{ \rm{\pi} } {{{\boldsymbol{\alpha }}^{T}}{{\mathbf{A}}_{b}} {{\boldsymbol{\alpha }}^{*}}\sin \tilde{\theta } d\tilde{\theta }}d{{\theta }_{p}}}, \label{a12n} \\
{{a}_{21}} & \approx \frac{2{\rm{\pi}}N}{3} \int_{0}^{ \rm{\pi} } {\int_{0}^{ \rm{\pi} }{{{\cos }^{2}} \tilde{\theta }{{\boldsymbol{\alpha }}^{T}} {{\mathbf{A}}_{b}} {{\boldsymbol{\alpha }}^{\text{*}}}\sin \tilde{\theta }d\tilde{\theta }} d{{\theta }_{p}}}, \label{a21} \\
{{a}_{22}}& \approx \frac{2{\rm{\pi}}N}{3} \int_{0}^{ \rm{\pi} } \int_{0}^{ \rm{\pi} }{2\cos \tilde{\theta } {{\boldsymbol{\alpha }}^{T}} {{\mathbf{A}}_{b}} {{\boldsymbol{\alpha }}^{\text{*}}}\sin \tilde{\theta }d\tilde{\theta }} d{{\theta }_{p}}, \label{a22} \\
{{a}_{23}} & \approx \frac{2{\rm{\pi}}N}{3} \int_{0}^{ \rm{\pi} } \int_{0}^{ \rm{\pi} }{{{\boldsymbol{\alpha }}^{T}} {{\mathbf{A}}_{b}} {{\boldsymbol{\alpha }}^{\text{*}}}\sin \tilde{\theta }d\tilde{\theta }} d{{\theta }_{p}}.
\end{align}
Here, the $(p,q)$th element of ${\mathbf{A}}_{b} \in \mathbb{C}^{K\times K}$ is $[{\mathbf{A}}_{b}]_{p,q} \!=\! \frac{{{\sin }^{2}}\left( \chi J\tilde{x} \right)}{{{\sin }^{2}}\left( \chi\tilde{x} \right)} {{e}^{\text{j}2\chi J\tilde{x} \left( q-p \right)}}$, with $\tilde{x} \!=\! \cos \tilde{\theta } \!-\! \cos {{\theta }_{p}}$.
Let us further define
\begin{align}\label{MSE0n}
\text{MS}{{\text{E}}_{0}}\left\{ {{{\tilde{f}}}_{d}} \right\}=\frac{\left( a_{11}^{0} \right)^2}{\left( a_{21} \right)^{2}}, \text{MS}{{\text{E}}_{\mathrm{n}}}\left\{ {{{\tilde{f}}}_{d}} \right\}=\frac{a_{11}^{\mathrm{n}}}{\left( a_{21} \right)^{2}}, \text{MS}{{\text{E}}_{0}}\left\{ {{{\tilde{\xi}}}} \right\}=\frac{\left( a_{12}^{0} \right)^2}{\left( a_{23} \right)^{2}}, \text{MS}{{\text{E}}_{\mathrm{n}}}\left\{ {{{\tilde{\xi}}}} \right\}=\frac{a_{12}^{\mathrm{n}}}{\left( a_{23} \right)^{2}}.
\end{align}
Here, ${{\text{MSE}}_{\mathrm{n}}}\left\{ \cdot \right\}$ can be regarded as the contribution of noise to the MSE and decreases as the signal-to-noise ratio (SNR) increases. In contrast, ${{\text{MSE}}_{0}}\left\{ \cdot \right\}$ reflects the influence of inter-direction interference on MSE.
It is independent of the SNR and appears as the MSE floor at high SNRs.
Thus, the latter dominates the MSE performance in the high-SNR region, while the former is dominant at low and moderate SNRs. Besides, it must be pointed out that ${{\text{MSE}}_{0}}\left\{ \cdot \right\}$ in (\ref{MSE0n}) is actually a lower bound of the real MSE floor, since, for ease of derivation, we have approximated ${E\left\{ {{\left( \frac{\partial {{g}_{0}}}{\partial {{{\tilde{f}}}_{d}}} \right)}^{2}} \right\}}$ and ${E\left\{ {{\left( \frac{\partial {{g}_{0}}}{\partial {{{\tilde{\xi}}}}} \right)}^{2}} \right\}}$ by $\left( {E\left\{ {\frac{\partial {{g}_{0}}}{\partial {{{\tilde{f}}}_{d}}} } \right\}} \right)^{2}$ and $\left( {E\left\{ {\frac{\partial {{g}_{0}}}{\partial {{{\tilde{\xi}}}}} } \right\}} \right)^{2}$, respectively.
Simulations will show that ${{\text{MSE}}_{0}}\left\{ \cdot \right\}$ is evident for a large nDFOmax or a large number of subarrays, which causes significant uncompensated residual CFOs and thereby considerably degrades the subsequent data detection performance. Thus, it is necessary to adapt the current algorithm to the partly calibrated ULA. This procedure will be developed in detail in the next subsection.
Simplifying ${{\text{MSE}}_{\mathrm{n}}} \left\{ \cdot \right\}$ and ${{\text{MSE}}_{0}} \left\{ \cdot \right\}$ in (\ref{MSE0n}) into a more concise form is quite an arduous task. Nevertheless, for fully calibrated ULA, we have the following proposition.
\begin{proposition}
In the case of fully calibrated ULA, as the number of antennas $M$ increases, the asymptotic estimation MSEs of both DFO and OFO decrease linearly with the reduction in noise power $\sigma_{\mathrm{n}}^2$, and the asymptotic MSE of DFO is approximately twice that of OFO, i.e.,
\begin{align}\label{MSE_ULA}
{{\text{MSE}}_{\mathrm{n}}}\left\{ {{{\tilde{f}}}_{d}} \right\} \approx 2{{\text{MSE}}_{\mathrm{n}}}\left\{ {\tilde{\xi }} \right\} \approx \frac{3\sigma _{\mathrm{n}}^{2}}{{{ \rm{\pi} }^{2}}MN}, \ {{\text{MSE}}_{0}}\left\{ {{{\tilde{f}}}_{d}} \right\} \approx 0, \ {{\text{MSE}}_{0}}\left\{ {\tilde{\xi }} \right\} \approx 0.
\end{align}
The detailed derivation can be found in Appendix~\ref{MSE_ULA_proof}. As expected, no remarkable MSE floor is observed. Besides, improving the SNR, enlarging the antenna array, or increasing the number of subcarriers all help enhance both DFO and OFO estimation performance.
\end{proposition}
\vspace{-0.6em}
\subsection{Joint Estimation Algorithm for Partly Calibrated ULA}
In this subsection, the COBP will be introduced in the design of beamforming network to combat the detrimental effects of imperfect calibration, so that the algorithm could be extended to partly calibrated case.
The diagram of this adapted procedure is illustrated in Fig. 3.
In contrast to Fig. 2, the main difference lies in that the estimation of CFO, COBP and channel is performed prior to the high-resolution beamforming network.
\begin{figure}[t]
\setlength{\abovecaptionskip}{-0.5cm}
\setlength{\belowcaptionskip}{-1cm}
\begin{center}
\includegraphics[width=100mm]{fig3.eps}
\end{center}
\caption{ Diagram of the receiver design in the case of partly calibrated ULA.}
\end{figure}
When the ULA is partly calibrated, we adopt the modified beamformer
\begin{align}
\mathbf{b}\left( \theta , \boldsymbol{\varepsilon } \right)= \mathbf{V}\left( \theta \right)\boldsymbol{\beta }
\end{align}
to perform beamforming. Here, the COBP $\boldsymbol{\beta }$ is introduced to repair the loss of orthogonality caused by inter-subarray mismatches. To some extent, $\boldsymbol{\beta }$ can be regarded as the counterpart of inter-subarray gain and phase mismatches $\boldsymbol{\alpha}(\boldsymbol{\varepsilon}) $.
Let the received signal pass through the modified beamformer $\mathbf{b}\left( \theta , \boldsymbol{\varepsilon } \right)$. Then, the resulting signal $\mathbf{z}\left( \theta \right) =\mathbf{Y}{{\mathbf{b}}^{\text{*}}}\left( \theta , \boldsymbol{\varepsilon } \right)$ is given by
\begin{align}
\mathbf{z}\left( \theta \right) & =\underbrace{ \kappa\left( \theta \right) \sum\limits_{l,p, {{\theta }_{l,p}}=\theta }{\mathbf{E}\left( {{\varphi }_{l,p}} \right)\mathbf{Bh}\left( {{\theta }_{l,p}} \right)}}_{\mathrm{desired \ signal}} +\underbrace{\sum\limits_{l', p', {{\theta }_{l',p'}}\ne \theta }{\mathbf{E}\left( {{\varphi }_{l',p'}} \right)\mathbf{Bh}\left( {{\theta }_{l',p'}} \right){{\mathbf{a}}^{T}}\left( {{\theta }_{l', p'}}, \boldsymbol{\varepsilon } \right){{\mathbf{b}}^{\text{*}}}\left( \theta , {\boldsymbol{\varepsilon}} \right)}}_{\mathrm{interference}} \nonumber \\
& + \underbrace{\mathbf{W}{{\mathbf{b}}^{\text{*}}}\left( \theta , \boldsymbol{\varepsilon } \right)}_{\mathrm{noise}},
\end{align}
where $\kappa\left( \theta \right) = {{\mathbf{a}}^{T}}\left( {{\theta }}, \boldsymbol{\varepsilon } \right) {{\mathbf{b}}^{\text{*}}}\left( \theta , {\boldsymbol{\varepsilon}} \right) = {{\boldsymbol{\alpha}}^{T}}\left( \boldsymbol{\varepsilon } \right) {{\mathbf{V}}^{T}}\left( {{\theta }} \right) \mathbf{V}\left( \theta \right)\boldsymbol{\beta }$. The first term is the desired signal from direction $\theta$, while the second and third terms represent the inter-direction interference and the noise after beamforming, respectively. Since the rectified beamformer is expected to combat array mismatches, there should be ${{\mathbf{a}}^{T}}\left( {{\theta }_{l', p'}}, \boldsymbol{\varepsilon } \right){{\mathbf{b}}^{\text{*}}}\left( \theta , {\boldsymbol{\varepsilon}} \right) = {{\boldsymbol{\alpha}}^{T}}\left( \boldsymbol{\varepsilon } \right) {{\mathbf{V}}^{T}}\left( {{\theta }_{l', p'}} \right) \mathbf{V}\left( \theta \right) \boldsymbol{\beta } \approx 0$ for ${{\theta }_{l', p'}} \ne \theta$. As a result, the interference from other directions will be prominently mitigated. By ignoring the interference, we arrive at
\begin{align}
\mathbf{z}\left( \theta \right)=\mathbf{Y}{{\mathbf{V}}^{\text{*}}}\left( \theta \right){{\boldsymbol{\beta }}^{*}}=\mathbf{E}\left( {{f}_{d}}\cos \theta +\xi \right)\mathbf{B}\mathbbm{h}\left( \theta \right)+\mathbf{\tilde{w}}\left( \theta \right),
\end{align}
where $\mathbbm{h}\left( \theta \right)=\kappa\left( \theta \right) \sum\nolimits_{{{\theta }_{l,p}}=\theta }{\mathbf{h}\left( {{\theta }_{l,p}} \right)}$
and $\mathbf{\tilde{w}}\left( \theta \right) =\mathbf{W}{{\mathbf{V}}^{\text{*}}} \left( \theta \right) {{\boldsymbol{\beta }}^{*}}$.
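For concreteness, the beamforming step $\mathbf{z}\left( \theta \right)=\mathbf{Y}{{\mathbf{V}}^{\text{*}}}\left( \theta \right){{\boldsymbol{\beta }}^{*}}$ can be sketched numerically as follows. The block-diagonal form of $\mathbf{V}\left( \theta \right)$ (one steering block per calibrated subarray) and all sizes are assumptions made for illustration only, not the paper's exact definitions.

```python
import numpy as np

def subarray_steering(theta, J, d=0.45, offset=0):
    # steering phases of one J-antenna subarray at global positions offset..offset+J-1
    m = np.arange(offset, offset + J)
    return np.exp(2j * np.pi * d * m * np.cos(theta))

def V_matrix(theta, K, J, d=0.45):
    # M x K block-diagonal matrix: the k-th column holds the k-th subarray's
    # steering vector in its own row block, zeros elsewhere
    M = K * J
    V = np.zeros((M, K), dtype=complex)
    for k in range(K):
        V[k * J:(k + 1) * J, k] = subarray_steering(theta, J, d, offset=k * J)
    return V

rng = np.random.default_rng(0)
K, J, N = 4, 4, 16
M = K * J
theta = np.pi / 3
beta = rng.standard_normal(K) + 1j * rng.standard_normal(K)
beta /= np.linalg.norm(beta)            # ||beta||_2 = 1 constraint
Y = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))
b = V_matrix(theta, K, J) @ beta        # modified beamformer b(theta) = V(theta) beta
z = Y @ np.conj(b)                      # N x 1 beamformed branch signal
print(z.shape)
```

Each beamforming branch thus collapses the $M$ antenna streams into a single length-$N$ time series per direction.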
Similar to Section III-B, the nDFOmax, nOFO and COBP estimates can be found by solving the following minimization problem
\begin{align} \label{OptimBeta}
\left\{{\hat{f}}_{d}, \hat{\xi }, \boldsymbol{\hat{\beta }}\right\} = \arg \underset{\left\{{{\tilde{f}}}_{d}, \tilde{\xi }, \boldsymbol{\tilde{\beta }}\right\}}{\mathop{\min }}\,{{{\boldsymbol{\tilde{\beta }}}}^{H}}{\mathbf{C}\left( {{{\tilde{f}}}_{d}}, \tilde{\xi } \right)}\boldsymbol{\tilde{\beta }},\ s.t.\ \left\| {\boldsymbol{\tilde{\beta }}} \right\|_{2}^{2}=1,
\end{align}
where
\begin{align}
& \mathbf{C}\left( {{{\tilde{f}}}_{d}}, \tilde{\xi } \right) = \sum\limits_{q=1}^{Q}{{\mathbf{V}}^{H}}\left( {{\theta }_{q}} \right){{\mathbf{Y}}^{T}}{{\mathbf{E}}^{H}}\left( \tilde{\varphi}_{q} \right) {{\mathbf{P}_{\mathbf{B}}^{\bot } }^{T}}\mathbf{E}\left( \tilde{\varphi}_{q} \right){{\mathbf{Y}}^{*}}\mathbf{V}\left( {{\theta }_{q}} \right).
\end{align}
The constraint $\big\| {\boldsymbol{\tilde{\beta }}} \big\|_{2}^{2}=1$ is added because otherwise (\ref{OptimBeta}) achieves its minimum at $\boldsymbol{\hat{\beta }}=\mathbf{0}$, which is undesired for the subsequent processing. Moreover, the equivalent channel for the $q$th beamforming branch could be estimated by
\begin{align}\label{MLChannel}
\mathbbm{\hat{h}}\left( \theta_{q} \right)={{\mathbf{B}}^{\dagger }}{{\mathbf{E}}^{H}}\left( {\hat{f}_d} \cos \theta_{q} + \hat{\xi} \right)\mathbf{Y}{{\mathbf{V}}^{\text{*}}}\left( \theta_{q} \right){{\boldsymbol{\hat{\beta} }}^{\text{*}}}.
\end{align}
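The LS structure of (\ref{MLChannel}) can be sketched as follows; in the noiseless case the pseudo-inverse recovers the equivalent channel exactly. All dimensions and values here are toy stand-ins.

```python
import numpy as np

rng = np.random.default_rng(5)
N, L = 16, 4
B = rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L))
phi = 0.12                                # hypothetical combined CFO phase
# CFO phase-rotation matrix E(phi)
E = np.diag(np.exp(2j * np.pi * phi * np.arange(N) / N))
h_true = rng.standard_normal(L) + 1j * rng.standard_normal(L)
z = E @ B @ h_true                        # noiseless beamformed branch signal
# LS estimate: h_hat = pinv(B) @ E^H @ z
h_hat = np.linalg.pinv(B) @ E.conj().T @ z
print(np.allclose(h_hat, h_true))
```

With noise present, the same expression gives the least-squares estimate of the equivalent channel for the branch.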
For a given trial value pair ${{{\tilde{f}}}_{d}}$ and $\tilde{\xi}$, (\ref{OptimBeta}) is equivalent to minimizing $H\left( \boldsymbol{\tilde{\beta} } \right)= {{{\boldsymbol{\tilde{\beta }}}}^{H}}{\mathbf{C}\left( {{{\tilde{f}}}_{d}}, \tilde{\xi } \right)}\boldsymbol{\tilde{\beta }} +\mu \left( 1-{{\boldsymbol{\tilde{\beta} }}^{H}}\boldsymbol{\tilde{\beta} } \right)$, where $\mu$ is the Lagrange multiplier. By means of the first-order condition, the optimal solution of $\boldsymbol{\tilde{\beta} }$ is given by $\boldsymbol{\hat{\beta}}={{\mathbf{v}}_{\min }}\left( {\mathbf{C}\left( {{{\tilde{f}}}_{d}}, \tilde{\xi } \right)}\right)$ and the corresponding minimum attained at $\boldsymbol{\hat{\beta}}$ is ${{\left. H\left( {\boldsymbol{\tilde{\beta }}} \right) \right|}_{\boldsymbol{\tilde{\beta }}=\boldsymbol{\hat{\beta }}}}={{\boldsymbol{\hat{\beta }}}^{H}}\mathbf{C}\left( {{{\tilde{f}}}_{d}},\tilde{\xi } \right)\boldsymbol{\hat{\beta }}={{\lambda }_{\min }}\left( \mathbf{C}\left( {{{\tilde{f}}}_{d}},\tilde{\xi } \right) \right)$.
Therefore, (\ref{OptimBeta}) could be further decomposed into
\begin{eqnarray}
\left\{
\begin{array}{lll}
\!\! \left\{{{\hat{f}}}_{d}, \hat{\xi }\right\}& \!\!\!\!=\arg \underset{\left\{{{\tilde{f}}}_{d}, \tilde{\xi}\right\}}{\mathop{\min }}\,{{\lambda }_{\min }}\left( {\mathbf{C}\left( {{{\tilde{f}}}_{d}}, \tilde{\xi } \right)} \right), \\
\!\! \ \boldsymbol{\hat{\beta }}&\!\!\!\!={{\mathbf{v}}_{\min }}\left( {\mathbf{C}\left( {{{\hat{f}}}_{d}}, \hat{\xi } \right)} \right).
\end{array}
\right.
\end{eqnarray}
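The eigen-decomposition step above can be sketched as follows; a random Hermitian matrix stands in for the data-dependent $\mathbf{C}\left( {{{\tilde{f}}}_{d}}, \tilde{\xi } \right)$.

```python
import numpy as np

def min_eig_solution(C):
    # eigh returns eigenvalues in ascending order for a Hermitian matrix,
    # so the minimizer under ||beta||_2 = 1 is the first eigenvector
    w, V = np.linalg.eigh(C)
    return w[0], V[:, 0]

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
C = A.conj().T @ A                      # Hermitian PSD stand-in for C(f_d, xi)
lam, beta_hat = min_eig_solution(C)
cost = np.real(beta_hat.conj() @ C @ beta_hat)
print(np.allclose(cost, lam))           # attained cost equals lambda_min(C)
```

This illustrates why the inner minimization over $\boldsymbol{\tilde{\beta }}$ reduces the objective to ${{\lambda }_{\min }}\left( \mathbf{C} \right)$.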
Note that although the algorithm in Section III is designed for fully calibrated ULA, it could also provide valid coarse CFO estimates $\left[ {{f}_{dc}}, \ {{\xi }_{c}} \right]$ in the presence of inter-subarray mismatches. Based on this coarse estimation result, one-tap adjustment via Taylor series expansion is sufficient to obtain fine DFO and OFO estimates. Specifically, denote $\mathbf{\tilde{c}}\left( {{\theta }_{q}} \right)=\mathbf{E}\left( {\varphi}_{qc} \right){{\mathbf{Y}}^{*}}\mathbf{V}\left( {{\theta }_{q}} \right)$, where ${\varphi}_{qc} = f_{dc} \cos{{\theta }_{q}} + \xi_c$, and define $\Delta {\tilde{\varphi}_{q}} = \Delta \tilde f_{d} \cos{{\theta }_{q}} + \Delta \tilde\xi$, where $\Delta \tilde f_{d} = \tilde{f_d} - f_{dc}$ and $\Delta \tilde\xi = \tilde{\xi} - \xi_c$ represent the trial residual nDFOmax and trial residual nOFO, respectively. By substituting $\mathbf{E }\left( {{\tilde{\varphi}}}_{q} \right)$ with its Taylor series expansion at $\left[ {{f}_{dc}}, \ {{\xi }_{c}} \right]$, $\mathbf{C}\left( {{{\tilde{f}}}_{d}}, \tilde{\xi } \right)$ could be approximated as
\begin{align}
\mathbf{C} \left( {{{\tilde{f}}}_{d}}, \tilde{\xi } \right) \approx \underbrace{ \sum\limits_{q=1}^{Q}{ {{\mathbf{T}}_{0}}\left( {{\theta }_{q}} \right) }}_{\mathbf{\Upsilon}} + \underbrace{\sum\limits_{q=1}^{Q}{ \Delta {\tilde{\varphi}_{q}} {{\mathbf{T}}_{1}}\left( {{\theta }_{q}} \right)+{{\left( \Delta {\tilde{\varphi}_{q}} \right)}^{2}}{{\mathbf{T}}_{2}}\left( {{\theta }_{q}} \right) }}_{\mathbf{\Xi}},
\end{align}
where
\begin{align*}
{{\mathbf{T}}_{0}}\left( {{\theta }_{q}} \right) &={{{\mathbf{\tilde{c}}}}^{H}}\left( {{\theta }_{q}} \right) {\mathbf{P}_{\mathbf{B}}^{\bot }}^{T} \mathbf{\tilde{c}}\left( {{\theta }_{q}} \right), \
{{\mathbf{T}}_{1}}\left( {{\theta }_{q}} \right) ={{{\mathbf{\tilde{c}}}}^{H}}\left( {{\theta }_{q}} \right) \left( {{\mathbf{D}}^{H}}{{\mathbf{P}_{\mathbf{B}}^{\bot } }^{T}} + {{ \mathbf{P}_{\mathbf{B}}^{\bot } }^{T}}\mathbf{D} \right) \mathbf{\tilde{c}}\left( {{\theta }_{q}} \right), \\
{{\mathbf{T}}_{2}}\left( {{\theta }_{q}} \right)&= {{{\mathbf{\tilde{c}}}}^{H}}\left( {{\theta }_{q}} \right) \left( {{\mathbf{D}}^{H}}{{ \mathbf{P}_{\mathbf{B}}^{\bot } }^{T}}\mathbf{D} + \frac{ {{\mathbf{D}}^{2H}}{{ \mathbf{P}_{\mathbf{B}}^{\bot }}^{T}} + {{ \mathbf{P}_{\mathbf{B}}^{\bot } }^{T}}{{\mathbf{D}}^{2}}}{2} \right) \mathbf{\tilde{c}}\left( {{\theta }_{q}} \right).
\end{align*}
Let $\mathcal{A}$ and $\mathcal{B}$ be two arbitrary full-rank matrices,
and denote $\epsilon$ as a sufficiently small perturbation term. Then, according to the perturbation theory~\cite{bJH_Wilkinson1965, W_Zhang2016TSP}, there holds ${{\lambda }_{\min }}\left( \mathcal{A} \!+\! \epsilon\mathcal{B} \right) \!\approx\! {{\lambda }_{\min }}\left( \mathcal{A} \right) \!+\! \epsilon {{\mathbf{v}}_{\mathrm{min}}^{H}}\left( \mathcal{A} \right) \mathcal{B} {{\mathbf{v}}_{\mathrm{min}}}\left( \mathcal{A} \right)$. Therefore, denoting $\mathbf{v} \!=\! {\mathbf{v }_{\min }}\left( {\mathbf{\Upsilon}} \right)$, we have
\begin{align}
{{\lambda }_{\min }}\left( \mathbf{C}\left( {{{\tilde{f}}}_{d}}, \tilde{\xi } \right) \right) & \approx {{\lambda }_{\min }}\left( {\mathbf{\Upsilon}} + {\mathbf{\Xi}} \right) \approx {{\lambda }_{\min }}\left( {\mathbf{\Upsilon}} \right) +{{\mathbf{v}}^{H}} {\mathbf{\Xi}} \mathbf{v} \nonumber \\
& = {{t}_{0}}+{{t}_{11}}\Delta {\tilde{f}_{d}}+{{t}_{12}}\Delta \tilde\xi+{{t}_{21}}{{\left( \Delta {\tilde{f}_{d}} \right)}^{2}} +{{t}_{22}}\Delta {\tilde{f}_{d}}\Delta \tilde\xi+{{t}_{23}}{{\left( \Delta \tilde\xi \right)}^{2}},
\end{align}
where
\begin{align*}
{{t}_{0}} & ={{\lambda }_{\min }} \left( {\mathbf{\Upsilon}} \right), \ \ {{t}_{11}} = {{\mathbf{v}}^{H}} \sum\nolimits_{q=1}^{Q}{\cos{{\theta }_{q}}{{\mathbf{T}}_{1}}\left( {{\theta }_{q}} \right)}\mathbf{v}, \ \ {{t}_{12}} ={{\mathbf{v}}^{H}} \sum\nolimits_{q=1}^{Q}{{{\mathbf{T}}_{1}}\left( {{\theta }_{q}} \right)} \mathbf{v}, \nonumber \\
{{t}_{21}} & ={{\mathbf{v}}^{H}} \sum\nolimits_{q=1}^{Q}{{{\cos}^{2}}{{\theta }_{q}}{{\mathbf{T}}_{2}}\left( {{\theta }_{q}} \right)} \mathbf{v}, \ \
{{t}_{22}} ={{\mathbf{v}}^{H}} \sum\nolimits_{q=1}^{Q}{2\cos{{\theta }_{q}}{{\mathbf{T}}_{2}}\left( {{\theta }_{q}} \right)} \mathbf{v}, \ \ {{t}_{23}} ={{\mathbf{v}}^{H}} \sum\nolimits_{q=1}^{Q}{{{\mathbf{T}}_{2}}\left( {{\theta }_{q}} \right)}\mathbf{v}.
\end{align*}
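The first-order perturbation approximation invoked above, ${{\lambda }_{\min }}\left( \mathcal{A} + \epsilon\mathcal{B} \right) \approx {{\lambda }_{\min }}\left( \mathcal{A} \right) + \epsilon\, \mathbf{v}_{\min }^{H}\left( \mathcal{A} \right) \mathcal{B}\, {{\mathbf{v}}_{\min }}\left( \mathcal{A} \right)$, can be verified numerically on toy Hermitian matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
B = (X + X.conj().T) / 2                # Hermitian perturbation
A = np.diag(np.arange(6.0))             # Hermitian with a simple smallest eigenvalue
eps = 1e-4
wA, VA = np.linalg.eigh(A)
v = VA[:, 0]                            # v_min(A)
approx = wA[0] + eps * np.real(v.conj() @ B @ v)
exact = np.linalg.eigh(A + eps * B)[0][0]
print(abs(exact - approx))              # residual is O(eps^2)
```

The residual shrinks quadratically in $\epsilon$, which justifies treating $\mathbf{\Xi}$ as a small perturbation of $\mathbf{\Upsilon}$.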
Similar to (\ref{QuadraticFun1}), the optimal residual nDFOmax $\Delta \hat{f}_{d}$ and residual nOFO $\Delta \hat{\xi} $ are given by
\begin{align}\label{CFOSolution2}
\left[ \begin{matrix}
\Delta \hat{f}_{d} \\
\Delta \hat{\xi} \\
\end{matrix} \right]=-{{\left[ \begin{matrix}
2{{t}_{21}} & {{t}_{22}} \\
{{t}_{22}} & 2{{t}_{23}} \\
\end{matrix} \right]}^{-1}}\left[ \begin{matrix}
{{t}_{11}} \\
{{t}_{12}} \\
\end{matrix} \right],
\end{align}
and the final CFO estimates could be calculated as ${{\hat{f}}_{d}}={{f}_{dc}}+\Delta {\hat{f}_{d}}$ and $\hat{\xi }={{\xi }_{c}}+\Delta \hat{\xi} $.
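The closed-form adjustment in (\ref{CFOSolution2}) is just a $2\times 2$ linear solve. A minimal sketch, with hypothetical coefficients $t_{11},\ldots,t_{23}$ standing in for those computed from ${{\mathbf{T}}_{1}}$ and ${{\mathbf{T}}_{2}}$:

```python
import numpy as np

def residual_cfo(t11, t12, t21, t22, t23):
    # minimizer of the quadratic surrogate in (delta_fd, delta_xi)
    H = np.array([[2 * t21, t22], [t22, 2 * t23]])
    g = np.array([t11, t12])
    return -np.linalg.solve(H, g)       # [delta_fd, delta_xi]

# sanity check against the convex quadratic
# q(x, y) = t11*x + t12*y + t21*x^2 + t22*x*y + t23*y^2
t = (0.3, -0.2, 1.0, 0.1, 0.8)
dx, dy = residual_cfo(*t)

def q(x, y):
    t11, t12, t21, t22, t23 = t
    return t11 * x + t12 * y + t21 * x * x + t22 * x * y + t23 * y * y

print(q(dx, dy) <= q(dx + 1e-3, dy))    # True: (dx, dy) is the minimum
```

The returned pair then refines the coarse estimates as $\hat{f}_d = f_{dc} + \Delta\hat{f}_d$ and $\hat{\xi} = \xi_c + \Delta\hat{\xi}$.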
In summary, the whole estimation process with partly calibrated massive ULA can be described as follows.
\begin{itemize}
\item \emph{Step-1, coarse CFO estimation}: We first perform beamforming irrespective of inter-subarray mismatches, and get the coarse estimates ${f}_{dc}$ and ${\xi}_{c}$ with the algorithm in Section III-B.
\item \emph{Step-2, one-tap adjustment via Taylor series expansion}: The inter-subarray gain and phase mismatches are taken into account and the joint estimation algorithm in Section IV-B is used to jointly estimate the CFO and COBP. The fine CFO estimates are obtained from (\ref{CFOSolution2}) via two-dimensional Taylor series expansion with $\left[ {f}_{dc}, {\xi}_{c} \right]$ as the expansion point.
\item \emph{Step-3, calculation of COBP}: Once the estimated nDFOmax ${{\hat{f}}_{d}}$ and nOFO $\hat{\xi }$ are obtained, the COBP can be directly calculated as $\boldsymbol{\hat{\beta }}={{\mathbf{v}}_{\min }}\left( \mathbf{C}\left( {{{\hat{f}}}_{d}}, \hat{\xi } \right) \right)$.
\item \emph{Step-4, computation of the equivalent channel}: Based on the estimates of CFO and COBP, the equivalent channel for the $q$th beamforming branch is readily computed by (\ref{MLChannel}).
\end{itemize}
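The four steps above can be sketched as a small pipeline; the heavy numerical routines (grid search, eigen-decomposition, LS) are passed in as callables so that only the control flow is shown. All names are illustrative, not from the paper.

```python
def estimate_partly_calibrated(Y, coarse_fn, refine_fn, cobp_fn, channel_fn):
    """Four-step receiver-side estimation for a partly calibrated ULA (sketch)."""
    f_dc, xi_c = coarse_fn(Y)                # Step 1: coarse CFO (Section III-B)
    f_d, xi = refine_fn(Y, f_dc, xi_c)       # Step 2: one-tap Taylor adjustment
    beta = cobp_fn(Y, f_d, xi)               # Step 3: COBP = v_min(C(f_d, xi))
    h = channel_fn(Y, f_d, xi, beta)         # Step 4: equivalent channel per branch
    return f_d, xi, beta, h
```

Keeping the stages as separate callables mirrors the fact that Step 1 reuses the fully calibrated algorithm unchanged.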
\section{Derivation of the Cram\'{e}r-Rao Bound}
In this section, we derive the CRB of CFO estimation. The derivation is carried out for the partly calibrated ULA, but the result also applies to the fully calibrated ULA, wherein the inter-subarray gain and phase mismatch vector $\boldsymbol{\alpha} \left( \boldsymbol{\varepsilon} \right)$ reduces to the scalar $1$.
We first reformulate (\ref{ReceivedSignalUncalibrated}) as
\begin{align}
\mathbf{Y} =\sum\limits_{l=1}^{L}{\sum\limits_{p=1}^{P}{\mathbf{E}\left( {{\varphi }_{l, p}} \right)\mathbf{BH}\left( {{\theta }_{l, p}} \right)\mathbf{G}}}+\mathbf{W},
\end{align}
where $\mathbf{H}\left( {{\theta }_{l, p}} \right) = \mathbf{h}\left( {{\theta }_{l,p}} \right){{\mathbf{a}}^{T}}\left( {{\theta }_{l, p}} \right)$ and $\mathbf{G} =\operatorname{diag}\left( \boldsymbol{\alpha }\left( \boldsymbol{\varepsilon } \right) \otimes {{\mathbf{1}}_{J\times 1}} \right)$ such that $\mathbf{a}\left( {{\theta }_{l, p}}, \boldsymbol{\varepsilon } \right) =\mathbf{G}\mathbf{a}\left( {{\theta }_{l, p}} \right)$.
The vectorization of $\mathbf{Y}$ is given by
\begin{align}
\text{vec} \left( \mathbf{Y} \right) =\sum\limits_{l=1}^{L}{\sum\limits_{p=1}^{P}{\left( {{\mathbf{I}}_{M}}\otimes \mathbf{E}\left( {{\varphi }_{l,p}} \right) \right)\mathbb{C}\left( {{\mathbf{I}}_{L}}\otimes \mathbf{G} \right)\operatorname{vec}\left( {{\mathbf{H}}^{T}}\left( {{\theta }_{l,p}} \right) \right)}} +\text{vec}\left( \mathbf{W} \right),
\end{align}
where $\mathbb{C}=\left[ \begin{matrix}
{{\mathbf{I}}_{M}}\otimes {{\mathbf{b}}_{1}}, & {{\mathbf{I}}_{M}}\otimes {{\mathbf{b}}_{2}}, & \cdots\!, & {{\mathbf{I}}_{M}}\otimes {{\mathbf{b}}_{L}} \\
\end{matrix} \right]$ and $\mathbf{B}=\left[ \begin{matrix}
{{\mathbf{b}}_{1}}, & {{\mathbf{b}}_{2}}, & \cdots\!, & {{\mathbf{b}}_{L}} \\
\end{matrix} \right]$.
We further obtain $E \left\{ \operatorname{vec}\left( {{\mathbf{H}}^{T}}\left( {{\theta }_{l,p}} \right) \right)\operatorname{vec}{{\left( {{\mathbf{H}}^{T}}\left( {{\theta }_{l',p'}} \right) \right)}^{H}} \right\}
={{\delta }_{l-l', \!\ p-p'}}\frac{\sigma_{l}^{2} }{P}{{\mathbf{E}}_{L}^{l}}\otimes \mathbf{R}\left( {{\theta }_{l,p}} \right)$, where ${{\mathbf{E}}_{L}^{l}}$ is a diagonal matrix whose $l$th element is 1 and 0 elsewhere, and $\mathbf{R}\left( {{\theta }_{l,p}} \right)=\mathbf{a}\left( {{\theta }_{l,p}} \right){{\mathbf{a}}^{H}}\left( {{\theta }_{l,p}} \right)$.
Define $\mathbf{U}=\operatorname{blkdiag}\left( {{\mathbf{1}}_{J\times1}}, {{\mathbf{1}}_{J\times1}}, \cdots\!, {{\mathbf{1}}_{J\times1}} \right)\in {{\mathbb{C}}^{M\times K}}$ such that $\mathbf{G} = \operatorname{diag} \left({\mathbf{U}}{\boldsymbol\alpha}\right)$. Define DFO phase rotation vector $\mathbf{e}\left( {{f}_{l,p}} \right) =\left[ 1,\ {{e}^{\text{j}2{\rm\pi} {{f}_{l,p}}\frac{1}{N}}},\ \cdots\!, \ {{e}^{\text{j}2{\rm\pi} {{f }_{l,p}}\frac{N-1}{N}}} \right]^{T}$ and OFO phase rotation vector $\mathbf{e}\left( {\xi} \right) =\left[ 1,\ {{e}^{\text{j}2{\rm\pi} {\xi}\frac{1}{N}}},\ \cdots\!, \ {{e}^{\text{j}2{\rm\pi} {\xi}\frac{N-1}{N}}} \right]^{T}$ such that $\mathbf{E}\left( {{\varphi }_{l,p}} \right) = \operatorname{diag}\left( \mathbf{e}\left( {{f}_{l,p}} \right) \odot {\mathbf{e}}\left( {\xi} \right) \right)$.
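As a sanity check on these definitions, the element-wise product of the DFO and OFO rotation vectors indeed yields the single rotation by $\varphi = f + \xi$:

```python
import numpy as np

def rot(x, N):
    # phase-rotation vector [1, e^{j2*pi*x/N}, ..., e^{j2*pi*x(N-1)/N}]
    return np.exp(2j * np.pi * x * np.arange(N) / N)

N, f, xi = 8, 0.3, 0.05
E = np.diag(rot(f, N) * rot(xi, N))     # E(phi) = diag(e(f) . e(xi))
print(np.allclose(E, np.diag(rot(f + xi, N))))
```

Hence the DFO and OFO contributions combine additively in the exponent, exactly as $\varphi_{l,p} = f_{l,p} + \xi$.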
Then, the covariance matrix of $\operatorname{vec}\left( \mathbf{Y} \right)$ is given by
\begin{align}{\label{R}}
\mathbb{R}= & E\left\{ \operatorname{vec}\left( \mathbf{Y} \right)\operatorname{vec}{{\left( \mathbf{Y} \right)}^{H}} \right\} \nonumber \\
=&\sum\limits_{l=1}^{L} {\sum\limits_{p=1}^{P}{ \frac{{\sigma_{l}^{2}} }{P} \left( {{\mathbf{I}}_{M}}\otimes \mathbf{E}\left( {{\varphi }_{l,p}} \right) \right)\left( {{\mathbf{I}}_{M}}\otimes {{\mathbf{b}}_{{l}}} \right)\mathbf{G} \mathbf{R}\left( {{\theta }_{l,p}} \right){{\mathbf{G}}^{H}}}} {{\left( {{\mathbf{I}}_{M}}\otimes {{\mathbf{b}}_{{l}}} \right)}^{H}}{{\left( {{\mathbf{I}}_{M}}\otimes \mathbf{E}\left( {{\varphi }_{l,p}} \right) \right)}^{H}}+{{\sigma }_{\mathrm{n}}^{2}}{{\mathbf{I}}_{MN}} \nonumber \\
=& \sum\limits_{l=1}^{L}{\sum\limits_{p=1}^{P}{ \frac{{\sigma_{l}^{2}} }{P} \left( \mathbf{GR}\left( {{\theta }_{l,p}} \right){{\mathbf{G}}^{H}} \right)\otimes \left( \mathbf{E}\left( {{\varphi }_{l,p}} \right){{\mathbf{b}}_{{l}}}\mathbf{b}_{{l}}^{H}{{\mathbf{E}}^{H}}\left( {{\varphi }_{l,p}} \right) \right)}} +{{\sigma }_{\mathrm{n}}^{2}}{{\mathbf{I}}_{MN}} \nonumber \\
=& \frac{1}{P}\sum\limits_{l=1}^{L}{\sum\limits_{p=1}^{P}{{{\mathbf{R}}_{1,l,p}}\odot {{\mathbf{R}}_{2}}\odot {{\mathbf{R}}_{3}}\odot {{\mathbf{R}}_{4,l}}}}+{{\sigma }_{\mathrm{n}}^{2}}{{\mathbf{I}}_{MN}},
\end{align}
where ${{\mathbf{R}}_{1,l,p}}=\mathbf{R}\left( {{\theta }_{l,p}} \right)\otimes \left( \mathbf{e}\left( {{f}_{l,p}} \right){{\mathbf{e}}^{H}}\left( {{f}_{l,p}} \right) \right)$, ${{\mathbf{R}}_{2}}={{\mathbf{1}}_{M}}\otimes \left( \mathbf{e}\left( \xi \right){{\mathbf{e}}^{H}}\left( \xi \right) \right)$, ${{\mathbf{R}}_{3}}=\left( \mathbf{U}{\boldsymbol \alpha }{{\boldsymbol{\alpha }}^{H}}{{\mathbf{U}}^{T}} \right)\otimes {{\mathbf{1}}_{N}}$ and ${{\mathbf{R}}_{4,l}}={{\mathbf{1}}_{M}}\otimes \left( \sigma_{l}^{2} {{\mathbf{b}}_{l}}\mathbf{b}_{l}^{H} \right)$.
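The key Kronecker step used in the derivation of (\ref{R}), namely $\left( \mathbf{I}_{M}\otimes \mathbf{v} \right)\mathbf{X}{{\left( \mathbf{I}_{M}\otimes \mathbf{v} \right)}^{H}}=\mathbf{X}\otimes \left( \mathbf{v}{{\mathbf{v}}^{H}} \right)$, can be checked numerically on toy matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
M, n = 3, 4
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
X = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
IMv = np.kron(np.eye(M), v.reshape(-1, 1))      # (M*n) x M
lhs = IMv @ X @ IMv.conj().T
rhs = np.kron(X, np.outer(v, v.conj()))         # X kron (v v^H)
print(np.allclose(lhs, rhs))
```

This is the mixed-product rule that turns the per-path covariance into the Kronecker, and then Hadamard, form above.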
Clearly, ${{\mathbf{R}}_{1,l,p}}$ is related to the incident angle ${{\theta }_{l,p}}$ and nDFOmax, ${{\mathbf{R}}_{2}}$ is determined by nOFO, ${{\mathbf{R}}_{3}}$ depends on the inter-subarray gain and phase mismatches, and ${{\mathbf{R}}_{4,l}}$ is deterministic since the training sequence is assumed known at the receiver.
As ${{\theta }_{l,p}}$ follows uniform distribution in $(0, 2\rm{\pi})$, the expectation of ${{\mathbf{R}}_{1,l,p}}$ with respect to ${\theta }_{l,p}$ can be expressed as
\begin{align}{\label{R1}}
{{\mathbf{\tilde{R}}}_{1}}=E\left\{ {{\mathbf{R}}_{1,l,p}} \right\}={{J}_{0}}\left( \mathbf{U}\left( {{f}_{d}} \right) \right),
\end{align}
where ${{J}_{0}}\left(\cdot \right)$ denotes the zeroth-order Bessel function of the first kind, and the $n$th-order Bessel function is given by
\begin{align}
{{J}_{n}}\left( x \right)=\frac{1}{2{\rm\pi}}\int_{-{\rm\pi}}^{{\rm\pi}}{\cos \left( x\sin \theta -n\theta \right)d\theta }.
\end{align}
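The integral definition above can be cross-checked against the power-series form of $J_0$; the midpoint rule is spectrally accurate here because the integrand is smooth and $2\rm{\pi}$-periodic.

```python
import math
import numpy as np

def bessel_int(n, x, num=20000):
    # midpoint-rule evaluation of (1/2pi) * integral_{-pi}^{pi} cos(x sin t - n t) dt
    theta = -np.pi + (np.arange(num) + 0.5) * (2.0 * np.pi / num)
    return np.mean(np.cos(x * np.sin(theta) - n * theta))

def j0_series(x, terms=30):
    # power series of the zeroth-order Bessel function of the first kind
    return sum((-1) ** k * (x / 2.0) ** (2 * k) / math.factorial(k) ** 2
               for k in range(terms))

print(abs(bessel_int(0, 1.5) - j0_series(1.5)))   # near machine precision
```

The same integral routine evaluates the entries of ${{J}_{0}}\left( \mathbf{U}\left( {{f}_{d}} \right) \right)$ element-wise in (\ref{R1}).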
Detailed derivation of (\ref{R1}) along with the definition of $\mathbf{U}\left( {{f}_{d}} \right)$ could be found in Appendix~\ref{Expectation}.
Substituting ${{\mathbf{R}}_{1,l,p}}$ with ${{\mathbf{\tilde{R}}}_{1}}$, we can simplify (\ref{R}) into
\begin{align}
\mathbb{R} ={{{\mathbf{\tilde{R}}}}_{1}}\odot {{\mathbf{R}}_{2}}\odot {{\mathbf{R}}_{3}}\odot {{{\mathbf{\tilde{R}}}}_{4}}+{{\sigma }_{\mathrm{n}}^{2}} {{\mathbf{I}}_{MN}},
\end{align}
where ${{\mathbf{\tilde{R}}}_{4}} = \sum\limits_{l=1}^{L}{{{\mathbf{R}}_{4,l}}} ={{\mathbf{1}}_{M}}\otimes \left( \mathbf{B}\mathbf{\Lambda}{{\mathbf{B}}^{H}} \right) \overset{\sigma_{l}^{2} = \frac{1}{L}}{\mathop{=}}\, \frac{1}{L}{{\mathbf{1}}_{M}}\otimes \left( \mathbf{B}{{\mathbf{B}}^{H}} \right)$.
The unknown parameters to be estimated can be listed as $\boldsymbol{\eta }=\left\{ {{f}_{d}}, \xi, \Re \left( {\boldsymbol{\alpha }}^T \right), \Im \left( {\boldsymbol{\alpha }}^T \right), {{\sigma }_{\mathrm{n}}^{2}} \right\}^T$. According to~\cite{P_Stoica1990TASSP, R_Cao2016TSP}, the CRB can be derived from
\begin{align}{\label{CRB}}
{{\left[ \mathbf{CR}{{\mathbf{B}}^{-1}}\left( \boldsymbol{\eta } \right) \right]}_{kl}}=\operatorname{tr}\left[ {{\mathbb{R}}^{-1}}\frac{\partial \mathbb{R}}{\partial {{\eta }_{k}}}{{\mathbb{R}}^{-1}}\frac{\partial \mathbb{R}}{\partial {{ \eta }_{l}}} \right].
\end{align}
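The structure of (\ref{CRB}) can be illustrated on a toy covariance model with analytic derivatives; the matrix and parameters below are hypothetical stand-ins, not the paper's $\mathbb{R}$ or $\boldsymbol{\eta}$.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
S = rng.standard_normal((n, n))
A = S @ S.T + n * np.eye(n)             # fixed PSD matrix of the toy model
eta = np.array([1.0, 0.5])              # toy parameters: R(eta) = eta1*A + eta2*I

R = eta[0] * A + eta[1] * np.eye(n)
dR = [A, np.eye(n)]                     # analytic derivatives dR/d eta_k
Rinv = np.linalg.inv(R)

# Fisher information via [CRB^{-1}]_{kl} = tr(R^{-1} dR_k R^{-1} dR_l)
F = np.array([[np.trace(Rinv @ dR[k] @ Rinv @ dR[l]) for l in range(2)]
              for k in range(2)])
CRB = np.linalg.inv(F)
print(np.all(np.diag(CRB) > 0))         # variance bounds are positive
```

For the paper's model the derivatives $\frac{\partial \mathbb{R}}{\partial {{\eta }_{k}}}$ from Appendix~\ref{Derivative} take the place of the analytic ones used here.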
The detailed derivation of all the first-order derivatives $\frac{\partial \mathbb{R}}{\partial {{\eta }_{k}}}$ in (\ref{CRB}) can be found in Appendix~\ref{Derivative}. Note that in the case of fully calibrated ULA, $\boldsymbol{\alpha}$ is reduced to scalar $1$, which leads to $\mathbf{R}_{3} \!=\! \mathbf{I}_{MN}$ and $\mathbb{R} = {{{\mathbf{\tilde{R}}}}_{1}}\odot {{\mathbf{R}}_{2}}\odot {{{\mathbf{\tilde{R}}}}_{4}}+{{\sigma }_{\mathrm{n}}^{2}}{{\mathbf{I}}_{MN}}$. Moreover, the parameters to be estimated reduce to $\boldsymbol{\eta } = \left\{ {{f}_{d}}, \xi, {{\sigma }_{\mathrm{n}}^{2}} \right\}^T$ and the derivatives of $\mathbf{R}_{3}$ relative to $\boldsymbol{\alpha}$ in (\ref{CRB}) should also be accordingly removed to compute the CRB.
\begin{remark}
It can be seen from (\ref{CRB}) that the CRB acquired at each simulation depends on the realization of the random parameters $\boldsymbol{\eta }$. The CRB results obtained via (\ref{CRB}) under different CFOs and inter-subarray gain and phase mismatches are further averaged numerically to yield the final CRB provided in simulation.
\end{remark}
\section{Simulation Results}
In this section, we will evaluate the performance of our proposed joint estimation algorithms. The terminal HST employs a partly calibrated ULA composed of $M=64$ receive antennas. Unless otherwise stated, the antenna spacing is taken as $d=0.45\lambda$, i.e., $\tilde{d}=\frac{d}{\lambda}=0.45$. Moreover, we assume that the inter-subarray gain mismatch $|\alpha_k|$ follows an i.i.d. uniform distribution~\cite{CMS_See2004TSP, Y_GE2017SPAWC} $U\left(\sqrt{1-\sigma_{\alpha}^2}-\sqrt{3}\sigma_{\alpha}, \sqrt{1-\sigma_{\alpha}^2}+\sqrt{3} \sigma_{\alpha}\right)$, where $\sigma_{\alpha}$ stands for the standard deviation of $|\alpha_k|$. In this way, the average array gain is normalized, i.e., $E\big\{|\alpha_k|^2\big\}\!=\!1$. For simulation, we set $|\alpha_k| \!\sim\! U\left(0.8, 1.1875 \right)$.
The total number of subcarriers is taken as $N\!=\!64$; the first block of each frame serves as pilot, while the remaining three blocks are reserved for data transmission. Both the training and data symbols are randomly drawn from 16-QAM constellations. The lengths of the channel and CP are set as $L\!=\!8$ and $N_{\mathrm{cp}}\!=\!16$, respectively. For simplicity, the uniform channel PDP, i.e., $\sigma_{l}^{2} \!=\! \frac{1}{L}, l \!=\! 1,2,\cdots \!, L$, is adopted in simulation, yet it should be pointed out that the algorithms do not rely on any specific channel PDP. In fact, we obtained essentially the same performance results for channels with an exponential PDP; the plots are omitted due to space limitation.
The carrier frequency is fixed as $f_c \!=\! 9\rm{GHz}$, while the block duration is taken as $T_b\!=\!0.1\rm{ms}$. Unless otherwise stated, the HST velocity is assumed to be $480\rm{km/h}$, which translates to $f_d \!=\! 0.4$. The nOFO is randomly generated from $-0.1$ to $0.1$. The beamforming direction $\theta_q$ is drawn from ${\mathcal{D}}_{{\rm{IFFT}}}$.
The MSE for CFO estimation and the symbol error rate (SER) of the recovered data symbols are adopted as the performance metrics. The joint estimation algorithm in Section III for fully calibrated ULA and that in Section IV-B for partly calibrated ULA are referred to as `No-COBP' and `Optimal-COBP', respectively. In the following SER figures, the ideal case with accurate nDFOmax and nOFO knowledge at the receiver will be included as the benchmark.
\begin{figure}[t]
\setlength{\abovecaptionskip}{-0.5cm}
\setlength{\belowcaptionskip}{-1.2cm}
\begin{center}
\includegraphics[width=121mm]{fig4.eps}
\end{center}
\caption{ Numerical and analytical MSE comparison of `No-COBP' with fully calibrated ULA ($K=1$) and partly calibrated ULA ($K=4$) at $f_d=0.1$. }
\end{figure}
In Fig. 4, the numerical, analytical and asymptotic MSEs of `No-COBP' are depicted for fully ($K\!=\!1$) and partly ($K\!=\!4$) calibrated ULA. The nDFOmax is taken as $f_d \!=\! 0.1$. Although the analytical MSE floor is a lower bound of its numerical counterpart, the analytical MSE still approximates the numerical MSE well over a wide range of SNR in this example. Meanwhile, we observe an obvious MSE floor at $K\!=\!4$, especially for DFO estimation, which confirms that it is unsuitable to directly apply `No-COBP' to the partly calibrated case and that the new algorithm `Optimal-COBP' is needed. Moreover, a discrepancy between the asymptotic and numerical MSEs exists in this example, which would be reduced by increasing the number of antennas $M$.
\begin{figure}[htbp]
\vspace{-1.5em}
\setlength{\abovecaptionskip}{-0.2cm}
\setlength{\belowcaptionskip}{-0.85cm}
\centering
\begin{minipage}{80mm}
\centering
\includegraphics[width=70mm]{fig5.eps}
\caption{ MSE performance comparison of `No-COBP', `Pilot-halves', `GCE-BEM' and `P-BEM' with fully calibrated ULA ($K=1$) at $f_d=0.1$. }
\end{minipage}
\begin{minipage}{80mm}
\centering
\includegraphics[width=70mm]{fig6.eps}
\caption{ SER performance comparison of `No-COBP', `Pilot-halves', `GCE-BEM' and `P-BEM' with fully calibrated ULA ($K=1$) at $f_d=0.1$. }
\end{minipage}
\end{figure}
In Fig. 5 and Fig. 6, we assess the performance of `No-COBP' against the existing methods, including the scheme in~\cite{W_Guo2017TVT} (referred to as `Pilot-halves') and the most frequently encountered BEM approaches `GCE-BEM'~\cite{H_NguyenLe2010TB} and `P-BEM'~\cite{H_Hijazi2009TVT} (the first block and last block of each frame serve as pilot) under fully calibrated ULA ($K\!=\!1$). The nDFOmax remains $f_d \!=\! 0.1$.
Both figures corroborate the superiority of `No-COBP' over the BEM approaches. BEM exhibits an obvious OFO estimation MSE floor, and compared to `No-COBP', performance gaps of about 6 dB and 8 dB can be observed for `P-BEM' and `GCE-BEM', respectively.
It is also observed that although `Pilot-halves' achieves MSE performance comparable to `No-COBP', there is an SER performance gap of about 0.5 dB. This can be attributed to the fact that the two-halves pilot exploited in~\cite{W_Guo2017TVT} exhibits some sparsity in the frequency domain, which may not be preferred for channel estimation and subsequent data detection~\cite{H_Minn2006TC}. In contrast, the proposed algorithm enables the use of a general pilot structure, which is more likely to provide superior detection performance.
\begin{figure}[t]
\setlength{\abovecaptionskip}{-0.5cm}
\setlength{\belowcaptionskip}{-1.05cm}
\begin{center}
\includegraphics[width=140mm]{fig7.eps}
\end{center}
\caption{ CFO estimation performance comparison of `No-COBP', `Optimal-COBP' and `GCE-BEM' with different numbers of subarrays ($f_d=0.4$, $K=1,2,4$). }
\end{figure}
\begin{figure}[t]
\setlength{\abovecaptionskip}{-0.5cm}
\setlength{\belowcaptionskip}{-1.2cm}
\begin{center}
\includegraphics[width=70mm]{fig8.eps}
\end{center}
\caption{ SER performance comparison of `No-COBP', `Optimal-COBP' and `GCE-BEM' with different numbers of subarrays ($f_d=0.4$, $K=1,2,4$). }
\end{figure}
In Fig. 7 and Fig. 8, we evaluate the CFO estimation and data detection performance of `No-COBP', `Optimal-COBP' and `GCE-BEM' with different numbers of subarrays $K=1, 2, 4$. Note that `No-COBP' and `Optimal-COBP' become identical at $K=1$.
From Fig. 7, we observe that:
1) Although insensitive to the inter-subarray mismatches, `GCE-BEM' suffers from a high OFO estimation error floor.
2) The MSE performance of `No-COBP' degrades drastically as the number of subarrays increases, and the MSE floor is evident in the case of partly calibrated ULA. On the contrary, `Optimal-COBP' exhibits strong robustness to the number of subarrays.
3) The MSE performance of `Optimal-COBP' noticeably outperforms that of `No-COBP' at moderate and high SNRs, whereas the latter achieves better performance at low SNR. In fact, the system performance is mainly constrained by array mismatches at high SNR, where `Optimal-COBP' undoubtedly outperforms `No-COBP' since the former mitigates the impact of array mismatches with the COBP. However, the system performance is noise-constrained at low SNR, and thus `No-COBP', with fewer parameters to be estimated than `Optimal-COBP', is superior.
4) The CRBs obtained for different numbers of subarrays almost coincide, which can be explained as follows. On the one hand, more estimation parameters would increase the CRB. On the other hand, mismatches across more subarrays could enhance antenna diversity and thereby improve the CRB. These two factors appear to approximately offset each other. In fact, the numerical MSEs of `Optimal-COBP' under different numbers of subarrays also asymptotically converge at high SNR, which further proves the effectiveness of the COBP in mitigating the detrimental effects of inter-subarray gain and phase mismatches.
The results in Fig. 8 indicate that: 1) `GCE-BEM' fails to achieve reliable data detection for nDFOmax as large as $f_d=0.4$. 2) As expected, the SER performance of both `No-COBP' and `Optimal-COBP' depends on the number of subarrays, while the performance degradation of the former is much more severe as the number of subarrays increases. 3) There is an SER performance gap of about 2--3 dB between `Optimal-COBP' and the corresponding ideal case. 4) Although, in contrast to the fully calibrated case, `Optimal-COBP' suffers from a certain SER deterioration at a large number of subarrays (around 2 dB at $K=4$), it still provides a feasible and, so far, the best solution in the presence of inter-subarray mismatches. Even at $K=4$, no SER performance floor is observed in this example.
\begin{figure}[htbp]
\vspace{-1.7em}
\setlength{\abovecaptionskip}{-0.1cm}
\setlength{\belowcaptionskip}{-0.9cm}
\centering
\begin{minipage}{80mm}
\centering
\includegraphics[width=70mm]{fig9.eps}
\caption{ MSE performance comparison of `No-COBP' and `Optimal-COBP' at different numbers of receive antennas ($f_d = 0.4, K=4, M = 64, 128$). }
\end{minipage}
\begin{minipage}{80mm}
\centering
\includegraphics[width=70mm]{fig10.eps}
\caption{ SER performance comparison of `No-COBP' and `Optimal-COBP' at different numbers of receive antennas ($f_d = 0.4, K=4, M = 64, 128$). }
\end{minipage}
\end{figure}
Next, the performance of `No-COBP' and `Optimal-COBP' are examined for $M\!=\!64$ and $128$ receive antennas in Fig. 9 and Fig. 10.
Though increasing $M$ from $64$ to $128$ effectively enhances the MSE and SER performance of both `No-COBP' and `Optimal-COBP', the former still suffers from a visible CFO estimation error floor and a high SER floor even at $M\!=\!128$. This further demonstrates the superiority of `Optimal-COBP' over `No-COBP' in the case of partly calibrated ULA.
Moreover, the following observations could be made: 1) In spite of the MSE performance floor, `No-COBP' indeed can provide valid coarse CFO estimates for `Optimal-COBP'.
2) If appropriately exploited, $128$ receive antennas should double the signal power at the receiver vis-\`{a}-vis $64$ antennas. Nonetheless, regarding the SER performance, it is observed that the array gain~\cite{A_Paulraj2003} (average increase of SNR at the receiver) is less than $3 \text{dB}$. In fact, even if the CFO estimation is sufficiently accurate, the unwanted signals from undesired adjacent directions captured by a beamforming branch cannot be totally compensated, due to the limited number of antennas. As a result of this incomplete compensation of the frequency mismatch, the receiver is unable to achieve fully coherent combining of the signals from different branches with different amplitudes and phases. Thus, the array gain, which arises from the coherent combining effect of multiple antennas at the receiver (or transmitter, or both), cannot be fully exploited.
\begin{figure}[htbp]
\vspace{-1.5em}
\setlength{\abovecaptionskip}{-0.1cm}
\setlength{\belowcaptionskip}{0.5cm}
\centering
\begin{minipage}{80mm}
\centering
\includegraphics[width=70mm]{fig11.eps}
\caption{ MSE performance comparison of `No-COBP' and `Optimal-COBP' at different antenna spacings ($\tilde{d} = 0.1, 0.2, 0.3, 0.4, 0.45, 0.48, 0.5$). }
\end{minipage}
\begin{minipage}{80mm}
\centering
\includegraphics[width=70mm]{fig12.eps}
\caption{ SER performance comparison of `No-COBP' and `Optimal-COBP' at different antenna spacings ($\tilde{d} = 0.1, 0.2, 0.3, 0.4, 0.45, 0.48, 0.5$). }
\end{minipage}
\end{figure}
Finally, we gauge the performance of `No-COBP' and `Optimal-COBP' at different normalized antenna spacings $\tilde{d} = \{0.1, 0.2, 0.3, 0.4, 0.45, 0.48, 0.5\}$. The SNR is set to $14 \,\rm{dB}$ for $K=4$ and $11 \,\rm{dB}$ for $K=1$. Fig. 12 explicitly reveals the strong dependence of the data detection performance on the antenna spacing, whereas Fig. 11 indicates that the CFO estimation performance is much less sensitive to it over a wide range. As predicted previously, on the one hand, too small an antenna spacing (such as $\tilde{d} = 0.1$ or $0.2$) cannot exploit most of the spatial resolution provided by a large-scale antenna array; on the other hand, an antenna spacing as large as $\tilde{d}=0.5$ inevitably leads to aliasing between $\theta = 0^\circ$ and $\theta = 180^\circ$. Both cases are reflected by the SER degradation shown in Fig. 12. This result justifies the empirical choice of $\tilde{d} = 0.45$.
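The aliasing argument can be illustrated directly from the steering vector. The following minimal numerical sketch (assuming the steering-vector element $e^{\mathrm{j}2\pi\tilde{d}(r-1)\cos\theta}$, consistent with the definition used in the appendix) shows that at $\tilde{d}=0.5$ the steering vectors of $\theta=0^\circ$ and $\theta=180^\circ$ coincide element-wise, whereas at $\tilde{d}=0.45$ the two directions remain distinguishable.

```python
import cmath
import math

def steering(theta_deg, M, d):
    """Steering vector a(theta) whose r-th element is exp(j*2*pi*d*(r-1)*cos(theta))."""
    c = math.cos(math.radians(theta_deg))
    return [cmath.exp(1j * 2 * math.pi * d * m * c) for m in range(M)]

def max_diff(a, b):
    """Largest element-wise magnitude difference between two vectors."""
    return max(abs(x - y) for x, y in zip(a, b))

M = 64
# d = 0.5: exp(j*pi*m) == exp(-j*pi*m) for every integer m -> a(0) and a(180) alias
print(max_diff(steering(0, M, 0.5), steering(180, M, 0.5)))    # numerically zero
# d = 0.45: the two steering vectors differ clearly (max gap reaches 2 at m = 5)
print(max_diff(steering(0, M, 0.45), steering(180, M, 0.45)))
```

This is why a slightly sub-half-wavelength spacing such as $\tilde{d}=0.45$ avoids the endfire ambiguity while retaining most of the array aperture.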
\begin{table}[!t]
\caption{ Computational complexities of `No-COBP', `Optimal-COBP' and BEM }\label{CC_3Algorithms}
\vspace{1.8em}
\centering
\begin{tabular}{|c||c|c|}
\hline
Algorithms & \multicolumn{2}{|c|}{Computational complexities} \\
\hline
\multirow{4}*{`No-COBP'} & CFO estimation & $O\left( L\left( {{N\!+\!L}} \right)^{2} \!+\! \kappa QN\left( {{\log }_{2}}N\!+\!3N\!+\!9 \right) \right)$ \\
\cline{2-3}
$~ $ & \!Channel estimation\! & $O\left( QN\left( L\!+\!1 \right) \right)$ \\
\cline{2-3}
$~ $ & Data detection & $O\left( QN\left( {{N}_{b}}{{\log }_{2}}N\!+\!{{N}_{b}}\!+\!1 \right) \right)$ \\
\cline{2-3}
$~ $ & \multicolumn{2}{|c|}{$O\left( L{{\left( N\!+\!L \right)}^{2}}\!+\!\kappa QN\left( {{\log }_{2}}N\!+\!3N\!+\!9 \right)\!+\!QN\left( {{N}_{b}}{{\log }_{2}}N\!+\!{{N}_{b}}\!+\!L\!+\!2 \right) \right)$} \\
\hline
\multirow{5}*{\!`Optimal-COBP'\!} & CFO estimation & \!$O\left(
L{{\left( N\!+\!L \right)}^{2}}\!+\!QN\left( K\left( M\!+\!4N\!+\!4K\!+\!5 \right)\!+\!M \right)
\!+\!K\left( 2QK\!+\!2Q\!+\!{{K}^{2}} \right)
\right)$\! \\
\cline{2-3}
$~ $ & \!Channel estimation\! & $O\left( QN\left( K\!+\!L\!+\!1 \right) \right)$ \\
\cline{2-3}
$~ $ & Data detection & $O\left( QN\left( {{N}_{b}}{{\log }_{2}}N\!+\!\left( {{N}_{b}}\!-\!1 \right)K\!+\!{{N}_{b}}\!+\!1 \right) \right)$ \\
\cline{2-3}
$~ $ & \multicolumn{2}{|c|}{$O\left( \begin{matrix}
L{{\left( N\!+\!L \right)}^{2}} \!+\!QNK\left( M\!+\!4N\!+\!4K\!+\!{{N}_{b}}\!+\!5 \right) \\
+QN\left( M\!+\!{{N}_{b}}{{\log }_{2}}N\!+\!L\!+\!{{N}_{b}}\!+\!2 \right) \!+\!K\left( 2QK\!+\!2Q\!+\!{{K}^{2}} \right) \\
\end{matrix} \right)$} \\
\hline
\multirow{5}*{BEM} & OFO estimation & $O\left( LR{{\left( {{N}_{2}}\!+\!LR \right)}^{2}}\!+\!LR{{N}_{2}}\!+\!\kappa {{N}_{2}}\left( M\!+\!3{{N}_{2}}\!+\!8 \right) \right)$ \\
\cline{2-3}
$~ $ & \!Channel estimation\! & $O\left( MLR\left( {{N}_{2}}\!+\!N\left( {{N}_{b}}\!-\!1 \right) \right)\!+\! 2{{N}_{2}}M \right)$ \\
\cline{2-3}
$~ $ & Data detection & $O\left( \left( {{N}_{b}}\!-\!1 \right)\left( MN\left( 2{{N}^{2}}\!+\!N\!+\!1 \right)\!+\!{{N}^{3}} \right)\!+\!N{{\log }_{2}}N \right)$ \\
\cline{2-3}
$~ $ & \multicolumn{2}{|c|}{$O\left( \begin{matrix}
LR{{\left( 2N\!+\!LR \right)}^{2}}\!+\!MN\left( LR\left( {{N}_{b}}\!+\!1 \right)\!+\!\left( 2{{N}^{2}}\!+\!N\!+\!1 \right)\left( {{N}_{b}}\!-\!1 \right)\!+\!4 \right) \\
\!+2\kappa N\left( M\!+\!6N\!+\!8 \right)\!+\!{{N}^{3}}\left( {{N}_{b}}\!-\!1 \right)\!+\!2NLR\!+\!N{{\log }_{2}}N \\
\end{matrix} \right)$ } \\
\hline
\end{tabular}
\vspace{-2.5em}
\end{table}
Now, we evaluate the computational complexities not only of CFO estimation but also of channel estimation and data detection for the proposed algorithms, in comparison with BEM. The complexities in terms of complex multiplications of `No-COBP', `Optimal-COBP' and BEM are given in Table~\ref{CC_3Algorithms}. Here, $N_2 \!=\! 2N$, $\kappa$ denotes the number of iterations for CFO estimation, and $R$ is the order of the basis functions used by BEM.
Consider $N\!=\!64, M\!=\!64, Q \!=\!64, L \!=\!8, N_b\!=\!4, K\!=\!4, \kappa\!=\!3, R \!=\!3$. The required complexities of the three algorithms are given in Table~\ref{CC_Comparison}. Apparently, the proposed algorithms `No-COBP' and `Optimal-COBP' enjoy substantially lower computational burdens than the BEM approach.
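As a cross-check, the figures quoted for the two proposed algorithms can be reproduced by plugging the above parameter values into the complexity expressions of Table~\ref{CC_3Algorithms} (the BEM row can be evaluated analogously); a short sketch:

```python
import math

# Parameter values used in the complexity comparison
N, M, Q, L, Nb, K, kappa = 64, 64, 64, 8, 4, 4, 3
log2N = math.log2(N)

# `No-COBP': CFO estimation / channel estimation / data detection (complex multiplications)
nc_cfo = L * (N + L) ** 2 + kappa * Q * N * (log2N + 3 * N + 9)
nc_chan = Q * N * (L + 1)
nc_data = Q * N * (Nb * log2N + Nb + 1)

# `Optimal-COBP'
oc_cfo = (L * (N + L) ** 2
          + Q * N * (K * (M + 4 * N + 4 * K + 5) + M)
          + K * (2 * Q * K + 2 * Q + K ** 2))
oc_chan = Q * N * (K + L + 1)
oc_data = Q * N * (Nb * log2N + (Nb - 1) * K + Nb + 1)

for name, parts in [("No-COBP", (nc_cfo, nc_chan, nc_data)),
                    ("Optimal-COBP", (oc_cfo, oc_chan, oc_data))]:
    print(name, [f"{v:.3g}" for v in parts], f"overall {sum(parts):.3g}")
```

The printed values match the corresponding rows of Table~\ref{CC_Comparison} ($2.59\times 10^6$, $3.69\times 10^4$, $1.19\times 10^5$, $2.74\times 10^6$ and $5.89\times 10^6$, $5.32\times 10^4$, $1.68\times 10^5$, $6.11\times 10^6$).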
\section{Conclusion}
In this paper, we have addressed the joint estimation of the DFO and OFO in high-mobility OFDM downlink over richly scattered wireless environments. A fully or partly calibrated massive ULA was adopted at the HST terminal to separate the multiple CFOs via array beamforming, such that the doubly selective fading channel could be decomposed into a set of parallel frequency-selective fading channels in the angle domain, each of which is affected by a single dominant CFO and can be easily handled. The use of the iterative Newton's method greatly reduces the computational burden of the joint estimation procedure. The necessity of introducing the COBP was justified by the MSE performance analysis, and it effectively overcomes the detrimental effects of array mismatches. Simulation results corroborated the effectiveness of the proposed scheme.
\begin{table}[!t]
\vspace{2.2em}
\caption{ Comparison of the computational complexities between `No-COBP', `Optimal-COBP' and BEM }\label{CC_Comparison}
\vspace{1.8em}
\centering
\begin{tabular}{|c||c|c|c|c|}
\hline
\multirow{2}*{Algorithms} & \multicolumn{4}{|c|}{Computational complexity} \\
\cline{2-5}
$~ $ & CFO estimation & Channel estimation & Data detection & Overall \\
\hline
`No-COBP' & $2.59\times 10^6$ & $3.69\times 10^4$ & $1.19\times 10^5$ & $2.74\times 10^6$ \\
\hline
`Optimal-COBP' & $5.89\times 10^6$ & $5.32\times 10^4$ & $1.68\times 10^5$ & $6.11\times 10^6$ \\
\hline
BEM & $2.27\times 10^8$ & $5.08\times 10^5$ & $1.02\times 10^8$ & $3.29\times 10^8$ \\
\hline
\end{tabular}
\vspace{-2.5em}
\end{table}
\appendices
\section{Derivation of $a_{11}^{0}, a_{12}^{0}, a_{11}^{\mathrm{n}}, a_{12}^{\mathrm{n}}, a_{21}, a_{22}, a_{23}$}{\label{MSEDerivation}}
\emph{1) Calculation of $a_{11}^{0}$ and $a_{12}^{0}$.}
Denoting $ \mathbf{P}_{1} = \mathbf{DP}_{\mathbf{B}}^{\bot }+\mathbf{P}_{\mathbf{B}}^{\bot }{{\mathbf{D}}^{H}} $, we have
\begin{align}
\frac{\partial {{g}_{0}}}{\partial \tilde{\xi }} & =\int_{0}^{\rm{\pi}} {{\mathbf{a}}^{T}}\left( {\tilde{\theta }} \right)\mathbf{Y}_{0}^{H}\mathbf{E}\left( {\tilde{\varphi }} \right) \mathbf{P}_{1} {{\mathbf{E}}^{H}}\left( {\tilde{\varphi }} \right){{\mathbf{Y}}_{0}}{{\mathbf{a}}^{*}}\left( {\tilde{\theta }} \right) \sin \tilde{\theta }d\tilde{\theta }.
\end{align}
Define ${{\mathbf{E}}_{b}}=\mathbf{E}\left( {{f}_{d}}\left( \cos \tilde{\theta }-\cos {{\theta }_{p}} \right) \right)$ and $\overset{\scriptscriptstyle\frown}{\varphi }={{f}_{d}}\cos \tilde{\theta }+\xi $. Then, we have
\begin{align}\label{a120_intermed}
& a_{12}^{0} = {E{\left. \left\{ \frac{\partial {{g}_{0}}}{\partial \tilde{\xi }} \right\} \right|}_{ \boldsymbol{\tilde{\phi}} = \boldsymbol{\phi} }} = \int_{0}^{\rm{\pi}}{{\mathbf{a}}^{T}}\left( {\tilde{\theta }} \right)\mathbf{Y}_{0}^{H}\mathbf{E}\left( {\overset{\scriptscriptstyle\frown}{\varphi }} \right) \mathbf{P}_{1} {{\mathbf{E}}^{H}}\left( {\overset{\scriptscriptstyle\frown}{\varphi }} \right){{\mathbf{Y}}_{0}}{{\mathbf{a}}^{*}}\left( {\tilde{\theta }} \right)\sin \tilde{\theta }d\tilde{\theta } \nonumber \\
\approx & \int_{0}^{\rm{\pi}}{{\mathbf{a}}^{T}}\left( {\tilde{\theta }} \right)\sum\limits_{p=1}^{P}{{\mathbf{V}}^{\text{*}}}\left( {{\theta }_{p}} \right){{\boldsymbol{\alpha }}^{\text{*}}}\mathbf{h}_{p}^{H}{{\mathbf{B}}^{H}}{{\mathbf{E}}_{b}} \mathbf{P}_{1} \mathbf{E}_{b}^{H}\mathbf{B}{{\mathbf{h}}_{p}}{{\boldsymbol{\alpha }}^{T}}{{\mathbf{V}}^{T}}\left( {{\theta }_{p}} \right){{\mathbf{a}}^{*}}\left( {\tilde{\theta }} \right)\sin \tilde{\theta }d\tilde{\theta } \nonumber \\
\approx & \frac{1}{P}\sum\limits_{p=1}^{P}\int_{0}^{\rm{\pi}}\operatorname{tr}\left[ {{\mathbf{B}}^{H}}{{\mathbf{E}}_{b}} \mathbf{P}_{1} \mathbf{E}_{b}^{H}\mathbf{B} \mathbf{\Lambda} \right] {{\mathbf{a}}^{T}}\left( {\tilde{\theta }} \right){{\mathbf{V}}^{\text{*}}}\left( {{\theta }_{p}} \right){{\boldsymbol{\alpha }}^{\text{*}}}{{\boldsymbol{\alpha }}^{T}}{{\mathbf{V}}^{T}}\left( {{\theta }_{p}} \right){{\mathbf{a}}^{*}}\left( {\tilde{\theta }} \right)\sin \tilde{\theta }d\tilde{\theta } \nonumber \\
\approx & \frac{1}{{\rm{\pi}}}\int_{0}^{\rm{\pi}}\int_{0}^{\rm{\pi}} \underbrace{\operatorname{tr}\left[ {{\mathbf{B}}^{H}}{{\mathbf{E}}_{b}} {\mathbf{P}_{1}} \mathbf{E}_{b}^{H}\mathbf{B}\mathbf{\Lambda} \right]}_{\eta } {{\boldsymbol{\alpha }}^{T}}{{\mathbf{V}}^{T}}\left( {{\theta }_{p}} \right){{\mathbf{V}}^{*}}\left( {\tilde{\theta }} \right)\mathbf{1} {{\mathbf{1}}^{T}}{{\mathbf{V}}^{T}}\left( {\tilde{\theta }} \right){{\mathbf{V}}^{\text{*}}}\left( {{\theta }_{p}} \right){{\boldsymbol{\alpha }}^{\text{*}}} \sin \tilde{\theta }d\tilde{\theta }d{{\theta }_{p}},
\end{align}
where $\mathbf{\Lambda} = \operatorname{diag}\left( \sigma_{1}^{2}, \sigma_{2}^{2}, \cdots\!, \sigma_{L}^{2} \right)$ and $\operatorname{tr}\left( \mathbf{\Lambda} \right) = \sum \nolimits_{l=1}^{L} {\sigma_{l}^{2}} = 1$. Denote $\tilde{x}=\cos \tilde{\theta }-\cos {{\theta }_{p}}$ for short. We have
\begin{align*}
{{\left[ {{\mathbf{V}}^{T}}\left( {{\theta }_{p}} \right){{\mathbf{V}}^{*}}\left( {\tilde{\theta }} \right) \right]}_{kl}}= \sum\limits_{m=J\left( k-1 \right)}^{Jk-1}{{{e}^{-\text{j}2\chi\left( \cos \tilde{\theta }-\cos {{\theta }_{p}} \right)m}}}{{\delta }_{kl}}
={{e}^{-\text{j}\chi \left( J-1 \right)\tilde{x}}} {{e}^{-\text{j}2\chi J \left( k-1 \right)\tilde{x}}} \frac{\sin \left( \chi J\tilde{x} \right)}{\sin \left( \chi\tilde{x} \right)}{{\delta }_{kl}},
\end{align*}
which leads to
\begin{align} \label{Ab}
{{\boldsymbol{\alpha }}^{T}}{{\mathbf{V}}^{T}}\left( {{\theta }_{p}} \right){{\mathbf{V}}^{*}}\left( {\tilde{\theta }} \right)\mathbf{1}{{\mathbf{1}}^{T}}{{\mathbf{V}}^{T}}\left( {\tilde{\theta }} \right){{\mathbf{V}}^{\text{*}}}\left( {{\theta }_{p}} \right){{\boldsymbol{\alpha }}^{\text{*}}}={{\boldsymbol{\alpha }}^{T}}{{\mathbf{A}}_{b}}{{\boldsymbol{\alpha }}^{\text{*}}},
\end{align}
where ${\mathbf{A}}_{b}$ is a $K \times K$ matrix whose $(p,q)$th element is given by $[{\mathbf{A}}_{b}]_{p,q} =\frac{{{\sin }^{2}}\left( \chi J\tilde{x} \right)}{{{\sin }^{2}}\left( \chi\tilde{x} \right)} {{e}^{\text{j}2\chi J\tilde{x} \left( q-p \right)}}$.
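The geometric-series identity behind ${{\mathbf{V}}^{T}}\left( {{\theta }_{p}} \right){{\mathbf{V}}^{*}}\left( {\tilde{\theta }} \right)$ can be verified numerically. The sketch below assumes $\chi=\pi\tilde{d}$ (consistent with the Fourier-transform pair used later in the proof of Lemma 1) and arbitrary test values for $J$, $k$ and $\tilde{x}$:

```python
import cmath
import math

d_tilde = 0.45
chi = math.pi * d_tilde      # chi = pi * d_tilde (assumed from the steering-vector phase)
J, k, x = 16, 3, 0.37        # arbitrary test values; x plays the role of cos(th~) - cos(th_p)

# Left-hand side: partial geometric sum over the k-th sub-array, m = J(k-1), ..., Jk-1
lhs = sum(cmath.exp(-1j * 2 * chi * x * m) for m in range(J * (k - 1), J * k))

# Right-hand side: the closed form with the Dirichlet-kernel ratio sin(chi*J*x)/sin(chi*x)
rhs = (cmath.exp(-1j * chi * (J - 1) * x)
       * cmath.exp(-1j * 2 * chi * J * (k - 1) * x)
       * math.sin(chi * J * x) / math.sin(chi * x))

print(abs(lhs - rhs))        # numerically zero
```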
Besides, from ${{\mathbf{B}}^{H}}\mathbf{B} = N {{\left( {{\mathbf{F}}^{H}}\operatorname{diag}\left( \mathbf{x} \right){{\mathbf{F}}_{L}} \right)}^{H}}{{\mathbf{F}}^{H}}\operatorname{diag}\left( \mathbf{x} \right){{\mathbf{F}}_{L}}
\approx N\mathbf{F}_{L}^{H}\operatorname{diag}\left( E\left\{ \left\| \mathbf{x} \right\|_{2}^{2} \right\} \right){{\mathbf{F}}_{L}} =N{{\mathbf{I}}_{L}}$, $\eta = 2\Re \left\{ \operatorname{tr}\left[ {{\mathbf{B}}^{H}}{{\mathbf{E}}_{b}} \mathbf{DP}_{\mathbf{B}}^{\bot } \mathbf{E}_{b}^{H}\mathbf{B}\mathbf{\Lambda} \right] \right\}$ can be simplified as
\begin{align} \label{eta}
& \eta =2\operatorname{tr}\left[ \Re \left\{ \mathbf{E}_{b}^{H}\mathbf{B}\mathbf{\Lambda}{{\mathbf{B}}^{H}}{{\mathbf{E}}_{b}}\mathbf{D} - \frac{1}{N}{{\mathbf{B}}^{H}}{{\mathbf{E}}_{b}}\mathbf{DB}{{\mathbf{B}}^{H}}\mathbf{E}_{b}^{H}\mathbf{B}\mathbf{\Lambda} \right\} \right] \nonumber \\
\approx & -\frac{2}{N}\operatorname{tr}\left[ \Re \left\{ \operatorname{tr}\left( {{\mathbf{E}}_{b}}\mathbf{D} \right){{\mathbf{I}}_{L}} \operatorname{tr}\left( \mathbf{E}_{b}^{H} \right){{\mathbf{I}}_{L}} \mathbf{\Lambda} \right\} \right] = -\frac{2}{N} \operatorname{tr}\left( \mathbf{\Lambda} \right) \Re \left\{ \operatorname{tr}\left( \mathbf{E}_{b}^{H} \right)\operatorname{tr}\left( {{\mathbf{E}}_{b}}\mathbf{D} \right) \right\}, \nonumber \\
= & \frac{\text{2}{ \rm{\pi} }}{{{N}^{2}}} \left( \frac{{{\sin}^{\text{2}}}\left( { \rm{\pi} }{{f}_{d}}\tilde{x} \right)\cos \left( \frac{{ \rm{\pi} }}{N}{{f}_{d}}\tilde{x} \right)}{{{\sin }^{3}}\left( \frac{{ \rm{\pi} }}{N}{{f}_{d}}\tilde{x} \right)}-\frac{N\sin \left( { \rm{\pi} }{{f}_{d}}\tilde{x} \right)\cos \left( { \rm{\pi} }{{f}_{d}}\tilde{x} \right)}{{{\sin }^{2}}\left( \frac{{ \rm{\pi} }}{N}{{f}_{d}}\tilde{x} \right)} \right) \nonumber \\
\approx & 2{ \rm{\pi} }N\sin \left( { \rm{\pi} }{{f}_{d}}\tilde{x} \right) \frac{\sin \left( { \rm{\pi} }{{f}_{d}}\tilde{x} \right)-{ \rm{\pi} }{{f}_{d}}\tilde{x}\cos \left( { \rm{\pi} }{{f}_{d}}\tilde{x} \right)}{{{\left( { \rm{\pi} }{{f}_{d}}\tilde{x} \right)}^{3}}} \approx 2{ \rm{\pi} }N\sin \left( { \rm{\pi} }{{f}_{d}}\tilde{x} \right)\left( \frac{1}{3}-\frac{{{\left( { \rm{\pi} }{{f}_{d}}\tilde{x} \right)}^{2}}}{30} \right).
\end{align}
By combining (\ref{a120_intermed}), (\ref{Ab}) and (\ref{eta}), $a_{12}^{0} \!\approx\! \frac{1}{{ \rm{\pi} }}\int_{0}^{{ \rm{\pi} }}{\int_{0}^{ \rm{\pi} } {{\eta }{{\boldsymbol{\alpha }}^{T}}{{\mathbf{A}}_{b}}{{\boldsymbol{\alpha }}^{\text{*}}}\sin \tilde{\theta }d\tilde{\theta }}d{{\theta }_{p}}}$ can be simplified as (\ref{a120}).
Similarly, $a_{11}^{0} \!=\! {{\left. \left\{ \frac{\partial {{g}_{0}}}{\partial {{{\tilde{f}}}_{d}}} \right\} \right|}_{\boldsymbol{\tilde{\phi}} = \boldsymbol{\phi} }} \!\approx\! \frac{1}{{ \rm{\pi} }}\int_{0}^{ \rm{\pi} }{\int_{0}^{ \rm{\pi} }{\cos \tilde{\theta } {\eta }{{\boldsymbol{\alpha }}^{T}}{{\mathbf{A}}_{b}}{{\boldsymbol{\alpha }}^{\text{*}}}\sin \tilde{\theta }d\tilde{\theta }}d{{\theta }_{p}}}$ can be simplified as (\ref{a110}).
\emph{2) Calculation of $a_{21}$, $a_{22}$ and $a_{23}$.}
From the zero-order Taylor series expansion, we have ${{\mathbf{E}}_{b}}\approx {{\mathbf{I}}_{N}}$. Besides, we have $\left\| \mathbf{P}_{\mathbf{B}}^{\bot }{{\mathbf{D}}^{H}}\mathbf{B\Lambda} \right\|_{2}^{2} \\
\approx \operatorname{tr}\left[ {{\mathbf{B}}^{H}}\mathbf{D}{{\mathbf{D}}^{H}}\mathbf{B\Lambda} \right] - \frac{1}{N} \operatorname{tr}\left[ {{\mathbf{B}}^{H}}{{\mathbf{D}}^{H}}\mathbf{B}{{\mathbf{B}}^{H}}\mathbf{DB\Lambda} \right]
\approx \operatorname{tr}\left( \mathbf{\Lambda} \right) \operatorname{tr}\left( \mathbf{D}{{\mathbf{D}}^{H}} \right)-\frac{\operatorname{tr}\left( \mathbf{\Lambda} \right)}{N}\operatorname{tr}\left( {{\mathbf{D}}^{H}} \right)\operatorname{tr}\left( \mathbf{D} \right)
= \frac{{{ \rm{\pi} }^{2}}}{3}\frac{{{N}^{2}}-1}{N} \approx \frac{{{ \rm{\pi} }^{2}}}{3}N$. Denote ${\mathbf{P}}_{2} = {{\mathbf{D}}^{2}}\mathbf{P}_{\mathbf{B}}^{\bot }+\mathbf{P}_{\mathbf{B}}^{\bot }{{\mathbf{D}}^{2H}}+2\mathbf{DP}_{\mathbf{B}}^{\bot }{{\mathbf{D}}^{H}} $.
Then, similarly to (\ref{a120_intermed}), we have
\begin{align}\label{a23}
{{a}_{23}}= & {{\left. E\left\{ \frac{{{\partial }^{2}}{{g}_{0}}}{\partial {{{\tilde{\xi }}}^{2}}} \right\} \right|}_{\boldsymbol{\tilde{\phi}} = \boldsymbol{\phi} }} = \int_{0}^{ \rm{\pi} }{{{\mathbf{a}}^{T}}\left( {\tilde{\theta }} \right)\mathbf{Y}_{0}^{H}\mathbf{E}\left( {\overset{\scriptscriptstyle\frown}{\varphi }} \right){{\mathbf{P}}_{2}}{{\mathbf{E}}^{H}}\left( {\overset{\scriptscriptstyle\frown}{\varphi }} \right){{\mathbf{Y}}_{0}}{{\mathbf{a}}^{*}}\left( {\tilde{\theta }} \right)\sin \tilde{\theta }d\tilde{\theta }} \nonumber \\
\approx & \frac{1}{{ \rm{\pi} }}\int_{0}^{ \rm{\pi} }\int_{0}^{ \rm{\pi} }\operatorname{tr}\left[ {{\mathbf{B}}^{H}}{{\mathbf{E}}_{b}}{{\mathbf{P}}_{2}}\mathbf{E}_{b}^{H}\mathbf{B\Lambda} \right] {{{\boldsymbol{\alpha }}^{T}}{{\mathbf{A}}_{b}}{{\boldsymbol{\alpha }}^{\text{*}}}\sin \tilde{\theta }d\tilde{\theta }}d{{\theta }_{p}} \nonumber \\
\approx & \frac{2}{{ \rm{\pi} }}\left\| \mathbf{P}_{\mathbf{B}}^{\bot }{{\mathbf{D}}^{H}}\mathbf{B\Lambda} \right\|_{2}^{2}\int_{0}^{ \rm{\pi} }{\int_{0}^{ \rm{\pi} }{{{\boldsymbol{\alpha }}^{T}}{{\mathbf{A}}_{b}}{{\boldsymbol{\alpha }}^{\text{*}}}\sin \tilde{\theta }d\tilde{\theta }}d{{\theta }_{p}}} \approx \frac{2{\rm{\pi}}N}{3} \int_{0}^{ \rm{\pi} }{\int_{0}^{ \rm{\pi} }{{{\boldsymbol{\alpha }}^{T}}{{\mathbf{A}}_{b}}{{\boldsymbol{\alpha }}^{\text{*}}}\sin \tilde{\theta }d\tilde{\theta }}d{{\theta }_{p}}}.
\end{align}
In the same way, we obtain $a_{21}$ and $a_{22}$ given in (\ref{a21}) and (\ref{a22}), respectively.
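The simplification $\left\| \mathbf{P}_{\mathbf{B}}^{\bot }{{\mathbf{D}}^{H}}\mathbf{B\Lambda} \right\|_{2}^{2} \approx \operatorname{tr}\left( \mathbf{D}{{\mathbf{D}}^{H}} \right)-\frac{1}{N}\operatorname{tr}\left( {{\mathbf{D}}^{H}} \right)\operatorname{tr}\left( \mathbf{D} \right) = \frac{{\pi}^{2}}{3}\frac{N^{2}-1}{N}$ used above can be sanity-checked under the assumed structure $\mathbf{D}=\mathrm{j}\frac{2\pi}{N}\operatorname{diag}\left( 0,1,\ldots,N-1 \right)$ (the derivative of the CFO phase-rotation matrix, which is not restated in this appendix):

```python
import math

# Hypothesized D = j*(2*pi/N)*diag(0, 1, ..., N-1); then
# tr(D D^H) - tr(D) tr(D^H) / N reduces to (2*pi/N)^2 * (sum n^2 - (sum n)^2 / N).
N = 64
scale = (2 * math.pi / N) ** 2
val = scale * (sum(n * n for n in range(N)) - sum(range(N)) ** 2 / N)

closed = math.pi ** 2 / 3 * (N * N - 1) / N   # exact closed form (pi^2/3)(N^2-1)/N
approx = math.pi ** 2 / 3 * N                 # large-N approximation (pi^2/3) N

print(val, closed, approx)
```

Under this assumption the exact value coincides with the closed form, and the large-$N$ approximation is accurate to within $N^{-2}$ relative error.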
\emph{3) Calculation of $a_{11}^{\mathrm{n}}$ and $a_{12}^{\mathrm{n}}$.}
Before the calculation, we first introduce the following lemma.
\newtheorem{lemma}{Lemma}
\begin{lemma}
For steering vector ${\mathbf{a}}\left( {\theta} \right)$ whose ${r}$th element is defined as ${{e}^{\text{j}2\chi\left( r-1 \right)\cos \theta }}$, there holds
\begin{align}
\int_{0}^{ \rm{\pi} }\int_{0}^{ \rm{\pi} } & { {{\mathbf{a}}^{T}}\left( {\bar{\theta }} \right) {{\mathbf{a}}^{*}}\left( {\tilde{\theta }} \right) f\left( {\bar{\theta }} \right)d\cos \tilde{\theta }}d\cos \bar{\theta }
\approx \int_{0}^{ \rm{\pi} }{f\left( {\tilde{\theta }} \right) \frac{\sin \tilde{\theta }}{\tilde{d}} d\tilde{\theta }}.
\end{align}
\end{lemma}
\begin{IEEEproof}
Denote $u\left( \cdot \right)$ as the unit step function and $g\left( x, x_0 \right)=\frac{\sin \left( \chi M\left( x-{{x}_{0}} \right) \right)}{\chi \left( x-{{x}_{0}} \right)}$. Then, its Fourier Transform is given by $G\left( \varpi \right) =\mathscr{F}\{\frac{\sin \left( \chi M\left( x-{{x}_{0}} \right) \right)}{\chi \left( x-{{x}_{0}} \right)} \}=\frac{1}{\tilde{d}}{{e}^{-\text{j}\varpi {{x}_{0}}}}\left[ u\left( \varpi \!+\!\chi M \right)-u\left( \varpi \!-\!\chi M \right) \right]$.
Moreover, define $F\left( \varpi \right)$ as the Fourier Transform of $f\left( x \right)$. Then, there holds
\begin{align}
& \underset{M\to \infty }{\mathop{\lim }}\,\int_{-\infty }^{+\infty }{ g\left( x, x_0 \right) f\left( x \right)dx} = {{\left. \underset{M\to \infty }{\mathop{\lim }}\,\frac{1}{2{ \rm{\pi} }}G\left( \varpi \right)\otimes F\left( \varpi \right) \right|}_{\varpi =0}} \nonumber \\
= & \underset{M\to \infty }{\mathop{\lim }}\,\frac{1}{2{ \rm{\pi} }}\int_{-\chi M}^{\chi M}{\frac{1}{\tilde{d}}{{e}^{\text{j}\Omega {{x}_{0}}}}F\left( \Omega \right)d\Omega } = \frac{1}{\tilde{d} 2{ \rm{\pi} }}\int_{-\infty }^{+\infty }{F\left( \Omega \right){{e}^{\text{j}\Omega {{x}_{0}}}}d\Omega }=\frac{1}{\tilde{d}}f\left( {{x}_{0}} \right).
\end{align}
Therefore, we have
\begin{align}
& \int_{0}^{ \rm{\pi} }{\int_{0}^{ \rm{\pi} }{ {{\mathbf{a}}^{T}}\left( {\bar{\theta }} \right) {{\mathbf{a}}^{*}}\left( {\tilde{\theta }} \right) f\left( {\bar{\theta }} \right)d\cos \tilde{\theta }}d\cos \bar{\theta }} \nonumber \\
= & \int_{1}^{-1}{\int_{1}^{-1}{\frac{\sin \left( \chi M\left( y-x \right) \right)}{\sin \left( \chi \left( y-x \right) \right)}{{e}^{\text{j}\chi \left( M-1 \right)\left( y-x \right)}}f\left( y \right)}dxdy} \left( x=\cos \tilde{\theta }, y=\cos \bar{\theta } \right) \nonumber \\
\approx & \int_{-1}^{1}{\int_{-\infty }^{\infty }{\frac{\sin \left( \chi M\left( y-x \right) \right)}{\chi \left( y-x \right)}{{e}^{\text{j}\chi \left( M-1 \right)\left( y-x \right)}}f\left( y \right)dy}dx} \nonumber \\
\approx & \frac{1}{\tilde{d}}\int_{-1}^{1}{f\left( x \right)dx} \ \overset{x=\cos \tilde{\theta }}{\mathop{=}}\, \ \frac{1}{\tilde{d}}\int_{0}^{ \rm{\pi} }{f\left( {\tilde{\theta }} \right)\sin \tilde{\theta }d\tilde{\theta }}.
\end{align}
This completes the proof.
\end{IEEEproof}
Denote ${\mathbf{P}}_{1}\left( {\tilde{\varphi }} \right) = \mathbf{E}\left( {\tilde{\varphi }} \right)\left( \mathbf{DP}_{\mathbf{B}}^{\bot }+\mathbf{P}_{\mathbf{B}}^{\bot }{{\mathbf{D}}^{H}} \right){{\mathbf{E}}^{H}}\left( {\tilde{\varphi }} \right) $. Then
\begin{align*}
\frac{\partial {{g}_{\mathrm{n}}}}{\partial \tilde{\xi }} = -2\Re \left\{ \int_{0}^{ \rm{\pi} } {{{\mathbf{a}}^{T}}\left( {\tilde{\theta }} \right)\mathbf{Y}_{0}^{H}{\mathbf{P}}_{1}\left( {\tilde{\varphi }} \right)\mathbf{W}{{\mathbf{a}}^{*}}\left( {\tilde{\theta }} \right)d\cos \tilde{\theta }} \right\}.
\end{align*}
By virtue of Lemma 1, we arrive at
\begin{align}\label{Lemma1_usage}
& E\left\{ {{\left( \frac{\partial {g_{\mathrm{n}}}}{\partial \tilde{\xi }} \right)}^{2}} \right\} = 2E\left\{ {{\left| \int_{0}^{ \rm{\pi} }{{{\mathbf{a}}^{T}}\left( {\tilde{\theta }} \right)\mathbf{Y}_{0}^{H}{\mathbf{P}}_{1}\left( {\tilde{\varphi }} \right) \mathbf{W}{{\mathbf{a}}^{*}}\left( {\tilde{\theta }} \right)d\cos \tilde{\theta }} \right|}^{2}} \right\} \nonumber \\
\approx & 2\sigma _{\mathrm{n}}^{2}\int_{0}^{ \rm{\pi} }\int_{0}^{ \rm{\pi} }
\operatorname{tr} \left[ {{\mathbf{a}}^{*}}\left( {\tilde{\theta }} \right) {{\mathbf{a}}^{T}}\left( {\bar{\theta }} \right) \right] \underbrace{{{\mathbf{a}}^{T}}\left( {\tilde{\theta }} \right)\mathbf{Y}_{0}^{H}{\mathbf{P}}_{1}\left( {\tilde{\varphi }} \right) {\mathbf{P}}_{1}\left( {\bar{\varphi }} \right) {{\mathbf{Y}}_{0}}{{\mathbf{a}}^{*}}\left( {\bar{\theta }} \right)}_{f\left( \bar{\theta} \right)} d\cos \bar{\theta }d\cos \tilde{\theta } \nonumber \\
\approx & \frac{2\sigma _{\mathrm{n}}^{2}}{\tilde{d}}\int_{0}^{ \rm{\pi} }{{\mathbf{a}}^{T}}\left( {\tilde{\theta }} \right)\mathbf{Y}_{0}^{H}\mathbf{E}\left( {\tilde{\varphi }} \right)\mathbf{P} {{\mathbf{E}}^{H}}\left( {\tilde{\varphi }} \right){{\mathbf{Y}}_{0}}{{\mathbf{a}}^{*}}\left( {\tilde{\theta }} \right)\sin \tilde{\theta }d\tilde{\theta },
\end{align}
where $\mathbf{P} =\mathbf{DP}_{\mathbf{B}}^{\bot }{{\mathbf{D}}^{H}}+\mathbf{DP}_{\mathbf{B}}^{\bot }\mathbf{DP}_{\mathbf{B}}^{\bot }+\mathbf{P}_{\mathbf{B}}^{\bot }{{\mathbf{D}}^{H}}\mathbf{P}_{\mathbf{B}}^{\bot }{{\mathbf{D}}^{H}}+\mathbf{P}_{\mathbf{B}}^{\bot }{{\mathbf{D}}^{H}}\mathbf{DP}_{\mathbf{B}}^{\bot }$.
In the same way as (\ref{a120_intermed}) and (\ref{a23}), we obtain
\begin{align
& a_{12}^{\mathrm{n}} = E{{\left. \left\{ {{\left( \frac{\partial {g_{\mathrm{n}}}}{\partial \tilde{\xi }} \right)}^{2}} \right\} \right|}_{\boldsymbol{\tilde{\phi}} = \boldsymbol{\phi} }}
\approx \frac{2\sigma _{\mathrm{n}}^{2}}{\tilde{d}}\int_{0}^{ \rm{\pi} } {{\mathbf{a}}^{T}}\left( {\tilde{\theta }} \right)\mathbf{Y}_{0}^{H}\mathbf{E}\left( {\overset{\scriptscriptstyle\frown}{\varphi }} \right)\mathbf{P} {{\mathbf{E}}^{H}}\left( {\overset{\scriptscriptstyle\frown}{\varphi }} \right){{\mathbf{Y}}_{0}}{{\mathbf{a}}^{*}}\left( {\tilde{\theta }} \right)\sin \tilde{\theta }d\tilde{\theta } \nonumber \\
\approx & \frac{2\sigma _{\mathrm{n}}^{2}}{\tilde{d}{\rm{\pi}} }\left\| \mathbf{P}_{\mathbf{B}}^{\bot }{{\mathbf{D}}^{H}}\mathbf{B\Lambda} \right\|_{2}^{2} \int_{0}^{ \rm{\pi} }{\int_{0}^{ \rm{\pi} } {{{\boldsymbol{\alpha }}^{T}}{{\mathbf{A}}_{b}}{{\boldsymbol{\alpha }}^{*}}\sin \tilde{\theta } d\tilde{\theta }}d{{\theta }_{p}}} \approx \frac{2{\rm{\pi}}N \sigma_{\mathrm{n}}^{2}}{3\tilde{d}} \int_{0}^{ \rm{\pi} }{\int_{0}^{ \rm{\pi} } {{{\boldsymbol{\alpha }}^{T}}{{\mathbf{A}}_{b}}{{\boldsymbol{\alpha }}^{*}}\sin \tilde{\theta } d\tilde{\theta }}d{{\theta }_{p}}}.
\end{align}
Similarly, we can obtain $a_{11}^{\mathrm{n}}$ given by (\ref{a11n}).
\vspace{-0.6em}
\section{Demonstration of the negligibility of term $a_{22}$}{\label{CrossTerm}}
Let ${f_k}\left( \tilde{\theta },{{\theta }_{p}} \right)$ denote the following function
\begin{align}
& {f_k}\left( \tilde{\theta },{{\theta }_{p}} \right) = \sin \tilde{\theta }\frac{{{\sin }^{2}}\left( \chi J\left( \cos \tilde{\theta }-\cos {{\theta }_{p}} \right) \right)}{{{\sin }^{2}}\left( \chi \left( \cos \tilde{\theta }-\cos {{\theta }_{p}} \right) \right)}{{e}^{\text{j}2\chi J\left( \cos \tilde{\theta }-\cos {{\theta }_{p}} \right)k}}.
\end{align}
Define $\zeta _{22}^{k}=\int_{0}^{ \rm{\pi} }{\int_{0}^{ \rm{\pi} }{2\cos \tilde{\theta }{f_k}\left( \tilde{\theta },{{\theta }_{p}} \right)d\tilde{\theta }}d{{\theta }_{p}}}$. Then, it is not difficult to verify that
\begin{align}
& \Re \left\{ \cos \left( { \rm{\pi} }-\tilde{\theta } \right){f_k}\left( { \rm{\pi} }-\tilde{\theta },{ \rm{\pi} }-{{\theta }_{p}} \right) \right\}=-\Re \left\{ \cos \tilde{\theta }{f_k}\left( \tilde{\theta },{{\theta }_{p}} \right) \right\}, \nonumber \\
& \Re \left\{ \cos \left( { \rm{\pi} }-\tilde{\theta } \right){f_k}\left( { \rm{\pi} }-\tilde{\theta },{{\theta }_{p}} \right) \right\}=-\Re \left\{ \cos \tilde{\theta }{f_k}\left( \tilde{\theta },{ \rm{\pi} } -{{\theta }_{p}} \right) \right\}.
\end{align}
Therefore, we have $\Re \left\{ \zeta _{22}^{k} \right\}=\Re \left\{ \int_{0}^{ \rm{\pi} }{\int_{0}^{ \rm{\pi} }{2\cos \tilde{\theta }{f_k}\left( \tilde{\theta },{{\theta }_{p}} \right)d\tilde{\theta }}d{{\theta }_{p}}} \right\}=0$, $\zeta _{22}^{0}=\Re \left\{ \zeta _{22}^{0} \right\}=0$ and ${{\left. \zeta _{22}^{k} \right|}_{k\ne 0}}=\text{j}\Im \left\{ \zeta _{22}^{k} \right\}$.
For a given ${{\theta }_{p}}$, the range of $\tilde{\theta }$ that confines ${f_k}\left( \tilde{\theta },{{\theta }_{p}} \right)$ to the main beam lobe is determined by $\left| \cos \tilde{\theta }-\cos {{\theta }_{p}} \right|\le \frac{1}{\tilde{d} J}$, i.e.,
$\vartheta_{1} = \arccos \left( \cos {{\theta }_{p}}+\frac{1}{\tilde{d} J} \right)\le \tilde{\theta }\le \arccos \left( \cos {{\theta }_{p}}-\frac{1}{\tilde{d} J} \right) = \vartheta_{2} $.
Besides, $\frac{{{\sin }^{2}}{ \rm{\pi} }X\varphi }{{{\sin }^{2}}{ \rm{\pi} }\varphi }\lessapprox {{X}^{2}}{{\cos }^{2}}\frac{{ \rm{\pi} }X\varphi }{2}$ holds for $\left| \varphi \right|\le \frac{1}{X}$.
Hence, we have
\begin{align}\label{zeta22}
& {{\left. \Im \left\{ \zeta _{22}^{k} \right\} \right|}_{k\ne 0}} =\Im \left\{ \int_{0}^{ \rm{\pi} }{\int_{0}^{ \rm{\pi} }{2\cos \tilde{\theta }{f_k}\left( \tilde{\theta },{{\theta }_{p}} \right)d\tilde{\theta }}d{{\theta }_{p}}} \right\}, \nonumber \\
\approx & -{J^{2}}\int_{0}^{ \rm{\pi} }\int_{\vartheta_{1}}^{\vartheta_{2}}\text{2}\cos \tilde{\theta } {{\cos }^{2}}\left( \frac{\chi J}{2}\left( \cos \tilde{\theta }-\cos {{\theta }_{p}} \right) \right) \sin \left( 2\chi J\left( \cos \tilde{\theta }-\cos {{\theta }_{p}} \right)k \right)d\cos \tilde{\theta }d{{\theta }_{p}} \nonumber \\
\overset{*}{\mathop{=}}\, & \frac{2}{{{\chi}^{2}}}\int_{0}^{ \rm{\pi} } {\int_{-{ \rm{\pi} }}^{ \rm{\pi} }{\left( \tilde{y}+\chi J\cos {{\theta }_{p}} \right){{\cos }^{2}}\frac{{\tilde{y}}}{2}\sin \left( 2k\tilde{y} \right)d\tilde{y}}d{{\theta }_{p}}} \nonumber \\
= & \frac{\pi}{2{{\chi}^{2}}} \int_{-{ \rm{\pi} }}^{ \rm{\pi} }\tilde{y} \left( 2\sin \left( 2k\tilde{y} \right)+\sin \left( \left( 2k+1 \right)\tilde{y} \right)+\sin \left( \left( 2k-1 \right)\tilde{y} \right) \right)d\tilde{y} = \frac{1}{{{\tilde{d}}^{2}}k\left( 4{{k}^{2}}-1 \right)},
\end{align}
where $\overset{*}{\mathop{=}}$ in (\ref{zeta22}) and hereinafter denotes the variable substitution ${\tilde{y}=\chi J\left( \cos \tilde{\theta }-\cos {{\theta }_{p}} \right)}$.
Similarly, define $\zeta _{21}^{k}=\int_{0}^{ \rm{\pi} }{\int_{0}^{ \rm{\pi} }{{{\cos }^{2}}\tilde{\theta }{{f}_{k}}\left( \tilde{\theta },{{\theta }_{p}} \right)d\tilde{\theta }}d{{\theta }_{p}}}$. Then, we have $\zeta _{21}^{k}=\Re \left\{ \zeta _{21}^{k} \right\}$ and
\begin{align}\label{zeta21}
\zeta _{21}^{0}= & \int_{0}^{ \rm{\pi} }{\int_{0}^{ \rm{\pi} }{{{\cos }^{2}}\tilde{\theta }{{f}_{0}}\left( \tilde{\theta },{{\theta }_{p}} \right)d\tilde{\theta }}d{{\theta }_{p}}}
\approx -{{J}^{2}}\int_{0}^{ \rm{\pi} }{\int_{\vartheta_{1}}^{\vartheta_{2}}{{{\cos }^{2}}\tilde{\theta }}} {{\cos }^{2}}\left( \frac{\chi J}{2}\left( \cos \tilde{\theta }-\cos {{\theta }_{p}} \right) \right)d\cos \tilde{\theta }d{{\theta }_{p}} \nonumber \\
\overset{*}{\mathop{=}}\,& \frac{J}{\chi}\int_{0}^{ \rm{\pi} }{\int_{-{ \rm{\pi} }}^{ \rm{\pi} }{{{\left( \frac{{\tilde{y}}}{\chi J}+\cos {{\theta }_{p}} \right)}^{2}}\frac{\cos \tilde{y}+1}{2}d\tilde{y}}d{{\theta }_{p}}}
\approx \frac{J}{\text{2}\chi}\int_{0}^{ \rm{\pi} }{{{\cos }^{2}}{{\theta }_{p}}\int_{-{ \rm{\pi} }}^{ \rm{\pi} }{\left( \cos \tilde{y}+1 \right)d\tilde{y}}d{{\theta }_{p}}}=\frac{{ \rm{\pi} }J}{\text{2}\tilde{d}}.
\end{align}
Besides, define $\zeta _{23}^{k}=\int_{0}^{ \rm{\pi} }{\int_{0}^{ \rm{\pi} }{{{f}_{k}}\left( \tilde{\theta },{{\theta }_{p}} \right)d\tilde{\theta }}d{{\theta }_{p}}}$. Then, we have $\zeta _{23}^{k}=\Re \left\{ \zeta _{23}^{k} \right\}$ and similar to the derivation of (\ref{zeta21}), there holds
\begin{align}\label{zeta23}
\zeta _{23}^{0}=\int_{0}^{ \rm{\pi} }{\int_{0}^{ \rm{\pi} }{{{f}_{0}}\left( \tilde{\theta },{{\theta }_{p}} \right)d\tilde{\theta }}d{{\theta }_{p}}} \approx \frac{{ \rm{\pi} }J}{\tilde{d}}.
\end{align}
The results in (\ref{zeta22}), (\ref{zeta21}) and (\ref{zeta23}) reveal that $\zeta _{23}^{0}\approx 2\zeta _{21}^{0}\gg \left| \zeta _{22}^{k} \right|$. Taking $\tilde{d}=0.45,\ M=64$ and $K=4$ as an example, we have $\zeta _{21}^{0}\approx 55.85,\ \zeta _{23}^{0}\approx 111.7,\ \zeta _{22}^{0}=0,\ \zeta _{22}^{1} = {\zeta _{22}^{-1}}^{*}\approx 1.65\text{j},\ \zeta _{22}^{2} = {\zeta _{22}^{-2}}^{*} \approx 0.165\text{j},\ \zeta _{22}^{3} = {\zeta _{22}^{-3}}^{*} \approx 0.047\text{j}$.
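These numerical values follow directly from the closed forms $\zeta _{21}^{0}\approx \frac{{ \rm{\pi} }J}{2\tilde{d}}$, $\zeta _{23}^{0}\approx \frac{{ \rm{\pi} }J}{\tilde{d}}$ and $\Im \left\{ \zeta _{22}^{k} \right\}\approx \frac{1}{{{\tilde{d}}^{2}}k\left( 4{{k}^{2}}-1 \right)}$, with $J=M/K$ (the assumed sub-array size), as the following sketch confirms:

```python
import math

d, M, K = 0.45, 64, 4
J = M // K   # sub-array size J = M/K (assumed from the array partitioning)

zeta21_0 = math.pi * J / (2 * d)                               # ~ 55.85
zeta23_0 = math.pi * J / d                                     # ~ 111.7
# Imaginary parts of zeta_22^k for k = 1, 2, 3 from (\ref{zeta22})
zeta22 = {k: 1 / (d ** 2 * k * (4 * k ** 2 - 1)) for k in (1, 2, 3)}

print(round(zeta21_0, 2), round(zeta23_0, 1))       # 55.85 111.7
print({k: round(v, 3) for k, v in zeta22.items()})  # {1: 1.646, 2: 0.165, 3: 0.047}
```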
Moreover, define ${\mathbf{A}}_{2n}$ whose $(p,q)$th element is $[{\mathbf{A}}_{2n}]_{p,q} = \zeta _{2n}^{q-p}$. We have ${{a}_{2n}}=\frac{2{\rm{\pi}}N}{3}{{\boldsymbol{\alpha }}^{T}} \mathbf{A}_{2n} {{\boldsymbol{\alpha }}^{\text{*}}}, \\ n=1,2,3 $.
Thus, ${{a}_{23}} \approx 2{{a}_{21}}\gg {{a}_{22}}$, i.e., ${a}_{22}$ is negligible compared with ${{a}_{21}}$ and ${{a}_{23}}$.
\vspace{-0.6em}
\section{MSE performance simplification in the case of fully calibrated ULA}{\label{MSE_ULA_proof}}
From (\ref{MSE0n}), there holds
\begin{align}\label{MSEn_pULA}
{{\text{MSE}}_{\mathrm{n}}}\left\{ {{{\tilde{f}}}_{d}} \right\} & \approx \frac{3\sigma _{\mathrm{n}}^{2}}{2{ \rm{\pi} }N\tilde{d} \underbrace{\int_{0}^{ \rm{\pi} }{\int_{0}^{ \rm{\pi} }{{{\cos }^{2}}\tilde{\theta }{{\boldsymbol{\alpha }}^{T}}{{\mathbf{A}}_{b}}{{\boldsymbol{\alpha }}^{\text{*}}} \sin \tilde{\theta} d\tilde{\theta }}d{{\theta }_{p}}}}_{{\rho_{1}}} }, \nonumber \\
{{\text{MSE}}_{\mathrm{n}}}\left\{ {{{\tilde{\xi}}}} \right\} & \approx \frac{3\sigma _{\mathrm{n}}^{2}}{2{ \rm{\pi} }N\tilde{d} \underbrace{\int_{0}^{ \rm{\pi} }{\int_{0}^{ \rm{\pi} }{{{\boldsymbol{\alpha }}^{T}}{{\mathbf{A}}_{b}}{{\boldsymbol{\alpha }}^{\text{*}}} \sin \tilde{\theta} d\tilde{\theta }}d{{\theta }_{p}}}}_{{\rho_{2}}} },
\end{align}
and
\begin{align}\label{MSE0_pULA}
{{\text{MSE}}_{0}}\left\{ {{{\tilde{f}}}_{d}} \right\} & \approx \frac{9} {\left( 2{\rm{\pi}}{\rho_{1}} \right)^2 } \underbrace{\left[ \int_{0}^{ \rm{\pi} }\int_{0}^{ \rm{\pi} } \frac{10 - {{\left( { \rm{\pi} } {{f}_{d}}\tilde{x} \right)}^{2}}}{30} \sin \left( { \rm{\pi} }{{f}_{d}}\tilde{x} \right){{\boldsymbol{\alpha }}^{T}}{{\mathbf{A}}_{b}}{{\boldsymbol{\alpha }}^{\text{*}}}\sin 2\tilde{\theta }d\tilde{\theta }d{{\theta }_{p}} \right]^2}_{ \left( \varrho_{1} \right)^2}, \nonumber \\
{{\text{MSE}}_{0}}\left\{ {{{\tilde{\xi}}}} \right\} & \approx \frac{9} { \left( {\rm{\pi}}{\rho_{2}} \right)^2 } \underbrace{\left[ \int_{0}^{ \rm{\pi} }\int_{0}^{ \rm{\pi} } \frac{10 - {{\left( { \rm{\pi} } {{f}_{d}}\tilde{x} \right)}^{2}}}{30} \sin \left( { \rm{\pi} }{{f}_{d}}\tilde{x} \right){{\boldsymbol{\alpha }}^{T}}{{\mathbf{A}}_{b}}{{\boldsymbol{\alpha }}^{\text{*}}}\sin \tilde{\theta }d\tilde{\theta }d{{\theta }_{p}} \right]^2}_{ \left( \varrho_{2} \right)^2}.
\end{align}
In the case of the fully calibrated ULA, ${\rho_{1}}$ can be simplified as
\begin{align}\label{rho1}
{{\left. {{\rho }_{1}} \right|}_{\boldsymbol{\alpha }=\mathbf{1}}}= & \int_{0}^{ \rm{\pi} }{\int_{0}^{ \rm{\pi} } {{{\cos }^{2}}\tilde{\theta }{{\left. \left( {{\boldsymbol{\alpha }}^{T}}{{\mathbf{A}}_{b}}{{\boldsymbol{\alpha }}^{*}} \right) \right|}_{\boldsymbol{\alpha }=\mathbf{1}}}\sin \tilde{\theta }d\tilde{\theta }}d{{\theta }_{p}}} \nonumber \\
= & \int_{0}^{ \rm{\pi} }\int_{0}^{ \rm{\pi} } {{\mathbf{a}}^{T}}\left( {\tilde{\theta }} \right) {{\mathbf{a}}^{*}}\left( {{\theta }_{p}} \right) \underbrace{{{\mathbf{a}}^{T}}\left( {{\theta }_{p}} \right) {{\mathbf{a}}^{*}}\left( {\tilde{\theta }} \right) \frac{{{\cos }^{2}}\tilde{\theta } }{\sin {{\theta }_{p}}}}_{f\left( {\tilde{\theta }} \right)}d\cos \tilde{\theta }d\cos {{\theta }_{p}} \nonumber \\
\approx & \int_{0}^{ \rm{\pi} }{\frac{1}{\tilde{d}} {{\mathbf{a}}^{T}}\left( {{\theta }_{p}} \right) {{\mathbf{a}}^{*}}\left( {{\theta }_{p}} \right) \frac{{{\cos }^{2}}{{\theta }_{p}}}{\sin {{\theta }_{p}}}\sin {{\theta }_{p}}d{{\theta }_{p}}} =\frac{{ \rm{\pi} }M}{2\tilde{d}},
\end{align}
and in the same way, we get ${\left. {{\rho }_{2}} \right|}_{\boldsymbol{\alpha } = \mathbf{1}} = \int_{0}^{ \rm{\pi} }{\int_{0}^{ \rm{\pi} } {{{\left. \left( {{\boldsymbol{\alpha }}^{T}}{{\mathbf{A}}_{b}}{{\boldsymbol{\alpha }}^{*}} \right) \right|}_{\boldsymbol{\alpha }=\mathbf{1}}}\sin \tilde{\theta }d\tilde{\theta }}d{{\theta }_{p}}} \approx \frac{{ \rm{\pi} }M}{\tilde{d}}$.
Substituting the simplified ${\rho_{1}}$ and ${\rho_{2}}$ into (\ref{MSEn_pULA}) leads to
${{\text{MSE}}_{\mathrm{n}}}\left\{ {{{\tilde{f}}}_{d}} \right\} \approx \frac{3\sigma _{\mathrm{n}}^{2}}{{{ \rm{\pi} }^{2}}MN}$ and ${{\text{MSE}}_{\mathrm{n}}}\left\{ {\tilde{\xi }} \right\} \approx \frac{3\sigma _{\mathrm{n}}^{2}}{2{{ \rm{\pi} }^{2}}MN}$.
Moreover, in the case of a fully calibrated ULA, ${\varrho}_{2}$ can be simplified as
\begin{align}\label{varrho2}
{{\left. {{\varrho }_{2}} \right|}_{\boldsymbol{\alpha }=\mathbf{1}}} = & \int_{0}^{ \rm{\pi} }\int_{0}^{ \rm{\pi} } \frac{10 - {{\left( { \rm{\pi} } {{f}_{d}}\tilde{x} \right)}^{2}}}{30} \sin \left( { \rm{\pi} }{{f}_{d}}\tilde{x} \right) {{\left. \left( {{\boldsymbol{\alpha }}^{T}}{{\mathbf{A}}_{b}}{{\boldsymbol{\alpha }}^{*}} \right) \right|}_{\boldsymbol{\alpha }=\mathbf{1}}} \sin \tilde{\theta }d\tilde{\theta }d{{\theta }_{p}}, \nonumber \\
= & \int_{0}^{ \rm{\pi} }\int_{0}^{ \rm{\pi} } {{\mathbf{a}}^{T}}\left( {\tilde{\theta }} \right) {{\mathbf{a}}^{*}}\left( {{\theta }_{p}} \right) \underbrace{{{\mathbf{a}}^{T}}\left( {{\theta }_{p}} \right) {{\mathbf{a}}^{*}}\left( {\tilde{\theta }} \right) \frac{10 - {{\left( { \rm{\pi} } {{f}_{d}}\tilde{x} \right)}^{2}}}{30} \frac{ \sin \left( { \rm{\pi} }{{f}_{d}}\tilde{x} \right) }{\sin {{\theta }_{p}}}}_{f\left( {\tilde{\theta }} \right)}d\cos \tilde{\theta }d\cos {{\theta }_{p}} \nonumber \\
\approx & \int_{0}^{ \rm{\pi} }{\frac{1}{\tilde{d}} {{\mathbf{a}}^{T}}\left( {{\theta }_{p}} \right) {{\mathbf{a}}^{*}}\left( {{\theta }_{p}} \right) \frac{10 - {{\left( { \rm{\pi} } {{f}_{d}} { {{\left. {\tilde{x}} \right|}_{\tilde{\theta} = \theta_{p} }} } \right)}^{2}}}{30} \frac{ \sin \left( { \rm{\pi} }{{f}_{d}} { {{\left. {\tilde{x}} \right|}_{\tilde{\theta} = \theta_{p} }} } \right) }{\sin {{\theta }_{p}}} \sin {{\theta }_{p}}d{{\theta }_{p}}} = 0.
\end{align}
By the same argument as in (\ref{varrho2}), we have ${{\left. {{\varrho }_{1}} \right|}_{\boldsymbol{\alpha }=\mathbf{1}}} \approx 0$.
Hence, ${{\text{MSE}}_{0}}\left\{ {{{\tilde{f}}_{d}}} \right\} \approx 0$ and ${{\text{MSE}}_{0}}\left\{ {{{\tilde{\xi}}}} \right\} \approx 0$.
\vspace{-0.6em}
\section{Expectation of ${\bf R}_{1,l,p}$ with respect to ${\theta }_{l,p}$}{\label{Expectation}}
For ease of presentation, we abbreviate ${{\mathbf{R}}_{1,l,p}}$ as ${{\mathbf{R}}_{1}}=\mathbf{R}\left( \theta \right)\otimes \left( \mathbf{e}\left( {{f}_{d}}\cos \theta \right){{\mathbf{e}}^{H}}\left( {{f}_{d}}\cos \theta \right) \right)$ and $\theta_{l,p}$ as $\theta$. Then the $(p,q)$th element of the $(m,n)$th submatrix of ${{\mathbf{R}}_{1}}$ is given by
\begin{align}
{{\mathbf{R}}_{1, mn-pq}}& ={{e}^{\text{j}2\chi\left( m-n \right)\cos \theta }}{{e}^{\text{j} \frac{2{\rm\pi}}{N}\left( p-q \right){{f}_{d}}\cos \theta }} ={{e}^{\text{j}x\cos \theta }}, x=2\chi\left( m-n \right)+2{\rm\pi} \frac{p-q}{N}{{f}_{d}}.
\end{align}
As $\theta \sim U\left(0,2{\rm\pi} \right)$, and noting that the imaginary part of the integrand integrates to zero by symmetry, we have
\begin{align}\label{R1Expectation}
E\left\{ {{e}^{\text{j}x\cos \theta }} \right\} = \int_{0}^{2{\rm\pi}}{{\frac{{e}^{\text{j}x\cos \theta }}{2{\rm\pi}} }d\theta } =\int_{0}^{2{\rm\pi}}{\frac{\cos \left( x\cos \theta \right)}{2{\rm\pi}} d\theta } = \frac{1}{2{\rm\pi}}\int_{-{\rm\pi}}^{{\rm\pi}}{\cos \left( x\sin \theta \right)d\theta }={{J}_{0}}\left( x \right).
\end{align}
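The identity in (\ref{R1Expectation}) can be checked numerically. The following sketch (using SciPy, with a few illustrative values of $x$) confirms that the circular average of $e^{\text{j}x\cos \theta }$ equals ${{J}_{0}}\left( x \right)$ and that its imaginary part vanishes.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def expected_phase(x):
    # E{e^{j x cos(theta)}} for theta ~ U(0, 2*pi)
    re, _ = quad(lambda t: np.cos(x * np.cos(t)) / (2 * np.pi), 0.0, 2.0 * np.pi)
    im, _ = quad(lambda t: np.sin(x * np.cos(t)) / (2 * np.pi), 0.0, 2.0 * np.pi)
    return complex(re, im)

for x in (0.0, 0.7, 2.3, 5.1):
    val = expected_phase(x)
    assert abs(val.imag) < 1e-9           # the odd part integrates to zero
    assert abs(val.real - j0(x)) < 1e-9   # circular average equals J_0(x)
```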
Define the operator $\oplus $ such that the $\left( m, n \right)$th submatrix of $A\oplus B$ is given by ${{a}_{mn}}+B$, where ${{a}_{mn}}$ is the $(m,n)$th element of $A$. Define ${{\mathbf{U}}_{M}}\in {{\mathbb{C}}^{M\times M}}$ whose $(m,n)$th element is $m-n$ and define ${{\mathbf{U}}_{N}}\in {{\mathbb{C}}^{N\times N}}$ whose $(p,q)$th element is $p-q$. Then with the results in (\ref{R1Expectation}), we readily arrive at (\ref{R1}), where $\mathbf{U}\left( {{f}_{d}} \right)$ is defined as $\mathbf{U}\left( {{f}_{d}} \right) = 2{\rm\pi}\frac{d}{\lambda }{{\mathbf{U}}_{M}} \oplus 2{\rm\pi} \frac{{{f}_{d}}}{N}{{\mathbf{U}}_{N}}$.
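As a concrete illustration, $\mathbf{U}_M$, $\mathbf{U}_N$, the operator $\oplus$, and the resulting element-wise Bessel matrix ${{J}_{0}}\left( \mathbf{U}\left( {{f}_{d}} \right) \right)$ can be built in a few lines of NumPy. The dimensions $M$, $N$, the spacing ratio $d/\lambda$, and the Doppler $f_d$ below are illustrative placeholders, not values from the paper.

```python
import numpy as np
from scipy.special import j0

def oplus(A, B):
    """(m,n)th submatrix of A oplus B is a_mn + B, for A of size MxM, B of size NxN."""
    M, N = A.shape[0], B.shape[0]
    return np.kron(A, np.ones((N, N))) + np.kron(np.ones((M, M)), B)

M, N = 3, 4                              # illustrative numbers of antennas / pulses
d_over_lam, f_d = 0.5, 0.3               # illustrative spacing ratio and Doppler
U_M = np.subtract.outer(np.arange(M), np.arange(M))  # (m,n)th element is m - n
U_N = np.subtract.outer(np.arange(N), np.arange(N))  # (p,q)th element is p - q
U_fd = oplus(2.0 * np.pi * d_over_lam * U_M, 2.0 * np.pi * f_d / N * U_N)
R1_tilde = j0(U_fd)                      # element-wise J_0: the expectation of R_1
```

Because $\mathbf{U}\left( {{f}_{d}} \right)$ is antisymmetric and $J_0$ is even, the resulting matrix is symmetric with unit diagonal.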
\vspace{-0.8em}
\section{Calculation of The First-Order Derivatives}{\label{Derivative}}
For the zero-order Bessel function of the first kind, we have $\frac{\partial {{J}_{0}}\left( x \right)}{\partial x} =-\frac{1}{2}\left( {{J}_{1}}\left( x \right)-{{J}_{-1}}\left( x \right) \right)=-{{J}_{1}}\left( x \right)$, since ${{J}_{-1}}\left( x \right)=-{{J}_{1}}\left( x \right)$.
Consequently, the first-order derivative (FOD) of ${{\mathbf{\tilde{R}}}_{1}}$ with respect to ${{f}_{d}}$ is given by
\begin{align}
& \frac{\partial {{{\mathbf{\tilde{R}}}}_{1}}}{\partial {{f}_{d}}} =\frac{\partial {{J}_{0}}\left( \mathbf{U}\left( {{f}_{d}} \right) \right)}{\partial \mathbf{U}\left( {{f}_{d}} \right)}\odot \frac{\partial \mathbf{U}\left( {{f}_{d}} \right)}{\partial {{f}_{d}}} = -\frac{{{J}_{1}}\left( \mathbf{U}\left( {{f}_{d}} \right) \right)-{{J}_{-1}}\left( \mathbf{U}\left( {{f}_{d}} \right) \right)}{2} \odot \left( {{\mathbf{0}}_{M}}\oplus \frac{2{\rm\pi}}{N}{{\mathbf{U}}_{N}} \right).
\end{align}
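This chain-rule expression can be validated against a central finite difference. The sketch below reuses the $\oplus$ construction with illustrative dimensions and an illustrative $\chi = {\rm\pi} d/\lambda$; none of these values are taken from the paper.

```python
import numpy as np
from scipy.special import jv

def oplus(A, B):
    M, N = A.shape[0], B.shape[0]
    return np.kron(A, np.ones((N, N))) + np.kron(np.ones((M, M)), B)

M, N = 3, 4
chi = np.pi * 0.5                        # illustrative chi = pi * d / lambda
U_M = np.subtract.outer(np.arange(M), np.arange(M))
U_N = np.subtract.outer(np.arange(N), np.arange(N))
U = lambda fd: oplus(2.0 * chi * U_M, 2.0 * np.pi * fd / N * U_N)

fd = 0.3
# analytic FOD: -(J_1(U) - J_{-1}(U))/2, Hadamard-multiplied by dU/df_d
dU_dfd = oplus(np.zeros((M, M)), 2.0 * np.pi / N * U_N)
analytic = -(jv(1, U(fd)) - jv(-1, U(fd))) / 2.0 * dU_dfd
# central finite difference of J_0(U(f_d))
eps = 1e-6
numeric = (jv(0, U(fd + eps)) - jv(0, U(fd - eps))) / (2.0 * eps)
assert np.allclose(analytic, numeric, atol=1e-6)
```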
The FOD of ${{\mathbf{R}}_{2}}$ with respect to $\xi $ is given by
\begin{align}
\frac{\partial {{\mathbf{R}}_{2}}}{\partial \xi } ={{\mathbf{1}}_{M}}\otimes \left( \mathbf{De}\left( \xi \right){{\mathbf{e}}^{H}}\left( \xi \right)+\mathbf{e}\left( \xi \right){{\mathbf{e}}^{H}}\left( \xi \right){{\mathbf{D}}^{H}} \right).
\end{align}
Define ${{\mathbf{i}}_{k}}\in {{\mathbb{C}}^{K\times 1}}$ as the $k$th column of the $K \times K$ identity matrix ${{\mathbf{I}}_{K}}$. Then the FOD of ${{\mathbf{R}}_{3}}$ with respect to the $k$th element of $\Re \left( \boldsymbol{\alpha } \right)$ and that of $\Im \left( \boldsymbol{\alpha } \right)$ can be computed, respectively, as
\begin{align}
\frac{\partial {{\mathbf{R}}_{3}}}{\partial \Re \left( {{\alpha }_{k}} \right)} =\left( \mathbf{U}\left( {{\mathbf{i}}_{k}}{{\boldsymbol{\alpha }}^{H}}+{\boldsymbol{\alpha}}\mathbf{ i}_{k}^{T} \right){{\mathbf{U}}^{T}} \right)\otimes {{\mathbf{1}}_{N}}, \ \frac{\partial {{\mathbf{R}}_{3}}}{\partial \Im \left( {{\alpha }_{k}} \right)} =\left( \mathbf{U}\left( \text{j}{{\mathbf{i}}_{k}}{{\boldsymbol{\alpha }}^{H}}-\text{j}{\boldsymbol{\alpha}}\mathbf{ i}_{k}^{T} \right){{\mathbf{U}}^{T}} \right)\otimes {{\mathbf{1}}_{N}}.
\end{align}
Finally, $\frac{\partial \mathbb{R}}{\partial {{\eta }_{l}}}$ can be expressed as
\begin{align}
& \frac{\partial \mathbb{R}}{\partial {{f}_{d}}}=\frac{\partial {{{\mathbf{\tilde{R}}}}_{1}}}{\partial {{f}_{d}}}\odot {{\mathbf{R}}_{2}}\odot {{\mathbf{R}}_{3}}\odot {{{\mathbf{\tilde{R}}}}_{4}},\
\frac{\partial \mathbb{R}}{\partial \xi } ={{{\mathbf{\tilde{R}}}}_{1}}\odot \frac{\partial {{\mathbf{R}}_{2}}}{\partial \xi }\odot {{\mathbf{R}}_{3}}\odot {{{\mathbf{\tilde{R}}}}_{4}},\ \frac{\partial \mathbb{R}}{\partial {{\sigma }_{\mathrm{n}}^{2}}} ={{\mathbf{I}}_{MN}}, \nonumber \\
& \frac{\partial \mathbb{R}}{\partial \Re \left( {{\mathbf{\alpha }}_{k}} \right)} ={{{\mathbf{\tilde{R}}}}_{1}}\odot {{\mathbf{R}}_{2}}\odot \frac{\partial {{\mathbf{R}}_{3}}}{\partial \Re \left( {{\mathbf{\alpha }}_{k}} \right)}\odot {{{\mathbf{\tilde{R}}}}_{4}},\
\frac{\partial \mathbb{R}}{\partial \Im \left( {{\mathbf{\alpha }}_{k}} \right)} ={{{\mathbf{\tilde{R}}}}_{1}}\odot {{\mathbf{R}}_{2}}\odot \frac{\partial {{\mathbf{R}}_{3}}}{\partial \Im \left( {{\mathbf{\alpha }}_{k}} \right)}\odot {{{\mathbf{\tilde{R}}}}_{4}}.
\end{align}
\linespread{1.24}
Although there is much current interest in using combinations of molecularly targeted drugs to improve outcomes for cancer patients \cite{Jameson2015,Gatzka2018}, relatively little work has been done in the area of formal therapy design, meaning therapy selection and/or scheduling driven by insights from mathematical models \cite{Anderson2008,Michor2015}. Formal approaches to therapy design are potentially useful for at least three reasons. First, all possible combinations of drugs may be difficult, if not impossible, to evaluate experimentally simply because of the large number of possible combinations. Second, an ability to extrapolate accurately beyond well-characterized scenarios with the aid of predictive models would be valuable for individualized treatment, especially in cases where molecular causes of disease are diverse and vary from patient to patient, as in many forms of cancer \cite{Vogelstein2013}. Third, it is often non-obvious how the immediate effects of drug perturbations propagate through a cellular regulatory network to affect cellular phenotypes and fates \cite{Rukhlenko2018} or how drug combinations might be deployed to avoid or delay the emergence of resistance, a common response of malignant cells to targeted therapies \cite{Ramos2015}. Predictive models promise to help identify new robust therapies.
Here, we apply mathematical modeling and optimal control methods to design drug schedules for manipulating autophagy, a stress-relieving/homeostatic cellular recycling process that, when nutrients are in limited supply, generates building blocks for protein synthesis through degradation of cytoplasmic contents \cite{Klionsky2000}, such as cytotoxic protein aggregates that are too large for proteosomal degradation and damaged organelles (e.g., depolarized mitochondria). Autophagy also plays an important role in immunity \cite{Deretic2013,Levine2011}; the autophagic degradative machinery can be directed to target intracellular microbes, such as {\it Mycobacterium tuberculosis}, for destruction.
Cytoplasmic contents that are targeted for autophagic degradation are first trapped in double-membrane vesicles, termed autophagosomes or autophagic vesicles (AVs), and then delivered to lysosomes for digestion \cite{mizushima2008autophagy,nakatogawa2009dynamics}. The production of AVs is controlled by an intricate regulatory network, in which three protein kinase-containing complexes are prominent: the heterotrimeric AMP-activated kinase (AMPK), which senses energy (glucose) supply through interactions with adenosine derivatives (AMP and ATP) \cite{kahn2005amp,loffler2011ulk1}; MTOR complex 1 (MTORC1), which senses amino acid supply and growth factor signaling through interactions with small GTPases localized to lysosomal surfaces (Rag proteins and RHEB) \cite{zoncu2011mtor,nazio2013mtor}; and the ULK1 complex, which is activated by AMPK and repressed by MTORC1 \cite{shang2011nutrient,dunlop2011ulk1,kim2011ampk}. A fourth complex, which contains a lipid kinase, VPS34, also plays an important role \cite{fimia2007ambra1,di2010dynamic}. Interestingly, VPS34 and MTOR are phylogenetically related: they are both members of the phosphoinositide 3-kinase (PI3K) family. Drugs with specificity for each of these kinases are available, and because of the relationship between MTOR and VPS34, drugs are also available with dual specificity for this pair of kinases \cite{Galluzzi2017,Moschetta2014,Hardie2013}.
In cancer, and other contexts, autophagy is a double-edged sword \cite{Shintani2004}. It can protect cancer cells from stresses of the tumor environment (e.g., lack of nutrients because of defective vasculature) or induce cell death if recycling is excessive. Thus, there are potential benefits to be gained by using drugs to either upregulate autophagy (to kill malignant cells through excessive recycling) or downregulate autophagy (to kill cancer cells that rely on autophagy for survival) \cite{MulcahyLevy2017}.
To investigate how single drugs and drug pairs might be best used for these purposes, we constructed a system of nonlinear ordinary differential equations (ODE) that captures regulatory interactions between MTORC1, ULK1, AMPK, and VPS34, as well as the idealized pharmacokinetics of kinase inhibitors specific for MTORC1, ULK1, AMPK, and VPS34, such as rapamycin \cite{edwards2007rapamycin}, SBI-0206965 \cite{egan2015small}, dorsomorphin \cite{meley2006amp}, and SAR405 \cite{ronan2014highly}, respectively. We also considered an allosteric activator of AMPK (e.g., PF-06409577\cite{cameron2016discovery}) and a kinase inhibitor with dual specificity for MTORC1 and VPS34 (e.g., buparlisib\cite{burger2011identification}). Although the model is minimalist by design, it reproduces key behavioral features of earlier, more mechanistically detailed models \cite{szymanska2015computational,martin2013computational}, such as oscillatory responses to intermediate levels of nutrient or energy stress. We then applied optimization methods implemented in the open-source $\mathcal{PSOPT}$ software package \cite{becerra2010solving} to find locally optimal dosing schedules that minimize the total amount of drug needed to drive the network to a desired, non-attracting operating point (corresponding to low or high AV count/turnover) and maintain it there. The dosing schedules are non-obvious, and synergistic drug pairs were predicted (drug 6 plus drug 1, 2 or 3), such as the combination of a VPS34 inhibitor and a dual specificity PI3K inhibitor, which acts on both VPS34 and MTORC1. This drug pair requires less total drug to achieve the same effect than either of the individual drugs alone and is relatively fast acting, which may be important for preventing or slowing the emergence of resistance.
The approach illustrated here differs from earlier applications of control theory concepts in the area of formal therapy design \cite{martin1992optimal,swierniak2003optimal,ledzewicz2006drug,ledzewicz2007optimal,joshi2002optimal} in that 1) the system being controlled is a cellular regulatory network, 2) the control interventions are injections (i.e., inputs) of (combinations of) molecularly targeted drugs, and 3) the control objective is manipulation of a cellular phenotype, namely the number of AVs per cell, which is related to the rate of AV turnover, with minimization of total drug used and a constraint on the maximum instantaneous drug concentration. The rationale for minimizing drug use is to avoid off-target effects and associated toxicities. Our work is distinct from earlier studies of (non-biological) nonlinear network control \cite{cornelius2013realistic,wang2016geometrical,zanudo2017structure,klickstein2017locally}, in that our control goal is not to drive the system to an attractor (e.g., a stable steady state or limit cycle), but to an arbitrary point in phase space (i.e., the multidimensional space defined by the state variables of a system) and to then maintain the system there indefinitely. The approach is both flexible and generalizable and provides a means for computationally prioritizing drug dosing schedules for experimental evaluation.
\section*{Results}
\subsection*{Model for cellular regulation of autophagy and the effects of targeted drug interventions}\label{model}
A prerequisite for formal therapy design is a mathematical model that captures the relevant effects of drugs of interest. Given our interest in using drugs to modify the process of (macro)autophagy, we constructed a model for regulation of the rate of synthesis of autophagic vesicles (AVs) that accounts for the enzymatic activities and interactions of four kinases that play critical roles in regulating autophagy, all of which are potential drug targets. The model further considers the effects of achievable drug interventions and idealized drug pharmacokinetics, meaning instantaneous drug injection according to a time-dependent control function and first-order clearance. The model is illustrated in Fig.~\ref{fig:model}.
The model was constructed in two steps. First, we constructed a minimalist model for physiological regulation of autophagy consistent with key features of earlier, more mechanistically detailed models \cite{martin2013computational,szymanska2015computational} (see ``Formulation of the Model'' in Supplementary Methods for details). These features include the time scale of drug-stimulated autophagy induction and the dynamic range of regulation characterized by Martin \textit{et al.}\cite{martin2013computational} and the qualitative system behaviors characterized by Szyma{\'n}ska \textit{et al.}\cite{szymanska2015computational}, including a steady, low level of autophagy at low stress levels, oscillatory behavior at intermediate stress levels, and a steady, high level of autophagy at high stress levels. Simulations based on the present model---generated through numerical integration of the equations given below---and simulations based on earlier, related models\cite{martin2013computational,szymanska2015computational} are compared in Supplementary Fig.\ S1. Simulations of AV dynamics are compared to measured AV dynamics\cite{szymanska2015computational} in Supplementary Fig.\ S2.
The model of Fig.~\ref{fig:model} is intended to provide an idealized representation of regulation of AV synthesis in a single (average) cell in response to changes in the cellular supplies of energy and nutrients, which are treated in the model as external inputs that modulate the serine/threonine-specific protein kinase activities of AMPK and MTORC1, respectively. Thus, the model reflects regulation of AMPK activity by the cellular AMP:ATP ratio, which is affected by glucose availability, for example, and regulation of MTORC1 activity via, for example, the various amino acid-sensing regulators of Ragulator-associated heterodimeric Rag proteins, which recruit MTORC1 to lysosomes for activation in a manner that depends on their regulated guanine nucleotide binding states. The model further accounts for regulatory interactions among AMPK, MTORC1, a third serine/threonine-specific protein kinase ULK1, and a class III phosphoinositide 3-kinase (PI3K) VPS34. As noted earlier, these kinases are key regulators of autophagy, and each is a potential drug target.
In the second step of model construction, we added idealized consideration of six distinct drug interventions, which correspond to interventions achievable through use of available small-molecule compounds, such as rapamycin\cite{edwards2007rapamycin} (an inhibitor of MTORC1 kinase activity), buparlisib\cite{burger2011identification} (an inhibitor of PI3K-family kinases that has specificity for both MTORC1 and VPS34), SBI-0206965\cite{egan2015small} (an inhibitor of ULK1 kinase activity), dorsomorphin\cite{meley2006amp} (an inhibitor of AMPK kinase activity), PF-06409577\cite{cameron2016discovery} (a direct activator of AMPK kinase activity), and SAR405\cite{ronan2014highly} (an inhibitor of VPS34 kinase activity). Each drug $i \in \{1,\ldots,6\}$ (Fig.~\ref{fig:model}) is taken to be cleared via a pseudo first-order process and introduced in accordance with a specified, time-dependent injection function $u_i$.
The model was formulated as a coupled system of nonlinear ordinary differential equations (ODEs):
\begin{subequations}{\label{eq:ode}}
\begin{align}
T \dot{x}_1(t) & = (1-x_1)C_\text{Nu} H(w_1) H(w_2) - x_1 h_{12}(x_2)h_{13}(x_3),\label{eq:drug1and2}\\
T \dot{x}_2(t) & = (1-x_2) h_{23}(x_3) H(w_3) - x_2 h_{21}(x_1),\\
T \dot{x}_3(t) & = (1-x_3) k_1 H(w_4) - C_\text{En}x_2 x_3 H(w_5),\\
T \dot{x}_4(t) & = (1-x_4)h_{42}(x_2) H(w_2)H(w_6) - k_2x_4, \label{eq:drug2and6}\\
T \dot{x}_5(t) & = k_3x_4 - k_4 x_5,\\
T \dot{w}_i(t) & = b_i u_i(t) - \delta_i w_i(t),
\ i=1,\ldots,6.\label{eq:bi}
\end{align}
\end{subequations}
In these equations, $t$ is time (in min) and $T$ is a timescale, which we specify as $1.0$ min. The variable $x_1$ represents the fraction of MTORC1 that is active, the variable $x_2$ represents the fraction of ULK1 that is active, the variable $x_3$ represents the fraction of AMPK that is active, the variable $x_4$ represents the fraction of VPS34 that is active, and the variable $x_5$ represents the AV count or number of AVs per cell (on a continuum scale). Thus, $x_i$ always lies somewhere in the interval $[0,1]$ for $i=1,\ldots,4$.
The AV count is bounded $0 \leq x_5 \leq k_3/k_4$ because $x_5(t) = 0$ implies $\dot{x}_5(t) \geq 0$ and $x_5(t) = k_3/k_4$ implies $\dot{x}_5 \leq 0$ (by the previously stated bound on $x_4(t)$).
The variables $w_1,\ldots,w_6$ represent the dimensionless concentrations of drugs 1--6. Thus, $w_i \geq 0$ for each $i$. The non-dimensional parameters $C_{\rm En}$ and $C_{\rm Nu}$ are condition-dependent constants that define the supplies of energy and nutrients. An increase in energy supply is taken to positively influence the rate of deactivation of AMPK, and an increase in nutrient supply is taken to positively influence the rate of activation of MTORC1. The non-dimensional parameters $k_1$ and $k_2$ influence the rate of activation of AMPK and the rate of deactivation of VPS34, respectively. The non-dimensional parameter $k_3$ is the maximal rate of VPS34-dependent synthesis of AVs, and the non-dimensional parameter $k_4$ is the rate constant for clearance of AVs. Taking the rate of AV synthesis to be proportional to VPS34 activity is consistent with the model of Martin \textit{et al.}\cite{martin2013computational}, as is (pseudo) first-order clearance of AVs. The non-dimensional parameters $\delta_1,\ldots,\delta_6$ are rate constants for clearance of drugs 1--6. Each $h_{ji}(x_i)$ is a non-dimensional Hill function that has the following form:
\begin{equation} \label{eq:Hill_h}
h_{ji}(x_i) = r_{b,ji} + \left(r_{m,ji}-r_{b,ji}\right)\frac{ x_i^{n_{ji}}}{x_i^{n_{ji}}+\theta_{ji}^{n_{ji}}}
\end{equation}
where $n_{ji}$ (the Hill coefficient), $r_{b,ji}$, $r_{m,ji}$ and $\theta_{ji}$ are non-negative constants. The $h$ functions account for regulatory influences among the four kinases considered in the model; the influences considered are the same as those considered in the model of Szyma{\'n}ska \textit{et al.}\cite{szymanska2015computational} (cf. Fig.~\ref{fig:model} and Figs. 1 and 2 in Ref. 33). Each $H(w_i)$ is a non-dimensional Hill function that has the following form:
\begin{equation} \label{eq:Hill_H}
H(w_i) = r_{m} - \left(r_{m}-r_{b} \right) \frac{w_i^{n} }{w_i^{n}+\theta^{n}}
\end{equation}
where $n$ (the Hill coefficient), $r_{b}$, $r_{m}$ and $\theta$ are non-negative constants. The $H$ functions account for drug effects on kinase activities. The parameters $b_i$ ($i=1,\ldots,6$) in Eq.\ \eqref{eq:bi} are Boolean variables introduced for convenience, for the purpose of defining allowable drug combinations. Recall that the $u_i$ terms represent drug injection/input functions, which will be determined by solving an optimal control problem (described in the following section).
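Eqs.\ \eqref{eq:Hill_h} and \eqref{eq:Hill_H} translate directly into code. The parameter values in the checks below are illustrative only; the model's actual settings are those of Supplementary Tables S1 and S2.

```python
def hill_h(x, r_b, r_m, n, theta):
    """Regulatory influence h_ji: rises from r_b (at x = 0) toward r_m as x grows."""
    return r_b + (r_m - r_b) * x**n / (x**n + theta**n)

def hill_H(w, r_b, r_m, n, theta):
    """Drug effect H: falls from r_m (at w = 0) toward r_b as concentration w grows."""
    return r_m - (r_m - r_b) * w**n / (w**n + theta**n)

# limiting and midpoint behavior, with illustrative parameters
assert abs(hill_h(0.0, 0.1, 1.0, 2, 0.5) - 0.1) < 1e-12    # h(0) = r_b
assert abs(hill_H(0.0, 0.1, 1.0, 2, 0.5) - 1.0) < 1e-12    # H(0) = r_m
assert abs(hill_H(0.5, 0.1, 1.0, 2, 0.5) - 0.55) < 1e-12   # H(theta) = (r_m + r_b)/2
```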
Parameter settings are summarized in Supplementary Tables S1 and S2. Each $\delta$ parameter was assigned a value consistent with a known drug half-life\cite{cameron2016discovery,sato2006temporal,baselga2017buparlisib,milkiewicz2011improvement,engers2013synthesis,juric2017first} (Supplementary Table S2). Other parameters were assigned values that allow the model to reproduce the qualitative signaling behaviors of the AMPK-MTORC1-ULK1 triad characterized in the theoretical study of Szyma{\'{n}}ska \textit{et al.}\cite{szymanska2015computational} and to reproduce the timescale of autophagy induction and the range of regulation quantified experimentally in the study of Martin \textit{et al.}\cite{martin2013computational}. According to Szyma{\'{n}}ska \textit{et al.}\cite{szymanska2015computational}, at low levels of energy/nutrient stress, ULK1 activity, which can be expected to correlate with autophagic flux and AV count, is steady and low; at intermediate levels of stress, ULK1 activity is oscillatory; and at high levels of stress, ULK1 activity is steady and high. As noted earlier, in Supplementary Fig.\ S1, we compare simulations based on Eq.\ \eqref{eq:ode} with simulations based on models of Szyma{\'{n}}ska \textit{et al.}\cite{szymanska2015computational} and Martin \textit{et al.}\cite{martin2013computational}, and in Supplementary Fig.\ S2, we compare simulations of AV dynamics based on Eq.\ \eqref{eq:ode} with experimental measurements of AV dynamics reported by Martin \textit{et al.}\cite{martin2013computational}. Parameter settings are further explained in Supplementary Methods. In Supplementary Methods, we also elaborate on how earlier models\cite{szymanska2015computational,martin2013computational} guided our formulation of Eq.\ \eqref{eq:ode} and how these models differ from Eq.\ \eqref{eq:ode}.
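To convey the structure of Eq.\ \eqref{eq:ode}, the following sketch integrates the drug-free system ($u_i = 0$, so every $H(w_i)$ reduces to its drug-free value, taken here as 1) with SciPy. All rate and Hill parameters are illustrative placeholders rather than the calibrated values of Supplementary Tables S1 and S2, and a single Hill shape is reused for every $h_{ji}$; the simulation illustrates the model's form, not its published behavior.

```python
import numpy as np
from scipy.integrate import solve_ivp

def hill(x, r_b=0.05, r_m=1.0, n=4, theta=0.5):
    # one illustrative h_ji shape reused for every regulatory influence
    return r_b + (r_m - r_b) * x**n / (x**n + theta**n)

C_Nu, C_En = 0.6, 0.6                   # illustrative nutrient and energy supplies
k1, k2, k3, k4, T = 0.3, 0.5, 60.0, 0.15, 1.0

def rhs(t, X):
    x1, x2, x3, x4, x5 = X              # MTORC1, ULK1, AMPK, VPS34, AV count
    dx1 = (1 - x1) * C_Nu - x1 * hill(x2) * hill(x3)
    dx2 = (1 - x2) * hill(x3) - x2 * hill(x1)
    dx3 = (1 - x3) * k1 - C_En * x2 * x3
    dx4 = (1 - x4) * hill(x2) - k2 * x4
    dx5 = k3 * x4 - k4 * x5
    return np.array([dx1, dx2, dx3, dx4, dx5]) / T

sol = solve_ivp(rhs, (0.0, 240.0), [0.5, 0.2, 0.2, 0.2, 10.0],
                rtol=1e-8, atol=1e-10)
av_count = sol.y[4]                     # x_5(t); bounded by 0 and k3/k4
```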
Model-predicted physiological regulation of autophagy, by energy and nutrients, is summarized in Fig.~\ref{fig:bifur}. Figure \ref{fig:bifur}{\it A} shows how qualitative long-time behavior depends on the supplies of energy and nutrients, when these supplies are maintained at constant levels and in the absence of external control inputs $(u_i=0, i = 1,\ldots,6)$. Figures \ref{fig:bifur}{\it B--E} show time courses of autophagy induction or repression triggered by different energy/nutrient changes. All together, these plots show that model predictions of responses to physiological perturbations (i.e., changes in $C_{\rm En}$ and $C_{\rm Nu}$) are consistent with expectations based on the studies of Martin \textit{et al.}\cite{martin2013computational} and Szyma{\'{n}}ska \textit{et al.}\cite{szymanska2015computational}.
Dose-response curves predicted by the model for single-drug, constant-concentration perturbations are shown in Fig.~\ref{fig:constinput}. As can be seen, with increasing dosage, drugs 1 and 5 tend to increase the number of AVs per cell, whereas the other drugs tend to decrease the number of AVs per cell. These results are consistent with negative regulation of autophagy by MTORC1 and positive regulation of autophagy by ULK1, AMPK, and VPS34. As is the case for some physiological conditions (Fig.~\ref{fig:bifur}), AV count oscillates at some of the drug doses, depending on the supplies of energy and nutrients. All together, the plots shown in Fig.~\ref{fig:constinput} indicate that responses to single-drug, constant-concentration perturbations are consistent with accepted regulatory influences of MTORC1, ULK1, AMPK and VPS34 on autophagy.
As can be seen in Fig.~\ref{fig:constinput}, the ability of each drug $i$ to influence $x_5$ depends on the supplies of energy and nutrients, meaning the values of $C_{\rm En}$ and $C_{\rm Nu}$ (cf. the left and right panels in each row). In this figure, two energy/nutrient conditions are considered ($C_{\rm En}=C_{\rm Nu}=0.1$ and $0.6$); additional conditions are considered in Supplementary Figs. S3 and S4. Taken together, these results define the condition-dependent ranges over which $x_5$ can be feasibly controlled by each drug $i$.
\subsection*{Therapy design as an optimal control problem}
To design optimal therapies, we must first introduce design goals. Below, we introduce a series of goals/constraints that we will require optimal therapies to satisfy. However, let us first introduce notation useful for referring to therapies. We will refer to the set of six available drugs, or more precisely, drug types, as $\mathcal{D} =\{1,\ldots,6\}$, and we will refer to a therapy involving $k$ drugs chosen from $\mathcal{D}$ as $\mathcal{T}_k$, where
\begin{equation}\label{eq:therapy}
\begin{aligned}
\mathcal{T}_k \subseteq \mathcal{D} \quad \text{s.t.} \quad |\mathcal{T}_k | = k.
\end{aligned}
\end{equation}
Thus, for example, we will use $\mathcal{T}_1$ to refer to a monotherapy, and $\mathcal{T}_2$ to refer to a dual therapy. There are six possible monotherapies and, in general, $C^6_k$ distinct therapies that combine $k$ of the six drugs. Here, we will focus on monotherapies and dual therapies, leaving the evaluation of higher-order combination therapies for future work. As a simplification, we will assume that drugs used together in a combination do not interact. Thus, for example, for dual therapy with drugs 2 and 6 (Fig.~\ref{fig:model}), we consider these drugs to bind/inhibit VPS34 independently (i.e., non-competitively).
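Enumerating the candidate therapies of Eq.\ \eqref{eq:therapy} is straightforward, as this short sketch shows:

```python
from itertools import combinations
from math import comb

drugs = range(1, 7)                         # the drug set D = {1, ..., 6}
mono = list(combinations(drugs, 1))         # all monotherapies T_1
dual = list(combinations(drugs, 2))         # all dual therapies T_2
assert len(mono) == comb(6, 1) == 6
assert len(dual) == comb(6, 2) == 15
assert (2, 6) in dual                       # e.g., the drug 2 plus drug 6 pair
```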
Our first, and most important, therapy design goal can be described (somewhat informally) as follows. Starting from a stationary (or recurrent) state at time $t=0$, we wish to use drug injections (i.e., drug inputs) according to a schedule defined by ${\bf u}(t)=(u_1(t),\ldots,u_6(t))$ to eventually maintain, after a transient of duration $t_0$, the number of AVs in an average cell, $x_5$, near (to within a tolerance $\epsilon$) a specified target level, $x_5^f$, for a period of at least $t_f-t_0$ ($t_f>t_0>0$), thereby achieving sustained control of the level of autophagic degradative flux in a cell, which is given by $k_4x_5$ according to Eq.~\eqref{eq:ode}. In our analyses, we will consider $t_0 = 120$ min and $t_f = 240$ min because these times are longer than typical transients (Figs.~\ref{fig:bifur}\emph{B}--\emph{E}).
A second therapy design goal of interest is minimization of the total amount of drug used, which is motivated by a desire to avoid drug toxicity arising from dose-dependent off-target effects. In the optimal control literature, a problem entailing this type of constraint is called a \emph{minimum fuel} problem \cite{kirk2012optimal,lewis2012optimal}. The constraint can be expressed mathematically as follows:
\begin{equation}\label{eq:obj}
\begin{aligned}
\min_{\substack{u_i(t), \\ i \in \mathcal{T}_k}} J\left\{u_i\right\}:= \sum_{ i \in \mathcal{T}_k} \int_{0}^{t_f} u_i(t) \text{ d} t
\end{aligned}
\end{equation}
where $u_i(t) \geq 0$ for $i = 1,\ldots,6$. As a simplification, we are considering an objective functional $J\left\{u_i\right\}$ that treats the different drugs equally, i.e., the sum in Eq.\ \eqref{eq:obj} is unweighted. With this approach, we are assuming that the different drugs of interest have equivalent toxicities. If drugs are known to have different toxicities, this assumption can be lifted simply by introducing weights to capture the toxicity differences, with greater weight assigned for greater toxicity. Indeed, arbitrary modifications of the form of the objective functional $J\left\{u_i\right\}$ would be feasible if such modifications are needed to capture problem-specific constraints on drug dosing.
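On a sampled time grid, the minimum-fuel objective of Eq.\ \eqref{eq:obj} reduces to a sum of trapezoid-rule integrals. The schedules below (an early bolus plus a constant infusion) are illustrative, not optimized.

```python
import numpy as np

def fuel(t, u_rows):
    """Discretized minimum-fuel objective: sum over drugs of the
    trapezoid-rule integral of each sampled injection function u_i(t)."""
    trap = lambda u: float(np.sum((u[1:] + u[:-1]) * np.diff(t)) / 2.0)
    return sum(trap(np.asarray(u)) for u in u_rows)

t = np.linspace(0.0, 240.0, 241)            # 1-min grid over [0, t_f]
u1 = np.where(t < 30.0, 2.0, 0.0)           # illustrative early bolus
u2 = np.full_like(t, 0.1)                   # illustrative constant infusion
J = fuel(t, [u1, u2])                       # total drug used by this schedule
```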
A third design goal is to disallow the instantaneous concentration of any drug $i$, $w_i(t)$, from ever rising above a threshold $w_i^{\max}$. The rationale for this constraint is again related to a desire to eliminate or minimize dose-dependent drug toxicity. In other words, we are assuming that a drug $i$ is tolerable so long as its concentration $w_i$ is below a toxicity threshold $w_i^{\max}$. In our analyses, we set the toxicity threshold of a drug as a factor ($>1$) times its EC$_{50}$ dosage, which we define as the concentration of the drug at which its effect on $x_5$, negative or positive, is half maximal (see Eqs.\ \eqref{eq:Hill_h} and \eqref{eq:Hill_H}).
We are now prepared to formulate the problem of (combination) therapy design as a constrained, optimal control problem. The problem, for a given $\mathcal{T}_k$ (Eq.\ \eqref{eq:therapy}), is to find a drug schedule ${\bf u}(t)$ that minimizes the objective functional defined in Eq.\ \eqref{eq:obj} and that also satisfies the following constraints:
\begin{subequations}\label{eq:const}
\begin{align}
\dot{\textbf{X}}(t) ={}& \textbf{f}(\textbf{X}(t), \textbf{u}(t)), \quad 0\le t \le t_f, \\
b_i ={}& \begin{cases}
1, & \text{if $i \in \mathcal{T}_k$},\\
0, & \text{otherwise},
\end{cases} \\
x_5^f-\epsilon \le{}& x_5(t) \le x_5^f+\epsilon, \quad t_0\le t \le t_f, \label{eq:tube}\\
0 \le{}& w_i(t) \le w_i^{\max}, \quad i = 1,\ldots,6, \label{eq:upperBound} \\
0 \le{}& u_i(t),\quad i = 1,\ldots,6, \\
\textbf{X}(0) ={}& [\textbf{x}(0),\textbf{w}(0)]\equiv[\textbf{x}_0 , \textbf{0}].
\end{align}
\end{subequations}
Here, $\textbf{X}(t)$ is defined as $[\textbf{x}(t), \textbf{w}(t)]$, where $\textbf{x}(t)=(x_1(t),\ldots,x_5(t))$ and $\textbf{w}(t)=(w_1(t),\ldots,w_6(t))$, and $\textbf{f}(\textbf{X}(t),\textbf{u}(t))$ is the vector field of Eq.\ \eqref{eq:ode}. The initial condition $\textbf{X}_0 = \textbf{X}(0)$ is taken to be a stationary (or recurrent) state of Eq.\ \eqref{eq:ode} where supplies of energy and nutrients are constant (i.e., $C_{\rm En}$ and $C_{\rm Nu}$ are fixed) and drugs are absent (i.e., $\textbf{u}(t)=0$). With this formulation, it should be noted that we are attempting to drive the system variable $x_5$ to a specified final value $x_5^f$ (to within a tolerance $\epsilon$), but we are making no attempt to control the other system variables $x_1$, $x_2$, $x_3$, and $x_4$. This approach is called target control \cite{klickstein2017energy,shirin2017optimal}. In all of our analyses, we set $\epsilon=1$.
A useful measure of the amount of `fuel' used to achieve drug control of autophagy is the total dosage of drug $i$ used up to time $t$ during a therapy $\mathcal{T}_k$, which we denote as $r^\ast_{i,k}(t)$. This quantity is calculated using
\begin{equation} \label{eq:rcum}
r^\ast_{i,k}\left(t\right) = \int_{0}^{t} u^\ast_i(\tau) \text{ d}\tau,
\end{equation}
where $u^\ast_i(t)$ for $i \in \mathcal{T}_k$ is the solution of the nonlinear optimal control problem defined by Eqs.\ \eqref{eq:obj} and \eqref{eq:const}.
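Given a numerically computed $u^\ast_i(t)$ sampled on a time grid, Eq.\ \eqref{eq:rcum} can be evaluated at every sample time by cumulative trapezoidal quadrature, as in this short sketch (the pulse schedule is illustrative):

```python
import numpy as np

def cumulative_dosage(t, u_opt):
    """r*_{i,k}(t) = int_0^t u*_i(tau) dtau, evaluated at each sample time
    by cumulative trapezoidal quadrature."""
    t = np.asarray(t, dtype=float)
    u_opt = np.asarray(u_opt, dtype=float)
    seg = 0.5 * (u_opt[:-1] + u_opt[1:]) * np.diff(t)
    return np.concatenate(([0.0], np.cumsum(seg)))

t = np.linspace(0.0, 240.0, 241)
u = np.where((t >= 60.0) & (t < 70.0), 0.5, 0.0)   # a single 10-min pulse
r = cumulative_dosage(t, u)
print(r[-1])   # -> 5.0 (total dosage delivered by t_f)
```

Because $u^\ast_i \geq 0$, the resulting $r^\ast_{i,k}$ is non-decreasing, and pulsed inputs produce the staircase shape discussed below.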
\subsection*{Optimal monotherapies}
We will illustrate generic features of solutions to the nonlinear optimal control problem defined by Eqs.\ \eqref{eq:obj} and \eqref{eq:const} by focusing on a particular (severe) energy/nutrient stress condition (i.e., the condition where $C_\text{Nu}=C_\text{En}=0.1$). For this condition, the system represented by Eq.\ \eqref{eq:ode} has a near maximal, steady-state AV count of approximately 37 per cell (i.e., $x_5 \approx 37$). Let us focus for the moment on monotherapy with drug $4$ (an AMPK inhibitor) to downregulate the number of AVs to a target level of 10 per cell (i.e., $x_5^f = 10$) over the time period between $t_0=120$ min and $t_f=240$ min from an unperturbed steady state (i.e., dynamics with $u_i=0$) at $t=0$.
We solved the optimal control problem using the approach outlined in the Methods section and described in more detail in ``Pseudo-Spectral Optimal Control'' in Supplementary Methods. The solution, represented by the optimal cumulative dosage of drug $4$ (i.e., $r^\ast_{4,1}\left(t\right)$) (Eq.\ \eqref{eq:rcum}), is presented in Fig.~\ref{fig:best_mono}\emph{A}. The optimal solution exhibits several features that are generic, i.e., insensitive to the system's parameterization. First, the computation suggests an optimal earliest time to apply the drug; in this particular example, this time is $t\lesssim 60$ min. The difference between the target time $t_0$ and the earliest time to apply the drug quantitatively measures the speed of action of the drug. Second, the function $r^\ast_{4,1}\left(t\right)$ exhibits a staircase behavior, indicating that the optimal strategy of drug administration for this particular problem is to intermittently inject a specific dosage of drug into the system at specific times. Mathematically, this is because the objective functional (Eq.\ \eqref{eq:obj}) is a sum of $L^1$ norms of the injection/input rates $u_i$---see Sections 5.5 and 5.6 in Kirk\cite{kirk2012optimal}.
Figure \ref{fig:best_mono}\emph{B} depicts how the drug concentration $w_4(t)$ evolves subject to the optimal protocol $u^\ast_4(t)$.
We observe surges of $w_4(t)$ in response to the drug being applied to the system in large quantities over small intervals, and slow decays in between applications of the drug (caused by the natural decay of the drug concentration, at rate $\delta_4$, in the absence of external drug inputs). As a consequence, the optimal solution is to inject a relatively large dose of drug periodically, and to continuously supply small amounts of that drug to replenish drug cleared from the system, so as to stably maintain autophagic flux (i.e., constant AV count and constant degradative flux, which we take to be proportional to the AV count).
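This surge-and-decay behavior can be sketched with a single drug compartment obeying $\dot{w} = u(t) - \delta_i w$, integrated by forward Euler. The clearance rate and pulse schedule below are illustrative assumptions, not fitted values from the model.

```python
import numpy as np

# Minimal sketch of one drug compartment, dw/dt = u(t) - delta * w, under a
# pulsed schedule. delta, the pulse times, and the maintenance input are all
# illustrative, not parameters taken from the paper.
delta = 0.05                                  # clearance rate (1/min), assumed
dt = 0.1
t = np.arange(0.0, 240.0, dt)
u = np.zeros_like(t)
for t_pulse in (60.0, 150.0):                 # large doses over short windows
    u[(t >= t_pulse) & (t < t_pulse + 1.0)] = 2.0
u[t >= 160.0] += 0.02                         # small replenishing input

w = np.zeros_like(t)                          # forward-Euler integration
for k in range(t.size - 1):
    w[k + 1] = w[k] + dt * (u[k] - delta * w[k])

print(round(w.max(), 2))   # surge after a pulse; slow decay in between
```

Each pulse produces a sharp rise in $w$, followed by exponential relaxation at rate $\delta$ until the next input, mirroring the qualitative shape of Fig.~\ref{fig:best_mono}\emph{B}.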
Figure \ref{fig:best_mono}\emph{C} illustrates the time evolution of $x_5$ (AV count) subject to the optimal drug administration protocol.
As can be seen, for $t \geq 120$ min, $x_5$ is maintained within the desired interval $x_5^f \pm \epsilon=10 \pm 1$. The time evolution of the non-target variables $x_1$, $x_2$, $x_3$ and $x_4$ (i.e., the activities of the regulatory kinases) is presented in Fig.~\ref{fig:best_mono}\emph{D}. Together, Figs.~\ref{fig:best_mono}\emph{C} and \emph{D} provide a full representation of the time evolution of the system represented by Eq.\ \eqref{eq:ode} (the target and non-target variables) under the influence of the optimal drug administration schedule. Because our procedure for finding the optimal solution to the nonlinear optimal control problem is numerical, we have verified that the optimal control solution satisfies the necessary conditions for optimality. See ``Pseudo-Spectral Optimal Control'' in Supplementary Methods for details.
Given that cancer cells may be killed by using drugs to either elevate or suppress autophagy \cite{MulcahyLevy2017}, we will now consider optimal control solutions that either upregulate or downregulate autophagic flux by using a single drug. We will identify the drugs which can perturb and maintain the system near the target AV count. Perhaps more importantly, our analysis will deliver optimal protocols which include the precise times to inject the drugs, whose dosages are also tightly controlled to minimize the total quantities of drugs that are supplied.
Let us consider the case of intermediate energy/nutrient stress before treatment (i.e., the condition corresponding to $C_\text{Nu}=C_\text{En}=0.6$; see Fig.~\ref{fig:bifur}), for which the system exhibits oscillations in the range $[20,27]$ without treatments. For this scenario, our goal is to either downregulate the number of AVs to $x_5^f \approx 9$ (shown in Figs.~\ref{fig:best_mono}\emph{E}--\emph{H}) or to upregulate the AVs to $x_5^f\approx 37$ (shown in Figs.~\ref{fig:best_mono}\emph{I}--\emph{L}). We have performed extensive numerical solutions of the monotherapy optimal control problem with various settings of the parameters $w_i^{\max}$, $t_0$, $t_f$ and $x_5^f$.
We set the control window to the interval between $t_0=120$ min and $t_f=240$ min and imposed a constraint on each drug concentration $w_i$, requiring it not to exceed $w_i^{\max} = 4 \times {\rm EC}_{50}$.
We found drug 2 to be best suited for downregulation for two reasons. First, drug 2 is able to drive $x_5$ nearly to zero (in contrast with the case for drug 3 or 4). See Figs.~\ref{fig:bifur}\emph{B} and \ref{fig:bifur}\emph{H} and compare with Figs. \ref{fig:bifur}\emph{C}, \ref{fig:bifur}\emph{D}, \ref{fig:bifur}\emph{I}, and \ref{fig:bifur}\emph{J}. Second, drug 2 (in contrast with drug 6) is able to overcome the autonomous oscillatory behavior in $x_5$. In the analysis summarized in Supplementary Fig. S7, we found that drug 6 cannot eliminate oscillatory behavior; thus, it is incapable of maintaining a low, steady AV level. Drug 6 becomes viable if we remove the lower bound from the constraint of Eq.\ \eqref{eq:tube}. Without the lower bound, oscillations in $x_5$ are permitted. We choose to keep the constraint of Eq.\ \eqref{eq:tube} as written to avoid oscillatory solutions because, depending on period and amplitude, oscillations in $x_5$ may allow for autophagy-addicted cells to survive periods of relatively low autophagy by thriving during periods of relatively high autophagy. In the other direction (i.e., drug-induced upregulation of autophagy), it is only possible to use drug 5 to upregulate autophagy to the target value $x_5^f=37$ (Fig.~\ref{fig:constinput}). Figs.~\ref{fig:best_mono}\emph{E}--\emph{H} and \ref{fig:best_mono}\emph{I}--\emph{L} illustrate the optimal solutions using drugs 2 and 5 to downregulate and upregulate autophagy, respectively.
Although the selection of a single drug to achieve a given qualitative change in $x_5$ is intuitive, especially given the results of Fig.~\ref{fig:constinput}, optimization of drug scheduling (Fig.~\ref{fig:best_mono}) delivers better solutions in the sense that the total dosage applied to achieve the same effect (compared to constant input) is lower (minimized). Furthermore, the generic staircase-like solutions for $r^\ast_{i,k}$ illustrated in Fig.~\ref{fig:best_mono} persist for all the parameter sets we have tested (see below), indicating that variable, tightly controlled dosages should be injected into the system at controlled times. Given a particular type of drug, the central result of our optimal control analysis is to provide injection/input times and the amounts of drugs to be injected/added.
\subsection*{Optimal combination therapies}
Let us now consider dual therapies ($k=2$). The motivation is to identify therapies that are even more efficient than optimal monotherapies, i.e., protocols involving lower quantities of drugs and faster responses. We have evaluated all possible dual therapies ($\binom{6}{2} = 15$) for each of two energy/nutrient stress conditions: $C_\text{En}=C_\text{Nu}=0.1$ (corresponding to severe stress) and $C_\text{En}=C_\text{Nu}=0.6$ (corresponding to moderate stress). With an identical control objective and identical constraints ($w_i^{\max}=2.0$, $t_0=120$ min, $t_f=240$ min, $x_5^f=10$, and $\epsilon=1$), we found four pairs of drugs that are each more efficient than the optimal monotherapy with either of the two drugs included in the combination. These dual therapies are illustrated in Fig.~\ref{fig:best_dual}. Additional results from our analyses of dual therapies are presented in the Supplementary Note and Supplementary Figs. S3--S10.
We found that when baseline autophagy is high ($C_\text{En}=C_\text{Nu}=0.1$), the only combination of drugs that can drive AV count down to the target $x_5^f$ is the combination of drugs 2 and 6. The dynamical response of the system is shown in Figs.~\ref{fig:best_dual}\emph{A}--\emph{D}. For this particular combination, either drug alone cannot lower $x_5$ to $10$ without violating one or both of the constraints $w_i< w_i^{\max}$ ($i=2$, $6$). However, with use of drugs 2 and 6 in combination, it is possible to achieve the target AV count because the effects of the drugs are multiplicative (Eq.\ \eqref{eq:drug2and6}) and drug $2$ directly affects both MTORC1 (Eq.\ \eqref{eq:drug1and2}) and VPS34 (Eq.\ \eqref{eq:drug2and6}).
Our analysis predicts non-trivial synergistic activities between drugs when the baseline level of autophagy is intermediate (on average) and exhibits oscillatory behavior ($C_\text{En}=C_\text{Nu}=0.6$). The results are summarized in Figs.~\ref{fig:best_dual}\emph{E}--\emph{P}. In this scenario, multiple drug combinations (drugs 1 and 6, 2 and 6, and 3 and 6) are able to downregulate and stabilize $x_5$, whereas drug 6 alone cannot do so. Using drug 6 alone results in oscillations in $x_5$, causing a violation of the constraint of Eq.\ \eqref{eq:tube}. More interestingly, the optimal application of the drugs reveals a clear sequential protocol: first apply a drug other than drug 6 (1, 2, or 3) to suppress oscillations (see Figs.~\ref{fig:best_dual}\emph{H}, \emph{L} and \emph{P}), then apply drug 6 to drive AV count down to the desired level. The combination of drugs 1 and 6 is peculiar in that in this case application of drug 1 drives the system out of the oscillatory regime (Fig.~\ref{fig:best_dual}\emph{O}) but also upregulates autophagy; subsequent application of drug 6 is effective in downregulating autophagy.
It is important to emphasize that the two drugs acting together in any given combination therapy are, for simplicity, modeled as non-interacting, which may or may not be reasonable, depending on the mechanisms of actions of specific drugs of interest. The drug synergies detected in our analyses arise from the nonlinear dynamics of the regulatory network controlling autophagy. Without the formal framework presented here for therapy design, it would arguably be difficult to identify these synergies.
\section*{Discussion}
Here, we have taken up the problem of designing targeted therapies to control a cellular phenotype of cancer cells, namely, their commitment to recycling of cytoplasmic contents through the process of autophagy, as measured by cellular autophagic vesicle (AV) count. Autophagy generates building blocks needed for \textit{de novo} protein synthesis in support of growth (and proliferation). Modulation of autophagy, up or down, in autophagy-addicted cancer cells has the potential to selectively kill these cells \cite{MulcahyLevy2017}.
Our approach was to first construct a mathematical model for autophagy regulation that captures the effects of key physiological stimuli---changes in the supplies of energy and nutrients---and the idealized effects of six available drug types (Eq.\ \eqref{eq:ode}, Figs. \ref{fig:model}--\ref{fig:constinput}) and then to pose the question of therapy design as a constrained, optimal control problem (Eqs.\ \eqref{eq:therapy}--\eqref{eq:const}). Numerical solution of this problem, through optimization of a control input accounted for in the model (i.e., an adjustable time-dependent drug injection/input rate), yielded monotherapy drug schedules that require a minimum amount of drug, maintain drug concentration below a specified threshold at all times, and bring about desired effects in the most efficient manner possible, in a well-defined sense (Fig.~\ref{fig:best_mono}). Furthermore, through essentially the same approach, but with consideration of adjustable time-dependent drug injection/input rates for two different drugs, we were able to predict synergistic drug pairs (Fig.~\ref{fig:best_dual}).
Optimal monotherapies were found to entail intermittent pulses of drug injection/input at irregular, non-obvious intervals and doses (Fig.~\ref{fig:best_mono}). These features of optimal drug schedules---the pulsatile nature of drug administration and the irregularity of drug administration in terms of both timing and dosage---appear to be generic and each is discussed in further detail below.
The pulsatile nature of optimal monotherapy arises from the optimal control problem that we posed (Eqs.\ \eqref{eq:therapy}--\eqref{eq:const}), which can be viewed as a minimum-fuel problem, in that our control problem calls for usage of a minimal total amount of drug. The rationale for this control objective is that drugs typically have dose-dependent off-target effects, which may contribute to drug toxicity. Thus, by seeking drug schedules that achieve desired endpoints while using only a minimal total amount of drug, we seek to mitigate the possible negative consequences of off-target drug effects. Mathematically, our minimum-fuel objective functional, Eq.\ \eqref{eq:obj}, leads to pulsatile drug administration because the Hamiltonian of the optimal control problem is linear in the control inputs $u_i(t)$, $i \in \mathcal{T}_k$ (see ``Pseudo-Spectral Optimal Control'' in Supplementary Methods for a detailed derivation). Optimal control problems whose Hamiltonians are linear in the control input are well known to exhibit singular arcs and discontinuous switching between the upper and lower bounds of the control input (see Chapter 5 in Kirk\cite{kirk2012optimal} for the derivation of singular arc behavior and the brief overview of this issue in ``Pseudo-Spectral Optimal Control'' in Supplementary Methods). Because we do not impose an upper bound on $u_i(t)$, the discontinuities we expect to see are Dirac delta type functions: pulses of infinite magnitude but infinitesimal width. With the use of numerical methods to find solutions of the optimal control problem, we cannot capture the Dirac delta behavior exactly.
Instead, we see finite pulses of finite width, which, while likely suboptimal, are more physically realistic.
Although pulses of drug input are consistent with convenient drug delivery modalities, such as oral administration of a drug in pill form or intravenous injection, optimal schedules do not entail uniform drug doses, nor uniform periods of drug administration. This irregular nature of optimal drug administration depends on the structure of the nonlinear cellular network that controls the synthesis of AVs. In particular, in our model, each drug specifically targets individual nodes of the cellular network, and therefore, different drugs play dynamically distinct roles and cannot be treated as equivalent control inputs. Thus, it may be critically important to better understand the interplay between targeted therapies and archetypical cellular regulatory network dynamics if we are to design the best possible therapies for populations of patients. Because network dynamics can be expected to vary between patients, patient-specific variability in network dynamics, which we have not considered in our analyses here, is a factor that likely affects the efficacy of individualized targeted therapy and that therefore should receive attention in future studies. The study of Fey \textit{et al.}\cite{fey2015signaling} points to the feasibility of considering patient-specific parameters in mathematical models. In this study, gene expression data available for individual patients were used to set the abundances of gene products in patient-specific models for a cell signaling system. Because mutations can be detected in the tumors of individual patients, effects of oncogenic mutations could also potentially be accounted for in patient-specific models. The study of Rukhlenko \textit{et al.}\cite{Rukhlenko2018} provides an example of a study where the effects of an oncogenic mutation were considered in a mathematical model. In the study of Fr{\"o}hlich \textit{et al.}\cite{frohlich2018efficient}, gene expression and mutational profiles were both considered in cell line-specific models.
The therapy design approach presented here is flexible and allows for the evaluation of drug combinations. In our analyses, we focused on dual therapies. Somewhat surprisingly, we found several drug pairs that together are more effective than either drug alone (according to our model). These pairs are drug 2 and drug 6 when $C_{\text{Nu}}=C_{\text{En}}=0.1$ (severe energy/nutrient stress) and the combination of drug 6 with drug 1, 2, or 3 when $C_{\text{Nu}}=C_{\text{En}}=0.6$ (moderate energy/nutrient stress). In the latter cases, drug 6 alone is incapable of downregulating autophagy to the desired level, but it sensitizes the network to drugs 1--3 when one of these drugs is used in conjunction with drug 6. According to the model (and its parameterization), the most potent synergistic drug pair is the combination of drugs 2 and 6. With this combination, the total amount of drug 2 used was reduced by more than 5-fold (see the Supplementary Note and Supplementary Fig. S5) in comparison to the case where drug 2 is used optimally in isolation. More striking perhaps is that drug 6 when used alone is incapable of achieving the performance objective. Interestingly, our results provide mechanistic insight into the optimal sequence of drug delivery: therapy is optimal when drug 2 is injected about 80 minutes earlier than drug 6. That is, the best outcome was achieved when first inhibiting MTORC1, thus halting the intrinsic oscillations of the network dynamics, and then only inhibiting VPS34 to reduce synthesis of AVs. It should be noted that in our evaluation of this drug pair, we have assumed that there is no interaction between drugs 2 and 6, an idealization that may not be appropriate for specific examples of drugs of these types.
The same optimal control approach that we have demonstrated for 2-drug combinations can be applied for combinations involving more than two drugs. Indeed, our approach was presented for the general case of $k$ drugs used in combination. Our expectation is that effective combinations involving more than two drugs may be more likely to exist than effective combinations involving only two drugs, because controllability would presumably increase with the availability of more drugs. However, finding an effective combination may be more computationally expensive because of the larger number of possible combinations, and 2-drug combinations may be preferable to higher-order combinations because of drug side effects.
As reported by Palmer and Sorger \cite{Palmer2017}, many clinically used drug combinations are effective for reasons other than drug synergy, which is rare. In essence, the majority of clinically available drug combinations are, for all intents and purposes, equivalent to monotherapy at the level of individual patients. The basis for their effectiveness at the population level is simply that tumors in different subpopulations of patients have distinct drug sensitivities. Thus, new methods for predicting promising, non-obvious synergistic drug combinations, such as the approach reported here, could be helpful in developing combination therapies that derive their effectiveness from drug synergy. Synergistic drug combinations would seemingly offer significant benefits over monotherapy, or what is effectively monotherapy, in terms of delaying or perhaps eliminating the emergence of drug resistance. We note that our analysis identified synergies between pairs of drugs that are predicted to manifest without fine tuning of the doses used or the timing of drug administration. We admit that these predictions could perhaps have been found through an \textit{ad hoc} model analysis. Nevertheless, we see value in leveraging an optimal control framework for model analysis, even if an optimal control strategy is not sought, because with this type of approach it is less likely that interesting behavior will be missed.
There is presently cautious optimism that effective drug combinations will be identified through high-throughput screening experiments \cite{Holbeck2017}, or through learning from data. However, the sheer number of possible drug combinations poses a barrier to experimental discovery of efficacious drug combinations and it is not clear that the data requirements of machine learning approaches can be met in the near term. Thus, it is important to consider alternatives, such as the approach presented here, which leverages available mechanistic understanding of how regulatory protein/lipid kinases influence the synthesis of AVs, which we have consolidated in the form of a mathematical model (Eq.\ \eqref{eq:ode}), designed to be useful for computational characterization of drug combinations. We note that our model was formulated specifically for this purpose, and it was not designed to make predictions outside this limited domain of application. Indeed, to facilitate our computational analyses, the model was handcrafted to be as simple as possible while still reproducing key behaviors of more mechanistically detailed models \cite{martin2013computational,szymanska2015computational}. This approach was helpful in making calculations feasible. Unfortunately, to our best knowledge, there are no proven approaches for systematically and automatically deriving a suitable surrogate model for therapy design from a more detailed, mechanistic model of a cellular regulatory network. Pursuit of such a capability seems like an important subject of future research.
Our intent at the start of this study was to investigate how control engineering concepts might be introduced into formal therapy design. Thus, we have only attempted to demonstrate that our methodology is capable of generating interesting (and testable) predictions of effective drug schedules and drug combinations. Development of novel therapies will, of course, require experimental validation of candidate combinations, which is beyond the intended scope of the present study. Thus, we caution that our predictions of optimal drug schedules and synergistic drug combinations are only intended to demonstrate methodology. The merit of this methodology is not in reaching final conclusions but in prioritizing experimental efforts and thereby accelerating experimental validation of targeted therapies. Because kinase inhibitors of each type considered in our analysis are available for experimental characterization and autophagy is a cellular phenotype that can be readily assayed, as in the study of Martin \textit{et al.}\cite{martin2013computational} or du Toit \textit{et al.}\cite{dutoit2018measuring}, a logical next step would be to probe for the predicted drug synergies in cell line experiments. It might be especially interesting to evaluate a combination of an ULK1-specific inhibitor, such as ULK-101 \cite{martin2018potent}, and a VPS34-specific inhibitor, such as VPS34-IN1 \cite{bago2014characterization}. We predict that this combination will be synergistic, and the combination targets the two kinases considered in our analysis that are most proximal to the cellular machinery for producing autophagosomes. 
On the computational side, to increase confidence in predictions, sensitivity analysis techniques tailored for optimal control problems could be applied to characterize the robustness of predictions \cite{castillo2008sensitivity,malanowski1998sensitivity}, and experimental design techniques could be applied to aid in generating data useful for reducing parameter uncertainty \cite{hagen2013convergence,dehghannasiri2015efficient}. Several studies strongly support the potential value of formal therapy design \cite{Chmielecki2011,Chakrabarti2017,stein2018mathematical}, and the main contribution here is a new approach to this subject. Two important distinguishing features of this approach are 1) the consideration of a mathematical model for a cellular regulatory network that controls a cellular phenotype and 2) application of sophisticated methods from automatic control theory.
\section*{Methods}
\subsection*{Simulations}
Simulations were performed by numerical integration of the model ODEs. The parameter settings used in calculations are provided in Supplementary Tables S1 and S2.
\subsection*{Pseudo-Spectral Optimal Control}
Optimal control as a field of research combines aspects of dynamical systems, mathematical optimization and the calculus of variations \cite{kirk2012optimal}.
Together Eqs.\ \eqref{eq:obj} and \eqref{eq:const} form a constrained optimal control problem, which can generally be written as,
\begin{equation}\label{eq:OCP}
\begin{aligned}
\min_{\textbf{u}(t)} && &J(\textbf{x}(t),\textbf{u}(t),t) = \int_{t_0}^{t_f} F\left( \textbf{x}(t), \textbf{u}(t), t\right) \text{ d} t\\
\text{s.t.} && &\dot{\textbf{x}}(t) = \textbf{f} ( \textbf{x}(t), \textbf{u}(t), t)\\
&& &\textbf{e}^L \leq \textbf{e}(\textbf{x}(t_0), \textbf{x}(t_f), t_0, t_f) \leq \textbf{e}^U\\
&& &\textbf{h}^L \leq \textbf{h}(\textbf{x}(t), \textbf{u}(t), t) \leq \textbf{h}^U\\
&& &t \in [t_0,t_f]
\end{aligned}
\end{equation}
In general, no analytic framework can provide the optimal time traces of the controls and the states in Eq.\ \eqref{eq:OCP}, so we must resort to numerical techniques.
Pseudo-spectral optimal control (PSOC) has become a popular tool in recent years \cite{rao2009survey, ross2012review}, allowing scientists and engineers to solve optimal control problems like that of Eq.\ \eqref{eq:OCP} reliably and efficiently in applications such as guiding autonomous vehicles and maneuvering the International Space Station \cite{ross2012review}.
The main concepts of PSOC are summarized here but are explained at length in ``Pseudo-Spectral Optimal Control'' in Supplementary Methods. See also Supplementary Fig. S11.
We define a set of $N+1$ discrete times $\{\tau_i\}$, $i = 0,1,\ldots,N$, where $\tau_0 = -1$ and $\tau_N = 1$, with an affine mapping between $t \in [t_0,t_f]$ and $\tau \in [-1,1]$. The choice of $\{\tau_i\}$ is key to the convergence of the fully discretized problem, so the points are typically chosen as the roots of an $(N+1)$th-order orthogonal polynomial, such as a Legendre or Chebyshev polynomial.
In fact, the type of PSOC one uses is typically named after the type of polynomial used to generate the discretization points.
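Consistent with the fixed endpoints $\tau_0=-1$ and $\tau_N=1$, one common choice is the Legendre-Gauss-Lobatto (LGL) family, whose interior nodes are the roots of $P_N'(\tau)$. The sketch below computes them with NumPy's Legendre-series utilities; the function name is ours, and PSOC toolboxes typically offer several node families.

```python
import numpy as np
from numpy.polynomial import legendre

def lgl_nodes(N):
    """Legendre-Gauss-Lobatto nodes on [-1, 1]: the endpoints plus the
    roots of P_N'(tau), giving N+1 collocation points tau_0=-1,...,tau_N=1."""
    PN = np.zeros(N + 1)
    PN[-1] = 1.0                  # Legendre-series coefficients of P_N
    interior = np.sort(legendre.legroots(legendre.legder(PN)))
    return np.concatenate(([-1.0], interior, [1.0]))

print(lgl_nodes(4))   # 5 nodes, symmetric about 0, clustered near the ends
```

For $N=4$ the interior nodes are $0$ and $\pm\sqrt{3/7}$, illustrating the characteristic clustering of spectral nodes near the interval endpoints.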
Let $\hat{\textbf{x}}(\tau) = \sum_{i=0}^N \hat{\textbf{x}}_i L_i(\tau)$ be an approximation of $\textbf{x}(\tau)$, where $L_i(\tau)$ is the $i$th Lagrange interpolating polynomial. The dynamical system is approximated by differentiating this expression with respect to $\tau$:
\begin{equation}
\begin{aligned}
\frac{d \hat{\textbf{x}}}{d \tau} = \sum_{i=0}^N \hat{\textbf{x}}_i \frac{d L_i}{d\tau}.
\end{aligned}
\end{equation}
Let $D_{k,i} = \frac{d}{d\tau} L_i(\tau_k)$, so that we may rewrite the original dynamical system constraint in Eq.\ \eqref{eq:OCP} as the following set of algebraic constraints:
\begin{equation}
\sum_{i=0}^N D_{k,i} \hat{\textbf{x}}_i - \frac{t_f-t_0}{2} \textbf{f}(\hat{\textbf{x}}_k,\hat{\textbf{u}}_k,\tau_k) = \boldsymbol{0},\ k = 0,\ldots,N.
\end{equation}
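The differentiation matrix $D_{k,i}$ can be built for arbitrary nodes via barycentric weights, as in this sketch (the function name is ours; production PSOC codes use numerically hardened variants of the same formula):

```python
import numpy as np

def diff_matrix(tau):
    """Differentiation matrix D_{k,i} = dL_i/dtau evaluated at tau_k for the
    Lagrange basis on nodes tau, using barycentric weights."""
    tau = np.asarray(tau, dtype=float)
    n = tau.size
    # Barycentric weights c_j = 1 / prod_{m != j} (tau_j - tau_m)
    c = np.array([1.0 / np.prod(tau[j] - np.delete(tau, j)) for j in range(n)])
    D = np.zeros((n, n))
    for k in range(n):
        for i in range(n):
            if i != k:
                D[k, i] = (c[i] / c[k]) / (tau[k] - tau[i])
        D[k, k] = -D[k].sum()     # each row annihilates constant functions
    return D

tau = np.array([-1.0, 0.0, 1.0])
D = diff_matrix(tau)
print(D @ tau**2)   # -> [-2.  0.  2.], the derivative of tau^2 at the nodes
```

Applying $D$ to samples of $\tau^2$ recovers $2\tau$ exactly, since differentiation of degree-$\le n{-}1$ polynomials is exact on $n$ nodes.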
With the original time-varying states and control inputs now discretized, the dynamical equations approximated with Lagrange interpolating polynomials, and the cost function approximated by a quadrature with weights $\omega_i$ (written $\omega_i$ rather than $w_i$ to avoid confusion with the drug concentrations), the discretized optimal control problem can be expressed as the following nonlinear programming (NLP) problem:
\begin{equation}\label{eq:dOCP}
\begin{aligned}
\min_{\substack{\textbf{u}_i\\ i=0,\ldots,N}} && &\hat{J} = \frac{t_f-t_0}{2} \sum_{i=0}^N \omega_i F(\hat{\textbf{x}}_i,\hat{\textbf{u}}_i,\tau_i)\\
\text{s.t.} && &\sum_{i=0}^N D_{k,i} \hat{\textbf{x}}_i - \frac{t_f-t_0}{2} \textbf{f}(\hat{\textbf{x}}_k,\hat{\textbf{u}}_k,\tau_k) = \boldsymbol{0},\ k = 0,\ldots,N\\
&& &\textbf{e}^L \leq \textbf{e}(\hat{\textbf{x}}_0,\hat{\textbf{x}}_N,\tau_0,\tau_N) \leq \textbf{e}^U\\
&& &\textbf{h}^L \leq \textbf{h}(\hat{\textbf{x}}_k,\hat{\textbf{u}}_k, \tau_k) \leq \textbf{h}^U,\ k = 0,\ldots,N\\
&& &t_i = \frac{t_f-t_0}{2}\tau_i + \frac{t_f+t_0}{2}
\end{aligned}
\end{equation}
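To make the transcription concrete, here is a self-contained toy instance in the spirit of Eq.\ \eqref{eq:dOCP}: a scalar minimum-fuel problem, $\dot{x} = -x + u$ on $[0,1]$ with $x(0)=1$, $x(1)=0.5$, and $u \geq 0$. To keep the sketch short it uses trapezoidal collocation on a uniform grid rather than true pseudo-spectral nodes, and SciPy's SLSQP solver rather than Ipopt; all names and numbers are illustrative, not the paper's setup.

```python
import numpy as np
from scipy.optimize import minimize

# Toy minimum-fuel transcription: scalar dynamics x' = -x + u on [0, 1],
# boundary conditions x(0) = 1 and x(1) = 0.5, with input bound u >= 0.
N = 20
t = np.linspace(0.0, 1.0, N + 1)
dt = t[1] - t[0]

def unpack(z):                         # decision vector holds states then inputs
    return z[:N + 1], z[N + 1:]

def objective(z):                      # fuel: int u dt, by the trapezoid rule
    _, u = unpack(z)
    return dt * (0.5 * u[0] + u[1:-1].sum() + 0.5 * u[-1])

def defects(z):                        # trapezoidal collocation of x' = -x + u
    x, u = unpack(z)
    f = -x + u
    return x[1:] - x[:-1] - 0.5 * dt * (f[1:] + f[:-1])

cons = [{'type': 'eq', 'fun': defects},
        {'type': 'eq', 'fun': lambda z: unpack(z)[0][0] - 1.0},
        {'type': 'eq', 'fun': lambda z: unpack(z)[0][-1] - 0.5}]
bounds = [(None, None)] * (N + 1) + [(0.0, None)] * (N + 1)
z0 = np.concatenate([np.linspace(1.0, 0.5, N + 1), 0.2 * np.ones(N + 1)])
res = minimize(objective, z0, bounds=bounds, constraints=cons, method='SLSQP')
print(res.fun)   # minimal total 'fuel' needed to steer x to the target
```

Even in this toy problem, the optimal input concentrates near the end of the horizon, where injected input decays least before the terminal constraint is checked, echoing the pulsed solutions discussed in the Results.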
We used $\mathcal{PSOPT}$ \cite{becerra2010solving}, an open-source PSOC toolbox written in C++, to perform the PSOC discretization procedure.
The NLP problem of Eq.\ \eqref{eq:dOCP} can be solved with a number of different techniques, but here we use an interior point algorithm \cite{nocedal2006numerical} as implemented in the open-source C++ software Ipopt \cite{wachter2006implementation}.
\section*{Acknowledgements}
I.S.K., A.S. and F.S. acknowledge support from the National Science Foundation (CRISP-1541148), the Office of Naval Research (N00014-16-1-2637), and the Defense Threat Reduction Agency (HDTRA1-12-1-0020). W.S.H. and Y.T.L. acknowledge support from the National Cancer Institute of the National Institutes of Health (R01CA197398). S.F. acknowledges support from the Center for Nonlinear Studies and the Laboratory-Directed Research and Development program at Los Alamos National Laboratory, which is operated by Triad National Security, LLC for the National Nuclear Security Administration of the U.S. Department of Energy (contract no. 89233218CNA000001). W.S.H. performed part of this work at the Aspen Center for Physics, which is supported by the National Science Foundation (PHY-1607611). We thank Jeffrey P. MacKeigan for helpful discussions.
\section*{Author Contributions Statement}
W.S.H. and F.S. designed the research; A.S., I.S.K., S.F. and Y.T.L. performed the research; and all authors contributed to the analyses of results and the writing of the manuscript. Modeling work was the primary responsibility of the Los Alamos National Laboratory authors. Optimal control work was the primary responsibility of the University of New Mexico authors.
\section*{Additional Information}
The authors declare no competing interests.
\section*{Data availability}
Problem-specific software used in this study is provided as Supplementary Data.
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{Fig1.pdf}
\caption{Schematic diagram of a minimalist mathematical model for regulation of autophagy and the effects of targeted drug interventions.
The model accounts for two physiological inputs (energy and nutrient supply) and regulatory influences, stimulatory or inhibitory, within a network of interacting kinases. Each kinase is taken to have a constant total abundance and to be dynamically distributed between active and inactive forms. The active fractions of MTORC1, ULK1, AMPK, and VPS34 are represented by $x_1$, $x_2$, $x_3$ and $x_4$, respectively. Targeted drugs, denoted by red ovals, promote kinase inactivation or activation as indicated. Six drug types are considered: 1) a kinase inhibitor specific for MTORC1, 2) a kinase inhibitor specific for both MTORC1 and VPS34, 3) an ULK1 kinase inhibitor, 4) an allosteric activator of AMPK, 5) an AMPK kinase inhibitor, and 6) a VPS34 kinase inhibitor. The supplies of cellular energy and nutrients ($C_{\rm En}$ and $C_{\rm Nu}$), together with drug concentrations ($w_1,\ldots,w_6$), determine the kinase activities of MTORC1, ULK1, AMPK, and VPS34 and thereby the rate of synthesis of autophagic vesicles (AVs). The control parameters are drug injection/input rates ($u_1,\ldots,u_6$). Note that drug clearance is not indicated in this diagram but is considered in the model equations.}
\label{fig:model}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width = 0.48\textwidth]{Fig2.pdf}
\caption{Predicted dependence of AV count on energy and nutrient supplies according to the model for autophagy regulation (Eq.\ \eqref{eq:ode}). (\emph{A}) Long-time behavior. In this panel, the stationary or time-averaged value of $x_5(t)$
for constant supplies of energy and nutrients as $t \rightarrow \infty$ is indicated by color over the full ranges of the two physiological inputs of the model: energy supply ($C_{\rm En}$) and nutrient supply ($C_{\rm Nu}$). It should be noted that we take the most extreme energy/nutrient starvation conditions to correspond to $C_{\rm En}=C_{\rm Nu}=0$, and we take the most extreme energy/nutrient replete conditions to correspond to $C_{\rm En}=C_{\rm Nu}=1$. The solid black curves delimit the regions where long-time behavior of $x_5$ is oscillatory or not. If behavior is oscillatory, the time-averaged value of $x_5$ is reported; otherwise, the stationary value is reported. A bifurcation analysis indicates that long-time behavior is characterized by a stable fixed point, the coexistence of a stable fixed point \emph{and} a stable limit cycle, or a stable limit cycle. The region labeled `oscillatory' indicates the conditions for which a stable limit cycle exists; however, this diagram is not intended to provide a full characterization of the possible qualitative behaviors and bifurcations of Eq.\ \eqref{eq:ode}. As indicated by the color bar, the (average) AV count varies over a range of roughly 2 to 37 vesicles per cell. (\emph{B}--\emph{E}) Transient behavior. Each of these plots shows $x_5$ as a function of time $t$ after a coordinated change in energy and nutrient supplies. The plot in panel \emph{B} shows the predicted response to a steep, step increase in stress level, i.e., a change in conditions from $C_{\rm En}=C_{\rm Nu}=1$ to $0.2$. The plot in panel \emph{C} shows the predicted response to a moderate, step increase in stress level, i.e., a change in conditions from $C_{\rm En}=C_{\rm Nu}=1$ to $0.6$. 
The plot in panel \emph{D} shows the predicted response to a moderate, step decrease in stress level, i.e., a change in conditions from $C_{\rm En}=C_{\rm Nu}=0.2$ to $0.6$. The plot in panel \emph{E} shows the predicted response to a steep, step decrease in stress level, i.e., a change in conditions from $C_{\rm En}=C_{\rm Nu}=0.2$ to $1$.}
\label{fig:bifur}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=0.48\textwidth]{Fig3.png}
\caption{Predicted dependence of AV count ($x_5$) on drug dose according to Eq.\ \eqref{eq:ode}. In each panel, we show the long-time effects of monotherapy with drug $i \in \{1,\ldots,6\}$; the drug considered in each panel is maintained at the constant (dimensionless) concentration indicated on the horizontal axis. Drugs 1--6 are considered from top to bottom. Responses to drugs depend on the supplies of energy and nutrients. The left panels (\emph{A}--\emph{F}) correspond to conditions for which $C_\text{Nu} = C_\text{En}=0.1$ (severe energy/nutrient stress), and the right panels (\emph{G}--\emph{L}) correspond to conditions for which $C_\text{Nu} = C_\text{En}=0.6$ (moderate energy/nutrient stress). The long-time behavior of $x_5$ under the influence of monotherapy can be stationary (with a stable fixed point) or oscillatory (with a stable limit cycle). The shaded regions indicate where there is oscillatory behavior. At a given drug dose, the top and bottom bounds of a shaded region delimit the envelope of oscillations (i.e., the maximum and minimum values of $x_5$).}
\label{fig:constinput}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=17.8cm]{Fig4.pdf}
\caption{Best performing monotherapies.
(\emph{A}--\emph{D}) Panels \emph{A}--\emph{D} are from a numerical experiment for which we set $C_{\text{Nu}} = C_{\text{En}}= 0.1$ and attempt to use drug 4 to downregulate the AV count. (\emph{E}--\emph{H}) Panels \emph{E}--\emph{H} are from a numerical experiment for which we set $C_{\text{Nu}} = C_{\text{En}}= 0.6$ and attempt to use drug 2 to downregulate the AV count. (\emph{I}--\emph{L}) Panels \emph{I}--\emph{L} are from a numerical experiment for which we set $C_{\text{Nu}} = C_{\text{En}}= 0.6$ and attempt to use drug 5 to upregulate the AV count. The plots in the first column are cumulative drug dosages for the monotherapies considered. The plots in the second column are the drug concentrations. The plots in the third column show $x_5(t)$, and the plots in the fourth, or rightmost, column show $x_1(t)$, $x_2(t)$, $x_3(t)$, and $x_4(t)$, which we make no attempt to control. In all simulations, the upper bound on the allowable concentration of drug $i$, $w_i^{\max}$, was set at $2$. For panels \emph{A}--\emph{H}, the target AV count was 10 (i.e., $x_5^f=10$). For panels \emph{I}--\emph{L}, the target AV count was 37 (i.e., $x_5^f=37$). The white region corresponds to the time interval $[t_0,t_f]$ when we either upregulate or downregulate the AV count. The shaded region corresponds to the time interval $[t_0, t_f]$ when the AV count is maintained within the interval $x_5^f \pm \epsilon$.}
\label{fig:best_mono}
\end{figure}
\begin{figure}[tbhp!]
\centering
\includegraphics[width=17.8cm]{Fig5.pdf}
\caption{Optimal dual therapies.
(\emph{A}--\emph{D}) Panels \emph{A}--\emph{D} are from a numerical experiment for which we set $C_{\text{Nu}} = C_{\text{En}}= 0.1$ and attempt to use a combination of drugs 2 and 6. (\emph{E}--\emph{H}) Panels \emph{E}--\emph{H} are from a numerical experiment in which we set $C_{\text{Nu}} = C_{\text{En}}= 0.6$ and attempt to use a combination of drugs 2 and 6. (\emph{I}--\emph{L}) Panels \emph{I}--\emph{L} are from a numerical experiment in which we set $C_{\text{Nu}} = C_{\text{En}}= 0.6$ and attempt to use a combination of drugs 3 and 6. (\emph{M}--\emph{P}) Panels \emph{M}--\emph{P} are from a numerical experiment in which we set $C_{\text{Nu}} = C_{\text{En}}= 0.6$ and attempt to use a combination of drugs 2 and 6. The plots in the first column are cumulative drug dosages for the dual therapies considered. The plots in the second column are drug concentrations. The plots in the third column show $x_5(t)$, and the plots in the fourth, rightmost, column show $x_1(t)$, $x_2(t)$, $x_3(t)$, and $x_4(t)$, which we did not attempt to control. In all the simulations, the target value for AV count was 10 (i.e., $x_5^f=10$) and the upper bound on each drug concentration $w_i$ was 2 (i.e., $w_i^{\max} = 2$). The white region corresponds to the time interval $[t_0,t_f]$ when we either upregulate or downregulate the AV count. The shaded region corresponds to the time interval $[t_0, t_f]$ when the AV count is maintained within the interval $x_5^f \pm \epsilon$.}
\label{fig:best_dual}
\end{figure}
\section*{\LARGE Supplementary Methods}
\section*{Formulation of the Model}
Formulation of Eq.\ (1) was guided by the models of Szyma{\'n}ska et al.\cite{szymanska2015computational} (Ref. 33 in the main text) and Martin et al.\cite{martin2013computational} (Ref. 34 in the main text) mainly as follows. The model of Eq.\ (1) was formulated and parameterized so as to allow the model to predict oscillatory induction of autophagy in response to intermediate drug, energy, and nutrient stress inputs (as illustrated in Figs. 2 and 3), in accord with the predictions of the model of Szyma{\'n}ska et al.\cite{szymanska2015computational}. Moreover, as in both models considered by Martin et al.\cite{martin2013computational}, Eq.\ (1) takes AVs to be turned over constitutively via a pseudo first-order degradative process. Another factor that drove model formulation and parameterization was the availability of measured AV dynamics induced by MTORC1 inhibition\cite{martin2013computational}. Eq.\ (1) was parameterized so as to reproduce the essential aspects of these dynamics (see below for more discussion).
Equation (1) differs from the earlier models of Szyma{\'n}ska et al.\cite{szymanska2015computational} and Martin et al.\cite{martin2013computational} mainly as follows. In the model of Szyma{\'n}ska et al.\cite{szymanska2015computational}, the regulatory influences depicted in Fig. 1 (e.g., mutual inhibition of MTORC1 and ULK1 and negative feedback from ULK1 to AMPK) are not explicitly represented, as is the case in the model of Eq.\ (1), where regulatory influences on enzymatic activities are represented explicitly using Hill functions. Rather, in the model of Szyma{\'n}ska et al.\cite{szymanska2015computational}, regulatory influences emerge from formal representations of the biomolecular interactions considered in the model, which are termed rules\cite{chylek2014}. In other words, Eq.\ (1) provides a model of regulatory influences and their effects, whereas the model of Szyma{\'n}ska et al.\cite{szymanska2015computational} provides a model of biomolecular interactions and their effects, which include emergent regulatory influences. The rules of the model of Szyma{\'n}ska et al.\cite{szymanska2015computational} can be processed automatically by the BioNetGen software package\cite{faeder2009} to obtain a system of 173 coupled ordinary differential equations (ODEs). These equations account for various complexes (e.g., a complex of AMPK and ULK1 that is generated when AMPK docks to a particular site in ULK1) and protein phosphoforms. In contrast, the model of Eq.\ (1) does not track these details. Rather, it simply tracks the activities of AMPK, MTORC1, and ULK1 (and also the activity of VPS34, which was not considered by Szyma{\'n}ska et al.\cite{szymanska2015computational}). In the model of Szyma{\'n}ska et al.\cite{szymanska2015computational}, AMPK, MTORC1, and ULK1 each has numerous states. In contrast, in the model of Eq.\ (1), these protein states are reduced to just two for each protein: active or inactive.
Although the model of Szyma{\'n}ska et al.\cite{szymanska2015computational} provides a mechanistically detailed representation of biomolecular interactions, it does not include a representation of autophagic vesicle (AV) population dynamics. To include a representation of AV population dynamics in Eq.\ (1), we started with the simple representation of AV production and clearance used in the AV population dynamics model of Martin et al.\cite{martin2013computational}:
\begin{equation*}
\frac{dV}{dt}= P^{\ast} - cV,
\end{equation*}
where $V$ is cellular AV count, $P^{\ast}$ is a condition-dependent zero-order rate constant for AV production, and $c$ is a pseudo first-order rate constant for clearance of AVs. In our model, we modified this equation by allowing the production rate to be time dependent. In Eq.\ (1) the rate of AV production is a linear function of VPS34 activity, $x_4(t)$. In other words, the rate of AV production is given by $k_3x_4(t)$ (vs. a constant, $P^{\ast}$).
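The production-clearance balance above has a closed-form solution that relaxes exponentially to the steady state $P^{\ast}/c$. A minimal sketch, with illustrative parameter values rather than the fitted ones:

```python
# Sketch of the AV population balance dV/dt = P* - cV that Eq. (1)
# builds on. Parameter values are illustrative only; with P* = 1 and
# c = 0.1 the steady-state AV count is P*/c = 10 vesicles per cell.
import numpy as np

def av_count(t, V0=2.0, P_star=1.0, c=0.1):
    """Analytic solution: V relaxes to the steady state P*/c."""
    Vss = P_star / c
    return Vss + (V0 - Vss) * np.exp(-c * t)

print(av_count(0.0))             # 2.0, the initial count
print(round(av_count(1e3), 3))   # 10.0, the steady state P*/c
```

Making the production term time dependent, as in Eq.\ (1) with rate $k_3 x_4(t)$, is what couples AV dynamics to the upstream kinase network.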
Parameter settings are summarized in Supplementary Tables S1 and S2. These settings are not uniquely determined by data; they were guided by the considerations explained below.
Parameter settings for parameters in the $h$ and $H$ Hill functions were determined first, as follows. For each Hill function, we initially set $r_b = 0$, $r_m = 1$, $\theta = 0.5$, and $n = 2$. (We omit indices in referring to these parameters for convenience.) We then varied parameter values (by hand tuning) to obtain qualitative behavior consistent with that predicted by the model of Szyma{\'n}ska et al.\cite{szymanska2015computational}. The behaviors of the two models are compared directly in Supplementary Fig.\ \ref{fig:model_vs_model_comparison}. In panels \emph{A} and \emph{B} of Supplementary Fig.\ \ref{fig:model_vs_model_comparison}, AV count ($x_5$) and ULK1 activity ($x_2$) are shown, respectively, as a function of time. Initially, in these plots, we consider a nutrient/energy replete condition ($C_{\rm En}=C_{\rm Nu}=1$) without rapamycin (or any other drug). A low dose of rapamycin is added at time $t=100$ min and then a high dose of rapamycin is added at time $t=200$ min. As can be seen, $x_5$ (Supplementary Fig.\ \ref{fig:model_vs_model_comparison}\emph{A}) and $x_2$ (Supplementary Fig.\ \ref{fig:model_vs_model_comparison}\emph{B}) initially have steady low values. After the initial introduction of rapamycin, these quantities begin to oscillate. After the second addition of rapamycin, the two quantities have steady high values. This behavior is qualitatively the same as the behavior predicted by the model of Szyma{\'n}ska et al.\cite{szymanska2015computational} (Supplementary Fig.\ \ref{fig:model_vs_model_comparison}\emph{C}). It should be noted that the study of Szyma{\'n}ska et al.\cite{szymanska2015computational} did not establish that the AMPK-MTORC1-ULK1 network actually exhibits oscillatory behavior; this study only showed that oscillatory behavior is a possible consequence of known regulatory mechanisms. 
By requiring Eq.\ (1) to reproduce the qualitative nonlinear dynamics of the model of Szyma{\'n}ska et al.\cite{szymanska2015computational}, we made the optimal control problem considered here more of a challenging test of our methodology.
Next, parameter settings for the rate constants $k_1$, $k_2$, $k_3$ and $k_4$ were determined (again through hand tuning). In the study of Martin et al.\cite{martin2013computational}, AV population dynamics were monitored after cells in a nutrient/energy replete condition were treated with a dose of rapamycin or AZD8055 (a catalytic MTOR inhibitor) sufficient to fully inhibit MTORC1 activity. We selected values for the rate constants that allow the model of Eq.\ (1) to roughly reproduce the observed dynamics induced by MTORC1 inhibition in the study of Martin et al.\cite{martin2013computational}. The behaviors predicted by Eq.\ (1) and the model of Martin et al.\cite{martin2013computational} are directly compared in panels \emph{D} and \emph{E} of Supplementary Fig.\ \ref{fig:model_vs_model_comparison}. The AV population dynamics model of Martin et al.\cite{martin2013computational} can be written as follows: $dV/dt=(1+k\delta)P-cV$, where $\delta=0$ indicates a zero dose of MTORC1 inhibitor, $\delta=1$ indicates a saturating dose of MTORC1 inhibitor, $P$ is the baseline rate of AV production, and $(1+k)P$ is the induced rate of AV production stimulated by a saturating dose of MTORC1 inhibitor. By varying $\delta$ from 0 to 1, we obtain the plots shown in Supplementary Fig.\ \ref{fig:model_vs_model_comparison}\emph{E}. Note that the model of Martin et al.\cite{martin2013computational} does not produce oscillatory AV dynamics at intermediate values of $\delta$, even though the analysis of Szyma{\'n}ska et al.\cite{szymanska2015computational} would lead us to expect oscillations under intermediate inhibition. In contrast, Eq.\ (1) does predict oscillatory AV dynamics at intermediate doses of MTORC1 inhibitor (Supplementary Fig.\ \ref{fig:model_vs_model_comparison}\emph{D}). Importantly, as desired, Eq.\ (1) makes predictions that are in qualitative agreement with the model of Martin et al.\cite{martin2013computational}, in that both models predict that AV dynamics stimulated by MTORC1 inhibitor treatment unfold on a similar timescale and that the maximal range of regulation is similar.
In Supplementary Fig.\ \ref{fig:model_vs_data_comparison}, we directly compare the AV dynamics predicted by Eq.\ (1) with AV dynamics measured by Martin et al.\cite{martin2013computational}. As can be seen, Eq.\ (1) is roughly consistent with the data.
Finally, parameter settings for the drug clearance rate constants in Eq.\ (1) ($\delta_1,\ldots,\delta_6$) were set in accordance with measured drug lifetimes reported in the literature, which have half-lives ranging from approximately 1 to 40 h. See Supplementary Table S2 and references cited therein. With this approach, the different drugs considered have different pharmacokinetics, arguably making the optimal control problem more realistic.
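For first-order clearance, a reported half-life converts directly to a rate constant via $\delta = \ln 2 / t_{1/2}$. The sketch below assumes the model's time unit is minutes (as in the simulation times quoted elsewhere in this supplement); the specific half-life values are illustrative endpoints of the 1 to 40 h range mentioned above, not the fitted values of Supplementary Table S2.

```python
# First-order drug clearance: a half-life t_half implies a clearance
# rate constant delta = ln(2) / t_half. The 1-40 h range quoted above
# then maps to the per-minute rates printed below (illustrative only).
import math

def clearance_rate(t_half_hours):
    """Clearance rate constant in 1/min for a half-life given in hours."""
    return math.log(2) / (t_half_hours * 60.0)

for t_half in (1, 40):
    print(f"t_1/2 = {t_half:2d} h -> delta = {clearance_rate(t_half):.2e} /min")
```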
\section*{Pseudo-Spectral Optimal Control}\label{sec:PSOC}
We present here a brief overview of the theory of pseudo-spectral optimal control (PSOC).
Before discussing the PSOC framework, we briefly review optimal control as well as the difficulties that arise when attempting to solve a general optimal control problem (OCP) analytically.
Afterwards, we describe how PSOC discretizes the OCP, approximating the original OCP as a nonlinear programming (NLP) problem.
Approximating the original problem as an NLP is beneficial because there exists a vast literature and many pieces of software capable of solving large-scale NLPs efficiently.
Finally, we discuss our choices of software, all of which are open-source, and briefly discuss the algorithms they implement.
%
\subsection*{Optimal Control}
%
The field of optimal control combines aspects of dynamical systems, optimization, and calculus of variations \cite{kirk2012optimal}.
In words, an optimal control problem is solved by finding a time varying control input $\textbf{u}(t)$ that minimizes a quantity $J(\textbf{x},\textbf{u},t)$ subject to a system's dynamics and other constraints.
%
\subsubsection*{General Problem}
Define the states of the system as $\textbf{x}(t) \in \mathbb{R}^n$, the control inputs as $\textbf{u}(t) \in \mathbb{R}^m$, and time $t \in [t_0,t_f]$ where $t_0 < t_f$.
The typical form of an optimal control problem for a continuous-time system can be written as,
%
\begin{equation}\label{eq:OCP}
\begin{aligned}
\min_{\textbf{u}(t)} && &J(\textbf{x}(t),\textbf{u}(t),t) = E\left(\textbf{x}(t_0),\textbf{x}(t_f), t_0, t_f\right) + \int_{t_0}^{t_f} F\left( \textbf{x}(t), \textbf{u}(t), t\right) d t\\
\text{s.t.} && &\dot{\textbf{x}}(t) = \textbf{f} ( \textbf{x}(t), \textbf{u}(t), t)\\
&& &\textbf{e}^L \leq \textbf{e}(\textbf{x}(t_0), \textbf{x}(t_f), t_0, t_f) \leq \textbf{e}^U\\
&& &\textbf{h}^L \leq \textbf{h}(\textbf{x}(t), \textbf{u}(t), t) \leq \textbf{h}^U\\
&& &t \in [t_0,t_f]
\end{aligned}
\end{equation}
%
The objective function (or cost function) $J(\textbf{x},\textbf{u},t)$ is composed of two parts: (i) $E : \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R} \times \mathbb{R} \mapsto \mathbb{R}$, a cost associated with the endpoint behavior of the system, $\textbf{x}(t_0)$ and $\textbf{x}(t_f)$; and (ii) $F : \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R} \mapsto \mathbb{R}$, a running cost accrued over the entire time interval $[t_0,t_f]$.
The system dynamics is described by the function $\textbf{f} : \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R} \mapsto \mathbb{R}^n$.
Constraints on the endpoints ($\textbf{x}(t_0)$ and/or $\textbf{x}(t_f)$) are described by $\textbf{e} : \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R} \times \mathbb{R} \mapsto \mathbb{R}^e$.
While we only specify initial conditions, more complicated relations between the endpoints of the states can be specified as well.
Finally, path constraints, such as bounds on the states or control inputs, are described by $\textbf{h} : \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R} \mapsto \mathbb{R}^h$.
%
\subsubsection*{Notation for Therapies}
Let $\mathcal{D} = \{1,2,3,4,5,6\}$ denote the possible drugs we may use (described in the main text) and $\mathcal{T}_k \subseteq \mathcal{D}$ denote the drugs chosen for our therapy such that $|\mathcal{T}_k| = k$.
%
Let $\textbf{w}(t) \in \mathbb{R}^k$ denote the drug concentrations and $\textbf{u}(t) \in \mathbb{R}^k$ denote the drug injection rates for \emph{only those drugs chosen to be in the therapy}.
For example, if we consider the dual therapy $\mathcal{T}_2 = \{3,6\}$, then
%
\begin{equation}
\begin{aligned}
\textbf{w}(t) = \left[ \begin{array}{c}
w_3(t) \\ w_6(t)
\end{array} \right], && \textbf{u}(t) = \left[ \begin{array}{c}
u_3(t) \\ u_6(t)
\end{array} \right]
\end{aligned}
\end{equation}
%
Those drugs not chosen to be in $\mathcal{T}_k$ are denoted $\mathcal{D} \backslash \mathcal{T}_k$.
In the example where $\mathcal{T}_k = \{3,6\}$, those drugs not used are $\mathcal{D} \backslash \mathcal{T}_k = \{1,2,4,5\}$.
If a drug $i \in \mathcal{D} \backslash \mathcal{T}_k$, then we set $w_i(t) = 0$ for all time $t$.
The drug concentrations appear in the dynamical equations as inhibitory Hill functions $H(w_i(t))$.
%
\begin{equation}\label{eq:Hill}
H(w_i(t)) = r_{m,i} - (r_{m,i} -r_{b,i}) \frac{w_i^{n_i}(t)}{w_i^{n_i}(t)+\theta^{n_i}}
\end{equation}
%
Note that if $i \notin \mathcal{T}_k$, then, as stated previously, $w_i(t) = 0$, and so, by Eq.\ \eqref{eq:Hill}, $H(w_i(t)) = 1$ for all time $t$.
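Eq.\ \eqref{eq:Hill} translates directly into code. The defaults below use the initial parameter settings quoted later in this supplement ($r_b = 0$, $r_m = 1$, $\theta = 0.5$, $n = 2$), not the final tuned values:

```python
# The inhibitory Hill function of the drug-effect terms. With r_m = 1
# it satisfies H(0) = 1 (no drug, no inhibition), H(theta) is the
# half-maximal effect, and H(w) -> r_b as w grows large.
def hill_inhibitory(w, r_m=1.0, r_b=0.0, theta=0.5, n=2):
    return r_m - (r_m - r_b) * w**n / (w**n + theta**n)

print(hill_inhibitory(0.0))   # 1.0: absent drugs leave the term at 1
print(hill_inhibitory(0.5))   # 0.5: half-maximal effect at w = theta
```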
\subsubsection*{The Minimum Drug OCP}
In the main text, we present a \emph{multi-phase optimal control problem}, i.e., two optimal control problems linked together by enforcing continuity at their interface.
Despite this added complexity, we can develop a set of necessary conditions for each phase individually and so for now we focus on the single phase problem.
We will return to the multi-phase problem in the next section that covers the discretization procedure.
Either phase of the OCP presented in the main text can be mapped to the general formulation presented in Eq.\ \eqref{eq:OCP} with the following definitions.
%
\begin{itemize}
\item The state variables $\textbf{x}(t) = \left[\begin{array}{cccccc} x_1(t) & x_2(t) & x_3(t) & x_4(t) & x_5(t) & \textbf{w}^T(t)\end{array} \right]^T \in \mathbb{R}^{5+k}$ and the control input $\textbf{u}(t)\in \mathbb{R}^k$ so that $n = 5+k$ and $m = k$.
\item The cost function $J = \int_{t_0}^{t_f} \sum_{i \in \mathcal{T}} u_i(t)\, d t$ (see Eq.\ (1) in the main text) so that, from Eq.\ \eqref{eq:OCP}, $E \equiv 0$ and $F = \sum_{i \in \mathcal{T}} u_i(t)$.
\item The system dynamics, as presented in Eq.\ (1), are rewritten here,
%
\begin{equation}
\begin{aligned}
\dot{\textbf{x}}(t) &= \left[ \begin{array}{c}
\dot{x}_1(t)\\
\dot{x}_2(t)\\
\dot{x}_3(t)\\
\dot{x}_4(t)\\
\dot{x}_5(t)\\
\dot{\textbf{w}}(t)
\end{array} \right] = \textbf{f}(\textbf{x}(t),\textbf{u}(t)) = \bar{\textbf{f}}(\textbf{x}(t)) + B \textbf{u}(t)\\
&= \left[ \begin{array}{c}
(1-x_1)C_\text{Nu} H(w_1) H(w_2) - x_1 h_{12}(x_2)h_{13}(x_3)\\
(1-x_2) h_{23}(x_3) H(w_3) - x_2 h_{21}(x_1)\\
(1-x_3) k_1 H(w_4) - C_\text{En}x_2 x_3 H(w_5)\\
(1-x_4)h_{42}(x_2) H(w_2)H(w_6) - k_2x_4\\
k_3x_4 - k_4 x_5\\
- \Delta \textbf{w}(t)
\end{array} \right] + \left[ \begin{array}{c}
\boldsymbol{0}_k^T\\
\boldsymbol{0}_k^T\\
\boldsymbol{0}_k^T\\
\boldsymbol{0}_k^T\\
\boldsymbol{0}_k^T\\
I_k
\end{array} \right] \textbf{u}(t)
\end{aligned}
\end{equation}
where $\boldsymbol{0}_k$ is a vector of all zeros of length $k$, $I_k$ is the identity matrix of order $k$, and $\Delta$ is a diagonal matrix with the corresponding rates $\delta_i$ on the diagonal if $i \in \mathcal{T}$.
For example, if $\mathcal{T} = \{3,6\}$, then
%
\begin{equation}
\Delta = \left[ \begin{array}{cc}
\delta_3 & 0 \\ 0 & \delta_6
\end{array} \right]
\end{equation}
%
Also, note that if $i \notin \mathcal{T}$, then $w_i(t) \equiv 0$ and $H(w_i(t)) = 1$.
%
\item The only endpoint constraints are set at the initial time,
%
\begin{equation}
\begin{aligned}
\textbf{e}(\textbf{x}(t_0),\textbf{x}(t_f),t_0,t_f) = \left[ \begin{array}{c}
x_1(0) \\ x_2(0) \\ x_3(0) \\ x_4(0) \\ x_5(0) \\ \textbf{w}(0)
\end{array} \right], && \textbf{e}^L = \textbf{e}^U = \left[ \begin{array}{c}
x_{1,0} \\ x_{2,0} \\ x_{3,0} \\ x_{4,0} \\ x_{5,0} \\ \boldsymbol{0}_k
\end{array} \right]
\end{aligned}
\end{equation}
%
%
where $x_{i,0}$ is chosen to either be the steady state value of the system in the absence of control inputs or the time-average of the time evolution of the system if the dynamics, in the absence of control inputs, is oscillatory.
We assume there is no drug present initially so $w_i(0) = 0$, $i \in \mathcal{D}$.
%
\item Finally, the path constraints consist of upper bounds on the drug concentrations and possibly a lower and/or upper bound on the AVs.
%
\begin{equation}\label{eq:h}
\begin{aligned}
\textbf{h}(\textbf{x}(t),\textbf{u}(t),t) = \left[ \begin{array}{c}
x_5(t) \\ \textbf{w}(t) \\ \textbf{u}(t)
\end{array} \right], && \textbf{h}^L = \left[ \begin{array}{c}
x_5^L \\
\boldsymbol{0}_k \\
\boldsymbol{0}_k
\end{array} \right], &&
\textbf{h}^U = \left[ \begin{array}{c}
x_5^U \\ w^{\max} \boldsymbol{1}_k \\ \infty \boldsymbol{1}_k
\end{array} \right]
\end{aligned}
\end{equation}
%
where, for the first phase, $x_5^L = 0$ and $x_5^U = \infty$, but for the second phase we choose $x_5^L = x_5^f - \epsilon$ and $x_5^U = x_5^f + \epsilon$.
Also, the upper bound on the drug concentration is chosen to be identical for all drugs in the therapy.
\end{itemize}
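The drug-free dynamics (all $H(w_i) = 1$) can be integrated numerically as a quick sanity check. In the sketch below, the Hill-function shapes and rate constants are illustrative placeholders, not the fitted values of Supplementary Tables S1 and S2, and a single generic stimulatory Hill function stands in for the various $h_{jk}$:

```python
# Sketch of integrating the uncontrolled dynamics x' = f(x) with SciPy.
# Hill parameters and rate constants are illustrative placeholders,
# NOT the fitted values from Supplementary Tables S1-S2; no drugs are
# present, so every drug factor H(w_i) equals 1 and is omitted.
import numpy as np
from scipy.integrate import solve_ivp

def h(x, theta=0.5, n=2):                 # generic stimulatory Hill term
    return x**n / (x**n + theta**n)

def rhs(t, x, C_Nu=0.6, C_En=0.6, k1=0.2, k2=0.2, k3=10.0, k4=0.3):
    x1, x2, x3, x4, x5 = x
    return [(1 - x1) * C_Nu - x1 * h(x2) * h(x3),   # MTORC1 activity
            (1 - x2) * h(x3) - x2 * h(x1),          # ULK1 activity
            (1 - x3) * k1 - C_En * x2 * x3,         # AMPK activity
            (1 - x4) * h(x2) - k2 * x4,             # VPS34 activity
            k3 * x4 - k4 * x5]                      # AV count

sol = solve_ivp(rhs, (0, 500), [0.5, 0.1, 0.5, 0.1, 5.0], rtol=1e-8)
print(sol.y[:, -1].round(2))   # state after 500 time units
```

Because each activation term vanishes at an active fraction of 1 and each deactivation term vanishes at 0, the kinase states remain in $[0,1]$, as they should.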
%
Solving Eq.\ \eqref{eq:OCP} is not a trivial task, and typically there exists no closed-form solution.
Instead one typically must turn to numerical methods, such as PSOC, which we will discuss in the subsequent subsections in some detail.
Nonetheless, one can derive a set of necessary conditions that any solution to Eq.\ \eqref{eq:OCP} must satisfy using Pontryagin's minimum principle \cite{kirk2012optimal}.
Developing these types of necessary conditions allows us to construct a set of validation criteria with which we may test the quality of any solution returned by our numerical methods.
A full derivation of Pontryagin's minimum principle is beyond the scope of this work but it is readily available in many standard texts \cite{kirk2012optimal}.
Here, we present the main results surrounding the Hamiltonian constructed from Eq.\ \eqref{eq:OCP}.
\subsubsection*{Minimizing the Hamiltonian}
Define a vector of time-varying costates (or adjoint variables) as $\boldsymbol{\lambda}(t) = \left[ \begin{array}{cc} \boldsymbol{\lambda}_{\textbf{x}}^T(t) & \boldsymbol{\lambda}_{\textbf{w}}^T(t) \end{array} \right]^T \in \mathbb{R}^{5+k}$ so that $\boldsymbol{\lambda}_{\textbf{x}}(t) \in \mathbb{R}^5$ and $\boldsymbol{\lambda}_{\textbf{w}}(t) \in \mathbb{R}^k$.
The Hamiltonian of the OCP in Eq.\ \eqref{eq:OCP} is defined as,
%
\begin{equation}\label{eq:H}
\begin{aligned}
H(\boldsymbol{\lambda}, \textbf{x}, \textbf{u}, t) &= F(\textbf{x}, \textbf{u}, t) + \boldsymbol{\lambda}^T \textbf{f}(\textbf{x},\textbf{u},t)\\
&=\sum_{i \in \mathcal{T}} u_i + \boldsymbol{\lambda}^T \bar{\textbf{f}}(\textbf{x}) + \boldsymbol{\lambda}^T B \textbf{u}
\end{aligned}
\end{equation}
%
where $\boldsymbol{\lambda}(t) \in \mathbb{R}^n$ are the costates (or adjoint variables).
%
A solution to Eq.\ \eqref{eq:OCP} must also be a solution of the following minimization problem.
%
\begin{equation}\label{eq:HMC}
\begin{aligned}
\min_{\textbf{u}(t)} && &H(\boldsymbol{\lambda},\textbf{x},\textbf{u},t)\\
\text{s.t.} && &\textbf{h}^L \leq \textbf{h}(\textbf{x},\textbf{u},t) \leq \textbf{h}^U
\end{aligned}
\end{equation}
%
To solve Eq.\ \eqref{eq:HMC}, we define the associated Lagrangian,
%
\begin{equation}\label{eq:Hbar}
\begin{aligned}
\bar{H} (\boldsymbol{\mu},\boldsymbol{\lambda}, \textbf{x}, \textbf{u}, t) &= H(\boldsymbol{\lambda}, \textbf{x}, \textbf{u}, t) + \boldsymbol{\mu}^T \textbf{h}(\textbf{x},\textbf{u},t)\\
&= \sum_{i \in \mathcal{T}} u_i + \boldsymbol{\lambda}^T \bar{\textbf{f}}(\textbf{x}) + \boldsymbol{\lambda}^T B \textbf{u} + \mu_{x_5} x_5 + \boldsymbol{\mu}_{\textbf{w}}^T \textbf{w} + \boldsymbol{\mu}_{\textbf{u}}^T \textbf{u}
\end{aligned}
\end{equation}
%
where $\boldsymbol{\mu} = \left[ \begin{array}{ccc} \mu_{x_5} & \boldsymbol{\mu}_{\textbf{w}}^T & \boldsymbol{\mu}_{\textbf{u}}^T \end{array} \right]^T \in \mathbb{R}^h$ is the copath vector with components associated with the components of the vector of path constraints in \eqref{eq:h}.
A solution to Eq.\ \eqref{eq:HMC}, and thus to our original OCP, must satisfy,
%
\begin{equation}\label{eq:dHdu}
\begin{aligned}
\frac{\partial \bar{H}}{\partial \textbf{u}} = \boldsymbol{1}_k + B^T \boldsymbol{\lambda} + \boldsymbol{\mu}_{\textbf{u}} = \boldsymbol{0}
\end{aligned}
\end{equation}
%
where the costates evolve according to the dynamical equation,
%
\begin{equation}
\dot{\boldsymbol{\lambda}} = -\frac{\partial \bar{H}}{\partial \textbf{x}} = - \left( \frac{\partial \bar{\textbf{f}}}{\partial \textbf{x}} \right)^T \boldsymbol{\lambda} - \left[ \begin{array}{c}
\boldsymbol{0}_4 \\ \mu_{x_5} \\ \boldsymbol{\mu}_{\textbf{w}}
\end{array} \right]
\end{equation}
%
The optimal control input $u_i(t)$, $i \in \mathcal{T}$, must satisfy the complementarity condition \cite{nocedal2006numerical, ross2015primer}
%
\begin{equation}\label{eq:complement}
\left\{ \begin{aligned}
u_i(t) = 0 && \text{if} && \mu_{u_i}(t) < 0\\
u_i(t) \geq 0 && \text{if} && \mu_{u_i}(t) = 0\\
u_i(t) \rightarrow \infty && \text{if} && \mu_{u_i}(t) > 0
\end{aligned} \right.
\end{equation}
%
Combining Eqs.\ \eqref{eq:dHdu} and \eqref{eq:complement}, we can relate $\boldsymbol{\mu}_{\textbf{u}}$ to the time-varying costates by noting from the structure of $B$, $B^T \boldsymbol{\lambda} = \boldsymbol{\lambda}_{\textbf{w}}$ so that,
%
\begin{equation}
\boldsymbol{\mu}_{\textbf{u}}(t) = -\boldsymbol{1}_k - \boldsymbol{\lambda}_{\textbf{w}}(t)
\end{equation}
%
Thus, if $\lambda_{w_i} > -1$ then $u_i = 0$, but if $\lambda_{w_i} = -1$, then all we can say is that $u_i \geq 0$.
When $\lambda_{w_i} = -1$ over a finite interval of time, the optimal control is said to contain a \emph{singular arc} (see chapter 5 in \cite{kirk2012optimal}).
Despite the technical difficulties, we have arrived at our first set of validation conditions, that is,
%
\begin{equation}
\begin{aligned}
u_i \cdot (\lambda_{w_i}+1) = 0, && \forall i \in \mathcal{T}
\end{aligned}
\end{equation}
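This condition is easy to verify pointwise on solver output: wherever the injection rate is positive, the corresponding costate must sit at $-1$. A hypothetical numerical check (array shapes and values invented for illustration):

```python
# Numerical check of the complementarity validation condition
# u_i(t) * (lambda_{w_i}(t) + 1) = 0 at every sampled time point.
import numpy as np

def check_complementarity(u, lam_w, tol=1e-6):
    """u, lam_w: arrays of shape (k, n_nodes) from a PSOC solution."""
    residual = np.abs(u * (lam_w + 1.0))
    return float(residual.max()) <= tol

# hypothetical bang-off profile: drug on (u > 0, lam = -1), then off
u = np.array([[0.8, 0.5, 0.0, 0.0]])
lam_w = np.array([[-1.0, -1.0, -0.3, 0.2]])
print(check_complementarity(u, lam_w))   # True
```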
%
Let us now assume that we have solved Eq.\ \eqref{eq:HMC}, that is,
%
\begin{equation}
\begin{aligned}
\mathcal{H}(t) = \min_{\textbf{u} \in \mathbb{U}} H(\boldsymbol{\lambda},\textbf{x},\textbf{u},t)
\end{aligned}
\end{equation}
%
where $\mathbb{U}$ is the set of feasible control inputs, i.e., they satisfy all of the constraints imposed by Eq.\ \eqref{eq:OCP}.
%
The evolution of the Hamiltonian at the optimal solution can be written,
%
\begin{equation}
\frac{d \mathcal{H}}{d t} = \frac{\partial H}{\partial t}
\end{equation}
%
where, since $H$ in our OCP does not depend explicitly on time, we expect that $d\mathcal{H} / d t = 0$, and so $\mathcal{H}$ should be constant.
This is the second validation condition.
While in the paper and the supplementary sections we display time traces of the states and the control inputs as they are the quantities of interest to the general reader, we are also able to access the costate and copath time traces, as well as the time trace of the Hamiltonian.
In Fig. \ref{fig:VV} we show a typical set of outputs that we use for measuring the quality of our returned numerical solution.
The sample shows a monotherapy where $\mathcal{T} = \{4\}$.
Panel (a) shows the level of AVs, $x_5(t)$, and panel (b) shows the drug concentration $w_4(t)$.
Panel (c) contains the copath associated with the level of AVs, $\mu_{x_5}(t)$.
Note that during the first phase when there is no finite bound on $x_5(t)$ the copath $\mu_{x_5}(t) = 0$, while during second phase if $\mu_{x_5}(t) \neq 0$ then $x_5(t) = x_5^f \pm \epsilon$.
In panel (d) we plot the other copath $\mu_{w_4}(t)$.
The control input $u_4(t)$ itself is shown in panel (e) along with the costate $\lambda_{w_4}(t)$ in panel (f).
Note that the times at which $u_4(t) > 0$ correspond to times when $\lambda_{w_4}(t) = -1$ as expected.
Panel (g) plots the time evolution of the Hamiltonian evaluated at the optimal solution.
Note that the $y$-axis is scaled by $10^{-2}$.
We see that $\mathcal{H} \approx \text{const}$ within each phase, with a jump occurring at the interface between the two phases.
As we cannot say anything about the value of the Hamiltonian at the interface, a discontinuity at this point in time can be expected.
\subsection*{Discretization of the OCP}
As presented in the previous subsection, the set of necessary conditions which must be satisfied consists of a system of coupled nonlinear differential equations for $\textbf{x}(t)$ and $\boldsymbol{\lambda}(t)$ along with a set of non-trivial constraints.
Searching for an analytic solution is unlikely to be successful and so instead we turn to pseudospectral optimal control (PSOC).
In short, PSOC is a methodology by which one may discretize an OCP, approximating the integrals by quadratures and the time-varying states and control inputs with interpolating polynomials.
The key to PSOC is choosing the discretization points properly.
Let $\{\tau_i\}$, $i = 0, \ldots, N$, denote the discretization points.
Typically these are chosen as the roots of an orthogonal polynomial such as a Legendre polynomial or a Chebyshev polynomial of order $N$.
For some popular choices of discretization schemes see \cite{rao2009survey}.
For concreteness, we will assume that $\tau_0 = -1$ and $\tau_N = 1$, i.e., we are using a discretization scheme that includes the endpoints and is normalized by the mapping,
%
\begin{equation}
t = \frac{t_f-t_0}{2} \tau + \frac{t_f+t_0}{2}
\end{equation}
%
For the discretization scheme chosen, we also compute the associated quadrature weights.
For instance, if we choose the roots of a Legendre polynomial as the discretization scheme, the associated quadrature weights can be found in the typical way for Gauss quadrature.
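As an illustration of nodes and weights, the sketch below uses Legendre-Gauss points (which, unlike the endpoint-including scheme assumed above, contain only interior points) and checks the exactness of the resulting quadrature after mapping to a physical interval:

```python
import numpy as np

# Legendre-Gauss nodes and weights on [-1, 1] (interior points only;
# schemes such as Legendre-Gauss-Lobatto additionally include the endpoints).
N = 8
tau, w = np.polynomial.legendre.leggauss(N)

# Affine map from the normalized domain [-1, 1] to a physical interval [t0, tf].
t0, tf = 0.0, 240.0
t = 0.5 * (tf - t0) * tau + 0.5 * (tf + t0)

# Gauss quadrature with N nodes is exact for polynomials up to degree 2N - 1.
integral = 0.5 * (tf - t0) * np.sum(w * t**3)
exact = tf**4 / 4.0
assert abs(integral - exact) < 1e-6 * exact
```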
The time-varying states and control inputs are found by approximating them with a Lagrange interpolating polynomial.
%
\begin{equation}\label{eq:dxu}
\begin{aligned}
\textbf{x}(\tau) &\approx \hat{\textbf{x}}(\tau) = \sum_{i=0}^N \hat{\textbf{x}}_i L_i(\tau)\\
\textbf{u}(\tau) &\approx \hat{\textbf{u}}(\tau) = \sum_{i=0}^N \hat{\textbf{u}}_i L_i(\tau)
\end{aligned}
\end{equation}
%
The Lagrange interpolating polynomials are defined as,
%
\begin{equation}
L_i(\tau) = \prod_{j=0,j\neq i}^N \frac{\tau - \tau_j}{\tau_i - \tau_j}
\end{equation}
%
Note that the Lagrange interpolating polynomials satisfy the isolation property, that is, $L_i(\tau_j) = \delta_{i,j}$.
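A direct evaluation of the basis polynomials confirms the isolation property; the sketch below uses Chebyshev-Gauss-Lobatto nodes purely as an example node set:

```python
import numpy as np

def lagrange_basis(tau_nodes, i, tau):
    """Evaluate the i-th Lagrange basis polynomial L_i at tau."""
    L = 1.0
    for j, tj in enumerate(tau_nodes):
        if j != i:
            L *= (tau - tj) / (tau_nodes[i] - tj)
    return L

# Chebyshev-Gauss-Lobatto points on [-1, 1] (endpoints included), N = 5.
N = 5
nodes = -np.cos(np.pi * np.arange(N + 1) / N)

# Isolation property: L_i(tau_j) = delta_{ij}.
for i in range(N + 1):
    for j in range(N + 1):
        val = lagrange_basis(nodes, i, nodes[j])
        assert abs(val - (1.0 if i == j else 0.0)) < 1e-12
```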
We can thus construct a set of algebraic equations corresponding to the discretization points $\{\tau_i\}$.
Define $D_{k,i} = \frac{d L_i}{d\tau}(\tau_k)$ so that the derivative of the states at the discretization points can be approximated as,
%
\begin{equation}\label{eq:dxhat}
\dot{\hat{\textbf{x}}}(\tau_k) = \sum_{i=0}^N \hat{\textbf{x}}_i D_{k,i}
\end{equation}
%
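The entries $D_{k,i}$ can be computed by differentiating each basis polynomial. The following sketch builds $D$ in monomial form (adequate for small $N$; a barycentric formula is preferable for large $N$) and checks that it differentiates a low-degree polynomial exactly:

```python
import numpy as np

def diff_matrix(nodes):
    """D[k, i] = dL_i/dtau evaluated at tau_k."""
    m = len(nodes)
    D = np.zeros((m, m))
    for i in range(m):
        # L_i has roots at every node except tau_i; normalize so L_i(tau_i) = 1.
        poly = np.poly1d(np.delete(nodes, i), r=True)
        D[:, i] = poly.deriv()(nodes) / poly(nodes[i])
    return D

# Chebyshev-Gauss-Lobatto nodes for N = 5 (endpoints included).
nodes = -np.cos(np.pi * np.arange(6) / 5)
D = diff_matrix(nodes)

# D differentiates polynomials of degree <= N exactly: d/dtau (tau^3) = 3 tau^2.
assert np.allclose(D @ nodes**3, 3.0 * nodes**2, atol=1e-10)
```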
With Eqs.\ \eqref{eq:dxu} and \eqref{eq:dxhat}, we can approximate the original system of $n$ differential equations as $n(N+1)$ algebraic equations.
%
\begin{equation}\label{eq:discstate}
\begin{aligned}
\sum_{i=0}^N D_{k,i} \hat{\textbf{x}}_i - \frac{t_f-t_0}{2} \textbf{f}(\hat{\textbf{x}}_k, \hat{\textbf{u}}_k, \tau_k) = \boldsymbol{0}_n, && k = 1,\ldots,N\\
\hat{\textbf{x}}_N - \hat{\textbf{x}}_0 - \sum_{k=1}^N \sum_{i=0}^N w_k D_{k,i} \hat{\textbf{x}}_i = \boldsymbol{0}_n
\end{aligned}
\end{equation}
%
%
The last set of algebraic constraints arise from the consistency condition $\int_{t_0}^{t_f} \dot{\textbf{x}}(t) d t = \textbf{x}(t_f) - \textbf{x}(t_0)$.
Similarly to the consistency condition, the integral in the cost function is approximated as,
%
\begin{equation}
J = \int_{t_0}^{t_f} F(\textbf{x},\textbf{u},t)\, d t \approx \hat{J} = \frac{t_f-t_0}{2} \sum_{k=1}^N w_k F(\hat{\textbf{x}}_k, \hat{\textbf{u}}_k, \tau_k)
\end{equation}
%
The discretized approximation of the original OCP is compiled into the following nonlinear programming (NLP) problem.
%
\begin{equation}\label{eq:NLP}
\begin{aligned}
\min_{\textbf{u}_i} && &\hat{J} = \frac{t_f-t_0}{2} \sum_{k=1}^N w_k F(\hat{\textbf{x}}_k, \hat{\textbf{u}}_k, \tau_k)\\
\text{s.t.} && &\sum_{i=0}^N D_{k,i} \hat{\textbf{x}}_i - \frac{t_f-t_0}{2} \textbf{f}(\hat{\textbf{x}}_k,\hat{\textbf{u}}_k, \tau_k) = \boldsymbol{0},\ k = 1,\ldots,N\\
&& &\hat{\textbf{x}}_N - \hat{\textbf{x}}_0 - \sum_{k=1}^N \sum_{i=0}^N w_k D_{k,i} \hat{\textbf{x}}_i = \boldsymbol{0}\\
&& &\textbf{e}^L \leq \textbf{e}(\hat{\textbf{x}}_0,\hat{\textbf{x}}_N, \tau_0,\tau_N) \leq \textbf{e}^U\\
&& &\textbf{h}^L \leq \textbf{h}(\hat{\textbf{x}}_k, \hat{\textbf{u}}_k, \tau_k) \leq \textbf{h}^U
\end{aligned}
\end{equation}
%
With the above results, we now present the application to the full multi-phase optimal control problem.
In general, let us assume there are $P$ phases, where $P=2$ in our problem.
Each phase is active within the interval $t \in [t_0^{(p)},t_f^{(p)}]$.
In each phase there is a cost function $J^{(p)}$, a dynamical system $\textbf{f}^{(p)}$, a set of endpoint constraints $\textbf{e}^{(p)}$, and a set of path constraints $\textbf{h}^{(p)}$.
If two phases, $p$ and $q$, are linked, then there also exists a set of linkage constraints $\Phi^{(p,q)}$.
%
\begin{equation}\label{eq:mpOCP}
\begin{aligned}
\min_{\textbf{u}^{(p)}} && &\sum_{p = 1}^P J^{(p)} = \sum_{p=1}^P \int_{t_0^{(p)}}^{t_f^{(p)}} F^{(p)}(\textbf{x}^{(p)},\textbf{u}^{(p)},t) d t\\
\text{s.t.} && &\dot{\textbf{x}}^{(p)}(t) = \textbf{f}^{(p)}(\textbf{x}^{(p)}, \textbf{u}^{(p)}, t)\\
&& &\textbf{h}^{L,(p)} \leq \textbf{h}^{(p)}(\textbf{x}^{(p)},\textbf{u}^{(p)},t) \leq \textbf{h}^{U,(p)}\\
&& &\textbf{e}^{L,(p)} \leq \textbf{e}^{(p)}(\textbf{x}^{(p)}(t_0^{(p)}), \textbf{x}^{(p)}(t_f^{(p)}), t_0^{(p)}, t_f^{(p)}) \leq \textbf{e}^{U,(p)}\\
&& &\Phi^{L,(p,q)} \leq \Phi^{(p,q)}(\textbf{x}^{(p)},\textbf{x}^{(q)},\textbf{u}^{(p)},\textbf{u}^{(q)}) \leq \Phi^{U,(p,q)}
\end{aligned}
\end{equation}
%
Each phase is discretized with its own set of points, $\{\tau_i^{(p)}\}$ so that,
%
\begin{equation}
\textbf{x}^{(p)}(\tau) \approx \hat{\textbf{x}}^{(p)}(\tau) = \sum_{i=0}^N \hat{\textbf{x}}_i^{(p)} L_i(\tau)
\end{equation}
%
so that the full multi-phase NLP is,
%
\begin{equation}\label{eq:dmpOCP}
\begin{aligned}
\min_{\textbf{u}_i^{(p)}} && &\sum_{p=1}^P \frac{t_f^{(p)} - t_0^{(p)}}{2} \sum_{k=1}^N w_k F^{(p)}(\hat{\textbf{x}}_k^{(p)},\hat{\textbf{u}}_k^{(p)},\tau_k)\\
\text{s.t.} && &\sum_{i=0}^N D_{k,i} \hat{\textbf{x}}_i^{(p)} - \frac{t_f^{(p)}- t_0^{(p)}}{2} \textbf{f}^{(p)} (\hat{\textbf{x}}^{(p)}_k, \hat{\textbf{u}}_k^{(p)},\tau_k) = \boldsymbol{0}_n, \quad p = 1,\ldots,P, \quad k = 1,\ldots,N\\
&& &\hat{\textbf{x}}_N^{(p)} - \hat{\textbf{x}}_0^{(p)} - \sum_{k=1}^N \sum_{i=0}^N w_k D_{k,i} \hat{\textbf{x}}_i^{(p)} = \boldsymbol{0}_n, \quad p = 1,\ldots,P\\
&& &\textbf{e}^{L,(p)} \leq \textbf{e}^{(p)}(\hat{\textbf{x}}_0^{(p)},\hat{\textbf{x}}_N^{(p)},t_0^{(p)},t_f^{(p)}) \leq \textbf{e}^{U,(p)}, \quad p = 1,\ldots,P\\
&& &\textbf{h}^{L,(p)} \leq \textbf{h}^{(p)} (\hat{\textbf{x}}^{(p)}_k, \hat{\textbf{u}}_k^{(p)}, \tau_k) \leq \textbf{h}^{U,(p)}, \quad k = 1, \ldots,N, \quad p = 1,\ldots P\\
&& &\Phi^{L,(p,q)} \leq \Phi^{(p,q)}(\hat{\textbf{x}}_0^{(p)},\hat{\textbf{u}}^{(p)}_0, \hat{\textbf{x}}_N^{(q)}, \hat{\textbf{u}}_N^{(q)}) \leq \Phi^{U,(p,q)}, \quad p,q = 1,\ldots,P
\end{aligned}
\end{equation}
%
To perform the discretization described in this subsection, we use the open-source C++ PSOC package $\mathcal{PSOPT}$ \cite{becerra2010solving}.
Next we show that Eq.\ \eqref{eq:dmpOCP} can be expressed in the typical NLP form \cite{nocedal2006numerical}.
Let $\textbf{z}^{(p)}$ contain all of the variables for phase $p$.
%
\begin{equation}
\textbf{z}^{(p)} = \left[ \begin{array}{c}
\hat{\textbf{x}}_0^{(p)} \\ \vdots \\ \hat{\textbf{x}}_N^{(p)} \\ \hat{\textbf{u}}_0^{(p)} \\ \vdots \\ \hat{\textbf{u}}_N^{(p)}
\end{array} \right] \in \mathbb{R}^{(N+1)(n+m)}
\end{equation}
%
Next, let $\textbf{z}$ contain the variables for every phase,
%
\begin{equation}
\textbf{z} = \left[ \begin{array}{c}
\textbf{z}^{(1)} \\ \vdots \\ \textbf{z}^{(P)}
\end{array} \right] \in \mathbb{R}^{P(N+1)(n+m)}
\end{equation}
%
With some algebraic manipulation, the entire discretized multi-phase OCP can be rewritten as an NLP in the typical form.
%
\begin{equation}\label{eq:NLPs}
\begin{aligned}
\min_{\textbf{z}} && &c(\textbf{z})\\
\text{s.t.} && &\textbf{g}(\textbf{z}) = \boldsymbol{0}\\
&& &\textbf{d}(\textbf{z}) \leq \boldsymbol{0}
\end{aligned}
\end{equation}
%
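Ipopt itself is beyond the scope of a short example, but the typical form of Eq.\ \eqref{eq:NLPs} can be illustrated on a toy problem with SciPy's interior-point-style solver (this sketches the problem format only, not our actual solver or problem data):

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Toy NLP in the typical form: min c(z) s.t. g(z) = 0, d(z) <= 0.
c = lambda z: (z[0] - 1.0) ** 2 + (z[1] - 2.0) ** 2
g = NonlinearConstraint(lambda z: z[0] + z[1] - 2.0, 0.0, 0.0)  # equality
d = NonlinearConstraint(lambda z: z[0] - z[1], -np.inf, 0.0)    # inequality

res = minimize(c, x0=np.zeros(2), method="trust-constr", constraints=[g, d])

# The constrained minimizer is (0.5, 1.5): it satisfies z0 + z1 = 2 and z0 <= z1.
assert res.success
assert np.allclose(res.x, [0.5, 1.5], atol=1e-4)
```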
To solve the large-scale NLP in Eq.\ \eqref{eq:NLPs} we employ an interior-point algorithm \cite{nocedal2006numerical}.
Specific details of the algorithm are outside the scope of this paper. We used the open-source C++ package Ipopt \cite{wachter2006implementation} to solve each instance of Eq.\ \eqref{eq:NLPs}.
We direct interested readers who would like to learn more about the technical details involved in solving Eq.\ \eqref{eq:NLPs} to the documentation provided with Ipopt.
%
The optimal solution returned, $\textbf{z}^\ast$, is separated into its component parts: first by splitting it into the phases $\textbf{z}^{(p)\ast}$, and second by reconstructing the discrete states and control inputs, $\hat{\textbf{x}}_i^\ast$ and $\hat{\textbf{u}}_i^\ast$.
The continuous time control inputs and states are then reconstructed using the Lagrange interpolating polynomials in Eq.\ \eqref{eq:dxu}.
With the continuous time states and control inputs, $\textbf{x}^\ast(t)$ and $\textbf{u}^\ast(t)$, we then verify that the necessary conditions are met to within an acceptable tolerance.
\clearpage
\section*{\LARGE Supplementary Note}
\section*{The Response of AVs to Constant Perturbation by Dual Therapies}\label{response_si_2drugs}
Before solving the optimal control problem presented in the main text, we explore the ability of the dual therapies to upregulate and downregulate the AVs with constant drug concentrations, as we did in Fig.\ 3 of the main manuscript.
There, we plotted the long-time response of the system to an individual time-constant drug concentration ($w$) perturbation for the two parameter sets $C_{\text{Nu}} = C_{\text{En}} = 0.1$ and $C_{\text{Nu}} = C_{\text{En}} = 0.6$.
Similarly, in Fig.\ \ref{fig:constinput_2drugs_0101} and \ref{fig:constinput_2drugs_0606}, we plot the long-time system AV response for the case of dual therapies with time-constant drug concentration perturbations.
%
In Fig.\ \ref{fig:constinput_2drugs_0101}, we set the parameters $C_{\text{Nu}} = C_{\text{En}} = 0.1$.
For these parameter values, in the absence of any drugs (control inputs), the sole attractor of the dynamical system corresponds to a high AV count ($\approx 37$).
Fig.\ \ref{fig:constinput_2drugs_0101} shows the long-time AV response when the system is perturbed by different combinations of constant inputs.
Note that those subsets that contain either drug $2$ or $6$ are capable of driving the AVs to zero if $w^{\max}$ is made large enough (pairs $\{2,3\}$, $\{2,4\}$, $\{2,6\}$, $\{3,6\}$, $\{4,6\}$, and $\{1,6\}$).
For each pair $\{i,j\}$, we set $w_i = w_j$ and all other values $w_k = 0$, $k \neq i$ and $k \neq j$.
The pair $\{3,4\}$, on the other hand, is only capable of driving the AVs to $\approx 10$, beyond which any further increase of $w^{\max}$ produces no additional reduction.
Also, dual therapy $\{1,5\}$ is incapable of downregulating the AVs.
%
In Fig.\ \ref{fig:constinput_2drugs_0606}, we set the parameters $C_{\text{Nu}} = C_{\text{En}}=0.6$, for which the free evolution of the system is periodic (see Fig. 2 in the main text), and show the same long-time AV response results under constant drug concentration perturbation.
For all dual therapies shown, small drug concentrations are unable to remove the oscillations present (denoted by the shaded regions).
Similar to Fig. \ref{fig:constinput_2drugs_0101}, we see that all drug combinations that contain either drug $2$ or $6$ are capable of driving the level of AVs to zero for $w^{\max}$ set large enough.
Also, dual therapy $\{3,4\}$, as before, is only able to reduce the AVs level to $\approx 10$ while the dual therapy $\{1,5\}$ instead upregulates the AVs.
\section*{Exhaustive Analysis of Two-Drug Combinations}\label{sec:twodrug}
In this section, we present simulation results for all possible dual therapies.
First, we set both the parameters $C_\text{Nu} = C_\text{En} = 0.1$ for which the number of AVs at steady state in the absence of control inputs is equal to $\approx 37$.
We attempt to downregulate the number of AVs using pairs of drugs from the set $\{2, 3, 4, 6\}$ so that there are a total of $\binom{4}{2}= 6$ combinations.
A pair of drugs drawn from this set is called a dual therapy.
If $\{i,j\}$ is a dual therapy, then we say $\{i\}$ and $\{j\}$ are its component monotherapies.
%
The goal is to investigate our ability to downregulate the number of AVs from the steady state value $\approx 37$ to a lower value in a specified control time interval $[0,t_0]$ and, subsequently, to maintain the number of AVs near the target level for a second time interval $[t_0,t_f]$, using each different dual therapy.
We say a dual therapy is \emph{viable} if it is capable of achieving this goal.
A dual therapy is deemed \emph{efficient} if:
%
\begin{itemize}
\item the dual therapy is viable while at least one of its component monotherapies is not, and
\item the total amount of drugs provided by the dual therapy is less than that provided by either of its component monotherapies.
\end{itemize}
%
To compare the efficiencies of the dual therapies, we define $r^\ast_{i,k}(t) = \int_0^t u_i^*(\tau) d\tau$ as the total amount of drug $i$ administered up to time $t$ as part of a dual therapy ($k = \text{dual}$) or a monotherapy ($k = \text{mono}$), and introduce the quantities $\rho_i$ and $\tau_i$.
%
\begin{equation}\label{eq:ratio}
0 \leq \rho_{i} = \frac{r^\ast_{i,\text{dual}}(t_f)}{r^\ast_{i,\text{mono}} (t_f) }\leq 1,
\end{equation}
%
Note that $r^\ast_{i,\text{dual}}(t_f) \leq r^\ast_{i,\text{mono}} (t_f)$, as otherwise the solution of the dual therapy optimal control problem would be suboptimal with respect to the case that only drug $i$ is used.
We also define the ratio
%
\begin{equation}\label{eq:activation}
\tau_{i} = \frac{\bar{t}_{i,\text{dual}}-\bar{t}_{i,\text{mono}}}{\bar{t}_{i,\text{mono}} }
\end{equation}
%
where $\bar{t}_{i,\text{dual}}$ is the time when drug $i$ is activated (that is, the earliest time at which the drug injection rate is nonzero) as a part of a dual therapy and $\bar{t}_{i,\text{mono}}$ is the time when drug $i$ is activated as a monotherapy.
Note that $\tau_i > 0$ ($\tau_i < 0$) indicates a later (earlier) activation time of drug $i$ as a part of a dual therapy compared to as a monotherapy.
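Given the time traces of the injection rates, both quantities are straightforward to evaluate numerically. A sketch with hypothetical piecewise-constant profiles (the values are illustrative, not solver output):

```python
import numpy as np

def total_drug(u, t):
    """Trapezoidal approximation of r*(t_f) = integral of u over [0, t_f]."""
    return np.sum(0.5 * (u[1:] + u[:-1]) * np.diff(t))

# Hypothetical injection-rate profiles for drug i as a monotherapy and
# as part of a dual therapy (illustrative values only).
t = np.linspace(0.0, 240.0, 241)
u_mono = np.where((t > 10) & (t < 110), 1.5, 0.0)
u_dual = np.where((t > 30) & (t < 90), 1.0, 0.0)

# Consumption ratio rho_i in [0, 1].
rho = total_drug(u_dual, t) / total_drug(u_mono, t)
assert 0.0 <= rho <= 1.0

# Activation-time ratio tau_i: positive means later activation in the dual therapy.
t_mono = t[np.argmax(u_mono > 0.0)]
t_dual = t[np.argmax(u_dual > 0.0)]
tau_i = (t_dual - t_mono) / t_mono
assert tau_i > 0.0
```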
For our simulations, we set the upper bound of the drug concentrations to $w_i^{\max} = 2$ for each drug $i$, the time at which we apply the upper bound to the AVs to $t_0=120$ minutes, the time at which we end the simulation to $t_f=240$ minutes, and we set the initial condition $\textbf{x}(0)$ to be equal to the steady state solution of the system in the absence of control inputs with parameters $C_{\text{En}} = C_{\text{Nu}} = 0.1$.
%
In Fig.\ \ref{fig:Nu01En01}, we plot the total drug administered $r_i(t) = \int_0^t u_i(\tau) d\tau$ in the interval $[0, t_f]$.
The plots on the diagonal panels, labeled $(u_i , u_i)$, correspond to the monotherapies and the plots on the upper triangular panels, labeled $(u_i,u_j)$, correspond to the dual therapies.
Symmetric to each upper triangular panel $(u_i , u_j)$, the corresponding lower triangular panel $(u_j,u_i)$ contains the values of the ratios $\rho_i$ and $\tau_i$ in Eqs.\ \eqref{eq:ratio} and \eqref{eq:activation}, respectively.
We notice from Fig.\ 3\emph{A} in the main text that the only monotherapy which can downregulate the number of AVs from $\approx 37$ to $\approx 10$, with $w_i \leq 2$, is $\{4\}$.
Thus, the red crosses in panels $(u_2,u_2)$, $(u_3,u_3)$ and $(u_6,u_6)$ in Fig.\ \ref{fig:Nu01En01} indicate that those monotherapies cannot solve the downregulation problem.
Clearly, dual therapies $\{2,4\}$, $\{3,4\}$ and $\{4,6\}$ are viable, as drug $4$ as a monotherapy is viable.
On the other hand, the dual therapies $\{2,3\}$ and $\{3,6\}$ are not viable.
The most interesting dual therapy is $\{2,6\}$: neither component monotherapy is viable, yet as a pair they are viable.
Thus, by our stated definitions, the dual therapy $\{2,6\}$ is efficient.
Also, dual therapy $\{3, 4\}$ is deemed efficient, as the total consumption of drug $4$ is much lower ($\rho_4=0.29$) than when drug $4$ is used as a monotherapy, as shown in panel $(u_3,u_4)$ in Fig.\ \ref{fig:Nu01En01}.
We also observe that drug $4$ is activated later as a part of the $\{3,4\}$ dual therapy than as a monotherapy, since $\tau_4 = 0.32 > 0$.
In Fig.\ \ref{fig:Nu01En01_other}, we consider the dual therapies formed by combining one of the downregulating drugs, 2, 3, 4 or 6, with one of the upregulating drugs, 1 or 5.
A red cross in a panel again represents a monotherapy or a dual therapy that is not viable.
While the dual therapies $\{1,4\}$ and $\{4,5\}$ are viable, they are not efficient, as neither drug $1$ nor drug $5$ is ever activated (their inputs remain zero).
%
In Fig.\ \ref{fig:Nu06En06_down}, we present detailed results when we set the parameters $C_\text{En}=C_\text{Nu} = 0.6$, for which the dynamics in the absence of control inputs is oscillatory.
In our numerical experiments, we attempt to downregulate the number of AVs from its initial periodic behavior to $x_5(t_0) \approx 10$ and to maintain the number of AVs near that value for the time interval $[t_0=120,t_f=240]$.
The red cross in panel $(u_6,u_6)$ indicates the inability of drug $6$ as a monotherapy to downregulate the AVs to the desired level.
However, we found this drug to be particularly beneficial when used as a component in a dual therapy.
We find that while all dual therapies are viable, the most efficient dual therapy is $\{2,6\}$, as the total amount of drug $2$ required is reduced more than fivefold compared to the monotherapy $\{2\}$.
A comparison with drug $6$ alone is not possible as drug $6$ as a monotherapy is not viable.
The dual therapy $\{3,6\}$ is also efficient by our definition, but only marginally, as the amount of drug $3$ used is hardly reduced ($\rho_3=0.96$).
For all other dual therapies, one of the component drugs is never activated, so while they may be viable, we do not consider them efficient.
%
In Fig. \ref{fig:Nu06En06_up}, we summarize the results when we attempt to upregulate the number of AVs to $\approx 37$ in the same control time interval $[0,t_0]$ and, subsequently, maintain the number of AVs throughout the time interval $[t_0, t_f]$ by using dual therapy $\{1,5\}$.
We observe that, while the dual therapy $\{1,5\}$ is viable, it is not efficient, as drug $1$ is never activated and so we must use the same amount of drug $5$ as when it is used as a monotherapy.
In Fig.\ \ref{fig:Nu06En06_other}, we consider the dual therapies formed by combining one of the downregulating drugs, 2, 3, 4 or 6, with one of the upregulating drugs, 1 or 5.
We observe that the dual therapies $\{1,6\}$ and $\{5,6\}$ are only efficient when $C_{\text{En}} = C_{\text{Nu}} = 0.6$.
The other dual therapies, while viable, are not efficient, as the upregulating component (either $1$ or $5$) is never activated (its input remains zero).
%
%
\clearpage
\renewcommand{\refname}{References Cited in Supplementary Information}
\section*{Supplementary Tables}
\renewcommand{\tablename}{Supplementary Table}
\renewcommand{\thetable}{S\arabic{table}}
\clearpage
\label{table1}
\begin{table}[tbhp!]
\centering
\caption{Parameters of the model (Eq.\ (1)). See ``Formulation of the Model'' in Supplementary Methods for discussion. The parameter values are dimensionless except as indicated.}
{\setlength\doublerulesep{0.5pt}
\begin{tabular}{rlcrl}
\toprule[1 pt] \midrule
Parameter & Value & \quad & Parameter & Value \\
\midrule \midrule
$r_{b,12}$ & 0 & & $k_1$ & $1.00\times 10^{-1}$ \\
$r_{m,12}$ & $1.00\times 10^1$ && $k_2$ & $3.00\times 10^{-1}$ \\
$\theta_{12}$ & $3.00\times 10^{-1}$ && $k_3$ & $4.00\times 10^0$\\
$n_{12}$ & $4.00\times 10^0$ && $k_4$ & $1.00\times 10^{-1}$ \\
$r_{b,13}$ & 0 && $\delta_1$ & $3.10\times 10^{-4}$ \\
$r_{m,13}$ & $1.00\times 10^1$ && $\delta_2$ & $1.93\times 10^{-3}$ \\
$\theta_{13}$ & $6.00\times 10^{-1}$ && $\delta_3$ & $5.78 \times 10^{-3}$ \\
$n_{13}$ & $6.00\times 10^0$ && $\delta_4$ & $1.15 \times 10^{-2}$ \\
$r_{b,23}$ & 0 && $\delta_5$ & $2.31 \times 10^{-3}$ \\
$r_{m,23}$ & $6.00\times 10^0$& & $\delta_6$ & $1.16 \times 10^{-3}$ \\
$\theta_{23}$ & $1.00\times 10^0$ && $r_{b}$ & 0 \\
$n_{23}$ & $4.00\times 10^0$ && $r_{m}$ & $1.00\times 10^0$ \\
$r_{b,21}$ & $1.00\times 10^{-1}$ && $\theta$ & $5.00\times 10^{-1}$ \\
$r_{m,21}$ & $6.00\times 10^0$ && $n$ & $2.00\times 10^0$ \\
$\theta_{21}$ & $6.00\times 10^{-1}$ && $T$ & $1.00\times 10^0$ (min)\\
$n_{21}$ & $4.00\times 10^0$ && & \\
$r_{b,42}$ & $1.00\times 10^{-1}$ && & \\
$r_{m,42}$ & $6.00\times 10^0$ && & \\
$\theta_{42}$ & $5.00\times 10^{-1}$ && & \\
$n_{42}$ & $4.00\times 10^0$ && & \\
\midrule
\bottomrule[1pt]
\end{tabular}
}
\label{tab:param}
\end{table}
\clearpage
\begin{table}[tbhp!]
\centering
\caption{Summary of measured drug half-lives used to set values for the drug clearance rate constants $\delta_1,\ldots,\delta_6$ in Eq.\ (1). Each half-life, $t_{1/2,i}$, is the measured half-life of a representative of drug type $i$. See the references cited in the table for details about the drugs and measurements. }
{\setlength\doublerulesep{0.5pt}
\begin{tabular}{cccccccl}
\toprule[1 pt] \midrule
Drug $i$ & Half-life $t_{1/2,i}$ & Value ($\mathrm{h}^{-1}$) & \quad & Rate constant $\delta_i$ & Value ($\mathrm{min}^{-1}$) & \quad & Reference\\
\midrule \midrule
1 & $t_{1/2,1}$ & $\sim 37$ && $\delta_1$ & $3.10\times 10^{-4}$ && Sato et al.\cite{sato2006temporal} \\
2 & $t_{1/2,2}$ & $\sim 6$ && $\delta_2$ & $1.93\times 10^{-3}$ && Baselga et al.\cite{baselga2017buparlisib} \\
3 & $t_{1/2,3}$ & $\sim 2$ && $\delta_3$ & $5.78 \times 10^{-3}$ && Milkiewicz et al.\cite{milkiewicz2011improvement} \\
4 & $t_{1/2,4}$ & $\sim 1$ && $\delta_4$ & $1.15 \times 10^{-2}$ && Engers et al.\cite{engers2013synthesis} \\
5 & $t_{1/2,5}$ & $\sim 5$ && $\delta_5$ & $2.31 \times 10^{-3}$ && Cameron et al.\cite{cameron2016discovery} \\
6 & $t_{1/2,6}$ & $\sim 10$& & $\delta_6$ & $1.16 \times 10^{-3}$ && Juric et al.\cite{juric2017first} \\
\midrule
\bottomrule[1pt]
\end{tabular}
}
\label{tab:param2}
\end{table}
\clearpage
\section*{Supplementary Figures}
\renewcommand{\figurename}{Supplementary Fig.}
\renewcommand{\thefigure}{S\arabic{figure}}
\clearpage
\begin{figure}[tbhp!]
\centering
\includegraphics[width=\textwidth]{FigS1.pdf}
\caption{Comparison of simulations based on Eq.\ (1) and simulations based on models of Szyma{\'n}ska et al.\cite{szymanska2015computational} (Ref. 33 in the main text) and Martin et al.\cite{martin2013computational} (Ref. 34 in the main text). (\emph{A}) AV dynamics, $x_5(t)$, predicted by Eq.\ (1). The value of $x_5$ is initially steady and low; the system is perturbed by two additions of rapamycin at time $t=100$ and 200 min, as indicated. (\emph{B}) Dynamics of ULK1 activity, $x_2(t)$, predicted by Eq.\ (1). The conditions considered are the same as those in panel \emph{A}. (\emph{C}) Dynamics of ULK1 activity predicted by the model of Szyma{\'n}ska et al.\cite{szymanska2015computational}. The conditions considered here correspond qualitatively to those considered in panels \emph{A} and \emph{B}. Initially, there is no rapamycin. Later, a low dose of rapamycin is added. Still later, a high dose of rapamycin is added. Note that the models of Eq.\ (1) and Szyma{\'n}ska et al.\cite{szymanska2015computational} have different timescales. This situation is partly a consequence of requiring Eq.\ (1) to reproduce the AV dynamics measured by Martin et al.\cite{martin2013computational}. Szyma{\'n}ska et al.\cite{szymanska2015computational} showed that the qualitative pattern of behavior illustrated here is a robust feature of known regulatory interactions among AMPK, MTORC1, and ULK1 (i.e., the pattern of behavior is insensitive to parameter variations). Furthermore, it should be noted that the model of Szyma{\'n}ska et al.\cite{szymanska2015computational} does not track AVs. Thus, there is no direct comparison to be made with the time course shown in panel \emph{A}. (\emph{D}) AV dynamics predicted by Eq.\ (1). AV production is stimulated by the addition of rapamycin at the (dimensionless) doses indicated in the legend. (\emph{E}) AV dynamics predicted by the model of Martin et al.\cite{martin2013computational}. 
As in panel \emph{D}, autophagy is induced by the addition of rapamycin at different doses, as indicated in the legend. For further discussion, see ``Formulation of the Model'' in Supplementary Methods.}
\label{fig:model_vs_model_comparison}
\end{figure}
\clearpage
\begin{figure}[tbhp!]
\centering
\includegraphics[width=10cm]{FigS2.pdf}
\caption{Comparison of simulations based on Eq.\ (1) and data generated by Martin \textit{et al}.\cite{martin2013computational} (Ref. 34 in the main text). We parameterized the model of Eq.\ (1) to roughly reproduce autophagic vesicle (AV) population dynamics reported by Martin et al.\cite{martin2013computational}. Our goal was not to reproduce the observed dynamics exactly but rather to select parameters that yield induction dynamics on a comparable timescale and a comparable maximal range of regulation. The measured dynamics were induced by inhibition of MTORC1 using AZD8055, a catalytic MTOR inhibitor. Dynamics were similar when autophagy was induced using rapamycin\cite{martin2013computational}. The curve corresponds to a simulation based on Eq.\ (1). Each dot corresponds to the average of AV counts measured in a series of fluorescence microscopy experiments\cite{martin2013computational}. The data shown here are taken from Figure 6B in Martin et al.\cite{martin2013computational}. For further discussion, see ``Formulation of the Model'' in Supplementary Methods.}
\label{fig:model_vs_data_comparison}
\end{figure}
\clearpage
\begin{figure}[tbhp]
\centering
\includegraphics[width = 8.7 cm]{FigS3.pdf}
\caption{
The long-time response of the system to dual therapies with time-constant drug concentration perturbations, for the parameters $C_{\text{Nu}} = C_{\text{En}} = 0.1$.
Note that when $w$ is small, the system is oscillatory (represented by the shaded regions in the panels).
For each pair of drugs, there is some value of $w$ required to overcome the natural oscillatory behavior of the system.
}
\label{fig:constinput_2drugs_0101}
\end{figure}
\clearpage
\begin{figure}[tbhp]
\centering
\includegraphics[width = 8.7cm]{FigS4.pdf}
\caption{
The long-time response of the system to dual therapies with time-constant drug concentration perturbations, for the parameters $C_{\text{Nu}} = C_{\text{En}} = 0.6$.
}
\label{fig:constinput_2drugs_0606}
\end{figure}
\clearpage
\begin{figure}[t]
\centering
\includegraphics[width=17.8cm]{FigS5.pdf}
\caption{
The parameter set $C_{\text{Nu}} = C_{\text{En}} = 0.1$.
The target level of AVs is set to $x_5^f = 10$ and the maximum drug concentration is set to $w_i^{\max} = 2$.
The diagonal panels represent monotherapies while off-diagonal panels represent dual therapies.
Super-diagonal panels plot the total drug administered and sub-diagonal panels show the efficiency ratios, described in the text, of the dual therapies.
Those diagonal panels with a red cross correspond to those monotherapies which are not viable.
The only viable monotherapy is $\{4\}$, which is shown with a green background.
The off-diagonal panel with a red background indicates that dual therapy $\{2,4\}$ is viable but not efficient, as drug $2$ is never activated.
The other three dual therapies, $\{2,6\}$, $\{3,4\}$, and $\{4,6\}$, are both viable and efficient, shown with a blue background.
}
\label{fig:Nu01En01}
\end{figure}
\clearpage
\begin{figure}[t]
\centering
\includegraphics[width=17.8cm]{FigS6.pdf}
\caption{
The parameter set $C_{\text{Nu}} = C_{\text{En}}=0.1 $.
The target level of the AVs is set to $x_5^f = 10$ and the maximum drug concentration is set to $w_i^{\max} = 2$.
Here we consider those dual therapies which combine one downregulating drug ($2$, $3$, $4$, or $6$) with one of the upregulating drugs ($1$ or $5$).
Most of the dual therapies are not viable, which is represented with a red cross.
The two viable dual therapies, $\{1,4\}$ and $\{4,5\}$, are not efficient and so they are shown with a red background.
}
\label{fig:Nu01En01_other}
\end{figure}
\clearpage
\begin{figure}[t]
\centering
\includegraphics[width=17.8cm]{FigS7.pdf}
\caption{
The parameter set $C_{\text{Nu}} = C_{\text{En}}=0.6$.
The target level of the AVs is set to $x_5^f = 10$ and the maximum drug concentration is set to $w_i^{\max} = 2$.
The diagonal panels $(u_i,u_i)$ (with a green background) show the total drug administered for monotherapies.
The red cross on the diagonal panel corresponding to monotherapy $\{6\}$ represents the fact that $\{6\}$ is not viable.
The upper triangular panels $(u_i,u_j)$, $i < j$, show the total drug administered for dual therapies.
In the lower triangular panels $(u_j,u_i)$, $i < j$, we compare the dual therapies to their component monotherapies with the efficiency parameters $\tau$ and $\rho$.
A red background in an off-diagonal panel represents those dual therapies which are viable but not efficient with respect to their component monotherapies.
A blue background represents those dual therapies which are both viable and efficient.
}
\label{fig:Nu06En06_down}
\end{figure}
\clearpage
\begin{figure}[tbhp!]
\centering
\includegraphics[scale = 1]{FigS8.pdf}
\caption{
The parameter set $C_{\text{Nu}} = C_{\text{En}}=0.6 $.
The target level of the AVs is set to $x_5^f = 10$ and the maximum drug concentration is set to $w_i^{\max} = 2$.
The red crosses on the diagonal panels represent the fact that the monotherapies $\{1\}$ and $\{6\}$ are not viable.
On the other hand, the dual therapy $\{1,6\}$ is both viable and efficient.
Viable dual therapies composed of two monotherapies which are not viable alone are the type we find most interesting, as they cannot be discovered by analyzing the monotherapies alone.
In the lower triangular panel we compare the dual therapy to its component monotherapies with respect to the efficiency ratios $\rho$ and $\tau$.
}
\label{fig:Nu06En06_down_2}
\end{figure}
\clearpage
\begin{figure}[tbhp!]
\centering
\includegraphics[scale = 1]{FigS9.pdf}
\caption{
The parameter set $C_{\text{Nu}} = C_{\text{En}}=0.6 $.
The target level of the AVs is set to $x_5^f = 10$ and the maximum drug concentration is set to $w_i^{\max} = 2$.
The diagonal panels represent the monotherapies $\{1\}$ and $\{5\}$.
A red cross on the diagonal panel for monotherapy $\{1\}$ represents the fact that $\{1\}$ is not viable.
On the other hand, monotherapy $\{5\}$ is viable (shown with a green background).
The dual therapy $\{1,5\}$ is viable (total drug administered is shown with the red background in the upper triangular panel) but is not efficient.
The inefficiency is shown in the lower triangular panel with the efficiency ratios $\rho_5 = 1$.
}
\label{fig:Nu06En06_up}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=17.8cm]{FigS10.pdf}
\caption{
The parameter set $C_{\text{Nu}} = C_{\text{En}}=0.6$.
The target level of AVs is set to $x_5^f = 10$ and the maximum drug concentration is set to $w_i^{\max} = 2$.
Here we consider those dual therapies composed of one downregulating drug ($2$, $3$, $4$, or $6$) and one upregulating drug ($1$ or $5$).
Those panels with a red background represent dual therapies which are viable but not efficient, while the two dual therapies $\{1,6\}$ and $\{5,6\}$ are efficient.
In fact, as seen before, neither the component monotherapy $\{6\}$ nor the upregulating drugs are viable for this parameter set, so these efficient dual therapies are particularly interesting as they could not be found when analyzing the monotherapies alone.
}
\label{fig:Nu06En06_other}
\end{figure}
\clearpage
\begin{figure}[tbhp!]
\centering
\includegraphics[width = 8.7cm]{FigS11.pdf}
\caption{
a) The optimal time evolution of the amount of AVs.
b) The optimal time evolution of the drug concentration $w_4(t)$.
c) The time evolution of the path covector $\mu_{x_5}$ associated with the upper bound applied to $x_5(t)$.
d) The time evolution of the path covector $\mu_{w_4}$ associated with the state $w_4(t)$.
e) The optimal time evolution of the drug $u_4(t)$.
f) The costate $\lambda_{w_4}(t)$ associated with the state $w_4(t)$.
g) The time evolution of the lower Hamiltonian $\mathcal{H}$.
h) The relative local discretization error at each time $t$.}
\label{fig:VV}
\end{figure}
\bibliographystyle{naturemag-doi}
\section{Introduction}
Spintronics, aiming at utilizing the spin degree of freedom instead of or in addition to the charge one of an electron, has attracted much attention recently.
Creation, annihilation, and control of the spin current--a flow of the spin degree of freedom--have been the main topics in this research field.
Spin currents can be generated electromagnetically~\cite{Silsbee1979,Mizukami2002,Tserkovnyak2002,Kato2004,Wunderlich2005}, optically,~\cite{Prins1995} and thermally~\cite{Uchida2008,Slachter2010,Cornelissen2015}, and their propagation spans the whole momentum ($\vec{Q}$) space.
However, the detection has been limited to the long-wavelength limit ($Q=0$) by voltage measurement through the inverse spin Hall effect~\cite{Azevedo2005,Saitoh2006,Valenzuela2006,Kimura2007}.
The measured voltage is the macroscopic sum of the induced spin currents; hence, only the relative intensity and overall propagation direction can be discriminated.
To gain a microscopic view of the spin current, including its diffusion length, lifetime, and quantitative signal strength under thermal activation, information at the characteristic momentum/energy points is needed.
In this review, we address the microscopic picture of the spin current through $(\vec{Q},E)$-resolved information obtained via neutron scattering techniques, using the quintessential ferrimagnet yttrium iron garnet.
Yttrium iron garnet (YIG) with the chemical composition of Y$_3$Fe$_5$O$_{12}$ is an insulator with a complex structure.
It is an essential material for microwave and optical technologies~\cite{Wu2013} and also for basic research in spintronics, magnonics, and quantum information~\cite{Tabuchi2015}.
This is owing to the highest quality magnetization dynamics among the known magnets, yielding long magnon lifetimes~\cite{Chang2014}.
The unit cell in YIG contains Fe$^{3+}$ local moments with spin $S=5/2$ in tetrahedral ($24d$ Wyckoff position) and octahedral ($16a$ Wyckoff position) oxygen ligands with opposite spin projections in a ratio of 3:2, giving a net magnetization.
There are two major magnon dispersions, acoustic and optical modes.
The gap separating the optical and acoustic modes is on the order of the thermal energy at room temperature.
At low temperatures, YIG behaves like a simple ferromagnet with an approximately quadratic magnon dispersion.
At higher temperatures, non-parabolicities become apparent, and optical modes start to become occupied.
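The thermal-occupation argument can be made quantitative with the Bose factor; the sketch below is illustrative only, with an assumed optical-mode gap of 30~meV chosen to be of the order of the room-temperature thermal energy (the actual gap is discussed later in the text).

```python
import math

# Bose occupation n(E) = 1 / (exp(E / kT) - 1) of an optical magnon mode.
# The 30 meV gap is an assumed, illustrative value only.
k_B = 8.617333e-2          # Boltzmann constant in meV/K
E_gap = 30.0               # assumed optical-mode gap (meV)
T = 300.0                  # room temperature (K)

n = 1.0 / (math.exp(E_gap / (k_B * T)) - 1.0)
print(f"n(30 meV, 300 K) = {n:.3f}")   # ~0.46 magnons per mode
```

An occupation of order one half shows why optical modes become appreciably populated at room temperature, and hence why their thermal excitation can limit the induced spin current.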
Neutron scattering is the unrivaled method of choice to measure the magnon dispersion across large areas of reciprocal space, enabling the magnon dispersion in magnets to be unambiguously determined.
Here, we review the crystal/magnetic structure and magnon dispersion relations of YIG in a wide $(\vec{Q},E)$-regime and the mode-resolved direction of the precessional motion of the magnetic moments, i.e., magnon polarization.
Results are mainly based on both unpolarized~\cite{Shamoto2018,Shamoto2020} and polarized~\cite{Nambu2020} neutron scattering experiments.
We find negatively polarized optical modes over the exchange gap, as well as the positively polarized acoustic mode, confirming the ferrimagnetic character in YIG, and that thermal excitation of the optical mode will limit the amplitude of the induced spin current~\cite{Nambu2020}.
This review is structured as follows.
We first discuss the detailed crystal and magnetic structure of YIG.
We then show magnetic excitations in YIG ranging from high (100~meV) to ultralow (10~$\mu$eV) energy with macroscopic magnetization and specific heat results.
Next, we move on to polarized neutron scattering results, starting with a review of the magnon polarization as the spin current carrier.
Cross sections as a function of the direction of neutron polarization are then explained, which are needed to follow the results of the polarized neutron scattering.
Finally, the mode-resolved magnon polarization is explicitly determined through the chiral term detection.
The authors wrote Section 5 together and separately wrote other Sections: Y.~N. is responsible for Sects. 1, 4, and 6, and S.~S. is responsible for Sects. 2 and 3.
\section{Crystal and Magnetic Structure}
Recent detailed studies of the longitudinal spin Seebeck effect (LSSE) on YIG have revealed the importance of the basic parameters of YIG under magnetic fields~\cite{Kikkawa2015,Kikkawa2016}.
Here, the crystal and magnetic structure is discussed in detail.
In a sub-unit cell of this ferrimagnet with five Fe spins, as shown in Fig.~\ref{fig:1}, there are three up spins and two down spins, corresponding to the three positive and two negative polarization modes, respectively.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\linewidth,clip]{82303Fig1.pdf}
\end{center}
\caption{(Color online) Fe spins in a sub-unit cell that corresponds to $1/8$ of a cubic unit cell ($Ia\bar{3}d$) with 40 Fe spins. Blue and brown arrows are spins at 16$a$ (octahedral) and 24$d$ (tetrahedral) sites for $Ia\bar{3}d$, respectively. Three nearest-neighbor-exchange integrals between 16$a$ and 24$d$ sites, $J_{aa}$, $J_{ad}$, and $J_{dd}$, are shown by red, blue, and orange lines, respectively. Reprinted with permission from Shamoto {\it et al}.~\cite{Shamoto2018}({\copyright}$\,$ 2018 The American Physical Society).}
\label{fig:1}
\end{figure}
A large number of basic properties of YIG have been historically reported~\cite{Cherepanov}.
The crystal and magnetic structures were studied by using a powder sample~\cite{Rodic}.
The crystal structure of YIG was distorted from cubic to trigonal symmetries under a magnetic field of 0.2~T~\cite{Rodic}.
For precise crystal and magnetic structure refinements, a magnetic field is required to remove the magnetic domain walls from a single crystal.
Such a measurement was carried out at about 295~K under a magnetic field ($B\approx 0.1$~T) along [111]$_{\rm cubic}$ with a pair of permanent magnets.
The magnetic field at the sample position was measured using a Hall effect sensor.
Single crystals were grown by a traveling solvent floating zone method~\cite{Kimura} with an image furnace with four halogen lamps (FZ-T-4000-H-II-S-TS, Crystal Systems Co., Ltd.).
This method allows us to grow impurity-free crystals. Under a magnetic field, the demagnetization effect can become large for a ferromagnet, depending on its shape. In our neutron scattering experiments, each crystal rod was oriented with its long axis along the magnetic field to reduce the demagnetization effect.
Nuclear and magnetic Bragg reflections were measured at BL18 SENJU~\cite{SENJU} of J-PARC MLF.
Their intensities were refined with a trigonal space group ($R\bar{3}$, No.~148: hexagonal setting)~\cite{Rodic} with lattice parameters $a=17.50227(55)$~\AA \ and $c=10.73395(29)$~\AA$\;$ by using {\sc FullProf Suite}~\cite{FullProf} and {\sc STARGazer} software~\cite{STARGazer}.
The structural refinement result is shown in Fig.~\ref{fig:2}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\linewidth,clip]{82303Fig2.pdf}
\end{center}
\caption{(Color online) Observed nuclear and magnetic Bragg peak intensity refined as a function of calculated intensity refined with the trigonal space group ($R\bar{3}$). Reprinted with permission from Shamoto {\it et al}.~\cite{Shamoto2018} ({\copyright}$\,$ 2018 The American Physical Society).}
\label{fig:2}
\end{figure}
All the magnetic moments align along [111]$_{\rm cubic}$ ([001]$_{\rm hexagonal}$) parallel to the applied magnetic field $B$.
The refined crystal and magnetic structure is shown in Fig.~\ref{fig:3}.
\begin{figure}[b]
\begin{center}
\includegraphics[width=0.8\linewidth,clip]{82303Fig3.pdf}
\end{center}
\caption{(Color online) Refined crystal and magnetic structure of YIG in the trigonal unit cell of $R\bar{3}$. Blue and brown arrows are iron spins for the octahedral and tetrahedral sites, respectively, small pink spheres are oxygen, and pale blue spheres are yttrium. The structure was drawn using {\sc VESTA} software~\cite{VESTA}. Reprinted with permission from Shamoto {\it et al}.~\cite{Shamoto2018} ({\copyright}$\,$ 2018 The American Physical Society).}
\label{fig:3}
\end{figure}
The refined crystallographic parameters with reliability factors $R_{F^2}=9.85$\% and $R_F=7.07$\% are listed in Table~\ref{table1}.
\begin{table*}[h]
\caption{Parameters of the crystal and magnetic structure of YIG at about 295~K under $B\approx 0.1$~T in the space group $R\bar{3}$. Errors are shown in parentheses by the corresponding digits. Note that the occupancies $g$ of Fe and O7$_{18f}$ (indicated by ``fix'') were fixed during the refinements.}
\label{table1}
\centering
\begin{tabularx}{\textwidth}{lRRRRRR}
\hline\hline
Atoms & \multicolumn{3}{c}{Fractional coordinates} & $B$~(\AA$^2$) & $g$ & $\mu_z$~(unit of ${\rm \mu_B}$) \\
& \multicolumn{1}{c}{$x$} & \multicolumn{1}{c}{$y$} & \multicolumn{1}{c}{$z$} & & & \\
\hline
Y1$_{18f}$ & 0.1255(3) & 0.0005(3) & 0.2497(2) & 0.245(53) & 0.943(22) & \\
Y2$_{18f}$ & 0.2911(3) & 0.3333(3) & 0.5834(2) & 0.245(53) & 0.948(22) & \\
Fe$_{3a}$ & 0 & 0 & 0 & 0.243(27) & 1.00(fix) & 3.50(17) \\
Fe$_{3b}$ & 0 & 0 & 0.5 & 0.243(27) & 1.00(fix) & 3.50(17) \\
Fe$_{9d}$ & 0 & 0.5 & 0.5 & 0.243(27) & 1.00(fix) & 3.50(17) \\
Fe$_{9e}$ & 0.5 & 0 & 0 & 0.243(27) & 1.00(fix) & 3.50(17) \\
Fe1$_{18f}$ & 0.2084(2) & 0.1672(2) & 0.4166(2) & 0.315(28) & 1.00(fix) & $-$3.37(17) \\
Fe2$_{18f}$ & 0.2912(2) & $-$0.1670(2) & 0.5832(2) & 0.315(28) & 1.00(fix) & $-$3.37(17) \\
O1$_{18f}$ & 0.0877(3) & 0.0920(4) & 0.1210(2) & 0.344(28) & 0.993(18) & \\
O2$_{18f}$ & 0.2622(4) & 0.1158(4) & 0.3230(3) & 0.344(28) & 0.941(17) & \\
O3$_{18f}$ & $-$0.4212(3) & $-$0.3721(4) & 0.5444(2) & 0.344(28) & 0.991(20) & \\
O4$_{18f}$ & 0.4867(4) & 0.0953(4) & 0.4188(3) & 0.344(28) & 0.940(17) & \\
O5$_{18f}$ & $-$0.0042(4) & $-$0.0904(4) & 0.3798(3) & 0.344(28) & 0.963(18) & \\
O6$_{18f}$ & 0.1453(4) & $-$0.1154(4) & 0.1773(3) & 0.344(28) & 0.940(18) & \\
O7$_{18f}$ & $-$0.0490(3) & 0.3717(4) & $-$0.0451(2) & 0.344(28) & 1.00(fix) & \\
O8$_{18f}$ & 0.3899(4) & $-$0.0967(4) & 0.0809(3) & 0.344(28) & 0.933(17) & \\
\hline\hline
\end{tabularx}
\end{table*}
They were almost consistent with the reported parameters~\cite{Rodic}.
The occupancy $g$ of O7$_{18f}$ was fixed because it refined to a value slightly larger than 1.00 within the error.
The chemical composition of the present YIG crystal was Y$_{2.84(9)}$Fe$_{5}$O$_{11.57(21)}$.
The deficiency of the Y$^{3+}$ ion was almost compensated by the oxygen deficiency for the Fe valence of $+3$ within the error.
The obtained magnetic moments, with ${\rm \mu_B}$ being the Bohr magneton, were $3.50\pm 0.17$ ${\rm \mu_B}$ and $3.37\pm0.17$ ${\rm \mu_B}$ at the octahedral and tetrahedral sites, respectively.
Although the obtained magnetic moments under $B\approx 0.1$~T are smaller than $4.47\pm 0.04$~${\rm \mu_B}$ and $4.02\pm 0.05$~${\rm \mu_B}$~\cite{Rodic}, the total magnetization of 3.1(6) ${\rm \mu_B}$/f.u. agrees with the magnetization of 3.05~${\rm \mu_B}$/f.u. under $B = 1$~T.
In the previous study~\cite{Rodic}, magnetic domain walls may have remained in the powder sample, resulting in these discrepancies.
The slightly larger trigonal lattice distortions observed here compared with the previous ones reduce the observed magnetic moments in the present analysis due to the overlapping of Bragg peaks.
This result suggests that it is important to determine the crystal structure of YIG together with the estimation of the magnetic moments under a magnetic field.
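The refined sublattice moments can be checked against the quoted bulk magnetization with simple bookkeeping over the 3:2 site ratio per formula unit; a minimal sketch with the values from the refinement above (only the magnitude is compared):

```python
# Net moment per formula unit Y3Fe5O12 from the refined sublattice moments:
# three tetrahedral (24d) Fe antiparallel to two octahedral (16a) Fe.
mu_oct = 3.50    # mu_B per Fe, 16a sites (2 per f.u.)
mu_tet = 3.37    # mu_B per Fe, 24d sites (3 per f.u.)

net = abs(3 * mu_tet - 2 * mu_oct)
print(f"|net moment| = {net:.2f} mu_B per f.u.")   # cf. 3.1(6) mu_B/f.u. in the text
```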
Regardless of the atomic distortions from cubic to trigonal symmetries, the following magnon dispersions are discussed in the cubic symmetry of $Ia\bar{3}d$ (No.~230) for simplicity.
\section{Inelastic $\bm{Unpolarized}$ Neutron Scattering}
\subsection{High-energy spin-wave excitations}
Detailed studies of LSSE on YIG have shown that it depends on the detailed structure of the magnon density of states (MDOS, ${\mathcal D}_M$)~\cite{Kikkawa2015,Kikkawa2016}.
The magnetic field produces a Zeeman energy gap in the ferromagnetic dispersion, resulting in the reduction of thermally excited spin current.
The theoretical model is based on a simple quadratic magnon dispersion of YIG measured by previous inelastic neutron scattering (INS) measurements~\cite{Plant1977,Plant2}.
The magnon dispersion relations of YIG were first studied up to 55~meV by INS measurements~\cite{Plant2}.
The exchange integrals of YIG have been estimated under this energy limitation~\cite{Plant1977,Plant2,Cherepanov,Serga}.
Owing to high-efficiency INS spectrometers at pulsed-neutron sources, it has become possible to access high energies above 55~meV even with a small crystal.
A recent detailed magnon study was still limited to about 80~meV~\cite{Princep2017a}.
A measurement covering the full range of the energy of interest of YIG was first carried out by Shamoto's group~\cite{Shamoto2018}.
The theory of YIG magnons for LSSE~\cite{Barker} emphasizes the importance of the mode mixing for the lowest-$E$ branches.
For an antiferromagnet, the lowest-$E$ dispersion is known to have doubly degenerate modes.
For this ferrimagnet YIG, however, the lowest-$E$ dispersion is theoretically predicted to have only a single mode of positive polarization~\cite{Barker}.
The mode number can be verified from the MDOS (${\mathcal D}_M$) if we measure it on an absolute scale. Because of the estimation difficulties, there has been no report of the absolute MDOS for YIG so far.
For the MDOS estimation, we introduce an approximated dynamical structure factor and effective reciprocal space volume to simplify the absolute estimation. In addition, we performed the numerical calculation of absorption coefficients.
The INS probability on a magnet can be expressed by Fermi's golden rule, including the MDOS of the final states.
This is the same as the phonon density of states for the phonon dispersions, which is often measured by INS.
By using this method, the MDOS of YIG was estimated from the observed scattering intensity.
For the inelastic magnetic scattering, the sum of the $q$-integrated scattering function $S(E)$ after energy integration is well known to be proportional to $S(S+1)$~\cite{Shamoto2018}.
Therefore, the $q$-integrated dynamical spin susceptibility $\chi^{\prime\prime}(E)$ normalized by $g^2{\rm \mu^2_B}S(S+1)$ gives the MDOS at $T=0$~K, where the energy integration becomes unity.
The simple quadratic dispersion model~\cite{Kikkawa2015} at low energies below 14~meV has been successfully used to estimate the MDOS, suggesting the validity of our estimation.
The observed lowest-$E$ dispersion was fitted by a simple quadratic dispersion with a stiffness constant $D$.
The fitted parameter $D$ can also be verified by the exchange integrals obtained from the whole magnon spectrum in our experiment.
Thus, the relation between the MDOS and $\chi^{\prime\prime}(E)$ is discussed quantitatively on the basis of the stiffness constant $D$.
Here, the Heisenberg Hamiltonian of the spin system in YIG is written as~\cite{Cherepanov}
\begin{equation}
\mathcal{H}=-2\sum_{ij} J_{ij}S_{i}S_{j} + g\mu_{B} B\sum_{i}S_{i} + \sum_{i}KS_{zi}^2,
\label{eq:1}
\end{equation}
where $S=5/2$, $g=2$ for the Fe$^{3+}$ spins, $B$ is the magnetic field, and $K$ is the anisotropy coefficient.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.8\linewidth,clip]{82303Fig4.pdf}
\end{center}
\caption{(Color online) One of the magnon excitations of YIG at (220) ($(Q_a, Q_b, Q_c)$=(1/3, 4/3, $-$1)) in the $E$ range from 5 to 20~meV measured with $E_{\rm i}=45.3$~meV. (a) $Q_a$--$Q_b$ contour map of $-1.05<Q_c<-0.95$. (b) $Q_c$--$Q_b$ contour map of $0.3<Q_a<0.4$. (c) $Q_a$--$Q_c$ contour map of $1.3<Q_b<1.4$. White areas are regions not covered by the detector. The color bars are in the unit of mbarn sr$^{-1}$meV$^{-1}$r.l.u.$^{-3}$. Reprinted with permission from Shamoto {\it et al}.\cite{Shamoto2018} ({\copyright}$\,$ 2018 The American Physical Society).}
\label{fig:4}
\end{figure*}
Although complex models with a large number of parameters have been intensively studied~\cite{Plant2,Princep2017a}, a minimum model has three nearest-neighbor exchange integrals, $J_{ad}$, $J_{aa}$, and $J_{dd}$, where the subscripts $a$ and $d$ refer to the Fe 16$a$ (octahedral) and 24$d$ (tetrahedral) sites in the cubic symmetry $Ia\bar{3}d$, respectively.
The spin Hamiltonian was diagonalized using the {\sc spinW} software package~\cite{spinw} based on the linear spin-wave theory with the Holstein--Primakoff approximation.
The spin-wave spectra were drawn by using the {\sc Horace} software package~\cite{Horace}.
A magnon excitation in YIG is observed at ${\bf q}$ in a reciprocal space deviating from the $\Gamma$ point at a finite energy transfer $E$ by using the BL01 4SEASONS~\cite{Kajimoto}, BL14 AMATERAS\cite{Nakajima} with the multi-$E_{\rm i}$ option~\cite{Nakamura}, and BL02 DNA~\cite{Shibata} spectrometers at J-PARC MLF.
The data sets were analyzed by using the {\sc Utsusemi} software package~\cite{Utsusemi}.
It forms a three-dimensional (3D) spherical shell at $E$ in the reciprocal ${\bf Q}$-space due to the nearly isotropic 3D interactions of localized spins as shown in Fig.~\ref{fig:4}.
Here, we define the scattering wave vector ${\bf Q}$ as ${\bf Q}={\bf q}+{\bf G}$, where ${\bf Q}=Q_a(2,-1,-1)+Q_b(1,1,1)+Q_c(0,-1,1)$ and ${\bf q}=q_a(2,-1,-1)+q_b(1,1,1)+q_c(0,-1,1)$ in the crystal-setting Brillouin zone defined below, and ${\bf G}$ is a reciprocal lattice vector such as (2, 2, 0).
$Q_a$, $Q_b$, $Q_c$, $q_a$, $q_b$, and $q_c$ are in reciprocal lattice units (r.l.u.).
The crystal was aligned in the ($Q_a$, $Q_b$, 0) zone as a scattering plane.
This crystal-setting Brillouin zone with 1 r.l.u.$^3$ is six times larger than the original Brillouin zone.
The $q$-integrated magnon intensity in the Brillouin zone was obtained through the integration of one 3D spherical-shell excitation.
Note that phonons were not observed in the low-$Q$ region.
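The factor of six quoted for the crystal-setting zone follows directly from the determinant of the basis vectors used above; a quick sketch:

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Basis vectors of the crystal-setting zone, in conventional (h, k, l) r.l.u.
basis = [(2, -1, -1),
         (1,  1,  1),
         (0, -1,  1)]
print(abs(det3(basis)))   # 6: the crystal-setting cell spans six (h,k,l) unit cells
```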
\begin{figure}[b]
\begin{center}
\includegraphics[width=0.8\linewidth,clip]{82303Fig5.pdf}
\end{center}
\caption{(Color online) Wide-$E$-range magnon dispersions in the $Q_c$--$E$ space. Left: observed pattern as a function of $Q_c$ measured with $E_{\rm i}=150.0$~meV in the range of $-0.5< Q_a<3$ and $1< Q_b< 3$. Right: magnon dispersion relations along the same direction calculated from the $\Gamma$ to $N$ point at (123) by {\sc spinW} with the three nearest-neighbor exchange integrals estimated here. The brown (blue) coloring denotes the positive (negative) polarization mode. The color bar is in the unit of mbarn sr$^{-1}$meV$^{-1}$r.l.u.$^{-3}$. Reprinted with permission from Shamoto {\it et al}.~\cite{Shamoto2018} ({\copyright}$\,$ 2018 The American Physical Society).}
\label{fig:5}
\end{figure}
The magnons of YIG extended up to 86~meV as shown in Fig. \ref{fig:5}. The strong magnetic excitations at about 73 and 86~meV were nearly $Q$-independent in the dispersions. In the middle-$E$ range below 55~meV, the dispersions become a broad band down to 30~meV due to the overlapping of many dispersions.
Three nearest-neighbor exchange integrals, $J_{aa}$, $J_{ad}$, and $J_{dd}$, were estimated step by step by comparing with the simulation with $gS$=5 $\mu_{\rm B}$ using {\sc spinW} software~\cite{spinw} as follows.
$J_{ad}$ was determined from the whole magnon bandwidth.
A strong positive correlation was found between $J_{ad}$ and $J_{aa}$, which was sensitive to the second-highest magnon energy at $P$ ($\sim$70~meV). $J_{dd}$ was determined by the magnon energy at $P$ ($\sim$45~meV) with positive polarization in the middle-$E$ range.
The three nearest-neighbor exchange integrals, $J_{aa}$, $J_{ad}$, and $J_{dd}$, became $0.00\pm 0.05$, $-2.90\pm 0.07$, and $-0.35\pm 0.08$~meV, respectively.
The minus sign means that the couplings are antiferromagnetic in the definition of Eq.~\ref{eq:1}.
The errors of integrals correspond to the largest energy shift of up to 2~meV in the dispersion energies, typically at the $P$ point.
The calculated dispersion relations are shown in Fig.~\ref{fig:5}.
The present three exchange integrals agree with reported values ($J_{aa}$$\sim 0$, $J_{ad}=-2.78$, $J_{dd}=-0.28$~meV) estimated from the magnetic susceptibility above 750~K~\cite{Wojtowicz} with the temperature dependence correction of the lattice constant.
$J_{aa}$ is estimated to be less than $-0.03$~meV based on the study of the garnet compound Ca$_{3}$Fe$_{2}$Si$_{3}$O$_{12}$ with only 16$a$ Fe$^{3+}$ sites~\cite{Wojtowicz,Geller}.
In the previous measurement below 55~meV~\cite{Plant1977}, $J_{aa}$, $J_{ad}$, and $J_{dd}$ were $-0.69$, $-3.43$, and $-0.69$~meV, respectively.
After detailed refinement of the same magnon data, $J_{aa}$, $J_{ad}$, and $J_{dd}$ became $-0.33$, $-3.43$, and $-1.16$~meV, respectively~\cite{Cherepanov}.
The magnon dispersion relations simulated with these integrals seem consistent with those below 55~meV, but deviate largely from the present observed dispersions above 55~meV.
To verify the validity of our exchange integrals, observed and simulated constant-$E$ cuts at various energies are compared in Fig.~\ref{fig:6}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\linewidth,clip]{82303Fig6.pdf}
\end{center}
\caption{(Color online) Constant-$E$ cuts of magnon spectra in the $Q_b$--$Q_c$ plane with 10~meV $E$ intervals. Left and right plots show observed and simulated patterns, respectively. The transfer energies are (a) 12, (b) 25, and (c) 33~meV for $E_{\rm i}=45.3$~meV and (d) 40, (e) 50, (f) 60, (g) 70, and (h) 85~meV for $E_{\rm i}=150.0$~meV. The corresponding $h$ values in (2$h$, $-h$, $-h$) are about 0.2, 0.9, 1.7, 0.65, 0.95, 1.25, 1.6, and 1.9, respectively. The color bars are for observed spectra in unit of mbarn sr$^{-1}$meV$^{-1}$r.l.u.$^{-3}$. Reprinted with permission from Shamoto {\it et al}.~\cite{Shamoto2018} ({\copyright}$\,$ 2018 The American Physical Society).}
\label{fig:6}
\end{figure}
They are consistent with each other even in the middle-$E$ range from 30 to 50~meV.
Precise fittings with more parameters than ours were reported in recent studies~\cite{Xie, Princep2017a}.
According to the model~\cite{Princep2017a}, however, simulated spin-waves with their exchange parameters have energies exceeding 100~meV.
On the other hand, the highest energy was 86~meV in our case.
This discrepancy may be due to the limited $E_{\rm i}$ of 120~meV at the MAPS time-of-flight neutron spectrometer at the ISIS spallation neutron source, which may not fully cover the high-energy spin-wave of YIG.
Although our INS data with $E_{\rm i}=150$~meV covered energies of above 100~meV, no dispersion was observed above 86~meV as shown in Fig.~\ref{fig:5}.
A quadratic dispersion has been observed in the lowest-$E$ acoustic magnon dispersion measured at various spectrometers below 14~meV near the $\Gamma$ point as shown in Fig.~\ref{fig:7}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\linewidth,clip]{82303Fig7.pdf}
\end{center}
\caption{(Color online) Lowest-$E$ magnon dispersion along the $\Lambda$ and $\Sigma$ directions measured at $\sim$20~K. The solid line is the fitting of the quadratic function of Eq.~\ref{eq:2}. The calculated dispersions with exchange integrals are also shown by pale blurry lines in the same $Q$--$E$ space. Brown (blue) denotes the positive (negative) polarization mode. Reprinted with permission from Shamoto {\it et al}.~\cite{Shamoto2018} ({\copyright}$\,$ 2018 The American Physical Society).}
\label{fig:7}
\end{figure}
The nearly isotropic low-$E$ dispersion can be written approximately as follows:
\begin{equation}
E=Da^2q^2+E_g,
\label{eq:2}
\end{equation}
where $D$ is the stiffness constant, $q$ is the magnon wave vector, $a$ is the lattice constant, and $E_g$ is the energy gap.
The energy gap $E_g$ in YIG is approximated as the summation of the Zeeman energy by the applied magnetic field $B$ ($=\mu_{0}H$) and the anisotropy field $KS_z^2$ as follows:
\begin{equation}
E_g=g\mu_{\rm B}B+KS_z^2.
\label{eq:3}
\end{equation}
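For orientation, the Zeeman contribution $g\mu_{\rm B}B$ to this gap can be evaluated at the $\sim$0.1~T field used in the structure refinement; the anisotropy term $KS_z^2$ is neglected in this sketch.

```python
# Zeeman contribution to the magnon gap: E_Z = g * mu_B * B.
# B = 0.1 T matches the field used in the structure refinement above;
# the anisotropy term is neglected here.
g = 2.0
mu_B = 5.7883818e-2   # Bohr magneton in meV/T
B = 0.1               # applied field (T)

E_Z = g * mu_B * B    # meV
print(f"E_Z = {E_Z * 1e3:.1f} micro-eV")   # ~11.6 micro-eV
```

This is the scale of the ultralow-energy (10~$\mu$eV) regime mentioned in the Introduction.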
$Da^2$ is estimated to be $633\pm 17$ meV\AA$^2$ (3.95 $\times$ 10$^{-29}$ erg cm$^2$ = 3.95 $\times$ 10$^{-40}$ J m$^2$) according to the fitting below 14~meV in Fig.~\ref{fig:7}.
It is slightly smaller than the value of 670~meV\AA$^2$ (4.2 $\times$ 10$^{-29}$ erg cm$^2$) used in an LSSE study~\cite{Kikkawa2015}.
The present value is also roughly consistent with the other reported values of $580\pm 60$ meV\AA$^2$ ~\cite{Man2017} and $\sim$533 meV\AA$^2$~\cite{Cherepanov1993}.
$D$ can be estimated using the obtained exchange integrals as follows~\cite{Srivastava,Cherepanov}:
\begin{equation}
D=\frac{5}{16}\left(8J_{aa}-5J_{ad}+3J_{dd}\right).
\label{eq:4}
\end{equation}
From this equation, $Da^2$ is estimated as 642 meV\AA$^2$ from our three exchange integrals.
This value agrees with the stiffness constant obtained from Eq.~\ref{eq:2} and Fig.~\ref{fig:7}.
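The quoted agreement is easy to reproduce numerically from the fitted exchange integrals; a minimal sketch:

```python
# Spin-wave stiffness from the three nearest-neighbor exchange integrals,
# D = (5/16) * (8*J_aa - 5*J_ad + 3*J_dd), quoted here as D*a^2 in meV A^2.
a = 12.36                              # cubic lattice constant (Angstrom)
J_aa, J_ad, J_dd = 0.00, -2.90, -0.35  # fitted exchange integrals (meV)

D = (5.0 / 16.0) * (8 * J_aa - 5 * J_ad + 3 * J_dd)
print(f"D*a^2 = {D * a**2:.0f} meV A^2")   # 642, cf. the fitted 633 +/- 17
```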
\subsection{Magnon density of states}
The imaginary part of the dynamical spin susceptibility $\chi^{\prime\prime}({\bf q}, E)$ is estimated from the following equation of the magnetic differential scattering cross section:
\begin{eqnarray}
\left(\frac{{\rm d}^2\sigma}{{\rm d}\Omega{\rm d}E}\right)_{M}&=& \frac{(\gamma r_{\rm e})^2}{\pi g^2 \mu_{\rm B}^2} \frac{{\bf k}_{\rm f}}{{\bf k}_{\rm i}} f^2(Q) t^2({\bf Q}) \left\{1+(\hat{\tau} \cdot \hat{\eta})^2\right\}_{\rm av} \nonumber\\
&&\times \left\{1+n(E)\right\} \chi^{\prime\prime}({\bf Q},E)\;,
\label{eq:5}
\end{eqnarray}
where the constant value $(\gamma r_e)^2$ is 0.2905~barn sr$^{-1}$; $g$ is the Land\'{e} $g$ factor; ${\bf k}_{\rm i}$ and ${\bf k}_{\rm f}$ are the incident and final wavenumbers, respectively; the isotropic magnetic form factor $f^2(Q)$ of Fe$^{3+}$ at (220) is 0.8059 ($Q=1.44$ \AA$^{-1}$); the dynamic structure factor $t^2({\bf Q})$~\cite{Shirane} is approximated by the square static magnetic structure factor relative to the full moments, i.e., $t^2({\bf Q})\approx F^2_M({\bf G})/F^2_{M0}=13/25$; $\hat{\tau}$ is the unit vector in the direction of ${\bf Q}$; $\hat{\eta}$ is the unit vector in the mean direction of the spins; the angle-dependent term $\{1+(\hat{\tau} \cdot \hat{\eta})^2\}_{\rm av}$ is 4/3 due to the domain average without a magnetic field; and $n(E)$ is the Bose factor.
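Collecting the numerical factors listed above for the (220) zone gives the overall scale relating $\chi^{\prime\prime}$ to the cross section; the sketch below multiplies only these listed factors, omitting ${\bf k}_{\rm f}/{\bf k}_{\rm i}$ and the Bose factor.

```python
# Q-dependent numerical factors of the magnetic cross section at (220):
# (gamma r_e)^2, form factor f^2(Q), dynamic structure factor t^2(Q),
# and the domain-averaged orientation factor {1 + (tau.eta)^2}_av.
gamma_re2 = 0.2905        # barn/sr
f2 = 0.8059               # Fe3+ form factor squared at Q = 1.44 A^-1
t2 = 13.0 / 25.0          # square static structure factor relative to full moments
orient = 4.0 / 3.0        # domain average without a magnetic field

prefactor = gamma_re2 * f2 * t2 * orient
print(f"prefactor = {prefactor:.3f} barn/sr")
```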
The imaginary part of the $q$-integrated dynamical spin susceptibility $\chi^{\prime\prime}(E)$ was obtained, as shown in Fig.~\ref{fig:8}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\linewidth,clip]{82303Fig8.pdf}
\end{center}
\caption{(Color online) Energy dependence of $q$-integrated dynamical spin susceptibility $\chi^{\prime\prime}(E)$ for the lowest-$E$ magnon mode. Downward triangle: result obtained at BL02 DNA. Diamond: result obtained at BL14 AMATERAS. Upward triangle, squares, and circles: results obtained at BL01 4SEASONS from $E_{\rm i}=$ 12.5, 21.5, and 45.3~meV, respectively. The solid line is the fitting with a single parameter $\chi^{\prime\prime}_0$ using Eq.~\ref{eq:6} at $E_g$= 0. The dashed line is a guide to the eye. Reprinted with permission from Shamoto {\it et al}.~\cite{Shamoto2018} ({\copyright}$\,$ 2018 The American Physical Society).}
\label{fig:8}
\end{figure}
The $E$-dependence of $\chi^{\prime\prime}(E)$ for a quadratic dispersion case becomes a square-root function of energy~\cite{Shirane}.
In the case of ferromagnetic Fe, the constant-$E$ scan intensity of magnons with a certain $E$ width is inversely proportional to the slope of the dispersion ($\propto$$1/\sqrt{E}$)~\cite{Shirane}.
This leads to excitation at a finite energy $E$ at a $q$ position deviating from the $\Gamma$ point.
The excitation forms a spherical shell in the 3D reciprocal ${\bf Q}$-space.
For the 3D spherical shell, the surface area $\sim 4\pi q^2$ is proportional to the energy for a ferromagnet.
The multiplication of $E$ by $1/\sqrt{E}$ results in $\sqrt{E}$ for the $q$-integrated intensity of a constant-$E$ scan.
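This scaling argument can be checked numerically for a parabolic band $E=Dq^2$: the constant-$E$ intensity goes as the shell area divided by the dispersion slope. A minimal sketch:

```python
import math

# For E = D q^2: shell area ~ q^2, slope |dE/dq| = 2 D q, so the
# q-integrated constant-E intensity ~ q^2 / (2 D q) = q / (2 D) ~ sqrt(E).
Da2 = 633.0   # stiffness D*a^2 (meV A^2); the absolute scale cancels in the ratio

def intensity(E):
    q = math.sqrt(E / Da2)
    return q**2 / (2.0 * Da2 * q)

ratio = intensity(8.0) / intensity(2.0)
print(ratio)   # 2.0 = sqrt(8/2), confirming the sqrt(E) scaling
```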
The same $E$-dependence of $\chi^{\prime\prime}(E)$ is expected in this YIG around the $\Gamma$ point because of the quadratic dispersion as follows:
\begin{equation}
\chi^{\prime\prime}(E)=\chi^{\prime\prime}_0 \sqrt{E-E_g},
\label{eq:6}
\end{equation}
where $\chi^{\prime\prime}_0$ is a constant value.
Although the data in Fig.~\ref{fig:8} were obtained under five different conditions with three spectrometers, all the values follow the same trend below 14~meV, which can be reproduced by Eq.~\ref{eq:6} at $E_g = 0$.
The fitted value of $\chi^{\prime\prime}_0$ below 14~meV in Fig.~\ref{fig:8} was $88\pm 4$ $\rm \mu_B^2$eV$^{-1.5}$Fe$^{-1}$.
On the basis of the good fitting of the MDOS, the validity of the theoretical simple model of LSSE is proved~\cite{Kikkawa2015}.
Under this condition, the MDOS, ${\mathcal D}_M$ in our simple model, can be described by the stiffness constant $D$ at $n(E)=0$.
Moreover, the MDOS is also proportional to the normalized $\chi^{\prime\prime}(E)$ obtained for the lowest-$E$ branch as follows:
\begin{eqnarray}
{\mathcal D}_M(E)&=&\frac{n_{\rm mode}D^{-3/2}}{(2\pi)^2 40}\sqrt{E-E_g} \nonumber\\
&=&\frac{A\chi^{\prime\prime}_0}{g^2 \mu_{\rm B}^2 S(S+1)}\sqrt{E-E_g}\;,
\label{eq:7}
\end{eqnarray}
where $A$ is a constant value and 40 is the number of Fe sites in the crystallographic unit cell with a cubic lattice parameter $a=12.36$ \AA.
The value $g^2 \mu_{\rm B}^2 S(S+1)$ is 35 ${\rm \mu_B}^2$Fe$^{-1}$ for Fe$^{3+}$.
There are, however, only 20 Fe sites in the magnetic unit cell; the MDOS is simply proportional to the volume per Fe site.
In the calculation by the {\sc spinW} software~\cite{spinw}, 20 Fe sites result in 20 modes in the first Brillouin zone.
Here, we focus on the lowest-$E$ acoustic magnon mode with $+$ polarization.
The constant value $A$ is basically unity because of the sum rule for $\chi^{\prime\prime}(E)$ in the $E$-integration at $n(E)=0$.
Based on our experimentally obtained stiffness constant of 633~meV\AA$^2$, the constant value $A$ became $0.94\pm 0.02$ at $n_{\rm mode}=1$ (single-mode case).
Thus, we confirmed the single mode for the lowest-$E$ magnon branch from the absolute intensity estimation.
Equation~\ref{eq:7} can be regarded as a Debye model of magnons.
The difference of the constant value from unity may come from our experimental errors and our simplified model.
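The quoted value of $A$ can be reproduced from the numbers above. The following sketch converts $Da^2$ from meV\AA$^2$ to eV with wavevectors measured in units of $1/a$ (so that the factor $(2\pi)^2\,40$ in Eq.~\ref{eq:7} carries the cell normalization), using $a=12.36$~\AA:

```python
import math

a = 12.36                   # cubic lattice parameter (Angstrom)
Da2 = 633.0                 # stiffness constant (meV A^2)
chi0 = 88.0                 # chi''_0 in mu_B^2 eV^{-3/2} per Fe
gSS = 2.0**2 * 2.5 * 3.5    # g^2 S(S+1) = 35 mu_B^2 per Fe for Fe3+

D = Da2 / a**2 / 1e3        # stiffness in eV, wavevector in units of 1/a
A = gSS * D**-1.5 / ((2 * math.pi) ** 2 * 40 * chi0)
print(round(A, 2))          # ~0.94, matching A = 0.94 +/- 0.02
```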
Above 14~meV, however, the magnon dispersion deviates from the quadratic function, resulting in the upturn of $\chi^{\prime\prime}(E)$ in Fig.~\ref{fig:8}, which is schematically shown as a dashed line in Fig.~\ref{fig:8}.
What is the meaning of the single mode?
Our result suggests that the mode has only a single polarization, as expected theoretically, although the present inelastic {\it unpolarized} neutron scattering cannot distinguish the two polarizations.
This contrasts with doubly degenerate modes in the lowest-$E$ magnon dispersion of an antiferromagnet, which often split in $Q$ due to the Dzyaloshinskii--Moriya interaction~\cite{Park}.
In YIG, by contrast, the two polarization modes are split in energy. In Sect.~4, results from inelastic {\it polarized} neutron scattering will be shown to elucidate the polarization of each magnon mode~\cite{Nambu2020}.
According to theoretical calculations on YIG~\cite{Barker}, there are 20 modes in the first magnetic Brillouin zone, where 12 modes have $+$ polarization and the other 8 modes have $-$ polarization.
These correspond to $\chi^{\prime\prime}_{yx}$ and $\chi^{\prime\prime}_{xy}$, respectively; the energy splitting between the two types of modes corresponds to the energy splitting of up and down spins.
The total number of magnon modes corresponds to the number of Fe in the magnetic unit cell.
This means that the magnetic moment has only one degree of freedom.
On the other hand, a phonon has three degrees of freedom per atom.
They are two transverse modes and one longitudinal mode simultaneously dispersing from the $\Gamma$ point.
Here, we obtained the relation between the magnon dispersion and the dynamical spin susceptibility in the quadratic dispersion case around the $\Gamma$ point.
The validity of the relation is not necessarily limited to the vicinity of the $\Gamma$ point or to quadratic dispersions.
As long as the dynamic structure factor can be approximated in this way, the dynamical spin susceptibility should be proportional to the MDOS.
Alternatively, if one knows the dynamic structure factor in the $Q$--$E$ region of interest, the present relation can be applied universally.
As an example, the present relation is used to study the ultralow-energy dispersion in the following subsection.
\subsection{Ultralow-energy spin-wave excitations}
LSSE depends on the detailed structure of the MDOS, ${\mathcal D}_M$, including the Zeeman energy gap~\cite{Kikkawa2015,Kikkawa2016}.
The Zeeman energy gap in the ferromagnetic dispersion results in the reduction of thermally excited spin current.
Therefore, it is important to determine the Zeeman energy gap directly by INS.
Ultralow-energy magnon excitation of YIG below 45~${\mu}$eV has been measured using the inverted-geometry spectrometer BL02 DNA~\cite{Shibata} at J-PARC, as shown in Fig.~\ref{fig:9}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\linewidth,clip]{82303Fig9.pdf}
\end{center}
\caption{(Color online) $q$-$E$ spectrum of YIG at $Q = (0, 2, -2)$ measured by BL02 DNA with the expected spin-wave dispersion (solid quadratic line), where $q$ is along [2 -1 -1]$_{\rm cubic}$ (normal to the scattering plane). The instrumental $Q$-resolution, 0.02 \AA$^{-1}$, is shown by the left-right arrow above the figure.}
\label{fig:9}
\end{figure}
A pulse-shaping chopper with a 3~cm slit rotating at a speed of 225~Hz results in a fine energy resolution of $3.44\pm 0.02$~$\mu$eV at $E=0$~meV.
Along the vertical $q$-direction, the $Q$-resolution is about 0.02~\AA$^{-1}$.
It is usually difficult to obtain the dispersion at such a low energy by a constant-$E$ or constant-$Q$ scan.
Instead, we estimated the magnon dispersion from the energy dependence of the dynamical spin susceptibility $\chi^{\prime\prime}(E)$ on an absolute scale by assuming the quadratic dispersion of Eq.~\ref{eq:2} described in the previous subsection.
Here, a magnetic field of about 0.1~T was applied along [111]$_{\rm cubic}$.
This direction is parallel to the growth direction of the rod-shaped crystal, leading to the removal of magnetic domain walls under a magnetic field.
In Eq.~\ref{eq:2}, the latter anisotropy term was $0.9\pm 0.5$ $\mu$eV, estimated from the zero-field INS measurement in Fig.~\ref{fig:10}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\linewidth,clip]{82303Fig10.pdf}
\end{center}
\caption{(Color online) Energy dependence of $Q$-integrated dynamical spin susceptibilities $\chi^{\prime\prime}(E)$. (a) $\chi^{\prime\prime}(E)$ and ${\mathcal D}_M$ at 300~K with (open circles) and without (closed circles) magnetic field. Solid lines are fittings by Eq.~\ref{eq:6}. (b) $\chi^{\prime\prime}(E)$ in the temperature range from 10 to 300~K with a magnetic field of 0.1~T along [111]$_{\rm cubic}$. (c) Temperature dependence of the obtained parameters $\chi^{\prime\prime}_0$ (open circles) and $E_g$ (closed circles) in Eq.~\ref{eq:6}. The solid lines are guides to the eye. Reprinted with permission from Shamoto {\it et al}.~\cite{Shamoto2020} ({\copyright}$\,$ 2020 The American Physical Society).}
\label{fig:10}
\end{figure}
Because the total gap was $11.0\pm 0.5$~$\mu$eV under a magnetic field, the Zeeman energy gap becomes $10.1\pm 0.7$~$\mu$eV, suggesting an applied magnetic field $B$ of $0.088\pm 0.006$~T.
This value agrees with $0.1\pm 0.01$~T measured by a Gauss meter.
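The field value follows from the Zeeman relation $E_{\rm Z}=g\mu_{\rm B}B$; a quick check, assuming $g=2$ for Fe$^{3+}$:

```python
g = 2.0
mu_B = 57.8838                  # Bohr magneton in microeV per tesla
E_total, E_aniso = 11.0, 0.9    # total gap and anisotropy term (microeV)
B = (E_total - E_aniso) / (g * mu_B)
print(round(B, 3))              # ~0.087 T, consistent with 0.088 +/- 0.006 T
```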
The magnetic intensity is weakened by a factor of 4/3 under a magnetic field normal to the ${\bf Q}$-vector, owing to the angle-dependent term.
The solid lines with the fixed ratio in Fig.~\ref{fig:10}(a) reproduce the intensities very well.
This suggests that the magnetic domain walls are fully removed from the crystal under the magnetic field of 0.1~T, whereas the magnetic domains are randomly oriented under zero magnetic field.
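The factor of 4/3 is the domain average of the orientation factor $\{1+(\hat{\tau} \cdot \hat{\eta})^2\}_{av}$: for randomly oriented domains $\langle(\hat{\tau} \cdot \hat{\eta})^2\rangle=1/3$, whereas for moments fully aligned perpendicular to ${\bf Q}$ the term is exactly 1. The random-domain average can be verified with a short Monte Carlo sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
eta = rng.standard_normal((200_000, 3))           # random moment directions
eta /= np.linalg.norm(eta, axis=1, keepdims=True)  # normalize to unit vectors
tau = np.array([1.0, 0.0, 0.0])                   # unit scattering vector Q-hat

factor = 1 + np.mean((eta @ tau) ** 2)            # -> 4/3 for random domains
print(round(factor, 3))                           # ~1.333
```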
Figure~\ref{fig:10}(b) shows the temperature dependence of the dynamical spin susceptibility $\chi^{\prime\prime}(E)$. The fitted parameters are shown in Fig.~\ref{fig:10}(c).
The Zeeman energy gap nearly closed at 10~K.
The observed intensity at 10~K was weak because of the small Bose factor, resulting in the scatter of the $\chi^{\prime\prime}(E)$ data in Fig.~\ref{fig:10}(b).
Because of the large errors at 10~K in Fig.~\ref{fig:10}(c), ambiguity remains in the fitting.
This gap closure at 10~K is confirmed by specific heat measurement under a magnetic field along [111]$_{\rm cubic}$ as discussed in the next subsection.
From Eq.~\ref{eq:7} with $n_{\rm mode}=1$ and $A=1$, the magnon stiffness constant $D$ becomes
\begin{equation}
D=\left\{\frac{g^2 \mu_{\rm B}^2 S(S+1)}{{(2\pi)^2 40}\chi^{\prime\prime}_0}\right\}^{2/3},
\label{eq:8}
\end{equation}
where 40 is the number of Fe sites in the crystal unit cell.
In Fig.~\ref{fig:10}(c), $\chi^{\prime\prime}_0$ increases from $157.0\pm 4.3$~${\rm \mu_B}^2$eV$^{-3/2}$Fe$^{-1}$ at $T=150$~K to $177\pm 25$~${\rm \mu_B}^2$eV$^{-3/2}$Fe$^{-1}$ at 10~K. The enhancement corresponds to about 13\%.
This has two possible explanations.
One is the magnon softening with decreasing temperature.
The other is the spin canting observed in the magnetization, which increases the INS intensity via the angle-dependent term.
The stiffness constant $Da^2$ estimated from $\chi^{\prime\prime}_0$ decreases from $415\pm 7$~meV\AA$^2$ at 150~K to $383\pm 76$~meV\AA$^2$ at 10~K.
The value at 10~K is smaller than our previous result of $633\pm 17$~meV\AA$^2$ at $\sim$20~K obtained from the magnon dispersion in the energy range from 2 to 14~meV~\cite{Shamoto2018}.
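Inverting Eq.~\ref{eq:8} for the two fitted values of $\chi^{\prime\prime}_0$ reproduces the quoted stiffness constants. The sketch below uses $a=12.36$~\AA\ and wavevectors in units of $1/a$, consistent with the normalization of Eq.~\ref{eq:7}:

```python
import math

a = 12.36                                  # cubic lattice parameter (Angstrom)
gSS = 35.0                                 # g^2 S(S+1) in mu_B^2 per Fe

for chi0 in (157.0, 177.0):                # chi''_0 at 150 K and 10 K
    D = (gSS / ((2 * math.pi) ** 2 * 40 * chi0)) ** (2 / 3)  # eV
    print(round(D * 1e3 * a**2))           # Da^2 in meV A^2: ~415, then ~383
```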
The present result is opposite to the standard expectation that magnons harden with decreasing temperature.
The softening is also observed below 150~K via the microwave spin-wave resonance of YIG under a magnetic field along [111]$_{\rm cubic}$~\cite{LeCraw}.
In that study, the stiffness constant was estimated by assuming a constant relevant phonon velocity.
The stiffness constant $D$ decreases by about 5\% due to the magnon softening, which is well reproduced by the random phase approximation with sublattice magnetizations $M_a$ and $M_d$ at the $a$ and $d$ sites, respectively.
$D$ is not proportional to the total magnetization $M$ but is expressed in terms of $M_a$ and $M_d$ as follows~\cite{LeCraw}:
\begin{equation}
D=B\frac{-8J_{aa}M_{a}^{2}-3J_{dd}M_{d}^{2}+5J_{ad}M_{a}M_{d}}{3M_{d}-2M_{a}},
\label{eq:9}
\end{equation}
where $B$ is a constant value.
This magnon softening corresponds to an 8\% increase in $\chi^{\prime\prime}_0$, which is not large enough to account for the enhancement of about 13\%.
Although our magnetic structure refinement at about 295 K does not show any spin canting, the result suggests that the spins cant at low temperatures below 150~K, as discussed in the next subsection.
\subsection{Magnetization and specific heat results}
The observed magnetic anomaly below 150~K was examined by investigating the magnetization and specific heat of YIG.
The results will be compared with INS results.
Figures~\ref{fig:11}(a) and \ref{fig:11}(b) show the temperature dependence of magnetization under a magnetic field cooling of 0.5~T along [001]$_{\rm cubic}$ and [111]$_{\rm cubic}$, respectively.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.6\linewidth,clip]{82303Fig11.pdf}
\end{center}
\caption{Magnetization as a function of temperature. Observed magnetization under a magnetic field of 0.5~T along [001]$_{\rm cubic}$ (a) and [111]$_{\rm cubic}$ (b) (solid lines) with Bloch-type magnetization (broken lines) fitted in the temperature range from 150 to 350~K. Enlarged temperature dependences of the magnetization at $B=0.5$~T along [001]$_{\rm cubic}$ (c) and [111]$_{\rm cubic}$ (d). The insets show the magnetic field dependence of the magnetization. The thin horizontal line in (d) is a guide to the eye. Reprinted with permission from Shamoto {\it et al}.~\cite{Shamoto2020} ({\copyright}$\,$ 2020 The American Physical Society).}
\label{fig:11}
\end{figure}
For these measurements, small non-cylindrical crystals were used owing to the sample-size limitations of the instruments.
Because the demagnetization factors of these crystal shapes are ambiguous, a magnetic field of 0.5~T, larger than 0.1~T, was applied in Fig.~\ref{fig:11}.
To assess the influence of the demagnetization, the magnetic field dependence was measured from 0.1 to 0.5~T, as shown in the insets of Figs.~\ref{fig:11}(c) and \ref{fig:11}(d).
The insets show that 0.1~T is not large enough for the full magnetization of YIG because of the finite demagnetization effect of the crystals.
At low temperatures, the magnetization usually decreases with increasing temperature owing to magnon excitation, following the Bloch $T^{3/2}$ rule~\cite{Kikkawa2015}.
In the present case, however, the Bloch rule can be applied to the magnetization only above 150~K, as shown in Figs.~\ref{fig:11}(a) and \ref{fig:11}(b).
The fittings lead to similar values of $\zeta=(5.96$--$7.0)\times 10^{-5}$~K$^{-3/2}$ for $M=M_0(1-\zeta T^{3/2})$.
They are slightly larger than the values of $(5.20$--$5.83)\times 10^{-5}$~K$^{-3/2}$ reported in the previous study~\cite{Kikkawa2015}.
Below 150~K, the magnetization under a magnetic field along [001]$_{\rm cubic}$ shows a continuous increase with decreasing temperature (Fig.~\ref{fig:11}(c)). On the other hand, a peak appears at approximately 25~K in the magnetic field along [111]$_{\rm cubic}$ (Fig.~\ref{fig:11}(d)), suggesting a crossover.
Specific heat measurement has also been performed to observe the anomalous behavior at 25~K as shown in Figs.~\ref{fig:12}(a) and \ref{fig:12}(b).
Gapless magnons in a 3D ferromagnet or ferrimagnet are expected to contribute a specific-heat term proportional to $T^{3/2}$, in addition to the phonon term proportional to $T^3$ at low temperatures~\cite{Kittel}, as follows:
\begin{equation}
\frac{C}{T^{3/2}}=A+BT^{3/2},
\label{eq:10}
\end{equation}
where $A=0.113k_{\rm B}(Da^2 /k_{\rm B})^{-3/2}$ and $B=12\pi^{4}N_{\rm A} k_{\rm B}/(5\theta_{\rm D}^{3})$ with $k_{\rm B}$ the Boltzmann constant, $N_{\rm A}$ the Avogadro number for the formula weight (Y$_3$Fe$_5$O$_{12}$), and $\theta_{\rm D}$ the Debye temperature.
In Figs.~\ref{fig:12}(a) and \ref{fig:12}(b), $C/T^{3/2}$ is plotted as a function of $T^{3/2}$.
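The linear fits in Fig.~\ref{fig:12}(b) amount to ordinary least squares of $C/T^{3/2}$ against $T^{3/2}$. A minimal sketch with synthetic data follows; the magnon coefficient is the quoted $A$, while the phonon coefficient is a hypothetical value chosen only for illustration:

```python
import numpy as np

T = np.linspace(5.0, 8.8, 20)              # fitting range (K)
A_true = 8.17e-3                           # magnon term, J mol^-1 K^-5/2 (quoted)
B_true = 9.0e-5                            # phonon term, J mol^-1 K^-4 (assumed)
C = A_true * T**1.5 + B_true * T**3        # magnon + phonon heat capacity

x = T**1.5
slope, intercept = np.polyfit(x, C / x, 1)  # slope -> B (Debye), intercept -> A
print(intercept, slope)
```

The intercept gives the magnon term (hence $Da^2$), and the slope gives the phonon term (hence $\theta_{\rm D}$), exactly as read off from Fig.~\ref{fig:12}(b).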
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\linewidth,clip]{82303Fig12.pdf}
\end{center}
\caption{(Color online) Specific heat capacity $C/T^{3/2}$ plotted as a function of $T^{3/2}$. (a) The magnetic field directions are parallel to [001]$_{\rm cubic}$ (red dots) and [111]$_{\rm cubic}$ (black dots) under 1 T. Both heat capacities are plotted in a wide temperature range up to 250 K. The upper values (red dots) around the peak at $T$= 96~K ($\sim$920 K$^{3/2}$) are measured at $B$=1 T along [001]$_{\rm cubic}$, which at low temperatures become lower than those (black dots) measured at $B$=1 T along [111]$_{\rm cubic}$. (b) Enlarged temperature dependence of the heat capacities for [001]$_{\rm cubic}$ (red open circles) and [111]$_{\rm cubic}$ (black closed circles) magnetic fields at low temperatures below 14~K. The difference becomes large below $T$= 9 K ($\sim$25 K$^{3/2}$). The linear solid lines are fitted results within the range, extrapolated to the ends. Reprinted with permission from Shamoto {\it et al}.\cite{Shamoto2020} ({\copyright}$\,$ 2020 The American Physical Society).
}
\label{fig:12}
\end{figure}
Figure~\ref{fig:12}(a) shows the specific heat over a wide temperature range.
The specific heat capacities under the two magnetic field directions start to deviate from each other below around 150~K, suggesting that the magnon anomaly may have already started by 150~K.
This corresponds to the non-Bloch-type temperature dependence of magnetization below 150~K.
The anomaly corresponds to the magnon softening observed below 150 K by microwave spin-wave resonance~\cite{LeCraw}.
In addition, the first-order anisotropy constant anomalously increases below 150~K in a ferromagnetic resonance study of YIG~\cite{Dillon}, suggesting an origin different from a simple sublattice-magnetization effect, such as spin canting.
Figure~\ref{fig:12}(b) shows the same data at temperatures below 14~K.
The slope corresponds to the Debye temperature $\theta_{\rm D}$, whereas the extrapolation to the $y$-axis leads to the stiffness constant $Da^2$, which is expected to be common for all the plots in Fig.~\ref{fig:12}(b).
However, both parameters below 9~K ($\sim$25~K$^{3/2}$) are different between the two conditions.
In the case of $B$ along [001]$_{\rm cubic}$, $A$ becomes $-2.8\pm 0.2\times 10^{-4}$~J/mole/K$^{5/2}$, whereas $A$ is $8.17\pm 0.02\times 10^{-3}$~J/mole/K$^{5/2}$ at $B$ along [111]$_{\rm cubic}$.
The former negative $A$ value can be regarded as an artifact coming from the assumed magnon excitation without a gap in the narrow temperature range from 5.0 to 8.8~K.
The latter finite magnon contribution to the specific heat suggests that the magnon energy gap disappears under $B$ along [111]$_{\rm cubic}$.
The stiffness constant $Da^2$ obtained from the $A$ value at the magnetic field along [111]$_{\rm cubic}$ was 312~meV\AA$^2$.
This value is fairly consistent with $383\pm 76$~meV\AA$^2$ obtained from the ultralow-energy magnon at 10~K.
However, these values are much smaller than our previous result of $633\pm 17$~meV\AA$^2$ estimated at $\sim$20~K.
In the magnetic field along [001]$_{\rm cubic}$, the Debye temperature was 195.8~K, whereas it became 277.0~K in the magnetic field along [111]$_{\rm cubic}$. The Debye temperature seems to change depending on the magnetic field direction. This lattice hardening can be seen below 9~K in Fig.~\ref{fig:12}(b).
We did not observe any peaks in the temperature dependence of specific heat in the whole range of the measured temperature, suggesting no appreciable phase transition.
This suggests that this anomaly may originate from a crossover from a ferrimagnet to a canted ferrimagnet.
We observed a new crossover at 25~K below the precursor anomaly at 150~K.
Regarding the precursor anomaly, the deviation from the Bloch rule below 150~K in Fig.~\ref{fig:11}(a) reaches 3.6\% (${\bf H}\parallel$[001]$_{\rm cubic}$) and 4.0\% (${\bf H}\parallel$[111]$_{\rm cubic}$) at the lowest temperature of 2.5~K.
These suppressions correspond to canting angles of the magnetic moments of about 15.4 and 16.3 degrees, respectively.
The spin canting is expected to increase the INS intensity due to the angle-dependent term $\{1+(\hat{\tau} \cdot \hat{\eta})^2\}_{av}$ in Eq.~\ref{eq:5}.
The canting of 15.4 degrees leads to a 7\% enhancement of the intensity, which is smaller than the enhancement of about 13\% in Fig. \ref{fig:10}(c).
The remaining part can be explained by the magnon softening of about 8\% inferred from the microwave spin-wave resonance result~\cite{LeCraw}.
The sum of these two components, about 15\%, is fairly close to the observed enhancement of about 13\%.
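The quoted canting angles and the intensity enhancement follow from simple geometry: a fractional suppression $s$ of the projected moment corresponds to a canting angle $\theta=\arccos(1-s)$, and the angle-dependent term then grows by $\sin^2\theta$ (a sketch; the $\sin^2\theta$ form assumes the canted component lies along ${\bf Q}$):

```python
import math

for s in (0.036, 0.040):                    # Bloch-rule deviations at 2.5 K
    theta = math.degrees(math.acos(1 - s))  # canting angle of the moments
    print(round(theta, 1))                  # ~15.4, then ~16.3 degrees

enh = math.sin(math.radians(15.4)) ** 2     # growth of 1 + (tau.eta)^2 from 1
print(round(100 * enh))                     # ~7 (percent)
```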
Regarding the magnetization peak at 25~K in Fig.~\ref{fig:11}(d), this magnetization anomaly accompanies the Zeeman energy-gap closure.
The dielectric relaxation anomaly also exhibits the same temperature and the same magnetic field direction dependence~\cite{Yamasaki2009}, suggesting the same origin as the magnetization anomaly.
In this temperature range, the lattice must strongly couple with the spin via spin-orbit coupling.
The crystal structure of YIG is trigonally distorted at room temperature~\cite{Shamoto2018}.
The symmetry axis [111]$_{\rm cubic}$ coincides with the magnetic field direction along [111]$_{\rm cubic}$, which is also the magnetic easy axis. However, it is difficult to understand why the dielectric property appears at low temperatures as a crossover transition.
One possible scenario is that thermally activated itinerant Fe$^{2+}$ impurities are frozen out at low temperatures~\cite{Yamasaki2009}.
Impurity Fe$^{2+}$ centers provide large spin-orbit coupling effects~\cite{Kohara}.
Therefore, the observed magnetic anomaly may appear through the spin-orbit coupling effect due to the localization of Fe$^{2+}$ centers below 150~K.
Based on the magnon anomaly appearing under a magnetic field along [111]$_{\rm cubic}$, we may expect the low-temperature spin Seebeck effect to reflect this anomaly.
So far, no experiments under this condition have been reported.
It is necessary to investigate how the spin Seebeck effect changes depending on the magnetic field direction relative to the crystal axis.
Although the $\chi^{\prime\prime}_{xy}$ polarization mode may change the magnon character with the spin canting in the low-lying acoustic magnon mode, the Zeeman energy gap closure can increase the spin current, especially at low temperatures.
This type of experiment may provide further information about the enhancement of the spin Seebeck effect.
The relationship between the magnon dispersion and the dynamical spin susceptibility $\chi^{\prime\prime}(E)$ has proved indispensable for investigating ultralow-energy magnons beyond the instrumental $Q$-resolution.
By this method, the Zeeman energy-gap anomaly was found under a magnetic field along [111]$_{\rm cubic}$ in YIG.
Magnetization measurement also revealed a similar anomaly under a magnetic field along the same direction.
Moreover, specific heat measurement also revealed that the Zeeman energy gap closes at low temperatures.
The increase of $\chi^{\prime\prime}_0$ was discussed in terms of the following two possibilities.
One is that the sublattice magnetization effect softens the ultralow-energy magnon dispersion.
The other is that spin canting emerges at low temperatures.
The former magnon softening, estimated from the previous report~\cite{LeCraw}, corresponds to an increase of about 8\% in $\chi^{\prime\prime}_0$, whereas the latter spin canting of 15.4 degrees, estimated from the magnetization, contributes an increase of about 7\%.
The large increase of $\chi^{\prime\prime}_0$ in Fig. \ref{fig:10}(c) was roughly explained by these two effects.
The spin canting anomaly was attributed to the magnetic crossover transition in YIG below 150~K.
Although this remains speculative because of the large error bars in $\chi^{\prime\prime}_0$ at $T=10$~K, this spin canting anomaly may play an important role in enhancing spintronic properties, such as the spin Seebeck and ultrasound spin-pumping effects.
\section{Inelastic $\bm{Polarized}$ Neutron Scattering}
\subsection{Spin current carrier in insulators}
The spin current in insulators can be carried by the precessional motion of the ordered moments.
More precisely, the thermal spin motive force, i.e., ``spin pumping,'' which governs the spin Seebeck signal, is proportional to the product of the integrated energy and the chiral correlation function (transverse component of the magnons)~\cite{Xiao2010}.
We commence this section with a review of the direction of the precessional motion, i.e., magnon polarization~\cite{Nambu2020}.
The magnon polarization had never been directly measured in any material.
According to the Landau--Lifshitz equation without the Gilbert damping term for simplicity,
\begin{align}
\frac{\partial\vec{M}}{\partial t}=-\gamma\vec{M}\times\vec{H}^{\rm eff},
\label{LL}
\end{align}
a magnetic moment precesses counterclockwise around the effective magnetic field direction, where $\gamma$ is the gyromagnetic ratio of the moments.
This motion can be defined as ``positively'' polarized.
The collective excitations in single-domain ferromagnets also precess only counterclockwise; hence, all ferromagnetic magnons have a positive polarization (Fig.~\ref{nambu-fig1}(a)).
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\linewidth,clip]{82303Fig13.pdf}
\end{center}
\caption{(Color online) Illustration of magnon polarization for (a) a ferromagnet, (b) an antiferromagnet, and (c) two magnon modes in a ferrimagnet. The ``positive'' polarization acoustic mode is a coherent right-handed circular precession of the moments, whereas the ``negative'' polarization optical mode is a left-handed precession dominated by the exchange interaction between Fe$_{\rm oct}$ and Fe$_{\rm tet}$ sites. Reprinted with permission from Nambu {\it et al}.~\cite{Nambu2020} ({\copyright} 2020 The American Physical Society).}
\label{nambu-fig1}
\end{figure}
Simple collinear antiferromagnets have two magnon modes with opposite polarization (Fig.~\ref{nambu-fig1}(b)), but these are degenerate unless large magnetic fields or Dzyaloshinskii--Moriya interactions are turned on.
Simple ferrimagnets have two anti-aligned sublattices and also support two magnon polarizations, but the inter-sublattice exchange field naturally separates the branches of opposite polarization into acoustic and optical modes (Fig.~\ref{nambu-fig1}(c)).
The energy gap between these modes can be large; hence, spectroscopic studies have the potential to observe this character.
YIG is a good material for attempting such measurements, since it is well studied and frequently employed for spintronics and magnonics.
The magnon polarization in ferrimagnets for the uniform ($Q=0$) modes can be understood with Eq.~\ref{LL}.
With $j\in [{\rm tet}, {\rm oct}]$, the effective magnetic field can be written as~\cite{Schlomann1960}
\begin{align}
\vec{H}_j^{\rm eff}&=-\frac{\partial U}{\partial \vec{M}_j}\nonumber\\
&=\frac{\partial}{\partial \vec{M}_j}\left(\frac{1}{2}\Lambda_{\rm tet}M_{\rm tet}^2+\frac{1}{2}\Lambda_{\rm oct}M_{\rm oct}^2-\Lambda\vec{M}_{\rm tet}\cdot\vec{M}_{\rm oct}\right),
\end{align}
with $\Lambda_{\rm tet}$, $\Lambda_{\rm oct}$, and $\Lambda$ being exchange interaction constants.
We respectively divide $\vec{M}_j$ and $\vec{H}_j^{\rm eff}$ into static and dynamic parts, $\vec{M}_j(t)=\vec{M}_{j,0}+\vec{m}_j(t)$ and $\vec{H}_j^{\rm eff}(t)=\vec{H}_{j,0}^{\rm eff}+\vec{h}_j^{\rm eff}(t)$.
After linearization, choosing the quantization axis along $z$, $M_{\rm tet}^z=M_{\rm tet}$, $M_{\rm oct}^z=-M_{\rm oct}$, and $m^{\pm}_j=m^x_j\pm im^y_j$, the secular equation reads
\begin{equation}
\begin{pmatrix}
\pm\omega-\gamma_{\rm tet}\Lambda M_{\rm oct} & -\gamma_{\rm tet}\Lambda M_{\rm tet}\\
\gamma_{\rm oct}\Lambda M_{\rm oct} & \pm\omega+\gamma_{\rm oct}\Lambda M_{\rm tet}
\end{pmatrix}
\begin{pmatrix}
m^{\pm}_{\rm tet}\\
m^{\pm}_{\rm oct}
\end{pmatrix}
=0.
\end{equation}
One of the eigenenergies is $\omega_{\rm acoustic}=0$ for the acoustic mode, whereas the other gives $\omega_{\rm optical}=\Lambda\left(\gamma _{\rm oct}M_{\rm tet}-\gamma_{\rm tet}M_{\rm oct}\right)$ corresponding to the optical gap.
Eigenoscillations within the $xy$-plane can then be easily calculated, where the precession radii give $m^{\pm}_{\rm tet}/m^{\pm}_{\rm oct}=-M_{\rm tet}/M_{\rm oct}$ and $-\gamma_{\rm tet}/\gamma_{\rm oct}$ for the acoustic and optical modes, respectively.
The corresponding eigenoscillations are schematically depicted in Fig.~\ref{nambu-fig1}(c): the acoustic mode is a coherent right-handed circular precession in which $\vec{M}_{\rm tet}$ and $\vec{M}_{\rm oct}$ are exactly antiparallel, whereas the optical mode is a left-handed circular precession with a finite canting angle.
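The two-sublattice eigenproblem can also be solved numerically. Writing the linearized equations for $\vec{m}^{+}=(m^{+}_{\rm tet}, m^{+}_{\rm oct})$ as $\omega\,\vec{m}^{+}=K\vec{m}^{+}$ yields one zero (acoustic) eigenvalue and one whose magnitude equals the optical gap $\Lambda(\gamma_{\rm oct}M_{\rm tet}-\gamma_{\rm tet}M_{\rm oct})$; the parameter values below are arbitrary, chosen only for illustration:

```python
import numpy as np

Lam = 1.0                    # inter-sublattice exchange constant (arbitrary units)
g_t, g_o = 1.00, 1.05        # gyromagnetic ratios gamma_tet, gamma_oct (hypothetical)
M_t, M_o = 3.0, 2.0          # sublattice magnetizations (YIG-like 3:2 ratio)

# Linearized Landau-Lifshitz dynamics for the + circular components
K = Lam * np.array([[ g_t * M_o,  g_t * M_t],
                    [-g_o * M_o, -g_o * M_t]])
w = np.sort(np.abs(np.linalg.eigvals(K)))
print(w)   # ~[0, Lam*(g_o*M_t - g_t*M_o)]: acoustic mode and optical gap
```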
\subsection{Polarized neutron scattering cross sections}
Polarized neutron scattering--taking into account the neutrons' spin degree of freedom in the scattering process--gives detailed information of the magnetism~\cite{Chatterji2006}.
It has mainly been used to disentangle the magnetic and nuclear contributions to scattering cross sections~\cite{Moon1969}.
The elucidation of the magnetic moment directions, as well as the symmetry of magnetic fluctuations~\cite{Kakurai1984}, has also been demonstrated.
More recently, the ``chiral term''~\cite{Maleyev1995} was used to measure the chirality (handedness) of magnetic order~\cite{Loire2011} and of excitations in paramagnetic~\cite{Roessli2002} and magnetically ordered phases~\cite{Lorenzo2007a}.
The chirality observed in these studies is a spatial variation of the {\it non-collinear} magnetic moments caused by effects such as geometrical frustration and the Dzyaloshinskii--Moriya interaction.
Here, we aim for a different property: the intrinsic polarization of the magnetic excitations in a {\it collinear} magnet.
The nuclear-magnetic ``interference term'' has also attracted much attention recently in the context of magnetoelastic coupling.
To detect the magnon polarization, a special setting of the neutron polarization direction is required.
The neutron polarization direction can be chosen arbitrarily, i.e., it is not restricted to the direction perpendicular to the scattering plane.
For a schematic understanding, we adopt a simple orthogonal scattering coordinate $(x,y,z)$ ({\it aka} Blume--Maleyev coordination, as in Fig.~\ref{nambu-fig2}), where $x\parallel \vec{Q}$, $y\perp \vec{Q}$, and $z$ is perpendicular to the horizontal scattering plane with the scattering wave vector $\vec{Q}$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\linewidth,clip]{82303Fig14.pdf}
\end{center}
\caption{(Color online) Sketch of the $P_x$-polarized neutron scattering experiment on the IN20~\cite{IN20} instrument at ILL, France, with bold black arrows denoting the neutron path. The scattering coordinate $(x,y,z)$ is also given. Reprinted with permission from Nambu {\it et al}.~\cite{Nambu2020} ({\copyright} 2020 The American Physical Society).}
\label{nambu-fig2}
\end{figure}
The observations for each neutron polarization direction are summarized via the following cross-section formulae~\cite{Blume1963,Maleyev1963}:
\begin{align}
&\sigma_x^{\pm\pm}\propto N, \label{xnsf}\\
&\sigma_x^{\pm\mp}\propto M_y + M_z \mp M_{\rm ch}, \label{xsf}\\
&\sigma_y^{\pm\pm}\propto N+M_y\pm R_y, \label{ynsf}\\
&\sigma_y^{\pm\mp}\propto M_z, \label{ysf}\\
&\sigma_z^{\pm\pm}\propto N+M_z\pm R_z, \label{znsf}\\
&\sigma_z^{\pm\mp}\propto M_y, \label{zsf}
\end{align}
where the ideal case with perfect performances of neutron spin polarizers and flippers is assumed.
The term $\sigma_{\alpha}^{io}$ ($\alpha = x,y,z$) stands for the partial differential scattering cross section (${\rm d}^2\sigma/({\rm d}\Omega{\rm d}E)^{io}$) with $i$ incoming and $o$ outgoing neutrons with $+/-$ neutron polarization.
The nonmagnetic nuclear ($N=\langle N_{Q}N_{Q}^{\dagger }\rangle _{\omega}$), in-plane magnetic ($M_y=\langle M_{Qy}M_{Qy}^{\dagger}\rangle _{\omega}$), out-of-plane magnetic ($M_z=\langle M_{Qz}M_{Qz}^{\dagger}\rangle _{\omega}$), chiral ($M_{\rm ch}=i(\langle M_{Qy}M_{Qz}^{\dagger}\rangle_{\omega}-\langle M_{Qz}M_{Qy}^{\dagger}\rangle_{\omega})$), and interference ($R_{\beta}=\langle N_Q M_{Q\beta}^{\dagger}\rangle_{\omega}+\langle M_{Q\beta}N_Q^{\dagger}\rangle_{\omega}$) ($\beta = y,z$) terms are included.
$\langle N_{Q}N_{Q}^{\dagger }\rangle _{\omega }$ and $\langle M_{Q\beta}M_{Q\beta}^{\dagger}\rangle_{\omega}$ are the spatiotemporal Fourier transforms of the nuclear-nuclear and spin-spin correlation functions, respectively.
The chiral term defines the antisymmetric correlation function within the $yz$-plane, and the interference terms describe the symmetric part of the nuclear-magnetic interference.
Detecting the magnon polarization through the chiral term is difficult because of the low scattering intensities.
The chiral term can only be measured when the applied field and magnetization are aligned with the scattering wave vector $\vec{Q}$.
Magnetic neutron scattering, however, detects only the spin components perpendicular to $\vec{Q}$, and these projections are considered to be tiny in {\it collinear} magnets.
Moreover, the signal is contaminated by imperfections in the polarizers and flippers, which are needed to select the incident and scattered neutrons (see Fig.~\ref{nambu-fig2}).
Derived analytical formulae for corrections of the neutron polarization are summarized elsewhere~\cite{Nambu-arXiv}.
\subsection{Mode-resolved magnon polarization}
Single-crystalline samples of YIG were grown by the traveling solvent floating zone method, and the measured Curie temperature $T_{\rm C}=553$~K agrees well with previous reports.
Inelastic polarized neutron scattering data were obtained using the thermal neutron triple-axis spectrometer IN20~\cite{IN20} at the Institut Laue-Langevin, France.
IN20 is equipped with a Heusler (111) monochromator and analyzer to polarize the incident neutron beam and analyze the scattered neutron polarization; the horizontally variable and vertically fixed curvature enables a large polarized neutron flux.
We used a graphite filter in the outgoing beam to suppress higher-order contamination and fixed the final wavenumber at $k_{\rm f}=2.662$ or 4.1~{\AA}$^{-1}$, corresponding to final energies of $E_{\rm f}=14.7$ and $34.8$~meV, respectively.
A YIG single crystal ($\sim$8~g) was oriented with $(HHL)$ in the horizontal scattering plane and placed in a cryomagnet that supplied a horizontal magnetic field of 0.3~T parallel to the momentum transfer $\vec{Q}$ at temperatures between 10 and 300~K.
YIG is a very soft magnet~\cite{Yamasaki2009}, and the external field of 0.3~T was sufficient to fully saturate the magnetization into a single magnetic domain.
The obtained fields were homogeneous over a large area around the sample position and were continuously connected to the guide fields along the neutron path.
In IN20, the scattered neutrons are recorded in four {\it channels}: $I_{x}^{++}$, $I_{x}^{--}$, $I_{x}^{+-}$, and $I_{x}^{-+}$, where $I_{x}^{io}\ \left(\propto {\rm d}^2\sigma/({\rm d}\Omega{\rm d}E)^{io}\right)$ is the intensity for $i$ incoming and $o$ outgoing neutrons with $+/-$ neutron polarization~\cite{Chatterji2006}.
From the four channels, the non-magnetic nuclear ($N$), magnetic ($M=M_{y}+M_{z}$), and chiral ($M_{\rm ch}$) contributions can be extracted through the following combinations:
\begin{align}
&N = \langle N_Q N_Q^{\dagger}\rangle_{\omega} = \frac{1}{2}(I_x^{++} + I_x^{--}),\\
&M = \langle M_{Qy}M_{Qy}^{\dagger}\rangle_{\omega} + \langle M_{Qz}M_{Qz}^{\dagger}\rangle_{\omega} = \frac{1}{2}(I_x^{+-} + I_x^{-+}),\\
&M_{\rm ch} = i(\langle M_{Qy}M_{Qz}^{\dagger}\rangle_{\omega}-\langle M_{Qz}M_{Qy}^{\dagger}\rangle_{\omega}) = \frac{1}{2}(I_x^{+-} - I_x^{-+}).
\end{align}
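As a schematic illustration of these combinations, the channel separation can be sketched as follows (a minimal sketch; the count rates are hypothetical placeholders, not measured data):

```python
# Sketch of extracting the nuclear, magnetic, and chiral terms from the
# four polarized-neutron channels; the count rates below are hypothetical.
def separate_channels(I_pp, I_mm, I_pm, I_mp):
    """I_pp = I_x^{++}, I_mm = I_x^{--}, I_pm = I_x^{+-}, I_mp = I_x^{-+}."""
    N = 0.5 * (I_pp + I_mm)        # non-magnetic nuclear term
    M = 0.5 * (I_pm + I_mp)        # magnetic term M_y + M_z
    M_ch = 0.5 * (I_pm - I_mp)     # chiral term; its sign encodes the
                                   # magnon polarization
    return N, M, M_ch

# Hypothetical count rates at a point on the acoustic magnon branch:
N, M, M_ch = separate_channels(I_pp=12.0, I_mm=10.0, I_pm=480.0, I_mp=40.0)
```

A positive (negative) chiral term then marks the counterclockwise (clockwise) precession sense of the corresponding magnon mode.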
Phonon and magnon scatterings in YIG have been separated in terms of the nuclear and magnetic spectra~\cite{Plant1977,Princep2017a,Shamoto2018,Shamoto2020}.
The chiral contribution $M_{\rm ch}$ contains new information about the magnon polarization.
Figure~\ref{nambu-fig3} summarizes the spectra derived from the accumulation of many scans around $Q=(4,4,-4)$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\linewidth,clip]{82303Fig15.pdf}
\end{center}
\caption{(Color online) Derived spectra of (a) the nuclear term $N=\frac{1}{2}(I_x^{++}+I_x^{--})$, (b) magnetic term $M=M_y+M_z=\frac{1}{2}(I_x^{+-}+I_x^{-+})$, and (c) chiral term $M_{\rm ch}=\frac{1}{2}(I_x^{+-}-I_x^{-+})$ from mesh scans taken at 293~K. Note that some scans miss the $I^{++}$ channel, which is approximated by $I^{--}$. The chiral term is compared with (d) the calculated resolution-convoluted partial differential scattering cross section. Reprinted with permission from Nambu {\it et al}.~\cite{Nambu2020} ({\copyright} 2020 The American Physical Society).}
\label{nambu-fig3}
\end{figure}
The nuclear response is very weak (Fig.~\ref{nambu-fig3}(a)) as intended: the $(4,4,-4)$ intensity is four orders of magnitude smaller than that of the strongest nuclear Bragg peak $(0,0,4)$.
The remaining weak signals may originate from very weak phonon excitations and/or imperfections of the neutron polarizers and flippers.
The disentangled magnetic response in Fig.~\ref{nambu-fig3}(b) is basically equivalent to unpolarized neutron scattering.
The chiral term $M_{\rm ch}$ is plotted in Fig.~\ref{nambu-fig3}(c).
The dispersion is the same as in the magnetic response, but the sign (color) of the signal distinguishes the polarization of the magnon modes.
Note that Fig.~\ref{nambu-fig3} summarizes the data obtained at 293~K, which shows softening behavior compared with Fig.~\ref{fig:7} obtained at 20~K as explained earlier.
The red acoustic mode has the ``positive'' polarization (counterclockwise with respect to the field), whereas the blue optical mode is the exchange-split mode that precesses in the opposite (clockwise) direction.
We compare the measurements with the polarized neutron partial differential cross section calculated using atomistic spin dynamics with quantum statistics~\cite{Barker2016b,Barker2019}.
The exchange constants are taken from Princep {\it et al}.~\cite{Princep2017a} and scaled by $S^2$ ($S=5/2$).
With the convolution of the approximated instrument resolution, Fig.~\ref{nambu-fig3}(d) shows excellent agreement with the experiments.
Calculation with the parameter set from Ref.~30 also gives good agreement, since both parameter sets yield almost identical magnon dispersion relations below 35~meV.
We measured a large number of points on the two magnon branches and also measured an optical mode with positive polarization by moving to the $(6,6,-4)$ Brillouin zone (Fig.~\ref{nambu-fig4}(a)).
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.8\linewidth,clip]{82303Fig16.pdf}
\end{center}
\caption{(Color online) (a), (b) Calculated partial differential scattering cross sections overlaid with experimentally estimated peak positions at 293~K. $(H,H,-L)$ in (a) and $(H,H,-H)$ in (b) span the ranges $(5,5,-3)$ to $(7,7,-5)$ and $(3,3,-3)$ to $(5,5,-5)$, respectively. (c) Temperature dependence of the estimated optical gap value compared with the calculation and the previous results~\cite{Plant1977}. The shaded area marks $E\le k_{\rm B}T$. (d) Calculated $T/T_{\rm C}$ dependence of the thermal spin pumping from Y$_3$Fe$_5$O$_{12}$ by the Fe$_{\rm tet}$ and Fe$_{\rm oct}$ sites, and the total. Reprinted with permission from Nambu {\it et al}.~\cite{Nambu2020} ({\copyright} 2020 The American Physical Society).}
\label{nambu-fig4}
\end{figure*}
Peaks were extracted using resolution-convoluted fits with the recently developed Eckold--Sobolev-type resolution function~\cite{Eckold2014}.
We found almost perfect agreement between the experiment and theory for both the nearly flat magnon mode at 50~meV around $(6,6,-4)$ (Fig.~\ref{nambu-fig4}(a)) and the acoustic and optical modes below 35~meV around $(4,4,-4)$ (Fig.~\ref{nambu-fig4}(b)).
The agreement from a low temperature to room temperature validates the low-temperature parameterization of the exchange coupling constants~\cite{Princep2017a}.
The magnon polarization of the localized (flat) mode is positive, in agreement with the calculations, highlighting the ability to measure polarization anywhere in the reciprocal space.
The optical gap is important for the thermodynamic and transport properties of YIG around and above room temperature, including the spin Seebeck effect.
Magnon modes are thermally occupied below $E=k_{\rm B}T$ (shaded area in Fig.~\ref{nambu-fig4}(c)).
At low temperatures, only the acoustic mode is occupied, but at room temperature and above, the optical mode with the opposite polarization becomes occupied.
The strength of the spin Seebeck signal is proportional to the product of the integrated energy and the magnon polarization, to which the acoustic and optical modes contribute with opposite signs.
The observed maximum of the spin Seebeck voltage in YIG near room temperature~\cite{Kikkawa2015} has been interpreted in terms of this competition.
This is also theoretically illustrated in Fig.~\ref{nambu-fig4}(d): the total spin-pumping signal is clearly not a simple sum of the contributions from the Fe$_{\rm tet}$ and Fe$_{\rm oct}$ moments, and the spin Seebeck voltage drops much faster than the magnetization with increasing temperature.
Although a theoretical treatment including interactions at the Pt/YIG interface is not yet available, the optical modes are expected to play an important role in the spin Seebeck voltage.
The optical modes might also explain the observation of reduced magnon conductivity~\cite{Wimmer2018}.
\section{Discussion}
\begin{table*}[t]
\caption{Reported exchange parameters in the unit of meV for Y$_3$Fe$_5$O$_{12}$. $J_i^{jk}$ stands for the $i$th neighbor interaction between the $j$ and $k$ sites. Note that $J_3^{aa}$ and $J_3^{aa\prime}$ with identical displacement have separate superexchange pathways. The positive and negative signs respectively correspond to antiferromagnetic and ferromagnetic interactions.}
\label{table-J}
\begin{center}
\begin{tabular}{lccccccc}
\hline
& $J_1^{ad}$ & $J_2^{dd}$ & $J_3^{aa}$ & $J_3^{aa\prime}$ & $J_4^{ad}$ & $J_5^{dd}$ & $J_6^{aa}$\\
\hline
Wojtowicz (1964)~\cite{Wojtowicz} & 5.56 & 0.56 & 0 & 0 & -- & -- & --\\
Plant (1977)~\cite{Plant1977} & 6.86 & 1.38 & 1.38 & 1.38 & -- & -- & --\\
Plant (1983)~\cite{Plant2} & 6.4 & 0.9 & 0 & 0 & 0.46 & 0.28 & 1.5\\
Cherepanov {\it et al}. (1993)~\cite{Cherepanov1993} & 6.87 & 2.3 & 0.65 & 0.65 & -- & -- & --\\
Princep {\it et al}. (2017)~\cite{Princep2017a} & 6.8(2) & 0.52(4) & 0.0(1) & 1.1(3) & -0.07(2) & 0.47(8) & -0.09(5)\\
Xie {\it et al}. (2017)~\cite{Xie} & 4.774 & 0.308 & 0.144 & 0.144 & 0.326 & 0.358 & 0.008\\
Shamoto {\it et al}. (2018)~\cite{Shamoto2018} & 5.80(14) & 0.70(16) & 0.0(1) & 0.0(1) & -- & -- & --\\
\hline
\end{tabular}
\end{center}
\end{table*}
\begin{figure*}[h]
\begin{center}
\includegraphics[width=\linewidth,clip]{82303Fig17.pdf}
\end{center}
\caption{(Color online) Spin-wave calculations for Y$_3$Fe$_5$O$_{12}$ based on the exchange parameters from (a) Plant (1983)~\cite{Plant2}, (b) Princep {\it et al}. (2017)~\cite{Princep2017a}, and (c) Shamoto {\it et al}. (2018)~\cite{Shamoto2018}. Dashed curves represent the dispersion relations, and solid curves with the color correspond to the sign and magnitude of the chiral correlation function, i.e., the magnon polarization.}
\label{spinwavecalc}
\end{figure*}
Both inelastic {\it unpolarized} and {\it polarized} neutron scattering measurements have been performed to study the magnetic excitation in YIG.
Each measurement has its strengths and weaknesses.
The magnon polarization for each magnon branch can only be observed by inelastic polarized neutron scattering.
However, the polarization of neutrons strongly reduces the count rate and limits the accessible energy range due to the performance of polarization devices.
Moreover, the energy resolution is sometimes sacrificed to maximize the neutron flux.
Unpolarized neutron scattering, on the other hand, is suitable for studying high-energy and ultralow-energy spin-wave spectra with relatively high resolution.
Another typical example is the determination of exchange parameters.
Fittings have been made on inelastic unpolarized neutron scattering results~\cite{Xie,Princep2017a,Shamoto2018}, and representative parameter sets are summarized in Table~\ref{table-J}.
We compared the calculated spin-wave spectra in Fig.~\ref{spinwavecalc} and found that they are consistent with each other for two major branches below 40~meV.
Differences are, however, visible for higher energy transfers around the maxima of the acoustic and optical modes.
The obtained parameters depend on the energy transfer regime used during the fitting.
The spin-wave spectrum should be measured in the whole energy range with a reasonably high $E$-resolution for precise fitting.
This has been a challenging condition for any inelastic neutron scattering, and especially for inelastic polarized neutron scattering, owing to the limited performance of polarizers for high-energy neutron beams.
The $^3$He spin filter and dynamic nuclear polarization methods have been developed to polarize neutrons at such higher energy transfers.
In the inelastic unpolarized neutron scattering measurement, absolute-scale intensity estimation has played an important role in discoveries.
The ultralow-energy spin-wave dispersion with a Zeeman energy gap under an external magnetic field was determined from the energy dependence of the absolute intensity.
The magnon mode number of YIG was also determined to be unity at low energies by the intensity estimation.
However, it was impossible to determine the polarization mode by this method.
The clear demonstration of two different polarizations in the acoustic and optical modes by inelastic polarized neutron scattering measurement shows the importance of two-mode mixing at high temperatures.
While the two-mode mixing plays an important role in the spin Seebeck effect, the present detailed study of the spin waves by inelastic unpolarized neutron scattering reveals various anomalies that depend on the magnetic field.
These anomalies were overlooked in the previous measurements.
Each technique thus revealed novel features of YIG through its respective advantages.
All these techniques are applicable to other magnetic systems.
Through our first observation of the magnon polarization~\cite{Nambu2020}, it was established as a fundamental property of matter relevant to spintronic phenomena.
Our technique~\cite{Nambu2020} to resolve the magnon polarization is also applicable for other ferrimagnetic systems.
For instance, Gd$_3$Fe$_5$O$_{12}$ shows a sign change in the spin Seebeck voltage~\cite{Geprags2016}, in which modes with different polarizations are thought to exist close together.
A magnon polarization analysis of rare-earth iron garnets could also help to understand the observed magnon spin currents~\cite{Cramer2017}.
In a magnetically soft material such as YIG, the magnon polarization is nearly circular; however, YIG is very amenable to doping, and magnetic anisotropies can also be introduced.
Strong anisotropies, as well as local anisotropy in the tetrahedral and octahedral sites, may couple magnons with opposite polarization, thereby causing ellipticity and anticrossings between optical and acoustic modes.
This ``magnon squeezing''~\cite{Kamra2016} may be essential for applications of magnets in quantum information and can be measured by this technique.
\section{Conclusions}
We have studied the basic characteristics of the quintessential magnet YIG using neutrons.
Although YIG has been believed to have the space group $Ia\bar{3}d$, our detailed crystal structure refinement showed distortion to the trigonal $R\bar{3}$~\cite{Shamoto2018}.
Unpolarized neutron scattering experiments revealed magnetic excitations~\cite{Shamoto2018,Shamoto2020} ranging from high (100~meV) to ultralow energy (10~$\mu$eV).
Through linearized spin-wave analysis, the nearest-neighbor exchange interactions, $J_{aa}$, $J_{ad}$, and $J_{dd}$, were estimated, and the stiffness constant $D$ was found to be consistent within this approximation.
We also measured the polarization of magnons, which is an important degree of freedom in magnets yet hitherto untested, in collinear ferrimagnetic YIG and found quantitative agreement with the theory~\cite{Nambu2020}.
The magnon polarization can easily be squeezed by spin anisotropy and/or any mixing between the acoustic and optical modes.
Our first attempt to measure the pure polarization using YIG can therefore be a textbook case for magnon polarization observation.
We anticipate that valuable information can be gained from similar measurements on other ferrimagnets.
Theories discussing the role of magnon polarization in spintronics are now appearing~\cite{Kamra2017}, and our direct measurement of the magnon polarization has thus demonstrated the importance of neutron scattering for the next generation of spintronics and magnonics.
\begin{acknowledgments}
We acknowledge the following individuals for fruitful discussions: M. Akatsu, J. Barker, S.~E. Barnes, G.~E.~W. Bauer, L.-J. Chang, M. Enderle, H. Endo, M. Fujita, J. Ieda, Y. Inamura, T.~U. Ito, R. Kajimoto, K. Kakurai, T. Kikkawa, Y. Kobayashi, K. Kodama, M. Kofu, C.-H. Lee, S. Maekawa, M. Matsuura, M. Mori, T. Moyoshi, K. Munakata, M. Nakamura, A. Nakao, Y. Nemoto, T. Oda, T. Ohhara, S. Ohira-Kawamura, H. Onishi, Y. Ohnuma, E. Saitoh, N. Sato, K. Shibata, Y. Shiomi, S. Toth, J.~M. Tranquada, T. Weber, B. Winn, H. Yamauchi, Y. Yasui, and T. Ziman.
We also thank the CROSS sample environment team and M.~B\"ohm for their experimental assistance, and M. Usami and Y. Baba in the JAEA technical support team.
The work at J-PARC was performed under proposals 2012B0134, 2015A0174 (BL01), 2014B0157, 2015I0002, 2016A0318, 2017L0301 (BL02), and 2013B0278 (BL14).
The work at ILL was performed under project 4-01-1559 (doi:10.5291/ILL-DATA.4-01-1559).
This work was supported by JSPS (Nos.~JP21H03732, JP25287094, JP16K05424, JP16H04007, JP17H05473, JP19H04683, JP17H06137), JST (No.~JPMJFR202V) Iketani Science and Technology Foundation, and the Graduate Program in Spintronics at Tohoku University.
\end{acknowledgments}
\section{Introduction}
Modal decompositions such as proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD) have been used to distill important physical mechanisms from data, and to develop reduced-order models for turbulent wall-bounded flows \citep{Berkooz1993}, flow past a cylinder \citep{Chen2012,Bagheri2013}, and a jet in cross-flow \citep{Rowley2009,Schmid2010}, to name a few examples.
These techniques were developed for flows involving (at most) stationary immersed surfaces, and have been applied less extensively to fluid-structure interaction (FSI) problems, where the fluid motion is coupled to deformation and/or vibration of an immersed structure. In this FSI setting, data analysis has, to our knowledge, only been applied to data of either the fluid or the structure independently of the other. The fluid-only approach has been used to study flow past a flexible membrane \citep{Schmid2010}, a cantilevered beam \citep{Cesur2014}, and an elastically-mounted cylinder undergoing vortex-induced vibration \citep{Blanchard2017}. The solid-only approach has been applied to fish swimming \citep{Bozkurttas2009,Tangorra2010} and flag flapping \citep{Michelin2008,Kim2013}. These approaches reveal significant flow or structure behavior, respectively, but do not yield driving mechanisms in the omitted quantity. This in turn leaves the correlation between fluid and structure behavior unknown.
We propose a framework for data analysis of FSI systems where the fluid and structure are treated together, which naturally allows correlation between the fluid and structure to inform the resulting modes of the fully-coupled system. As part of this formulation, we define a norm in terms of the total mechanical energy of the FSI system. This combined fluid-structure data-analysis procedure is then demonstrated on limit-cycle flapping and chaotic flapping of strictly two-dimensional flags. We show that the methodology is useful in extracting the mechanisms of FSI in these various regimes.
We focus here on proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD) because of their widespread use and their expected suitability for the problems considered here. The limit-cycle case described in section \ref{sec:LC} is associated with one dominant frequency, and thus DMD is a natural candidate because of its localized harmonic nature \citep{Mezic2013}. POD is also expected to be suitable because of the near-harmonic decomposition it typically yields for limit-cycle flows (such as occurs in vortex shedding past a cylinder near the critical Reynolds number of approximately 47; see, \emph{e.g.}, \citet{Kutz2016}). For the chaotic flapping problem described in section \ref{sec:chaos}, the non-broadband (`peaky') nature of the dynamics again makes DMD a fitting technique. However, POD and DMD are not ideal for all contexts. For example, \citet{Towne2017} demonstrated that in statistically stationary flows with broadband frequency content -- as observed in the majority of turbulent flows -- spectral POD provides an optimal decomposition. The major goal of the current work is to demonstrate the utility of performing data analysis in a manner that accounts for both the fluid and the structure, rather than explore the advantages of any particular technique, a question which in any event depends on the specific FSI problem under consideration. Future work can readily incorporate the methodology presented here into the appropriate technique for the intended application.
\section{POD and DMD of fluid-structure interaction}
We consider snapshot-based methods applied to discrete data. The associated data matrices are assumed to be organized so that each column provides the state of the system at an instance in time and each row contains the time history of a specific state variable. For simplicity we present our formulation in a two-dimensional setting; the extension to three dimensions is straightforward.
We assume fluid data is given on a stationary Cartesian grid, $\Omega$, made up of $n_f$ points ($\Omega \subset \mathbb{R}^{n_f}$), and let the streamwise and transverse fluid velocities at the $i^{th}$ time instance, $t_i$, be $\textbf{u}_i,\textbf{v}_i \in \Omega$. Fluid data is often provided in this format by immersed boundary methods and experiments; some numerical methods use moving meshes at each time step that conform to the moving structure, and fluid data obtained from these methods would need to be interpolated onto a single stationary grid at each time instance to use the method we propose here. Note that for FSI problems with bodies of finite (non-negligible) thickness, there may be points on $\Omega$ that lie within the body. In this case, the corresponding velocities $\textbf{u}_i, \textbf{v}_i$ should be set to zero to avoid spurious contributions from these `fictitious-fluid' quantities.
We consider structural data provided in a Lagrangian setting, with the structural domain, $\Gamma$, comprised of $n_s$ points ($\Gamma$ depends on time). We let $\boldsymbol{\chi}_i, \boldsymbol{\eta}_i \in \Gamma$ denote the streamwise and transverse structural displacements from an undeformed reference configuration at the $i^{th}$ time instance, and $\boldsymbol{\xi}_i, \boldsymbol{\zeta}_i \in \Gamma$ be the corresponding structural velocities. We define the total state vector at $t_i$ as $\textbf{y}_i = [\textbf{u}_i, \textbf{v}_i, \boldsymbol{\chi}_i, \boldsymbol{\eta}_i, \boldsymbol{\xi}_i, \boldsymbol{\zeta}_i ] ^T\in\mathbb{R}^{2n_f + 4n_s}$, and define the data matrix, $\textbf{Y}\in\mathbb{R}^{n\times m}$ ($n = 2n_f + 4n_s$ is the size of the state and $m$ is the number of snapshots), as $\textbf{Y} = [ \textbf{y}_1, \dots, \textbf{y}_m]$.
POD modes are computed from the mean-subtracted data matrix, $\tilde{\textbf{Y}}$, whose $i^{th}$ column is defined as $\tilde{\textbf{Y}}_i = \textbf{Y}_i - \boldsymbol{\mu}$, where $\boldsymbol{\mu} = 1/m \sum_{k = 1}^m \textbf{y}_k$ is the sample temporal mean of $\textbf{Y}$. For DMD, \cite{Chen2012} found that the use of $\tilde{\textbf{Y}}$ reduces DMD to a discrete Fourier transform in time, and that using \textbf{Y} allows for growth-rate information to be retained. For this reason, DMD is performed on \textbf{Y} below.
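A minimal sketch of assembling $\textbf{Y}$ and $\tilde{\textbf{Y}}$ from per-snapshot fields might look as follows (array sizes and values are illustrative stand-ins, not actual simulation data):

```python
import numpy as np

# Sketch of building the FSI data matrix: each snapshot stacks the fluid
# velocities (u, v) on the stationary grid with the structural
# displacements (chi, eta) and velocities (xi, zeta).
n_f, n_s, m = 6, 3, 5                  # fluid points, structural points, snapshots
rng = np.random.default_rng(2)

snapshots = []
for i in range(m):
    u, v = rng.standard_normal(n_f), rng.standard_normal(n_f)
    chi, eta = rng.standard_normal(n_s), rng.standard_normal(n_s)
    xi, zeta = rng.standard_normal(n_s), rng.standard_normal(n_s)
    snapshots.append(np.concatenate([u, v, chi, eta, xi, zeta]))

Y = np.column_stack(snapshots)         # shape (2 n_f + 4 n_s, m)
mu = Y.mean(axis=1, keepdims=True)     # sample temporal mean
Y_tilde = Y - mu                       # mean-subtracted data for POD
```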
\subsection{Proper orthogonal decomposition}
POD decomposes the data into orthogonal spatially uncorrelated modes that are ordered such that the leading $k$ modes $(k \le m)$ provide the most energetically dominant rank-$k$ representation of $\tilde{\textbf{Y}}$. This optimal representation is defined with respect to a norm, and we therefore select an inner product space whose induced norm yields the mechanical energy of the FSI system. Defining $\textbf{x}$ as an Eulerian spatial coordinate and $\textbf{s}$ as a Lagrangian variable that parameterizes the structure, and letting $\textbf{u}(\textbf{x},t) = [u(\textbf{x},t), v(\textbf{x},t)]^T$, $\boldsymbol\chi(\textbf{s},t) = [\chi(\textbf{s},t), \eta(\textbf{s},t)]^T$, and $\boldsymbol{\xi}(\textbf{s},t) = [\xi(\textbf{s},t), \zeta(\textbf{s},t)]^T$ be continuous analogues of the discrete variables defined earlier, the mechanical energy is
\begin{equation}
E(t) = \frac{\rho_f}{2}\int_\Omega |\textbf{u}(\textbf{x}, t)|^2 d\textbf{x} + \int_\Gamma \left[ \kappa(\boldsymbol{\chi}(\textbf{s},t)) + \frac{\rho_s}{2} \left|\boldsymbol{\xi}(\textbf{s},t) \right|^2 \right] d\textbf{s}
\label{eqn:TE_cont}
\end{equation}
where $\Omega$ and $\Gamma$ are continuous analogues of the discrete domains defined earlier. The terms corresponding to the fluid and structural velocities represent the kinetic energy in the system ($\rho_f$ and $\rho_s$ are the fluid and structure density, respectively) and $\kappa(\boldsymbol{\chi}(\textbf{s},t))$ is the potential energy within the structure (for deforming bodies this is the strain energy). The potential (strain) energy for flapping flags will be defined in the next section. Note that for bodies of finite thickness where there is fictitious fluid in $\Omega \cap \Gamma$, we again assume the fluid velocity is set to zero within $\Gamma$. This can equivalently be viewed as subtracting the fictitious fluid contribution, $\rho_f/2 \int_\Gamma |\textbf{u}(\textbf{x}, t)|^2 \delta(\textbf{x} - \boldsymbol{\chi}(\textbf{s},t) )d\textbf{s}$, from the definition of energy above.
While there are a variety of definitions of energy one could use (so long as it is the induced norm of an inner-product space), the mechanical energy is a natural choice because it is nonincreasing in time and accounts for the transfer of energy between the fluid and structure apart from viscous dissipation in the fluid. That is, through a straightforward computation one can show that in the absence of body forces and under the assumption that the shear stress is negligible on the boundary of $\Omega$ (which occurs for sufficiently large $\Omega$),
\begin{equation}
\frac{dE(t)}{dt} = -\frac{\mu}{2} \int_\Omega \left( \nabla \textbf{u} + (\nabla \textbf{u})^T \right) : \left( \nabla \textbf{u} + (\nabla \textbf{u})^T \right) d\textbf{x} \le 0
\label{eqn:diss}
\end{equation}
where $\mu$ is the dynamic viscosity of the fluid. Note that we assumed there is no dissipation in the structure in arriving at (\ref{eqn:diss}). Including this term would modify (\ref{eqn:diss}) by a term that depends on the properties of the structure but in any case is nonpositive.
In the discrete setting of interest, the norm is defined as $||(\cdot)||_\textbf{W} \equiv ||\textbf{W} (\cdot) ||_2$, where $\textbf{W}$ is a weighting matrix defined as
\begin{equation}
\textbf{W} = \begin{bmatrix} \sqrt{\frac{\rho_f}{2}}\textbf{I}^{2n_f} & \textbf{0} & \textbf{0} \\ \textbf{0} & \textbf{L} &\textbf{0} \\ \textbf{0} &\textbf{0} & \sqrt{\frac{\rho_s}{2}} \textbf{I}^{2n_s} \end{bmatrix}
\end{equation}
In this expression, $\textbf{I}^n$ is the $n\times n$ identity matrix and \textbf{L} is the operator that maps the structural displacements to the potential energy of the structure. We assume that \textbf{L} is formulated to be positive definite and symmetric so that \textbf{W} is positive definite and symmetric.
The inner product associated with this weighting matrix is defined as $\langle\textbf{q}, \textbf{p}\rangle_\textbf{W} \equiv \textbf{q}^T\textbf{W}^2\textbf{p} = (\textbf{W}\textbf{q})^T( \textbf{W}\textbf{p})$ $\forall \textbf{q},\textbf{p} \in \mathbb{R}^n$ and the induced norm is $||\textbf{q}||_\textbf{W} \equiv \sqrt{\langle \textbf{q}, \textbf{q} \rangle_\textbf{W}} = \sqrt{(\textbf{W}\textbf{q})^T( \textbf{W}\textbf{q})}$ $\forall \textbf{q} \in \mathbb{R}^n$, which is a discrete approximation of the square root of (\ref{eqn:TE_cont}) scaled by the inverse of the distance between data points, $\Delta x$. (This assumes that the distance between points of the fluid and structural domains is equal; unequal spacings can be incorporated into \textbf{W} in the standard ways).
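The block structure of \textbf{W} and the weighted inner product can be sketched as follows (a minimal sketch; the sizes are illustrative, and the structural operator \textbf{L} is replaced by a positive-definite placeholder rather than a true strain-energy operator):

```python
import numpy as np

# Sketch of the block weighting matrix W; the structural operator L here
# is a positive-definite placeholder, whereas in practice it encodes the
# strain energy of the structure.
n_f, n_s = 4, 3
rho_f, rho_s = 1.0, 5.0

L = np.sqrt(0.5) * np.eye(2 * n_s)              # placeholder SPD operator
Zfs = np.zeros((2 * n_f, 2 * n_s))
Zss = np.zeros((2 * n_s, 2 * n_s))
W = np.block([
    [np.sqrt(rho_f / 2) * np.eye(2 * n_f), Zfs, Zfs],
    [Zfs.T, L, Zss],
    [Zfs.T, Zss, np.sqrt(rho_s / 2) * np.eye(2 * n_s)],
])

def ip_W(q, p):
    """W-weighted inner product <q, p>_W = (W q)^T (W p)."""
    return (W @ q) @ (W @ p)

q = np.ones(2 * n_f + 4 * n_s)
energy_norm = np.sqrt(ip_W(q, q))               # ||q||_W
```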
The energetically ordered POD modes with respect to the \textbf{W}-weighted norm may be written in terms of the singular value decomposition (SVD) $\textbf{W}\tilde{\textbf{Y}} = \textbf{U}\boldsymbol{\Sigma}\textbf{V}^T$, where $\boldsymbol{\Sigma}$ is a diagonal matrix containing the singular values $\sigma_1, \dots, \sigma_m$ ordered by decreasing energy, and \textbf{U} (\textbf{V}) has columns $\textbf{u}_j$ ($\textbf{v}_j$) containing the left (right) singular vectors that correspond to $\sigma_j$. In this notation, the POD modes are $\hat{\textbf{U}} \equiv \textbf{W}^{-1} \textbf{U}$ (note that they are orthogonal with respect to the $\textbf{W}$-weighted inner product). These modes are written in terms of the SVD, but may be computed more efficiently using the method of snapshots \citep{Sirovich1987}. The energetically optimal rank-$k$ ($k\le m$) approximation of a snapshot $\textbf{y}_i$ may be expressed through an orthogonal projection onto the POD modes as
\begin{equation}
\textbf{y}_i \approx \sum_{j=1}^k (\textbf{W}\hat{\textbf{u}}_j)^T (\textbf{W}\textbf{y}_i)\,\hat{\textbf{u}}_j = \sum_{j=1}^k \textbf{u}_j^T (\textbf{W}\textbf{y}_i)\,\hat{\textbf{u}}_j
\label{eqn:POD_approx}
\end{equation}
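The weighted-POD procedure can be sketched numerically as follows (a minimal sketch on synthetic data; the snapshot matrix and the diagonal SPD weighting are illustrative stand-ins for FSI data):

```python
import numpy as np

# Minimal sketch of energy-weighted POD on synthetic data.
rng = np.random.default_rng(0)
n, m = 12, 8
Y = rng.standard_normal((n, m))                 # stand-in snapshot matrix
W = np.diag(rng.uniform(0.5, 2.0, n))           # stand-in energy weighting

Y_tilde = Y - Y.mean(axis=1, keepdims=True)     # subtract temporal mean
U, S, Vt = np.linalg.svd(W @ Y_tilde, full_matrices=False)
U_hat = np.linalg.solve(W, U)                   # POD modes, W^{-1} U

# Modes are orthonormal in the W-weighted inner product:
gram = (W @ U_hat).T @ (W @ U_hat)              # equals the identity

def reconstruct(y, k):
    """Rank-k W-orthogonal projection onto the leading POD modes."""
    coeffs = U[:, :k].T @ (W @ y)               # u_j^T (W y)
    return U_hat[:, :k] @ coeffs
```

Retaining all modes recovers each mean-subtracted snapshot exactly; truncating to $k < m$ yields the energetically optimal rank-$k$ approximation with respect to the \textbf{W}-weighted norm.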
\subsection{Dynamic mode decomposition}
Whereas POD modes define an energetically optimal representation of the data, DMD modes are obtained from a linear regression that best represents the dynamics of a (potentially nonlinear) data set. Though there are more general variants \citep{Tu2014}, we compute DMD modes from the matrix $\textbf{A}$ that best maps the progression of the state from one time instance to the next; \emph{i.e.}, the \textbf{A} that satisfies $\min \sum_{j=1}^{m-1} || \textbf{y}_{j+1} - \textbf{A}\textbf{y}_j ||_2$\footnote{The minimization can also be performed with respect to the \textbf{W}-weighted norm, but we retain the use of the standard 2-norm for consistency with most approaches in the literature.}. This relation can often be satisfied exactly under reasonable conditions on the data (such as linear independence of the columns of \textbf{Y}), and the best-fit matrix is $\textbf{A} = \textbf{Y}' (\textbf{Y}'')^{\#}$, where $\textbf{Y}' = [\textbf{y}_2, \dots, \textbf{y}_m]$, $\textbf{Y}'' = [\textbf{y}_1, \dots, \textbf{y}_{m-1}]$, and $(\textbf{Y}'')^\#$ is the pseudo-inverse of $\textbf{Y}''$.
DMD modes are the eigenvectors of \textbf{A}, denoted as $\boldsymbol\Phi = [\boldsymbol\phi_1, \dots, \boldsymbol\phi_{m-1}]$. These modes may be computed efficiently without forming \textbf{A} explicitly \citep{Tu2014}. The corresponding eigenvalues, $\hat{\gamma}_1, \dots, \hat{\gamma}_{m-1}$, are structured such that $\hat{\gamma}_j = e^{2\pi\gamma_j \Delta t}$, where $\Delta t$ is the time step between two snapshots and $\gamma_j$ is a complex number whose real and imaginary parts give the growth rate and frequency, respectively, of mode $j$. Note that $\gamma_j$ may be computed from $\hat{\gamma}_j$ via $\gamma_j =\log(\hat{\gamma}_j) / (2\pi\Delta t)$. A $k^{th}$ order ($k \le m-1$) representation of the system at the $i^{th}$ time instance $t_i$ may be written in terms of the DMD modes as
\begin{equation}
\textbf{y}_i \approx \sum_{j=1}^k c_j e^{2\pi\gamma_j t_i} \boldsymbol\phi_j
\label{eqn:DMD_approx}
\end{equation}
where $c_j = (\boldsymbol\Phi^\# \textbf{y}_1)_j$ represents the initial condition in terms of the $j^{th}$ DMD mode.
The above describes the DMD formulation derived for flows without bodies or flows involving stationary bodies, and may be used without modification for FSI problems to obtain the coupled flow-structure behavior that best represents the full system dynamics.
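A minimal numerical sketch of this DMD procedure is given below; the data are generated from a known linear map so that the recovered eigenvalues can be checked, and all sizes, eigenvalues, and the time step are illustrative choices:

```python
import numpy as np

# Minimal DMD sketch on synthetic snapshots from a known linear map A_true.
rng = np.random.default_rng(1)
n, m, dt = 4, 15, 0.05

Q = rng.standard_normal((n, n))
A_true = Q @ np.diag([0.9, 0.7, 0.5, 0.3]) @ np.linalg.inv(Q)

Y = np.empty((n, m))
Y[:, 0] = rng.standard_normal(n)
for i in range(1, m):
    Y[:, i] = A_true @ Y[:, i - 1]             # snapshot sequence

Yp, Ypp = Y[:, 1:], Y[:, :-1]                  # Y' and Y''
A = Yp @ np.linalg.pinv(Ypp)                   # best-fit linear operator
gamma_hat, Phi = np.linalg.eig(A)              # DMD eigenvalues and modes

gamma = np.log(gamma_hat) / (2 * np.pi * dt)   # growth rate + frequency
c = np.linalg.pinv(Phi) @ Y[:, 0]              # modal initial condition
y6 = Phi @ (c * gamma_hat**5)                  # reconstructs snapshot y_6
```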
\section{Application to flag flapping}
\label{sec:flags}
The dynamics of flag flapping are governed by the Reynolds number ($Re$) and the dimensionless mass ($M_\rho$) and bending stiffness ($K_B$), defined as
\begin{equation}
Re = \frac{\rho_f U L}{\mu}, \; M_\rho = \frac{\rho_s h}{\rho_f L}, \; K_B = \frac{EI}{\rho_f U^2 L^3}
\end{equation}
where $\rho_f$ ($\rho_s$) is the fluid (structure) density, $U$ is the freestream velocity, $L$ is the flag length, $\mu$ is the dynamic viscosity of the fluid, $h$ is the flag thickness, and $EI$ is the bending stiffness.
The potential (strain) energy in the flag is given in terms of the flag displacement in the direction normal to the flag, $\chi_n(s,t)$, as $\kappa(\chi_n(s,t)) = K_B\left(\partial^2 \chi_n / \partial s^2\right)^2$ (note that for flags only one Lagrangian variable is required to parametrize the body, so the scalar $s$ is used in place of $\textbf{s}$). In the case of inextensible flags considered here, the strain energy may be written in terms of the streamwise and transverse displacements as $\kappa(\boldsymbol\chi(s,t)) = K_B\left[\left(\partial^2 \chi/ \partial s^2\right)^2 + \left(\partial^2 \eta / \partial s^2\right)^2\right]$. We therefore define the $\textbf{L}$-submatrix of \textbf{W} using the standard second-order central difference formula for the $\chi$ and $\eta$ sub-blocks, which results in a symmetric positive definite weighting matrix.
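The central-difference sub-block can be sketched as below. This is a minimal sketch: the zeroed end rows are a simplifying assumption, and an actual implementation must incorporate the flag's boundary conditions so that the resulting operator is symmetric positive definite, as required above.

```python
import numpy as np

# Sketch of the strain-energy sub-block of W for a flag: a scaled
# second-order central-difference approximation of d^2/ds^2, so that
# |L chi|^2 ~ K_B |d^2 chi / ds^2|^2. Zeroing the end rows is a
# simplifying assumption for this sketch only.
def bending_operator(n_s, ds, K_B):
    D2 = np.zeros((n_s, n_s))
    for i in range(1, n_s - 1):
        D2[i, i - 1 : i + 2] = [1.0, -2.0, 1.0]
    return np.sqrt(K_B) * D2 / ds**2

L_chi = bending_operator(n_s=5, ds=0.25, K_B=1e-4)
# The full L acts block-diagonally on the chi and eta displacements.
```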
The data for this analysis was obtained using the immersed-boundary method of \cite{Goza2017}. The method allows for arbitrarily large flag displacements and rotations, and is strongly-coupled to account for the nonlinear coupling between the flag and the fluid. The method was validated on several flapping flag problems. The physical parameters for each run are described in the subsequent subsections; see \cite{Goza2017} for details about the simulation parameters such as the grid spacing and time step that were used for the different simulations.
\subsection{Limit-cycle flapping}
\label{sec:LC}
We consider a POD and DMD analysis of flapping with $Re =500$, $M_\rho = 0.18$, and $K_B = 0.0001$, for which the system enters limit-cycle behavior \citep{Connell2007}. Figure \ref{fig:conv_LC_tip} shows the transverse displacement of the trailing edge of the flag as a function of time along with the corresponding power spectral density. Our analysis is performed after the transient region, once the system enters periodic behavior of fixed amplitude and frequency (beginning at $t \approx 20$ in figure \ref{fig:conv_LC_tip}). Figure \ref{fig:LC_conventional} shows contours of vorticity at four snapshots in time during a period of flapping in the limit cycle regime. Snapshots were obtained over the range $t \in [20,40]$ in increments of $\Delta t =0.05$.
\begin{figure}
\centering
\begin{subfigure}[b]{0.3\textwidth}
\hspace*{-15mm}
\includegraphics[scale=0.25,trim={0cm 0cm 0cm 0cm},clip]{Figures/tipdisp_convLCbig.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\hspace*{12mm}
\includegraphics[scale=0.25,trim={0cm 0cm 0cm 0cm},clip]{Figures/tipfreq_convLCbig.pdf}
\end{subfigure}
\caption{Transverse displacement (left) and spectral density (right) of the trailing edge of a flag in limit-cycle flapping with $Re =500$, $M_\rho = 0.18$, and $K_B = 0.0001$.}
\label{fig:conv_LC_tip}
\end{figure}
\begin{figure}
\begin{subfigure}[b]{0.245\textwidth}
\includegraphics[scale=0.35,trim={0cm 0cm 0cm 0cm},clip]{Figures/Re500_Mp18_Kbp0001_1.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.245\textwidth}
\hspace*{3.9mm}
\includegraphics[scale=0.35,trim={2cm 0cm 0cm 0cm},clip]{Figures/Re500_Mp18_Kbp0001_2.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.245\textwidth}
\hspace*{1.5mm}
\includegraphics[scale=0.35,trim={2cm 0cm 0cm 0cm},clip]{Figures/Re500_Mp18_Kbp0001_3.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.245\textwidth}
\hspace*{-0.8mm}
\includegraphics[scale=0.35,trim={2cm 0cm 0cm 0cm},clip]{Figures/Re500_Mp18_Kbp0001_4.pdf}
\end{subfigure}
\caption{Snapshots of a flapping period for a flag in limit-cycle flapping with $Re =500, M_\rho = 0.18, K_B = 0.0001$. Contours are of vorticity, in 18 increments from -5 to 5.}
\label{fig:LC_conventional}
\end{figure}
Figure \ref{fig:singvals_convLC} shows the singular values $\sigma$ from POD along with the DMD eigenvalues $\gamma$ of largest growth rate (real part). The four leading POD modes (which represent approximately $66\%$ of the total system energy) are shown in the top row of figure \ref{fig:mode_convLC}. Apart from the mode corresponding to the temporal mean, DMD modes typically come in complex conjugate pairs (\emph{e.g.}, the two leading modes are $\boldsymbol{\phi}_1, \bar{\boldsymbol{\phi}}_1$). We show in the bottom row of figure \ref{fig:mode_convLC} the real and imaginary parts of $\boldsymbol{\phi}_1$ and $\boldsymbol{\phi}_2$ (the mode corresponding to the temporal mean is not pictured). The POD and DMD modes are nearly identical since this system is characterized by a single dominant frequency (\emph{cf.} figure \ref{fig:conv_LC_tip}); the energetically optimal modes therefore drive behavior at this dominant frequency and its harmonics. The flag behavior is conveyed through the leading two POD modes (leading complex-conjugate pair of DMD modes): these modes represent phase-shifted flapping at the dominant frequency, which combines to create the traveling-wave behavior of high spatial frequency that the flag undergoes for these parameters \citep{Connell2007}. The two leading POD modes (leading complex-conjugate pair of DMD modes) also demonstrate the creation and advection of vortices associated with flapping. Subsequent modes are not associated with flag flapping (the flag mode in the inset is undeformed), and instead describe the higher-harmonic response of the fluid to this dominant flapping motion.
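The DMD computation underlying these eigenvalues and modes can be sketched with the standard SVD-based exact-DMD algorithm. The test signal below is a synthetic decaying oscillation (growth rate and frequency made up for the check), not the flag data.

```python
import numpy as np

def dmd(X, r):
    """Exact DMD of a snapshot matrix X (state x time): fit the rank-r
    linear map X[:, 1:] ~ A X[:, :-1] and return its eigenvalues and
    the exact DMD modes."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atilde = U.conj().T @ X2 @ Vh.conj().T / s     # r x r projected operator
    lam, Wv = np.linalg.eig(Atilde)
    Phi = X2 @ Vh.conj().T / s @ Wv                # exact DMD modes
    return lam, Phi

# Synthetic signal: one decaying oscillation with growth rate -0.1 and
# angular frequency 2*pi, observed through three sensors (rank-2 data).
dt = 0.05
t = np.arange(200) * dt
x = np.exp((-0.1 + 2.0j * np.pi) * t)
amps = np.array([1.0, 0.5j, -0.3 + 0.2j])
X = np.outer(amps, x).real
lam, Phi = dmd(X, 2)
gamma = np.log(lam) / dt        # continuous-time DMD eigenvalues
```

For this linear test case the continuous-time eigenvalues $\gamma = \log(\lambda)/\Delta t$ recover the prescribed growth rate and frequency to machine precision.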
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[scale=0.45,trim={0cm 0cm 0cm 0cm},clip]{Figures/sig_convLCbig.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\hspace*{0mm}
\includegraphics[scale=0.45,trim={0cm 0cm 0cm 0cm},clip]{Figures/eigs_convLCbig.pdf}
\end{subfigure}
\caption{POD singular values $\sigma$ normalized by $\sigma_1$ (left) and DMD eigenvalues $\gamma$ (right) for limit-cycle flapping of a conventional flag with $Re =500, M_\rho = 0.18, K_B = 0.0001$.}
\label{fig:singvals_convLC}
\end{figure}
\begin{figure}
\begin{subfigure}[b]{0.245\textwidth}
\hspace*{0mm}
\includegraphics[scale=0.27,trim={0cm 2.25cm 0cm 0cm},clip]{Figures/POD_convLCbig1.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.245\textwidth}
\hspace*{4.2mm}
\includegraphics[scale=0.27,trim={2.4cm 2.25cm 0cm 0cm},clip]{Figures/POD_convLCbig2.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.245\textwidth}
\hspace*{2mm}
\includegraphics[scale=0.27,trim={2.4cm 2.25cm 0cm 0cm},clip]{Figures/POD_convLCbig3.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.245\textwidth}
\hspace*{0mm}
\includegraphics[scale=0.27,trim={2.4cm 2.25cm 0cm 0cm},clip]{Figures/POD_convLCbig4.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.245\textwidth}
\hspace*{0mm}
\includegraphics[scale=0.27,trim={0cm 0cm 0cm 0cm},clip]{Figures/DMD_convLCbig1.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.245\textwidth}
\hspace*{4.2mm}
\includegraphics[scale=0.27,trim={2.4cm 0cm 0cm 0cm},clip]{Figures/DMD_convLCbig2.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.245\textwidth}
\hspace*{2mm}
\includegraphics[scale=0.27,trim={2.4cm 0cm 0cm 0cm},clip]{Figures/DMD_convLCbig3.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.245\textwidth}
\hspace*{0mm}
\includegraphics[scale=0.27,trim={2.4cm 0cm 0cm 0cm},clip]{Figures/DMD_convLCbig4.pdf}
\end{subfigure}
\caption{Leading POD (top row) and DMD (bottom row) modes for the limit-cycle conventional-flag problem.}
\label{fig:mode_convLC}
\end{figure}
\subsection{Chaotic flapping}
\label{sec:chaos}
Chaotic flapping of conventional flags can be triggered for flags of low stiffness ($K_B$) by increasing the flag mass ($M_\rho$). For flows at moderate Reynolds numbers of $O(1000)$, the system transitions with increasing mass from a stable equilibrium to limit-cycle flapping of increasing amplitude, then to chaotic flapping \citep{Connell2007}. Similar transitions occur in inviscid fluids \citep{Alben2008}. We focus here on the case of moderate Reynolds number; establishing similarities in the driving mechanisms between the viscous and inviscid cases is an avenue of future work.
We investigate the route to chaotic flapping here by choosing $M_\rho = 0.25$, which is near the critical value where the system transitions from limit-cycle flapping. The trailing-edge displacement and corresponding spectral density for this regime are shown in figure \ref{fig:conv_chaos_tip}. We also show in figure \ref{fig:chaos_snaps} snapshots of the system over $t\in[28.6, 30.2]$. Note the increase in flapping amplitude compared with the $M_\rho = 0.18$ case described above (\emph{cf.} figure \ref{fig:conv_LC_tip}). Moreover, in chaotic flapping there are multiple frequencies present at non-integer harmonics of the dominant frequency. These non-integer frequencies were first observed by \citet{Connell2007}, and the mechanism that introduces them remains unexplained.
Using DMD within our FSI framework, we propose a mechanism in which chaotic flapping is instigated by the increase in flapping amplitude associated with the increased mass ratio. This increase in amplitude leads the flag to become sufficiently bluff to the flow at its peak deflection that a bluff-body wake instability arises and interacts triadically with the dominant flapping behavior to produce the subdominant flapping frequencies observed in figure \ref{fig:conv_chaos_tip}. DMD is selected here to isolate behavior at distinct frequencies. This can be done in a POD context using spectral POD (SPOD) \citep{Towne2017}, and future work could compare the results between DMD and SPOD.
\begin{figure}
\centering
\begin{subfigure}[b]{0.3\textwidth}
\hspace*{-15mm}
\includegraphics[scale=0.25,trim={0cm 0cm 0cm 0cm},clip]{Figures/tipdisp_convchaos.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\hspace*{12mm}
\includegraphics[scale=0.25,trim={0cm 0cm 0cm 0cm},clip]{Figures/tipfreq_convchaos.pdf}
\end{subfigure}
\caption{Transverse displacement (left) and spectral density (right) of the trailing edge of a flag in chaotic flapping for $Re =500,$ $M_\rho = 0.25$, and $K_B = 0.0001$.}
\label{fig:conv_chaos_tip}
\end{figure}
\begin{figure}
\begin{subfigure}[b]{0.245\textwidth}
\includegraphics[scale=0.35,trim={0cm 0cm 0cm 0cm},clip]{Figures/Re500_Mp25_Kbp0001_1.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.245\textwidth}
\hspace*{3.9mm}
\includegraphics[scale=0.35,trim={2cm 0cm 0cm 0cm},clip]{Figures/Re500_Mp25_Kbp0001_2.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.245\textwidth}
\hspace*{1.5mm}
\includegraphics[scale=0.35,trim={2cm 0cm 0cm 0cm},clip]{Figures/Re500_Mp25_Kbp0001_3.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.245\textwidth}
\hspace*{-0.8mm}
\includegraphics[scale=0.35,trim={2cm 0cm 0cm 0cm},clip]{Figures/Re500_Mp25_Kbp0001_4.pdf}
\end{subfigure}
\caption{Snapshots for a flag in chaotic flapping with $Re =500, M_\rho = 0.25, K_B = 0.0001$. Contours are of vorticity, in 18 increments from -5 to 5.}
\label{fig:chaos_snaps}
\end{figure}
The DMD eigenvalues $\gamma$ and four leading modes ${\boldsymbol{\phi}}$ (omitting the mode associated with the mean) for the chaotic case of $M_\rho = 0.25$ are shown in figures \ref{fig:evals_chaos} and \ref{fig:mode_convchaos}. The dominant and non-integer harmonic frequencies from the spectral density plot of figure \ref{fig:conv_chaos_tip} manifest themselves in DMD modes $\boldsymbol{\phi}_1,$ $\boldsymbol{\phi}_3$, and $\boldsymbol{\phi}_4$ (see the corresponding eigenvalues in figure \ref{fig:evals_chaos}). Note that despite the significant change in behavior from the limit-cycle regime, $\boldsymbol{\phi}_1$ remains largely unchanged. Yet, due to the increased system complexity, flapping is no longer conveyed entirely through the first mode, and both $\boldsymbol{\phi}_3$ and $\boldsymbol{\phi}_4$ are associated with flapping motion and a correlated set of flow features.
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\hspace*{3mm}
\includegraphics[scale=0.45,trim={0cm 0cm 0cm 0cm},clip]{Figures/eigs_convchaos.pdf}
\end{subfigure}
\caption{DMD eigenvalues $\gamma$ for chaotic flapping of a conventional flag with $Re =500, M_\rho = 0.25, K_B = 0.0001$.}
\label{fig:evals_chaos}
\end{figure}
\begin{figure}
\begin{subfigure}[b]{0.245\textwidth}
\hspace*{0mm}
\includegraphics[scale=0.27,trim={0cm 2.25cm 0cm 0cm},clip]{Figures/DMD_convchaos2.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.245\textwidth}
\hspace*{4.2mm}
\includegraphics[scale=0.27,trim={2.4cm 2.25cm 0cm 0cm},clip]{Figures/DMD_convchaos1.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.245\textwidth}
\hspace*{2mm}
\includegraphics[scale=0.27,trim={2.4cm 2.25cm 0cm 0cm},clip]{Figures/DMD_convchaos4.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.245\textwidth}
\hspace*{0mm}
\includegraphics[scale=0.27,trim={2.4cm 2.25cm 0cm 0cm},clip]{Figures/DMD_convchaos3.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.245\textwidth}
\hspace*{0mm}
\includegraphics[scale=0.27,trim={0cm 0cm 0cm 0cm},clip]{Figures/DMD_convchaos6.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.245\textwidth}
\hspace*{4.2mm}
\includegraphics[scale=0.27,trim={2.4cm 0cm 0cm 0cm},clip]{Figures/DMD_convchaos5.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.245\textwidth}
\hspace*{2mm}
\includegraphics[scale=0.27,trim={2.4cm 0cm 0cm 0cm},clip]{Figures/DMD_convchaos8.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.245\textwidth}
\hspace*{0mm}
\includegraphics[scale=0.27,trim={2.4cm 0cm 0cm 0cm},clip]{Figures/DMD_convchaos7.pdf}
\end{subfigure}
\caption{Leading DMD modes for chaotic flapping of a conventional flag with $Re = 500, M_\rho =0.25, K_B=0.0001$.}
\label{fig:mode_convchaos}
\end{figure}
By contrast, $\boldsymbol{\phi}_2$ is not associated with flapping (the flag mode in the inset is undeformed). This is consistent with the absence of the $\gamma_2$ frequency in the spectral density plot of figure \ref{fig:conv_chaos_tip}. Thus, the mode represents a response of the fluid to the dominant flapping motion. The pronounced shear layers at the top and bottom peak displacement and the corresponding wake vortices are reflective of a bluff-body vortex-shedding mode that appears because of the increased flapping amplitude compared with the limit-cycle case. This is further evidenced by the modal frequency, which agrees with the classical 0.2 Strouhal scaling \citep{Roshko1954} when normalized by the projected length of the maximum peak-to-peak amplitude ($0.35 \times 0.5 \approx 0.18$). Note also that $\gamma_2$ is not a sub-harmonic of the dominant flapping frequency $\gamma_1$, and thus this bluff-body mode reflects the appearance of a new physical mechanism rather than resonance or harmonic interactions.
This bluff-body mode is key to understanding the sub-dominant flapping behavior of the flag: the sub-dominant frequencies seen in figure \ref{fig:conv_chaos_tip} arise as triadic combinations of the frequencies of the dominant flapping mode and the bluff-body mode; \emph{i.e.}, $\gamma_3 = \gamma_1 + \gamma_2$ and $\gamma_4 = \gamma_1 - \gamma_2$. These triadic interactions are necessitated by the quadratic nonlinearity of the advective term in the Navier-Stokes equations.
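The triadic mechanism can be illustrated with a toy signal: squaring a superposition of two tones, as the quadratic advective nonlinearity effectively does, produces spectral content at exactly the sum and difference frequencies. The frequency values below are illustrative stand-ins for $\gamma_1$ and $\gamma_2$, not values taken from the simulations.

```python
import numpy as np

# Illustrative frequencies only (assumed, not taken from the data):
f1 = 1.0       # stand-in for the dominant flapping frequency gamma_1
f2 = 0.5       # stand-in for the bluff-body shedding frequency gamma_2
T, N = 40.0, 4096
t = np.linspace(0.0, T, N, endpoint=False)
u = np.cos(2.0 * np.pi * f1 * t) + np.cos(2.0 * np.pi * f2 * t)

# A quadratic nonlinearity generates sum and difference frequencies.
spec = np.abs(np.fft.rfft(u * u)) / N
freqs = np.fft.rfftfreq(N, d=T / N)

def peak(f):
    """Spectral amplitude at the bin nearest frequency f."""
    return spec[np.argmin(np.abs(freqs - f))]
```

With $f_2 = f_1/2$, the product term contains tones at $f_1 + f_2 = 1.5 f_1$ (the 3/2 harmonic) and $f_1 - f_2$, mirroring $\gamma_3 = \gamma_1 + \gamma_2$ and $\gamma_4 = \gamma_1 - \gamma_2$ above, while intermediate frequencies remain empty.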
\section{Conclusions}
We presented a formulation for performing data analysis on FSI problems that accounts for both the fluid and the structure. We designed this formulation to be compatible with the manner in which data is typically obtained for experiments and nonconforming mesh simulations. As part of this framework, we defined a physically meaningful norm for FSI systems. We considered POD and DMD because of their widespread use, but extensions to other methods are straightforward.
Our formulation was first applied to limit-cycle flag flapping. Because of the dominant frequency associated with this limit-cycle behavior, both POD and DMD give similar decompositions. The leading two POD modes (leading complex-conjugate pair of DMD modes) convey both the flapping information of the flag and the dominant vortical structures associated with this motion. Subsequent modes describe harmonic responses in the fluid to the flapping described in the leading modes.
Next, the physical mechanism driving chaotic flapping was clarified. \citet{Connell2007} identified that the transition from limit-cycle flapping to chaotic flapping coincides with the appearance of a new flapping frequency near the 3/2 harmonic of the dominant flapping frequency. We identified the mechanism driving this non-integer harmonic through a DMD analysis. We first demonstrated that at the onset of chaos, the flag becomes sufficiently bluff at its peak deflection to initiate a bluff-body wake instability. This is in contrast to limit-cycle flapping, where flapping amplitudes are smaller and this bluff-body instability is not instigated. The associated shedding frequency of this new behavior coincides with the Strouhal scaling of 0.2 common to bluff-body flows \citep{Roshko1954}. Moreover, we demonstrated that this bluff-body mode combines triadically with the dominant flapping behavior to produce the observed flapping near the 3/2 harmonic (and the other sub-dominant flapping frequencies).
Finally, we note that data analysis is often used to develop reduced-order models of complex flow. For FSI systems, these models are typically derived by performing a data-driven decomposition of the fluid and coupling this to the full governing equations for the structure (see \citet{Dowell2001} for a review). This approach may require more modes than those derived from a combined fluid-structure treatment, and there are avenues for future work in evaluating the efficiency of our proposed data-analysis technique in the context of reduced-order models.
\section{Acknowledgments}
AJG and TC acknowledge funding through the BOSCH Bern program and through the AFOSR (grant number FA9550-14-1-0328). AJG is also grateful to Dr. Scott Dawson for his thoughtful comments on an early version of the manuscript.
\bibliographystyle{jfm}
\section{Introduction}
\label{introduction}
Given a graph with edge weights, the
graph partitioning problem is to partition the vertices into
two sets satisfying specified size constraints,
while minimizing the sum of the weights of
the edges that connect the vertices in the two sets.
Graph partitioning problems arise in many areas including
VLSI design, data mining,
parallel computing, and sparse matrix factorizations
\cite{HagerKrylyuk99, Johnson93, Lengauer, Teng}.
The graph partitioning problem is NP-hard \cite{Garey76}.
There are two general classes of methods for the graph partitioning
problem, exact methods which compute the optimal partition,
and heuristic methods which try
to quickly compute an approximate solution.
Heuristic methods include spectral methods
\cite{HendricksonLeland95}, geometric methods
\cite{GilbertMillerTeng98}, multilevel schemes \cite{Hendrickson},
optimization-based methods \cite{FalknerRendlWolkowicz94}, and
methods that employ randomization techniques such as genetic
algorithms \cite{SoperWalshawCross04}.
Software which implements heuristic methods includes
Metis (\cite{KarypisKumar98e,KarypisKumar99b,KarypisKumar00}),
Chaco \cite{hendrickson94chaco}, Party \cite{Preis96theparty},
PaToH \cite{Catalyurek-hyper},
SCOTCH \cite{PellegriniRomanAmestoy00},
Jostle \cite{WalshawCrossEverett97},
Zoltan \cite{Zoltan06ipdps}, and
HUND \cite{GrigoriBomanDonfackDavis08}.
This paper develops an exact algorithm for the graph partitioning problem.
In earlier work,
Brunetta, Conforti, and Rinaldi
\cite{bcr97} propose a branch-and-cut scheme based on a linear
programming relaxation and subsequent cuts based on separation techniques.
A column generation approach is developed by Johnson, Mehrotra, and
Nemhauser \cite{Johnson93}, while Mitchell \cite{Mitchell01}
develops a polyhedral approach.
Karisch, Rendl, and Clausen \cite{Karisch00} develop
a branch-and-bound method utilizing a semidefinite programming
relaxation to obtain a lower bound. Sensen \cite{Sensen01} develops
a branch-and-bound method based on a
lower bound obtained by solving a multicommodity flow problem.
In this paper, we develop a branch-and-bound algorithm based on
a quadratic programming (QP) formulation of the graph partitioning problem.
The objective function of the QP is expressed as the sum of
a convex and a concave function.
We consider two different techniques for making this decomposition,
one based on eigenvalues and the other based on semidefinite programming.
In each case, we give an affine underestimate for the concave function,
which leads to a tractable lower bound in the branch and bound algorithm.
The paper is organized as follows.
In Section \ref{continuousQP} we review the continuous
quadratic programming formulation of the
graph partitioning problem developed in \cite{HagerKrylyuk99}
and we explain how to associate a solution of the continuous problem
with the solution to the discrete problem.
In Section \ref{LowerBound} we discuss approaches for decomposing
the objective function for the QP into the sum of convex and a concave
functions, and in each case, we show how to generate an affine lower bound
for the concave part.
Section \ref{BB} gives the branch-and-bound algorithm,
while Section \ref{NS} provides necessary and sufficient conditions
for a local minimizer.
Section \ref{numerics} compares the performance of the new
branch-and-bound algorithm to earlier results given in
\cite{Karisch00} and \cite{Sensen01}.
{\bf Notation.} Throughout the paper, $\| \cdot \|$ denotes
the Euclidean norm. $\m{1}$ is the vector whose entries are all 1.
The dimension will be clear from context.
If $\m{A} \in \mathbb{R}^{n\times n}$, $\m{A} \succeq \m{0}$ means
that $\m{A}$ is positive semidefinite.
We let $\m{e}_i$ denote the $i$-th column of the identity matrix;
again, the dimension will be clear from context.
If $\C{S}$ is a set, then $|\C{S}|$ is the number of elements in $\C{S}$.
The gradient $\nabla f (\m{x})$ is a row vector.
\section{Continuous quadratic programming formulation}
\label{continuousQP}
Let $G$ be a graph with $n$ vertices
\[
\C{V} = \{ 1, 2, \cdots , n \},
\]
and let $a_{ij}$ be a weight associated with the edge $(i,j)$.
When there is no edge between $i$ and $j$, we set $a_{ij} = 0$.
For each $i$ and $j$, we assume that $a_{ii} = 0$ and $a_{ij} = a_{ji}$;
in other words, we consider an undirected graph without self loops
(a simple, undirected graph).
The sign of the weights is not restricted, and in fact,
$a_{ij}$ could be negative, as it would be in the max-cut problem.
Given integers $l$ and $u$ such that $0 \le l \le u \le n$,
we wish to partition the vertices into two disjoint sets,
with between $l$ and $u$ vertices in one set,
while minimizing the sum of the weights associated with edges
connecting vertices in different sets.
The edges connecting the two sets in the partition are referred to
as the cut edges, and the optimal partition minimizes the sum of the
weights of the cut edges.
Hence, the graph partitioning problem is also called the min-cut problem.
In \cite{HagerKrylyuk99} we show that for a suitable choice of
the diagonal matrix $\m{D}$,
the graph partitioning problem is equivalent to the following
continuous quadratic programming problem:
\begin{equation}\label{Q}
\begin{array}{c}
\mbox{minimize } \; \; f(\m{x}) := (\m{1} - \m{x} )^{\sf T} (\m{A}+\m{D}) \m{x} \\
\rule{0in}{.2in} \mbox{ subject to } \; \m{0} \le \m{x} \le \m{1} ,
\;\; l \le \m{1}^{\sf T} \m{x} \le u,
\end{array}
\end{equation}
where $\m{A}$ is the matrix with elements $a_{ij}$.
Suppose $\m{x}$ is binary and let us define the sets
\begin{equation}\label{part}
\C{V}_0 = \{ i : x_i = 0 \} \quad \mbox{and} \quad \C{V}_1 = \{ i:
x_i = 1\} .
\end{equation}
It can be checked that $f(\m{x})$ is the sum of the weights of
the cut edges associated with the partition (\ref{part}).
Hence, if we add the restriction that $\m{x}$ is binary, then
(\ref{Q}) is exactly equivalent to finding the partition which
minimizes the weight of the cut edges.
Note, though, that there are no binary constraints in (\ref{Q}).
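As a quick numerical illustration of this identity (the small graph below is made up): at a binary point the diagonal contribution $(\m{1}-\m{x})^{\sf T}\m{D}\m{x}$ vanishes because $x_i(1-x_i) = 0$, so $f(\m{x})$ reduces to the weight of the cut, independently of $\m{D}$.

```python
import numpy as np

def f(x, A, d):
    """Objective of (Q): (1 - x)^T (A + D) x with D = diag(d)."""
    return (1 - x) @ (A + np.diag(d)) @ x

def cut_weight(x, A):
    """Sum of the weights of edges between V_0 = {i : x_i = 0}
    and V_1 = {i : x_i = 1}."""
    in_V1 = x.astype(bool)
    return A[~in_V1][:, in_V1].sum()

# A made-up symmetric weighted graph with no self loops.
A = np.array([[0., 2., 0., 1.],
              [2., 0., 3., 0.],
              [0., 3., 0., 4.],
              [1., 0., 4., 0.]])
d = A.max(axis=0)                    # one valid choice of D
x = np.array([1., 1., 0., 0.])       # a binary partition
```

Here the cut edges are $(1,4)$ and $(2,3)$ with weights $1$ and $3$, so both evaluations give $4$.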
The equivalence between (\ref{Q}) and the graph partitioning problem
is as follows (see \cite[Thm. 2.1]{HagerKrylyuk99}):
\bigskip
\begin{theorem}
\label{Q=GP}
If the diagonal matrix $\m{D}$ is chosen so that
\begin{equation}\label{d-condition}
d_{ii} + d_{jj} \ge 2a_{ij} \quad \mbox{and} \quad d_{ii} \ge 0
\end{equation}
for each $i$ and $j$, then $(\ref{Q})$ has a binary solution
$\m{x}$ and the partition given by $(\ref{part})$ is a min-cut.
\end{theorem}
\bigskip
The generalization of this result to multiset partitioning is given
in \cite{HagerKrylyuk02}.
The condition (\ref{d-condition}) is satisfied, for example,
by the choice
\[
d_{jj} = \max \;\; \{ 0, a_{1j}, a_{2j}, \ldots , a_{nj} \}
\]
for each $j$.
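This choice can be checked numerically; the sketch below verifies $(\ref{d-condition})$ for a random symmetric weight matrix (made-up data). The condition holds because $d_{ii} \ge a_{ij}$ and $d_{jj} \ge a_{ij}$ for every pair $(i,j)$.

```python
import numpy as np

# A random symmetric weight matrix with zero diagonal (made-up data).
rng = np.random.default_rng(1)
A = rng.random((6, 6))
A = A + A.T
np.fill_diagonal(A, 0.0)

# The suggested choice d_jj = max{0, a_1j, ..., a_nj}.
d = np.maximum(A.max(axis=0), 0.0)

# Check (d-condition): d_ii >= 0 and d_ii + d_jj >= 2 a_ij for all i, j.
condition = bool(np.all(d >= 0.0)
                 and np.all(d[:, None] + d[None, :] >= 2.0 * A))
```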
The proof of Theorem \ref{Q=GP} was based on showing that any solution
to (\ref{Q}) could be transformed to a binary solution without changing
the objective function value.
With a modification of this idea, any feasible point can
be transformed to a binary feasible point without increasing the
objective function value.
We now give a constructive proof of this result,
which is used when we solve (\ref{Q}).
\begin{corollary}
\label{move_to_binary}
If $\m{x}$ is feasible in $(\ref{Q})$ and the diagonal matrix $\m{D}$
satisfies $(\ref{d-condition})$,
then there exists a binary $\m{y}$ with $f(\m{y}) \le f(\m{x})$ and
$y_i = x_i$ whenever $x_i$ is binary.
\end{corollary}
\begin{proof}
We first show how to find $\m{z}$ with the property that
$\m{z}$ is feasible in (\ref{Q}),
$f(\m{z}) \le f (\m{x})$, $\m{1}^{\sf T}\m{z}$ is integer,
and the only components of $\m{z}$ and $\m{x}$
which differ are the fractional components of $\m{x}$.
If $\m{1}^{\sf T}\m{x} = u$ or $\m{1}^{\sf T}\m{x} = l$, then we are done since
$l$ and $u$ are integers;
hence, we assume that $l < \m{1}^{\sf T} \m{x} < u$.
If all components of $\m{x}$ are binary, then we are done,
so suppose that there exists a nonbinary component $x_i$.
Since $a_{ii} = 0$, a Taylor expansion of $f$ gives
\[
f(\m{x} + \alpha \m{e}_i) = f (\m{x}) + \alpha \nabla f(\m{x})_i
- \alpha^2 d_{ii} ,
\]
where $\m{e}_i$ is the $i$-th column of the identity matrix.
The quadratic term in the expansion is nonpositive since $d_{ii} \ge 0$.
If the first derivative term is negative, then increase $\alpha$ above 0
until either $x_i + \alpha$ becomes 1 or
$\m{1}^{\sf T}\m{x} + \alpha$ is an integer.
Since the first derivative term is negative and $\alpha > 0$,
$f(\m{x} + \alpha \m{e}_i) < f (\m{x})$.
If $\m{1}^{\sf T}\m{x} + \alpha$ becomes an integer, then we are done.
If $x_i + \alpha$ becomes 1, then we reach a point $\m{x}_1$
with one more binary component and with an objective function value
no larger than $f(\m{x})$.
If the first derivative term is nonnegative, then decrease $\alpha$ below 0
until either $x_i + \alpha$ becomes 0 or
$\m{1}^{\sf T}\m{x} + \alpha$ is an integer.
Since the first derivative term is nonnegative and $\alpha < 0$,
$f(\m{x} + \alpha \m{e}_i) \le f (\m{x})$.
If $\m{1}^{\sf T}\m{x} + \alpha$ becomes an integer, then we are done.
If $x_i + \alpha$ becomes 0, then we reach a point $\m{x}_1$
with one more binary component and with an objective function value no larger than $f(\m{x})$.
In this latter case, we choose another nonbinary component of $\m{x}_1$ and
repeat the process.
Hence, there is no loss of generality in assuming that $\m{1}^{\sf T}\m{x}$ is
an integer.
Suppose that $\m{x}$ is not binary.
Since $\m{1}^{\sf T}\m{x}$ is an integer,
$\m{x}$ must have at least two nonbinary components,
say $x_i$ and $x_j$.
Again, expanding $f$ in a Taylor series gives
\[
f(\m{x} + \alpha (\m{e}_i-\m{e}_j)) = f (\m{x})
+ \alpha (\nabla f(\m{x})_i - \nabla f(\m{x})_j) +
\alpha^2 (2a_{ij}- d_{ii} -d_{jj}) .
\]
By (\ref{d-condition}), the quadratic term is
nonpositive for any choice of $\alpha$.
If the first derivative term is negative, then we
increase $\alpha$ above 0 until either $x_i + \alpha$
reaches 1 or $x_j - \alpha$ reaches 0.
Since the first derivative term is negative and $\alpha > 0$,
we have $f(\m{x} + \alpha (\m{e}_i-\m{e}_j)) < f (\m{x})$.
If the first derivative term is nonnegative, then we
decrease $\alpha$ below 0 until either $x_i + \alpha$
reaches 0 or $x_j - \alpha$ reaches 1.
Since the first derivative term is nonnegative and $\alpha < 0$,
it follows that $f(\m{x} + \alpha (\m{e}_i-\m{e}_j)) \le f (\m{x})$.
In either case, the value of the cost function does not increase,
and we reach a feasible point $\m{x}_1$ with $\m{1}^{\sf T}\m{x}_1$
integer and with at least one more binary component.
If $\m{x}_1$ is not binary, then $\m{x}_1$ must have at
least two nonbinary components; hence, the adjustment
process can be continued until all the components of $\m{x}$ are binary.
These adjustments to $\m{x}$ do not increase the value of the cost
function and we only alter the fractional components of $\m{x}$.
This completes the proof.
\end{proof}
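The constructive procedure in this proof can be sketched in code as follows. This is an illustrative implementation with made-up test data; the tolerance and the choice of which fractional index to adjust first are arbitrary.

```python
import numpy as np

def f(x, H):
    """Objective (1 - x)^T H x, where H = A + D."""
    return (1 - x) @ H @ x

def round_feasible_point(x, A, d, tol=1e-9):
    """Move the fractional components of a feasible x to {0, 1} without
    increasing f, following the proof: phase 1 uses single-coordinate
    moves until 1^T x is an integer; phase 2 uses pairwise exchanges
    along e_i - e_j.  Binary components are never modified."""
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    H = A + np.diag(d)
    grad = lambda y: H @ np.ones(n) - 2.0 * (H @ y)        # gradient of f
    frac = lambda y: [k for k in range(n) if tol < y[k] < 1.0 - tol]

    # Phase 1: drive 1^T x to an integer with single-coordinate moves.
    while abs(x.sum() - round(x.sum())) > tol and frac(x):
        i = frac(x)[0]
        s = x.sum()
        if grad(x)[i] < 0.0:      # increasing x_i strictly decreases f
            x[i] += min(1.0 - x[i], np.ceil(s - tol) - s)
        else:                     # decreasing x_i cannot increase f
            x[i] -= min(x[i], s - np.floor(s + tol))

    # Phase 2: pairwise exchanges among remaining fractional components.
    while len(frac(x)) >= 2:
        i, j = frac(x)[:2]
        g = grad(x)
        if g[i] - g[j] < 0.0:
            a = min(1.0 - x[i], x[j])
            x[i] += a
            x[j] -= a
        else:
            a = min(x[i], 1.0 - x[j])
            x[i] -= a
            x[j] += a
    return np.round(x)
```

Since the steps in both phases have nonpositive linear and quadratic contributions under $(\ref{d-condition})$, the returned binary point never has a larger objective value than the input.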
\smallskip
\section{Convex lower bounds for the objective function}
\label{LowerBound}
We compute an exact solution to the continuous formulation (\ref{Q})
of the graph partitioning problem using a branch-and-bound algorithm.
The bounding process requires a lower bound for the objective
function when restricted to the intersection of a box and two half spaces.
This lower bound is obtained by
writing the objective function as the sum of a convex and a concave function
and by replacing the concave part by the best affine underestimate.
Two different strategies are given for decomposing the objective function.
\subsection{Lower bound based on minimum eigenvalue}
\label{mineig}
Let us decompose the objective function
$f(\m{x}) = (\m{1} - \m{x} )^{\sf T} (\m{A}+\m{D}) \m{x}$ in the
following way:
\[
f(\m{x}) = (f(\m{x}) + \sigma \|\m{x}\|^2) - \sigma \|\m{x}\|^2,
\]
where $\sigma$ is the maximum of 0 and the largest eigenvalue
of $\m{A} + \m{D}$.
This represents a DC (difference of convex) decomposition (see \cite{HPT95})
since $f(\m{x}) + \sigma \|\m{x}\|^2$ and $\sigma \|\m{x}\|^2$ are both convex.
The concave term $- \|\m{x}\|^2$
is underestimated by an affine function $\ell$ to obtain
a convex underestimate $f_{L}$ of $f$ given by
\begin{equation}\label{fL1}
f_{L}(\m{x}) = \left( f (\m{x}) + \sigma \|\m{x}\|^2 \right) + \sigma \ell
(\m{x}). \nonumber
\end{equation}
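A quick numerical check of this construction (random made-up data): since $f$ has constant Hessian $-2(\m{A}+\m{D})$, the convex part $f(\m{x}) + \sigma\|\m{x}\|^2$ has Hessian $2(\sigma \m{I} - \m{A} - \m{D})$, which is positive semidefinite by the choice of $\sigma$.

```python
import numpy as np

# Random symmetric A with zero diagonal and the column-max choice of D.
rng = np.random.default_rng(3)
n = 6
A = rng.random((n, n)); A = A + A.T; np.fill_diagonal(A, 0.0)
d = A.max(axis=0)
H = A + np.diag(d)

# sigma = max{0, lambda_max(A + D)} from the text.
sigma = max(0.0, float(np.linalg.eigvalsh(H).max()))

# Hessian of the convex part f(x) + sigma ||x||^2 is 2 (sigma I - H);
# its smallest eigenvalue is 2 (sigma - lambda_max(H)), i.e. zero here.
min_curvature = float(np.linalg.eigvalsh(2.0 * (sigma * np.eye(n) - H)).min())
```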
We now consider the problem of finding the best
affine underestimate $\ell$ for the concave function
$-\|\m{x}\|^2$ over a given compact, convex set denoted $\C{C}$.
The set of affine underestimators for $-\|\m{x}\|^2$ is given by
\[
\C{S}_1 = \{ \ell: \mathbb{R}^n \rightarrow \mathbb{R}
\mbox{ such that } \ell
\mbox{ is affine and } - \|\m{x}\|^2 \ge \ell(\m{x})
\mbox{ for all } \m{x} \in \C{C} \} .
\]
The best affine underestimate is a solution of the problem
\begin{equation}\label{linearest}
\min_{\ell \in \C{S}_1} \;\; \max_{\m{x}\in \C{C}} \;\; - \left(
\|\m{x}\|^2 + \ell (\m{x}) \right) .
\end{equation}
The following result generalizes Theorem 3.1 in \cite{HagerPhan09}
where we determine the best affine underestimate
for $-\|\m{x}\|^2$ over an ellipsoid.
\smallskip
\begin{theorem}
\label{UnderTheorem1}
Let $\C{C} \subset \mathbb{R}^n$ be a compact, convex set and
let $\m{c}$ be the center and $r$ be the radius of the smallest
sphere containing $\C{C}$.
This smallest sphere is unique and a solution of $(\ref{linearest})$ is
\[
\ell^* (\m{x}) = -2\m{c}^{\sf T}\m{x} + \|\m{c}\|^2 - r^2 .
\]
Furthermore,
\[
\min_{\ell \in \C{S}_1} \;\; \max_{\m{x}\in \C{C}} \;\; - \left(
\|\m{x}\|^2 + \ell^* (\m{x}) \right) = r^2.
\]
\end{theorem}
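Before the proof, a small numerical check of the identity $-\|\m{x}\|^2 - \ell^*(\m{x}) = r^2 - \|\m{x}-\m{c}\|^2$ underlying the theorem (the center and radius below are made up for the check):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
c = rng.standard_normal(n)     # center of a sphere containing C (made up)
r = 1.7                        # its radius (made up)

def h(x):
    return -(x @ x)

def ell_star(x):
    """The optimal affine underestimate from the theorem."""
    return -2.0 * (c @ x) + c @ c - r**2

# Completing the square gives h(x) - ell_star(x) = r^2 - ||x - c||^2,
# which is nonnegative on the ball ||x - c|| <= r and attains the
# worst-case gap r^2 at the center x = c.
gap_at_center = h(c) - ell_star(c)
```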
\smallskip
\begin{proof}
To begin, we will show that the minimization in (\ref{linearest})
can be restricted to a compact set. Clearly, when carrying out the
minimization in (\ref{linearest}), we should restrict our attention
to those $\ell$ which touch the function $h(\m{x}) := -\|\m{x}\|^2$
at some point in $\C{C}$. Let $\m{y} \in \C{C}$ denote the point of
contact. Since $h(\m{x}) \ge \ell (\m{x})$ and $h (\m{y}) = \ell
(\m{y})$, a lower bound for the error $h (\m{x}) - \ell (\m{x})$
over $\m{x} \in \C{C}$ is
\[
h (\m{x}) - \ell (\m{x}) \ge |\ell (\m{x}) - \ell (\m{y})| -
|h(\m{x}) - h(\m{y})| .
\]
If $M$ is the difference between the maximum and minimum value of
$h$ over $\C{C}$, then we have
\begin{equation}\label{change}
h (\m{x}) - \ell (\m{x}) \ge |\ell (\m{x}) - \ell (\m{y})| - M.
\end{equation}
An upper bound for the minimum in (\ref{linearest}) is obtained by
the linear function $\ell_0$ which is constant on $\C{C}$, with
value equal to the minimum of $h(\m{x})$ over $\m{x} \in \C{C}$. If
$\m{w}$ is a point where $h$ attains its minimum over $\C{C}$, then
we have
\[
\max_{\m{x} \in \C{C}} \;\; h (\m{x}) - \ell_0 (\m{x}) = \max_{\m{x} \in
\C{C}} \;\; h (\m{x}) - h (\m{w}) = M.
\]
Let us restrict our attention to the linear functions $\ell$
which achieve an
objective function value in (\ref{linearest}) which is at least as
small as that of $\ell_0$.
For these $\ell$ and for $\m{x} \in \C{C}$, we have
\begin{equation}\label{ell0}
h (\m{x}) - \ell (\m{x}) \le \max_{\m{x}\in \C{C}} \;\; h (\m{x}) - \ell
(\m{x}) \le \max_{\m{x}\in \C{C}} \;\; h (\m{x}) - \ell_0 (\m{x}) = M .
\end{equation}
Combining (\ref{change}) and (\ref{ell0}) gives
\begin{equation}\label{2M}
|\ell (\m{x}) - \ell (\m{y})| \le 2M .
\end{equation}
Thus, when we carry out the minimization in (\ref{linearest}), we
should restrict our attention to linear functions which touch $h$ at some point
$\m{y} \in \C{C}$ and with the change in $\ell$ across $\C{C}$
satisfying the bound (\ref{2M}) for all $\m{x}\in \C{C}$. This tells
us that the minimization in (\ref{linearest}) can be restricted to a
compact set, and that a minimizer must exist.
Suppose that $\ell$ attains the minimum in (\ref{linearest}). Let
$\m{z}$ be a point in $\C{C}$ where $h (\m{x}) - \ell (\m{x})$
achieves its maximum. A Taylor expansion around $\m{x} = \m{z}$
gives
\[
h (\m{x}) - \ell (\m{x}) = h (\m{z}) - \ell (\m{z}) + (\nabla
h(\m{z}) - \nabla \ell)(\m{x}-\m{z}) - \|\m{x} - \m{z}\|^2 .
\]
Since $\ell \in \C{S}_1$,
$h (\m{x}) - \ell (\m{x}) \ge 0$ for all $\m{x} \in \C{C}$.
It follows that
\begin{equation}\label{flx}
h (\m{z}) - \ell (\m{z}) \ge -(\nabla h(\m{z}) - \nabla \ell)(\m{x}-\m{z})
+ \|\m{x} - \m{z}\|^2 .
\end{equation}
Since $\C{C}$ is convex,
the first-order optimality conditions for $\m{z}$ give
\[
(\nabla h(\m{z}) - \nabla \ell)(\m{x}-\m{z}) \le 0
\]
for all $\m{x} \in \C{C}$.
It follows from (\ref{flx}) that
\begin{equation}\label{diameter}
h (\m{z}) - \ell (\m{z}) \ge \|\m{x} - \m{z}\|^2
\end{equation}
for all $\m{x} \in \C{C}$.
There exists $\m{x} \in \C{C}$ such that
$\|\m{x} - \m{z}\| \ge r$
or else $\m{z}$ would be the center of a smaller sphere
containing $\C{C}$.
Hence, (\ref{diameter}) implies that
\[
h (\m{z}) - \ell (\m{z}) \ge r^2.
\]
It follows that
\begin{equation}\label{l_lower}
\max_{\m{x} \in \C{C}} \;\; h(\m{x}) - \ell (\m{x}) \ge
h(\m{z}) - \ell (\m{z}) \ge r^2.
\end{equation}
We now observe that for the specific linear function $\ell^*$ given
in the statement of the theorem, (\ref{l_lower}) becomes an
equality, which implies the optimality of $\ell^*$ in
(\ref{linearest}).
Expand $h$ in a Taylor series around $\m{x} = \m{c}$ to obtain
\begin{eqnarray}
h(\m{x}) &=& -\|\m{c}\|^2 - 2\m{c}^{\sf T}(\m{x}-\m{c}) -\|\m{x}-\m{c}\|^2
\nonumber \\
&=& -2\m{c}^{\sf T}\m{x} + \|\m{c}\|^2 - \|\m{x}-\m{c}\|^2. \nonumber
\end{eqnarray}
Subtract $\ell^* (\m{x}) = -2\m{c}^{\sf T}\m{x} + \|\m{c}\|^2 - r^2$ from
both sides to obtain
\begin{equation}\label{h-l}
h(\m{x}) - \ell^* (\m{x}) = r^2 - \|\m{x}-\m{c}\|^2 .
\end{equation}
If $\m{c} \in \C{C}$, then the maximum in (\ref{h-l})
over $\m{x} \in \C{C}$ is attained by $\m{x} = \m{c}$ for which
\[
h(\m{c}) - \ell^* (\m{c}) = r^2.
\]
Consequently, (\ref{l_lower}) becomes an equality for $\ell =
\ell^*$, which implies the optimality of $\ell^*$ in
(\ref{linearest}).
We can show that $\m{c} \in \C{C}$ as follows:
Suppose $\m{c} \not\in \C{C}$.
Since $\C{C}$ is compact and convex, there exists a hyperplane $\C{H}$
strictly separating $\m{c}$ and $\C{C}$ -- see Figure \ref{c_in_C}.
\begin{figure}
\includegraphics[scale=.4]{c_in_C}
\caption{Suppose $\m{c} \not\in \C{C}$}
\label{c_in_C}
\end{figure}
If $\m{c}'$ is the projection of $\m{c}$ onto $\C{H}$,
then
\begin{equation}\label{circle}
\|\m{x} - \m{c}'\| < \|\m{x} - \m{c}\| \quad \mbox{for all }\m{x} \in \C{C} .
\end{equation}
Let $\m{x}' \in \C{C}$ be the point
which is farthest from $\m{c}'$ and
let $\m{x} \in \C{C}$ be the point farthest from $\m{c}$.
Hence, $\|\m{x} - \m{c}\| = r$.
By (\ref{circle}), we have
$\|\m{x}' - \m{c}'\| < \|\m{x} - \m{c}\| = r$;
it follows that the
sphere with center $\m{c}'$ and radius $\|\m{x}' - \m{c}'\|$ contains
$\C{C}$ and has radius smaller than $r$.
This contradicts the assumption that $r$ was the sphere of smallest
radius containing $\C{C}$.
The uniqueness of the smallest sphere containing $\C{C}$ is established as follows:
Suppose that there exist two different smallest spheres
$\C{S}_1$ and $\C{S}_2$ containing $\C{C}$.
Let $\C{S}_3$ be the smallest sphere containing
$\C{S}_1 \cap \C{S}_2$.
Since $\C{S}_1$ and $\C{S}_2$ are distinct spheres of the same radius,
the diameter of their intersection is strictly less than the diameter of
$\C{S}_1$; hence the radius of $\C{S}_3$ is strictly smaller.
Since $\C{C} \subset \C{S}_1 \cap \C{S}_2 \subset \C{S}_3$, this
contradicts the assumption that
$\C{S}_1$ and $\C{S}_2$ were spheres of smallest radius containing $\C{C}$.
\end{proof}
\begin{remark}
\label{rem0}
Although the smallest sphere containing $\C{C}$ in Theorem
\ref{UnderTheorem1} is unique, the best linear underestimator of
$h(\m{x}) = -\|\m{x}\|^2$ is not unique.
For example, suppose $\m{a}$ and $\m{b} \in {\mathbb{R}}^n$ and
$\C{C}$ is the line segment
\[
\C{C} = \{ \m{x} \in \mathbb{R}^n : \m{x} = \alpha \m{a} + (1-\alpha)\m{b},
\quad \alpha \in [0, 1] \} .
\]
Along this line segment, $h$ is a concave quadratic in one variable.
The best affine underestimate along the line segment corresponds to
the line connecting the ends of the quadratic restricted to the line segment.
Hence, in $\mathbb{R}^{n+1}$,
any hyperplane which contains the points $(h(\m{a}), \m{a})$ and
$(h(\m{b}), \m{b})$ leads to a best affine underestimate.
\end{remark}
\begin{remark}
\label{rem1}
Let $\C{C}$ be the box
\[
\C{B} = \{ \m{x} \in \mathbb{R}^n : \m{p} \le \m{x} \le \m{q} \}.
\]
The diameter of $\C{B}$, the distance between the points in $\C{B}$
with greatest separation, is $\|\m{p} - \m{q}\|$.
Hence, the smallest sphere containing $\C{B}$ has radius at least
$\|\m{p} - \m{q}\|/2$.
If $\m{x} \in \C{B}$, then
\[
|x_i - (p_i + q_i)/2| \le (q_i - p_i)/2
\]
for every $i$.
Consequently, $\|\m{x} - (\m{p} + \m{q})/2\| \le \|\m{p} - \m{q}\|/2$ and
the sphere with center
$\m{c} = (\m{p}+\m{q})/2$ and radius $r = \|\m{p} - \m{q}\|/2$
contains $\C{B}$.
It follows that this is the smallest sphere containing $\C{B}$ since
any other sphere containing $\C{B}$ must have radius at least $\|\m{p} - \m{q}\|/2$.
\end{remark}
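The formulas in this remark are easy to sanity-check numerically. The
following sketch (Python with NumPy, hypothetical box data) verifies that, for
$\m{c} = (\m{p}+\m{q})/2$ and $r = \|\m{p}-\m{q}\|/2$, the underestimator
$\ell^*$ of Theorem \ref{UnderTheorem1} satisfies
$h(\m{x}) - \ell^*(\m{x}) = r^2 - \|\m{x}-\m{c}\|^2 \ge 0$ on the box,
with maximal gap $r^2$ attained at the center.

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([-1.0, 0.0, 2.0])           # hypothetical box bounds p <= x <= q
q = np.array([1.0, 3.0, 5.0])
c = (p + q) / 2                          # center of smallest enclosing sphere
r = np.linalg.norm(q - p) / 2            # its radius

def h(x):
    return -x @ x                        # the concave function h(x) = -||x||^2

def lstar(x):
    return -2 * c @ x + c @ c - r**2     # best affine underestimator

# h - lstar = r^2 - ||x - c||^2 >= 0 on the box, maximal (= r^2) at x = c
for _ in range(1000):
    x = p + rng.random(3) * (q - p)      # random point of the box
    gap = h(x) - lstar(x)
    assert gap >= -1e-12
    assert abs(gap - (r**2 - np.linalg.norm(x - c)**2)) < 1e-9
assert abs(h(c) - lstar(c) - r**2) < 1e-12
```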
\begin{remark}
\label{rem2}
Finding the smallest sphere containing $\C{C}$ may not be easy.
However, the center and radius of any sphere containing $\C{C}$
yields an affine underestimate for $\|\m{x}\|^2$ over $\C{C}$.
That is, if $\C{S}$ is a sphere with $\C{C} \subset \C{S}$,
then the best affine underestimate for $-\|\m{x}\|^2$ over
$\C{S}$ is also an affine underestimate for $-\|\m{x}\|^2$ over $\C{C}$.
\end{remark}
\subsection{Lower bound based on semidefinite programming}
\label{SDP}
A different DC decomposition of
$f(\m{x}) = (\m{1} - \m{x} )^{\sf T} (\m{A}+\m{D}) \m{x}$ is the following:
\[
f(\m{x}) = (f(\m{x}) + \m{x}^{\sf T}\g{\Lambda}\m{x}) - \m{x}^{\sf T}\g{\Lambda}\m{x} ,
\]
where $\g{\Lambda}$ is a diagonal matrix with $i$-th
diagonal element $\lambda_i \ge 0$.
We would like to make the second term
$\m{x}^{\sf T}\g{\Lambda}\m{x}$ as small as possible while keeping
the first term $f(\m{x}) + \m{x}^{\sf T}\g{\Lambda}\m{x}$ convex.
This suggests the following semidefinite programming problem
\begin{equation}\label{sdp}
\begin{array}{c}
\mbox{minimize } \; \; \sum_{i=1}^n \lambda_i \\
\rule{0in}{.2in} \mbox{ subject to } \; \g{\Lambda} -(\m{A} + \m{D})
\succeq \m{0}, \quad \g{\Lambda} \succeq \m{0},
\end{array}
\end{equation}
where $\g{\lambda}$ is the diagonal of $\g{\Lambda}$.
If the diagonal of $\m{A} + \m{D}$ is nonnegative,
then the inequality $\g{\Lambda} \succeq \m{0}$ can be dropped
since it is implied by the inequality
$\g{\Lambda} -(\m{A} + \m{D}) \succeq \m{0}$.
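Solving (\ref{sdp}) requires a semidefinite programming solver, but a feasible
(generally suboptimal) $\g{\Lambda}$ is available in closed form from
Gershgorin's theorem: taking
$\lambda_i = (\m{A}+\m{D})_{ii} + \sum_{j \ne i} |(\m{A}+\m{D})_{ij}|$ makes
$\g{\Lambda} - (\m{A}+\m{D})$ diagonally dominant with nonnegative diagonal,
hence positive semidefinite. The sketch below (Python/NumPy, hypothetical
random data) checks this; it is an illustration of feasibility, not of the
SDP optimum.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.random((n, n)); A = (A + A.T) / 2
np.fill_diagonal(A, 0.0)                 # adjacency-like: zero diagonal
D = np.diag(rng.random(n))               # nonnegative diagonal matrix
M = A + D

# Gershgorin choice: lambda_i = M_ii + sum_{j != i} |M_ij| makes
# diag(lam) - M weakly diagonally dominant with nonnegative diagonal,
# hence positive semidefinite -- feasible in (sdp), though not optimal.
lam = np.diag(M) + (np.abs(M).sum(axis=1) - np.abs(np.diag(M)))
S = np.diag(lam) - M
assert np.min(np.linalg.eigvalsh(S)) >= -1e-10
assert np.all(lam >= 0)
```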
As before, we seek the best linear underestimate of the concave
function $- \m{x}^{\sf T}\g{\Lambda}\m{x}$ over a compact, convex set $\C{C}$.
If any of the $\lambda_i$ vanish,
then reorder the components of $\m{x}$ so that
$\m{x} = (\m{y}, \m{z})$ where $\m{z}$ corresponds
to the components of $\lambda_i$ that vanish.
Let $\g{\Lambda}_+$ be the principal submatrix of $\g{\Lambda}$ corresponding
to the positive diagonal elements, and define the set
\[
\C{C}_+ = \{ \m{y} : (\m{y},\m{z}) \in \C{C} \mbox{ for some } \m{z} \} .
\]
The problem of finding the best linear underestimate for
$- \m{x}^{\sf T}\g{\Lambda}\m{x}$ over $\C{C}$ is essentially equivalent to
finding the best linear underestimate for
$-\m{y}^{\sf T}\g{\Lambda}_+\m{y}$ over $\C{C}_+$.
Hence, there is no loss of generality in assuming that the diagonal
of $\g{\Lambda}$ is strictly positive.
As a consequence of Theorem \ref{UnderTheorem1}, we have
\smallskip
\begin{corollary}
\label{lambda_under}
Suppose the diagonal of $\g{\Lambda}$ is strictly positive and
let $\m{c}$ be the center and $r$ the radius of the unique smallest
sphere containing the set
\[
\g{\Lambda}^{1/2}\C{C} := \{ \g{\Lambda}^{1/2}\m{x}: \m{x} \in \C{C} \}.
\]
The best linear underestimate of
$-\m{x}^{\sf T}\g{\Lambda}\m{x}$ over the compact, convex set $\C{C}$ is
\[
\ell^* (\m{x}) =
-2\m{c}^{\sf T}\g{\Lambda}^{1/2}\m{x} + \|\m{c}\|^2 - r^2.
\]
Furthermore,
\[
\min_{\ell \in \C{S}_2} \;\; \max_{\m{x}\in \C{C}} \;\; - \left(
\m{x}^{\sf T}\g{\Lambda}\m{x} + \ell^* (\m{x}) \right) = r^2,
\]
where
\[
\C{S}_2 = \{ \ell: \mathbb{R}^n \rightarrow \mathbb{R} \mbox{ such
that } \ell \mbox{ is affine and } -\m{x}^{\sf T}\g{\Lambda}\m{x} \ge
\ell(\m{x}) \mbox{ for all } \m{x} \in \C{C} \} .
\]
\end{corollary}
\smallskip
\begin{proof}
With the change of variables $\m{y} = \g{\Lambda}^{1/2}\m{x}$, an
affine function in $\m{x}$ is transformed to an affine function in
$\m{y}$ and conversely, an affine function in $\m{y}$ is transformed
to an affine function in $\m{x}$. Hence, the problem of finding the
best affine underestimate for $- \m{x}^{\sf T}\g{\Lambda}\m{x}$ over
$\C{C}$ is equivalent to the problem of finding the best affine
underestimate for $-\|\m{y}\|^2$ over $\g{\Lambda}^{1/2}\C{C}$.
Apply Theorem \ref{UnderTheorem1} to the transformed problem in
$\m{y}$, and then transform back to $\m{x}$.
\end{proof}
\begin{remark}
\label{rem3}
If $\C{C}$ is the box
$\{ \m{x} \in \mathbb{R}^n : \m{0} \le \m{x} \le \m{1} \}$,
then $\g{\Lambda}^{1/2}\C{C}$ is also a box to which we can apply
the observation in Remark \ref{rem1}.
In particular, we have
\begin{equation}\label{box_center}
\m{c} = \frac{1}{2} \g{\Lambda}^{1/2}\m{1} =
\frac{1}{2} \g{\lambda}^{1/2}
\quad \mbox{and} \quad
r = \|\g{\Lambda}^{1/2}\m{1}\|/2 = \|\g{\lambda}^{1/2}\|/2 .
\end{equation}
Hence, $\|\m{c}\|^2 - r^2 = 0$ and we have
$\ell^* (\m{x}) = - \g{\lambda}^{\sf T} \m{x}$.
\end{remark}
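The conclusion of this remark, $\ell^*(\m{x}) = -\g{\lambda}^{\sf T}\m{x}$
on the unit box, can be checked directly: since $0 \le x_i \le 1$ implies
$x_i^2 \le x_i$, we have $-\m{x}^{\sf T}\g{\Lambda}\m{x} \ge
-\g{\lambda}^{\sf T}\m{x}$, with equality at every binary vertex. A short
numerical sketch (hypothetical random $\g{\lambda}$):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
lam = rng.random(n)                      # nonnegative diagonal of Lambda

# underestimation: -x^T Lambda x >= -lambda^T x on [0,1]^n
for _ in range(1000):
    x = rng.random(n)                    # random point of the unit box
    assert -(lam * x * x).sum() >= -(lam * x).sum() - 1e-12

# tightness: equality at every binary vertex (x_i^2 = x_i there)
xb = (rng.random(n) < 0.5).astype(float)
assert abs((lam * xb * xb).sum() - (lam * xb).sum()) < 1e-12
```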
\begin{remark}
\label{rem4}
Let us consider the set
\[
\C{C} = \{ \m{x} \in \mathbb{R}^n : \m{0} \le \m{x} \le \m{1},
\quad \m{1}^{\sf T} \m{x} = b \},
\]
where $0 < b < n$.
Determining the smallest sphere containing
$\g{\Lambda}^{1/2}\C{C}$ may not be easy.
However, as indicated in Remark \ref{rem2}, any sphere containing
$\g{\Lambda}^{1/2} \C{C}$ yields an underestimate for
$\m{x}^{\sf T}\g{\Lambda}\m{x}$.
Observe that
\[
\g{\Lambda}^{1/2} \C{C} =
\{ \m{y} \in \mathbb{R}^n : \m{0} \le \m{y} \le \g{\lambda}^{1/2},
\quad \m{y}^{\sf T} {\g{\lambda}^{-1/2}} = b \} .
\]
As observed in Remark \ref{rem3}, the center $\m{c}$ and radius $r$ of
the smallest sphere $\C{S}$ containing the set
\[
\{ \m{y} \in \mathbb{R}^n : \m{0} \le \m{y} \le \g{\lambda}^{1/2} \}
\]
are given in (\ref{box_center}).
The intersection of this sphere with the hyperplane
$\m{y}^{\sf T} {\g{\lambda}^{-1/2}} = b$ is a lower dimensional sphere $\C{S}'$
whose center $\m{c}'$ is the projection of $\m{c}$ onto the hyperplane.
$\C{S}'$ contains $\C{C}$ since $\C{C}$ is contained in both the
original sphere $\C{S}$ and the hyperplane.
With a little algebra, we obtain
\[
\m{c}' = \frac{1}{2} \g{\lambda}^{1/2} +
\left( \frac{b - .5n}{\sum_{i=1}^n \lambda_i^{-1}} \right)
\g{\lambda}^{-1/2}.
\]
By the Pythagorean Theorem,
the radius $r'$ of the lower dimensional sphere $\C{S}'$ is
\[
r' = \sqrt{.25 \left( \sum_{i=1}^n \lambda_i \right)
- \frac{(b-.5n)^2}{\sum_{i=1}^n \lambda_i^{-1}}} .
\]
Hence, by Corollary \ref{lambda_under}, an underestimate of
$-\m{x}^{\sf T}\g{\Lambda}\m{x}$ is given by
\[
\ell(\m{x}) = -\g{\lambda}^{\sf T}\m{x} +
\left( \frac{n - 2b}{\sum_{i=1}^n \lambda_i^{-1}} \right) \m{1}^{\sf T}\m{x}
+ \|\m{c}'\|^2 - (r')^2 .
\]
Since $\m{1}^{\sf T}\m{x} = b$ when $\m{x} \in \C{C}$, it can be shown,
after some algebra, that $\ell (\m{x}) = - \g{\lambda}^{\sf T}\m{x}$
(all the constants in the affine function cancel).
Hence, the affine underestimate $\ell^*$ computed in Remark \ref{rem3} for
the unit box and the affine underestimate $\ell$ computed in this remark
for the unit box intersected with the hyperplane $\m{1}^{\sf T}\m{x} = b$
are the same.
\end{remark}
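The projected center $\m{c}'$ and reduced radius $r'$ of this remark can be
verified numerically: any feasible $\m{x} \in \C{C}$ maps to
$\m{y} = \g{\Lambda}^{1/2}\m{x}$, which must lie in the sphere $\C{S}'$.
A sketch with hypothetical random $\g{\lambda}$ (Python/NumPy):

```python
import numpy as np

rng = np.random.default_rng(2)
n, b = 5, 2.0
lam = 0.5 + rng.random(n)                # positive diagonal of Lambda
c = 0.5 * np.sqrt(lam)                   # center for the box [0, lam^{1/2}]
r = 0.5 * np.linalg.norm(np.sqrt(lam))   # its radius
a = 1.0 / np.sqrt(lam)                   # hyperplane normal: a^T y = b
cp = c + (b - a @ c) / (a @ a) * a       # projection of c onto hyperplane
rp = np.sqrt(r**2 - (b - 0.5 * n)**2 / np.sum(1.0 / lam))  # Pythagoras

# sample feasible x (0 <= x <= 1, 1^T x = b), map y = Lambda^{1/2} x,
# and confirm y lies in the lower dimensional sphere S'
for _ in range(500):
    x = rng.dirichlet(np.ones(n)) * b    # 1^T x = b, x >= 0
    if np.any(x > 1):                    # keep only points inside the box
        continue
    y = np.sqrt(lam) * x
    assert np.linalg.norm(y - cp) <= rp + 1e-9
```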
\section{Branch and bound algorithm}
\label{BB}
Since the continuous quadratic program (\ref{Q}) has a binary solution,
the branching process in the branch and bound algorithm is
based on setting variables to 0 or 1 and reducing the problem
dimension (we do not employ bisections of the feasible region
as in \cite{HagerPhan09}).
We begin by constructing a linear ordering of the vertices of
the graph according to an estimate for the difficulty in
placing the vertex in the partition.
For the numerical experiments, the order was based on the
total weight of the edges connecting a vertex to the adjacent vertices.
If two vertices $v_1$ and $v_2$ have weights $w_1$ and $w_2$ respectively,
then $v_1$ precedes $v_2$ if $w_1 > w_2$.
Let $v_1$, $v_2$, $\ldots$, $v_n$ denote the ordered vertices.
Level $i$ in the branch and bound tree corresponds to
setting the $v_i$-th component of $\m{x}$ to the values 0 or 1.
Each leaf at level $i$ represents a specific selection of 0 and 1
values for the $v_1$ through $v_i$-th components of $\m{x}$.
Hence, a leaf at level $i$ has a label of the form
\begin{equation}\label{leaf}
\tau = (b_1, b_2, \ldots, b_i), \quad b_j = 0 \mbox{ or } 1
\mbox{ for } 1 \le j \le i.
\end{equation}
Corresponding to this leaf, the value of the $v_j$-th component of
$\m{x}$ is $b_j$ for $1 \le j \le i$.
Let $\C{T}_k$ denote the branch and bound tree at iteration $k$
and let $\C{E}(\C{T}_k)$ denote the leaves in the tree.
Suppose $\tau \in \C{E}(\C{T}_k)$ lies at level $i$ in $\C{T}_k$
as in (\ref{leaf}).
Let $\m{x}_\tau$ denote the vector obtained by removing
components $v_j$, $1 \le j \le i$, from $\m{x}$.
The $v_j$-th component of $\m{x}$
has the pre-assigned binary value $b_j$ for $1 \le j \le i$.
After taking into account these assigned binary values,
the quadratic problem reduces to a lower dimensional problem
in the variable $\m{x}_\tau$ of the form
\[
\begin{array}{c}
\mbox{minimize } \; \; f_\tau (\m{x}_\tau) \\
\rule{0in}{.2in} \mbox{ subject to } \; \m{0} \le \m{x}_\tau \le \m{1} ,
\;\; l_\tau \le \m{1}^{\sf T} \m{x}_\tau \le u_\tau,
\end{array}
\]
where
\[
u_{\tau} = u - \sum_{j=1}^i b_j \quad \mbox{and} \quad
l_{\tau} = l - \sum_{j=1}^i b_j .
\]
Using the techniques developed in Section \ref{LowerBound},
we replace $f_\tau$ by a convex lower bound denoted $f_{\tau}^L$ and
we consider the convex problem
\begin{equation}\label{Ltau}
\begin{array}{c}
\mbox{minimize } \; \; f_{\tau}^L (\m{x}_\tau) \\
\rule{0in}{.2in} \mbox{ subject to } \; \m{0} \le \m{x}_\tau \le \m{1} ,
\;\; l_\tau \le \m{1}^{\sf T} \m{x}_\tau \le u_\tau.
\end{array}
\end{equation}
Let $M(\tau)$ denote the optimal objective function value for (\ref{Ltau}).
At iteration $k$, the leaf $\tau \in \C{E}(\C{T}_k)$ for which
$M(\tau)$ is smallest is used to branch to the next level.
If $\tau$ has the form (\ref{leaf}), then the branching processes
generates the two new leaves
\begin{equation}\label{bisect}
(b_1, b_2, \ldots, b_i, 0) \quad \mbox{and} \quad
(b_1, b_2, \ldots, b_i, 1) .
\end{equation}
An illustration involving a 3-level branch and bound tree
appears in Figure~\ref{bbtree}.
\begin{figure}
\includegraphics[scale=.4]{bbtree}
\caption{Branch and bound tree}
\label{bbtree}
\end{figure}
During the branch and bound process, we must also compute
an upper bound for the minimal objective function value in (\ref{Q}).
This upper bound is obtained using a heuristic technique based
on the gradient projection algorithm and sphere approximations
to the feasible set.
These heuristics for generating an upper bound will be described
in a separate paper.
As pointed out earlier, many heuristic techniques are available
(for example, Metis
(\cite{KarypisKumar98e,KarypisKumar99b,KarypisKumar00}), Chaco
\cite{hendrickson94chaco}, and Party \cite{Preis96theparty}).
An advantage of our quadratic programming based heuristic
is that we start at the solution to the lower bounding problem,
a solution which typically has fractional entries and which is
a feasible starting point for (\ref{Q}).
Consequently, the upper bound is no larger than the objective
function value associated with the optimal point in
the lower-bound problem.
\bigskip
\begin{itemize}
\item[] \hspace{-.2in}\textbf{Convex quadratic branch and bound (CQB)}
\item [1.]
Initialize $\C{T}_0 = \emptyset$ and $k = 0$.
Evaluate both a lower bound for the solution to (\ref{Q}) and
an upper bound, denoted $U_0$.
\item [2.]
Choose $\tau_k \in \C{E}(\C{T}_k)$ such that
$M (\tau_k) = \min \{ M (\tau) : \tau \in \C{E}(\C{T}_k) \}$.
If $M(\tau_k) = U_k$, then stop; an optimal solution of (\ref{Q})
has been found.
\item[3.]
Assuming that $\tau_k$ has the form (\ref{leaf}), let
$\C{T}_{k+1}$ be the tree obtained by branching at
$\tau_k$ and adding two new leaves as in (\ref{bisect});
also see Figure~\ref{bbtree}.
Evaluate lower bounds for the quadratic programming problems
(\ref{Ltau}) associated with the two new leaves,
and evaluate an improved upper bound, denoted $U_{k+1}$, by using
solutions to the lower bound problems as starting guesses in
a descent method applied to (\ref{Q}).
\item[4.]
Replace $k$ by $k+1$ and return to step 2.
\end{itemize}
\bigskip
Convergence is assured since there are a finite number of
binary values for the components of $\m{x}$.
In the worst case, the branch and bound algorithm will build
all $2^{n+1} - 1$ nodes of the tree.
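The best-first search in steps 1--4 can be sketched generically. The code
below (Python) is not the authors' CQB implementation: the convex relaxation
bound (\ref{Ltau}) and the upper-bound heuristic are abstracted into a
`lower_bound` callable, and the toy test substitutes a brute-force bound;
the branching, pruning, and termination logic mirror the algorithm above.

```python
import heapq
from itertools import product

def cqb_sketch(n, b, f, lower_bound):
    """Best-first branch and bound over x in {0,1}^n with 1^T x = b.
    lower_bound(prefix) must underestimate f on every feasible completion."""
    best_val, best_x = float("inf"), None
    heap = [(lower_bound(()), ())]            # (bound M(tau), leaf tau)
    while heap:
        bound, prefix = heapq.heappop(heap)   # leaf with smallest bound
        if bound >= best_val:
            continue                          # bound meets incumbent: prune
        if len(prefix) == n:                  # complete binary assignment
            v = f(prefix)
            if v < best_val:
                best_val, best_x = v, prefix
            continue
        for bit in (0, 1):                    # branch on the next variable
            child = prefix + (bit,)
            s = sum(child)
            if s > b or s + (n - len(child)) < b:
                continue                      # sum b is no longer reachable
            lb = lower_bound(child)
            if lb < best_val:
                heapq.heappush(heap, (lb, child))
    return best_val, best_x

# toy bisection instance with hypothetical weights: n = 4, b = n/2 = 2
M = [[0, 3, 1, 2], [3, 0, 2, 1], [1, 2, 0, 3], [2, 1, 3, 0]]
def f(x):
    return sum((1 - x[i]) * sum(M[i][j] * x[j] for j in range(4))
               for i in range(4))
def exact_bound(prefix):                      # brute-force stand-in for the
    tails = product((0, 1), repeat=4 - len(prefix))   # convex relaxation
    vals = [f(prefix + t) for t in tails if sum(prefix) + sum(t) == 2]
    return min(vals, default=float("inf"))

val, x = cqb_sketch(4, 2, f, exact_bound)
assert val == min(f(y) for y in product((0, 1), repeat=4) if sum(y) == 2)
```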
\smallskip
\section{Necessary and sufficient optimality conditions}
\label{NS}
We use the gradient projection algorithm
to obtain an upper bound for a solution to (\ref{Q}).
Since the gradient projection algorithm can terminate at
a stationary point, we need to be able to distinguish
between a stationary point and a local minimizer, and at a
stationary point which is not a local minimizer,
we need a fast way to compute a descent direction.
We begin by stating the first-order optimality conditions.
Given a scalar $\lambda$, define the vector
\[
\g{\mu}(\m{x},\lambda) = (\m{A}+\m{D})\m{1} - 2(\m{A}+\m{D})\m{x} +
\lambda \m{1},
\]
and the set-valued maps $\C{N} : \mathbb{R} \rightarrow 2^{\mathbb{R}}$
and $\C{M} : \mathbb{R} \rightarrow 2^{\mathbb{R}}$
\[
{\cal N}(\nu) =
\left\{ \begin{array}{cl}
\mathbb{R} & \mbox{if} \;\; \nu = 0\\
\{ 1 \} & \mbox{if} \;\; \nu < 0\\
\{ 0 \} & \mbox{if} \;\; \nu > 0
\end{array}
\right. ,
\quad
{\cal M}(\nu) =
\left\{ \begin{array}{cl}
\mathbb{R} & \mbox{if} \;\; \nu = 0\\
\{ u \} & \mbox{if} \;\; \nu > 0\\
\{ l \} & \mbox{if} \;\; \nu < 0
\end{array}
\right. .
\]
For any vector $\g{\mu}$, $\C{N}(\g{\mu})$ is a vector of sets
whose $i$-th component is the set $\C{N}(\mu_i)$.
The first-order optimality (Karush-Kuhn-Tucker) conditions
associated with a local minimizer $\m{x}$ of (\ref{Q}) can be
written in the following way: For some scalar $\lambda$, we have
\begin{equation} \label{KT}
\m{0} \le \m{x} \le \m{1} ,
\quad \m{x} \in {\cal N}(\g{\mu}(\m{x},\lambda)) ,
\quad l \le \m{1}^{\sf T}\m{x} \le u , \quad
\mbox{and} \quad \m{1}^{\sf T}\m{x} \in \C{M}(\lambda) .
\end{equation}
The first and third conditions in (\ref{KT}) are the constraints in
(\ref{Q}), while the remaining two conditions correspond to
complementary slackness and stationarity of the Lagrangian.
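The conditions (\ref{KT}) are easy to test componentwise. The sketch below
(Python/NumPy, hypothetical toy data) evaluates $\g{\mu}(\m{x},\lambda)$ and
checks all four conditions at a candidate pair $(\m{x},\lambda)$; tolerances
and the toy instance are illustrative choices, not from the paper.

```python
import numpy as np

def kkt_satisfied(A, D, x, lam, l, u, tol=1e-10):
    """Check the first-order conditions (KT) at the pair (x, lambda)."""
    M = A + D
    mu = M @ np.ones(len(x)) - 2 * M @ x + lam    # mu(x, lambda)
    s = x.sum()
    if np.any(x < -tol) or np.any(x > 1 + tol):       # 0 <= x <= 1
        return False
    if s < l - tol or s > u + tol:                    # l <= 1^T x <= u
        return False
    # x in N(mu): x_i = 1 where mu_i < 0, x_i = 0 where mu_i > 0
    if np.any((mu < -tol) & (np.abs(x - 1) > tol)):
        return False
    if np.any((mu > tol) & (np.abs(x) > tol)):
        return False
    # 1^T x in M(lambda): sum = u if lambda > 0, sum = l if lambda < 0
    if lam > tol and abs(s - u) > tol:
        return False
    if lam < -tol and abs(s - l) > tol:
        return False
    return True

# toy instance: n = 2, l = u = 1; x = (1, 0) with lambda = 0 gives mu = 0
A = np.array([[0.0, 1.0], [1.0, 0.0]])
D = np.eye(2)
assert kkt_satisfied(A, D, np.array([1.0, 0.0]), 0.0, 1, 1)
```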
In \cite{HagerKrylyuk99} we give necessary and sufficient optimality
conditions for (\ref{Q}), which we now review.
Given any $\m{x}$ that is feasible in (\ref{Q}), let us define the sets
\[
{\cal U}(\m{x}) = \{ i : x_i = 1 \}, \quad
{\cal L}(\m{x}) = \{ i : x_i = 0 \}, \quad
\mbox{and}\quad \C{F}(\m{x}) = \{ i: 0 < x_i < 1\} .
\]
We also introduce subsets ${\cal U}_0$ and ${\cal L}_0$ defined by
\[
{\cal U}_0(\m{x},\lambda) =
\{ i \in {\cal U}(\m{x}) : \mu_i(\m{x},\lambda) = 0 \}
\quad \mbox{and} \quad
{\cal L}_0(\m{x},\lambda) =
\{ i \in {\cal L}(\m{x}) : \mu_i(\m{x},\lambda) = 0 \} .
\]
\smallskip
\begin{theorem}
\label{opttheorem}
Suppose that $l = u$ and $\m{D}$ is chosen so that
\begin{equation}\label{a-condition}
d_{ii} + d_{jj} \ge 2a_{ij}
\end{equation}
for all $i$ and $j$.
A necessary and sufficient condition for $\m{x}$ to be a local minimizer in
$(\ref{Q})$ is that the following all hold:
\begin{itemize}
\item[{\rm (P1)}]
For some $\lambda$,
the first-order conditions $(\ref{KT})$ are satisfied at $\m{x}$.
\item[{\rm (P2)}]
For each $i$ and $j \in {\cal F}(\m{x})$,
we have $d_{ii} + d_{jj} = 2a_{ij}$.
\item[{\rm (P3)}]
Consider the three sets ${\cal U}_0(\m{x},\lambda)$,
${\cal L}_0(\m{x},\lambda)$, and ${\cal F}(\m{x})$. For each $i$ and $j$ in
two different sets, we have $d_{ii} + d_{jj} = 2a_{ij}$.
\end{itemize}
\end{theorem}
\smallskip
In treating the situation $l < u$, an additional condition concerning
the dual multipliers $\lambda$ and $\g{\mu}$ in the first-order
optimality conditions (\ref{KT}) enters into the statement of the result:
\begin{itemize}
\item[{\rm (P4)}]
{\it If $\lambda = \mu_i(\m{x},0) = 0$
for some $i$, then $d_{ii} = 0$ in any of the following three cases:}
\begin{itemize}
\item[{\rm (a)}]
$l < \m{1}^{\sf T} \m{x} < u$.
\item[{\rm (b)}]
$x_i > 0$ and $\m{1}^{\sf T} \m{x} = u$.
\item[{\rm (c)}]
$x_i < 1$ and $\m{1}^{\sf T} \m{x} = l$.
\end{itemize}
\end{itemize}
\bigskip
\begin{corollary}
\label{optcorollary}
Suppose that $l < u$ and $\m{D}$ is chosen so that
\begin{equation}\label{strong-d-condition}
d_{ii} + d_{jj} \ge 2a_{ij} \quad \mbox{and} \quad
d_{ii} \ge 0
\end{equation}
for all $i$ and $j$.
A necessary and sufficient condition for $\m{x}$ to be a local minimizer in
$(\ref{Q})$ is that {\rm (P1)}--{\rm (P4)} all hold.
\end{corollary}
\bigskip
Based on Theorem \ref{opttheorem} and
Corollary \ref{optcorollary}, we can easily check whether
a given stationary point is a local minimizer.
This is in contrast to the general quadratic programming problem
for which deciding whether a given point is a
local minimizer is NP-hard (see \cite{murty1987,pardalos91}).
We now observe that when $\m{x}$ is a stationary point and when any
of the conditions (P2)--(P4) are violated, then a descent direction
is readily available.
\bigskip
\begin{proposition}
\label{descent_direction}
Suppose that $\m{x}$ is a stationary point for $(\ref{Q})$ and
$(\ref{strong-d-condition})$ holds.
If either {\rm (P2)} or {\rm (P3)} is violated, then
$\m{d} = \m{e}_i - \m{e}_j$, with an appropriate choice of
sign, is a descent direction.
If $l < u$, $\lambda = 0 = \mu_i(\m{x},0)$, and $d_{ii} > 0$,
then $\m{d} = \m{e}_i$, with an appropriate choice of sign,
is a descent direction in any of the cases {\rm (a)--(c)} of {\rm (P4)}.
\end{proposition}
\begin{proof}
The Lagrangian $L$ associated with (\ref{Q}) has the form
\begin{equation}\label{lagrangian}
L(\m{x}) = f(\m{x}) + \lambda (\m{1}^{\sf T}\m{x} - b)
- \sum_{i \in \C{L}} \mu_i x_i - \sum_{i \in \C{U}} \mu_i(x_i - 1) ,
\end{equation}
where $b = u$ if $\lambda > 0$, $b = l$ if $\lambda < 0$,
and $\g{\mu}$ stands for $\g{\mu}(\m{x}, \lambda)$.
The sets $\C{L}$ and $\C{U}$ denote $\C{L}(\m{x})$ and
$\C{U}(\m{x})$ respectively.
By the first-order optimality conditions (\ref{KT}), we have
$L(\m{x}) = f(\m{x})$ and $\nabla L(\m{x}) = \m{0}$.
Expanding the Lagrangian around $\m{x}$ gives
\[
L(\m{x}+\m{y}) = L(\m{x}) + \nabla L(\m{x})\m{y} + \frac{1}{2}
\m{y}^{\sf T} \nabla^2 L(\m{x})\m{y} = f(\m{x}) - \m{y}^{\sf T} (\m{A}+\m{D}) \m{y} .
\]
We substitute for $L$ using (\ref{lagrangian}) to obtain
\begin{eqnarray}
f(\m{x}+\m{y}) &=& L(\m{x}+\m{y})
- \lambda(\m{1}^{\sf T}(\m{x}+\m{y}) - b)
+ \sum_{i\in{\cal L}} \mu_i(x_i+y_i)
+ \sum_{i\in{\cal U}} \mu_i(x_i +y_i-1)
\nonumber \\
&=& f(\m{x}) - \lambda \m{1}^{\sf T} \m{y}
- \m{y}^{\sf T} (\m{A}+\m{D}) \m{y}
+ \sum_{i\in{\cal L}} \mu_i y_i + \sum_{i\in{\cal U}} \mu_i y_i . \label{T}
\end{eqnarray}
If (P2) is violated, then
there are indices $i$ and $j \in \C{F}(\m{x})$ such that
$d_{ii} + d_{jj} > 2a_{ij}$.
We insert $\m{y} = \alpha(\m{e}_i - \m{e}_j)$ in (\ref{T}) to obtain
\begin{equation}\label{h53}
f(\m{x} + \alpha(\m{e}_i - \m{e}_j)) = f(\m{x})
+ \alpha^2(2a_{ij} - d_{ii} - d_{jj}) .
\end{equation}
Since the coefficient of $\alpha^2$ is negative,
$\m{d} = \m{e}_i - \m{e}_j$ is a descent direction for the
objective function.
Since $0 < x_i < 1$ and $0 < x_j < 1$, feasibility is preserved
for $\alpha$ sufficiently small.
In a similar manner, if (P3) is violated by indices $i$ and $j$,
then (\ref{h53}) again holds and
$\m{d} = \pm(\m{e}_i - \m{e}_j)$ is again a descent direction where
the sign is chosen appropriately to preserve feasibility.
For example, if $i \in \C{L}_0(\m{x})$ and $j \in \C{U}_0(\m{x})$,
then $x_i = 0$ and $x_j = 1$.
Consequently, $\m{x} + \alpha (\m{e}_i - \m{e}_j)$ is feasible if $\alpha > 0$
is sufficiently small.
Finally, suppose that
$l < u$, $\lambda = 0 = \mu_i(\m{x},0)$, and $d_{ii} > 0$.
Substituting $\m{y} = \alpha\m{e}_i$ in (\ref{T}) yields
\[
f(\m{x} + \alpha\m{e}_i) = f(\m{x}) - \alpha^2 d_{ii} .
\]
Since $d_{ii} > 0$, the coefficient $-d_{ii}$ of $\alpha^2$ is negative, and
$\m{d} = \pm \m{e}_i$ is a descent direction.
Moreover, in any of the cases (a)--(c) of (P4),
$\m{x} + \alpha \m{d}$ is feasible for some $\alpha > 0$ with an
appropriate choice of the sign of $\m{d}$.
\end{proof}
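The key computation in this proof is that along $\m{d} = \m{e}_i - \m{e}_j$
the objective is an exact quadratic in $\alpha$ with curvature
$2a_{ij} - d_{ii} - d_{jj}$, since
$f(\m{x}+\m{y}) = f(\m{x}) + \nabla f(\m{x})\m{y} - \m{y}^{\sf T}(\m{A}+\m{D})\m{y}$.
This identity can be checked numerically (Python/NumPy, hypothetical random
data; the stationarity assumption is dropped, so the gradient term is kept):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
A = rng.random((n, n)); A = (A + A.T) / 2
np.fill_diagonal(A, 0.0)                 # zero diagonal, as for adjacency
Dd = rng.random(n)                       # diagonal of D
M = A + np.diag(Dd)

def f(x):                                # f(x) = (1 - x)^T (A + D) x
    return (1 - x) @ M @ x

x = rng.random(n)
i, j = 1, 4
d = np.zeros(n); d[i], d[j] = 1.0, -1.0  # direction e_i - e_j
grad = M @ np.ones(n) - 2 * M @ x        # gradient of f at x

# exact expansion: f(x + a d) = f(x) + a grad^T d + a^2 (2 A_ij - D_ii - D_jj)
for a in (0.1, -0.3, 0.7):
    lhs = f(x + a * d)
    rhs = f(x) + a * grad @ d + a**2 * (2 * A[i, j] - Dd[i] - Dd[j])
    assert abs(lhs - rhs) < 1e-10
```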
\smallskip
We now give a necessary and sufficient condition for a local
minimizer to be strict.
When a local minimizer is not strict, it may be possible to move
to a neighboring point which has the same objective function value
but which is not a local minimizer.
\smallskip
\begin{corollary}
\label{strict_cor} If $\m{x}$ is a local minimizer for $(\ref{Q})$
and $(\ref{strong-d-condition})$ holds, then $\m{x}$ is a strict
local minimizer if and only if the following conditions hold:
\begin{itemize}
\item[{\rm (C1)}]
${\cal F}(\m{x})$ is empty.
\item[{\rm (C2)}]
$\nabla f (\m{x})_i > \nabla f (\m{x})_j$
for every $i \in \C{L}(\m{x})$ and $j \in \C{U}(\m{x})$.
\item[{\rm (C3)}]
If $l < u$, the first-order optimality conditions $(\ref{KT})$
hold for $\lambda = 0$,
and $\C{Z} := \{ i : \nabla f (\m{x})_i = 0 \} \ne \emptyset$, then either
\begin{itemize}
\item[{\rm (a)}]
$\m{1}^{\sf T}\m{x} = u$ and $x_i = 0$ for all $i \in \C{Z}$ or
\item[{\rm (b)}]
$\m{1}^{\sf T}\m{x} = l$ and $x_i = 1$ for all $i \in \C{Z}$.
\end{itemize}
\end{itemize}
\end{corollary}
\smallskip
\begin{proof}
Throughout the proof,
we let $\g{\mu}$, $\C{F}$, $\C{L}$ and $\C{U}$ denote
$\g{\mu}(\m{x}, \lambda)$, $\C{F}(\m{x})$, $\C{L}(\m{x})$, and $\C{U}(\m{x})$
respectively, where $\m{x}$ is a local minimizer for (\ref{Q}) and
the pair $(\m{x},\lambda)$ satisfies the first-order optimality
conditions (\ref{KT}).
To begin, suppose that $\m{x}$ is a strict local minimizer of (\ref{Q}).
That is, $f(\m{y}) > f(\m{x})$ when $\m{y}$ is a feasible point near $\m{x}$.
If ${\cal F}$ has at least two elements,
then by (P2) of Theorem \ref{opttheorem},
$d_{ii} + d_{jj} = 2a_{ij}$ for each $i$ and $j \in {\cal F}$.
Since the first-order optimality conditions (\ref{KT}) hold at $\m{x}$,
it follows from (\ref{h53}) that
\begin{equation}\label{constant_f}
f(\m{x}+\alpha (\m{e}_i - \m{e}_j)) = f(\m{x})
\end{equation}
for all $\alpha$.
Since this violates the assumption that $\m{x}$ is a strict local minimizer,
we conclude that $|{\cal F}| \le 1$.
If $\m{1}^{\sf T}\m{x} = u$ or $\m{1}^{\sf T}\m{x} = l$, then
since $u$ and $l$ are integers,
it is not possible for $\m{x}$ to have just one fractional component.
Consequently, ${\cal F}$ is empty.
If $l < \m{1}^{\sf T}\m{x} < u$, then by complementary slackness, $\lambda = 0$.
Suppose that $|{\cal F}| = 1$ and $i \in {\cal F}$.
By (P4) of Corollary \ref{optcorollary}, $d_{ii} = 0$.
Again, by (\ref{T}) it follows that
\[
f(\m{x}+\alpha \m{e}_i) = f(\m{x})
\]
for all $\alpha$.
This violates the assumption that $\m{x}$ is a strict local
minimizer of (\ref{Q}).
Hence, $\C{F}$ is empty.
By the first-order conditions (\ref{KT}), there exists $\lambda$ such that
\begin{equation}\label{h84}
\mu_i (\m{x},\lambda) \ge 0 \ge \mu_j (\m{x},\lambda)
\end{equation}
for all $i \in {\cal L}$ and $j \in {\cal U}$.
If this inequality becomes an equality for
some $i \in {\cal L}$ and $j \in {\cal U}$, then
$\mu_i = 0 = \mu_j$,
and by (P3) of Corollary \ref{optcorollary},
we have $d_{ii} + d_{jj} = 2a_{ij}$.
Again, (\ref{constant_f}) violates the assumption that $\m{x}$
is a strict local minimizer.
Hence, one of the inequalities in (\ref{h84}) is strict.
Cancelling the $\lambda$ on each side of (\ref{h84}) yields (C2).
Suppose that $l < u$, $\lambda = 0$,
and $\C{Z} := \{ i : \nabla f (\m{x})_i = 0 \} \ne \emptyset$.
When $\lambda = 0$, we have $\g{\mu}(\m{x},0) = \nabla f (\m{x})$.
Hence, $\C{Z} = \{ i : \mu_i (\m{x},0) = 0 \} \ne \emptyset$.
It follows from (P4) that in any of the cases (a)--(c),
we have $d_{ii} = 0$.
In particular, if $l < \m{1}^{\sf T}\m{x} < u$, then by
(\ref{T}), we have $f(\m{x}+\alpha \m{e}_i) = f(\m{x})$
for all $\alpha$.
Again, this violates the assumption that $\m{x}$ is a strict
local minimum.
Similarly, if for some $i \in \C{Z}$, either $x_i > 0$
and $\m{1}^{\sf T}\m{x} = u$ or $x_i < 1$ and $\m{1}^{\sf T}\m{x} = l$,
the identity $f(\m{x}+\alpha \m{e}_i) = f(\m{x})$
implies that we violate the strict local optimality of $\m{x}$.
This establishes (C3).
Conversely, suppose that $\m{x}$ is a local minimizer and (C1)--(C3) hold.
We will show that
\begin{equation}\label{st}
\nabla f(\m{x})\m{y} > 0 \mbox{ whenever } \m{y} \ne \m{0}
\mbox{ and } \m{x} + \m{y} \mbox{ is feasible in } (\ref{Q}).
\end{equation}
As a result, by the mean value theorem,
$f(\m{x}+\m{y}) > f(\m{x})$ when $\m{y}$ is sufficiently small.
Hence, $\m{x}$ is a strict local minimizer.
When $\m{x} + \m{y}$ is feasible in (\ref{Q}), we have
\begin{equation}\label{sign}
y_i \ge 0 \mbox{ for all } i \in \C{L} \mbox{ and }
y_i \le 0 \mbox{ for all } i \in \C{U}.
\end{equation}
By the first-order optimality condition (\ref{KT}),
$\mu_i \ge 0$ for all $i \in \C{L}$ and
$\mu_i \le 0$ for all $i \in \C{U}$.
Hence, we have
\begin{equation}\label{xyz}
(\nabla f(\m{x}) + \lambda\m{1}^{\sf T}) \m{y} =
\g{\mu}^{\sf T} \m{y} = \sum_{i \in \C{L}} \mu_i y_i +
\sum_{i \in \C{U}} \mu_i y_i \ge 0 .
\end{equation}
We now consider three cases.
First, suppose that $\m{1}^{\sf T}\m{y} = 0$ and $\m{y} \ne \m{0}$.
By (C1) ${\cal F}$ is empty and
hence, by (\ref{sign}), $y_i > 0$ for some $i \in \C{L}$ and
$y_j < 0$ for some $j \in \C{U}$.
After adding $\lambda$ to each side in the inequality in (C2),
it follows that either
\begin{equation}\label{e1}
\min_{i \in \C{L}} \mu_i \ge 0 > \max_{j \in \C{U}} \mu_j
\end{equation}
or
\begin{equation}\label{e2}
\min_{i \in \C{L}} \mu_i > 0 \ge \max_{j \in \C{U}} \mu_j .
\end{equation}
Combining (\ref{xyz}) with either (\ref{e1}) or (\ref{e2})
gives $\nabla f(\m{x})\m{y} \ge \mu_i y_i + \mu_j y_j > 0$
since either $\mu_i > 0$ or $\mu_j < 0$, and $y_i > 0 > y_j$.
Second, suppose that $\m{1}^{\sf T}\m{y} \ne 0$ and $\lambda \ne 0$.
To be specific, suppose that $\lambda > 0$.
By complementary slackness, $\m{1}^{\sf T}\m{x} = u$.
Since $\m{x} + \m{y}$ is feasible in (\ref{Q}) and
$\m{1}^{\sf T}\m{y} \ne 0$, we must have $\m{1}^{\sf T}\m{y} < 0$.
Hence, by (\ref{xyz}), $\nabla f(\m{x})\m{y} > 0$.
The case $\lambda < 0$ is similar.
Finally, consider the case $\m{1}^{\sf T} \m{y} \ne 0$ and $\lambda = 0$.
In this case, we must have $l < u$.
If the set $\C{Z}$ in (C3) is empty, then
$\nabla f(\m{x})_i = \mu_i \ne 0$ for all $i$,
and by (\ref{xyz}), $\nabla f(\m{x})\m{y} > 0$.
If $\C{Z} \ne \emptyset$, then by (C3), either
$\m{1}^{\sf T} \m{x} = u$ and $x_i = 0$ for all $i \in \C{Z}$ or
$\m{1}^{\sf T} \m{x} = l$ and $x_i = 1$ for all $i \in \C{Z}$.
To be specific, suppose that
$\m{1}^{\sf T} \m{x} = u$ and $x_i = 0$ for all $i \in \C{Z}$.
Again, since $\m{x} + \m{y}$ is feasible in (\ref{Q}) and
$\m{1}^{\sf T}\m{y} \ne 0$, we have $\m{1}^{\sf T}\m{y} < 0$.
If $\C{U} = \emptyset$, then $\m{x} = \m{0}$ since $\C{F} = \emptyset$.
Since $\m{1}^{\sf T}\m{y} < 0$, we contradict the feasibility of
$\m{x} + \m{y}$.
Hence, $\C{U} \ne \emptyset$.
Since $\m{1}^{\sf T}\m{y} < 0$, there exists $j \in \C{U}$ such that $y_j < 0$.
Since $\C{Z} \subset \C{L}$, it follows from (\ref{e1}) that $\mu_j < 0$.
By (\ref{xyz}) $\nabla f(\m{x})\m{y} \ge \mu_j y_j > 0$.
The case $\m{1}^{\sf T} \m{x} = l$ and $x_i = 1$ for all $i \in \C{Z}$ is similar.
This completes the proof of (\ref{st}), and the corollary has been
established.
\end{proof}
\smallskip
\section{Numerical results}
\label{numerics}
We investigate the performance of the branch and bound algorithm
based on the lower bounds in Section \ref{LowerBound} using a series
of test problems. The codes were written in C and the experiments
were conducted on an Intel Xeon Quad-Core X5355 2.66 GHz computer
using the Linux operating system. Only one of the 4 processors was
used in the experiments. To evaluate the lower bound, we solve
(\ref{Ltau}) by the gradient projection method with an exact
linesearch and Barzilai-Borwein steplength \cite{bb88}. The stopping
criterion in our experiments was
\[
\| P(\m{x}_k - \m{g}_k) - \m{x}_k \| \le 10^{-4},
\]
where $P$ denotes the projection onto the feasible set and $\m{g}_k$
is the gradient of the objective function at $\m{x}_k$.
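To make the bound evaluation concrete, a minimal sketch of this projected-gradient iteration with Barzilai-Borwein steplength and an exact linesearch follows. This is our own illustration for a generic box-constrained quadratic, not the C code used in the experiments; in particular, the box projection merely stands in for the projection onto the actual feasible set of (\ref{Ltau}), and all names are ours.

```python
import numpy as np

def project_box(x, lo=0.0, hi=1.0):
    # Stand-in for P: projection onto the box [lo, hi]^n.  The true
    # feasible set of (Ltau) also carries a budget constraint; the box
    # alone keeps this sketch short.
    return np.clip(x, lo, hi)

def grad_proj_bb(Q, c, x0, tol=1e-4, max_iter=1000):
    """Projected gradient with Barzilai-Borwein steplength and an exact
    linesearch for the quadratic f(x) = 0.5 x'Qx + c'x."""
    x = np.asarray(x0, dtype=float).copy()
    g = Q @ x + c
    alpha = 1.0
    for _ in range(max_iter):
        # stopping test from the text: || P(x - g) - x || <= tol
        if np.linalg.norm(project_box(x - g) - x) <= tol:
            break
        d = project_box(x - alpha * g) - x          # projected BB direction
        qd = d @ (Q @ d)
        # exact linesearch along d, truncated to the unit step
        t = 1.0 if qd <= 0 else min(1.0, max(0.0, -(g @ d) / qd))
        x_new = x + t * d
        g_new = Q @ x_new + c
        s, y = x_new - x, g_new - g
        sy = s @ y
        alpha = (s @ s) / sy if sy > 0 else 1.0     # BB1 steplength
        x, g = x_new, g_new
    return x
```

On a separable convex quadratic such as $f(x)=\sum_i (x_i^2 - x_i)$ over the unit box, the iteration reaches the minimizer $x_i = 1/2$ in a couple of steps.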
The solution of the semidefinite programming problem (\ref{sdp})
was obtained using Version 6.0.1 of the CSDP code
\cite{Borchers99} available at
\begin{center}
\medskip
$ \mbox{https://projects.coin-or.org/Csdp/} $
\medskip
\end{center}
We compare the performance of our algorithm with results reported by
Karisch, Rendl, and Clausen in \cite{Karisch00} and by Sensen in
\cite{Sensen01}. Since these earlier results were obtained on
different computers, we obtained estimates for the corresponding
running time on our computer using the LINPACK benchmarks
\cite{Dongarra08}. Since our computer is roughly 30 times faster
than the HP~9000/735 used in \cite{Karisch00} and it is roughly 7
times faster than the Sun UltraSPARC-II 400 MHz machine used in
\cite{Sensen01}, the earlier CPU times were divided by 30 and 7
respectively to obtain the estimated running time on our computer.
Note that the same interior-point algorithm that we use, which is
the main routine in the CSDP code, was used to solve the
semidefinite relaxation in \cite{Karisch00}.
The test problems were based on the graph bisection problem
where $l = u = n/2$.
Two different data sets were used for the $\m{A}$ matrices
in the numerical experiments.
Most of the test problems came from the library of
Brunetta, Conforti, and Rinaldi \cite{bcr97} which is available at
\begin{center}
\medskip $\mbox{ftp://ftp.math.unipd.it/pub/Misc/equicut}$.
\medskip
\end{center}
Some of the test matrices were from the UF Sparse Matrix
Library maintained by Timothy Davis:
\begin{center}
\medskip $ \mbox{http://www.cise.ufl.edu/research/sparse/matrices/} $
\medskip
\end{center}
Since this second set of matrices is not directly connected with graph
partitioning, we create an $\m{A}$ for graph partitioning as follows:
If the matrix $\m{S}$ from the library was symmetric,
then $\m{A}$ was the adjacency matrix defined as follows:
the diagonal of $\m{A}$ is zero,
$a_{ij} = 1$ if $s_{ij} \ne 0$, and $a_{ij} = 0$ otherwise.
If $\m{S}$ was not symmetric,
then $\m{A}$ was the adjacency matrix of $\m{S}^{\sf T}\m{S}$.
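This construction can be sketched compactly (an illustrative helper with names of our own; `S` is a square array from the library):

```python
import numpy as np

def adjacency_from_library_matrix(S):
    """Build the 0/1 adjacency matrix A used in the experiments:
    if S is symmetric, a_ij = 1 iff s_ij != 0 (for i != j);
    otherwise the same rule is applied to S^T S."""
    S = np.asarray(S, dtype=float)
    if not np.allclose(S, S.T):
        S = S.T @ S              # non-symmetric case: use S^T S
    A = (S != 0).astype(int)     # binarize the nonzero pattern
    np.fill_diagonal(A, 0)       # zero diagonal
    return A
```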
\subsection{Lower bound comparison}
Our numerical study begins with a comparison of the
lower bound of Section \ref{mineig} based on the minimum
eigenvalue of $\m{A}+\m{D}$ and the best affine underestimate,
and the lower bound of Section \ref{SDP} based on semidefinite programming.
We label these two lower bounds $LB_1$ and $LB_2$ respectively.
In Table \ref{tab1}, the first 5 graphs correspond to matrices
from the UF Sparse Matrix Library, while the next 5 graphs were from
the test set of Brunetta, Conforti, and Rinaldi.
The column labeled ``Opt'' is the minimum cut, while $n$ is the problem
dimension.
The numerical results indicate that the lower bound $LB_2$ based on
semidefinite programming is generally better (larger) than $LB_1$.
In Table \ref{tab1} the best lower bound is highlighted in bold.
Based on these results,
we use the semidefinite programming-based lower bound in the
numerical experiments which follow.
\begin{table}[h!]
\caption{Comparison of two lower bounds}
\begin{center}
\begin{tabular}
{|l|r|r|r|r|}
\hline Graph & $n$ & $LB_1$ & $LB_2$ & Opt \\
\hline Tina\_Discal & 11 & 0.31 & {\bf 0.86} & 12 \\
\hline jg1009 & 9 & 1.55 & {\bf 1.72} & 16 \\
\hline jg1011 & 11 & {\bf 1.48} & 0.94 & 24 \\
\hline Stranke94 & 10 & 1.76 & {\bf 1.77} & 24 \\
\hline Hamrle1 & 32 & -1.93 & {\bf 1.12} & 17 \\
\hline 4x5t & 20 & -21.71 & {\bf 5.43} & 28 \\
\hline 8x5t & 40 & -16.16 & {\bf 2.91} & 33 \\
\hline t050 & 30 & 0.90 & {\bf 18.54} & 397 \\
\hline 2x17m & 34 & {\bf 1.33} & 1.27 & 316 \\
\hline s090 & 60 & -9.84 & {\bf 13.10} & 238 \\
\hline
\end{tabular}
\label{tab1}
\end{center}
\end{table}
\subsection{Algorithm performance}
Unless stated otherwise,
the remaining test problems
came from the library of Brunetta, Conforti, and Rinaldi \cite{bcr97}.
Table \ref{tab6} gives results for matrices associated with the
finite element method \cite{DeSouza94}.
The three methods are labeled CQB (our convex quadratic branch and bound
algorithm), KRC (algorithm of Karisch, Rendl, and Clausen \cite{Karisch00}),
and SEN (algorithm of Sensen \cite{Sensen01}).
``$n$'' is the problem dimension,
``\%'' is the percent of nonzeros in the matrix, and
``$\#$~nodes'' is the number of nodes in the branch and bound tree.
The CPU time is given in seconds.
The best time is highlighted in bold.
As can be seen in Table \ref{tab6}, CQB was fastest in 6 out of the
10 problems even though the number of nodes in the branch and bound
tree was much larger.
Thus both KRC and SEN provided much tighter relaxations;
however, the time to solve their relaxed problems was much larger than
the time needed to optimize our convex quadratics.
\begin{table}[b]
\caption{Mesh Instances}
\begin{center}
\begin{tabular}{|l|c|c|rr|rr|rr|}
\hline
\multicolumn{3}{|c}{} & \multicolumn{2}{c}{CQB} & \multicolumn{2}{c}{KRC} &
\multicolumn{2}{c|}{SEN} \\
\hline
graph & $n$ & $\%$ &{\small$\#$}nodes & time &{\small $\#$}nodes & time &{\small$\#$}nodes & time \\
\hline
m4 & 32 & 10 & 22 & 0.05 & 1 & {\bf 0.03} & 1 & 0.14\\
ma & 54 & 5 & 8 & 0.16 & 1 & {\bf 0.10} & 1 & 0.28\\
me & 60 & 5 & 13 & 0.20 & 1 & {\bf 0.13} & 1 & 0.28\\
m6 & 70 & 5 & 205 & {\bf 0.47} & 1 & 1.23 & 1 & 1.43\\
mb & 74 & 4 & 95 & {\bf 0.43} & 1 & 0.98 & 1 & 1.14\\
mc & 74 & 5 & 412 & {\bf 0.52} & 1 & 1.53 & 1 & 1.43\\
md & 80 & 4 & 101 & {\bf 0.55} & 1 & 0.96 & 1 & 1.28\\
mf & 90 & 4 & 99 & {\bf 0.79} & 1 & 0.80 & 1 & 1.85\\
m1 & 100 & 3 & 200 & {\bf 1.04} & 15 & 36.50 & 1 & 3.00\\
m8 & 148 & 2 & 3516 & 6.62 & 1 & 10.70 & 1 & {\bf 4.14}\\
\hline
\end{tabular}
\label{tab6}
\end{center} \end{table}
Table \ref{tab7} gives results for
compiler design problems \cite{Ferreira98,JMN93}.
For this test set, KRC was fastest in 3 out of 5 test problems.
Note though that the times for CQB were competitive with those of KRC.
\begin{table}[h!b!p!]
\caption{Compiler Design}
\begin{center}
\begin{tabular}{|l|c|c|rr|rr|rr|}
\hline
\multicolumn{3}{|c}{} & \multicolumn{2}{c}{CQB} & \multicolumn{2}{c}{KRC} &
\multicolumn{2}{c|}{SEN} \\
\hline
graph & $n$ & $\%$ &{\small$\#$}nodes & time &{\small $\#$}nodes & time &{\small$\#$}nodes & time \\
\hline
cd30 & 30 & 13 & 11 & 0.05 & 1 & 0.03 & 1 & {\bf 0.00}\\
cd45 & 45 & 10 & 35 & 0.27 & 1 & {\bf 0.23} & 1 & 0.57\\
cd47a & 47 & 9 & 45 & 0.34 & 1 & {\bf 0.33} & 7 & 1.00\\
cd47b & 47 & 9 & 67 & {\bf 0.29} & 35 & 3.73 & 3 & 1.43\\
cd61 & 61 & 10 & 95 & 0.86 & 1 & {\bf 0.67} & 6 & 6.00\\
\hline
\end{tabular}
\label{tab7}
\end{center}
\end{table}
Table \ref{tab8} gives results for binary de Bruijn graphs which
arise in applications related to parallel computer architecture
\cite{Collins92,Feldmann97}. These graphs are constructed by the
following procedure. We first build a directed graph using the
Mathematica command:
\begin{center}
\medskip
\verb"A = TableForm[ToAdjacencyMatrix[DeBruijnGraph[2, n]]]"
\medskip
\end{center}
To obtain the graph partitioning test problem, we add the
Mathematica-generated matrix to its transpose and set the diagonal
to 0. For this test set, SEN had by far the best performance.
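The same matrices can be built without Mathematica. The sketch below is our own, assuming the standard integer encoding in which vertex $u$ of the binary de Bruijn digraph on $2^n$ vertices has successors $2u \bmod 2^n$ and $2u \bmod 2^n + 1$; it then applies the symmetrization step described above.

```python
import numpy as np

def debruijn_test_matrix(n):
    """Adjacency matrix of the directed binary de Bruijn graph on 2**n
    vertices, symmetrized as in the text: A = M + M^T, zero diagonal."""
    N = 2 ** n
    M = np.zeros((N, N), dtype=int)
    for u in range(N):
        M[u, (2 * u) % N] = 1        # shift in a 0
        M[u, (2 * u) % N + 1] = 1    # shift in a 1
    A = M + M.T
    np.fill_diagonal(A, 0)
    return A
```

Note that $A = M + M^{\sf T}$ can contain entries equal to $2$ when an arc is present in both directions.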
\begin{table}[h!b!p!]
\caption{de Bruijn Networks}
\begin{center}
\begin{tabular}{|l|c|c|rr|rr|rr|}
\hline
\multicolumn{3}{|c}{} & \multicolumn{2}{c}{CQB} & \multicolumn{2}{c}{KRC} &
\multicolumn{2}{c|}{SEN} \\
\hline
graph & $n$ & $\%$ &{\small$\#$}nodes & time &{\small $\#$}nodes & time &{\small$\#$}nodes & time \\
\hline
debr5 & 32 & 12 & 57 & 0.11 & 3 & 0.20 & 1 & {\bf 0.00}\\
debr6 & 64 & 6 & 7327 & 2.25 & 55 & 15.63 & 1 & {\bf 1.00}\\
debr7 & 128 & 3 & 16140945 & 1:22:45 & 711 & 46:36 & 1 & {\bf 10.28}\\
\hline
\end{tabular}
\label{tab8}
\end{center}
\end{table}
Table \ref{tab2} gives results for toroidal grid graphs.
These graphs are associated with an $h \times k$ grid;
the number of vertices in the graph is $n = hk$ and
there are $2hk$ edges whose weights are chosen from a
uniform distribution on the interval $[1, 10]$.
Since Sensen did not solve this test set
or the remaining test sets, we now compare only CQB and KRC.
We see in Table \ref{tab2} that CQB was faster than KRC in
9 of the 10 toroidal grid cases.
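An instance generator along these lines can be sketched as follows. This is an illustration only, not the generator used for the \cite{bcr97} instances, and we assume $h,k\ge 3$ so that the $2hk$ wraparound edges are distinct.

```python
import random

def toroidal_grid(h, k, seed=0):
    """Weighted h x k toroidal grid: n = h*k vertices and 2*h*k edges
    (right and down neighbors with wraparound), with integer weights
    drawn uniformly from [1, 10]."""
    rng = random.Random(seed)
    n = h * k
    edges = {}
    for r in range(h):
        for c in range(k):
            u = r * k + c
            for v in (r * k + (c + 1) % k,       # right neighbor
                      ((r + 1) % h) * k + c):    # down neighbor
                edges[(min(u, v), max(u, v))] = rng.randint(1, 10)
    return n, edges
```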
\begin{table}[h!b!p!]
\caption{Toroidal Grid: a weighted $h \times k$ grid with $hk$ vertices
and $2hk$ edges that received integer weights uniformly drawn from [1,10]}
\begin{center}
\begin{tabular}{|l|c|c|rr|rr|}
\hline
\multicolumn{3}{|c}{} & \multicolumn{2}{c}{CQB} & \multicolumn{2}{c|}{KRC} \\
\hline
graph & $n$ & $\%$ &{\small$\#$}nodes & time &{\small $\#$}nodes & time \\
\hline
4x5t & 20 & 21 & 13 & {\bf 0.01} & 1 & 0.03 \\
6x5t & 30 & 14 & 46 & {\bf 0.05} & 1 & 0.10 \\
8x5t & 40 & 10 & 141 & {\bf 0.16} & 1 & 0.20 \\
21x2t & 42 & 10 & 18 & {\bf 0.02} & 1 & 0.17 \\
23x2t & 46 & 9 & 78 & {\bf 0.15} & 33 & 4.16 \\
4x12t & 48 & 9 & 69 & {\bf 0.17} & 3 & 0.56 \\
5x10t & 50 & 8 & 129 & 0.24 & 1 & {\bf 0.20} \\
6x10t & 60 & 7 & 992 & {\bf 0.54} & 43 & 11.66 \\
7x10t & 70 & 6 & 844 & {\bf 0.68} & 47 & 19.06 \\
10x8t & 80 & 5 & 420 & {\bf 0.91} & 45 & 31.46 \\
\hline
\end{tabular}
\label{tab2}
\end{center}
\end{table}
Table \ref{tab3} gives results for mixed grid graphs.
These are complete graphs associated with a planar
$h \times k$ grid; the edges in the planar grid received integer
weights uniformly drawn from [1,100], while all the other edges needed to
complete the graph received integer weights uniformly drawn from [1,10].
For these graphs, KRC was much faster than CQB.
Notice that the graphs in this test set are completely dense.
One trend that is seen in these numerical experiments is that as
the graph density increases, the performance of CQB relative to
the other methods degrades.
\begin{table}[h!b!p!]
\caption{Mixed Grid Graphs}
\begin{center}
\begin{tabular}{|l|c|c|rr|rr|}
\hline
\multicolumn{3}{|c}{} & \multicolumn{2}{c}{CQB} & \multicolumn{2}{c|}{KRC} \\
\hline
graph & $n$ & $\%$ &{\small$\#$}nodes & time &{\small $\#$}nodes & time \\
\hline
2x10m & 20 & 100 & 150 & {\bf 0.03} & 1 & {\bf 0.03} \\
6x5m & 30 & 100 & 2476 & 0.20 & 1 & {\bf 0.03} \\
2x17m & 34 & 100 & 42410 & 2.12 & 21 & {\bf 0.96} \\
10x4m & 40 & 100 & 51713 & 3.74 & 2 & {\bf 0.06} \\
5x10m & 50 & 100 & 3588797 & 296.19 & 1 & {\bf 0.06} \\
\hline
\end{tabular}
\label{tab3}
\end{center}
\end{table}
Results for planar grid graphs are given in Table \ref{tab4}.
These graphs are associated with an $h \times k$ grid.
There are $hk$ vertices and $2hk - h - k$ edges whose
weights are integers uniformly drawn from [1,10].
For this relatively sparse test set,
CQB was faster in 7 out of 10 problems.
\begin{table}[h!b!p!]
\caption{Planar Grid}
\begin{center}
\begin{tabular}{|l|c|c|rr|rr|}
\hline
\multicolumn{3}{|c}{} & \multicolumn{2}{c}{CQB} & \multicolumn{2}{c|}{KRC} \\
\hline
graph & $n$ & $\%$ &{\small$\#$}nodes & time &{\small $\#$}nodes & time \\
\hline
10x2g & 20 & 15 & 10 & {\bf 0.01} & 1 & 0.03 \\
5x6g & 30 & 11 & 44 & {\bf 0.05} & 1 & 0.10 \\
2x16g & 32 & 9 & 23 & {\bf 0.06} & 1 & 0.13 \\
18x2g & 36 & 8 & 19 & 0.08 & 1 & {\bf 0.06} \\
2x19g & 38 & 8 & 53 & {\bf 0.29} & 49 & 1.83 \\
5x8g & 40 & 9 & 24 & 0.08 & 1 & {\bf 0.06} \\
3x14g & 42 & 8 & 31 & {\bf 0.14} & 5 & 0.60 \\
5x10g & 50 & 7 & 178 & 0.34 & 1 & {\bf 0.30} \\
6x10g & 60 & 6 & 224 & {\bf 0.35} & 57 & 10.63 \\
7x10g & 70 & 5 & 271 & {\bf 0.63} & 61 & 18.56 \\
\hline
\end{tabular}
\label{tab4}
\end{center}
\end{table}
Table \ref{tab5} gives results for randomly generated graphs.
For these graphs, the density is first fixed and then the edges
are assigned integer weights uniformly drawn from [1,10].
For this test set, CQB is fastest in 11 of 20 cases.
Again, observe that the relative performance of CQB degrades as
the density increases, mainly due to the large number of nodes in
the branch and bound tree.
\begin{table}[h!b!p!]
\caption{Randomly Generated Graphs}
\begin{center}
\begin{tabular}{|l|c|c|rr|rr|}
\hline
\multicolumn{3}{|c}{} & \multicolumn{2}{c}{CQB} & \multicolumn{2}{c|}{KRC} \\
\hline
graph & $n$ & $\%$ &{\small$\#$}nodes & time &{\small $\#$}nodes & time \\
\hline
v090 & 20 & 10 & 12 & {\bf 0.01} & 1 & 0.03 \\
v000 & 20 & 100 & 952 & {\bf 0.02} & 1 & 0.03 \\
t090 & 30 & 10 & 10 & 0.05 & 1 & {\bf 0.03} \\
t050 & 30 & 50 & 5081 & {\bf 0.32} & 17 & 0.73 \\
t000 & 30 & 100 & 122670 & 3.79 & 3 & {\bf 0.20} \\
q090 & 40 & 10 & 89 & 0.14 & 1 & {\bf 0.13} \\
q080 & 40 & 20 & 914 & {\bf 0.24} & 31 & 2.30 \\
q030 & 40 & 70 & 554652 & 32.23 & 23 & {\bf 2.06} \\
q020 & 40 & 80 & 1364517& 72.58 & 7 & {\bf 0.83} \\
q010 & 40 & 90 & 4344123& 217.16 & 13 & {\bf 1.36} \\
q000 & 40 & 100 & 8186984& 380.72 & 1 & {\bf 0.13} \\
c090 & 50 & 10 & 397 & {\bf 0.29} & 1 & 0.33 \\
c080 & 50 & 20 & 14290 & {\bf 2.20} & 45 & 6.13 \\
c070 & 50 & 30 & 136290 & 15.70 & 49 & {\bf 8.06} \\
c030 & 50 & 70 & 22858729&2756.26 & 51 & {\bf 5.46} \\
c290 & 52 & 10 & 340 & {\bf 0.34} & 1 & 0.40 \\
c490 & 54 & 10 & 1443 & {\bf 0.54} & 15 & 3.30 \\
c690 & 56 & 10 & 3405 & {\bf 0.82} & 3 & 1.00 \\
c890 & 58 & 10 & 13385 & {\bf 2.66} & 71 & 17.53 \\
s090 & 60 & 10 & 8283 & {\bf 2.01} & 37 & 9.90 \\
\hline
\end{tabular}
\label{tab5}
\end{center}
\end{table}
\section{Conclusions}
\label{conclusions}
An exact algorithm is presented for solving the graph partitioning
problem with upper and lower bounds on the size of each set
in the partition.
The algorithm is based on a continuous quadratic programming
formulation of the discrete partitioning problem.
We show how to transform a feasible $\m{x}$ for
the graph partitioning QP (\ref{Q}) to a binary feasible point $\m{y}$
with an objective function value which satisfies $f(\m{y}) \le f(\m{x})$.
The binary feasible point corresponds to a partition of the graph
vertices and $f(\m{y})$ is the weight of the cut edges.
At any stationary point of (\ref{Q}) which is not a local minimizer,
Proposition \ref{descent_direction} provides a descent direction that
can be used to strictly improve the objective function value.
In the branch and bound algorithm,
the objective function is decomposed into the sum of
a convex and a concave part.
A lower bound for the objective function is achieved by replacing
the concave part by an affine underestimate.
Two different decompositions were considered, one based on the
minimum eigenvalue of the matrix in the objective function,
and the other based on the solution to a semidefinite programming problem.
The semidefinite programming approach generally led to much tighter
lower bounds.
In a series of numerical experiments, the new algorithm CQB
(convex quadratic branch and bound) was
competitive with state-of-the-art partitioning methods;
the relative performance of CQB was better for sparse graphs
than for dense graphs.
\input{paper.bbl}
\end{document}
\section{#1} }\newcommand {\rebibitem}[1] {\bibitem{#1} \red{[*: #1]}
\def\relabel {\label} \def\resection {\section}\def\rebibitem {\bibitem}
\def \modtwo {{\ (\mbox {mod}\ 2)}}
\def \C {{\cal C}}
\def \P {{\cal P}}
\def \Q {{\cal Q}}
\def \C {{\cal C}}
\def \E {{\cal E}}
\def \I {{\cal I}}
\def \R {{\cal R}}
\def \E {{\cal E}}
\def \L {{\cal L}}
\def \M {{\cal M}}
\def \N {{\cal N}}
\def \X {{\cal X}}
\def \ssim {\stackrel s\thicksim}
\def \nssim {\stackrel s\nsim}
\def \iff {if and only if }
\def \ED {{\cal E}{\cal D}}
\def \F {{\cal F}}
\def \nF {\bar {\cal F}}
\def \modtwo {\ (\mbox{mod } 2)}
\newcommand \Union[2]
{
\mbox{Union}_{#1}(#2)
}
\begin{document}
\title{On non-feasible edge sets in matching-covered graphs}
\author{Xiao Zhao\thanks{Corresponding author. Department of Mathematics, Harbin Institute of Technology, Harbin 150001, China. Email: [email protected]},
Fengming Dong\footnote{National Institute of Education,
Nanyang Technological University,
Singapore. Email: [email protected]},
Sheng Chen\footnote{
Department of Mathematics, Harbin Institute of Technology, Harbin 150001, China. Email: [email protected]}}
\date{November 28, 2018}
\maketitle
\markboth{On non-feasible edge sets in matching-covered graphs}
{}
\renewcommand{\sectionmark}[1]{}
\begin{abstract}
Let $G=(V,E)$ be a matching-covered graph and
$X$ be an edge set of $G$.
$X$ is said to be feasible if there
exist two perfect matchings $M_1$ and $M_2$ in $G$
such that $|M_1\cap X|\not \equiv|M_2\cap X|\ (\mbox{mod } 2)$.
For any $V_0\subseteq V$, $X$ is said to be
switching-equivalent to $X\oplus \nabla_G(V_0)$,
where $\nabla_G(V_0)$ is the set of edges in $G$
each of which has exactly one end in $V_0$
and $A \oplus B$ is
the symmetric difference of two sets $A$ and $B$.
Lukot'ka and Rollov\'a
showed that
when $G$ is regular and bipartite,
$X$ is non-feasible if and only if
$X$ is switching-equivalent to $\emptyset$.
This article extends Lukot'ka and Rollov\'a's result by showing that this conclusion
holds as long as $G$ is matching-covered and bipartite.
This article also studies matching-covered graphs $G$
whose non-feasible edge sets are switching-equivalent to
$\emptyset$ or $E$ and partially characterizes these matching-covered
graphs in terms of their ear decompositions.
Another aim of this article is to construct infinitely many
$r$-connected and $r$-regular graphs of class 1
containing non-feasible edge sets not switching-equivalent to either $\emptyset$ or $E$ for an arbitrary integer $r$ with $r\ge 3$,
which provides negative answers to problems
asked by Lukot'ka and Rollov\'a
and by He et al.,
respectively.
\end{abstract}
\section{Introduction and Preliminary}
This article studies
finite and undirected loopless graphs.
Let $G=(V,E)$ be a graph.
A {\it perfect matching} of $G$ is a set of independent edges which covers all vertices of $G$.
$G$ is said to be {\it matching-covered}
if it is connected and each edge of $G$ is contained in
some perfect matching of $G$.
It is not difficult to verify that
any regular graph of class 1 is matching-covered.
For a matching-covered graph $G$,
an edge set $X$ of $G$ is
said to be {\it feasible}
if $G$ has two perfect matchings $M_1$ and $M_2$ such that
$|M_1\cap X|\not\equiv|M_2\cap X|\ (\mbox{mod } 2)$ holds.
Thus an edge set $X$ of $G$ is non-feasible
\iff $|M_1\cap X|\equiv|M_2\cap X|\ (\mbox{mod } 2)$
holds for every pair of perfect matchings
$M_1$ and $M_2$ of $G$.
For example, $E$ and $\emptyset$ are non-feasible edge sets
of $G$.
In Theorem~\ref{main2},
we extend the definition of a feasible edge set
to connected graphs which are not matching-covered.
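On small graphs this definition can be tested directly by enumerating all perfect matchings and comparing their intersection parities with $X$; the brute-force sketch below is our own illustration, with edges represented as ordered pairs $(u,v)$.

```python
def perfect_matchings(vertices, edges):
    """Generate all perfect matchings of a small graph (brute force)."""
    vertices = sorted(vertices)
    if not vertices:
        yield frozenset()
        return
    v = vertices[0]
    for e in edges:
        if v in e:  # branch on the edge covering the smallest vertex
            rest = [w for w in vertices if w not in e]
            rest_edges = [f for f in edges if f[0] in rest and f[1] in rest]
            for m in perfect_matchings(rest, rest_edges):
                yield m | {e}

def is_feasible(vertices, edges, X):
    """X is feasible iff two perfect matchings meet X with different parity."""
    parities = {len(m & frozenset(X)) % 2
                for m in perfect_matchings(vertices, edges)}
    return len(parities) == 2
```

On the 4-cycle, whose only perfect matchings are the two pairs of opposite edges, a single edge is feasible while the full edge set and $\emptyset$ are not.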
For any $V_0\subseteq V$,
let $\nabla_G(V_0)$ be the set of edges in $G$
each of which has exactly one end in $V_0$.
For any vertex $v$ in $G$,
$\nabla_G(\{v\})$ is exactly the set of edges in $G$
which are
incident with $v$.
For any $X,Y\subseteq E$, $X$ and $Y$
are called
{\it switching-equivalent,}
denoted by $X\ssim_G Y$,
if $X=Y\oplus \nabla_G(V_0)$ holds for a set $V_0$
of vertices in $G$,
where $A\oplus B$ is
the symmetric difference of two sets $A$ and $B$,
i.e., $A\oplus B=(A-B)\cup (B-A)$.
Let $X\nssim_G Y$ denote the case when
edge sets $X$ and $Y$ are not switching-equivalent in $G$.
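On small graphs, whether $X\ssim_G Y$ holds can be decided by brute force over all vertex subsets $V_0$; the sketch below is our own illustration (edges as ordered pairs $(u,v)$).

```python
from itertools import combinations

def nabla(edges, V0):
    """Edges of G with exactly one end in V0."""
    V0 = set(V0)
    return {e for e in edges if (e[0] in V0) != (e[1] in V0)}

def switching_equivalent(vertices, edges, X, Y):
    """Brute-force test of X ~ Y: is X = Y (+) nabla_G(V0) for some V0?"""
    X, Y = set(X), set(Y)
    for r in range(len(vertices) + 1):
        for V0 in combinations(vertices, r):
            if X == Y ^ nabla(edges, V0):   # ^ is symmetric difference
                return True
    return False
```

For instance, in the 4-cycle the two edges incident with one vertex form $\nabla_G(\{v\})$ and are therefore switching-equivalent to $\emptyset$, while a single edge is not (every cut of a cycle is even).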
Lukot'ka and Rollov\'a \cite{15}
proved that the property ``being feasible"
is invariant to switching-equivalent
edge sets.
\begin{thm}[\cite{15}]\relabel{SEP}
Let $G$ be a matching-covered graph
and $X$ and $Y$ be edge subsets of $G$.
If $X\ssim_G Y$,
then $X$ is feasible \iff $Y$ is feasible.
\end{thm}
For a matching-covered graph $G=(V,E)$,
let $\F(G)$ be the set of feasible edge sets of $G$
and let $\nF(G)$ be the set of non-feasible edge sets of $G$.
Thus $\F(G)\cup \nF(G)$ is the power set of $E$.
Clearly $\{\emptyset, E\}\subseteq \nF(G)$.
Theorem~\ref{SEP} implies that
$\{X\subseteq E: X\ssim_G \emptyset\} \subseteq \nF(G)$
and $\{X\subseteq E: X\ssim_G E\}\subseteq \nF(G)$.
For bipartite and regular graphs,
Lukot'ka and Rollov\'a~\cite{15} obtained the following
conclusion, stated here in the notation of this article.
\begin{thm}[\cite{15}]
\relabel{luk}
If $G$ is a bipartite and regular graph, then
$\nF(G) = \{X\subseteq E: X\ssim_G \emptyset\}$.
\end{thm}
Note that any bipartite and regular graph is matching-covered,
because any bipartite graph is a class 1 graph (see \cite{Konig})
and any regular graph of class 1 is matching-covered.
In this article, we will extend Theorem~\ref{luk}
as stated below.
\begin{thm}\relabel{main1}
Let $G=(V,E)$ be a matching-covered graph.
Then the following statements are equivalent:
\begin{enumerate}
\item $G$ is bipartite;
\item
$\nF(G) = \{X\subseteq E: X\ssim_G \emptyset\}$;
\item $\nF(G) = \{X\subseteq E: X\ssim_G E\}$.
\end{enumerate}
\end{thm}
For any matching-covered graph $G=(V,E)$,
$\{X\subseteq E: X\ssim_G \emptyset
\mbox{ or } X\ssim_G E\}$
is a subset of $\nF(G)$.
Let $\nF^*(G)=\nF(G)-\{X\subseteq E: X\ssim_G \emptyset
\mbox{ or } X\ssim_G E\}$.
Then $\nF^*(G)=\emptyset$ holds \iff
$X\ssim_G \emptyset$ or
$X\ssim_G E$ holds for each $X\in \nF(G)$.
It is natural to ask when $\nF^*(G)=\emptyset$ holds.
By Theorem~\ref{main1}, it holds if $G$ is bipartite
and matching-covered.
But there exist non-bipartite matching-covered graphs
with this property.
For example, $K_4$ is such a graph.
For a subgraph $G'$ of $G$, a {\it single ear} of $G'$
is a path $P$ of $G$ with an odd length
such that both ends of $P$ are in $G'$
but its internal vertices are distinct from
vertices in $G'$.
A {\it double ear} of $G'$ is a pair of
vertex disjoint single ears of $G'$.
An ear of $G'$ means a single ear or a double ear of $G'$.
An {\it ear decomposition}
of a matching-covered graph $G$ is a sequence
$$
G_0\subset G_1\subset\cdots\subset G_r=G
$$
of matching-covered subgraphs of $G$,
where (i) $G_0=K_2,$ and (ii)
for each $i$ with $1\leq i\leq r$,
$G_{i}$ is the union of $G_{i-1}$ and
an ear (single or double) of $G_{i-1}$.
For $i=1,2,\cdots,r$,
let $\epsilon(G_{i-1},G_i)\in \{1,2\}$
such that $\epsilon(G_{i-1},G_i)=1$
\iff $G_i$ is the union of $G_{i-1}$ and a single ear.
A very important result in the study of matching-covered
graphs is the existence of an ear decomposition
for each matching-covered graph, due to
Lov\'asz and Plummer \cite{19}.
Our second aim in this article is to
establish the following conclusions
on matching-covered graphs $G$
with $\nF^*(G)=\emptyset$,
based on ear decompositions of matching-covered graphs.
\begin{thm}\relabel{main2}
Let $G=(V,E)$ be a matching-covered graph
with an ear decomposition
$G_0\subset G_1\subset \cdots \subset G_r=G$,
where $r\ge 1$.
\begin{enumerate}
\item\relabel{main2-n1} If $\nF^*(G_{r-1})=\emptyset$ and
$\epsilon(G_{r-1},G_r)=1$,
then $\nF^*(G)=\emptyset$;
\item\relabel{main2-n2} if $\sum_{1\le i\le r}\epsilon(G_{i-1},G_i)\le r+1$,
then $\nF^*(G)=\emptyset$ holds;
\item\relabel{main2-n3} if
$\sum_{1\le i\le r}\epsilon(G_{i-1},G_i)\ge r+2$
and $\epsilon(G_{r-1},G_r)=2$,
then $\nF^*(G)\ne \emptyset$;
\item\relabel{main2-n4} if $\sum_{1\le i\le r}\epsilon(G_{i-1},G_i)\ge r+2$
and $\epsilon(G_{r-1},G_r)=1$, then
$\nF^*(G)= \emptyset$
\iff
$X\cap E(G_{r-1}-\{u,v\})$ is feasible in
the subgraph $G_{r-1}-\{u,v\}$
for each $X\in \nF^*(G_{r-1})$,
where $E(H)$ is the edge set of a graph $H$
and $u,v$ are the two ends of the single ear $P_r$
added to $G_{r-1}$ for obtaining $G_r$.
\end{enumerate}
\end{thm}
Note that the graph $G_{r-1}-\{u,v\}$
in Theorem~\ref{main2}~\ref{main2-n4}
is the graph obtained from $G_{r-1}$ by
deleting $u$ and $v$
and may not be matching-covered
although it contains perfect matchings.
By definition, $X'=X\cap E(G_{r-1}-\{u,v\})$ is feasible in
$G_{r-1}-\{u,v\}$ if there exist two perfect matchings
$N_1$ and $N_2$ in $G_{r-1}-\{u,v\}$ such that
$|N_1\cap X'|\not \equiv |N_2\cap X'|\modtwo$ holds.
Lukot'ka and Rollov\'a \cite{15}
noticed that $\nF^*(P)\ne \emptyset$ holds
for the Petersen graph $P$,
which is a class 2 graph,
and asked the following problem on regular graphs of class 1,
stated here in the notation of this article.
\begin{prob}\relabel{prob1}
Does $\nF^*(G)=\emptyset$ hold for
each regular graph $G$ of class 1?
\end{prob}
A negative answer to this problem was provided by
He et al.\ \cite{wei} who showed that
for any $k\ge 3$, there exist infinitely many $k$-regular
graphs $G$ of class 1
with an arbitrary large equivalent edge set belonging to $\nF^*(G)$,
where a non-empty edge set $S$ of $G$
is called an {\it equivalent set}
if $S\cap M=\emptyset$ or $S\cap M=S$ holds
for all perfect matchings $M$ of $G$.
The graphs constructed in \cite{wei}
giving a negative answer to Problem~\ref{prob1}
are not 3-connected and the following problem was further asked
in \cite{wei}.
\begin{prob}\relabel{prob2}
Does Problem~\ref{prob1} hold
for $3$-connected and
$r$-regular graphs $G$ with $r\ge 3$?
\end{prob}
In Section~\ref{sec5}, we will provide negative answers
to both Problems~\ref{prob1} and~\ref{prob2} by
two constructions of
$r$-regular graphs $G$ of class 1 with $\nF^*(G)\ne \emptyset$.
\iffalse
In Subsection~\ref{sec5-2}, the graphs constructed
are $4$-connected and in Subsection~\ref{sec5-3},
the graphs constructed are
$r$-connected for an arbitrary odd number $r$ with $r\ge 3$.
\fi
\begin{thm}\relabel{main3}
For any integer $r\ge 3$,
there are infinitely many $r$-connected and
$r$-regular graphs $G$ of class 1
with $\nF^*(G)\ne \emptyset$.
\end{thm}
\section{Preliminary results on
$X\subseteq E$ with $X\ssim_G \emptyset$
or $X\ssim_G E$
\relabel{sec2}}
Let $G=(V,E)$ be any connected graph,
which need not be matching-covered.
By definition,
for any subset $U\subseteq V$,
$\nabla_G(U)$ is the set
$\{e\in E: e$
joins a vertex in $U$
and a vertex in $V-U\}$.
With the notation $\nabla_G(U)$,
an edge set $X$ of $G$ with the property that
$X\ssim_G \emptyset$
or $X\ssim_G E$ has the following
characterization due to He et al.\ \cite{wei}.
\begin{pro}[\cite{wei}]\relabel{pro2-1}
Let $G=(V,E)$ be a connected graph and $X\subseteq E$.
Then
\begin{enumerate}
\item $X\ssim_G \emptyset$ \iff
$X=\nabla _G(U)$ for some $U\subseteq V$;
\item $X\ssim_G E$ \iff
$E(G)-X=\nabla _G(U)$ for some $U\subseteq V$.
\end{enumerate}
\end{pro}
Proposition~\ref{pro2-1} implies the following corollary
immediately.
For any graph $G$ and any set $V_0$ of vertices in $G$,
let $G[V_0]$ denote the subgraph of $G$ induced by $V_0$.
\begin{cor}\relabel{cor2-1}
Let $G=(V,E)$ be a connected graph and $X\subseteq E$.
For any $V_0\subseteq V$,
\begin{enumerate}
\item
if $X\ssim_G \emptyset$, then
$X\cap E(G[V_0])\ssim_{G[V_0]} \emptyset$;
\item
if $X\ssim_G E$, then
$X\cap E(G[V_0])\ssim_{G[V_0]} E(G[V_0])$.
\end{enumerate}
\end{cor}
Obviously,
$X\ssim_G Y$ implies that $Y\ssim_G X$.
The transitive property of the relation ``$\ssim_G$" also holds.
\begin{lem}\relabel{le2-0}
Let $G=(V,E)$ be a connected graph with $X,Y,Z\subseteq E$.
If $X\ssim_G Y$ and $Y\ssim_G Z$, then
$X\ssim_G Z$ holds.
\end{lem}
\proof Assume that
$X\ssim_G Y$ and $Y\ssim_G Z$.
Then $X=Y\oplus \nabla_G(V_1)$ and
$Y=Z\oplus \nabla_G(V_2)$ hold for some
$V_1,V_2\subseteq V$,
implying that $X=Z\oplus \nabla_G(V_1\oplus V_2)$.
Thus $X\ssim_G Z$ holds.
\endproof
\vspace{0.3 cm}
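The step $\nabla_G(V_1)\oplus \nabla_G(V_2)=\nabla_G(V_1\oplus V_2)$ used in this proof holds because an edge lies in $\nabla_G(V)$ exactly when the indicator values of its two ends differ, and this test commutes with the coordinatewise XOR of indicators. It can also be checked exhaustively on a small graph, as in the following illustrative sketch of our own.

```python
from itertools import chain, combinations

def nabla(edges, V0):
    # edge cut: edges with exactly one end in V0
    V0 = set(V0)
    return {e for e in edges if (e[0] in V0) != (e[1] in V0)}

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def check_cut_identity(vertices, edges):
    """Exhaustively verify nabla(V1) (+) nabla(V2) == nabla(V1 (+) V2)."""
    for V1 in subsets(vertices):
        for V2 in subsets(vertices):
            lhs = nabla(edges, V1) ^ nabla(edges, V2)
            rhs = nabla(edges, set(V1) ^ set(V2))
            if lhs != rhs:
                return False
    return True
```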
Assume that $G'$ is any connected graph with
two distinct vertices $v_1$ and $v_2$
and $P$ is any path with ends $u_1$ and $u_2$
such that $G'$ and $P$ are vertex-disjoint.
Let $\Union{(v_1,v_2)}{G',P}$
(or simply $\Union{}{G',P}$)
denote the graph obtained
from $G'$ and $P$ by identifying $u_i$ and $v_i$
for $i=1,2$.
For an ear decomposition $G_0\subset G_1\subset \cdots \subset G_r=G$
of a matching-covered graph $G$,
if $G_i$ is the union of $G_{i-1}$ and a single ear $P_i$,
then $G_i=\Union{}{G_{i-1},P_i}$.
But, in this section, the results do not depend on
the condition that $G'$ is matching-covered.
\begin{lem}\relabel{le2-1}
Let $G=\Union{}{G',P}$.
For any edge set $X=X_0\cup X'$ of $G$,
where $X_0\subseteq E(P)$ and $X'\subseteq E(G')$,
\begin{enumerate}
\item\relabel{le2-1-n1}
if $|E(P)|\equiv 1\modtwo$ and $X\in \nF(G)$,
then $X'\in \nF(G')$;
\item\relabel{le2-1-n2} if $|X_0|\equiv 0\modtwo$, then $X\ssim_G X'$;
\item\relabel{le2-1-n3} if $|X_0|\equiv 1\modtwo$,
then $X\ssim_G X'\cup \{e\}$
for any $e\in E(P)$;
\item\relabel{le2-1-n4} if $X'\ssim_{G'} Y$, then $X\ssim_G Y\cup Y_0$
for some $Y_0\subseteq E(P)$;
\item\relabel{le2-1-n5} if $X'\ssim_{G'} \emptyset$,
then either $X\ssim_{G} \emptyset$ or
$X\ssim_{G} \{e\}$ for any $e\in E(P)$;
\item\relabel{le2-1-n6} if $X'\ssim_{G'} E(G')$,
then either $X\ssim_{G} E(G)$ or
$X\ssim_{G} E(G)-\{e\}$ for any $e\in E(P)$;
\item\relabel{le2-1-n7}
if $X\ssim_{G} \emptyset$, then $X'\ssim_{G'} \emptyset$;
if $X\ssim_{G} E(G)$, then $X'\ssim_{G'} E(G')$.
\end{enumerate}
\end{lem}
\proof
(i). Assume that the edges in $P$
are $e_1, e_2,\cdots, e_{2k-1}$ in
the order of the path $P$ such that $e_i$ and $e_{i+1}$
have a common end for all $i=1,2,\cdots,2k-2$.
Suppose that $X'\in \F(G')$.
Then $G'$ has two perfect matchings $M_1$ and $M_2$
such that
$|X'\cap M_1|-|X'\cap M_2|\equiv 1\modtwo$.
For $i=1,2$, the set $N_i$ defined below is a
perfect matching of $G$:
$$
N_i=M_i\cup
\{e_{2j}: j=1,2,\cdots, k-1\}.
$$
Observe that
$$
|X\cap N_j|=|X'\cap M_j|+|X_0\cap
\{e_{2i}: i=1,2,\cdots, k-1\}|,\qquad \forall j=1,2.
$$
Thus $|X\cap N_1|-|X\cap N_2|
=|X'\cap M_1|-|X'\cap M_2|
\equiv 1\modtwo$,
implying that
$X$ is feasible in $G$, a contradiction.
Thus \ref{le2-1-n1} holds.
\ref{le2-1-n2} and \ref{le2-1-n3} will be proved
by applying the following claim.
\noindent {\bf Claim 1}:
If $|X_0|\ge 2$, then $X=X_0\cup X'\ssim_G X_0'\cup X'$ holds
for some $X_0'\subset X_0$ with $|X'_0|=|X_0|-2$.
Assume that $|X_0|\ge 2$.
Then there exists a subpath $P_0$ of $P$
such that $X\cap E(P_0)=\emptyset$
and $\nabla_G(V(P_0))\subseteq X$,
implying that $X\ssim_G X\oplus \nabla_G(V(P_0))
=X'_0\cup X'$,
where $X'_0=X_0\oplus \nabla_G(V(P_0))\subset X_0$
and $|X'_0|=|X_0|-2$.
Thus the claim holds.
(ii).
The case $|X_0|=0$ is trivial, so assume that
$|X_0|>0$ and $|X_0|\equiv 0\modtwo$.
Then \ref{le2-1-n2}
follows by applying Claim 1
repeatedly.
(iii). Applying Claim 1 repeatedly,
$X\ssim_G \{e\}\cup X'$ holds for some $e\in E(P)$.
Now let $e'$ be any edge in $P$ different from $e$.
There exists a subpath $P'$ of $P$
such that $\nabla_G(V(P'))=\{e,e'\}$.
Thus $\{e\}\cup X'\ssim_G
(\{e\}\cup X')\oplus \nabla_G(V(P'))=
\{e'\}\cup X'$
and the result holds.
(iv). It is trivial when $X'=Y$.
Now assume that $X'\ne Y$. Then
$Y=X'\oplus \nabla_{G'}(V_0)$ for some non-empty
set $V_0\subset V(G')$.
As $G=\Union{}{G',P}$,
there are three cases for the structure of $G$, i.e., $|\{v_1,v_2\}\cap V_0|\in \{0,1,2\}$,
where $v_1,v_2$ are the two vertices of $G'$
with which the ends of $P$ are identified.
But $|\{v_1,v_2\}\cap V_0|=2$ implies that
$|\{v_1,v_2\}\cap (V(G')-V_0)|=0$.
Thus, we need only to consider the two cases:
$|\{v_1,v_2\}\cap V_0|=0$ or
$|\{v_1,v_2\}\cap V_0|=1$,
as shown in Figure~\ref{f1}.
\begin{figure}[ht]
\centering
\input{f1.pic}
(a) Case 1 \hspace{3.5 cm} (b) Case 2
\caption{Two cases for the two ends of $P$}
\relabel{f1}
\end{figure}
In both cases, $Y=X'\oplus \nabla_{G'}(V_0)$ implies that
$X\oplus \nabla_{G}(V_0) =Y\cup Y_0$ holds for some
$Y_0\subseteq E(P)$.
Thus \ref{le2-1-n4} holds.
(v). As $X'\ssim_{G'}\emptyset$,
the result of \ref{le2-1-n4} implies that
$X\ssim_{G} Y_0$ where $Y_0\subseteq E(P)$.
The results of \ref{le2-1-n2} and \ref{le2-1-n3} imply that
either $Y_0\ssim_G \emptyset$ or
$Y_0\ssim_G \{e\}$ for any $e\in E(P)$.
Thus \ref{le2-1-n5} holds.
(vi).
As $X'\ssim_{G'} E(G')$,
the result of \ref{le2-1-n4} implies that
$X\ssim_{G} E(G')\cup Y_0$ where $Y_0\subseteq E(P)$.
The results of \ref{le2-1-n2} and \ref{le2-1-n3} imply that
$(E(G')\cup Y_0)\ssim_G E(G)$
when $|Y_0|\equiv |E(P)|\modtwo$,
and $(E(G')\cup Y_0)\ssim_G E(G)-\{e\}$
when $|Y_0|\not\equiv |E(P)|\modtwo$.
Thus \ref{le2-1-n6} holds.
(vii).
Suppose that $X\ssim_G \emptyset$ holds.
By Proposition~\ref{pro2-1},
$X=\nabla_G(U)$ holds for some $U\subseteq V(G)$.
Then $X'=\nabla_{G'}(U-U_0)$,
implying that $X'\ssim_{G'} \emptyset$,
where $U_0$ is the set of internal vertices of $P$.
Now suppose that $X\ssim_G E(G)$ holds.
By Proposition~\ref{pro2-1},
$E(G)-X=\nabla_G(U)$ holds for some $U\subseteq V(G)$.
Then $E(G')-X'=\nabla_{G'}(U-U_0)$,
where $U_0$ is defined above,
implying that $X'\ssim_{G'} E(G')$.
Thus \ref{le2-1-n7} holds.
\endproof
\vspace{0.3 cm}
For distinct vertices $v_1,v_2,v_3,v_4$
in a graph $G'$ and any two vertex-disjoint
paths $P_1,P_2$ with $V(P_i)\cap V(G')=\emptyset$
for $i=1,2$, let
$\Union{(v_1,v_2,v_3,v_4)}{G',P_1,P_2}$
(or simply $\Union{}{G',P_1,P_2}$)
be the graph $\Union{(v_3,v_4)}{G'',P_2}$,
where $G''=\Union{(v_1,v_2)}{G',P_1}$.
\begin{lem}\relabel{le2-2}
Let $G=\Union{}{G',P_1,P_2}$.
For any edge set $X=X_0\cup X'$ of $G$,
where $X'\subseteq E(G')$
and $X_0\subseteq E(P_1)\cup E(P_2)$,
if $X'\ssim_{G'} \emptyset$,
then $X\ssim_G \emptyset$,
or $X\ssim_G \{e\}$ for some $e\in E(P_1)\cup E(P_2)$,
or $X\ssim_G \{e_1,e_2\}$ where $e_i\in E(P_i)$
for $i=1,2$.
\end{lem}
\proof Let $G''=\Union{}{G',P_1}$.
As $X'\ssim_{G'} \emptyset$,
Lemma~\ref{le2-1}~\ref{le2-1-n5}
implies that $X-E(P_2)\ssim_{G''} \emptyset$
or $X-E(P_2)\ssim_{G''} \{e_1\}$ for any $e_1\in E(P_1)$.
Note that $G=\Union{}{G'',P_2}$.
If $X-E(P_2)\ssim_{G''} \emptyset$,
then Lemma~\ref{le2-1}~\ref{le2-1-n5}
implies that either $X\ssim_{G} \emptyset$
or $X\ssim_{G} \{e_2\}$ holds for any $e_2\in E(P_2)$.
If $X-E(P_2)\ssim_{G''} \{e_1\}$ for any $e_1\in E(P_1)$,
then Lemma~\ref{le2-1}~\ref{le2-1-n4}
implies that $X\ssim_{G} \{e_1\}\cup Y_0$
for some $Y_0\subseteq E(P_2)$.
Lemma~\ref{le2-1}~\ref{le2-1-n2} and~\ref{le2-1-n3}
further imply that either $X\ssim_{G} \{e_1\}$ or
$X\ssim_{G} \{e_1\}\cup \{e_2\}$ holds
for any $e_2\in E(P_2)$.
Thus the result holds.
\endproof
\section{Proof of Theorem~\ref{main1}\relabel{sec3}}
For any matching-covered graph $G$, the following
basic properties follow directly from
the definitions of $\F(G)$ and $\nF(G)$.
\begin{lem}\relabel{le3-1}
Let $G$ be a matching-covered graph with $|E(G)|\ge 2$
and $X\subseteq E(G)$.
If either $|X|=1$ or $|X|=|E|-1$,
then $X\in \F(G)$.
\end{lem}
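Lemma~\ref{le3-1} can be checked by brute force on small graphs. The sketch below is our own illustration (the helper names and the example graph are not from the paper); it assumes the definition that an edge set $X$ is feasible in $G$ precisely when $|M\cap X|$ is odd for some perfect matching $M$ of $G$ and even for another.

```python
def perfect_matchings(vertices, edges):
    """Enumerate all perfect matchings by branching on the
    smallest uncovered vertex. Edges are frozensets of size 2."""
    def rec(free, chosen):
        if not free:
            yield frozenset(chosen)
            return
        v = min(free)
        for e in edges:
            if v in e and e <= free:
                yield from rec(free - e, chosen + [e])
    yield from rec(frozenset(vertices), [])

def is_feasible(X, matchings):
    """X is feasible iff |M & X| takes both parities over all
    perfect matchings M."""
    return {len(frozenset(X) & M) % 2 for M in matchings} == {0, 1}

# The 4-cycle: a matching-covered graph with |E(G)| = 4 >= 2.
V = range(4)
E = [frozenset(p) for p in [(0, 1), (1, 2), (2, 3), (3, 0)]]
Ms = list(perfect_matchings(V, E))

# Lemma: every X with |X| = 1 or |X| = |E(G)| - 1 is feasible.
print(all(is_feasible([e], Ms) for e in E))                       # True
print(all(is_feasible([f for f in E if f != e], Ms) for e in E))  # True
```

The full edge set $E$ itself is non-feasible here ($|M\cap E|=2$ for both perfect matchings of the $4$-cycle).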
By applying Lemma~\ref{le2-1}, we can establish
the following relations between $\nF^*(G')$ and $\nF^*(G)$
for matching-covered graphs $G'$ and $G=\Union{}{G',P}$.
\begin{lem}\relabel{le3-2}
Let $G'$ and
$G=\Union{}{G',P}$ be matching-covered graphs,
where $P$ is a single ear of $G'$.
For any $X\in \nF(G)$,
\begin{enumerate}
\item $X\cap E(G')\ssim_{G'} \emptyset$
\iff $X\ssim_G \emptyset$;
\item $X\cap E(G')\ssim_{G'} E(G')$ \iff $X\ssim_G E(G)$;
\item $X\cap E(G')\in \nF^*(G')$
\iff $X\in \nF^*(G)$.
\end{enumerate}
\end{lem}
\proof (i). ($\Leftarrow$) It follows directly from
Lemma~\ref{le2-1}~\ref{le2-1-n7}.
($\Rightarrow$)
As $X\cap E(G')\ssim_{G'} \emptyset$,
Lemma~\ref{le2-1}~\ref{le2-1-n5} implies that
$X\ssim_G \emptyset$ or $X\ssim_G \{e\}$ for any $e\in E(P)$.
Suppose that $X\ssim_G \{e\}$ for some $e\in E(P)$.
As $X\in \nF(G)$, Theorem~\ref{SEP}
implies that $\{e\}\in \nF(G)$.
But, as $|E(G)|\ge 2$,
Lemma~\ref{le3-1}
implies that $\{e\}\in \F(G)$,
a contradiction.
Thus $X\ssim_G \emptyset$.
(ii). ($\Leftarrow$) It follows directly from
Lemma~\ref{le2-1}~\ref{le2-1-n7}.
($\Rightarrow$).
As $X\cap E(G')\ssim_{G'} E(G')$,
Lemma~\ref{le2-1}~\ref{le2-1-n6} implies that
$X\ssim_G E(G)$ or $X\ssim_G E(G)-\{e\}$ for any $e\in E(P)$.
Suppose that $X\ssim_G E(G)-\{e\}$ for some $e\in E(P)$.
As $X\in \nF(G)$, Theorem~\ref{SEP}
implies that $E(G)-\{e\}\in \nF(G)$ holds.
As $|E(G)|\ge 2$,
Lemma~\ref{le3-1}
implies that $E(G)-\{e\}\in \F(G)$,
a contradiction.
Thus $X\ssim_G E(G)$.
(iii). By Lemma~\ref{le2-1}~\ref{le2-1-n1},
$X\in \nF(G)$ implies that $X\cap E(G')\in \nF(G')$.
Then the result follows from (i) and (ii) directly.
\endproof
\vspace{0.3 cm}
An ear decomposition
$G_0\subset G_1\subset \cdots \subset G_r$
of a matching-covered graph $G$ is called
a {\it single-ear decomposition}
if $\epsilon(G_{i-1},G_i)=1$ holds for all
$i=1,\cdots,r$.
A matching-covered graph may have no
single-ear decomposition;
for example, the complete graph $K_4$ does not have one.
However, every matching-covered bipartite graph
has a single-ear decomposition.
\begin{thm}
\relabel{Ear-c}
Let $G$ be a matching-covered graph.
\begin{enumerate}
\item\ \cite{19} $G$ has an ear decomposition;
\item\ \cite{Ear,L-Ear,8}
$G$ is bipartite \iff
$G$ has a single-ear decomposition.
\end{enumerate}
\end{thm}
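As a small illustration of part (ii) (our own example, not from the paper): the $4$-cycle arises from $G_0\cong K_2$ by attaching one ear of odd length $3$, and brute force confirms that the graph is matching-covered before and after the ear is attached.

```python
def perfect_matchings(vertices, edges):
    """Enumerate all perfect matchings; edges are frozensets."""
    def rec(free, chosen):
        if not free:
            yield frozenset(chosen)
            return
        v = min(free)
        for e in edges:
            if v in e and e <= free:
                yield from rec(free - e, chosen + [e])
    yield from rec(frozenset(vertices), [])

def matching_covered(vertices, edges):
    """Every edge lies in at least one perfect matching."""
    Ms = list(perfect_matchings(vertices, edges))
    return bool(Ms) and all(any(e in M for M in Ms) for e in edges)

# G0 = K2 on {0, 1}.
V0, E0 = [0, 1], [frozenset((0, 1))]
# Attach the single ear P = 0 - 2 - 3 - 1 (odd length 3; the internal
# vertices 2 and 3 are new), giving G1, a 4-cycle.
V1 = [0, 1, 2, 3]
E1 = E0 + [frozenset(p) for p in [(0, 2), (2, 3), (3, 1)]]

print(matching_covered(V0, E0), matching_covered(V1, E1))  # True True
```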
Now we are going to prove Theorem~\ref{main1}.
\vspace{0.4 cm}
{\bf Proof of Theorem~\ref{main1}}:
By Proposition~\ref{pro2-1} (i),
each edge set $X$ with $X\ssim_G \emptyset$
induces a bipartite subgraph in $G$,
implying that $E\nssim_G \emptyset$ holds whenever
$G$ is not bipartite.
Hence, Theorem~\ref{main1} (ii) implies
Theorem~\ref{main1} (i).
For any bipartite graph $G=(V,E)$,
$E\ssim_G \emptyset$ holds.
Thus, Lemma~\ref{le2-0} implies that
$\{X\subseteq E: X\ssim_G \emptyset\}$
and $\{X\subseteq E: X\ssim_G E\}$
are the same set.
Thus,
(ii) and (iii) in Theorem~\ref{main1} are equivalent.
So, to prove Theorem~\ref{main1},
it suffices to show that
Theorem~\ref{main1} (i) implies Theorem~\ref{main1} (ii).
Assume that $G$ is bipartite and matching-covered.
By Theorem \ref{Ear-c} (ii),
$G$ has a single-ear decomposition
$G_0\subset G_1\subset\cdots\subset G_r=G$,
where $G_0\cong K_2$.
Thus, for $i=1,2,3,\cdots,r$,
$G_i=\Union {}{G_{i-1},P_i}$ holds for some single ear $P_i$
of $G_{i-1}$.
If $r=0$, i.e., $G\cong K_2$,
then (i) implies (ii) trivially.
Now assume that $r\ge 1$ and the result holds for $G_{r-1}$.
For any $X\in \nF(G)$,
Lemma~\ref{le2-1}~\ref{le2-1-n1} implies that
$X\cap E(G_{r-1})\in \nF(G_{r-1})$.
By the assumption, the result holds for $G_{r-1}$.
Thus $X\cap E(G_{r-1})\ssim_{G_{r-1}} \emptyset$ holds.
Then, Lemma~\ref{le3-2} (i) implies that
$X\ssim_G \emptyset$.
Hence Theorem~\ref{main1} is proven.
\endproof
\section{Proof of Theorem~\ref{main2}\relabel{sec4}}
The following two lemmas will be applied
in the proof of Theorem~\ref{main2}.
\begin{lem}\relabel{le4-1}
Let $G'$ and
$G=\Union{}{G',P_1,P_2}$ be matching-covered graphs,
where $P_1$ and $P_2$ form a double ear of $G'$.
Assume that $\Union{}{G',P_i}$ is not matching-covered
for $i=1,2$.
Then $\nF^*(G)=\emptyset$ \iff $G'$ is bipartite.
\end{lem}
\proof ($\Rightarrow $)
Suppose that $G'$ is not bipartite.
Let $X_0=E(P_1)\cup E(P_2)$, where
$E(P_i)=\{e_{i,1},e_{i,2},\cdots, e_{i,2k_i-1}\}$
for $i=1,2$ and $e_{i,j}$ and $e_{i,j+1}$ have a common end
for all $j=1,2,\cdots,2k_i-2$.
As $\Union{}{G',P_i}$ is not matching-covered
for both $i=1,2$,
for each perfect matching $M$ of $G$,
one of the following holds:
$$
M\cap X_0=
\bigcup_{1\le i\le 2}
\{e_{i,2t-1} : t=1,2,\cdots,k_i\}
$$
or
$$
M\cap X_0=
\bigcup_{1\le i\le 2}
\{e_{i,2t} : t=1,2,\cdots,k_i-1\}.
$$
Thus $|M\cap X_0|\equiv k_1+k_2\modtwo$
holds for all perfect matchings $M$ of $G$,
implying that $X_0\in \nF(G)$.
As $G'$ is not bipartite,
$G-X_0$ is not bipartite.
Thus
Proposition~\ref{pro2-1} (ii) implies that
$X_0\nssim_{G} E(G)$.
As $|E(P_1)|\equiv |E(P_2)|\equiv 1\modtwo$,
Lemma~\ref{le2-1}~\ref{le2-1-n3} implies that
$X_0\ssim_G \{e_1,e_2\}$,
where $e_i$ is an edge on $P_i$ for $i=1,2$.
Clearly $G-\{e_1,e_2\}$ is connected.
Then Proposition~\ref{pro2-1} (ii) implies that
$\{e_1,e_2\}\nssim_{G} \emptyset$.
Thus $X_0\nssim_G \emptyset$.
Hence $X_0\in \nF^*(G)$ and the necessity holds.
($\Leftarrow $)
Assume that $G'$ is bipartite
and $X\in \nF(G)$.
As $|E(P_1)|\equiv |E(P_2)|\equiv 1\modtwo$,
Lemma~\ref{le2-1}~\ref{le2-1-n1} implies that
$X\cap E(G')\in \nF(G')$.
As $G'$ is bipartite,
Theorem~\ref{main1} implies that
$X\cap E(G')\ssim_{G'} \emptyset$.
By Lemma~\ref{le2-2},
$X\ssim_G \emptyset$ holds or
$X\ssim_G \{e\}$ holds for some $e\in E(P_1)\cup E(P_2)$
or
$X\ssim_G \{e_1,e_2\}$ holds for some $e_1\in E(P_1)$
and $e_2\in E(P_2)$.
If $X\ssim_G \{e\}$ for some $e\in E(P_1)\cup E(P_2)$,
then Theorem~\ref{SEP} implies that $\{e\}\in \nF(G)$.
But Lemma~\ref{le3-1} implies that $\{e\}\in \F(G)$,
a contradiction.
Now consider the case that
$X\ssim_G \{e_1,e_2\}$ for $e_i\in E(P_i)$.
Lemma~\ref{le2-1}~\ref{le2-1-n3} implies that
$\{e_1,e_2\}\ssim_G (E(P_1)\cup E(P_2))$.
Thus $X\ssim_G (E(P_1)\cup E(P_2))$.
Since $G'$ is bipartite and matching-covered, $G'$ has a bipartition $(U_1,U_2)$ with $|U_1|=|U_2|$.
Since $G=\Union{}{G',P_1,P_2}$ is not bipartite,
both ends of some $P_i$
lie within $U_j$ for some $j$.
Assume that both ends of $P_1$ lie within $U_1$.
As $|U_1|=|U_2|$ and $G$ is matching-covered,
both ends of $P_2$ must be in $U_2$.
Thus $(E(P_1)\cup E(P_2))\oplus
\nabla_G(U_1\cup V(P_1))
=E(G)$,
implying that $X\ssim_G (E(P_1)\cup E(P_2)) \ssim_G E(G)$.
Hence the sufficiency holds.
\endproof
\begin{lem}\relabel{le4-2}
Let $G'$ and $G=\Union{}{G',P}$ be
matching-covered graphs,
where $P$ is a single ear of $G'$.
For any $X\in \nF(G')$,
both $X \in \F(G)$ and $X\cup E(P)\in \F(G)$ hold
\iff
$X\cap E(G^o)\in \F(G^o)$ holds, where $G^o=G'-\{u,v\}$
and $u,v$ are the two ends of $P$ in $G'$.
\end{lem}
\proof As $P$ is a single ear of $G'$, $|E(P)|$ is odd.
Let $e_1,e_2,\cdots,e_{2k-1}$ be the edges in $P$,
where $e_i$ and $e_{i+1}$ have a common end
for all $i=1,2,\cdots,2k-2$.
The set of perfect matchings of $G$ can be partitioned
into two sets $\M_0$ and $\M_1$, where
$\M_0$ is the set of perfect matchings $M$
in $G$ with $e_1\notin M$
and $\M_1$ is the set of perfect matchings $M$
in $G$ with $e_1\in M$.
Then, for each $M\in \M_0$,
$$
M\cap E(P)=\{e_{2r}: r=1,2,\cdots,k-1\}
$$
and
for each $M\in \M_1$,
$$
M\cap E(P)=\{e_{2r-1}: r=1,2,\cdots,k\}.
$$
Observe that $\M'=\{M\cap E(G'): M\in \M_0\}$
is the set of perfect matchings in $G'$.
Assume that $X\in \nF(G')$ and
$|M'\cap X|\equiv a\modtwo$ holds for all $M'\in \M'$,
where $a$ is a fixed number in $\{0,1\}$.
Thus $|M\cap X|\equiv a \modtwo$ holds
for all $M\in \M_0$.
$(\Rightarrow )$ Assume that both $X$ and
$X\cup E(P)$ are feasible in $G$.
Since $X$ is feasible in $G$
and $|M\cap X|\equiv a \modtwo$ holds
for all $M\in \M_0$,
$|M_1\cap X|\equiv a+1 \modtwo$ holds
for some $M_1\in \M_1$.
{\bf Claim 1}:
$|M_2\cap X|\equiv a \modtwo$ holds
for some $M_2\in \M_1$.
Suppose that Claim 1 fails.
Then $|M\cap X|\equiv a+1 \modtwo$ holds
for all $M\in \M_1$,
implying that
$|M\cap (X\cup E(P))|\equiv a+k+1 \modtwo$ holds
for all $M\in \M_1$.
But, for each $M\in \M_0$,
$|M\cap (X\cup E(P))|\equiv |M\cap X|+k-1\equiv a+k-1\equiv a+k+1 \modtwo$ holds.
Thus $|M\cap (X\cup E(P))| \equiv a+k+1 \modtwo$ holds
for all $M\in \M_0\cup \M_1$,
implying that $X\cup E(P)$ is non-feasible in $G$, a contradiction.
Thus Claim 1 holds.
Now there are two perfect matchings $M_1,M_2\in \M_1$
such that $|M_1\cap X|\equiv a+1 \modtwo$
and $|M_2\cap X|\equiv a \modtwo$,
implying that $|M_1\cap X|\not \equiv |M_2\cap X|\modtwo$.
Let $X_0=X\cap E(G^o)$.
Observe that both $M_1-E(P)$ and $M_2-E(P)$
are perfect matchings in $G^o$ and
$
|(M_i-E(P))\cap X_0|=|M_i\cap X|
$
holds for $i=1,2$.
As $|M_1\cap X|\not \equiv |M_2\cap X|\modtwo$,
$|(M_1-E(P))\cap X_0|\not\equiv
|(M_2-E(P))\cap X_0|\modtwo$ holds,
implying that $X_0$ is feasible in $G^o$.
$(\Leftarrow )$
Assume that $X_0=X\cap E(G^o)$ is feasible in $G^o$.
Then there are two perfect matchings
$N_1$ and $N_2$ in $G^o$ such that
$|X_0\cap N_i|\equiv i \modtwo $ for $i=1,2$.
Clearly, $Q_i=N_i\cup \{e_{2r-1}: r=1,2,\cdots,k\}\in \M_1$
for $i=1,2$.
Observe that
$$
|(X\cup E(P))\cap Q_i|=|X_0\cap N_i|+k
\equiv k+i \modtwo,\quad \forall i=1,2,
$$
implying that $X\cup E(P)$ is feasible in $G$.
Also observe that
$$
|X\cap Q_i|=|X_0\cap N_i|
\equiv i \modtwo, \quad \forall i=1,2,
$$
implying that $X$ is feasible in $G$.
\endproof
\vspace{0.4 cm}
We are now ready to prove Theorem~\ref{main2}.
\vspace{0.2 cm}
{\bf Proof of Theorem~\ref{main2}}:
(i). It follows directly from Lemma~\ref{le3-2} (iii).
(ii). If $\sum_{1\le i\le r}\epsilon(G_{i-1},G_i)=r$,
then $G_0\subset G_1\subset \cdots \subset G_r$ is a single
ear decomposition of $G$. Thus Theorem~\ref{main1}
implies that $\nF^*(G)=\emptyset$.
Now assume that $\sum_{1\le i\le r}\epsilon(G_{i-1},G_i)=r+1$,
implying that
$\epsilon(G_{i-1},G_i)=2$ holds for exactly one $i$
with $1\le i\le r$.
We first consider the case that $\epsilon(G_{r-1},G_r)=2$.
In this case,
$\sum_{1\le i\le r-1}\epsilon(G_{i-1},G_i)=r-1$,
implying that
$G_0\subset G_1\subset \cdots \subset G_{r-1}$ is a single
ear decomposition of $G_{r-1}$.
Theorem~\ref{Ear-c} implies that $G_{r-1}$ is bipartite.
Then Lemma~\ref{le4-1} implies that
$\nF^*(G_r)=\emptyset$ holds.
Now we consider the case
that $\epsilon(G_{k-1},G_k)=2$, where $1\le k<r$.
Then $\sum_{1\le i\le k}\epsilon(G_{i-1},G_i)=k+1$.
By the proven conclusion above,
$\nF^*(G_k)=\emptyset$ holds.
The result in (i) implies that
$\nF^*(G_i)=\emptyset$ holds
for all $i=k+1,\cdots,r$.
Hence (ii) holds.
(iii). As $\sum\limits_{1\le i\le r}\epsilon(G_{i-1},G_i)\ge r+2$
and $\epsilon(G_{r-1},G_r)=2$,
$\sum\limits_{1\le i\le r-1}\epsilon(G_{i-1},G_i)\ge r$ holds.
By the definition of ear decompositions,
$G_{r-1}$ is not bipartite.
Then Lemma~\ref{le4-1} implies that $\nF^*(G)\ne \emptyset$.
Hence (iii) holds.
(iv).
($\Rightarrow $)
Assume that $\nF^*(G)=\emptyset$.
Suppose that there exists
$X\in \nF^*(G_{r-1})$ with
$X\cap E(G^o)\in \nF(G^o)$,
where $G^o=G_{r-1}-\{u,v\}$.
As $X\cap E(G^o)\in \nF(G^o)$,
Lemma~\ref{le4-2} implies that
$X\in \nF(G)$ or $X\cup E(P_r)\in \nF(G)$ holds.
If $X\in \nF(G)$,
as $X\in \nF^*(G_{r-1})$,
then Lemma~\ref{le3-2} (iii) implies that
$X\in \nF^*(G)$.
If $X\cup E(P_r)\in \nF(G)$,
it can be proved similarly that
$X\cup E(P_r)\in \nF^*(G)$ holds.
Thus $\nF^*(G)\ne \emptyset$,
a contradiction.
($\Leftarrow $) Assume that $\nF^*(G)\ne \emptyset$.
Then, there exists $Z\in \nF^*(G)$ and Lemma~\ref{le3-2} (iii) implies that
$X=Z\cap E(G_{r-1})\in \nF^*(G_{r-1})$.
By Lemma~\ref{le2-1}~\ref{le2-1-n2} and~\ref{le2-1-n3},
$Z\ssim_G X$ or $Z\ssim_G X\cup E(P_r)$ holds.
Then, $Z\in \nF(G)$ implies that
$X\in \nF(G)$ or $X\cup E(P_r)\in \nF(G)$.
Lemma~\ref{le4-2} implies that $X\cap E(G^o)\in \nF(G^o)$ holds,
where $G^o=G_{r-1}-\{u,v\}$, contradicting the given condition.
Thus the result holds.
\endproof
\section{
Regular graphs $G$ of class 1 with
$\nF^*(G)\ne \emptyset$
\relabel{sec5}
}
\subsection{Generalizing the family of graphs
constructed in \cite{wei}
\relabel{sec5-1}}
In this subsection, we will
generalize the construction
in \cite{wei} which provides a negative answer
to Problem~\ref{prob1}.
For two vertex-disjoint
graphs $G_1=(V_1,E_1)$ and $G_2=(V_2,E_2)$
with $e_i=x_iy_i\in E_i$ for $i=1,2$,
let $G_1\#_{e_1,e_2} G_2$ denote the graph obtained from $G_1-e_1$ and $G_2-e_2$
by adding edges $f_1=x_1x_2$ and $f_2=y_1y_2$,
as shown in Figure~\ref{f2}.
\begin{figure}[ht]
\centering
\input{f2.pic}
(a) $G_1$ and $G_2$ \hspace{4 cm}
(b) $G_1\#_{e_1,e_2} G_2$
\caption{A graph constructed from $G_1$
and $G_2$}
\relabel{f2}
\end{figure}
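As a quick sanity check of this construction (our own code; the vertex labels below are not from the paper), one can form $G_1\#_{e_1,e_2}G_2$ for two $4$-cycles and verify by enumerating perfect matchings that the result is matching-covered and that every perfect matching contains both of $f_1,f_2$ or neither, i.e., $\{f_1,f_2\}$ behaves as an equivalent set.

```python
def perfect_matchings(vertices, edges):
    """Enumerate all perfect matchings; edges are frozensets."""
    def rec(free, chosen):
        if not free:
            yield frozenset(chosen)
            return
        v = min(free)
        for e in edges:
            if v in e and e <= free:
                yield from rec(free - e, chosen + [e])
    yield from rec(frozenset(vertices), [])

def cyc(names):
    """Edge set of a cycle through the given vertices."""
    return [frozenset((names[i], names[(i + 1) % len(names)]))
            for i in range(len(names))]

# G1, G2: two 4-cycles, with e_i = x_i y_i an edge of each.
G1_V, G2_V = ["a0", "a1", "a2", "a3"], ["b0", "b1", "b2", "b3"]
e1, e2 = frozenset(("a0", "a1")), frozenset(("b0", "b1"))

# G = G1 #_{e1,e2} G2: delete e1 and e2, add f1 = x1 x2 and f2 = y1 y2.
f1, f2 = frozenset(("a0", "b0")), frozenset(("a1", "b1"))
GV = G1_V + G2_V
GE = ([e for e in cyc(G1_V) if e != e1] +
      [e for e in cyc(G2_V) if e != e2] + [f1, f2])

Ms = list(perfect_matchings(GV, GE))
# G is matching-covered: every edge lies in some perfect matching.
print(all(any(e in M for M in Ms) for e in GE))   # True
# {f1, f2} is an equivalent set of G.
print(all(len(M & {f1, f2}) != 1 for M in Ms))    # True
```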
\begin{lem}\relabel{le5-1-1}
For $i=1,2$, assume that
$G_i=(V_i,E_i)$ is a matching-covered graph
with $|E_i|\ge 2$ and
$S_i$ is an equivalent set of $G_i$ with $e_i\in S_i$,
where $e_i=x_iy_i$.
Let $G$ denote the graph $G_1\#_{e_1,e_2} G_2$
and let $S=(S_1-\{e_1\})\cup (S_2-\{e_2\})\cup \{f_1, f_2\}$.
Then
\begin{enumerate}
\item $G$ is
matching-covered;
\item
$S$
is an equivalent set in $G$;
\item when $G_1$ and $G_2$ are 2-connected,
$G$ is also 2-connected;
\item when $G_1$ and $G_2$ are $r$-regular graphs
of class 1,
$G$ is also an $r$-regular graph of class 1;
\item
for any $S'\subseteq S$,
when $G_i-e_i-(S'\cap E_i)$ is not bipartite
for some $i\in \{1,2\}$,
$S'\nssim_G E(G)$ holds;
\item
for any $S'\subseteq S$, when $S'\cap E_j \nssim_{G_j-e_j} \emptyset$ for some $j\in \{1,2\}$, $S'\nssim_G \emptyset$ holds.
\end{enumerate}
\end{lem}
\proof For $i=1,2$ and $j=0,1$,
let $\M_{i,j}$ be the set of perfect matchings $M$ in $G_i$
with $|M\cap \{e_i\}|=j$.
Since $G_i$ is matching-covered and
$|E_i|\ge 2$ holds for $i=1,2$,
$\M_{i,j}\ne \emptyset$ for all $i=1,2$ and $j=0,1$.
Let $\M$ be the set of perfect matchings of $G$.
(i).
The following facts imply that $G$ is matching-covered:
\begin{enumerate}
\item[(a)] for any $M_i\in \M_{i,1}$, $i=1,2$,
$(M_1-\{e_1\}) \cup (M_2-\{e_2\})\cup \{f_1,f_2\}$
is a member in $\M$;
\item[(b)] if $N_i\in \M_{i,0}$ for $i=1,2$,
then $N_1\cup N_2\in \M$;
\item[(c)] for $i=1,2$ and any $e\in E_i$,
$e\in M_i$ holds for some $M_i\in \M_{i,1}\cup \M_{i,0}$.
\end{enumerate}
(ii). To show that $S$ is an equivalent set of $G$,
we need only to prove the two claims below:
\noindent {\bf Claim 1}: $\{f_1,f_2\}$
is an equivalent set of $G$.
Suppose the claim fails.
Then there exists $M\in \M$ with
$|\{f_1,f_2\}\cap M|=1$.
Assume that $f_1\in M$ but $f_2\notin M$.
Then $M\cap E_1$ is a perfect matching of $G_1-x_1$,
implying that $|V(G_1)|\equiv 1\modtwo$,
contradicting the condition that $G_1$ is matching-covered.
Thus the claim holds.
\noindent {\bf Claim 2}:
$\{f_1, e\}$
is an equivalent set of $G$ for any
$e\in (S_1-\{e_1\})\cup (S_2-\{e_2\})$.
We may assume that $e\in S_1-\{e_1\}$.
Suppose the claim fails.
Then there exists $M\in \M$ with
$|\{f_1,e\}\cap M|=1$.
If $e\in M$ but $f_1\notin M$, then Claim 1 implies that
$f_2\notin M$.
Thus $M_1=M\cap E_1\in \M_{1,0}$.
Clearly, $e\in M_1$ but $e_1\notin M_1$.
Thus $\{e,e_1\}$ is not an equivalent set of $G_1$,
contradicting the assumption that
$S_1$ is an equivalent set of $G_1$ with $e,e_1\in S_1$.
If $e\notin M$ but $f_1\in M$, then Claim 1 implies that
$f_2\in M$.
Thus $M_1'=\{e_1\}\cup (M\cap E_1)\in \M_{1,1}$.
Clearly, $e\notin M'_1$ but $e_1\in M'_1$,
implying that
$\{e,e_1\}$ is not an equivalent set of $G_1$,
contradicting the assumption that
$S_1$ is an equivalent set of $G_1$ with $e,e_1\in S_1$.
Hence Claim 2 holds and (ii) follows.
(iii). It is trivial to verify.
(iv). Clearly, when both $G_1$ and $G_2$ are $r$-regular,
$G$ is also $r$-regular.
Assume that both $G_1$ and $G_2$ are $r$-regular graphs of class 1.
Then the edge set of each $G_i$ can be partitioned into
$r$ independent sets $E_{i,1},\cdots,E_{i,r}$.
Assume that $e_i\in E_{i,1}$ for $i=1,2$.
Then $E(G)$ has a partition
$\E_1,\E_2,\cdots,\E_r$
in which each subset
is an independent set of $G$, where
$$
\E_1=(E_{1,1}-\{e_1\})\cup (E_{2,1}-\{e_2\})
\cup\{f_1,f_2\},
\ \E_j=E_{1,j}\cup E_{2,j}, \quad \forall j=2,3,\cdots,r,
$$
implying that $G$ is of class 1.
Thus the result holds.
(v). Suppose that $S'\ssim_G E(G)$.
Corollary~\ref{cor2-1} (ii)
implies that
$G_i-e_i-(S'\cap E_i)$ is bipartite for $i=1,2$,
a contradiction.
Thus the result holds.
(vi). Suppose that
$S'\ssim_G \emptyset$.
Corollary~\ref{cor2-1} (i)
implies that
$S'\cap E(G_i-e_i)\ssim_{G_i-e_i} \emptyset$
for $i=1,2$,
a contradiction.
Thus the result holds.
\endproof
By applying Lemma~\ref{le5-1-1} repeatedly,
we obtain the following conclusion.
\begin{figure}[ht]
\centering
\input{f5.pic}
\caption{$H_1=G_1$ and
$H_{j+1}$ is the graph
$H_{j}\#_{e'_{j},e_{j+1}} G_{j+1}$
for $j=1,2,\cdots,k-1$}
\relabel{f5}
\end{figure}
\begin{thm}\relabel{th5-1-1}
Let $G_1, G_2, \cdots, G_k$ be vertex-disjoint
$2$-connected and $r$-regular graphs of class 1
and let $S_i$ be an equivalent set of $G_i$ with $\{e_i,e'_i\}\subseteq S_i$,
where $e_i=x_iy_i$ and $e'_i=x'_iy'_i$,
for all $i=1,2,\cdots,k$.
Let $H_1=G_1$ and
let $H_{j+1}$ be the graph
$H_{j}\#_{e'_{j},e_{j+1}} G_{j+1}$
for $j=1,2,\cdots,k-1$, as shown in Figure~\ref{f5}.
Then
\begin{enumerate}
\item $H_k$ is a $2$-connected and $r$-regular graph of class 1;
\item for any subset $S$ of
$(S_1-\{e'_1\})\cup (S_k-\{e_k\}) \cup\bigcup\limits^{k-1}_{i=2}(S_i-\{e_i,e'_i\})$
with $|S|\equiv 0\modtwo$,
when $G'_i-(S\cap E(G'_i))$ is not bipartite
for some $i$ with $1\le i\le k$
and $(S\cap E(G'_j))\nssim_{G'_j} \emptyset$ holds
for some $j$ with $1\le j\le k$,
$S$ is an equivalent set of $H_k$
which belongs to $\nF^*(H_k)$,
where $G'_1=G_1-\{e'_1\}$, $G'_k=G_k-\{e_k\}$
and $G'_s=G_s-\{e_s,e'_s\}$ for $2\le s\le k-1$.
\end{enumerate}
\end{thm}
\proof (i). It follows directly from Lemma \ref{le5-1-1} (iii) and (iv).
(ii).
Let $Q=(S_1-\{e'_1\})\cup (S_k-\{e_k\}) \cup\bigcup\limits^{k-1}_{i=2}(S_i-\{e_i,e'_i\})$.
Applying Lemma \ref{le5-1-1} (ii) repeatedly
shows that $Q$ is an equivalent set of $H_k$.
As $S\subseteq Q$ and $|S|\equiv 0\modtwo$,
$S\in \nF(H_k)$ holds.
As $G'_i-(S\cap E(G'_i))$ is not bipartite for some
$i$ with $1\le i\le k$,
$S\cap E(G'_i)\nssim_{G'_i} E(G'_i)$ holds,
implying that $S\nssim_{H_k} E(H_k)$
by Corollary~\ref{cor2-1} (ii).
As $S\cap E(G'_j) \nssim_{G'_j} \emptyset$
for some $j$ with $1\le j\le k$,
Corollary~\ref{cor2-1} (i)
implies that $S\nssim_{H_k} \emptyset$.
Hence $S\in \nF^*(H_k)$.
\endproof
By Theorem~\ref{th5-1-1}, it can be verified easily that
the graphs constructed in \cite{wei}
give a negative answer to Problem~\ref{prob1}.
\subsection{$4$-connected and $r$-regular graphs
$G$ of class 1 with $\nF^*(G)\ne \emptyset$
\relabel{sec5-2}}
In this subsection,
we construct infinitely many
$4$-connected $r$-regular graphs $G$ of class 1
with $\nF^*(G)\ne \emptyset$,
where $r$ is an integer with $r\ge 4$.
Let $\Psi_{r}$ be the set of $4$-connected
and $r$-regular graphs of class 1,
each of which contains an equivalent set of size $2$.
Let $Q_r$ \relabel{Qr} denote the graph
obtained from the
complete bipartite graph $K_{r,r}$
by removing two independent edges
$a_1b_1$ and $a_2b_2$
and adding two new edges $a_1a_2$ and $b_1b_2$,
where $a_1$ and $a_2$ are vertices in one partite set of $K_{r,r}$.
Observe that $Q_r$ is an
$r$-connected and $r$-regular graph of class 1
with an equivalent set $\{a_1a_2,b_1b_2\}$.
Thus $Q_r\in \Psi_r$.
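These properties of $Q_r$ can be confirmed computationally for small $r$. The sketch below (our own code, not part of the paper) builds $Q_3$ and checks that it is $3$-regular, that every perfect matching meets $\{a_1a_2,b_1b_2\}$ in an even number of edges, and that $Q_3-w$ is non-bipartite for every vertex $w$; the last property is the one used for $\Phi_r^*$ in Section~\ref{sec5-3}. Note that $\Psi_r$ itself is defined for $r\ge 4$; $r=3$ is used here only to keep the example small.

```python
from collections import deque

def perfect_matchings(vertices, edges):
    """Enumerate all perfect matchings; edges are frozensets."""
    def rec(free, chosen):
        if not free:
            yield frozenset(chosen)
            return
        v = min(free)
        for e in edges:
            if v in e and e <= free:
                yield from rec(free - e, chosen + [e])
    yield from rec(frozenset(vertices), [])

def is_bipartite(vertices, edges):
    """BFS 2-coloring test."""
    adj = {v: set() for v in vertices}
    for e in edges:
        u, w = tuple(e)
        adj[u].add(w); adj[w].add(u)
    color = {}
    for s in vertices:
        if s in color:
            continue
        color[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in color:
                    color[w] = 1 - color[u]
                    q.append(w)
                elif color[w] == color[u]:
                    return False
    return True

# Q_3: K_{3,3} minus the independent edges a1b1, a2b2, plus a1a2, b1b2.
A, B = ["a0", "a1", "a2"], ["b0", "b1", "b2"]
removed = {frozenset(("a1", "b1")), frozenset(("a2", "b2"))}
QV = A + B
QE = ([frozenset((a, b)) for a in A for b in B
       if frozenset((a, b)) not in removed]
      + [frozenset(("a1", "a2")), frozenset(("b1", "b2"))])

deg = {v: sum(v in e for e in QE) for v in QV}
print(all(d == 3 for d in deg.values()))       # 3-regular
Ms = list(perfect_matchings(QV, QE))
S = {frozenset(("a1", "a2")), frozenset(("b1", "b2"))}
print(all(len(M & S) != 1 for M in Ms))        # equivalent set
print(all(not is_bipartite([v for v in QV if v != w],
                           [e for e in QE if w not in e]) for w in QV))
```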
Let $\Psi_r^*$ be the set of graphs $H\in \Psi_r$
containing an equivalent set $\{e,e'\}$
such that $H-\{e,e'\}$ is not bipartite.
From the remark on Page~\pageref{rem5-1},
it is known that $\Psi_r^*\ne \emptyset$.
For a list $L=(G_1,G_2,\cdots,G_k)$ of vertex-disjoint graphs
in $\Psi_r$, where $k\ge 3$
and $\{e_i,e'_i\}$ is an equivalent set of $G_i$
with $e_i=x_iy_i$ and $e'_i=x'_iy'_i$
for $i=1,2,\cdots,k$,
let $\C_L$ denote the graph obtained from
$G_1,G_2,\cdots,G_k$ by
deleting edges $e_i$ and $e'_i$ and
adding new edges $f_i$ and $f'_i$ for all $i=1,2,\cdots,k$,
where $f_i=x_iy_{i+1}$, $f_i'=x'_iy'_{i+1}$,
$y_{k+1}=y_1$ and $y'_{k+1}=y'_1$.
For each $i$ with $1\le i\le k$,
we assume that
$G_i-\{e_i,e'_i\}$ is not bipartite
whenever $G_i\in \Psi_r^*$.
An example of
$\C_L$ for $k=3$ is shown in Figure~\ref{f3}.
\begin{figure}[ht]
\centering
\input{f3.pic}
\caption{Graph $\C_L$, where $L=(G_1, G_2,G_3)$}
\relabel{f3}
\end{figure}
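The equivalent-set claim in Lemma~\ref{le5-2-0} (i) below can be tested exhaustively on a small instance. The sketch (our own code) assembles $\C_L$ from $k=3$ disjoint copies of $Q_3$, taking $\{e_i,e'_i\}=\{a_1a_2,b_1b_2\}$ in each copy; the lemma is stated for graphs in $\Psi_r$ with $r\ge 4$, and $Q_3$ is used here only to keep the enumeration small — the parity argument behind the claim needs only that each $G_i$ is matching-covered with $\{e_i,e'_i\}$ equivalent and that $k$ is odd.

```python
def perfect_matchings(vertices, edges):
    """Enumerate all perfect matchings; edges are frozensets."""
    def rec(free, chosen):
        if not free:
            yield frozenset(chosen)
            return
        v = min(free)
        for e in edges:
            if v in e and e <= free:
                yield from rec(free - e, chosen + [e])
    yield from rec(frozenset(vertices), [])

def q3(tag):
    """A labelled copy of Q_3 with x = a1, y = a2, x' = b1, y' = b2."""
    A = [f"{tag}a{i}" for i in range(3)]
    B = [f"{tag}b{i}" for i in range(3)]
    removed = {frozenset((A[1], B[1])), frozenset((A[2], B[2]))}
    E = [frozenset((a, b)) for a in A for b in B
         if frozenset((a, b)) not in removed]
    e, ep = frozenset((A[1], A[2])), frozenset((B[1], B[2]))
    return A + B, E + [e, ep], (A[1], A[2], B[1], B[2]), (e, ep)

k = 3                                   # k must be odd
copies = [q3(f"g{i}_") for i in range(k)]
CV, CE, F = [], [], []
for i in range(k):
    V_i, E_i, ends, (e, ep) = copies[i]
    CV += V_i
    CE += [h for h in E_i if h not in (e, ep)]   # delete e_i, e'_i
for i in range(k):
    x, y, xp, yp = copies[i][2]
    xn, yn, xpn, ypn = copies[(i + 1) % k][2]
    fi = frozenset((x, yn))                      # f_i  = x_i  y_{i+1}
    fpi = frozenset((xp, ypn))                   # f'_i = x'_i y'_{i+1}
    CE += [fi, fpi]
    F.append({fi, fpi})

Ms = list(perfect_matchings(CV, CE))
# Each pair {f_i, f'_i} is an equivalent set of C_L.
print(bool(Ms) and all(len(M & F[i]) != 1 for M in Ms for i in range(k)))
```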
\begin{lem}\relabel{le5-2-0}
Let $L=(G_1, G_2,\cdots,G_k)$ be any list of graphs
in $\Psi_r$, where $k$ is an odd number with $k\ge 3$.
The graph $\C_L$ defined above has the following properties:
\begin{enumerate}
\item $\C_L\in \Psi_r$ with equivalent sets $\{f_i,f'_i\}$
for all $i=1,2,\cdots,k$;
\item
if $G_j-\{e_j,e'_j\}$ is not bipartite
for some $j$ with $1\le j\le k$, then
$\{f_i,f'_i: i=1,2,\cdots,k\}\in \nF^*(\C_L)$ holds.
\end{enumerate}
\end{lem}
\proof
(i). As $G_i$ is $4$-connected for all
$i=1,2,\cdots,k$, it is not difficult to show that
any two non-adjacent vertices in $\C_L$
are joined by $4$ internally vertex-disjoint paths,
implying that $\C_L$ is $4$-connected.
Clearly $\C_L$ is $r$-regular.
As $G_i$ is an $r$-regular graph of class 1
with an equivalent set $\{e_i, e'_i\}$,
$E(G_i)$ can be partitioned into
perfect matchings $E_{i,1},E_{i,2},\cdots,E_{i,r}$
with $\{e_i, e'_i\}\subseteq E_{i,1}$.
Thus, $\C_L$ is of class 1, as its edge set
can be partitioned into $r$ perfect matchings
$\E_1, \E_2,\cdots,\E_r$, where
$$
\E_1= \bigcup_{i=1}^k
\left (\{f_i, f'_i\}\cup (E_{i,1}-\{e_i, e'_i\})\right ),
\quad \E_j=\bigcup_{i=1}^k E_{i,j},\quad \forall j=2,3,\cdots,r.
$$
To show that $\{f_i,f'_i\}$ is an equivalent set of $\C_L$,
we need to apply the following claim.
\noindent {\bf Claim 1}: For any perfect matching $M$ of $\C_L$
and any $i$ with $1\le i\le k$,
$M\cap \{f_i,f'_i\}=\{f_i\}$
implies that $M\cap \{f_{i+1},f'_{i+1}\}=\{f'_{i+1}\}$,
and $M\cap \{f_i,f'_i\}=\{f'_i\}$
implies that $M\cap \{f_{i+1},f'_{i+1}\}=\{f_{i+1}\}$.
Without loss of generality, it suffices to prove that
$M\cap \{f_1,f'_1\}=\{f_1\}$
implies $M\cap \{f_2,f'_2\}=\{f'_2\}$.
As $G_2$ is matching-covered,
$|V_2|\equiv 0\modtwo$.
Thus $M\cap \{f_1,f'_1\}=\{f_1\}$ implies that
$|M\cap \{f_2,f'_2\}|=1$.
Suppose that $M\cap \{f_2,f'_2\}=\{f_2\}$.
Then, $M_2=\{e_2\}\cup (M\cap E(G_2))$ is a perfect matching
of $G_2$.
But $e'_2\notin M_2$,
contradicting the assumption
that $\{e_2,e'_2\}$ is an equivalent set of $G_2$.
Thus the claim holds.
Suppose that $\{f_i,f'_i\}$ is not an equivalent set of $\C_L$,
say $i=1$.
Then $|M\cap \{f_1,f'_1\}|=1$ holds
for some perfect matching $M$ of $\C_L$,
say $f_1\in M$ but $f'_1\notin M$.
Claim 1 implies
$M\cap \{f_2,f'_2\}=\{f'_2\}$,
$M\cap \{f_3,f'_3\}=\{f_3\}$ and so on.
As $k$ is odd, we have $M\cap \{f_k,f'_k\}=\{f_k\}$.
However, by Claim 1,
$M\cap \{f_k,f'_k\}=\{f_k\}$
implies that $M\cap \{f_1,f'_1\}=\{f'_1\}$,
a contradiction.
Hence (i) holds.
(ii).
Suppose that $G_j-\{e_j,e'_j\}$ is not bipartite for some
$j$ with $1\le j\le k$.
Let $S=\{f_i,f'_i: i=1,2,\cdots, k\}$.
As $\{f_i,f'_i\}$ is an equivalent set of $\C_L$
for all $i=1,2,\cdots,k$,
$|S\cap M|$ is even for all perfect matchings $M$ of $\C_L$,
implying that
$S\in \nF(\C_L)$ holds.
As $G_j-\{e_j,e'_j\}$ is not bipartite for some $j$ with
$1\le j\le k$,
Corollary~\ref{cor2-1} (ii) implies that
$S\nssim_{\C_L} E(\C_L)$.
Suppose that $S\ssim_{\C_L} \emptyset$.
Then Proposition~\ref{pro2-1} (i) implies that
$S=\nabla_{\C_L}(U)$ for some $U\subset V(\C_L)$.
As $G_i-\{e_i,e'_i\}$ is connected,
we have $V(G_i)\subseteq U$ or
$V(G_i)\subseteq V(\C_L)-U$
for all $i=1,2,\cdots,k$.
Assume that $V(G_1)\subseteq U$.
Then $S=\nabla_{\C_L}(U)$ implies
$V(G_2)\subseteq V(\C_L)-U$,
$V(G_3)\subseteq U$
and so on.
Since $k$ is odd,
$V(G_k)\subseteq U$,
contradicting the assumption that
$f_k,f'_k\in S=\nabla_{\C_L}(U)$.
Hence $S\in \nF^*(\C_L)$ and (ii) holds.
\endproof
By Lemma~\ref{le5-2-0}, we can prove
the following result.
\begin{cor}\relabel{cor5-2-1}
$\Psi_r^*$ is an infinite set.
\end{cor}
\proof
Let $\L$ be the family of
lists $L=(G_1,G_2,\cdots,G_k)$,
where $k\ge 3$ is odd,
$G_i\in \Psi_r$ for $i=1,2,\cdots,k$
and $G_j\in \Psi_r^*$ for at least one $j$
with $1\le j\le k$.
By the remark on Page~\pageref{rem5-1},
$\Psi_r^*\ne \emptyset$.
Thus $\L\ne \emptyset$.
By Lemma~\ref{le5-2-0},
$\C_L\in \Psi_r$ holds
for any list $L\in \L$.
Furthermore, as $G_j\in \Psi_r^*$ holds for at least one $j$,
$G_j-\{e_j,e'_j\}$ is not bipartite
for an equivalent set $\{e_j,e'_j\}$,
implying that $\C_L-\{f_i,f'_i:1\le i\le k\}$ is not
bipartite.
By Lemma~\ref{le5-2-0} (i),
$\{f_i,f'_i\}$ is an equivalent set of $\C_L$
for any $i$ with $1\le i\le k$,
implying that
$\C_L\in \Psi_r^*$ holds.
Clearly, $\C_L$ is different from every graph in the list
$L$.
Applying Lemma~\ref{le5-2-0} repeatedly implies that
the result holds.
\endproof
By Lemma~\ref{le5-2-0} and Corollary~\ref{cor5-2-1},
we get the following result.
\begin{thm}\relabel{th5-2-1}
For any $r\ge 4$,
there are infinitely many $4$-connected and
$r$-regular graphs $H$ of class 1
with $\nF^*(H)\ne \emptyset$.
\end{thm}
\subsection{$r$-connected and $r$-regular graphs
$G$ of class 1 with $\nF^*(G)\ne \emptyset$
\relabel{sec5-3}}
For any integer $r$ with $r\ge 3$,
let $\Phi_r$ be the set of $r$-connected
and $r$-regular graphs of class 1.
Clearly, $\Phi_r$ includes
the complete bipartite graph $K_{r,r}$,
the graph $Q_r$ defined on Page~\pageref{Qr}
and the complete graph $K_{r+1}$ when $r$ is odd.
For any set $S=\{G_1, G_2,\cdots,G_r\}$ of
$r$ vertex-disjoint graphs in $\Phi_r$
with $w_i\in V(G_i)$
and $N_{G_i}(w_i)=\{v_{i,j}: j=1,2,\cdots,r\}$
for $i=1,2,\cdots,r$,
let $\X_S$ denote the graph obtained
from $G_1-w_1,G_2-w_2,\cdots,G_r-w_r$
by adding vertices $u_1,u_2,\cdots,u_r$
and adding edges joining
$u_j$ to vertex $v_{i,j}$
for all $i=1,2,\cdots,r$ and $j=1,2,\cdots,r$;
note that the vertices $w_1,w_2,\cdots,w_r$
do not appear in $\X_S$.
An example of $\X_S$ when $r=3$ is given in Figure~\ref{f4},
where $S=\{G_1,G_2, G_3\}$ and
$G_i\cong K_4$ for all $i=1,2,3$.
\begin{figure}[ht]
\centering
\input{f4.pic}
\caption{Graph $\X_S$ with $S=\{G_1,G_2,G_3\}$
and $G_i\cong K_4$ for $i=1,2,3$}
\relabel{f4}
\end{figure}
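For the instance of Figure~\ref{f4} ($r=3$, each $G_i\cong K_4$), the membership $W_i=E(G_i-w_i)\in\nF^*(\X_S)$ asserted in Lemma~\ref{le5-3-1} (ii) can be verified exhaustively (our own code below): $|M\cap W_i|=1$ for every perfect matching $M$, and no vertex subset $U$ satisfies $\nabla_{\X_S}(U)=W_i$ or $\nabla_{\X_S}(U)=E(\X_S)-W_i$, which by Proposition~\ref{pro2-1} rules out $W_i\ssim_{\X_S}\emptyset$ and $W_i\ssim_{\X_S}E(\X_S)$.

```python
from itertools import combinations

def perfect_matchings(vertices, edges):
    """Enumerate all perfect matchings; edges are frozensets."""
    def rec(free, chosen):
        if not free:
            yield frozenset(chosen)
            return
        v = min(free)
        for e in edges:
            if v in e and e <= free:
                yield from rec(free - e, chosen + [e])
    yield from rec(frozenset(vertices), [])

# Three copies of K4 - w_i (triangles t_i0 t_i1 t_i2), where
# N(w_i) = {v_{i,1}, v_{i,2}, v_{i,3}} = {t_i0, t_i1, t_i2}.
XV, XE, W = [], [], []
for i in range(3):
    T = [f"t{i}{j}" for j in range(3)]
    XV += T
    tri = [frozenset((T[0], T[1])), frozenset((T[1], T[2])),
           frozenset((T[0], T[2]))]
    XE += tri
    W.append(set(tri))                   # W_i = E(G_i - w_i)
U = [f"u{j}" for j in range(3)]
XV += U
for i in range(3):
    for j in range(3):                   # join u_j to v_{i,j}
        XE += [frozenset((U[j], f"t{i}{j}"))]

Ms = list(perfect_matchings(XV, XE))
# |M ∩ W_i| = |V(G_i)|/2 - 1 = 1 for every perfect matching M.
print(all(len(M & W[0]) == 1 for M in Ms))

def is_cut(X):
    """Is X = ∇(U) for some U ⊆ V? (brute force over all U)"""
    for r in range(len(XV) + 1):
        for Usub in combinations(XV, r):
            Uset = set(Usub)
            if {e for e in XE if len(e & Uset) == 1} == X:
                return True
    return False

# W_0 ~ ∅ would need W_0 to be a cut; W_0 ~ E would need E - W_0 to be one.
print(not is_cut(W[0]) and not is_cut(set(XE) - W[0]))
```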
\begin{lem}\relabel{le5-3-1}
For any set $S=\{G_1,G_2,\cdots, G_r\}$
of graphs in $\Phi_r$,
the graph $\X_S$ constructed above has the following
properties:
\begin{enumerate}
\item $\X_S\in \Phi_r$;
\item for any $i=1,2,\cdots,r$,
if both $G_i-w_i$ and $G_{j}-w_j$ are not bipartite
for some $j\in \{1,2,\cdots,r\}-\{i\}$,
then $E(G_i-w_i)\in \nF^*(\X_S)$ holds.
\end{enumerate}
\end{lem}
\proof
(i). Observe that $\X_S$ is $r$-connected
by the two facts below:
\begin{enumerate}
\item[(a)] If both graphs $H_1$ and $H_2$ are $r$-connected
and vertex-disjoint with
$x_i\in V(H_i)$ and $N_{H_i}(x_i)
=\{z_{i,j}:j=1,2,\cdots,r\}$ for $i=1,2$,
then the graph obtained from $H_1-x_1$ and $H_2-x_2$
by adding edges joining $z_{1,j}$ and $z_{2,j}$ for all
$j=1,2,\cdots,r$
is also $r$-connected;
\item[(b)] for any $r$-connected graph $H$
and any $r$ independent edges $e_1,e_2,\cdots,e_r$,
the graph obtained from $H$ by
subdividing each $e_i=y_{i,1}y_{i,2}$
with a vertex, denoted by $q_i$,
adding $r-2$ new vertices
$z_1,z_2,\cdots,z_{r-2}$
and adding new edges joining
$z_j$ to $q_i$ for all $j=1,2,\cdots,r-2$
and all $i=1,2,\cdots,r$
is also $r$-connected.
\end{enumerate}
The two facts above can be verified by
proving that each pair of non-adjacent vertices are joined
by $r$ internally vertex-disjoint paths.
As $G_i$ is $r$-regular,
by the definition,
$\X_S$ is also $r$-regular.
For $i=1,2,\cdots,r$,
as $G_i$ is an $r$-regular graph of class 1,
$G_i$ has an $r$-edge-coloring which partitions
$E(G_i)$ into $r$ perfect matchings
$E_{i,1},E_{i,2},\cdots,E_{i,r}$ of $G_i$.
Assume that $w_iv_{i,j}\in E_{i,j}$ for all
$i=1,2,\cdots,r$ and $j=1,2,\cdots,r$.
Let $\pi_1,\pi_2,\cdots,\pi_r$ be permutations
of $1,2,\cdots,r$ such that
$\{\pi_s(i):s=1,2,\cdots,r\}=\{1,2,\cdots,r\}$ holds
for all $i=1,2,\cdots,r$.
Certainly such permutations exist.
Then $\E_1,\E_2,\cdots,\E_r$ defined below
form a partition of $E(\X_S)$ in which each subset is a perfect matching of $\X_S$:
$$
\E_s=\bigcup_{i=1}^r \left ((E_{i,\pi_s(i)}-\{w_iv_{i,\pi_s(i)}\})
\cup \{u_{\pi_s(i)}v_{i,\pi_s(i)}\}\right ),
\qquad \forall s=1,2,\cdots,r.
$$
Hence $\X_S$ is of class 1 and $\X_S\in \Phi_r$.
(ii). For $i=1,2,\cdots,r$,
as $G_i$ is an $r$-regular graph of class 1,
$G_i$ is matching-covered,
implying that
$|V(G_i)|\equiv 0\modtwo$.
Thus $|V(G_i-w_i)|\equiv 1\modtwo$
for all $i=1,2,\cdots,r$.
For $i=1,2,\cdots,r$,
let $W_i=E(G_i-w_i)$
and $N_i=\{u_jv_{i,j}: j=1,2,\cdots,r\}$.
As $|V(G_i-w_i)|\equiv 1\modtwo$,
$|M\cap N_i|\ge 1$ holds for each perfect matching
$M$ of $\X_S$
and all $i=1,2,\cdots,r$.
But $|M\cap (N_1\cup N_2\cup\cdots \cup N_r)|=r$,
implying that
$|M\cap N_i|= 1$ holds for each perfect matching $M$ of $\X_S$
and all $i=1,2,\cdots,r$.
Thus
$|M\cap W_i|= |V(G_i)|/2-1$
holds for each perfect matching $M$ of $\X_S$,
implying that
$W_i\in \nF(\X_S)$ for all $i=1,2,\cdots,r$.
If both $G_i-w_i$ and $G_{j}-w_j$ are not bipartite,
where $j\ne i$,
Corollary~\ref{cor2-1}
implies that $W_i\nssim_{\X_S} \emptyset$
and $W_i\nssim_{\X_S} E(\X_S)$.
Thus $W_i\in \nF^*(\X_S)$.
\endproof
For any $r\ge 3$,
let $\Phi_r^*$ be the set of graphs $G\in \Phi_r$
such that $G-w$ is not bipartite
for every vertex $w$ in $G$.
Clearly, $Q_r\in \Phi_r^*$ and
when $r$ is odd, $K_{r+1}\in \Phi_r^*$.
\begin{lem}\relabel{le5-3-2}
For any integer $r$ with $r\ge 3$,
$\Phi_r^*$ is an infinite set.
\end{lem}
\proof Note that
$\Phi_r^*\ne \emptyset$ for any $r\ge 3$.
If $S=\{G_1, G_2,\cdots, G_r\}$ is
a set of vertex-disjoint graphs in $\Phi_r$ and $G_i,G_j\in \Phi_r^*$
holds for some pair $i,j$
with $1\le i<j\le r$,
then $\X_S-w$ is not bipartite for each vertex $w$ in $\X_S$.
By Lemma~\ref{le5-3-1},
$\X_S\in \Phi_r^*$ holds.
Note that $\X_S$ is different from any one in $S$.
Thus the result holds by applying Lemma~\ref{le5-3-1} repeatedly.
\endproof
\vspace{0.2 cm}
\noindent {\bf Remark}: \relabel{rem5-1}
By the definition in Page~\pageref{Qr},
$Q_r$ is a graph in $\Phi_r$
with an equivalent set $\{a_1a_2,b_1b_2\}$.
For any $S=\{G_1,G_2,\cdots,G_r\}$, where $G_i\in \Phi_r$
for all $i=1,2,\cdots,r$,
if $G_1$ is the graph $Q_r$ and $w_1\notin \{a_1,a_2,b_1,b_2\}$,
then it is not difficult to verify that
$\{a_1a_2,b_1b_2\}$ is an equivalent set of $\X_S$.
Furthermore, if $G_j-w_j$ is not bipartite
for some $j$ with $2\le j\le r$,
then $\X_S-\{a_1a_2,b_1b_2\}$
is not bipartite,
implying that $\X_S\in \Psi_r^*$ when $r\ge 4$.
\vspace{0.2 cm}
Theorem~\ref{main3}
follows directly from Lemmas~\ref{le5-3-1}
and~\ref{le5-3-2}.
\iffalse
\begin{thm}\relabel{th5-3-1}
For any odd integer $r\ge 3$,
there are \red{infinitely} many $r$-connected and
$r$-regular graphs $G$ of class 1
with $\nF^*(G)\ne \emptyset$.
\end{thm}
\fi
\section*{Acknowledgements}
The work is partially supported by the China Scholarship Council
and the NTU AcRF project (RP 3/16 DFM) of Singapore.
\section{Introduction and statement of results}
One of the most useful ideas for the study of spaces of modular forms is to relate these finite-dimensional spaces to spaces spanned by polynomials. In particular, the theory of modular symbols and the Eichler-Shimura isomorphism provide a canonical cohomological theory for modular forms based on special polynomials (see \cite{PP} for an excellent summary of this theory). To be more precise, given any cusp form $f \in S_{k}$ of weight $k$ on $SL_{2}(\Z)$, the {\textit{period polynomial}} is the modular integral
\begin{equation} \label{PP}
r_{f}(X) := \int_{0}^{i \infty} f(\tau) (\tau -X)^{k-2} d \tau.
\end{equation}
These polynomials encode deep arithmetic information. Specifically, their coefficients are essentially the {\textit{critical $L$-values}} of $f$:
\begin{equation} \label{l}
r_{f}(X) = - \frac{(k-2)!}{(2 \pi i)^{k-1}} \sum_{m=0}^{k-2} \frac{(2 \pi i X)^{m}}{m!} L(f, k - m -1)=
\sum_{m=0}^{k-2}i^{m+k-1} \binom{k-2}{m} X^{m} \Lambda(f, k-m-1),
\end{equation}
where the completed $L$-function is given by
\[
\Lambda(f,s):= (2 \pi)^{-s} \Gamma(s) L(f, s).
\]
The completed $L$-function has an analytic continuation to $\C$ and satisfies the functional equation $\Lambda(f, s) = \epsilon(f) \Lambda(f, k-s)$ with $\epsilon(f) = \pm 1$. From this functional equation one can see that the critical values are the integer values inside the critical strip, namely $s=1, 2, \dots, k-1$. Deep conjectures such as the Birch and Swinnerton-Dyer conjecture and the Bloch-Kato conjecture in the case of central $L$-values and Beilinson conjecture for non-central $L$-values imply that these values contain important arithmetic information (see, e.g., \cite{BK, KZ}). Manin also showed that the $L$-values satisfy certain rationality conditions.
\begin{thm}[Manin \cite{Manin}]
Let $f$ be a normalized Hecke eigenform in $S_{k}$ with rational Fourier coefficients. Then there exist $\omega_{\pm}(f) \in \R$ such that
\begin{equation*}
\Lambda(f, s)/\omega_{+}(f), \ \Lambda(f, w)/\omega_{-}(f) \in \Q
\end{equation*}
for all $s, w$ with $1 \leq s, w \leq k-1$ and $s$ even, $w$ odd.
\end{thm}
For more details about the general philosophy of the arithmetic of the periods $\omega_{\pm}(f)$ see \cite{KZ}.
The functional equation endows the period polynomial with the relation
\[
r_{f}(X) = -i^{k} \epsilon(f) X^{k-2} r_{f} \left(- \frac{1}{X} \right).
\]
This ``self-inversive'' property shows that if $\rho$ is a zero of $r_{f}(X)$ then so is $-\frac{1}{\rho}$ and so the unit circle is a natural line of symmetry for the period polynomials just as the critical line is a natural line of symmetry for the completed $L$-function. For this reason the stipulation that all roots of the period polynomials lie on the unit circle has been termed the {\textit{Riemann hypothesis for period polynomials}} (RHPP). The first work on this subject is due to Conrey, Farmer, and Imamo\u{g}lu \cite{CFI}, who showed that the odd part of the period polynomial for any level $1$ Hecke eigenform, apart from five so-called ``trivial zeros", all lie on the unit circle. Shortly thereafter, El-Guindy and Raji \cite{ER} showed that the full period polynomial for any level $1$ eigenform satisfies RHPP. Recently, Jin, Ma, Ono, and Soundararajan \cite{JMOS} used a brilliant synthesis of analytic techniques to show that the RHPP is an even more general phenomenon. Namely, they showed that the RHPP holds for any Hecke eigenform for any congruence subgroup $\Gamma_{0}(N)$. Given the broad nature of these results, it is natural to ask if these are initial cases of a more general phenomenon. In two recent papers, Diamantis and the second author \cite{DR1, DR2} have explored a generalization of the RHPP which takes into account a cohomological period polynomial attached to higher $L$-derivatives. There they conjecture that a similar phenomenon always holds and prove some test cases of this conjecture. Here we generalize the period polynomials in another aspect and find that the RHPP still holds true.
Specifically, we consider the generalization to any Hilbert modular eigenform of parallel weight on the full Hilbert modular group. Some previous results have been given on the cohomology theory of Hilbert modular forms and their periods \cite{BDG, YH}, but to the best of the authors' knowledge no direct analogue of the period polynomials in this case has been written down in an explicit form in the literature (see also the second remark preceding Theorem~\ref{M}). In analogy with \eqref{PP} we propose
\begin{equation} \label{HPP}
r_{f}(X) := \int_{0}^{i \infty} \cdots \int_{0}^{i \infty} f( \tau)(N(\tau) - X)^{k-2} d \tau,
\end{equation}
where $f(\tau) = f(\tau_{1}, \dots, \tau_{n})$ is a parallel weight $k$ Hilbert modular eigenform for a number field $K$ of degree $n$ on the full Hilbert modular group and $N(\tau) = \tau_{1} \cdots \tau_{n}$, $d \tau = d \tau_{1} \cdots d \tau_{n}$. In further analogy with \eqref{l} we have
\begin{equation}
r_{f}(X) = (-1)^n (k-2)! \left(\frac{D_{K}}{(2 \pi i)^{n}} \right)^{k-1} \sum_{m=0}^{k-2} \frac{(-1)^{m(n+1)} \Gamma(k-m-1)^{n-1}}{m!} \left(\frac{(2 \pi i)^{n} X}{D_{K}} \right)^{m} L(f, k-m-1)
\end{equation}
or equivalently
\[
r_{f}(X) = \sum_{m=0}^{k-2} (-1)^{m} i^{n(k-m-1)}\binom{k-2}{m} X^{m} \Lambda(f, k-m-1),
\]
where $D_{K}$ is the discriminant of $K$ and $L(f,s)$ and $\Lambda(f,s)$ are defined for Hilbert modular forms in equations \eqref{HL} and \eqref{HLC} respectively.
\begin{rmk}
The definition of period polynomial for a Hilbert modular form given in equation \eqref{HPP} naturally extends the elliptic modular form definition by encoding the critical $L$-values of $f$ as coefficients. These polynomials however do not satisfy all of the period relations that the polynomials in equation \eqref{PP} satisfy. There is another natural definition of an $n$-variable function that satisfies the corresponding period relations in the Hilbert case, but it is less clear what arithmetic information the coefficients contain in this case.
\end{rmk}
\begin{rmk}
After writing this paper, the authors have learned that YoungJu Choie has also considered period polynomials for Hilbert modular forms in forthcoming work.
\end{rmk}
Our main result is as follows.
\begin{Thm}[The Riemann hypothesis for period polynomials of Hilbert modular forms] \label{M}
Let $f$ be a parallel weight $k$ Hilbert modular eigenform of degree $n$ on the full Hilbert modular group. Then all of the roots of $r_{f}(X)$ lie on the unit circle.
Moreover, as $k \to \infty$, the zeros of $r_{f}(X)$ become equidistributed on the unit circle.
\end{Thm}
\begin{rmk}
The above result seems provable for congruence subgroups as well. In particular, we have included the case when the Atkin-Lehner eigenvalue is $\epsilon(f)=-1$, and one would follow the same argument up until Equation (\ref{inequality}). From there, the factor in the conductor corresponding to the level would in fact lower the bounds on the weights of forms of larger level that we need to examine. However, at the time of this project, the existing infrastructure for Hilbert modular forms over cubic fields did not cover forms of larger level.
\end{rmk}
The paper is organized as follows. In Section~\ref{PrelimSection} we discuss the basic definitions and results on polynomial roots, computer calculations, and analytic number theory required for the proofs.
The proofs of the main results are then given in Section~\ref{Proofs}. Finally, we conclude with a discussion of examples and ideas for future directions in Section~\ref{FinalSection}.
\section*{Acknowledgements}
The authors thank Claudia Alfes, YoungJu Choie, David Farmer, Ahmad El-Guindy, Ken Ono, Vicen\unichar{539}iu Pa\unichar{537}ol, Wissam Raji, and Markus Schwagenscheidt for helpful comments.
\section{Preliminaries}\label{PrelimSection}
\subsection{Basic definitions}
In this subsection we will review the definitions of parallel integer weight Hilbert modular forms and their $L$-functions. For more details on the general theory, we refer the reader to the survey of Bruinier in \cite{Bruinier123}. Let $K$ be a number field of degree $n$ above $\Q$. Basic Galois theory implies that there exist $n$ distinct embeddings $K\hookrightarrow\C$, which we will denote by $a\mapsto a^{(j)}$ for $1\leq j\leq n$.
We will assume from here forward that $K$ is totally real. Define the \textit{norm} of an element by $N(x) := \prod_{j=1}^{n} x^{(j)}$ and the \textit{trace} of an element by $Tr(x) := \sum_{j=1}^{n} x^{(j)}$. Let $\mathfrak{d}_{K}$ be the \textit{different} of $K$ so that $N(\mathfrak{d}_{K}) =:D_{K}$ is the discriminant of $K$. The general linear group $GL_{2}(K)$ embeds into $GL_{2}(\R)^{n}$ via the real embeddings of $K$. Let $GL_2^+(K) \colonequals \{ \gamma \in GL_2(K): \det \gamma \gg 0\}$ be the subgroup of matrices with totally positive determinant. It acts on $\mathbb{H}^{n}$ via fractional linear transformations,
\[
\begin{pmatrix} a & b \\ c & d \end{pmatrix} \tau := \left( \frac{a\tau_{1} + b}{c\tau_{1} + d}, \frac{a^{(2)} \tau_{2} + b^{(2)}}{c^{(2)} \tau_{2} + d^{(2)}}, \dots, \frac{a^{(n)} \tau_{n} + b^{(n)}}{c^{(n)} \tau_{n} + d^{(n)}} \right),
\]
where $\tau=(\tau_{1}, \dots, \tau_{n}) \in \mathbb{H}^{n}$. If $\mathfrak{a}$ is a fractional ideal of $K$, we define the \textit{Hilbert modular group} corresponding to $\mathfrak{a}$ as
\[
\Gamma(\mathcal{O}_{K} \oplus \mathfrak{a}) := \left\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in GL_{2}^+(K): a, d \in \mathcal{O}_{K}, \ b \in \mathfrak{a}^{-1}, \ c \in \mathfrak{a} \right\}.
\]
Furthermore, define $\Gamma_{K} := \Gamma( \mathcal{O}_{K} \oplus \mathcal{O}_{K}) = GL_{2}^{+}(\mathcal{O}_{K})$, which we just call the {\it full Hilbert modular group}. For $\gamma = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in GL_{2}^+(K) \hookrightarrow GL_{2}(\R)^{n}$ and $\tau \in \mathbb{H}^{n}$ define the automorphic factor
\[
J(\gamma, \tau) := \det(\gamma)^{-1/2} N(c\tau + d) = \prod_{j=1}^{n} \det(\gamma_{j})^{-1/2} \left( c^{(j)} \tau_{j} + d^{(j)} \right),
\]
where $\gamma_{j} = \begin{pmatrix} a^{(j)} & b^{(j)} \\ c^{(j)} & d^{(j)} \end{pmatrix}$.
\begin{Definition} \label{Hilbert}
A holomorphic function $f\colon \mathbb{H}^{n} \to \C$ is called a holomorphic \textit{Hilbert modular form} of parallel integer weight $k= (k,k, \dots, k) \in \Z^{n}$ for $\Gamma_{K}$ if for all $\gamma = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \Gamma_{K}$,
\[
f(\gamma \tau) = J(\gamma, \tau)^{k} f(\tau) = \det(\gamma)^{-k/2} N(c\tau + d)^{k} f(\tau).
\]
\end{Definition}
We denote the space of holomorphic Hilbert modular forms of weight $k$ on $\Gamma_{K}$ by $M_{k}(\Gamma_{K})$. If $\mathcal{O}_{K}$ has a unit of negative norm then $M_{k}(\Gamma_{K}) = \{0\}$ for $k$ odd, so we will suppose that $k$ is even. If $f \in M_{k}(\Gamma_{K})$ vanishes at the cusps we call it a cusp form and denote this space by $S_{k}(\Gamma_{K})$. Each $f \in M_{k}(\Gamma_{K})$ has a Fourier expansion of the form
\begin{equation}
\label{expansion1}
f(\tau) =a(0) + \sum_{\substack{\nu \in \mathfrak{d}_{K}^{-1} \\ \nu \gg 0}} a(\nu) e^{2 \pi i Tr( \nu \tau)},
\end{equation}
where $Tr(\nu \tau) = \sum_{j=1}^{n} \nu^{(j)} \tau_{j}$ and $\nu \gg 0$ means that $\nu$ is totally positive. Since $\nu \in \mathfrak{d}_{K}^{-1}$, each ideal $\mathfrak{n}=\nu \mathfrak{d}_{K}$ is integral. When the forms have parallel even weight, $a(\nu)=a(\nu \eta)$ for any totally positive unit $\eta \in \mathcal{O}_K^\times$ and we may rewrite \eqref{expansion1} as
\[
f(\tau) =a(0) + \sum_{\substack{\mathfrak{n} \subset \mathcal{O}_K \\ \mathfrak{n} \ne 0}} a(\mathfrak{n}) \sum_{\substack{\eta \in \mathcal{O}_{K}^{\times} \\ \eta\gg 0}}e^{2 \pi i Tr( \nu \eta \tau)},
\]
and we may identify each modular form by the coefficients $a(\mathfrak{n})$.
Therefore, $f \in S_{k}(\Gamma_{K})$ has an associated $L$-function given as a Dirichlet series by
\begin{equation} \label{HL}
L(f,s) := \sum_{\substack{ \nu \in \mathfrak{d}_{K}^{-1}/\mathcal{O}_{K}^{\times} \\ \nu \gg 0}} a(\nu) N(\nu)^{-s} = \sum_{\substack{ \mathfrak{n} \subset \mathcal{O}_{K} \\ \mathfrak{n} \neq 0}} a(\mathfrak{n}) N(\mathfrak{n})^{-s}.
\end{equation}
The completed $L$-function is defined by
\begin{equation} \label{HLC}
\Lambda(f,s) := D_{K}^{s} (2 \pi)^{-ns} \Gamma(s)^{n} L(f,s)
\end{equation}
and also has the $n$-fold integral representation
\[
\Lambda(f,s) = \int_{0}^{\infty} \cdots \int_{0}^{\infty} f(iy) N(y)^{s-1} dy.
\]
This completed $L$-function satisfies the functional equation
\begin{equation}
\label{func}
\Lambda(f,s) = \epsilon(f) \Lambda(f, k-s),
\end{equation}
where $\epsilon(f) \in \{ \pm 1\}$.
\subsection{Computing period polynomials}
The proofs of our main results consist of two parts. First, analytic techniques are used to show that the theorem holds once the weight, the degree of the field, or the discriminant is sufficiently large. Then, detailed computer calculations are used to verify the remaining small cases. As these calculations are intensive and use newly developed code, we provide a detailed description of this procedure.
We carry out our computations in Magma \cite{magma}. The main ingredients for our computations are obtaining eigenbases for subspaces of cusp forms and creating $L$-functions for these forms. The reader can find details about constructions of $L$-functions in Magma in the handbook available online. We summarize the construction for convenience. Every $L$-function in Magma is created using the command $\bfunc{LSeries}$, which relies on the functional equation (\ref{func}) and the factors in the definition of the completed $L$-function such as the conductor (in our case given by the discriminant), the weight, the $\Gamma$-factor, as well as finitely many coefficients. The coefficients can either be given at each integer, or at each prime, and then generate the Euler product. The number of coefficients required to find $L$-values up to a given precision grows with the weight and the degree of the field.
To create $L$-series of Hilbert modular forms over quadratic fields, we use the environment for Hilbert modular forms in Magma, and a slight modification of the command $\bfunc{LSeries}$. In particular, the coefficients of the $L$-function usually come embedded in an extension of $\Q$, and $\bfunc{LSeries}$ only computes their first complex embedding. We modify the function to allow for the other complex embeddings as well.
In the case of Hilbert modular forms over cubic fields, one cannot get all the necessary cusp forms in the pre-existing environment in Magma. Instead, we use the package \cite{hmf} which implements Fourier expansions of Hilbert modular forms. In our construction of $L$-series, we input the Fourier coefficients at each prime, which generate the Euler product. We describe the algorithm for finding bases of Hilbert modular forms over cubic fields in Section \ref{cubic}.
\subsection{Some basic results on $L$-functions and in the theory of self-inversive polynomials}
Our proof requires some preliminary results on Hilbert modular $L$-functions and is a generalization of the method in \cite{JMOS}. The completed $L$-function $\Lambda(f,s)$ extends to an entire function of order one. Its zeros are predicted to lie on the line ${\rm{Re}}(s) =\frac{k}{2}$, but are known to lie in the strip $\left| {\rm{Re}}(s) - \frac{k}{2} \right| < \frac{1}{2}$. We then require the famous Hadamard factorization:
\begin{equation*}
\Lambda(f,s) = e^{A+Bs} \prod_{\rho} \left( 1 - \frac{s}{\rho} \right)e^{\frac{s}{\rho}},
\end{equation*}
where the product is over all the zeros of $\Lambda(f,s)$. Note that if $\rho$ is a zero, then so are $\bar{\rho}$ and $k-\rho$. Using the fact that $\Lambda(f,s)$ is real-valued on the real line and the functional equation, we obtain
\begin{equation*}
B=-\sum_{\rho} {\rm{Re}} \left(\frac{1}{\rho} \right) = - \sum_{\rho} \frac{{\rm{Re}}(\rho)}{|\rho|^2}.
\end{equation*}
Using this we have that
\begin{equation} \label{hadamard}
\Lambda(f,s) = e^{A} \prod_{\rho \in \R} \left( 1 - \frac{s}{\rho} \right) \prod_{{\rm{Im}}(\rho)>0} \left| 1 - \frac{s}{\rho} \right|^{2}
\end{equation}
for real $s$.
This is the main ingredient for the following key result.
\begin{Lemma} \label{ineq}
The function $\Lambda(f,s)$ is monotonically increasing for $s \geq \frac{k}{2} + \frac{1}{2}$. Furthermore,
\begin{equation*}
0 \leq \Lambda \left(f, \frac{k}{2} \right) \leq \Lambda \left(f, \frac{k}{2} +1 \right) \leq \Lambda \left(f, \frac{k}{2} + 2 \right) \leq \dots.
\end{equation*}
If $\epsilon(f) =-1$, then $\Lambda \left(f, \frac{k}{2} \right) =0$ and
\begin{equation*}
0 \leq \Lambda \left( f, \frac{k}{2} +1 \right) \leq \frac{1}{2} \Lambda \left(f, \frac{k}{2} + 2 \right) \leq \frac{1}{3} \Lambda \left(f, \frac{k}{2} +3 \right) \leq \dots.
\end{equation*}
\end{Lemma}
\begin{proof}
The proof follows exactly mutatis mutandis from Lemma 2.1 in \cite{JMOS}.
\end{proof}
We also require the following estimate.
\begin{Lemma} \label{L}
If $0<a<b$ and $f$ is a parallel weight $k$ newform of degree $n$, then we have
\begin{equation*}
\frac{L \left( f, \frac{k+1}{2} +a \right)}{L \left(f, \frac{k+1}{2} +b \right)} \leq \frac{\zeta(1+a)^{2n}}{\zeta(1+b)^{2n}}.
\end{equation*}
\end{Lemma}
\begin{proof}
We have
\begin{equation*}
-\frac{L'}{L}(f, s) =: \sum \frac{\Lambda_{f}(\mathfrak{a})}{N(\mathfrak{a})^{s}} = \sum \frac{c_{f}(m)}{m^s}.
\end{equation*}
Since $f$ is a parallel weight $k$ newform, by the Ramanujan bound \cite{DB} we have $\Lambda_{f}(\mathfrak{a}) \leq 2 N(\mathfrak{a})^{\frac{k-1}{2}} \Lambda_{K}(\mathfrak{a})$, where $\Lambda_{K}(\mathfrak{a})$ is the von Mangoldt function for the field $K$. We also know that if $\sum \frac{\Lambda_{K}(\mathfrak{a})}{N(\mathfrak{a})^s} = \sum \frac{c_{K}(m)}{m^s}$, then $c_{K}(m) \leq n \Lambda(m)$, where $\Lambda(m)$ is the usual von Mangoldt function. Thus we have
\begin{align*}
-\frac{L'}{L}(f, s) &\leq 2 \sum \frac{N(\mathfrak{a})^{\frac{k-1}{2}} \Lambda_{K}(\mathfrak{a})}{N(\mathfrak{a})^s} = 2 \sum \frac{c_{K}(m)}{m^{s-\frac{k-1}{2}}} \\
& \leq 2n \sum \frac{\Lambda(m)}{m^{s-\frac{k-1}{2}}} = -2n \frac{\zeta'}{\zeta} \left(s - \frac{k-1}{2} \right).
\end{align*}
We then have
\begin{align*}
\frac{L \left(f, \frac{k+1}{2} +a \right)}{L \left( f, \frac{k+1}{2} +b \right)} &= \exp \left( \int_{a}^{b} - \frac{L'}{L} \left(f, \frac{k+1}{2} +t \right) dt \right) \\
& \leq \exp \left( 2n \int_{a}^{b} - \frac{\zeta'}{\zeta}(1+t) dt \right) = \frac{\zeta(1+a)^{2n}}{\zeta(1+b)^{2n}}.
\end{align*}
\end{proof}
We will also use the following theorem for determining whether a polynomial has all of its roots on the unit circle \cite{LS}.
\begin{Thm} \label{roots}
A necessary and sufficient condition for all the zeros of a polynomial $P(z) = \sum_{n=0}^{d} a_{n}z^{n}$ with complex coefficients to lie on the unit circle is that there exists a polynomial $Q(z)$, with all of its zeros inside or on the unit circle, such that
\begin{equation*}
P(z) = z^{m} Q(z) + e^{i \theta} Q^{*}(z),
\end{equation*}
where for a polynomial $g(z)$ of degree $d$, $g^{*}(z) = z^{d} \overline{g}(1/z)$.
\end{Thm}
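To illustrate Theorem~\ref{roots} concretely, one can verify a tiny instance numerically. The specific choice $Q(z)=2z+1$, $m=1$, $\theta=0$ below is ours, purely for illustration, and is not tied to any particular modular form.

```python
import cmath

# Q(z) = 2z + 1 has its single root -1/2 strictly inside the unit circle,
# and Q*(z) = z * conj(Q(1/conj(z))) = z + 2.  With m = 1 and theta = 0,
#     P(z) = z * Q(z) + Q*(z) = 2z^2 + 2z + 2,
# so the theorem predicts that both roots of P lie on the unit circle.
a, b, c = 2.0, 2.0, 2.0
disc = cmath.sqrt(b * b - 4 * a * c)
roots = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]
moduli = [abs(r) for r in roots]
```

Indeed the roots are $e^{\pm 2\pi i/3}$, both of modulus one.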
In order to use this theorem let $m := \frac{k-2}{2}$ and define the two important polynomials $P_{f}(X)$ and $Q_{f}(X)$ by
\begin{equation}
P_{f}(X) := \frac{1}{2} \binom{2m}{m} \Lambda \left(f, \frac{k}{2} \right) + \sum_{j=1}^{m} \binom{2m}{m+j} \Lambda \left(f, \frac{k}{2} +j \right) X^{j}
\end{equation}
and
\begin{equation}
Q_{f}(X) := \frac{1}{\Lambda(f,2m+1)} P_{f}(X).
\end{equation}
We will be able to apply Theorem~\ref{roots} in our situation as a short calculation shows that
\begin{equation*}
r_{f}(i^{n+2} X) = i^{n(2m+1)} \epsilon(f) \Lambda(f, 2m+1) X^m \left[ Q_{f}(X) + \epsilon(f) Q_{f}\left( \frac{1}{X} \right) \right].
\end{equation*}
\section{Proof of the main results}\label{Proofs}
\subsection{The cases $m=1$ and $m=2$}
The arguments here for small weights exactly mirror those in \cite{JMOS}. For this reason we will just sketch out the proofs and refer the reader to \cite{JMOS} for more details. For weight $k=4$ we have $m=1$ and $P_{f}(X) = \Lambda(f,2) + \Lambda(f,3)X$. If $\epsilon(f)=-1$, then $\Lambda(f,2)=0$ so we have
\begin{equation*}
P_{f}(X) - P_{f} \left(\frac{1}{X} \right) = \Lambda(f,3) \left(X - \frac{1}{X} \right),
\end{equation*}
which clearly has roots at $X=\pm 1$. If $\epsilon(f) =1$, then
\begin{align*}
P_{f}(X) + P_{f} \left(\frac{1}{X} \right) &= 2\Lambda(f,2) + \Lambda(f,3) \left( X + \frac{1}{X} \right) = 2 \Lambda(f,2) + 2\Lambda(f,3) \cos(\theta),
\end{align*}
where $X=e^{i \theta}$. By Lemma \ref{ineq} we know $\Lambda(f,2) < \Lambda(f,3)$ so the equation
\begin{equation*}
\cos(\theta) = - \frac{\Lambda(f,2)}{\Lambda(f,3)}
\end{equation*}
has two solutions with $\theta \in [0, 2 \pi)$.
For $k=6$, we have $m=2$ so
\begin{equation*}
P_{f}(X) = 3 \Lambda(f,3) + 4 \Lambda(f,4)X + \Lambda(f,5)X^2.
\end{equation*}
If $\epsilon(f)=-1$, then $\Lambda(f,3)=0$ and we have
\begin{align*}
P_{f}(X) - P_{f} \left(\frac{1}{X} \right) &= 4 \Lambda(f,4) \left(X - \frac{1}{X} \right) + \Lambda(f,5) \left( X^2 - \frac{1}{X^2} \right) \\
&= \left(X - \frac{1}{X} \right) \left[ 4\Lambda(f,4) + \Lambda(f,5) \left( X + \frac{1}{X} \right) \right].
\end{align*}
We clearly have $X = \pm 1$ as two solutions. By Lemma \ref{ineq} again we have $2 \Lambda(f,4)< \Lambda(f,5)$ so the two solutions to $\cos(\theta) = - \frac{2 \Lambda(f,4)}{\Lambda(f,5)}$ for $\theta \in [0, 2 \pi)$ give two other roots on the unit circle. If $\epsilon(f)=1$, letting $X= e^{i \theta}$ we have
\begin{equation*}
P_{f}(X) + P_{f} \left(\frac{1}{X} \right) = 6 \Lambda(f,3) + 8\Lambda(f,4) \cos(\theta) + 2 \Lambda(f,5) \cos(2 \theta).
\end{equation*}
We aim to show this has two zeros with $\theta \in [0, \pi)$ and thus four zeros with $\theta \in [0, 2 \pi)$. Noting
\begin{align*}
\frac{d}{d \theta} \left[P_{f}(e^{i \theta}) + P_{f}(e^{-i \theta}) \right] &= -8 \sin(\theta) \left( \Lambda(f,4) + \Lambda(f, 5) \cos(\theta) \right),
\end{align*}
we have critical points at $0, \pi$, and the solution $\theta_{0} \in [0, \pi)$ to $\cos(\theta) = - \frac{\Lambda(f, 4)}{\Lambda(f,5)}$. To ensure there are two roots in $[0, \pi)$ we need $P_{f}(e^{i \theta}) + P_{f}(e^{-i \theta})$ to be positive at $\theta =0$ and $\pi$ and negative at $\theta= \theta_{0}$. We clearly have positivity at $\theta=0$. Positivity at $\theta = \pi$ is equivalent to
\begin{equation*}
3\Lambda(f, 3) + \Lambda(f,5) > 4 \Lambda(f,4)
\end{equation*}
while negativity at $\theta=\theta_{0}$ is equivalent to
\begin{equation*}
2 \Lambda(f, 4)^2 + \Lambda(f,5)^2 \geq 3 \Lambda(f,3) \Lambda(f,5).
\end{equation*}
By Lemma \ref{ineq} and a result of Waldspurger \cite{W} we know that $\Lambda(f,3), \Lambda(f,4)$, and $\Lambda(f,5)$ are all non-negative. We can therefore use Lemma 4.1 in \cite{JMOS} as it is used there to prove the necessary inequalities.
\subsection{The case of large weight}
We will now prove Theorem \ref{M} for all but finitely many cases. We will compare $Q_{f}(X)$ to $X^m$ and use Rouch\'{e}'s Theorem to show $Q_{f}(X)$ has all its zeros inside the unit circle. Once this is established we apply Theorem \ref{roots} to complete the proof. On $|X|=1$ we have
\begin{align}
\label{inequality}
\begin{split}
Q_{f}(X) -X^m &=\frac{1}{2} \frac{\Gamma(m+1)^{n-2}}{\Gamma(2m+1)^{n-1}} \left(\frac{(2 \pi)^{n}}{D_{K}} \right)^{m} \frac{L(f, m+1)}{L(f, 2m+1)} \\
&+ \sum_{j=1}^{m-1} \frac{1}{j!} \left( \frac{(2 \pi)^{n}}{D_{K}} \right)^{j} \left(\frac{\Gamma(2m+1-j)}{\Gamma(2m+1)} \right)^{n-1} \frac{L(f, 2m+1-j)}{L(f, 2m+1)}.
\end{split}
\end{align}
We now use Lemma \ref{L}, the fact that $\zeta(1/2)^2 \leq \frac{11}{5}$, and Minkowski's bound
\begin{equation*}
D_{K} \geq \left(\frac{n^n}{n!} \right)^2
\end{equation*}
to obtain
\begin{align*}
\left| Q_{f}(X) - X^m \right| & \leq \frac{1}{2} \frac{\Gamma(m+1)^{n-2}}{\Gamma(2m+1)^{n-1}} \left(\frac{(2 \pi)^{n}}{D_{K}} \right)^{m} \left(\frac{\zeta(1/2)}{\zeta(1/2 +m)} \right)^{2n} \\
&+ \sum_{j=1}^{m-1} \frac{1}{j!} \left( \frac{(2 \pi)^{n}}{D_{K}} \right)^{j} \left(\frac{\Gamma(2m+1-j)}{\Gamma(2m+1)} \right)^{n-1} \left(\frac{\zeta(1/2 + m-j)}{\zeta(1/2 +m)} \right)^{2n} \\
& \leq \frac{1}{2} \frac{\Gamma(m+1)^{n-2}}{\Gamma(2m+1)^{n-1}} \left(\frac{(2 \pi)^{n} (n!)^2}{n^{2n}} \right)^{m} \left(\frac{11}{5} \right)^{n} \\
&+ \sum_{j=1}^{m-1} \frac{1}{j!} \left( \frac{(2 \pi)^{n} (n!)^2}{n^{2n}} \right)^{j} \left(\frac{\Gamma(2m+1-j)}{\Gamma(2m+1)} \right)^{n-1} \left(\frac{\zeta(1/2 + m-j)}{\zeta(1/2 +m)} \right)^{2n} \\
&=: T_{n}(m)
\end{align*}
Therefore we need to show that $T_{n}(m) <1$ for $ n \geq 2$ and $m$ big enough. The numbers $T_{n}(m)$ are decreasing as $n$ increases because each individual term is decreasing. We will now show that $T_{n}(m)$ is also decreasing in $m$. Therefore once we have $T_{2}(m_{0}) <1$ for some $m_{0}$, then we automatically have that $T_{n}(m)<1$ for any $n \geq 2$ and $m \geq m_{0}$. We will do this by showing $T_{n}(m+1) - T_{n}(m) \leq 0$. The term outside the sum in $T_{n}(m+1) - T_{n}(m)$ is
\begin{align*}
&\frac{1}{2} \frac{\Gamma(m+1)^{n-2}}{\Gamma(2m+1)^{n-1}} \left(\frac{(2 \pi)^{n} (n!)^2}{n^{2n}} \right)^{m} \left( \frac{11}{5} \right)^{n} \left[ \frac{(2 \pi)^{n} (n!)^{2}}{2^{n-1} (m+1)(2m+1)^{n-2} n^{2n}} -1 \right],
\end{align*}
which is nonpositive as soon as $m \geq 4$ when $n=2$, and for all $m \geq 1$ when $n \geq 3$. Each term in the sum looks like
\begin{align} \label{z}
\begin{split}
&\frac{1}{j!} \left( \frac{(2 \pi)^{n} (n!)^{2}}{n^{2n}} \right)^{j} \left( \frac{\Gamma(2m+1-j)}{\Gamma(2m+1)} \right)^{n-1} \left( \frac{\zeta(1/2 + m -j)}{\zeta(1/2 + m)} \right)^{2n} \\
& \times \left[ \left( \frac{(2m+2-j)(2m+1-j)}{(2m+2)(2m+1)} \right)^{n-1} \left( \frac{\zeta(1/2 + m) \zeta( 3/2 + m -j)}{\zeta(3/2 + m) \zeta(1/2 + m -j)} \right)^{2n} -1 \right].
\end{split}
\end{align}
We can use the facts that
\begin{equation*}
\frac{1}{\zeta(3/2 + m)}, \frac{\zeta(1/2 + m)}{\zeta(1/2 + m -j)} \leq 1, \quad \zeta(3/2 + m -j)^{2} \leq \frac{8}{5} 2^{j-m} +1
\end{equation*}
to show that each term is less than or equal to zero once
\begin{equation*}
\left( \frac{(2m+2-j)(2m+1-j)}{(2m+2)(2m+1)} \right)^{n-1} \left( \frac{8}{5} 2^{j-m} +1 \right)^{n} \leq 1.
\end{equation*}
The last term to satisfy this inequality is the $j=1$ term. This case is equivalent to $\left( \frac{m}{m+1} \right)^{n-1} \left( \frac{16}{5} 2^{-m} + 1 \right)^{n} \leq 1$ which one can check is true once $m \geq 6$ for any $n \geq 2$. Once we know the inequality is satisfied for $m \geq 6$, we can go back to \eqref{z} and check the remaining values of $m$ directly. We find that equation \eqref{z} is negative for any $m \geq 1$ for $n \geq 2$. The last thing to deal with is the fact that $T_{n}(m+1)$ has one extra factor in the sum compared to $T_{n}(m)$. We will pair this term with the $j=m-1$ terms. Using similar inequalities as above we must show that
\begin{align*}
&\frac{(2 \pi)^{n} (n!)^2}{m n^{2n}} \left( \frac{m+2}{(2m+2)(2m+1)} \right)^{n-1} \left(\frac{\zeta(1/2 + m)}{\zeta(3/2 +m)} \right)^{2n} \\
&+ \left(\frac{(m+3)(m+2)}{(2m+2)(2m+1)} \right)^{n-1} \left(\frac{\zeta(1/2 +m) \zeta(5/2)}{\zeta(3/2 +m) \zeta(3/2)} \right)^{2n} \leq 1,
\end{align*}
which occurs once $m \geq 3$ for $n = 2$ and $m \geq 2$ for $n \geq 3$. We have shown that $T_{n}(m)$ is decreasing in both $n$ and $m$ so we just need to find an $m_{0}$ such that $T_{2}(m_{0}) <1$. A computer calculation shows this first occurs for $m=8$. For higher degrees we can run this calculation again to reduce the number of cases that need to be checked explicitly. For example $T_{3}(m)<1$ once $m \geq 5$ and $T_{n}(m)<1$ for $m \geq 3$ once $n \geq 5$. We reduce the number of remaining cases by allowing the discriminant to vary. For $n=2$, we have the following table that shows the inequality is satisfied once $m$ is big enough depending on the discriminant.
\begin{center}
\begin{tabular}{ | c | c | c | c | c | c | c | c | c | c | c | }
\hline
$D_{K}$ & $5$ & $8$ & $12$ & $13$ & $17$ & $21$ & $24$ & $29$ & $33$ & $\geq 35$ \\ \hline
$m \geq$ & $7$ & $6$ & $5$ & $5$ & $4$ & $4$ & $4$ & $4$ & $4$ & $3$ \\ \hline
\end{tabular}
\end{center}
Similarly, for $n=3$ the inequality is satisfied for $m \geq 3$ once we have $D_{K} \geq 84$. The only other case we need to check is $n=4$. The inequality is true for $m \geq 3$ once we have $D_{K} \geq 209$ and the totally real quartic field with smallest discriminant has discriminant equal to $725$. The fact that there are not many cases to check explicitly is not too surprising after some reflection; increasing any aspect such as degree of the number field, discriminant, or weight of the form helps the polynomial satisfy the analytic conditions needed to have all its roots on the unit circle.
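The threshold computation for $T_{n}(m)$ is elementary to reproduce. The following is a minimal pure-Python sketch (the Euler--Maclaurin approximation of $\zeta$ and the function names are our own choices), with $D_{K}$ replaced by the Minkowski lower bound exactly as in the estimate above.

```python
import math

def zeta(s, N=10000):
    # Truncated Dirichlet series plus a simple Euler-Maclaurin tail
    # correction; amply accurate for the real arguments s >= 3/2 used here.
    return sum(n ** -s for n in range(1, N)) + N ** (1 - s) / (s - 1) + 0.5 * N ** -s

def T(n, m):
    # The bound T_n(m) on |Q_f(X) - X^m| for |X| = 1, with D_K replaced
    # by the Minkowski bound (n^n / n!)^2.
    c = (2 * math.pi) ** n * math.factorial(n) ** 2 / n ** (2 * n)
    total = (0.5 * math.gamma(m + 1) ** (n - 2) / math.gamma(2 * m + 1) ** (n - 1)
             * c ** m * (11 / 5) ** n)
    for j in range(1, m):
        total += (c ** j / math.factorial(j)
                  * (math.gamma(2 * m + 1 - j) / math.gamma(2 * m + 1)) ** (n - 1)
                  * (zeta(0.5 + m - j) / zeta(0.5 + m)) ** (2 * n))
    return total
```

For instance, $T_{2}(7) \approx 1.16$ while $T_{2}(8) \approx 0.90$, recovering the threshold $m=8$ quoted above.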
\subsection{Remaining cases}
We check manually the finitely many cases not covered by the previous subsection. Once we have obtained the spaces of modular forms, we check that the roots are on the unit circle by testing the inequality $|Q_f(X)-X^m|<1$ as in Equation (\ref{inequality}). The inequality holds for all but $11$ polynomials associated to forms over quadratic fields. In such cases, we check that the trigonometric polynomials $P_f(X)+\epsilon(f)P_f\left(\frac{1}{X}\right)$ with $X=e^{i\theta}$ have the necessary number of roots on the interval $[0, \pi)$ as in \cite{JMOS}.
All the spaces for the quadratic fields are available in Magma. For small enough discriminant of the field and weight of the space, such computations can be done relatively fast on a personal computer. In the quadratic case, checking all the forms with precision of 15 decimal places took 4 hours on a 4 core Intel(R) Core(TM) i7-4720HQ CPU $@$ 2.60GHz personal computer with 8GB of memory.
\begin{ex}
Let $K=\Q(\sqrt{5})$. For weight $k=8$, we have a unique cusp form $f$ whose period polynomial is
\begin{align*}
r_f(X) &\approx -0.273825X^6 - 0.371966X^5 - 0.329503X^4 - 0.297572X^3 \\
&- 0.329503X^2 - 0.371966X - 0.273825,
\end{align*}
which we can write as $r_f(X) \approx -0.273825(X^6+\frac{361}{300}X^4+\frac{361}{300}X^2+1) - 0.371966(X^5+\frac{4}{5}X^3+X)$. We obtain that $\Lambda(f, 6)=\frac{25}{6}\Lambda(f,4)$, as computed by Yoshida in \cite{YH}.
\end{ex}
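The conclusion of Theorem~\ref{M} can also be double-checked directly from these numerical coefficients. Since $r_{f}$ here has real palindromic coefficients, $e^{-3i\theta} r_{f}(e^{i\theta})$ is real-valued, and counting its sign changes over $[0,2\pi)$ gives a lower bound on the number of roots on the unit circle; six sign changes for this degree-six polynomial therefore certify that all roots lie there. A short Python sketch (the sampling resolution is our choice):

```python
import cmath
import math

# Approximate coefficients of r_f(X) from the example above (constant term first).
coeffs = [-0.273825, -0.371966, -0.329503, -0.297572,
          -0.329503, -0.371966, -0.273825]

def circle_sign_changes(coeffs, samples=4000):
    # For a degree-2m polynomial with real palindromic coefficients,
    # e^{-i m theta} P(e^{i theta}) is real on the unit circle; each
    # sign change certifies a root there.
    m = (len(coeffs) - 1) // 2
    def val(theta):
        z = cmath.exp(1j * theta)
        p = sum(a * z ** k for k, a in enumerate(coeffs))
        return (cmath.exp(-1j * m * theta) * p).real
    changes, prev = 0, val(0.0)
    for s in range(1, samples + 1):
        cur = val(2 * math.pi * s / samples)
        if prev * cur < 0:
            changes += 1
        prev = cur
    return changes
```

Running `circle_sign_changes(coeffs)` returns $6$, matching the degree, so all six roots of this $r_f$ lie on the unit circle.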
\begin{ex}
Let $K=\Q(\sqrt{33})$. The cusp subspace $S_8(\Gamma_K)$ has three irreducible Hecke submodules, one of which is one-dimensional. Let $g \in S_8(\Gamma_K)$ be the eigenform corresponding to this submodule. Then the period polynomial is approximately
\begin{equation*}
\begin{aligned}
r_g(X)\approx &-140158.98X^6 - 24794.709X^5 - 2025.1361X^4
\\
&- 130.74X^3 - 2025.1361X^2 - 24794.709X- 140158.98.
\end{aligned}
\end{equation*}
\end{ex}
The reason for the small precision in the quadratic case is due to slow computations of Hecke eigenvalues. In the cases of fields with small discriminants $D_K\le 17$ and narrow class number $h_+=1$, we were able to increase the precision by using the same technique described for creating spaces of Hilbert modular forms over cubic fields. However, this involved many tedious tests for finding generators of spaces, since the number of generators rises quickly with the discriminant. We illustrate an example of weight $22$, which was not reachable using the existing infrastructure.
\begin{ex}
Let $K=\Q(\sqrt{5})$. Consider the eigenform $h \in S_{22}(\Gamma_K)$ with Fourier expansion in Table \ref{5x22}. The first row of the table gives totally positive generators of the first few ideals with $\omega$ a root of the polynomial $x^2-x-1$, the second row the norm of the ideal, and the third row the coefficient corresponding to the given ideal.
\begin{table}[h]
\caption{Fourier expansion of an eigenform $h \in S_{22}(\Gamma_K)$ over the field $K=\Q(\sqrt{5})$}
\label{5x22}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
$\mathfrak{n}$& (0) &(1) & $(2)$& $(\omega+2)$ & $(3)$& $(\omega+3)$ & $(4)$ \\ \hline
$N(\mathfrak{n})$ & 0 &1 &4&5 & $9$& $11$ & 16 \\ \hline
$a(\mathfrak{n})$&0 &1 & -4111360 &21640950 &-4319930070 &-94724929188 & 12505234538496 \\ \hline
\end{tabular}
\end{table}
The roots of the period polynomial $r_h(X)$, seen in Figure \ref{5x22roots}, are distributed nearly uniformly on the unit circle.
\begin{figure}
\caption{Roots of the period polynomial $r_h(X)$ of the eigenform $h \in S_{22}(\Gamma_K)$ over $K=\Q(\sqrt{5})$.}
\label{5x22roots}
\begin{center}
\includegraphics[scale=0.6]{Roots2222} $\quad$
\end{center}
\end{figure}
\end{ex}
In the cubic case, we only need to consider the two totally real fields with discriminants $49$ and $81$, and the only special case is $m=3$. The algorithm we use to reconstruct the spaces of weight $8$ for cubic fields is outlined in Section \ref{cubic}, along with further examples. In the cubic field case, the inequality $|Q_f(X)-X^m|<1$ as in Equation (\ref{inequality}) held for all the polynomials.
\section{Examples and remarks}\label{FinalSection}
\subsection{The case of cubic fields}
\label{cubic}
We create the spaces of Hilbert modular forms using the package \cite{hmf}, which implements Fourier expansions of Hilbert modular forms, and where we can perform operations such as multiplication and applying Hecke operators. The main source of forms in the package are Eisenstein series. In general, one cannot generate full spaces of Hilbert modular forms just using Eisenstein series, but we were able to use Hecke operators on products of Eisenstein series to generate spaces of low weights $2 \le k \le 8$ for the two cubic fields we needed to investigate. In particular, we obtain Fourier expansions of forms that generate spaces of weight $k$ using Algorithm \ref{cubicalg} recursively for parallel even weights starting with $k=2$.
\begin{algorithm}
\caption{Algorithm for reconstructing full spaces of weight $k$ for a cubic field $K$}\label{cubicalg}
\begin{algorithmic}[1]
\Procedure{FullSpaceAndGenerators}{$K$}
\State Let $d_k \colonequals \dim_\C(M_k(\Gamma)) $ (see \cite[Addendum (3.14)]{TV}); \Comment{Actual dimension}
\State Let $E_k$ be the Eisenstein series of parallel weight $k$;
\State Compute the set $R$ of forms of weight $k$ obtained from multiplying forms of lower weights in $\operatorname{Gen}_i$ for $i<k$;
\State $\operatorname{Gen}_k=\{E_k\} \cup R$;
\State Let $M$ be the vector space generated by $\operatorname{Gen}_k$;
\Repeat \Comment{Keep adding new generators}
\State Let $g\colonequals T_{\mathfrak{p}}(f)$ for forms $f \in \operatorname{Gen}_k$ and primes $\mathfrak{p}$ of increasing norm;
\State Let $V$ be the vector space generated by $\operatorname{Gen}_k \cup \{g\}$;
\State If $\dim_\C(V)>\dim_\C(M)$ then $\operatorname{Gen}_k=\operatorname{Gen}_k \cup \{g\}$;
\State $M = $ the vector space generated by $\operatorname{Gen}_k$
\Until{$\dim_\C(M)=d_k$} \Comment{until we have filled the space}
\State \textbf{return} $M, \operatorname{Gen}_k$;
\EndProcedure
\end{algorithmic}
\end{algorithm}
In step (2) of Algorithm \ref{cubicalg}, the dimensions are given by the following Hilbert series from \cite[Addendum (3.14)]{TV}, where the space of weight $k$ corresponds to the coefficient for $t^{k/2}$. For the cubic field with $D_K=49$, we have the series
\[\frac{(1+t^4+3t^5+5t^6+4t^7+3t^8+3t^9+3t^{10}+2t^{11}-2t^{13}+t^{14})}{(1-t)(1-t^2)(1-t^3)(1-t^7)}.\] For this field, the spaces of weights $k=2,4,6,8$ are generated by Eisenstein series and their products, and we did not need to do the repeat loop in the algorithm.
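The dimensions $d_k$ can be read off by expanding such a rational function as a formal power series. A minimal Python sketch (exact arithmetic; coefficient lists are lowest degree first, and the Hilbert series above is hard-coded only as an illustration):

```python
from fractions import Fraction

def poly_mul(a, b):
    """Multiply two coefficient lists (lowest degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def series_coeffs(num, den, n):
    """First n power-series coefficients of num(t)/den(t), den[0] != 0,
    via the recurrence c_k = (num_k - sum_{j>=1} den_j c_{k-j}) / den_0."""
    c = []
    for k in range(n):
        s = Fraction(num[k] if k < len(num) else 0)
        for j in range(1, min(k, len(den) - 1) + 1):
            s -= den[j] * c[k - j]
        c.append(s / den[0])
    return c

# Hilbert series for the cubic field with D_K = 49:
num = [1, 0, 0, 0, 1, 3, 5, 4, 3, 3, 3, 2, 0, -2, 1]
den = [1, -1]                        # (1 - t)
for e in (2, 3, 7):                  # times (1 - t^2)(1 - t^3)(1 - t^7)
    den = poly_mul(den, [1] + [0] * (e - 1) + [-1])
dims = series_coeffs(num, den, 5)    # coefficients of t^0, ..., t^4
```

The coefficient of $t^{k/2}$ then gives $\dim_\C(M_k(\Gamma))$.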
For the cubic field with $D_K=81$, the Hilbert series is
\[\frac{(1-t+t^2+t^3+t^4+6t^5+4t^6-2t^7+4t^8+6t^9-t^{10}+3t^{11}+3t^{12}-3t^{13}+t^{14})}{(1-t)^2(1-t^2)(1-t^9)}.\]
Besides Eisenstein series and their products, we need additional generators for weights $k=4,6$ and $8$. For weight $4$ we take $T_2(E_2^2)$, for weight $6$ we take $T_2(E_2^3)$, and for weight $8$ we take $T_{\mathfrak{p}}(E_2^4)$ and $T_{\mathfrak{q}}(E_2^4)$, where $\mathfrak{p}$ and $\mathfrak{q}$ lie above $17$.
Once we have the full space, we can find the subspace of cusp forms, from which we need to extract a basis of eigenforms by finding matrices of Hecke operators. We use the ideal lying above $7$ for $D_K=49$, and an ideal above $17$ for $D_K=81$. Once we have a basis of eigenforms for each weight, we construct the $L$-series using the required information described earlier.
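Extracting eigenforms from a Hecke matrix is, in coordinates, a diagonalization. The following Python fragment is only a floating-point sketch (the matrix entries are hypothetical; in practice they come from Magma and one works over the exact Hecke eigenvalue field):

```python
import numpy as np

def eigenform_coordinates(hecke_matrix):
    """Diagonalize a Hecke operator acting on a basis of the cusp space.

    If the eigenvalues are distinct, the eigenvector columns give,
    up to scaling, the coordinates of a basis of eigenforms.
    """
    vals, vecs = np.linalg.eig(np.asarray(hecke_matrix, dtype=float))
    return vals, vecs
```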
\begin{ex}
Let $K=\Q(\zeta_7+\zeta_7^{-1})$. Take the eigenform $h \in S_8(\Gamma)$ with Fourier expansion in Table \ref{49}, where $\alpha$ is a root of the polynomial $x^2 + \frac{3}{392}x - \frac{1}{21952}$.
Then the period polynomial attached to $h$, where we take the first complex embedding of $\Q(\alpha)$, is
\[r_h(X) \approx -4.12785iX^4 + 1.29074X^3 + 0.547495iX^2 - 1.29074X- 4.12785i.\]
\begin{table}[h]
\caption{Fourier expansion of an eigenform $h \in S_8(\Gamma)$ over the field $\Q(\zeta_7+\zeta_7^{-1})$}
\label{49}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
$N(\mathfrak{n})$ & 0 &1 &7&8 & 13& 13 & 13 \\ \hline
$a(\mathfrak{n})$&$0$ &$1$ &$21952\alpha$ &$-43904\alpha-152 $&$21952\alpha-378$ & $21952\alpha-378$ & $21952\alpha-378$ \\ \hline
\end{tabular}
\end{table}
\end{ex}
\subsection{Numerical stability of the roots of the polynomials}
In this subsection, we perform experiments, first proposed by Zagier \cite{DZ}, to examine how much leeway such polynomials have while still keeping all of their roots on the unit circle. In particular, we decompose $r_f=r_f^++r_f^-$ into its even and odd parts, and check for which thresholds $t>0$ the polynomial $r_f^++t \cdot r_f^-$ still has all of its roots on the unit circle.
Zagier noticed that in the classical case for $f=\Delta$, the interval around $1$ containing $t$ was rather small, roughly $t \in [0.999964, 1.000023]$. We investigate some classical cases for forms with larger weights and levels, as well as the Hilbert case for various weights and fields with varied discriminants. For the classical case, our experiments are summarized in Table \ref{classt}, where we take newforms with the specified level and weight. We only consider values for $t$ in the interval $[0,2]$, although values for $t$ outside this interval might work as well.
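The experiment can be sketched numerically. In the Python fragment below (a heuristic: `numpy.roots` with a fixed tolerance decides "on the unit circle", and the even/odd split is by exponent parity), we test a single value of $t$; scanning an interval of $t$ values reproduces the tables that follow.

```python
import numpy as np

def split_even_odd(coeffs):
    """Split a coefficient list (highest degree first) by exponent parity."""
    n = len(coeffs) - 1
    even = [c if (n - i) % 2 == 0 else 0.0 for i, c in enumerate(coeffs)]
    odd = [c if (n - i) % 2 == 1 else 0.0 for i, c in enumerate(coeffs)]
    return even, odd

def holds_for_t(coeffs, t, tol=1e-8):
    """Check whether r^+ + t * r^- still has all roots on the unit circle."""
    even, odd = split_even_odd(coeffs)
    mixed = [e + t * o for e, o in zip(even, odd)]
    roots = np.roots(mixed)
    return bool(np.all(np.abs(np.abs(roots) - 1.0) < tol))
```

For instance, for the self-inversive polynomial $X^4+X^3+X^2+X+1$ the check succeeds at $t=1$ but fails at $t=5$.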
\begin{table}[h]
\caption{Values for $t$ where $r_f(X)^+ + t \cdot r_f^-(X)$ has roots on the unit circle: classical forms}
\label{classt}
\begin{tabular}{|c|c|r|}
\hline
Weight & Level& Interval for $t$ \\ \hline
12 & 1 & $[0.999964, 1.000023]$ \\ \hline
12 & 5 & $[0.97877, 1.02507]$ \\ \hline
12 & 7 & $[0.9298, 1.0558]$ \\ \hline
12 & 11 & $[0.501, 1.118]$ \\ \hline
18 & 1 & $[0.999978, 1.000054]$ \\ \hline
18 & 5 & $[0.9594, 1.015]$ \\ \hline
18 & 7 & $[0.9313, 1.032]$ \\ \hline
18 & 11 & $[0.618, 1.077]$ \\ \hline
24 & 1 & $[0.9999871, 1.0000063]$ \\ \hline
24 & 5 & $[0.9809, 1.0123]$ \\ \hline
24 & 7 & $[0.9135, 1.0273]$ \\ \hline
24 & 11 & $[0.657, 1.066]$ \\ \hline
42 & 1 & $[0.999985, 1.000013]$ \\ \hline
100 & 1 & $[0.999989, 1.000006]$ \\ \hline
\end{tabular}
\end{table}
We note a few observations. The intervals do not change much as we vary the weight, but they grow considerably as we increase the level. They also become less centered around $1$ as the level increases.
\begin{table}[h]
\caption{Values for $t$ where $r_f(X)^+ + t \cdot r_f^-(X)$ has roots on the unit circle, for some Hilbert modular forms}
\label{hilbt}
\begin{tabular}{|c|c|r|}
\hline
Weight & $K$ & Interval for $t$ \\ \hline
8 & $\Q(\sqrt{5})$ & $[0,1.1158]$ \\ \hline
10 & $\Q(\sqrt{5})$ & $[0,1.302]$ \\ \hline
12 & $\Q(\sqrt{5})$ & $[0, 1.519]$ \\ \hline
14 & $\Q(\sqrt{5})$ & $[0, 1.7283]$ \\ \hline
8 & $\Q(\sqrt{13})$ & $[0,2]$ \\ \hline
8 & $\Q(\sqrt{33})$ & $[0,2]$ \\ \hline
8 & $\Q(\zeta_7+\zeta_7^{-1})$ & $[0,2]$ \\ \hline
\end{tabular}
\end{table}
In Table \ref{hilbt}, we investigate some cases for Hilbert modular forms, as we vary the weight $k$ and the field $K$. We note that in the Hilbert case, increases in weight do increase the interval significantly, as does the increase in the discriminant of the field.
\subsection{Questions for further research}
We conclude with a few remaining topics for future investigations.
\begin{enumerate}
\item Can a full cohomology theory be developed to explain the full context of the period polynomials defined here, for example, in relation to the above cited work of \cite{DL,YH}?
\item Is there a more general RHPP behind polynomials attached to a suitable cohomology theory?
\item Is there a Manin-type theory of these zeta-polynomials, similar to that developed in \cite{ORS}?
\end{enumerate}
\section{\label{sec:level1}First-level heading:\protect\\ The line
2,869,038,156,295 | arxiv | \section{\label{sec:level1}First-level heading:\protect\\ The line
break was forced \lowercase{via} \textbackslash\textbackslash}
This sample document demonstrates proper use of REV\TeX~4.2 (and
\LaTeXe) in manuscripts prepared for submission to AAPM
journals. Further information can be found in the documentation included in the distribution or available at
\url{http://www.aapm.org} and in the documentation for
REV\TeX~4.2 itself.
When commands are referred to in this example file, they are always
shown with their required arguments, using normal \TeX{} format. In
this format, \verb+#1+, \verb+#2+, etc. stand for required
author-supplied arguments to commands. For example, in
\verb+\section{#1}+ the \verb+#1+ stands for the title text of the
author's section heading, and in \verb+\title{#1}+ the \verb+#1+
stands for the title text of the paper.
Line breaks in section headings at all levels can be introduced using
\textbackslash\textbackslash. A blank input line tells \TeX\ that the
paragraph has ended.
\subsection{\label{sec:level2}Second-level heading: Formatting}
This file may be formatted in both the \texttt{preprint} (the default) and
\texttt{reprint} styles; the latter format may be used to
mimic final journal output. In addition, there is another
option available, \texttt{lengthcheck}, which formats the document as closely
as possible to an actual journal article, to facilitate the author's
performance of a length check. Either format may be used for submission
purposes; however, for peer review and production, AAPM will format the
article using the \texttt{preprint} class option. Hence, it is
essential that authors check that their manuscripts format acceptably
under \texttt{preprint}. Manuscripts submitted to AAPM that do not
format correctly under the \texttt{preprint} option may be delayed in
both the editorial and production processes.
The \texttt{widetext} environment will make the text the width of the
full page, as on page~\pageref{eq:wideeq}. (Note the use of
\verb+\pageref{#1}+ to get the page number right automatically.) The
width-changing commands only take effect in \texttt{twocolumn}
formatting. They have no effect if \texttt{preprint} formatting is chosen
instead.
\subsubsection{\label{sec:level3}Third-level heading: Citations and Footnotes}
Citations in text refer to entries in the Bibliography;
they use the commands \verb+\cite{#1}+ or \verb+\onlinecite{#1}+.
Because REV\TeX\ uses the \verb+natbib+ package of Patrick Daly,
its entire repertoire of commands is available in your document;
see the \verb+natbib+ documentation for further details.
The argument of \verb+\cite+ is a comma-separated list of \emph{keys};
a key may consist of letters and numerals.
By default, AAPM citations are numerical.\cite{feyn54}
To give a textual citation, use \verb+\onlinecite{#1}+: (Refs.~\onlinecite{witten2001,epr,Bire82}).
REV\TeX\ ``collapses'' lists of consecutive numerical citations when appropriate.
To illustrate, we cite several together \cite{feyn54,witten2001,epr,Berman1983},
and once again (Refs.~\onlinecite{epr,feyn54,Bire82,Berman1983}).
Note that, when numerical citations are used, the references were sorted into the same order they appear in the bibliography.
A reference within the bibliography is specified with a \verb+\bibitem{#1}+ command,
where the argument is the citation key mentioned above.
\verb+\bibitem{#1}+ commands may be crafted by hand or, preferably,
generated by using Bib\TeX.
The AAPM styles for REV\TeX~4 include Bib\TeX\ style file
\verb+aapmrev4-2.bst+, appropriate for
numbered bibliography.
REV\TeX~4 will automatically choose the style appropriate for
the document's selected class options: the default is numerical.
This sample file demonstrates a simple use of Bib\TeX\
via a \verb+\bibliography+ command referencing the \verb+aapmsamp.bib+ file.
Running Bib\TeX\ (in this case \texttt{bibtex
aapmsamp}) after the first pass of \LaTeX\ produces the file
\verb+aapmsamp.bbl+ which contains the automatically formatted
\verb+\bibitem+ commands (including extra markup information via
\verb+\bibinfo+ commands). If not using Bib\TeX, the
\verb+thebibliography+ environment should be used instead.
\paragraph{Fourth-level heading is run in.}%
Footnotes are produced using the \verb+\footnote{#1}+ command.
Numerical style citations put footnotes into the
bibliography\footnote{Automatically placing footnotes into the bibliography requires using BibTeX to compile the bibliography.}.
Note: due to the method used to place footnotes in the bibliography, \emph{you
must re-run BibTeX every time you change any of your document's
footnotes}.
\section{Math and Equations}
Inline math may be typeset using the \verb+$+ delimiters. Bold math
symbols may be achieved using the \verb+bm+ package and the
\verb+\bm{#1}+ command it supplies. For instance, a bold $\alpha$ can
be typeset as \verb+$\bm{\alpha}$+ giving $\bm{\alpha}$. Fraktur and
Blackboard (or open face or double struck) characters should be
typeset using the \verb+\mathfrak{#1}+ and \verb+\mathbb{#1}+ commands
respectively. Both are supplied by the \texttt{amssymb} package. For
example, \verb+$\mathbb{R}$+ gives $\mathbb{R}$ and
\verb+$\mathfrak{G}$+ gives $\mathfrak{G}$.
In \LaTeX\ there are many different ways to display equations, and a
few preferred ways are noted below. Displayed math will flush left by
default.
Below we have numbered single-line equations, the most common kind:
\begin{eqnarray}
\chi_+(p)\alt{\bf [}2|{\bf p}|(|{\bf p}|+p_z){\bf ]}^{-1/2}
\left(
\begin{array}{c}
|{\bf p}|+p_z\\
p_x+ip_y
\end{array}\right)\;,
\\
\left\{%
\openone234567890abc123\alpha\beta\gamma\delta1234556\alpha\beta
\frac{1\sum^{a}_{b}}{A^2}%
\right\}%
\label{eq:one}.
\end{eqnarray}
Note the open one in Eq.~(\ref{eq:one}).
Not all numbered equations will fit within a narrow column this
way. The equation number will move down automatically if it cannot fit
on the same line with a one-line equation:
\begin{equation}
\left\{
ab12345678abc123456abcdef\alpha\beta\gamma\delta1234556\alpha\beta
\frac{1\sum^{a}_{b}}{A^2}%
\right\}.
\end{equation}
When the \verb+\label{#1}+ command is used [cf. input for
Eq.~(\ref{eq:one})], the equation can be referred to in text without
knowing the equation number that \TeX\ will assign to it. Just
use \verb+\ref{#1}+, where \verb+#1+ is the same name that used in
the \verb+\label{#1}+ command.
Unnumbered single-line equations can be typeset
using the \verb+\[+, \verb+\]+ format:
\[g^+g^+ \rightarrow g^+g^+g^+g^+ \dots ~,~~q^+q^+\rightarrow
q^+g^+g^+ \dots ~. \]
\subsection{Multiline equations}
Multiline equations are obtained by using the \verb+eqnarray+
environment. Use the \verb+\nonumber+ command at the end of each line
to avoid assigning a number:
\begin{eqnarray}
{\cal M}=&&ig_Z^2(4E_1E_2)^{1/2}(l_i^2)^{-1}
\delta_{\sigma_1,-\sigma_2}
(g_{\sigma_2}^e)^2\chi_{-\sigma_2}(p_2)\nonumber\\
&&\times
[\epsilon_jl_i\epsilon_i]_{\sigma_1}\chi_{\sigma_1}(p_1),
\end{eqnarray}
\begin{eqnarray}
\sum \vert M^{\text{viol}}_g \vert ^2&=&g^{2n-4}_S(Q^2)~N^{n-2}
(N^2-1)\nonumber \\
& &\times \left( \sum_{i<j}\right)
\sum_{\text{perm}}
\frac{1}{S_{12}}
\frac{1}{S_{12}}
\sum_\tau c^f_\tau~.
\end{eqnarray}
\textbf{Note:} Do not use \verb+\label{#1}+ on a line of a multiline
equation if \verb+\nonumber+ is also used on that line. Incorrect
cross-referencing will result. Notice the use \verb+\text{#1}+ for
using a Roman font within a math environment.
To set a multiline equation without \emph{any} equation
numbers, use the \verb+\begin{eqnarray*}+,
\verb+\end{eqnarray*}+ format:
\begin{eqnarray*}
\sum \vert M^{\text{viol}}_g \vert ^2&=&g^{2n-4}_S(Q^2)~N^{n-2}
(N^2-1)\\
& &\times \left( \sum_{i<j}\right)
\left(
\sum_{\text{perm}}\frac{1}{S_{12}S_{23}S_{n1}}
\right)
\frac{1}{S_{12}}~.
\end{eqnarray*}
To obtain numbers not normally produced by the automatic numbering,
use the \verb+\tag{#1}+ command, where \verb+#1+ is the desired
equation number. For example, to get an equation number of
(\ref{eq:mynum}),
\begin{equation}
g^+g^+ \rightarrow g^+g^+g^+g^+ \dots ~,~~q^+q^+\rightarrow
q^+g^+g^+ \dots ~. \tag{2.6$'$}\label{eq:mynum}
\end{equation}
A few notes on \verb=\tag{#1}=. \verb+\tag{#1}+ requires
\texttt{amsmath}. The \verb+\tag{#1}+ must come before the
\verb+\label{#1}+, if any. The numbering set with \verb+\tag{#1}+ is
\textit{transparent} to the automatic numbering in REV\TeX{};
therefore, the number must be known ahead of time, and it must be
manually adjusted if other equations are added. \verb+\tag{#1}+ works
with both single-line and multiline equations. \verb+\tag{#1}+ should
only be used in exceptional cases; do not use it to number all
equations in a paper.
Note the equation number gets reset again:
\begin{equation}
g^+g^+g^+ \rightarrow g^+g^+g^+g^+g^+ \dots ~,~~q^+q^+\rightarrow
q^+g^+g^+ \dots ~.
\end{equation}
Enclosing single-line and multiline equations in
\verb+\begin{subequations}+ and \verb+\end{subequations}+ will produce
a set of equations that are ``numbered'' with letters, as shown in
Eqs.~(\ref{subeq:1}) and (\ref{subeq:2}) below:
\begin{subequations}
\label{eq:whole}
\begin{equation}
\left\{
abc123456abcdef\alpha\beta\gamma\delta1234556\alpha\beta
\frac{1\sum^{a}_{b}}{A^2}
\right\},\label{subeq:1}
\end{equation}
\begin{eqnarray}
{\cal M}=&&ig_Z^2(4E_1E_2)^{1/2}(l_i^2)^{-1}
(g_{\sigma_2}^e)^2\chi_{-\sigma_2}(p_2)\nonumber\\
&&\times
[\epsilon_i]_{\sigma_1}\chi_{\sigma_1}(p_1).\label{subeq:2}
\end{eqnarray}
\end{subequations}
Putting a \verb+\label{#1}+ command right after the
\verb+\begin{subequations}+, allows one to
reference all the equations in a subequations environment. For
example, the equations in the preceding subequations environment were
Eqs.~(\ref{eq:whole}).
\subsubsection{Wide equations}
The equation that follows is set in a wide format, i.e., it spans
across the full page. The wide format is reserved for long equations
that cannot be easily broken into four lines or less:
\begin{widetext}
\begin{equation}
{\cal R}^{(\text{d})}=
g_{\sigma_2}^e
\left(
\frac{[\Gamma^Z(3,21)]_{\sigma_1}}{Q_{12}^2-M_W^2}
+\frac{[\Gamma^Z(13,2)]_{\sigma_1}}{Q_{13}^2-M_W^2}
\right)
+ x_WQ_e
\left(
\frac{[\Gamma^\gamma(3,21)]_{\sigma_1}}{Q_{12}^2-M_W^2}
+\frac{[\Gamma^\gamma(13,2)]_{\sigma_1}}{Q_{13}^2-M_W^2}
\right)\;. \label{eq:wideeq}
\end{equation}
\end{widetext}
This is typed to show the output is in wide format.
(Since there is no input line between \verb+\equation+ and
this paragraph, there is no paragraph indent for this paragraph.)
\section{Cross-referencing}
REV\TeX{} will automatically number sections, equations, figure
captions, and tables. In order to reference them in text, use the
\verb+\label{#1}+ and \verb+\ref{#1}+ commands. To reference a
particular page, use the \verb+\pageref{#1}+ command.
The \verb+\label{#1}+ should appear in a section heading, within an
equation, or in a table or figure caption. The \verb+\ref{#1}+ command
is used in the text where the citation is to be displayed. Some
examples: Section~\ref{sec:level1} on page~\pageref{sec:level1},
Table~\ref{tab:table1},%
\begin{table}
\caption{\label{tab:table1}This is a narrow table which fits into a
text column when using \texttt{twocolumn} formatting. Note that
REV\TeX~4 adjusts the intercolumn spacing so that the table fills the
entire width of the column. Table captions are numbered
automatically. This table illustrates left-aligned, centered, and
right-aligned columns. }
\begin{ruledtabular}
\begin{tabular}{lcr}
Left\footnote{Note a.}&Centered\footnote{Note b.}&Right\\
\hline
1 & 2 & 3\\
10 & 20 & 30\\
100 & 200 & 300\\
\end{tabular}
\end{ruledtabular}
\end{table}
and Fig.~\ref{fig:epsart}.
\section{Figures and Tables}
Figures and tables are typically ``floats''; \LaTeX\ determines their
final position via placement rules.
\LaTeX\ isn't always successful in automatically placing floats where you wish them.
Figures are marked up with the \texttt{figure} environment, the content of which
imports the image (\verb+\includegraphics+) followed by the figure caption (\verb+\caption+).
The argument of the latter command should itself contain a \verb+\label+ command if you
wish to refer to your figure with \verb+\ref+.
Import your image using either the \texttt{graphics} or
\texttt{graphicx} packages. These packages both define the
\verb+\includegraphics{#1}+ command, but they differ in the optional
arguments for specifying the orientation, scaling, and translation of the figure.
Fig.~\ref{fig:epsart}%
\begin{figure}
\includegraphics{fig_1}
\caption{\label{fig:epsart} A figure caption. The figure captions are
automatically numbered.}
\end{figure}
is small enough to fit in a single column, while
Fig.~\ref{fig:wide}%
\begin{figure*}
\includegraphics{fig_2}
\caption{\label{fig:wide}Use the \texttt{figure*} environment to get a wide
figure, spanning the page in \texttt{twocolumn} formatting.}
\end{figure*}
is too wide for a single column,
so instead the \texttt{figure*} environment has been used.
The analog of the \texttt{figure} environment is \texttt{table}, which uses
the same \verb+\caption+ command.
However, you should type your caption command first within the \texttt{table},
instead of last as you did for \texttt{figure}.
The heart of any table is the \texttt{tabular} environment,
which represents the table content as a (vertical) sequence of table rows,
each containing a (horizontal) sequence of table cells.
Cells are separated by the \verb+&+ character;
the row terminates with \verb+\\+.
The required argument for the \texttt{tabular} environment
specifies how data are displayed in each of the columns.
For instance, a column
may be centered (\verb+c+), left-justified (\verb+l+), right-justified (\verb+r+),
or aligned on a decimal point (\verb+d+).
(Table~\ref{tab:table4}%
\begin{table}
\caption{\label{tab:table4}Numbers in columns Three--Five have been
aligned by using the ``d'' column specifier (requires the
\texttt{dcolumn} package).
Non-numeric entries (those entries without
a ``.'') in a ``d'' column are aligned on the decimal point.
Use the
``D'' specifier for more complex layouts. }
\begin{ruledtabular}
\begin{tabular}{ccddd}
One&Two&\mbox{Three}&\mbox{Four}&\mbox{Five}\\
\hline
one&two&\mbox{three}&\mbox{four}&\mbox{five}\\
He&2& 2.77234 & 45672. & 0.69 \\
C\footnote{Some tables require footnotes.}
&C\footnote{Some tables need more than one footnote.}
& 12537.64 & 37.66345 & 86.37 \\
\end{tabular}
\end{ruledtabular}
\end{table}
illustrates the use of decimal column alignment.)
Extra column-spacing may be specified as well, although
REV\TeX~4 sets this spacing so that the columns fill the width of the
table.
Horizontal rules are typeset using the \verb+\hline+
command.
The doubled (or Scotch) rules that appear at the top and
bottom of a table can be achieved by enclosing the \texttt{tabular}
environment within a \texttt{ruledtabular} environment.
Rows whose columns span multiple columns can be typeset using \LaTeX's
\verb+\multicolumn{#1}{#2}{#3}+ command
(for example, see the first row of Table~\ref{tab:table3}).%
\begin{table*}
\caption{\label{tab:table3}This is a wide table that spans the page
width in \texttt{twocolumn} mode. It is formatted using the
\texttt{table*} environment. It also demonstrates the use of
\textbackslash\texttt{multicolumn} in rows with entries that span
more than one column.}
\begin{ruledtabular}
\begin{tabular}{ccccc}
&\multicolumn{2}{c}{$D_{4h}^1$}&\multicolumn{2}{c}{$D_{4h}^5$}\\
Ion&1st alternative&2nd alternative&1st alternative
&2nd alternative\\ \hline
K&$(2e)+(2f)$&$(4i)$ &$(2c)+(2d)$&$(4f)$ \\
Mn&$(2g)$\footnote{The $z$ parameter of these positions is $z\sim\frac{1}{4}$.}
&$(a)+(b)+(c)+(d)$&$(4e)$&$(2a)+(2b)$\\
Cl&$(a)+(b)+(c)+(d)$&$(2g)$\footnote{This is a footnote in a table that spans the full page
width in \texttt{twocolumn} mode. It is supposed to set on the full width of the page, just as the caption does. }
&$(4e)^{\text{a}}$\\
He&$(8r)^{\text{a}}$&$(4j)^{\text{a}}$&$(4g)^{\text{a}}$\\
Ag& &$(4k)^{\text{a}}$& &$(4h)^{\text{a}}$\\
\end{tabular}
\end{ruledtabular}
\end{table*}
The tables in this document illustrate various effects.
Tables that fit in a narrow column are contained in a \texttt{table}
environment.
Table~\ref{tab:table3} is a wide table, therefore set with the
\texttt{table*} environment.
Lengthy tables may need to break across pages.
A simple way to allow this is to specify
the \verb+[H]+ float placement on the \texttt{table} or
\texttt{table*} environment.
Alternatively, using the standard \LaTeXe\ package \texttt{longtable}
gives more control over how tables break and allows headers and footers
to be specified for each page of the table.
An example of the use of \texttt{longtable} can be found
in the file \texttt{summary.tex} that is included with the REV\TeX~4
distribution.
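A minimal \texttt{longtable} skeleton (with purely hypothetical content, and only a sketch rather than the version in \texttt{summary.tex}) looks like the following; the body rows after \verb+\endfoot+ may then break across pages:

```latex
% requires \usepackage{longtable} in the preamble
\begin{longtable}{lr}
\caption{A long table that may break across pages.}\\
\hline
Item & Value \\
\hline
\endfirsthead
\hline
Item & Value \\
\hline
\endhead
\hline
\endfoot
A & 1 \\
B & 2 \\
\end{longtable}
```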
There are two methods for setting footnotes within a table (these
footnotes will be displayed directly below the table rather than at
the bottom of the page or in the bibliography).
The easiest
and preferred method is just to use the \verb+\footnote{#1}+
command. This will automatically enumerate the footnotes with
lowercase roman letters.
However, it is sometimes necessary to have
multiple entries in the table share the same footnote.
In this case,
create the footnotes using
\verb+\footnotemark[#1]+ and \verb+\footnotetext[#1]{#2}+.
\texttt{\#1} is a numeric value.
Each time the same value for \texttt{\#1} is used,
the same mark is produced in the table.
The \verb+\footnotetext[#1]{#2}+ commands are placed after the \texttt{tabular}
environment.
Examine the \LaTeX\ source and output for Tables~\ref{tab:table1} and
\ref{tab:table2}%
\begin{table}
\caption{\label{tab:table2}A table with more columns still fits
properly in a column. Note that several entries share the same
footnote. Inspect the \LaTeX\ input for this table to see
exactly how it is done.}
\begin{ruledtabular}
\begin{tabular}{cccccccc}
&$r_c$ (\AA)&$r_0$ (\AA)&$\kappa r_0$&
&$r_c$ (\AA) &$r_0$ (\AA)&$\kappa r_0$\\
\hline
Cu& 0.800 & 14.10 & 2.550 &Sn\footnotemark[1]
& 0.680 & 1.870 & 3.700 \\
Ag& 0.990 & 15.90 & 2.710 &Pb\footnotemark[2]
& 0.450 & 1.930 & 3.760 \\
Au& 1.150 & 15.90 & 2.710 &Ca\footnotemark[3]
& 0.750 & 2.170 & 3.560 \\
Mg& 0.490 & 17.60 & 3.200 &Sr\footnotemark[4]
& 0.900 & 2.370 & 3.720 \\
Zn& 0.300 & 15.20 & 2.970 &Li\footnotemark[2]
& 0.380 & 1.730 & 2.830 \\
Cd& 0.530 & 17.10 & 3.160 &Na\footnotemark[5]
& 0.760 & 2.110 & 3.120 \\
Hg& 0.550 & 17.80 & 3.220 &K\footnotemark[5]
& 1.120 & 2.620 & 3.480 \\
Al& 0.230 & 15.80 & 3.240 &Rb\footnotemark[3]
& 1.330 & 2.800 & 3.590 \\
Ga& 0.310 & 16.70 & 3.330 &Cs\footnotemark[4]
& 1.420 & 3.030 & 3.740 \\
In& 0.460 & 18.40 & 3.500 &Ba\footnotemark[5]
& 0.960 & 2.460 & 3.780 \\
Tl& 0.480 & 18.90 & 3.550 & & & & \\
\end{tabular}
\end{ruledtabular}
\footnotetext[1]{Here's the first, from Ref.~\onlinecite{feyn54}.}
\footnotetext[2]{Here's the second.}
\footnotetext[3]{Here's the third.}
\footnotetext[4]{Here's the fourth.}
\footnotetext[5]{And etc.}
\end{table}
for an illustration.
All AAPM journals require that the initial citation of
figures or tables be in numerical order.
\LaTeX's automatic numbering of floats is your friend here:
just put each \texttt{figure} environment immediately following
its first reference (\verb+\ref+), as we have done in this example file.
\begin{acknowledgments}
We wish to acknowledge the support of the author community in using
REV\TeX{}, offering suggestions and encouragement, testing new versions,
\dots.
\end{acknowledgments}
\section{\label{sec:level1}First-level heading:\protect\\ The line
break was forced \lowercase{via} \textbackslash\textbackslash}
This sample document demonstrates proper use of REV\TeX~4.2 (and
\LaTeXe) in manuscripts prepared for submission to AIP
journals. Further information can be found in the documentation included in the distribution or available at
\url{http://authors.aip.org} and in the documentation for
REV\TeX~4.2 itself.
When commands are referred to in this example file, they are always
shown with their required arguments, using normal \TeX{} format. In
this format, \verb+#1+, \verb+#2+, etc. stand for required
author-supplied arguments to commands. For example, in
\verb+\section{#1}+ the \verb+#1+ stands for the title text of the
author's section heading, and in \verb+\title{#1}+ the \verb+#1+
stands for the title text of the paper.
Line breaks in section headings at all levels can be introduced using
\textbackslash\textbackslash. A blank input line tells \TeX\ that the
paragraph has ended.
\subsection{\label{sec:level2}Second-level heading: Formatting}
This file may be formatted in both the \texttt{preprint} (the default) and
\texttt{reprint} styles; the latter format may be used to
mimic final journal output. Either format may be used for submission
purposes; however, for peer review and production, AIP will format the
article using the \texttt{preprint} class option. Hence, it is
essential that authors check that their manuscripts format acceptably
under \texttt{preprint}. Manuscripts submitted to AIP that do not
format correctly under the \texttt{preprint} option may be delayed in
both the editorial and production processes.
The \texttt{widetext} environment will make the text the width of the
full page, as on page~\pageref{eq:wideeq}. (Note the use of
\verb+\pageref{#1}+ to get the page number right automatically.) The
width-changing commands only take effect in \texttt{twocolumn}
formatting. They have no effect if \texttt{preprint} formatting is chosen
instead.
\subsubsection{\label{sec:level3}Third-level heading: Citations and Footnotes}
Citations in text refer to entries in the Bibliography;
they use the commands \verb+\cite{#1}+ or \verb+\onlinecite{#1}+.
Because REV\TeX\ uses the \verb+natbib+ package of Patrick Daly,
its entire repertoire of commands is available in your document;
see the \verb+natbib+ documentation for further details.
The argument of \verb+\cite+ is a comma-separated list of \emph{keys};
a key may consist of letters and numerals.
By default, citations are numerical; \cite{feyn54} author-year citations are an option.
To give a textual citation, use \verb+\onlinecite{#1}+: (Refs.~\onlinecite{witten2001,epr,Bire82}).
REV\TeX\ ``collapses'' lists of consecutive numerical citations when appropriate.
REV\TeX\ provides the ability to properly punctuate textual citations in author-year style;
this facility works correctly with numerical citations only with \texttt{natbib}'s compress option turned off.
To illustrate, we cite several together \cite{feyn54,witten2001,epr,Berman1983},
and once again (Refs.~\onlinecite{epr,feyn54,Bire82,Berman1983}).
Note that, when numerical citations are used, the references were sorted into the same order they appear in the bibliography.
A reference within the bibliography is specified with a \verb+\bibitem{#1}+ command,
where the argument is the citation key mentioned above.
\verb+\bibitem{#1}+ commands may be crafted by hand or, preferably,
generated by using Bib\TeX.
The AIP styles for REV\TeX~4 include Bib\TeX\ style files
\verb+aipnum.bst+ and \verb+aipauth.bst+, appropriate for
numbered and author-year bibliographies,
respectively.
REV\TeX~4 will automatically choose the style appropriate for
the document's selected class options: the default is numerical, and
you obtain the author-year style by specifying a class option of \verb+author-year+.
This sample file demonstrates a simple use of Bib\TeX\
via a \verb+\bibliography+ command referencing the \verb+aipsamp.bib+ file.
Running Bib\TeX\ (in this case \texttt{bibtex
aipsamp}) after the first pass of \LaTeX\ produces the file
\verb+aipsamp.bbl+ which contains the automatically formatted
\verb+\bibitem+ commands (including extra markup information via
\verb+\bibinfo+ commands). If not using Bib\TeX, the
\verb+thebibliography+ environment should be used instead.
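If you do craft the bibliography by hand, the shape is roughly as follows (the entry shown is a placeholder, not a real reference):

```latex
\begin{thebibliography}{99}
% A hand-written entry; the key is what \cite{samplekey} refers to.
\bibitem{samplekey} A.~Author and B.~Coauthor, Journal Name
  \textbf{1}, 100 (2024).
\end{thebibliography}
```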
\paragraph{Fourth-level heading is run in.}%
Footnotes are produced using the \verb+\footnote{#1}+ command.
Numerical style citations put footnotes into the
bibliography\footnote{Automatically placing footnotes into the bibliography requires using BibTeX to compile the bibliography.}.
Author-year and numerical author-year citation styles (each for its own reason) cannot use this method.
Note: due to the method used to place footnotes in the bibliography, \emph{you
must re-run BibTeX every time you change any of your document's
footnotes}.
\section{Math and Equations}
Inline math may be typeset using the \verb+$+ delimiters. Bold math
symbols may be achieved using the \verb+bm+ package and the
\verb+\bm{#1}+ command it supplies. For instance, a bold $\alpha$ can
be typeset as \verb+$\bm{\alpha}$+ giving $\bm{\alpha}$. Fraktur and
Blackboard (or open face or double struck) characters should be
typeset using the \verb+\mathfrak{#1}+ and \verb+\mathbb{#1}+ commands
respectively. Both are supplied by the \texttt{amssymb} package. For
example, \verb+$\mathbb{R}$+ gives $\mathbb{R}$ and
\verb+$\mathfrak{G}$+ gives $\mathfrak{G}$.
In \LaTeX\ there are many different ways to display equations, and a
few preferred ways are noted below. Displayed math will center by
default. Use the class option \verb+fleqn+ to flush equations left.
Below we have numbered single-line equations, the most common kind:
\begin{eqnarray}
\chi_+(p)\alt{\bf [}2|{\bf p}|(|{\bf p}|+p_z){\bf ]}^{-1/2}
\left(
\begin{array}{c}
|{\bf p}|+p_z\\
p_x+ip_y
\end{array}\right)\;,
\\
\left\{%
\openone234567890abc123\alpha\beta\gamma\delta1234556\alpha\beta
\frac{1\sum^{a}_{b}}{A^2}%
\right\}%
\label{eq:one}.
\end{eqnarray}
Note the open one in Eq.~(\ref{eq:one}).
Not all numbered equations will fit within a narrow column this
way. The equation number will move down automatically if it cannot fit
on the same line with a one-line equation:
\begin{equation}
\left\{
ab12345678abc123456abcdef\alpha\beta\gamma\delta1234556\alpha\beta
\frac{1\sum^{a}_{b}}{A^2}%
\right\}.
\end{equation}
When the \verb+\label{#1}+ command is used [cf. input for
Eq.~(\ref{eq:one})], the equation can be referred to in text without
knowing the equation number that \TeX\ will assign to it. Just
use \verb+\ref{#1}+, where \verb+#1+ is the same name that used in
the \verb+\label{#1}+ command.
Unnumbered single-line equations can be typeset
using the \verb+\[+, \verb+\]+ format:
\[g^+g^+ \rightarrow g^+g^+g^+g^+ \dots ~,~~q^+q^+\rightarrow
q^+g^+g^+ \dots ~. \]
\subsection{Multiline equations}
Multiline equations are obtained by using the \verb+eqnarray+
environment. Use the \verb+\nonumber+ command at the end of each line
to avoid assigning a number:
\begin{eqnarray}
{\cal M}=&&ig_Z^2(4E_1E_2)^{1/2}(l_i^2)^{-1}
\delta_{\sigma_1,-\sigma_2}
(g_{\sigma_2}^e)^2\chi_{-\sigma_2}(p_2)\nonumber\\
&&\times
[\epsilon_jl_i\epsilon_i]_{\sigma_1}\chi_{\sigma_1}(p_1),
\end{eqnarray}
\begin{eqnarray}
\sum \vert M^{\text{viol}}_g \vert ^2&=&g^{2n-4}_S(Q^2)~N^{n-2}
(N^2-1)\nonumber \\
& &\times \left( \sum_{i<j}\right)
\sum_{\text{perm}}
\frac{1}{S_{12}}
\frac{1}{S_{12}}
\sum_\tau c^f_\tau~.
\end{eqnarray}
\textbf{Note:} Do not use \verb+\label{#1}+ on a line of a multiline
equation if \verb+\nonumber+ is also used on that line. Incorrect
cross-referencing will result. Notice the use \verb+\text{#1}+ for
using a Roman font within a math environment.
To set a multiline equation without \emph{any} equation
numbers, use the \verb+\begin{eqnarray*}+,
\verb+\end{eqnarray*}+ format:
\begin{eqnarray*}
\sum \vert M^{\text{viol}}_g \vert ^2&=&g^{2n-4}_S(Q^2)~N^{n-2}
(N^2-1)\\
& &\times \left( \sum_{i<j}\right)
\left(
\sum_{\text{perm}}\frac{1}{S_{12}S_{23}S_{n1}}
\right)
\frac{1}{S_{12}}~.
\end{eqnarray*}
To obtain numbers not normally produced by the automatic numbering,
use the \verb+\tag{#1}+ command, where \verb+#1+ is the desired
equation number. For example, to get an equation number of
(\ref{eq:mynum}),
\begin{equation}
g^+g^+ \rightarrow g^+g^+g^+g^+ \dots ~,~~q^+q^+\rightarrow
q^+g^+g^+ \dots ~. \tag{2.6$'$}\label{eq:mynum}
\end{equation}
A few notes on \verb=\tag{#1}=. \verb+\tag{#1}+ requires
\texttt{amsmath}. The \verb+\tag{#1}+ must come before the
\verb+\label{#1}+, if any. The numbering set with \verb+\tag{#1}+ is
\textit{transparent} to the automatic numbering in REV\TeX{};
therefore, the number must be known ahead of time, and it must be
manually adjusted if other equations are added. \verb+\tag{#1}+ works
with both single-line and multiline equations. \verb+\tag{#1}+ should
only be used in exceptional cases; do not use it to number all
equations in a paper.
Enclosing single-line and multiline equations in
\verb+\begin{subequations}+ and \verb+\end{subequations}+ will produce
a set of equations that are ``numbered'' with letters, as shown in
Eqs.~(\ref{subeq:1}) and (\ref{subeq:2}) below:
\begin{subequations}
\label{eq:whole}
\begin{equation}
\left\{
abc123456abcdef\alpha\beta\gamma\delta1234556\alpha\beta
\frac{1\sum^{a}_{b}}{A^2}
\right\},\label{subeq:1}
\end{equation}
\begin{eqnarray}
{\cal M}=&&ig_Z^2(4E_1E_2)^{1/2}(l_i^2)^{-1}
(g_{\sigma_2}^e)^2\chi_{-\sigma_2}(p_2)\nonumber\\
&&\times
[\epsilon_i]_{\sigma_1}\chi_{\sigma_1}(p_1).\label{subeq:2}
\end{eqnarray}
\end{subequations}
Putting a \verb+\label{#1}+ command right after the
\verb+\begin{subequations}+, allows one to
reference all the equations in a subequations environment. For
example, the equations in the preceding subequations environment were
Eqs.~(\ref{eq:whole}).
\subsubsection{Wide equations}
The equation that follows is set in a wide format, i.e., it spans
across the full page. The wide format is reserved for long equations
that cannot be easily broken into four lines or less:
\begin{widetext}
\begin{equation}
{\cal R}^{(\text{d})}=
g_{\sigma_2}^e
\left(
\frac{[\Gamma^Z(3,21)]_{\sigma_1}}{Q_{12}^2-M_W^2}
+\frac{[\Gamma^Z(13,2)]_{\sigma_1}}{Q_{13}^2-M_W^2}
\right)
+ x_WQ_e
\left(
\frac{[\Gamma^\gamma(3,21)]_{\sigma_1}}{Q_{12}^2-M_W^2}
+\frac{[\Gamma^\gamma(13,2)]_{\sigma_1}}{Q_{13}^2-M_W^2}
\right)\;. \label{eq:wideeq}
\end{equation}
\end{widetext}
This is typed to show the output is in wide format.
(Since there is no input line between \verb+\equation+ and
this paragraph, there is no paragraph indent for this paragraph.)
\section{Cross-referencing}
REV\TeX{} will automatically number sections, equations, figure
captions, and tables. In order to reference them in text, use the
\verb+\label{#1}+ and \verb+\ref{#1}+ commands. To reference a
particular page, use the \verb+\pageref{#1}+ command.
The \verb+\label{#1}+ should appear in a section heading, within an
equation, or in a table or figure caption. The \verb+\ref{#1}+ command
is used in the text where the citation is to be displayed. Some
examples: Section~\ref{sec:level1} on page~\pageref{sec:level1},
Table~\ref{tab:table1},%
\begin{table}
\caption{\label{tab:table1}This is a narrow table which fits into a
text column when using \texttt{twocolumn} formatting. Note that
REV\TeX~4 adjusts the intercolumn spacing so that the table fills the
entire width of the column. Table captions are numbered
automatically. This table illustrates left-aligned, centered, and
right-aligned columns. }
\begin{ruledtabular}
\begin{tabular}{lcr}
Left\footnote{Note a.}&Centered\footnote{Note b.}&Right\\
\hline
1 & 2 & 3\\
10 & 20 & 30\\
100 & 200 & 300\\
\end{tabular}
\end{ruledtabular}
\end{table}
and Fig.~\ref{fig:epsart}.
\section{Figures and Tables}
Figures and tables are typically ``floats''; \LaTeX\ determines their
final position via placement rules.
\LaTeX\ isn't always successful in automatically placing floats where you wish them.
Figures are marked up with the \texttt{figure} environment, the content of which
imports the image (\verb+\includegraphics+) followed by the figure caption (\verb+\caption+).
The argument of the latter command should itself contain a \verb+\label+ command if you
wish to refer to your figure with \verb+\ref+.
Import your image using either the \texttt{graphics} or
\texttt{graphicx} packages. These packages both define the
\verb+\includegraphics{#1}+ command, but they differ in the optional
arguments for specifying the orientation, scaling, and translation of the figure.
Fig.~\ref{fig:epsart}%
\begin{figure}
\includegraphics{fig_1}%
\caption{\label{fig:epsart} A figure caption. The figure captions are
automatically numbered.}
\end{figure}
is small enough to fit in a single column, while
Fig.~\ref{fig:wide}%
\begin{figure*}
\includegraphics{fig_2}%
\caption{\label{fig:wide}Use the \texttt{figure*} environment to get a wide
figure, spanning the page in \texttt{twocolumn} formatting.}
\end{figure*}
is too wide for a single column,
so instead the \texttt{figure*} environment has been used.
The analog of the \texttt{figure} environment is \texttt{table}, which uses
the same \verb+\caption+ command.
However, you should type your caption command first within the \texttt{table},
instead of last as you did for \texttt{figure}.
The heart of any table is the \texttt{tabular} environment,
which represents the table content as a (vertical) sequence of table rows,
each containing a (horizontal) sequence of table cells.
Cells are separated by the \verb+&+ character;
the row terminates with \verb+\\+.
The required argument for the \texttt{tabular} environment
specifies how data are displayed in each of the columns.
For instance, a column
may be centered (\verb+c+), left-justified (\verb+l+), right-justified (\verb+r+),
or aligned on a decimal point (\verb+d+).
(Table~\ref{tab:table4}%
\begin{table}
\caption{\label{tab:table4}Numbers in columns Three--Five have been
aligned by using the ``d'' column specifier (requires the
\texttt{dcolumn} package).
Non-numeric entries (those entries without
a ``.'') in a ``d'' column are aligned on the decimal point.
Use the
``D'' specifier for more complex layouts. }
\begin{ruledtabular}
\begin{tabular}{ccddd}
One&Two&\mbox{Three}&\mbox{Four}&\mbox{Five}\\
\hline
one&two&\mbox{three}&\mbox{four}&\mbox{five}\\
He&2& 2.77234 & 45672. & 0.69 \\
C\footnote{Some tables require footnotes.}
&C\footnote{Some tables need more than one footnote.}
& 12537.64 & 37.66345 & 86.37 \\
\end{tabular}
\end{ruledtabular}
\end{table}
illustrates the use of decimal column alignment.)
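As a sketch of the more complex layouts mentioned above, the \texttt{dcolumn} \verb+D+ specifier takes the input separator, the output separator, and the number of decimal places (the three-decimal column here is an illustrative choice):

```latex
% Requires \usepackage{dcolumn} in the preamble.
% Headers in a D column must be wrapped in \multicolumn.
\begin{tabular}{cD{.}{.}{3}}
Label & \multicolumn{1}{c}{Value}\\
A & 1.234\\
B & 12.5\\
\end{tabular}
```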
Extra column spacing may be specified as well, although
REV\TeX~4 sets this spacing so that the columns fill the width of the
table.
Horizontal rules are typeset using the \verb+\hline+
command.
The doubled (or Scotch) rules that appear at the top and
bottom of a table can be achieved by enclosing the \texttt{tabular}
environment within a \texttt{ruledtabular} environment.
Rows whose columns span multiple columns can be typeset using \LaTeX's
\verb+\multicolumn{#1}{#2}{#3}+ command
(for example, see the first row of Table~\ref{tab:table3}).%
\begin{table*}
\caption{\label{tab:table3}This is a wide table that spans the page
width in \texttt{twocolumn} mode. It is formatted using the
\texttt{table*} environment. It also demonstrates the use of
\textbackslash\texttt{multicolumn} in rows with entries that span
more than one column.}
\begin{ruledtabular}
\begin{tabular}{ccccc}
&\multicolumn{2}{c}{$D_{4h}^1$}&\multicolumn{2}{c}{$D_{4h}^5$}\\
Ion&1st alternative&2nd alternative&1st alternative
&2nd alternative\\ \hline
K&$(2e)+(2f)$&$(4i)$ &$(2c)+(2d)$&$(4f)$ \\
Mn&$(2g)$\footnote{The $z$ parameter of these positions is $z\sim\frac{1}{4}$.}
&$(a)+(b)+(c)+(d)$&$(4e)$&$(2a)+(2b)$\\
Cl&$(a)+(b)+(c)+(d)$&$(2g)$\footnote{This is a footnote in a table that spans the full page
width in \texttt{twocolumn} mode. It is supposed to set on the full width of the page, just as the caption does. }
&$(4e)^{\text{a}}$\\
He&$(8r)^{\text{a}}$&$(4j)^{\text{a}}$&$(4g)^{\text{a}}$\\
Ag& &$(4k)^{\text{a}}$& &$(4h)^{\text{a}}$\\
\end{tabular}
\end{ruledtabular}
\end{table*}
The tables in this document illustrate various effects.
Tables that fit in a narrow column are contained in a \texttt{table}
environment.
Table~\ref{tab:table3} is a wide table, therefore set with the
\texttt{table*} environment.
Lengthy tables may need to break across pages.
A simple way to allow this is to specify
the \verb+[H]+ float placement on the \texttt{table} or
\texttt{table*} environment.
Alternatively, using the standard \LaTeXe\ package \texttt{longtable}
gives more control over how tables break and allows headers and footers
to be specified for each page of the table.
An example of the use of \texttt{longtable} can be found
in the file \texttt{summary.tex} that is included with the REV\TeX~4
distribution.
There are two methods for setting footnotes within a table (these
footnotes will be displayed directly below the table rather than at
the bottom of the page or in the bibliography).
The easiest
and preferred method is just to use the \verb+\footnote{#1}+
command. This will automatically enumerate the footnotes with
lowercase roman letters.
However, it is sometimes necessary to have
multiple entries in the table share the same footnote.
In this case,
create the footnotes using
\verb+\footnotemark[#1]+ and \verb+\footnotetext[#1]{#2}+.
\texttt{\#1} is a numeric value.
Each time the same value for \texttt{\#1} is used,
the same mark is produced in the table.
The \verb+\footnotetext[#1]{#2}+ commands are placed after the \texttt{tabular}
environment.
Examine the \LaTeX\ source and output for Tables~\ref{tab:table1} and
\ref{tab:table2}%
\begin{table}
\caption{\label{tab:table2}A table with more columns still fits
properly in a column. Note that several entries share the same
footnote. Inspect the \LaTeX\ input for this table to see
exactly how it is done.}
\begin{ruledtabular}
\begin{tabular}{cccccccc}
&$r_c$ (\AA)&$r_0$ (\AA)&$\kappa r_0$&
&$r_c$ (\AA) &$r_0$ (\AA)&$\kappa r_0$\\
\hline
Cu& 0.800 & 14.10 & 2.550 &Sn\footnotemark[1]
& 0.680 & 1.870 & 3.700 \\
Ag& 0.990 & 15.90 & 2.710 &Pb\footnotemark[2]
& 0.450 & 1.930 & 3.760 \\
Au& 1.150 & 15.90 & 2.710 &Ca\footnotemark[3]
& 0.750 & 2.170 & 3.560 \\
Mg& 0.490 & 17.60 & 3.200 &Sr\footnotemark[4]
& 0.900 & 2.370 & 3.720 \\
Zn& 0.300 & 15.20 & 2.970 &Li\footnotemark[2]
& 0.380 & 1.730 & 2.830 \\
Cd& 0.530 & 17.10 & 3.160 &Na\footnotemark[5]
& 0.760 & 2.110 & 3.120 \\
Hg& 0.550 & 17.80 & 3.220 &K\footnotemark[5]
& 1.120 & 2.620 & 3.480 \\
Al& 0.230 & 15.80 & 3.240 &Rb\footnotemark[3]
& 1.330 & 2.800 & 3.590 \\
Ga& 0.310 & 16.70 & 3.330 &Cs\footnotemark[4]
& 1.420 & 3.030 & 3.740 \\
In& 0.460 & 18.40 & 3.500 &Ba\footnotemark[5]
& 0.960 & 2.460 & 3.780 \\
Tl& 0.480 & 18.90 & 3.550 & & & & \\
\end{tabular}
\end{ruledtabular}
\footnotetext[1]{Here's the first, from Ref.~\onlinecite{feyn54}.}
\footnotetext[2]{Here's the second.}
\footnotetext[3]{Here's the third.}
\footnotetext[4]{Here's the fourth.}
\footnotetext[5]{And etc.}
\end{table}
for an illustration.
All AIP journals require that the initial citation of
figures or tables be in numerical order.
\LaTeX's automatic numbering of floats is your friend here:
just put each \texttt{figure} environment immediately following
its first reference (\verb+\ref+), as we have done in this example file.
\begin{acknowledgments}
We wish to acknowledge the support of the author community in using
REV\TeX{}, offering suggestions and encouragement, testing new versions,
\dots.
\end{acknowledgments}
\section{Rayleigh Scattering Background}
Rayleigh scattering was discovered by Lord Rayleigh in the nineteenth century. It is the reason for the blue color of the sky in daytime and at twilight. As we mentioned in the method section, Rayleigh scattering describes the refraction of electromagnetic (EM) waves passing through media with density fluctuations, which lead to refractive index fluctuations\cite{jackson1999classical,landau2013classical}. Consider the incident wave as a plane EM wave with given wave vector $\mathbf{k}_i$,
\begin{equation} \label{Rayleigh Scattering Spectra: Expansion, 0th solution}
\begin{split}
\mathbf{E}_{inc} &= \boldsymbol{\xi}_0 \exp (i \mathbf{k}_i \cdot \mathbf{r} - i \omega_i t) \\
\left| \mathbf{k}_i\right|&=\frac{\sqrt{\epsilon_{0}} \omega_i}{c}
\end{split}
\end{equation}
where $\boldsymbol{\xi}_0$ is the polarization vector, $\mathbf{k}_i$ the incident wave vector, $\omega_i$ the incident wave frequency, and $\epsilon_0$ the dielectric constant of the gas. The refractive index fluctuation is governed by the dielectric constant of the gas $\epsilon(\rho)$, which is related to the density fluctuation through the perturbation expansion
\begin{equation} \label{Rayleigh Scattering Spectra: Expansion}
\begin{split}
\epsilon &=\epsilon_{0} + \partial_\rho \epsilon(\rho_0) \delta \rho + \cdots \\
\end{split}
\end{equation}
where $\rho_0$ is the mean gas density and $\delta \rho$ is the fluctuation of the gas density. The scattered spectrum $I$ is defined by the autocorrelation function of the scattered electric field $\mathbf{E}_1$,
\begin{equation} \label{Rayleigh Scattering Spectra: spectral density}
\begin{split}
\mathbf{I}(\omega_f) &= \frac{1}{2\pi}\int \left< \mathbf{E}_1(t') \cdot \mathbf{E}_1(t'+t) \right> \exp(-i\omega_f t) dt \\
\end{split}
\end{equation}
in which $\left<Q\right>$ denotes the ensemble average of the quantity $Q$. The scattered spectrum $I$ can be calculated by a perturbation method as follows, according to \cite{Pecora1964}:
\begin{equation} \label{Rayleigh Scattering Spectra: spectral density for with space-time correlation function, omega4}
\begin{split}
\mathbf{I}(\omega_f)&= \frac{N \omega_i^4 \left|\boldsymbol{\xi}_0\right|^2\sin(\psi)^2}{32 \pi^3 c^4 r^2} \left[ \partial_\rho \epsilon(\rho_0)\right]^2 \\
& \int dt \int d \mathbf{r}^3 e^{-i(\omega_f-\omega_i) t} e^{-i (\mathbf{k}_i-\mathbf{k}_f)\cdot \mathbf{r}} ( G(\mathbf{r},t) - \rho_0 ) \\
G(\mathbf{r},t)
&= \frac{1}{N} \left< \int d\mathbf{r}'^3 \rho(\mathbf{r}'-\mathbf{r},0)\rho(\mathbf{r}',t)\right> \\
\end{split}
\end{equation}
in which $G$ is the so-called density correlation function, $N$ is the total number of gas molecules in the integration domain, $\mathbf{k}_f$ the scattered wave vector, $\omega_f$ the scattered wave frequency, $c$ the speed of light in vacuum, and $r$ the distance between the scattering gas medium and the observer.
It is straightforward to show that the term inside the integral of the first equation in \eqref{Rayleigh Scattering Spectra: spectral density for with space-time correlation function, omega4} is proportional to the density fluctuation spectrum $\left<\rho^2\right>(\mathbf{k}_i-\mathbf{k}_f,\omega_i-\omega_f)$ for given $\mathbf{k}_i, \mathbf{k}_f$. The density fluctuation spectrum is defined, for a density field $\rho(t,\mathbf{x})$, as
\begin{equation} \label{Spectral Resolution of fluctuations: The spectra, meaning}
\begin{split}
\left<\rho^2\right>(\omega, \mathbf{k}) &= \frac{1}{2\pi}\int \left<\rho^2\right>(t, \mathbf{x}) e^{-i \omega t} e^{-i \mathbf{k}\cdot \mathbf{x}} dt d\mathbf{x}\\
\left<\rho^2\right>(t,\mathbf{x}) &= \left< \rho(t_0,\mathbf{x}_0) \rho(t_0+t,\mathbf{x}_0+ \mathbf{x}) \right> \\
\end{split}
\end{equation}
In summary, the key to calculating the Rayleigh scattering spectrum is calculating the density fluctuation spectrum.
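As a minimal numerical illustration of this last point (not part of the derivation), a fluctuation spectrum can be estimated from a stationary signal via its autocorrelation function, in the spirit of \eqref{Spectral Resolution of fluctuations: The spectra, meaning}. The Ornstein--Uhlenbeck-style signal and its correlation time below are arbitrary demonstration choices:

```python
import numpy as np

# Sketch: estimate <rho^2>(omega) for a synthetic stationary fluctuation.
# The signal is an illustrative Ornstein-Uhlenbeck-like process; tau
# (correlation time) and dt (time step) are arbitrary demo values.
rng = np.random.default_rng(0)
n, dt, tau = 4096, 0.01, 0.5
rho = np.zeros(n)
for i in range(1, n):
    rho[i] = rho[i - 1] * (1.0 - dt / tau) + rng.normal(0.0, np.sqrt(dt))
rho -= rho.mean()

# Autocorrelation <rho^2>(t) via zero-padded FFT (Wiener-Khinchin route),
# then the spectrum as the Fourier transform of the autocorrelation.
f = np.fft.fft(rho, 2 * n)
acf = np.fft.ifft(f * np.conj(f)).real[:n] / n
spectrum = np.fft.rfft(acf).real * dt / (2.0 * np.pi)
```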
\section{Non-dimensionalization of the governing equation}
In the system of equations \eqref{Non-dim Macroscopic Equations}, the non-dimensionalization uses the reference length scale $\Delta x = \frac{2\pi}{k}$, and the reference time scale follows $\frac{\Delta x}{\Delta t} = \sqrt{\frac{k_BT_0 }{m }}$. Non-dimensionalized quantities are denoted with a bar: $\bar{t} = \frac{t}{\Delta t}$, $\bar{x} = \frac{x}{\Delta x}$, $\bar{v}_x = \frac{\Delta t v_x}{\Delta x}$, $\bar{\rho} = \frac{\rho - \rho_0}{\rho_0}$, $\bar{T} = \frac{T-T_0}{T_0}$, $\bar{\sigma}_{xx} = \frac{m }{\rho_0 k_B T_0} \sigma_{xx}$, $\bar{q}_{x} = \frac{1}{\rho_0 }\left({\frac{m }{ k_B T_0}} \right)^{3/2} q_x$.
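For concreteness, a quick numerical sketch of these reference scales (the gas constants below are illustrative argon-like values and the wave number is a hypothetical choice, not taken from the paper):

```python
import math

# Illustrative values (argon-like gas); k is a hypothetical wave number.
kB = 1.380649e-23            # Boltzmann constant, J/K
T0 = 300.0                   # reference temperature, K
m = 6.63e-26                 # molecular mass, kg
k = 2.0 * math.pi / 1.0e-6   # wave number for a 1 micrometer wavelength

dx = 2.0 * math.pi / k             # reference length  Delta x
v_ref = math.sqrt(kB * T0 / m)     # thermal speed scale Delta x / Delta t
dt = dx / v_ref                    # reference time    Delta t
```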
\section{Derivation of the DUAL model from linear combination of derivatives}
In this section we show that the DUAL model \eqref{The Linear Model on Derivatives: stress_and_flux, not flux, Fourier} is flexible enough to model all well-defined linear constitutive relations. As discussed in the paper, the well-defined linear constitutive relations are of the form \eqref{The Linear Model on Derivatives: stress_and_flux} under the condition of non-decreasing entropy. We will show that the DUAL model can be derived from these linear constitutive relations. The non-dimensionalized form of these well-defined linear constitutive relations is
\begin{equation} \label{The Linear Model on Derivatives: stress_and_flux no dim}
\begin{split}
\bar{\sigma}_{xx} &= - \sum_{n=1}^{\infty} \frac{a_n}{\mu \Delta x^{n-1}} \frac{\partial^n \bar{v}_x}{\partial \bar{x}^n} = - \sum_{n=1}^{\infty} \bar{a}_n \frac{\partial^n \bar{v}_x}{\partial \bar{x}^n} \\
\bar{q}_{x} &=-\sum_{n=1}^{\infty} \frac{b_n}{\kappa \Delta x^{n-1}} \frac{\partial^n \bar{T}}{\partial \bar{x}^n}=-\sum_{n=1}^{\infty} \bar{b}_n\frac{\partial^n \bar{T}}{\partial \bar{x}^n} ,
\end{split}
\end{equation}
in which $\bar{a}_n = \frac{a_n}{\mu \Delta x^{n-1}}$, $\bar{b}_n = \frac{b_n}{\kappa \Delta x^{n-1}}$. These non-dimensional quantities have no constraints on them yet. An analysis of the entropy change of the system, using Eq.~\eqref{Hydrodynamic Fluctuations:Total entropy change rate, expand 1}, tells us that
\begin{equation} \label{The DSMC Result: with linear model: Entropy condition}
q_j\partial_j T \le 0; \quad \sigma_{ij} \left( \frac{\partial v_i}{\partial x_j} + \frac{\partial v_j}{\partial x_i} \right) \le 0.
\end{equation}
We require the two terms to satisfy these conditions independently because the temperature and velocity fluctuate independently. In our 1D case, more explicitly, we have
\begin{equation} \label{The DSMC Result: with linear model: Entropy condition, 1D}
q_x\partial_x T \le 0; \quad \sigma_{xx} \partial_x v_x \le 0.
\end{equation}
In particular, since the system is homogeneous, there should not exist any particular scale at which entropy decreases; we expect no entropy decrease at each scale separately. Hence, decomposing the system into Fourier modes, we require
\begin{equation} \label{The DSMC Result: with linear model: Entropy condition, 1D, Fourier}
\tilde{q}_k (i k\tilde{T}_k)^* \le 0; \quad \tilde{\sigma}_{k} (i k \tilde{v}_k)^* \le 0
\end{equation}
where $*$ denotes the complex conjugate. Combining this with our linear model \eqref{The Linear Model on Derivatives: stress_and_flux}, after a Fourier transform of the spatial coordinate we obtain the condition
\begin{equation} \label{The Linear Model on Derivatives: stress_and_flux entropy}
\begin{split}
\sum_{n=1}^{\infty} (i k)^{n+1} a_n \left|\tilde{v}_k\right|^2 &\le 0 \\
\sum_{n=1}^{\infty} (i k)^{n+1} b_n \left|\tilde{T}_k\right|^2 &\le 0 \\
\end{split}
\end{equation}
These inequalities require that the left-hand sides have no imaginary part. Hence only the terms with even powers of $i$ remain, as follows:
\begin{equation} \label{The Linear Model on Derivatives: stress_and_flux entropy, explicit terms}
\begin{split}
(k^2 a_1 - k^4 a_3 + k^6 a_5 - k^8 a_7 \cdots ) \left|\tilde{v}_k\right|^2 &\ge 0 \\
(k^2 b_1 - k^4 b_3 + k^6 b_5 - k^8 b_7 \cdots )\left|\tilde{T}_k\right|^2&\ge 0 \\
\end{split}
\end{equation}
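This parity argument can be checked mechanically. A small \texttt{sympy} sketch, truncating the series at $n=7$ (an arbitrary cutoff for illustration), confirms that the real part of $\sum_n (ik)^{n+1} a_n$ contains exactly the alternating even-order terms above:

```python
import sympy as sp

k = sp.symbols('k', real=True, positive=True)
a = sp.symbols('a1:8', real=True)   # a_1 ... a_7, truncated series

# LHS of the entropy condition, sum_n (i k)^(n+1) a_n, truncated at n = 7.
expr = sum(an * (sp.I * k) ** (n + 1) for n, an in zip(range(1, 8), a))

# Only even powers of i survive in the real part, with alternating signs;
# the condition expr <= 0 then gives k^2 a1 - k^4 a3 + ... >= 0.
expected = -k**2 * a[0] + k**4 * a[2] - k**6 * a[4] + k**8 * a[6]
assert sp.simplify(sp.re(expr) - expected) == 0
```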
In our case, the Knudsen number is determined directly by the wave number $k$, as shown in \eqref{Kn}. In addition, we want the Knudsen number $\mathrm{Kn}$ to be the only non-dimensional quantity that governs the system. Hence we expect the quantities $\bar{a},\bar{b}$ to be functions of the Knudsen number. The non-dimensionalized version of the constitutive relation reads
\begin{equation} \label{The Linear Model on Derivatives: stress_and_flux, Fourier}
\begin{split}
\partial_{\bar{x}} \bar{\sigma}(\bar{k}) &= (\bar{k}^2 \bar{a}_1 - \bar{k}^4 \bar{a}_3 + \bar{k}^6 \bar{a}_5 \cdots ) \tilde{v}_{\bar{k}} = \bar{k}^2 M(\bar{k})\tilde{v}_{\bar{k}}\\
\partial_{\bar{x}} \bar{q}(\bar{k}) &= (\bar{k}^2 \bar{b}_1 - \bar{k}^4 \bar{b}_3 + \bar{k}^6 \bar{b}_5 \cdots ) \tilde{T}_{\bar{k}}= \bar{k}^2 K(\bar{k})\tilde{T}_{\bar{k}} , \\
\end{split}
\end{equation}
in which the functions $M,K$ are the sums of the infinite series and must be non-negative. Here $\bar{k} = \frac{k \Delta x}{2 \pi}$ is the non-dimensionalized version of $k$; in particular, by \eqref{Kn} it is proportional to the Knudsen number. We can further deduce the stress and heat flux as the DUAL model:
\begin{equation} \label{The Linear Model on Derivatives: stress_and_flux, not flux, Fourier,appendix}
\begin{split}
\bar{\sigma}(\bar{k}) & = -i \bar{k} M(\mathrm{Kn})\tilde{v}_{\bar{k}} \quad M(\mathrm{Kn}) \ge 0 \\
\bar{q}(\bar{k}) &=-i \bar{k} K(\mathrm{Kn})\tilde{T}_{\bar{k}} \quad K(\mathrm{Kn}) \ge 0 . \\
\end{split}
\end{equation}
\section{Calculating the density fluctuation spectra}
Substituting the DUAL model into \eqref{Non-dim Macroscopic Equations}, for a monatomic gas with heat conduction coefficient $\kappa = \frac{15 k_B}{4m}\mu$, we obtain the non-dimensionalized system
\begin{equation}\label{The DSMC Result: with linear model}
\begin{split}
\partial_{\bar{t}} \delta \bar{\rho} + \partial_{\bar{x}} \bar{v}_x &= 0\\
\partial_{\bar{t}} \bar{v}_x + \partial_{\bar{x}}\delta \bar{T} + \partial_{\bar{x}} \delta \bar{\rho} &= \mbox{Kn} \sum_{n=1}^{\infty} \bar{a}_n \partial_{\bar{x}}^{n+1} \bar{v}_x \\
\frac{3 }{2}\partial_{\bar{t}} \delta \bar{T} + \partial_{\bar{x}} \bar{v}_x &= \frac{15 }{4 } \mbox{Kn} \sum_{n=1}^{\infty} \bar{b}_n \partial_{\bar{x}}^{n+1} \bar{T} \\
\end{split}
\end{equation}
Multiplying by $\rho(0,0)$ and taking the ensemble average, we obtain a system for the correlation functions such as $\left<\rho^2\right>(t,x)$, $\left<\rho v_x\right>(t,x)$, etc. These functions are well defined since we consider a homogeneous gas, so the fluctuations are stationary.
Before we proceed, we define the one-sided Fourier transform on the time coordinate:
\begin{equation} \label{Spectral Resolution of fluctuations: The FT, one sided}
x_\omega^{(+)} = \frac{1}{\sqrt{2\pi}}\int_0^{\infty} x(t) e^{-i \omega t} dt .
\end{equation}
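As a sanity check on this definition, the one-sided transform of a simple decaying signal has a closed form: for $x(t)=e^{-t}$ one gets $x_\omega^{(+)} = 1/\bigl(\sqrt{2\pi}\,(1+i\omega)\bigr)$, which the following numerical sketch reproduces (the evaluation frequency and truncation length are arbitrary choices):

```python
import numpy as np

# One-sided Fourier transform of x(t) = exp(-t), evaluated at omega = 2,
# approximated by a trapezoidal rule on a truncated time axis.
omega = 2.0
t = np.linspace(0.0, 50.0, 200001)   # exp(-50) makes the tail negligible
f = np.exp(-t) * np.exp(-1j * omega * t)
h = t[1] - t[0]
integral = (f.sum() - 0.5 * (f[0] + f[-1])) * h   # trapezoid rule
x_plus = integral / np.sqrt(2.0 * np.pi)

# Closed form: integral of exp(-(1 + i*omega) t) over [0, inf).
exact = 1.0 / (np.sqrt(2.0 * np.pi) * (1.0 + 1j * omega))
```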
In this section we omit the bar indicating non-dimensionalized variables.
Applying a Fourier transform on the space coordinate and a one-sided Fourier transform on the time coordinate, the macroscopic equations become
\begin{equation}\label{The DSMC Result: with linear model, Fourier}
\begin{split}
i \omega \left< {\rho}^2 \right>_{\omega,k}^{+} + i k \left< {\rho} {v}_x \right>_{\omega,k}^{+} &= \frac{1}{\sqrt{2\pi}}\left< {\rho}^2 \right>_{k}\\
i \omega \left< {\rho} {v}_x \right>_{\omega,k}^{+} + i k \left< {\rho} {T} \right>_{\omega,k}^{+} + i k \left< {\rho}^2 \right>_{\omega,k}^{+} &= \\
- k^2 \mbox{Kn}M(Kn) & \left< {\rho} {v}_x \right>_{\omega,k}^{+} \\
\frac{3 }{2}i \omega \left< {\rho} {T} \right>_{\omega,k}^{+} + i k \left< {\rho} {v}_x \right>_{\omega,k}^{+} =
- \frac{15 }{4 } & k^2 \mbox{Kn} K(Kn) \left< {\rho} {T} \right>_{\omega,k}^{+} \\
\end{split}
\end{equation}
The term $\left< {\rho}^2 \right>_{k}$ on the right-hand side of the first equation in \eqref{The DSMC Result: with linear model, Fourier} comes from integration by parts. This is the spatial density fluctuation spectrum, which is a constant since there is no spatial correlation between different locations at the macroscopic level. The term $\left< {\rho}^2 \right>_{k}$ can be calculated from fluctuation theory \cite{Landau1980, Landau1987}:
\begin{equation}\label{ini condition}
\left< {\rho}^2 \right>_{k}=\frac{mN_{eff}k}{(2\pi)^{3/2} \rho_0 }
\end{equation}
where $m$ is the molecular mass, and $N_{eff}$ is the effective number of molecules per simulated particle used in the DSMC simulation, which takes the Monte Carlo fluctuations into account. The integration by parts also gives terms such as $\left< {\rho}{v} \right>_{k}$ and $\left< {\rho}{T} \right>_{k}$; however, they vanish according to fluctuation theory.
The density fluctuation spectrum can be obtained from the spectrum $\left< {\rho}^2 \right>_{\omega,k}^{+}$, which takes a one-sided Fourier transform on the time coordinate and a Fourier transform on the spatial coordinate. The following relation recovers the density fluctuation spectrum:
\begin{equation} \label{Spectral Resolution of fluctuations: The spectra, decompose to sides}
\left< \rho^2\right>_\omega = \left<\rho^2\right>_\omega^{(+)} + \left<\rho^2\right>_\omega^{(+)*}
\end{equation}
Substituting the linear constitutive relation model into the system equations, we have the system \eqref{The DSMC Result: with linear model, Fourier}, combined with the initial condition \eqref{ini condition}. The term $\left< {\rho}^2 \right>_{\omega,k}^{+}$ can be solved from this system, with the notation
\begin{equation}\label{Notation AB}
\begin{split}
A(Kn) &= Kn M(Kn)\\
B(Kn) &= \frac{15}{4}Kn K(Kn).\\
\end{split}
\end{equation}
The solution for $\left< {\rho}^2 \right>_{\omega,k}^{+}$ is
\begin{widetext}
\begin{equation}\label{The Linear Model on Derivatives: solution}
\left< {\rho}^2 \right>_{\omega,k}^{+}=
-\frac{i k N_{eff} m \left(-2 k^4 A(\text{Kn}) B(\text{Kn})-3 i k^2 \omega A(\text{Kn})-2 i k^2 \omega B(\text{Kn})-2 k^2+3 \omega ^2\right)}{(2 \pi)^2 \rho_0 \left(-2 k^4 \omega A(\text{Kn}) B(\text{Kn})-3 i k^2 \omega ^2 A(\text{Kn})+2 i k^4 B(\text{Kn})-2 i k^2 \omega ^2 B(\text{Kn})-5 k^2 \omega +3 \omega ^3\right)}
\end{equation}
\end{widetext}
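The algebra leading to \eqref{The Linear Model on Derivatives: solution} is mechanical. The following \texttt{sympy} sketch solves the linear system \eqref{The DSMC Result: with linear model, Fourier}, with $A$, $B$ as in \eqref{Notation AB} and the source term abbreviated as $C = m N_{eff} k /((2\pi)^2 \rho_0)$, and checks the result against the closed form above:

```python
import sympy as sp

w, k, A, B, C = sp.symbols('omega k A B C')
X, Y, Z = sp.symbols('X Y Z')   # <rho^2>+, <rho v_x>+, <rho T>+
I = sp.I

# Fourier-space system with A = Kn*M(Kn), B = (15/4)*Kn*K(Kn) and
# source C = m*N_eff*k / ((2*pi)^2 * rho_0) on the first equation.
eqs = [
    sp.Eq(I * w * X + I * k * Y, C),
    sp.Eq(I * w * Y + I * k * Z + I * k * X, -k**2 * A * Y),
    sp.Eq(sp.Rational(3, 2) * I * w * Z + I * k * Y, -k**2 * B * Z),
]
sol = sp.solve(eqs, [X, Y, Z], dict=True)[0]

# Published closed form for <rho^2>+_{omega,k}, with C factored out.
P = -2*k**4*A*B - 3*I*k**2*w*A - 2*I*k**2*w*B - 2*k**2 + 3*w**2
Q = (-2*k**4*w*A*B - 3*I*k**2*w**2*A + 2*I*k**4*B
     - 2*I*k**2*w**2*B - 5*k**2*w + 3*w**3)
assert sp.simplify(sol[X] - (-I * C * P / Q)) == 0
```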
The NS equations also lie in the framework of the linear constitutive relation model; we need only set the relation
\begin{equation} \label{NS spectra}
M(Kn) = \frac{4}{3}
\end{equation}
combined with Eq.~\eqref{Fix Pr}. The spectrum hence follows the same form as in Eq.~\eqref{The Linear Model on Derivatives: solution}.
As for the Grad 13 method, two additional equations for the stress and heat flux appear:
\begin{equation}\label{The DSMC Result: Grad 13}
\begin{split}
\partial_{ {t}} {\sigma} + \frac{4}{3}\partial_{ {x}} {v}_x + \frac{8}{15}\partial_{ {x}} {q} &= -\frac{ {\sigma}}{Kn}\\
\partial_{ {t}} {q} +\partial_{ {x}} {\sigma}+ \frac{5 }{2}\partial_{ {x}} {T} &= -\frac{2}{3}\frac{ {q}}{Kn}\\
\end{split}
\end{equation}
Again, multiplying by $\rho(0,0)$ and taking the ensemble average, we obtain another system of correlation functions. Applying a one-sided Fourier transform on the time coordinate and a Fourier transform on the space coordinate, we have
\begin{widetext}
\begin{equation}\label{The DSMC Result: Grad 13, Fourier}
\begin{split}
i \omega \left< {\rho}^2\right>_{\omega,k}^{+} + i k \left< {\rho} {v}_x\right>_{\omega,k}^{+} &= \frac{mN_{eff}k}{(2\pi)^2\rho_0}\\
i \omega \left< {\rho} {v}_x\right>_{\omega,k}^{+} + i k\left< {\rho} {T}\right>_{\omega,k}^{+} + i k \left< {\rho}^2\right>_{\omega,k}^{+} + i k \left< {\rho} {\sigma}\right>_{\omega,k}^{+} &= 0\\
\frac{3 }{2}i \omega \left< {\rho} {T}\right>_{\omega,k}^{+} + i k \left< {\rho} {v}_x\right>_{\omega,k}^{+} + i k \left< {\rho} {q}\right>_{\omega,k}^{+} &= 0\\
i \omega \left< {\rho} {\sigma}\right>_{\omega,k}^{+} + \frac{4}{3}i k \left< {\rho} {v}_x\right>_{\omega,k}^{+} + \frac{8}{15}i k \left< {\rho} {q}\right>_{\omega,k}^{+} &= -\frac{\left< {\rho} {\sigma}\right>_{\omega,k}^{+}}{Kn}\\
i \omega \left< {\rho} {q}\right>_{\omega,k}^{+} +i k \left< {\rho} {\sigma}\right>_{\omega,k}^{+}+ \frac{5 }{2}i k \left< {\rho} {T}\right>_{\omega,k}^{+} &= -\frac{2}{3}\frac{\left< {\rho} {q}\right>_{\omega,k}^{+}}{Kn}\\
\end{split}
\end{equation}
\end{widetext}
The spectrum is then calculated in the same way as for the NS equations and the linear constitutive relation model. The result is
\begin{widetext}
\begin{equation}\label{The DSMC Result: grad 13 solution}
\left< \delta {\rho}^2 \right>_{\omega,k,Grad13}^{+} = \frac{m N_{\text{eff}} k\left(-36 i k^4 \text{Kn}^2+189 i k^2 \text{Kn}^2 \omega ^2+165 k^2 \text{Kn} \omega -20 i k^2-45 i \text{Kn}^2 \omega ^4-75 \text{Kn} \omega ^3+30 i \omega ^2\right)}{(2 \pi)^2 \rho _0 \left(135 k^4 \text{Kn}^2 \omega -75 i k^4 \text{Kn}-234 k^2 \text{Kn}^2 \omega ^3+240 i k^2 \text{Kn} \omega ^2+50 k^2 \omega +45 \text{Kn}^2 \omega ^5-75 i \text{Kn} \omega ^4-30 \omega ^3\right)}
\end{equation}
\end{widetext}
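This closed form can be checked numerically: solving the $5\times5$ linear system of Eq.~(\ref{The DSMC Result: Grad 13, Fourier}) directly and evaluating the quoted expression at the same point should agree to machine precision. A minimal sketch, with the source term $m N_{eff} k/(2\pi)^2\rho_0$ normalized to $s=1$ and arbitrary test values of $k$, $\omega$, $Kn$:

```python
import numpy as np

# Cross-check of the Grad-13 density spectrum: direct solve of the 5x5
# Fourier-space system versus the closed-form expression quoted above.
# Unknowns: <rho rho>, <rho v_x>, <rho T>, <rho sigma>, <rho q>.

def grad13_solve(k, w, Kn, s=1.0):
    M = np.array([
        [1j * w, 1j * k,     0.0,      0.0,              0.0],
        [1j * k, 1j * w,     1j * k,   1j * k,           0.0],
        [0.0,    1j * k,     1.5j * w, 0.0,              1j * k],
        [0.0,    4j * k / 3, 0.0,      1j * w + 1 / Kn,  8j * k / 15],
        [0.0,    0.0,        2.5j * k, 1j * k,           1j * w + 2 / (3 * Kn)],
    ], dtype=complex)
    rhs = np.array([s, 0, 0, 0, 0], dtype=complex)
    return np.linalg.solve(M, rhs)[0]

def grad13_closed(k, w, Kn, s=1.0):
    num = (-36j * k**4 * Kn**2 + 189j * k**2 * Kn**2 * w**2
           + 165 * k**2 * Kn * w - 20j * k**2 - 45j * Kn**2 * w**4
           - 75 * Kn * w**3 + 30j * w**2)
    den = (135 * k**4 * Kn**2 * w - 75j * k**4 * Kn
           - 234 * k**2 * Kn**2 * w**3 + 240j * k**2 * Kn * w**2
           + 50 * k**2 * w + 45 * Kn**2 * w**5
           - 75j * Kn * w**4 - 30 * w**3)
    return s * num / den

print(grad13_solve(0.8, 1.1, 0.5))
print(grad13_closed(0.8, 1.1, 0.5))
```

In the limit $Kn\to0$ both reduce to the Euler result, and at $\mathcal{O}(Kn)$ they reproduce the linear-model spectrum with $A=\tfrac{4}{3}Kn$, $B=\tfrac{15}{4}Kn$.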
As for the velocity fluctuation spectrum, the LHS of Eq.~(\ref{The DSMC Result: with linear model, Fourier}) needs to be modified, since this time we multiply by $v_x(0,0)$ instead of $\rho(0,0)$ before taking the ensemble average. This time we have
\begin{widetext}
\begin{equation}\label{The Linear Model on Derivatives: velocity }
\begin{split}
i \omega \left< {\rho} {v}_x \right>_{\omega, k}^{+} + ik \left< {v}_x^2 \right>_{\omega, k}^{+} &= 0\\
i \omega \left< {v}_x^2 \right>_{\omega, k}^{+} + ik \left< {v}_x {T} \right>_{\omega, k}^{+} + ik \left< {\rho} {v}_x \right>_{\omega, k}^{+} &= - k^2 A(\textit{Kn}) \left< {v}_x^2 \right>_{\omega, k}^{+} +\frac{N_{eff} m k}{(2\pi)^2 \rho_0 } \\
\frac{3 }{2}i \omega \left< {v}_x {T} \right>_{\omega, k}^{+} + ik \left< {v}_x^2 \right>_{\omega, k}^{+} &= - k^2 B(\textit{Kn}) \left< {v}_x {T} \right>_{\omega, k}^{+} \\
\end{split}
\end{equation}
\end{widetext}
\begin{widetext}
\begin{equation}\label{The Linear Model on Derivatives: velocity result}
\left< \delta {v}_x^2 \right>_{\omega,k}^{+} =-\frac{i N_{eff} m \omega k \left(3 \omega -2 i k^2 B(\text{Kn})\right)}{(2\pi)^2 \rho_0 \left(-2 k^4 \omega A(\text{Kn}) B(\text{Kn})-3 i k^2 \omega ^2 A(\text{Kn})+2 i k^4 B(\text{Kn})-2 i k^2 \omega ^2 B(\text{Kn})-5 k^2 \omega +3 \omega ^3\right)}
\end{equation}
\end{widetext}
\section{DSMC calculation Details}
We use the DSMC0F program by G.~A.~Bird \cite{Bird1994} to simulate the fluctuations of a 1D homogeneous gas. The computational details are listed in Table \ref{tab:table2} in SI units. A snapshot is stored every 5 time steps. The macroscopic quantities for each cell are then calculated by averaging the corresponding quantities of the particles in that cell. The density fluctuation spectrum used to train the neural network is an ensemble average over 27 independent DSMC runs.
\begin{table}
\caption{\label{tab:table2}
The coefficients and configuration used in DSMC0F program.}
\begin{ruledtabular}
\begin{tabular}{c|c|c|c}
Domain Length & 4.8 & Collision Model & VHS\footnotemark[1]\\
Power law\footnotemark[2] & 0.5 & Diameter\footnotemark[3] & $3.5\times10^{-10}$\\
Number of Cells & 1281 & Mean Free Path & $1.8\times10^{-2}$ \\
Simulation Particles & 128100 & Mean Free Time & $4\times10^{-5}$ \\
Density & $5\times10^{-6}$ & Temperature & 300 \\
Molecule Mass & $5\times10^{-26}$ & Sound Speed & 371.56 \\
Heat Conduction& 0.0214 & Viscosity &$2.07\times10^{-5}$ \\
$\Delta t$ & $4\times10^{-6}$ & $\Delta x$ & $3.7\times10^{-3}$\\
Subcell \footnotemark[4]& 8 & &\\
\end{tabular}
\end{ruledtabular}
\footnotetext[1]{Variable hard sphere model}
\footnotetext[2]{The viscosity-temperature power law used in variable hard sphere model}
\footnotetext[3]{The reference molecule diameter}
\footnotetext[4]{The number of subcell per cell used in particle collision process}
\end{table}
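The derived quantities in Table \ref{tab:table2} can be cross-checked from the primary gas parameters. A hard-sphere estimate (a sketch; the VHS collision model modifies the mean free path only slightly at the reference temperature) reproduces the tabulated values:

```python
import math

# Consistency check of the derived quantities in the DSMC settings table
# from the primary gas parameters (hard-sphere estimates, monatomic gas).
kB  = 1.380649e-23   # Boltzmann constant, J/K
m   = 5e-26          # molecular mass, kg
rho = 5e-6           # mass density, kg/m^3
d   = 3.5e-10        # reference molecular diameter, m
T   = 300.0          # temperature, K

n    = rho / m                                      # number density, 1/m^3
mfp  = 1.0 / (math.sqrt(2) * math.pi * d**2 * n)    # mean free path
vbar = math.sqrt(8 * kB * T / (math.pi * m))        # mean thermal speed
mft  = mfp / vbar                                   # mean free time
c    = math.sqrt(5.0 / 3.0 * kB * T / m)            # sound speed, gamma = 5/3

print(f"mean free path ~ {mfp:.2e} m")   # table: 1.8e-2
print(f"mean free time ~ {mft:.2e} s")   # table: 4e-5
print(f"sound speed    ~ {c:.2f} m/s")   # table: 371.56
```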
\section{Neural Network Training Details}
The density fluctuation spectrum has the form $\left<\rho^2\right>(k,\omega)$, where $k$ is the wave number of the density fluctuation wave of interest, which directly determines the Knudsen number, and $\omega$ is the frequency of the corresponding wave. For a given $k$, the spectrum $\left<\rho^2\right>_k(\omega)$ is a function of the angular frequency $\omega$, with spectral structure caused by the collective behavior of the gas molecules. During the training of the linear constitutive relation model, 200 values of $k$ are drawn from a uniform distribution on the interval $\left[0,\frac{\pi\rho_0}{2\mu}\sqrt{\frac{k_B T_0}{m}}\right]$. For each $k$, we perform a Monte Carlo sampling using the acceptance-rejection method: 200 values of $\omega$ are drawn from the range $[-3ck, 3ck]$, where $c$ is the speed of sound, with probability proportional to $\left<\rho^2\right>_k(\omega)$ calculated using DSMC. The training data set thus consists of 40000 $(k,\omega, \left<\rho^2\right>)$ tuples in total. In the same way, we sample another 400 tuples as a validation set.
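The acceptance-rejection step can be sketched as follows. This is a minimal illustration with a placeholder spectrum; in practice $\left<\rho^2\right>_k(\omega)$ is tabulated from DSMC, and the grid-based envelope is a crude bound:

```python
import random

def sample_omega(spectrum, k, c, n_samples, seed=0):
    """Draw omega in [-3ck, 3ck] with probability proportional to
    spectrum(omega) via acceptance-rejection (no normalization needed)."""
    rng = random.Random(seed)
    lo, hi = -3.0 * c * k, 3.0 * c * k
    # crude envelope: maximum of the spectrum on a fine grid
    smax = max(spectrum(lo + (hi - lo) * i / 999.0) for i in range(1000))
    samples = []
    while len(samples) < n_samples:
        w = rng.uniform(lo, hi)
        if rng.uniform(0.0, smax) < spectrum(w):
            samples.append(w)
    return samples

# placeholder spectrum with peaks near the acoustic lines at |omega| ~ c*k
spec = lambda w: 1.0 / (1.0 + 25.0 * (abs(w) - 1.0) ** 2)
samples = sample_omega(spec, k=1.0, c=1.0, n_samples=200)
print(len(samples), min(samples), max(samples))
```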
The function $M$ is modeled as a fully connected neural network, as shown in Fig.~\ref{The neural network}. The weights to be trained are $\mathbf{W}_{1,2}$; note that, unlike the usual setup, our linear layers carry no bias terms. This is because the function must satisfy the constraints discussed in the paper. The weights are initialized with the uniform initialization that is the default in PyTorch. We then compare the spectrum $\left<\rho^2\right>_k(\omega)$ computed using DSMC with the spectrum predicted by the linear constitutive relation model through their mean-square difference, using the loss in Eq.~(\ref{The loss function}). The optimizer is Adam with learning rate $\alpha = 0.005$, beta parameters $\beta_1 = 0.9$ and $\beta_2 = 0.999$, and $\epsilon = 10^{-8}$. The batch size for each training step is 64. Training stops when the loss on the validation set starts to increase. Since we are fitting a 1D function with thousands of data points, over-fitting is negligible; this is also confirmed by the velocity spectrum predicted by our model.
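A minimal sketch of a bias-free two-layer network trained with the stated Adam hyperparameters is given below. It is a NumPy stand-in for the PyTorch setup; the layer width and the fitting target are illustrative, not the ones used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# bias-free two-layer network: M(x) = W2 @ tanh(W1 @ x)
W1 = rng.uniform(-0.5, 0.5, (16, 1))
W2 = rng.uniform(-0.5, 0.5, (1, 16))

# illustrative 1D fitting target (placeholder for the DSMC spectrum fit)
x = np.linspace(0.1, 2.0, 256)
y = 4.0 / 3.0 + 0.3 * np.sin(3 * x)

def grads(x, y):
    """Mean-square loss and its gradients w.r.t. W1, W2 (backprop by hand)."""
    h = np.tanh(W1 @ x[None, :])          # hidden layer, (16, n)
    out = (W2 @ h)[0]                     # network output, (n,)
    e = 2.0 * (out - y) / len(x)          # d(mse)/d(out)
    gW2 = e[None, :] @ h.T
    gW1 = ((W2.T @ e[None, :]) * (1.0 - h**2)) @ x[:, None]
    return [gW1, gW2], np.mean((out - y) ** 2)

# Adam with the hyperparameters quoted in the text
params = [W1, W2]
ms = [np.zeros_like(p) for p in params]
vs = [np.zeros_like(p) for p in params]
lr, b1, b2, eps = 0.005, 0.9, 0.999, 1e-8

loss0 = grads(x, y)[1]
for t in range(1, 501):
    gs, _ = grads(x, y)
    for p, g, m, v in zip(params, gs, ms, vs):
        m[:] = b1 * m + (1 - b1) * g
        v[:] = b2 * v + (1 - b2) * g * g
        p -= lr * (m / (1 - b1**t)) / (np.sqrt(v / (1 - b2**t)) + eps)
print(loss0, grads(x, y)[1])   # loss decreases during training
```

The absence of bias terms is the only structural difference from a standard two-layer perceptron; early stopping on a validation set would be added around the training loop.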
\end{document}
\section{Introduction}
\label{intro}
Anomalous diffusion is frequently observed in transport in porous media and has been the subject of
intense theoretical research in recent decades \cite{bouchaud,havlin,metzler2000,metzler2014,xu}.
In such media, the root-mean-square displacement $R$ of a tracer particle scales with time $t$ as
\begin{equation}
R \sim t^\nu ,
\label{R}
\end{equation}
where, in the case of subdiffusion, $\nu<1/2$.
The random walk dimension
\begin{equation}
D_W=\frac{1}{\nu}
\label{DW}
\end{equation}
is consequently larger than $2$.
In normal or Fickian diffusion, $\nu =1/2$ ($D_W=2$); superdiffusion is characterized by
$\nu>1/2$, but that case is not discussed here.
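The normal-diffusion value $\nu=1/2$ is easy to reproduce numerically. A sketch for unbiased random walks on an unobstructed square lattice follows (the ensemble size and fitting window are chosen for speed, not precision):

```python
import numpy as np

rng = np.random.default_rng(1)

def rw_exponent(n_walkers=4000, n_steps=512):
    """Fit R ~ t^nu for simple random walks on the free square lattice."""
    steps = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
    moves = steps[rng.integers(0, 4, size=(n_steps, n_walkers))]
    pos = np.cumsum(moves, axis=0)                 # (n_steps, n_walkers, 2)
    msd = np.mean(np.sum(pos**2, axis=2), axis=1)  # mean-square displacement
    t = np.arange(1, n_steps + 1)
    sel = t >= 16                                  # skip short-time transients
    # slope of log R vs log t, with R = sqrt(msd)
    return np.polyfit(np.log(t[sel]), 0.5 * np.log(msd[sel]), 1)[0]

nu = rw_exponent()
print(nu)   # close to 0.5 on the unobstructed lattice
```

On a fractal lattice the same fit, restricted to walkers that respect the solid sites, yields $\nu<1/2$.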
The delay in material transport in a porous medium is caused by irregularities such as impenetrable
barriers and dead ends.
The anomaly is observed if this delay has no characteristic time scale \cite{havlin},
which is in turn related to the absence of a characteristic length scale of the irregularities
and explains the frequent observation of subdiffusion in self-similar fractals.
Several approaches to study anomalous diffusion are based on direct solutions of
the transport problems inside structures that represent those media under certain approximations.
Many of these structures are deterministic fractals \cite{mandelbrot}, which are generated by
recursive application of a rule for generation of porous and solid phases.
Well known examples are the Sierpinski carpets (SCs), whose construction
is illustrated in Fig. \ref{constructionfractals}a,b.
Their relatively simple geometry (e.g., compared with stochastically generated fractals)
facilitates quantitative or qualitative connections between structural and transport properties.
Random walks were already intensively studied in fractal lattices with the geometry of SCs,
with finite or infinite ramification, and in randomized versions of the SCs
\cite{benavraham1983,kim1993,fssrw,dasgupta,rwpla,rrf,kim2007,babJCP,barlow,
haber2013,darazs,balankin2015,suwannasen}.
In infiltration of a fluid or a solute in a porous medium, an external surface
is in contact with a reservoir of the species that is transported in the pores.
These features are observed in a large variety of systems, such as hydration of rocks, water or
dye absorption in soils or rocks, injection of liquids in fractures or nanoporous solids, etc.
In some cases, models of convective/advective motion and diffusion are studied in deterministic
or randomized fractals
\cite{kuntz,gerolymatou,stalgorova,atzeni,roubinet,martys,perfect2006,dentz2006}.
However, in several other cases, diffusion is the dominant mechanism in the infiltration problem,
which motivates the study of anomalous diffusion models and the study of the
geometry of porous or fractured media \cite{persson,lockington,xu2006,bru,
atzeni,gerasimov,voller,filipovitch,gisladottir}.
The infiltration of randomly moving particles in planar and three-dimensional lattices was also
illustrated in Ref. \protect\cite{sapovalbook}; this motivated the gradient
percolation problem, in which two lattice borders were kept with fixed concentrations of particles
\cite{sapovaloriginal,bunde,rosso1986}.
Fluid infiltration in fractals was considered in a recent work by Voller \cite{voller},
who studied the diffusive motion inside several SCs keeping one external border with
constant pressure.
The fraction of the area occupied by the fluid, which here we call the filling $F$, scales as
\begin{equation}
F \sim t^n ,
\label{F}
\end{equation}
with $n<1/2$, consistently with anomalous subdiffusion.
Subsequently, a Hele-Shaw cell was designed by Filipovitch et al.\ \cite{filipovitch} to reproduce
the pore-block geometry of the SCs and used to study infiltration of glycerin.
The exponents $n$ measured in the experimental apparatus were consistent with the previous
simulation values, thus providing a clear macroscopic demonstration of the relation between
structural disorder and subdiffusion.
Hereafter, these processes are called diffusive infiltration.
In two- or three-dimensional unobstructed lattices, $n$ has the normal diffusion value $1/2$.
However, the exponents $n$ and $\nu$ are very different in the same fractal.
For instance, in the fractal in Fig. \ref{constructionfractals}a, simulations of random walks (RWs)
give $\nu\approx 0.476$ \cite{kim1993,fssrw,suwannasen},
while the infiltration simulations give $n=0.419$ \cite{voller} and the corresponding
experiments give $n=0.423$ \cite{filipovitch}.
Although the works on diffusive infiltration in SCs consider only their first three or four stages
of construction, the finite sizes seem to have small effects on $n$.
The main aim of the present work is to relate the anomalous exponents of single particle diffusion
in the bulk ($\nu$) and of the diffusive infiltration from the border ($n$) of deterministic fractals.
We consider SCs, whose dimensions are between $1$ and $2$, and Menger sponges (MSs), whose
dimensions are between $2$ and $3$.
Numerical simulations are used to obtain accurate estimates of $\nu$ and $n$ and a scaling
approach is used to show that their ratio depends only on the bulk fractal dimension and on the
fractal dimension of the boundary from which the particles come.
We also define a diffusion front in this infiltration problem, show that averaged fronts
in SCs have shapes similar to those of the experiments in the Hele-Shaw cells, and briefly discuss
their roughening in the SCs and MSs.
We stress that our work is concerned with unbiased RWs for both problems, thus it is
not expected to describe systems in which convective or advective transport is relevant.
This paper is organized as follows.
Sec. \ref{model} presents the models of fractal lattices, diffusion processes, and information on the
simulation work.
Sec. \ref{simulations} shows simulation results for single particle diffusion and diffusive infiltration
in SCs and MSs.
Sec. \ref{scaling} presents an approach to connect the scaling exponents of those problems.
In Sec. \ref{fronts}, the roughening of the diffusion fronts is analyzed.
In Sec. \ref{conclusion}, our results and conclusions are summarized.
\section{Fractal lattices, diffusion models, and their simulation}
\label{model}
The construction of the SCs studied here is shown in Figs. \ref{constructionfractals}a and
\ref{constructionfractals}b; they are respectively called SC1 and SC2.
Their fractal dimensions are $D_F^{\left( 1\right)}=\ln{8}/\ln{3}$ and
$D_F^{\left( 2\right)}=\ln{16}/\ln{5}$, respectively.
These values up to five decimal places are shown in Table \ref{tableresults}.
\begin{figure}[!h]
\includegraphics[width=\textwidth]{constructionfractals.eps}
\caption{(Color online)
Panels (a) and (b) show the first two stages of construction of SC1 and SC2, respectively.
The generator (stage $m=1$) is a square divided in $b^2$ subsquares with $k$ of them removed (blue):
In SC1, $b=3$ and $k=1$; in SC2, $b=5$ and $k=9$.
In each stage of the construction, each remaining square is replaced by the generator, thus $b$ is
the scaling factor of this process.
A SC with dimension $D_F=\ln{\left( b^2-k\right)}/\ln{b}$ is obtained after infinite iterations.
In panel (a), the pore sites of the SC1 lattice are shown as green dots in the stage $m=2$.
Panels (c) and (d) show the generators of MS1 and MS2, respectively, in which a cube is divided in $b^3$
subcubes and $k$ of them (in blue tones) are removed: in MS1, $b=3$ and $k=7$; in MS2, $b=5$ and $k=27$,
but the division of this generator is shown only in one external face of the main cube and one face
of the removed cube to facilitate visualization.
In each stage of the construction of a MS, each remaining cube is replaced by the generator, so that
a fractal with dimension $D_F=\ln{\left( b^3-k\right)}/\ln{b}$ is obtained after infinite iterations.
}
\label{constructionfractals}
\end{figure}
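These dimensions follow directly from the generator rule $D_F=\ln\left( b^d-k\right)/\ln{b}$, with $d=2$ for carpets and $d=3$ for sponges. As a quick numerical check (the snippet below is ours and not part of the simulation code), the values quoted in Table \ref{tableresults} are reproduced:

```python
from math import log

def fractal_dimension(b, k, d):
    """Dimension of the fractal obtained by dividing a d-dimensional
    hypercube into b**d subcells, removing k of them, and iterating."""
    return log(b**d - k) / log(b)

# (b, k, d) of the four generators defined in the figure above.
generators = {"SC1": (3, 1, 2), "SC2": (5, 9, 2),
              "MS1": (3, 7, 3), "MS2": (5, 27, 3)}
for name, (b, k, d) in generators.items():
    # e.g. SC1 -> 1.89279 and MS2 -> 2.84880 (five decimals, as in Table 1)
    print(f"{name}: D_F = {fractal_dimension(b, k, d):.5f}")
```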
A lattice is defined with sites at the vertices of the squares produced at each step
of the construction of the SC.
In each stage $m$, the unit size is defined as the distance between nearest neighbor
sites, thus the lateral size of the lattice (number of sites in one border) is $L=b^m+1$,
where $b$ is the scaling factor of the generator.
The solid sites of the lattice are those located inside the lacunas.
The remaining sites form the pore network.
The distribution of pore sites is illustrated in the stage $m=2$ of SC1 in
Fig. \ref{constructionfractals}a.
Note that many pore sites are in the borders of the lacunas.
Hereafter we refer to this pore network as the SC; it actually has the same fractal dimension
of the region remaining after infinite iterations of the construction rule.
The particles executing RWs in the SC can occupy only pore sites.
An impenetrable border of the lattice is located at the $y$ axis ($x=0$), as shown in Fig.
\ref{diffmodel}a.
This means that no particle can jump to points with $x<0$.
Periodic boundary conditions are considered in the $y$ direction.
These conditions do not affect the geometric properties of the fractals.
\begin{figure}[!h]
\includegraphics[width=0.4\textwidth]{diffmodel.eps}
\caption{(Color online) (a) Some steps of a particle (red) in the single particle diffusion model
in SC1 (white pores, blue solid).
The lower dark line is the impenetrable border ($x=0$).
(b) Configuration of the diffusive infiltration model in SC1 after some steps, with particles filling
the border $x=0$.
}
\label{diffmodel}
\end{figure}
The first step of our work is to study single particle infiltration in the SCs,
with starting positions randomly chosen in the $y$ axis.
This is equivalent to the infiltration of non-interacting particles starting at that axis at $t=0$,
as proposed in a recent model of diffusion in porous deposits \cite{diffballistic}.
In one time unit, the particle randomly chooses one nearest neighbor site to jump to, and moves
to that site only if it is also a pore site; otherwise, the particle does not move.
Fig. \ref{diffmodel}a illustrates the first steps of a particle in SC1.
We simulated ${10}^7$ single particle RWs in the stages $m=6$ to $m=9$ of SC1 and $m=7$ of SC2.
The maximal time of each walk was $t_{MAX}={10}^7$ in the largest lattices.
These conditions ensure that no walker reaches the border at $x=L$.
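For concreteness, the jump rule of this walk can be sketched as follows (a minimal illustration in our own notation; the function \texttt{is\_pore} stands for the pore/solid lookup of the lattice and is an assumption of the sketch, not part of the published code):

```python
import random

def rw_step(x, y, is_pore, L):
    """One time unit of the single-particle walk: pick a random nearest
    neighbor and move there only if it is a pore site; otherwise stay.
    The border x = 0 is impenetrable and y is periodic with period L."""
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    nx, ny = x + dx, (y + dy) % L
    if nx < 0 or not is_pore(nx, ny):
        return x, y          # blocked trial: the particle does not move
    return nx, ny
```

In the simulations, this rule is iterated up to $t_{MAX}$ steps per walker, accumulating $x^2$ at the selected times.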
The model of diffusive infiltration in the SCs is defined analogously to the model in planar
and cubic lattices shown in Ref. \protect\cite{sapovalbook}.
The $y$ axis (line $x=0$) is permanently filled with mobile particles that
execute RWs with excluded volume interactions, i.e., with at most one particle per site.
In one time unit, each particle executes an average of one step trial to a randomly chosen nearest
neighbor site.
The step is allowed if the target site is a pore site and is not occupied by another diffusing
particle; otherwise, the particle does not move.
If a particle leaves the $y$ axis, another particle immediately refills the available position.
This creates a pressure for the particles to move to the positive $x$ direction.
Fig. \ref{diffmodel}b illustrates the beginning of this process in SC1.
In our simulations, $50$ independent configurations of diffusive infiltration were generated
in stages $m=7$ of SC1 and $m=5$ of SC2, with maximal times $t_{MAX}={10}^5$.
Simulations in $m=6$ of SC1 were also performed to confirm the absence of finite-size effects.
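One sweep of this infiltration dynamics can be sketched as follows (our own notation; the \texttt{occupied} set and the \texttt{is\_pore} lookup are assumptions of the sketch):

```python
import random

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def infiltration_sweep(occupied, is_pore, L):
    """One time unit of the diffusive-infiltration model on a carpet
    lattice: each particle attempts on average one random step, with
    excluded volume (at most one particle per site); the border x = 0
    is refilled instantaneously whenever one of its sites becomes empty."""
    snapshot = list(occupied)
    random.shuffle(snapshot)
    for x, y in snapshot:
        if (x, y) not in occupied:          # site vacated earlier this sweep
            continue
        dx, dy = random.choice(MOVES)
        nx, ny = x + dx, (y + dy) % L
        if nx < 0 or not is_pore(nx, ny) or (nx, ny) in occupied:
            continue                        # blocked trial: particle stays
        occupied.remove((x, y))
        occupied.add((nx, ny))
    for y in range(L):                      # refill the border x = 0
        if is_pore(0, y):
            occupied.add((0, y))
    return occupied
```

The filling $F\left( t\right)$ discussed below is then simply the size of \texttt{occupied} divided by the number of lattice sites.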
The generators of the MSs studied here are shown in Figs. \ref{constructionfractals}c and
\ref{constructionfractals}d; these fractals are respectively called MS1 and MS2.
Those images differ from usual presentations of these fractals because they highlight the solid
region (dark) of the generator, with the remaining region being the porous one.
The fractal dimensions of the porous regions are
$D_F^{\left( 1\right)}=\ln{20}/\ln{3}$ for MS1 and
$D_F^{\left( 2\right)}=\ln{98}/\ln{5}$ for MS2.
The values up to five decimal places are also shown in Table \ref{tableresults}.
\begin{table}
\centering
\caption{Bulk and border dimensions of each fractal, best estimates of exponents, and
corresponding estimates of $\nu\left( D_F-D_B\right)$ for the test of Eq. (\ref{nnu}).
Simulation data were obtained in this work except where indicated.}
\label{tableresults}
\begin{tabular}{|l|l|l|l|l|l|}
\hline
Fractal & $D_F$ & $D_B$ & $\nu$ (simulation) & $n$ (simulation) & $\nu\left( D_F-D_B\right)$ \\ \hline
SC1 & $1.89279$ & $1$ & $0.475\pm 0.003$ (Ref. \protect\cite{kim1993}) & $0.424\pm 0.004$ & $0.424\pm 0.003$ \\ \hline
SC2 & $1.72271$ & $1$ & $0.455\pm 0.003$ (Ref. \protect\cite{kim1993}) & $0.334\pm 0.014$ & $0.329\pm 0.002$ \\ \hline
MS1 & $2.72683$ & $1.89279$ & $0.467\pm 0.005$ & $0.389\pm 0.002$ & $0.389\pm 0.004$ \\ \hline
MS2 & $2.84880$ & $2$ & $0.479\pm 0.013$ & $0.407\pm 0.014$ & $0.407\pm 0.012$ \\ \hline
\end{tabular}
\end{table}
Lattice sites are located at the vertices of the cubes produced at each step of the construction
of the MS and the distance between nearest neighbor sites is taken as the size unit.
At stage $m$, the lateral size of the lattice is $L=b^m+1$, where
$b$ is the scaling factor of the generator.
The solid sites are located inside the lacunas at each stage and the remaining sites are pore sites,
which may be occupied by particles executing RWs.
An impenetrable border of the MS lattice is located at the $yz$ plane ($x=0$) and
periodic conditions are considered in the directions $y$ and $z$.
In single particle infiltration in MSs, each particle is released at a randomly chosen
pore site of the border $x=0$ ($yz$ plane) and, in each time unit, chooses one nearest neighbor site
and jumps to that site only if it is also a pore site.
${10}^6$ RWs were generated in stages $m=6$ of MS1 and $m=4$ of MS2, with $t_{MAX}={10}^5$.
In diffusive infiltration in MSs, all pore sites of the border $x=0$ are permanently occupied
by mobile particles and each particle executes an average of one step trial per unit time;
when a pore site at $x=0$ becomes empty, it is instantaneously refilled.
In stages $m=6$ of MS1 and $m=4$ of MS2, we produced $20$ configurations of diffusive infiltration,
with maximal times $t_{MAX}={10}^4$.
A diffusion front $\{ h\}$ is defined in the diffusive infiltration problem.
The front height at position $y$ of a SC [$\left( y,z\right)$ of a MS] is an average
of the displacements $x$ of all particles with that position.
In general, at a given substrate position $i$, the front height $h_i$ is
\begin{equation}
h_i \left( t \right)\equiv \frac{2}{N_i}\sum_{\sigma =1}^{N_i} x_\sigma ,
\label{defh}
\end{equation}
where $N_i$ is the number of particles with that substrate position and $\sigma$ runs
over all those particles.
If this definition is used for a configuration with no vacancy between particles (solid-on-solid
aggregates), then $h_i$ is equal to the position $x$ of the top particle at position $i$;
this is the usual definition of the interface in film growth and/or kinetic roughening models,
and justifies the factor $2$ in Eq. (\ref{defh}).
The roughness of the diffusion front, $W\left( t\right)$, was calculated for selected times.
It is defined as the rms fluctuation of $\{ h\}$, averaged over the substrate positions
and over different configurations of the front at time $t$.
Finite-size effects on $W$ are expected only at times $t \sim L^z$ or longer,
where $z>1$ is the dynamical exponent of the front roughening \cite{barabasi}.
However, the infiltration simulations are restricted to $\langle h\rangle<L$, thus
$W$ is not expected to depend on $L$, i.e., roughening is in the growth regime \cite{barabasi}.
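In code, the definitions above amount to the following (our notation; each particle is represented by its displacement $x$ and its substrate position $i$):

```python
from statistics import mean, pstdev

def front_height(particles, i):
    """Front height h_i: twice the mean displacement x of all particles
    at substrate position i (0 if the column is empty)."""
    xs = [x for x, pos in particles if pos == i]
    return 2 * mean(xs) if xs else 0.0

def roughness(particles, positions):
    """Roughness W: rms fluctuation of {h} over the substrate positions."""
    h = [front_height(particles, i) for i in positions]
    return pstdev(h)

# A vacancy-free column occupying x = 0, 1, 2, 3 has h = 3, the position
# of its top particle -- the solid-on-solid property that justifies the
# factor 2 in the definition of h_i.
column = [(x, 0) for x in range(4)]
print(front_height(column, 0))  # -> 3.0
```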
\section{Simulation results}
\label{simulations}
\subsection{Infiltration in Sierpinski carpets}
\label{simulationSC}
Fig. \ref{x2SC} shows the time evolution of the mean square displacement in the $x$ direction
in single particle diffusion in SC1 ($m=9$) and SC2 ($m=7$).
The linear fits of each data set are shown.
\begin{figure}[!h]
\includegraphics[width=0.5\textwidth]{x2SC.eps}
\caption{(Color online) Mean square displacement as a function of time of single particle RWs in
stage $m=9$ of SC1 (red squares) and $m=7$ of SC2 (blue triangles).
For clarity, we show only data points in intervals $\geq 0.05$ of $\log_{10}t$.
Solid lines are least squares fits of all the data generated in the range ${10}^3\leq t\leq{10}^7$.
}
\label{x2SC}
\end{figure}
The mean square displacement oscillates in both cases, but the oscillations are visible only
in the data for SC2.
The diffusing particle may take a long time to go around the border of lacunas, whose sizes
increase as the particle travels to more distant points.
The amplitudes of the oscillations are much smaller in SC1 because its lacunas are relatively small
compared to those of SC2.
These oscillations are log-periodic, similarly to those observed in simulations of RWs in the
bulk of SCs \cite{babJCP}, and are a consequence of the discrete scale invariance of those
fractals \cite{babEPL,akkermans}.
The estimates $\nu =0.478\pm 0.003$ for SC1 and $\nu=0.45\pm 0.01$ for SC2 are obtained
by performing linear fits in several time ranges in the simulated interval.
Fits of the data in smaller stages of construction of those fractals
confirm that finite-size effects are negligible.
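These fits amount to least-squares straight lines in the $\log$-$\log$ plot of $\langle x^2\rangle$ versus $t$; a minimal version of the exponent extraction (our code, illustrated on synthetic power-law data) is:

```python
from math import log10

def loglog_slope(ts, ys):
    """Least-squares slope of log10(y) versus log10(t), i.e., the exponent
    s of an assumed power law y ~ t**s over the fitted time range."""
    xs = [log10(t) for t in ts]
    ws = [log10(y) for y in ys]
    n = len(xs)
    mx, mw = sum(xs) / n, sum(ws) / n
    num = sum((xi - mx) * (wi - mw) for xi, wi in zip(xs, ws))
    return num / sum((xi - mx) ** 2 for xi in xs)

# Synthetic <x^2> ~ t^(2*nu) with nu = 0.475: halving the fitted slope
# recovers nu up to floating-point error.
ts = [10**k for k in range(3, 8)]
nu_fit = loglog_slope(ts, [t**0.95 for t in ts]) / 2
```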
These estimates are very close to those obtained from simulations
of RWs starting at random points of the bulk of the SCs:
$\nu=0.475\pm 0.003$ \cite{kim1993} and $0.476\pm 0.005$ \cite{fssrw,suwannasen} in SC1;
$\nu=0.458\pm 0.004$ \cite{rrf} and $0.455\pm 0.003$ \cite{kim1993} in SC2.
This comparison is important because it shows that the long time properties of single particle
RWs in the SCs do not depend on the initial positions of those particles nor on the boundary
conditions.
We understand that this is consistent with the uniqueness of Brownian motion in a
Sierpinski carpet demonstrated in Ref. \protect\cite{barlow}, which is related to the
uniqueness of the Laplacian definition in that fractal.
The most accurate estimates of $\nu$ for each fractal are shown in Table \ref{tableresults}.
The diffusive infiltration in SC1 is illustrated in Fig. \ref{infiltrationSC}a, which shows a region
near the filled border of that lattice at time $t=8000$.
Fig. \ref{infiltrationSC}a also shows an averaged diffusion front $\{ H\}$, in which
$H\left( 9j\right)$ (position $y=9j$, with integer $j\geq 0$)
is the average of the heights $h_i$ (Eq. \ref{defh}) from $i=9j-4$ to $i=9j+4$.
This is an average over $9$ substrate positions, which is the second smallest size of the lacunas
in the lattice.
This averaging highlights the long wavelength fluctuations.
Instead, the diffusion front $\{ h\}$, which is shown in Fig. \ref{infiltrationSC}b,
also has short wavelength fluctuations of large amplitude.
\begin{figure}[!h]
\includegraphics[width=0.6\textwidth]{infiltrationSC.eps}
\caption{(Color online)
(a) Configuration of the diffusive infiltration in SC1 at $t=8000$, with lacunas (solid) in light green,
empty (pore) region in white, and diffusing particles in red.
The blue curve is the averaged diffusion front $\{ H\}$.
(b) Diffusion front (not averaged) $\{ h\}$ in the same positions $y$ of the above picture.
}
\label{infiltrationSC}
\end{figure}
The averaged diffusion front has a structure of rounded mounds separated by gaps, with the
mounds located between the third level lacunas.
The main depletion of that front is close to the fourth level lacuna (the largest one at the
left side).
This morphology resembles that observed in infiltration of glycerin in the Hele-Shaw cells in
Ref. \protect\cite{filipovitch}, which reinforces the connection with that system.
The experimental front is smoother, but this is probably related
to interfacial tension effects and to the small stage of the SC used in the cell.
The filling $F\left( t\right)$ is the number of moving particles at time $t$ per lattice site.
Fig. \ref{FSC} shows the time evolution of $F$ in SC1 ($m=7$) and SC2 ($m=5$) and linear fits
of each data set.
Considering fits in various time ranges, we obtain the estimates $n=0.424\pm 0.004$ and
$n=0.334\pm 0.014$, respectively, which are reproduced in Table \ref{tableresults}.
\begin{figure}[!h]
\includegraphics[width=0.5\textwidth]{FSC.eps}
\caption{(Color online)
Filling of stage $m=7$ of SC1 (red squares) and $m=5$ of SC2 (blue triangles) as a function of time
in the diffusive infiltration model.
Only data points with difference $\geq 0.05$ in $\log_{10}t$ were plotted.
Solid lines are least squares fits of all the data generated in the range ${10}^2\leq t\leq{10}^5$.
}
\label{FSC}
\end{figure}
The value of $n$ in SC1 is very close to the estimate $n=0.419$ of Ref. \protect\cite{voller} for
infiltration simulated with a diffusion equation; in SC2, $n=0.319$ was obtained in that work,
which differs by $4.5\%$ from our estimate.
The experiments of glycerin infiltration in the Hele-Shaw cells of
Ref. \protect\cite{filipovitch} give $n=0.423$ and $n=0.334$, respectively, which are
both in excellent agreement with our results.
These results suggest a universal scaling in the diffusive infiltration problem in SCs.
The exponents $n$ are much smaller than the estimates of $\nu$ in the same SCs.
The differences are $-11.5\%$ for SC1 and $-27.1\%$ for SC2, both much larger than the error bars
of the estimates of both exponents.
We also performed simulations of single particle RWs and of diffusive infiltration in free
square lattices, i.e., without obstacles.
Even with a small number of configurations in a lattice of lateral size $1024$ and with
maximal simulation time ${10}^5$, we obtained $n\approx \nu\approx 0.5$ with good accuracy.
\subsection{Infiltration in Menger sponges}
\label{simulationMS}
Fig. \ref{x2MS} shows the time evolution of the mean square displacement in the $x$ direction
of single particle diffusion in MS1 ($m=6$) and MS2 ($m=4$), with linear fits of each data set.
The log-periodic oscillations due to the discrete scale invariance \cite{babJCP,babEPL} are
also observed here, and their amplitudes are also larger in the fractal with larger lacunas (MS2).
\begin{figure}[!h]
\includegraphics[width=0.5\textwidth]{x2MS.eps}
\caption{(Color online)
Mean square displacement as a function of time of single particle RWs in
stage $m=6$ of MS1 (red squares) and $m=4$ of MS2 (blue triangles).
Only data points with differences $\geq 0.05$ in $\log_{10}t$ were plotted.
Solid lines are least squares fits of all the data generated in the range ${10}^2\leq t\leq{10}^5$.
}
\label{x2MS}
\end{figure}
\begin{figure}[!h]
\includegraphics[width=0.7\textwidth]{infiltrationMS.eps}
\caption{(Color online)
Configurations of diffusive infiltration at $t=2000$ in planes (a) $z=250$ and (b) $z=360$ of MS1.
Colors are the same as in Fig. \ref{infiltrationSC}.
}
\label{infiltrationMS}
\end{figure}
Fits of the data in various time intervals yield the estimates $\nu =0.467\pm 0.005$ for MS1 and
$\nu=0.479\pm 0.013$ for MS2, which are also reproduced in Table \ref{tableresults}.
Previous estimates of $\nu$ in MSs were obtained only from lower and upper bounds
\cite{benavraham1983,balankin2015}, thus they had lower accuracy than the present ones.
Simulations in smaller stages of MS1 and MS2 give approximately the same estimates, indicating that
finite-size effects are small.
Note that the exponent $\nu$ in the MSs follows the same trend of decrease with the fractal
dimension that was observed in the SCs in Ref. \protect\cite{fssrw}.
Moreover, those exponents are near the normal diffusion value $1/2$, which is also observed in
fractals without dead ends and with dimensions between $1$ and $2$, such as the SCs \cite{rrf}.
The diffusive infiltration is illustrated in Figs. \ref{infiltrationMS}a,b, which
show cross sections of MS1 near the filled boundary at $t=2000$.
In the plane $z=250$ (Fig. \ref{infiltrationMS}a), the density of obstacles near the filled
boundary is low, thus it is easier for particles to reach larger distances.
In the plane $z=360$ (Fig. \ref{infiltrationMS}b), the density of blocks near the boundary is
large, which confines many particles; however, note that three particles have already reached
an upper porous region of this plane by migrating through other planes.
Fig. \ref{FMS} shows the time evolution of the filling $F$ in MS1 ($m=6$) and MS2 ($m=4$).
Linear fits of the data with ${10}^2\leq t\leq {10}^4$ are shown.
We also analyzed fits in different time ranges to obtain the estimates
$n=0.389\pm 0.002$ and $n=0.407\pm 0.014$, respectively.
They are presented in Table \ref{tableresults}.
Again, we also observe that the exponents $\nu$ and $n$ are very different.
\begin{figure}[!h]
\includegraphics[width=0.5\textwidth]{FMS.eps}
\caption{(Color online)
Filling of stage $m=6$ of MS1 (red squares) and $m=4$ of MS2 (blue triangles) as a function of time
in the diffusive infiltration model.
Only data points with differences $\geq 0.05$ in $\log_{10}t$ were plotted.
Solid lines are least squares fits of all the data generated in the range ${10}^2\leq t\leq{10}^5$.
}
\label{FMS}
\end{figure}
We also simulated single particle RWs and the diffusive infiltration model in simple cubic
lattices.
With a small number of configurations and maximal time ${10}^5$, we obtained
$n\approx \nu\approx 0.5$, which is consistent with normal diffusion in both cases.
\section{Scaling approach}
\label{scaling}
A scheme of the diffusive infiltration in a fractal is shown in Fig. \ref{scheme},
with a characteristic length $\langle h\rangle$ filled by the diffusing species.
$L$ is the lateral size of the lattice, whose dimension is $D_F$ and whose filled boundary
has dimension $D_B$.
For SCs, the boundary is a filled line, thus $D_B=1$; for MS2, the boundary is a plane,
thus $D_B=2$; however, for MS1, the filled boundary has the geometry of SC1, thus it
has dimension $D_B=\ln{8}/\ln{3}\approx 1.89279$.
The diffusing front is expected to advance with the same scaling of single particle diffusion
because the more advanced particles move in a region with low density, in which the main
constraints are the irregularities of the fractal network and not the excluded volume
interactions.
For this reason, we expect
\begin{equation}
\langle h\rangle \sim t^\nu .
\label{hmedio}
\end{equation}
The values of $\langle h\rangle$ calculated in our simulations are consistent with this scaling,
but the fluctuations are much larger than those of single particle diffusion.
\begin{figure}[!h]
\includegraphics[width=0.5\textwidth]{scheme.eps}
\caption{(Color online)
Scheme of diffusive infiltration from a border characterized by fractal dimension $D_B$
to a medium characterized by fractal dimension $D_F$ with lateral size $L$.
$\langle h\rangle$ is the average thickness of the diffusion front.
}
\label{scheme}
\end{figure}
The filled region can be divided into hypercubes of edge $\langle h\rangle$; one of them is highlighted
in Fig. \ref{scheme}.
The total filling of each hypercube is
\begin{equation}
F_H\sim \langle h\rangle^{D_F} ,
\label{FH}
\end{equation}
assuming that $\langle h\rangle$ is sufficiently large for the fractality of the medium
to be observed.
If $L\gg \langle h\rangle$, the number of hypercubes in the boundary is
\begin{equation}
N_H\sim {\left( \frac{L}{\langle h\rangle}\right)}^{D_B} .
\label{NH}
\end{equation}
The total filling consequently scales as
\begin{equation}
F = N_H F_H\sim L^{D_B}{\langle h\rangle}^{D_F-D_B} \sim L^{D_B} t^{\nu\left( D_F-D_B\right)} .
\label{Fscaling}
\end{equation}
This gives an exact relation between the single particle diffusion exponent and the diffusive
infiltration exponent:
\begin{equation}
n = \nu\left( D_F-D_B\right) = \frac{D_F-D_B}{D_W} ,
\label{nnu}
\end{equation}
where Eq. (\ref{DW}) was also used.
Table \ref{tableresults} shows the values of $\nu\left( D_F-D_B\right)$ obtained from the best
available estimates of $\nu$ and the exact dimensions $D_F$ and $D_B$.
They are in excellent agreement with the estimates of $n$ obtained in simulations.
Note that Eq. (\ref{nnu}) implies $n=\nu=1/2$ for systems in which bulk and boundary are regular
lattices with integer dimensions, since $D_B=D_F-1$ and single particle diffusion is normal in
those cases.
Also note that the successful application to systems whose boundaries are
compact (SCs and MS2) and fractal (MS1) provides strong support for this scaling approach.
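Eq. (\ref{nnu}) can be checked numerically against the central values of Table \ref{tableresults} (the snippet is ours; error bars are omitted):

```python
from math import log

# (D_F, D_B, nu from simulation, n from simulation) for each fractal.
cases = {
    "SC1": (log(8) / log(3),  1.0,              0.475, 0.424),
    "SC2": (log(16) / log(5), 1.0,              0.455, 0.334),
    "MS1": (log(20) / log(3), log(8) / log(3),  0.467, 0.389),
    "MS2": (log(98) / log(5), 2.0,              0.479, 0.407),
}
for name, (DF, DB, nu, n_sim) in cases.items():
    n_pred = nu * (DF - DB)           # the scaling relation n = nu*(D_F - D_B)
    print(f"{name}: n(sim) = {n_sim:.3f}, nu*(D_F - D_B) = {n_pred:.3f}")
```

The predicted values agree with the simulated $n$ within the quoted error bars, as in the last two columns of Table \ref{tableresults}.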
In Ref. \protect\cite{fssrw}, it was shown that the exponent $\nu$ in SCs has an approximately
linear dependence on $D_F$ if this dimension is not much smaller than $2$.
Combination of this relation with Eq. (\ref{nnu}) gives an approximately quadratic dependence
of $n$ on $D_F$.
Indeed, such a quadratic relation was obtained by Filipovitch et al.~\cite{filipovitch} in the
experiments of fluid infiltration in SCs.
For many other fractals in which the RW exponent is known, Eq. (\ref{nnu}) can predict the
anomalous properties of diffusive infiltration and, if experiments are available, it may
help to evaluate the applicability of a given fractal model.
\section{Roughening of the infiltration fronts}
\label{fronts}
The roughness of the diffusion front was measured in all fractals at selected times, from
$t=50$ to $t=36000$ in SCs and from $t=50$ to $t=6400$ in MSs.
Fig. \ref{roughness}a shows $W$ as a function of $t$ in SC1, MS1, and in the square lattice;
Fig. \ref{roughness}b shows the same quantities in SC2, MS2, and in the simple cubic lattice.
\begin{figure}[!h]
\includegraphics[width=0.8\textwidth]{roughness.eps}
\caption{(Color online)
Roughness of diffusion fronts in: (a) SC1 (red squares), MS1 (green triangles), and square lattice (blue
crosses); (b) SC2 (red squares), MS2 (green triangles), and simple cubic lattice (blue crosses).
The data in the square (cubic) lattice are displaced $0.2$ ($0.5$) units downward to avoid
intersection with other data sets.
Dashed lines are least squares fits of data in square and cubic lattices,
both with slope $0.257$.
}
\label{roughness}
\end{figure}
In square and cubic lattices, the linear fits of the data shown in Figs. \ref{roughness}a,b
give $W\sim t^\beta$ with a growth exponent $\beta\approx 0.25$.
The movement of the particles in the diffusion front is completely random,
thus the height of each position $y$ in the SCs ($yz$ in MSs) randomly fluctuates around
the average value $\langle h\rangle$.
This is characteristic of random uncorrelated deposition, in which
$W\sim \langle h\rangle^{1/2}$ \cite{barabasi}.
Since $\langle h\rangle\sim t^{1/2}$ in diffusive infiltration in those lattices,
we obtain $\beta=1/4$, which is consistent with the simulation results.
The infiltration problem defined here has similarities with that of gradient percolation,
in particular the existence of lattice borders with fixed concentration of particles;
see, e.g., Ref. \protect\cite{sapovalbook}.
However, a very important difference is the existence of a fixed concentration gradient
along the $x$ direction in that case; instead, in the present problem, the concentration
gradient is continuously varying between the filled border and the diffusion front.
In the gradient percolation problem, the diffusion front is defined as the interface of
a cluster of connected particles, which also differs from the present definition.
Thus, even if correlations in particle positions were introduced in our model
(e.g., to represent surface tension effects), the roughening might be different from
that of the gradient percolation front \cite{bunde,rosso1986}.
Despite the simplicity of the diffusive infiltration front in regular lattices and the
fact that the uncorrelated growth extends to the fractal media, some interesting features
can be observed in the latter case.
As shown in Figs. \ref{roughness}a,b, the roughness oscillates in all fractals.
As the front reaches the lower borders of a set of parallel lacunas of a given size, the front can
advance only in the regions between those lacunas, which leads to large differences in the heights
at the confined and at the non-confined regions; see, e.g., Figs. \ref{infiltrationSC}a and
\ref{infiltrationMS}a,b.
However, when the front reaches the upper borders of those lacunas, it enters a more homogeneous
region, in which lateral diffusion slows down the increase of height differences.
This effect is enhanced in lattices with large lacunas, which is the case of SC2 and MS2.
For instance, Fig. \ref{infiltrationSC53}a shows an infiltration profile in SC2 at
$t=1000$, in which the front has bypassed the second level lacunas but did not reach the
larger ones.
Correspondingly, Fig. \ref{roughness}b shows a plateau in the $\log{W}\times \log{t}$
plot at $t\sim 1000$.
On the other hand, Fig. \ref{infiltrationSC53}b shows an infiltration profile at
$t=5000$, in which particles enter the gaps between the third level lacunas.
Correspondingly, Fig. \ref{roughness}b shows a rapid increase of $W$ at $t\sim 5000$.
The main contribution to the roughness is that from the long wavelength fluctuations,
which are the height differences between the evolving regions (in the gaps) and the
blocked regions (below the large lacunas).
This is not a kinetic roughening feature, but an effect of the channeled geometry of the medium.
For this reason, it is not meaningful to estimate a growth exponent $\beta$ in these cases.
\begin{figure}[!h]
\includegraphics[width=0.5\textwidth]{infiltrationSC53.eps}
\caption{(Color online)
Configurations of diffusive infiltration at (a) $t=1000$ and (b) $t=5000$ in SC2, with the
corresponding averaged fronts.
Colors are the same as in Fig. \ref{infiltrationSC}.
}
\label{infiltrationSC53}
\end{figure}
The roughness oscillations are also log-periodic, which is related to the discrete scale
invariance of the medium.
An important feature is that the periods have approximately the same value for the fractals
with the same scaling factors in the generators, even though they have very different $D_F$ and different
embedding dimensions: $b=3$ for SC1 and MS1 (Fig. \ref{roughness}a)
and $b=5$ for SC2 and MS2 (Fig. \ref{roughness}b).
The relevance of the scaling factor of the generator is consistent with the approach
of Ref. \protect\cite{babEPL} to explain these oscillations in kinetic models.
Other growth models have also been studied in substrates with the geometry of SCs
\cite{horowitz,lee,tang2010,xun2014}.
The roughness oscillations were also observed in the simulations of ballistic deposition
in SCs \cite{horowitz}.
However, a comparison with our results is not possible because the front kinetics and the
substrate effects are very different.
For instance, here the front grows in the plane in which the SC is embedded, thus it finds
different disordered environments in the lateral directions during the growth,
while those works consider growth parallel to the SC plane, so that the lateral disorder
is the same for the growing columns at all heights.
\section{Conclusion}
\label{conclusion}
Although diffusion in deterministic fractals has been intensively studied for a long time,
as reviewed in Refs. \cite{havlin,metzler2014}, novel interesting features and applications
frequently appear, as shown in recent works, e.g., Refs.
\protect\cite{balankin2015,haber2013,darazs,akkermans,forte,miyaguchi,sokolov}.
The simulation of infiltration of a diffusing fluid in a Sierpinski carpet and subsequent
experimental realization of this process in a Hele-Shaw cell provide a very interesting
macroscopic illustration of this phenomenon \cite{voller,filipovitch}.
However, the significant discrepancy between the anomalous exponent of filled area
and the exponent of single particle diffusion in the same fractals was not explained.
The main aim of this work was to fill this gap.
We performed numerical simulations of the infiltration of randomly moving particles from a
permanently filled border in deterministic fractals embedded in dimensions $2$ and $3$,
viz. Sierpinski carpets (SCs) and Menger sponges (MSs).
The exponent $n$ of the time scaling of the infiltrated area/volume was measured and
confirms the accuracy of the previous infiltration simulations \cite{voller}
and experiments \cite{filipovitch}, whose results were obtained in smaller stages of construction of SCs.
Single particle diffusion starting from the same border was also studied numerically and the
exponent $\nu$ of the mean square displacement scaling was measured.
In SCs, the values of $\nu$ agree with previous estimates in the bulk of those fractals;
in MSs, they improve upon previous estimates, which were based only on
lower and upper bounds and had large uncertainties.
A scaling approach is proposed to relate exponents $n$ and $\nu$, considering the fractal
dimensions of the infiltrated region and of the region from which the diffusing particles come.
The numerical results are in excellent agreement with this approach.
Thus, if the dimensions characterizing the porous medium and its boundaries are known,
then the single particle diffusion exponent $\nu$ (which was calculated for a
variety of fractals over more than three decades) is sufficient to determine
the scaling properties in the diffusive infiltration problem.
We also showed that the roughness of the diffusion fronts has log-periodic oscillations
in time, which is characteristic of random walks and other kinetic models in fractals
\cite{babJCP,babEPL,akkermans}.
The same oscillations are observed in the mean-square displacement of single particle RWs.
In SCs and MSs whose generators have the same scaling factor, the periods are approximately the
same, despite the very different dimensions of those fractals, which shows the relevance of
the discrete scale invariance.
\begin{acknowledgments}
The author thanks Vaughan Voller for helpful discussion.
This work was supported by CNPq and FAPERJ (Brazilian agencies).
\end{acknowledgments}
\section{Introduction}\label{sec:intro}
\glsresetall
The high data rate achievable with modern wireless communications and the increasing computational power of embedded systems, along with the sharp price reduction of commercial \glspl{uav}, have enabled the use of swarms of drones for Smart City services~\cite{HosseinMotlagh2016}.
Thanks to their size, flexibility and flight ability, these swarms represent a new solution for a plethora of different applications, such as remote surveillance, distributed sensing, wireless coverage extension and object tracking~\cite{Shakhatreh2019}.
Over the past few years, researchers have proposed several \gls{uav}-based systems~\cite{Shakeri2019}, but achieving an efficient distributed control is a complex problem, whose solution is often task-dependent.
In this context, it is important to properly define the different sub-tasks of surveillance, monitoring, mapping and tracking \cite{Chung2018}.
In this work, we assume that targets are static, but occupy random positions in the monitored area.
Moving \glspl{uav} are equipped with sensors that can detect targets within a limited sensing range, \edit{and a radio interface that makes it possible to share position information and sensing data. The UAVs need to coordinate} to explore the area and find the targets \edit{without colliding with each other or with obstacles}.
The problem of identifying fixed targets arises in several practical situations, ranging from the generation of real-time flood maps~\cite{Baldazo2019} to the detailed tracking of weeds in agriculture~\cite{Albani2017}, but an efficient initial exploration is of interest even \edit{for} larger classes of problems, e.g., considering moving targets.
\edit{One such example is wildfire monitoring in dry regions~\cite{Julian2019}, which can be effective as long as the \glspl{uav} move faster than the spread of the fires.}
The dynamic nature of these problems, in which actions can have long-term consequences and affect the future evolution of the environment in complex ways, makes them a natural application area for \gls{rl} techniques~\cite{sutton1998introduction}. However, due to the curse of dimensionality, \edit{a centralized approach to the problem} (i.e., using a single controller) is feasible only for very small swarms.
In order to design a scalable system, \gls{marl} techniques need to be used, but the non-stationarity of the environment~\cite{Hernandez-Leal2017} complicates the system design and the agent training.
This additional complexity makes \gls{marl} an open research field, and the different degrees of centralization and communication between agents make the configuration of the learning system \edit{an interesting problem to investigate}.
In this work, we consider a \gls{marl} framework for exploration and surveillance.
Our aim is to find a flexible \gls{ml} strategy to explore and monitor a certain area with a swarm of \glspl{uav} \edit{that can exchange information within a certain coverage range.}
Performance is determined by the ability of the drones to find and reach the targets, which are located in unknown positions.
\edit{In our framework, the} observations of other agents are \edit{shared through a radio channel and} used to make decisions and to avoid collisions, thus encouraging cooperation. We define a \gls{dqn} algorithm and demonstrate its efficiency with limited training, comparing it to a benchmark look-ahead heuristic and showing that our approach can better explore the environment and reach the targets faster.
We also perform a transfer learning experiment, showing that agents trained on a \edit{certain} map can learn to adapt to a completely new scenario much faster than restarting the training from scratch.
\edit{We adopted a general model, using a grid-world representation and making a limited number of assumptions on the nature of the task. Nonetheless, we show that our system can be implemented in several different scenarios.
In particular, the map is not entirely visible to the \glspl{uav}, there are obstacles, and targets are in unknown positions (often clustered together, making clusters rarer and thus harder to find).
These features make \gls{marl} highly complex, especially when considering limited communication capabilities: to the best of our knowledge, our work is the first to apply it in such challenging conditions.}
\edit{Our approach to solve the problem is to model the state as a series of correlated maps, which contain different information on the environment, making the learning framework extendable to even more complicated scenarios.}
\edit{
The contributions of this paper can be summarized as follows:
\begin{itemize}
\item We formulate a \gls{ndpomdp} framework for swarm management in a complex environment and propose a \gls{marl} architecture to address such a problem;
\item We show that the proposed system can outperform computationally heavy heuristics and transfer its knowledge to different scenarios with limited retraining;
\item We analyze the effect of bigger changes in the environment, such as changing the size of the map or the number of drones, and show that transfer learning is still effective;
\item We show that the system is robust to channel impairments, and can perform very well even in realistic scenarios that differ from the more abstract models used in the training phase.
\end{itemize}
}
A preliminary version of this paper was presented at ACM DroNet 2020~\cite{venturini2020distributed}; this version has a significantly updated system model, considering different map sizes and the presence of obstacles as well as a different \gls{marl} solution, and more extensive results on the performance of our approach. \edit{Moreover, we have added the analysis of the impact of the communication channel on the system's performance, \edit{and tested} the proposed solution in a map obtained from real data.}
The rest of the paper is divided as follows: first, Sec.~\ref{sec:related} analyzes the related work in the field. The system model and \gls{marl} algorithm are presented in Sec.~\ref{sec:model}. The experimental setup is reported in Sec.~\ref{sec:setup}, while the experimental results, including transfer learning experiments, are reported in Sec.~\ref{sec:results}. Finally, Sec.~\ref{sec:conclusions} concludes the paper and presents some possible avenues of future work.
\section{Related Work}\label{sec:related}
An extensive taxonomy of multi-agent solutions was presented in \cite{Busoniu2010}. The general approaches adopted to solve the \gls{marl} problem can be cast into one of these four frameworks: $(1)$ a single agent architecture that interacts with multiple copies of itself, generating emergent behaviors; $(2)$ communication between agents of the same type with improved coordination; $(3)$ cooperation between agents with different specialized goals achieving coordinated behavior; and $(4)$ modeling other agents' behaviors and planning a response \cite{Hernandez-Leal2019}.
The authors in \cite{Zanol2019} study the first of these four approaches and use the tabular Q-learning algorithm to guide drones to survey an unknown area, showing that even the simplest \gls{marl} algorithm can improve the overall system rewards. Similarly, in \cite{Cui2019} and \cite{Cui2020} the \gls{marl} framework is applied to a more complex problem in which a \gls{uav} network is adopted to provide flexible wireless communication. However, in these works the \gls{marl} algorithm is used to optimize resource allocation instead of guiding drones, so that a coordinated exploration strategy is missing.
An interesting research direction \edit{for} \gls{marl} is pioneered in \cite{Foerster2016}, which uses \glspl{dnn} to represent and learn more complex Q-functions \cite{Mnih2015}. At first, the authors study the performance of one network trained for all agents, which then share the same parameters during the execution phase (this is also our approach). A second proposed system uses the \gls{dial} framework, in which agents learn meaningful real-valued messages to be exchanged in order to improve cooperation: this allows for faster training, but the model is limited to a very small number of agents.
Other works use \gls{rl} in the practical scenarios discussed above:
in~\cite{Baldazo2019}, the authors adopt a \gls{marl} approach to control a flood-finding swarm of \glspl{uav}. However, the model only considers a swarm with a fixed number of drones, and the experimental results are not compared to state-of-the-art heuristics.
In \cite{Albani2017}, a reinforced random walk model is exploited to map weeds in an agricultural setting, taking noisy acquisition into account and solving the issue with collective observations. Random walks are then biased based on the positions of the already discovered targets, which have to be properly mapped, along with the distances from other drones in the network. In this case, the authors considered swarms of variable sizes, but the random walk needs to be manually tuned for each setting.
Another recent study~\cite{Julian2019} considers wildfire spread monitoring, checking how the fire evolves and spreads in the map \edit{from a known starting point}. The authors define the problem as a \gls{decpomdp}~\cite{oliehoek2016concise} and carry out several experiments, as well as comparisons against a greedy heuristic (similar to the look-head method we studied in this work).
A target-tracking application for disaster scenarios, with a model similar to our own but applied to a single drone, is described in~\cite{wu2019uav}.
\edit{
Finally,~\cite{Albani2017} considers a \gls{marl} system with realistic communication, where a swarm of drones needs to explore a map, receiving a reward for each new explored location.
This is a simpler reinforcement \edit{learning} problem, as identifying and finding rare targets is much more challenging to learn due to the delayed reward.}
The \gls{marl} approaches can also fit models in which \gls{uav} connectivity is important: in \cite{challita2018deep}, a framework including \gls{rl} and game theory is used to plan the path of two drones that need to save energy and minimize the interference to the ground network while maintaining a cellular connection.
Furthermore, in~\cite{liu2018energy} the authors design a centralized \gls{rl} system to maximize coverage for a swarm of aerial base stations serving mobile users on the ground.
A similar approach is taken in~\cite{liu2019reinforcement}, which redefines the problem in terms of \gls{qoe} maximization for the users.
For a fuller communication-oriented perspective on the use of \gls{rl} for \gls{uav} networks, we refer the reader to~\cite{hu2020reinforcement}.
\edit{These works have similar objectives to our own, but either go back to the single-agent setting or have restrictive assumptions: as an example,~\cite{Julian2019} considers well-known fire patterns, which can be extensively learned, with a known starting point.
In our case, the initial positions of the targets and \edit{ of the \glspl{uav} are} not the same across different episodes, making the model more general and complicating the learning task.
Furthermore, unlike previous efforts in the literature, we exploit the transfer learning paradigm, showing how our model can easily adapt to scenarios with obstacles, realistic maps, and different swarm sizes. To the best of our knowledge, our work presents the most complex environment to date, in which a single architecture can deal with different map and swarm sizes, different numbers of targets to track, and the presence of obstacles.}
\section{System Model} \label{sec:model}
In the following, we first present the environment \edit{where the \glspl{uav} operate}. We give a full list of the notation used in Table~\ref{tab:sim_notation} as a reference to the reader.
\begin{table}[t]\centering
\footnotesize
\begin{tabular}[c]{cl|cl}
\toprule
Symbol & Description & Symbol & Description \\
\midrule
$\mathcal{M}$ & Coordinate set & $\mathcal{O}$& Observation space of the system \gls{ndpomdp}\\
$M$ & Map grid size & $\bm{\Phi}$ & Matrix of cell values\\
$K$ & Number of targets & $\mathbf{X}$ & Matrix of \gls{uav} positions\\
$\mathbf{z}_k$ & Coordinates of the $k$-th target & $\bm{\Omega}$ & Matrix of obstacle positions \\
$\sigma$ & Standard dev. of the target Gaussian functions & $\hat{\bm{\Phi}}$& Observed cell value matrix\\
$\phi(\cdot)$ & Cell value function &$\hat{\bm{\Omega}}$& Observed obstacle position matrix\\
$\mathcal{U}$ & Set of \glspl{uav} & $\mathbf{X}_u$ & Observed \gls{uav} position matrix for $u$\\
$U$ & Number of \glspl{uav} \edit{(cardinality of $\mathcal{U}$)} & $F$& Observation window size \edit{(in number of cells)} \\
$d_{\text{sparse}}$ & Minimum target distance in the sparse scenario & $\psi$ & Penalty for collisions\\
$\omega(\cdot)$ & Obstacle location function & $\theta$ & Penalty for moving to forbidden areas\\
$\eta$ & Fraction of the map occupied by obstacles & $\rho$ & Obstacle value \\
$\zeta$ & \acrlong{fov} of each \gls{uav} & $\nu_u(\mathbf{x}_u,\mathbf{a}_u)$ & Invalid move indicator function for \gls{uav} $u$\\
$h_{\min}$ & Minimum obstacle size \edit{(in number of cells)} & $\chi_u(\mathbf{X},\mathbf{A})$ & Collision indicator function for \gls{uav} $u$\\
$h_{\max}$ & Maximum obstacle size \edit{(in number of cells)} & $r_u(s,\mathbf{a})$ & Reward for \gls{uav} $u$ \\
$\bm{\ell}_i$ & Lower left corner coordinates of the $i$-th obstacle & $\pi$ & Observation-action policy\\
$\mathcal{H}_i$ & Set of cells occupied by the $i$-th obstacle &$R_{u,t}(\pi)$ & Long-term reward for $u$ using policy $\pi$ \\
$\mathbf{h}_i$ & \edit{Size} of the $i$-th obstacle & $\gamma$ & Exponential discount factor\\
$N$ & Episode duration \edit{(steps)} & $e_t$ & Experience sample\\
$\mathcal{S}$ & State space of the system \gls{ndpomdp} &$\alpha$ & Learning rate \\
$\mathcal{V}(\mathbf{s})$ & Valid move space for state $s$ & $B_{\text{size}}$ & Size of a learning batch\\
$\mathcal{A}$ & Action set &$Q(o_u,a_u)$ & Q-value estimate of $R$\\
$\mathbf{a}_u$ & Action for \gls{uav} $u$ & $n_q$ & Model update period \edit{(steps)} \\
\bottomrule
\end{tabular}
\vspace{0.1cm}
\caption{Notation definitions.}
\label{tab:sim_notation}\vspace{-0.8cm}
\end{table}
\subsection{Environment}
The system environment consists of a square grid of size $M \times M$.
Each cell of the grid \edit{(we will refer to a cell or a location interchangeably in the following)} is identified by its coordinates $\mathbf{m} \in \mathcal{M}$, where \edit{$\mathcal{M} = \mathcal{X} \times \mathcal{Y}$, and $\mathcal{X}=\mathcal{Y}=\{0,...,M-1\}$}. We place a set of $K$ targets on the map, which represent the objectives of the \gls{uav} surveillance application. The position of the $k$-th target is denoted as $\mathbf{z}_k=(x_k,y_k)$.
We then generate a set of $K$ bivariate Gaussian functions over the grid, which represent the visibility of a target to the \glspl{uav}, with the same covariance matrix $\Sigma = \bigl( \begin{smallmatrix}\sigma^2 & 0\\ 0 & \sigma^2\end{smallmatrix}\bigr)$.
The mean $\mathbf{z}_k=(z_{k,1},z_{k,2})$ corresponds to the coordinates of the target.
Note that the Gaussian functions do not represent actual distributions, but rather the full view of the \glspl{uav}, which can see a target from afar. The value of $\sigma$ can be interpreted as the distance at which a target can be identified, as larger values of $\sigma$ mean that the target is visible from further away.
Each cell can then be associated with a weight $\phi(\mathbf{m})$, which represents the \textit{value} of the location, which increases with the proximity to a target, and is given by the maximum of the Gaussian functions in that point, normalized in such a way that the target locations have values equal to $1$:
\begin{equation}
\label{eq:map_value}
\phi(\mathbf{m}) = \max_{k\in\{0,\ldots,K-1\}} e^{- \frac{1}{2} \left((\mathbf{m}-\mathbf{z}_k)^\mathbf{T}\Sigma^{-1}(\mathbf{m}-\mathbf{z}_k)\right)}.
\end{equation}
If $\phi(\mathbf{m})$ is smaller than 0.01, it is set to 0, as the \glspl{uav} cannot see \edit{any} target from that location.
Under these conditions, the most valuable cells coincide with the center of each Gaussian function, which represents one of the targets in the considered scenario.
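As an illustrative sketch (function and variable names are ours, not part of the system implementation), the value map above can be computed with NumPy, using the isotropic Gaussian $e^{-\|\mathbf{m}-\mathbf{z}_k\|_2^2/(2\sigma^2)}$ and the $0.01$ clipping threshold described in the text:

```python
import numpy as np

def value_map(M, targets, sigma):
    """Cell values phi(m): maximum over isotropic Gaussians centred on the
    targets, normalised so that each target cell has value 1.
    Values below 0.01 are clipped to 0 (no target visible from there)."""
    xs, ys = np.meshgrid(np.arange(M), np.arange(M), indexing="ij")
    phi = np.zeros((M, M))
    for (zx, zy) in targets:
        d2 = (xs - zx) ** 2 + (ys - zy) ** 2       # squared distance to target
        phi = np.maximum(phi, np.exp(-d2 / (2.0 * sigma ** 2)))
    phi[phi < 0.01] = 0.0
    return phi
```

With this convention, a larger $\sigma$ widens the Gaussians and makes each target visible from further away, as stated above.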
While the environment is static, the \glspl{uav} move within the map with the aim of positioning themselves over the targets as fast as possible. We denote the set of \glspl{uav} by $\mathcal{U}$, and by $U$ its cardinality.
\begin{figure}[t!]
\centering
\includegraphics[width=.7\textwidth]{Scenarios0.pdf}
\caption{Two examples of the sparse (left) and cluster (right) target distributions.\vspace{-0.3cm}}
\label{fig:distributions}
\end{figure}
In this work, we consider two different distributions for the targets, named \emph{sparse} and \emph{cluster}, which are characterized by different correlations among the target positions. In both cases, the first target is randomly placed on the grid following a 2D uniform distribution: $\mathbf{z}_0$ can take any value in $\mathcal{M}$ with equal probability. The other targets are then placed sequentially, according to the following rules.
In the sparse scenario, the position $\mathbf{z}_i$ of the $i$-th target is randomly chosen in the set $\mathcal{M}^{\text{sparse}}_i = \{\mathbf{m} \in \mathcal{M}: ||\mathbf{m}-\mathbf{z}_j||_2 > d_{\text{sparse}} ,\: \forall j < i \}$, with probability mass distribution
\begin{equation}
\label{eq:target_sparse}
P_{\text{sparse}}(\mathbf{z}_i = \mathbf{m}) =
\frac{||\mathbf{m}-\mathbf{z}_0||_2}{\kappa_i^\text{sparse}},
\end{equation}
where \edit{$\kappa_i^\text{sparse}=\sum_{\mathbf{m} \in \mathcal{M}^{\text{sparse}}_i} ||\mathbf{m}-\mathbf{z}_0||_2$} is a normalization factor. Hence, the other targets tend to be distributed far from the first one, while keeping a minimum distance $d_{\text{sparse}}$ from one another.
In the cluster scenario, instead, the $i$-th target can take any position in the set $\mathcal{M}^{\text{cluster}}_i = \{\mathbf{m} \in \mathcal{M}: ||\mathbf{m}-\mathbf{z}_j||_2 > 1, \: \forall j < i \}$ with probability mass distribution
\begin{equation}
\label{eq:target_cluster}
P_{\text{cluster}}(\mathbf{z}_i = \mathbf{m}) =
\frac{1}{(1+||\mathbf{m}-\mathbf{z}_0||_2)\kappa_i^\text{cluster}},
\end{equation}
where \edit{$\kappa_i^\text{cluster}=\sum_{\mathbf{m} \in \mathcal{M}^{\text{cluster}}_i} \frac{1}{(1+||\mathbf{m}-\mathbf{z}_0||_2)}$} is a normalization factor. In this case, the targets tend to cluster around the first one, but cannot occupy adjacent cells, since the minimum distance must be greater than 1.
An example of the two target placements is shown in Fig.~\ref{fig:distributions}.
These two distributions represent two plausible configurations of targets in tracking applications: in wildlife monitoring, some species of animals might tend to herd together, while more territorial ones will have a sparser distribution on the map.
The same goes for a battlefield scenario, in which groups of soldiers might act together as a tight formation, while guerrilla-style fighting will involve a much sparser distribution of forces.
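The sequential placement rules above can be sketched as follows (a minimal Python illustration with our own names; the weighting follows the sparse and cluster probability mass functions, with the stated minimum-distance constraints):

```python
import numpy as np

rng = np.random.default_rng(0)

def place_targets(M, K, mode, d_sparse=5):
    """Sequential target placement for the sparse / cluster scenarios.
    The first target is uniform on the grid; later ones are drawn with
    weights growing (sparse) or decaying (cluster) with the distance
    from the first target, restricted to the admissible cell sets."""
    cells = np.array([(x, y) for x in range(M) for y in range(M)], dtype=float)
    z = [cells[rng.integers(len(cells))]]          # first target: uniform
    while len(z) < K:
        d0 = np.linalg.norm(cells - z[0], axis=1)  # distance from first target
        if mode == "sparse":
            ok = np.all([np.linalg.norm(cells - zj, axis=1) > d_sparse
                         for zj in z], axis=0)     # min distance d_sparse
            w = d0 * ok
        else:  # cluster
            ok = np.all([np.linalg.norm(cells - zj, axis=1) > 1
                         for zj in z], axis=0)     # min distance 1
            w = ok / (1.0 + d0)
        z.append(cells[rng.choice(len(cells), p=w / w.sum())])
    return np.array(z)
```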
In a more complex version of the scenario, the map does not just have targets that the \glspl{uav} need to find and reach, but \emph{obstacles} as well: in an urban scenario, these might be tall buildings or designated no-fly zones, while in a natural scenario they might correspond to natural obstacles such as \edit{boulders} or tall trees. We define a function $\omega(\mathbf{m})$, which is equal to 1 if the cell corresponds to an obstacle and 0 otherwise. Then, we denote by $\eta$ the portion of the map occupied by obstacles:
\begin{equation}
\eta=\sum_{\mathbf{m}\in\mathcal{M}}\frac{\omega(\mathbf{m})}{M^2}.
\end{equation}
\edit{Cells inside an obstacle are considered impassable, like the map borders, and the \glspl{uav} that try to move on an obstacle will remain in the same cell.}
\edit{For the training of our algorithm, we assumed that obstacles are rectangular and randomly scattered in the area.} The $i$-th obstacle is determined by its dimensions $\mathbf{h}_i$ and by the position of its lower left corner $\bm{\ell}_i$. We formally define the obstacle as the set $\mathcal{H}_i$:
\begin{equation}
\mathcal{H}_i=\left\{\mathbf{m}=(m_1,m_2)\in\mathcal{M}:m_1\in\{\ell_{i,1},\ldots,\ell_{i,1}+h_{i,1} \edit{-1} \},m_2\in\{\ell_{i,2},\ldots,\ell_{i,2}+h_{i,2} \edit{-1}\}\right\}.
\end{equation}
Obstacles are generated sequentially, like the targets, and for each obstacle $i$ the dimensions $\mathbf{h}_i$ are drawn uniformly from the set \edit{$\{h_{\min},\ldots,h_{\max}\} \times \{h_{\min},\ldots,h_{\max}\}$}. The lower left corner position $\bm{\ell}_i$ is then drawn from a uniform distribution in the set $\mathcal{M}^{\text{obs}}_i$, the subset of the map defined as:
\begin{equation}
\mathcal{M}^{\text{obs}}_i\!=\!\{\bm{\ell}\in\mathcal{M}: \mathcal{H}_i\subset\mathcal{M},||\mathbf{n}-\mathbf{z}_k||_2>1,\forall \mathbf{n}\in\mathcal{H}_i,k\in\{0,\ldots,K-1\}, d(\mathcal{H}_i,\mathcal{H}_j)\geq2,\forall j<i\},
\end{equation}
where $d(\mathcal{H}_i,\mathcal{H}_j)=\min_{\mathbf{m}_i\in\mathcal{H}_i,\mathbf{m}_j\in\mathcal{H}_j}||\mathbf{m}_i-\mathbf{m}_j||_2$ is the distance between the obstacles $i$ and $j$.
The three constraints force the obstacle to be entirely inside the map, not to be directly adjacent to any of the targets, and not to touch other obstacles. The choice of these constraints was motivated by the necessity to \edit{guarantee the existence of a clear} path to the targets from any point in the map.
We consider multiple \emph{episodes} of $N$ steps: in each episode, the targets, \glspl{uav}, and obstacles are redistributed in the map, and the swarm must locate the targets in as few steps as possible. We consider discrete time slots, so that each drone can move by a single cell at each time step. Furthermore, we assume that a \gls{uav} has a limited \gls{fov}, i.e., it can only know the value of the cells within a radius $\zeta$.
This framework allows us to represent many different applications and scenarios by changing the size of the grid, the number of drones, targets and obstacles, the \gls{fov} range $\zeta$ and the target visibility parameter $\sigma$. It can also be easily extended to dynamic targets.
At the beginning of each episode, each \gls{uav} only knows the values of the cells within the swarm's \gls{fov}. The drones assume that all unexplored points of the map are associated with the maximum $\phi(\mathbf{m})$. Then, each \gls{uav} moves independently at each time step $n$: as the swarm explores the environment, each drone discovers the values of the map locations that it has covered, and updates its information according to $\phi(\mathbf{m})$. We highlight that the knowledge about the map is instantly shared, which means that each drone receives the observations that all the other drones have acquired. \edit{This is always true during training, whereas in some testing episodes we also experiment the scenarios in which unreliable communications affect the shared messages.} The objective of the swarm for each episode is to position each of its \glspl{uav} above a target as quickly as possible.
\edit{\subsection{Communication model}}
\label{sec:comm_model}
We consider the swarm to only have partial observations: as the size of the map might be too large for the swarm to effectively coordinate over it, we consider each \gls{uav} to have up-to-date knowledge only inside the $F\times F$ square with it at the center, with $F\leq M$. If the distance between the \gls{uav} and the edge of the map is lower than $F$, \edit{the square will consider the edge of the map as the edge of the visible region, and the \gls{uav} will no longer be at its center, in order to avoid modeling the area outside the map}. This assumption allows us to model communication constraints in the problem, as \glspl{uav} need to share the observed parts of the map with the other components of the swarm; however, $F$ should not be confused with the \gls{fov} $\zeta$, as the former represents the size of the portion of the map that each \gls{uav} considers when deciding its next action, while the latter represents the size of the portion of the map that the \gls{uav} can sense directly at each moment. \edit{In our case, we always have $F > \zeta$.}
\subsection{ND-POMDP formulation}
\label{sec:pomdp}
The described scenario is modeled as an \gls{ndpomdp}~\cite{nair2005networked}, i.e., a \gls{mdp} where the system state is not directly observable and is influenced by the actions of multiple agents, whose behavior is not centrally coordinated. Indeed, the swarm only has limited knowledge of the map, and the \glspl{uav} can take actions independently and have independent rewards. \edit{We observe that \gls{ndpomdp} is a particular class of Decentralized POMDP (\gls{decpomdp}) for which not all agents interact with each other~\cite{kumar2011scalable}. Convergence to the optimal solution for this kind of problem has been proven for classical reinforcement methods~\cite{zhang2011coordinated}, although not for deep models: like most works in the literature, we will use a benchmark to evaluate the performance of our scheme.} Formally, an \gls{ndpomdp} is identified by a 5-tuple, composed of a state space $\mathcal{S}$, an agent space $\mathcal{U}$, a joint action space $\mathcal{A}$, an observation space $\mathcal{O}$, and a reward map $r:\mathcal{S}\times \mathcal{A}\to\mathbb{R}^U$, where $U=|\mathcal{U}|$.
The complete system state $s$ is given by five matrices\edit{: a matrix for the current position of the \glspl{uav}, one matrix each for the map of the already discovered targets and obstacles, and one matrix each for the full map of targets and obstacles}. The positions of the \glspl{uav} are contained in the $2\times U$ matrix $\mathbf{X}$, while the features of the map are represented by the two $M \times M$ matrices $\bm{\Phi}$ and $\bm{\Omega}$, which contain the value $\phi(\mathbf{m})$ of each cell and the function $\omega(\mathbf{m})$ representing the location of the obstacles. \edit{Clearly, the maps with the full view of targets and obstacles are not initially known by the \glspl{uav}, which will then need to explore the area.}
Furthermore, the \glspl{uav} do not know the features of cells that have not been explored: the observed features of the map are contained in the $F \times F$ observed value matrix $\hat{\mathbf{\Phi}}$, whose elements are equal to $\phi(\mathbf{m})$ if the cell has been explored and 1 otherwise, and the $F \times F$ observed obstacle matrix $\hat{\mathbf{\Omega}}$, whose elements are equal to $\omega(\mathbf{m})$ if the cell has been explored and 0 otherwise. The observation $o_u\in\mathcal{O}$ that is available to drone $u$ is then given by $\mathbf{X}_u$, $\hat{\bm{\Phi}}_u$, and $\hat{\bm{\Omega}}_u$, defined as the $F\times F$ subsets of $\mathbf{X}$, $\hat{\bm{\Phi}}$ and $\hat{\bm{\Omega}}$ centered in $\mathbf{x}_u$.
In our case, each \gls{uav} can either stay over the same cell or move to one of the four adjacent cells. However, obstacles are impassable in our environment definition, and the \glspl{uav} cannot move outside the map, so \glspl{uav} will simply stand in place if they attempt an action that violates the constraints. We define the action space $\mathcal{A} = \{(0,0)$, $(0,1)$, $(1,0)$, $(0,-1)$, $(-1,0)\}^U$. An action for the swarm is then a vector $\mathbf{a}\in\mathcal{A}$, which contains the individual \glspl{uav}' actions, denoted as $\mathbf{a}_u$ for drone $u$. We first define function $\edit{\nu}(\mathbf{x}_u,\mathbf{a}_u)$, which is 1 if the action is valid, i.e., it does not lead the \gls{uav} to fly outside the map or into an obstacle, \edit{and zero otherwise}:
\begin{equation}
\edit{\nu}(\mathbf{x}_u,\mathbf{a}_u)=\begin{cases}
1,& \text{if }\mathbf{x}_u+\mathbf{a}_u\in\mathcal{M}\wedge\omega(\mathbf{x}_u+\mathbf{a}_u)=0;\\
0, & \text{otherwise}.
\end{cases}
\end{equation}
The position of each drone is then updated in the following way:
\begin{equation}
\mathbf{x}_u(t+1)=\mathbf{x}_u(t)+\mathbf{a}_u(t)\edit{\nu}(\mathbf{x}_u(t),\mathbf{a}_u(t)).
\end{equation}
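The validity indicator and the position update can be sketched in a few lines of Python (names are ours; `omega` is the $M\times M$ obstacle indicator matrix):

```python
import numpy as np

# The five actions: stay, and the four axis-aligned unit moves.
ACTIONS = [(0, 0), (0, 1), (1, 0), (0, -1), (-1, 0)]

def step_position(pos, action, M, omega):
    """Apply x(t+1) = x(t) + a * nu(x, a): the move is executed only if it
    stays inside the map and does not enter an obstacle; otherwise the UAV
    remains in its current cell."""
    nxt = (pos[0] + action[0], pos[1] + action[1])
    valid = bool(0 <= nxt[0] < M and 0 <= nxt[1] < M and omega[nxt] == 0)
    return (nxt if valid else pos), valid
```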
Fig.~\ref{fig:episode_state} shows an example of the system state at the beginning and in an advanced stage of an episode, with two drones and four targets located in a $20 \times 20$ map with no obstacles (in this case, we set $F=M=20$). In particular, the drones' positions are shown on the left (in yellow), the observed value map is in the center, and the real value map is on the right. In the figure, darker cells are associated with lower values and brighter cells are associated with higher values. In the figure, if \edit{the communication range equals or exceeds the map side, i.e., } $F \geq M$, the observed state $o$ for all \glspl{uav} would correspond to the maps on the left and in the center. \edit{On the contrary,} if $F<M$, the observation for each \gls{uav} would \edit{include} a different portion of the map.
It is easy to see how the swarm gains knowledge during the episode, as the drones explore the map and look for targets.
In this case, the \glspl{uav} found two targets relatively quickly, and a significant portion of the grid remained unexplored.
\begin{figure}[t]
\centering
\begin{subfigure}{0.7\textwidth}
\centering
\includegraphics[width=\textwidth, trim=0cm 0cm 0cm 0.5cm,clip]{Initial_situation.pdf}
\label{fig:episode_start}
\vspace{-2cm}
\end{subfigure}
\begin{subfigure}{0.7\textwidth}
\includegraphics[width=\textwidth, trim=0cm 0cm 0cm 0.5cm,clip]{Final_situation.pdf}
\label{fig:episode_end}
\end{subfigure}
\vspace{-1cm}
\caption{Drone positions (left), known map (center), real map (right). Beginning (above) and end (below) of an episode.}
\label{fig:episode_state}
\end{figure}
We give reward 1 to a \gls{uav} if it is directly above a target, reward $-\theta$ if it tries to go outside the map or to position itself over an obstacle, reward $-\psi$ if it is in the same cell as another drone, and reward 0 in any other case. The \glspl{uav} \edit{will quickly} learn to avoid actions that would take them outside the map or make them crash into obstacles, so the exact value of $\theta$ does not affect the final performance, but the value of $\psi$ affects the distance that the drones try to keep from each other: if $\psi$ is low, the drones will get close to each other if the targets are very close. Naturally, if there is a collision risk when the drones are in the same cell, the value of $\psi$ should be high. The reward depends on $\mathbf{X}$, as well as on the action vector $\mathbf{a}$.
Indicating with $\mathbf{x}_u$ and $\mathbf{a}_u$ the position and action of drone $u$, we now define the collision variable $\chi_u(\mathbf{X}, \mathbf{A})$ as
\begin{equation}
\chi_u(\mathbf{X}, \mathbf{A})=\max_{v\in(\mathcal{U}\setminus u)} \delta(\mathbf{x}_u+\mathbf{a}_u(t)\edit{\nu}(\mathbf{x}_u,\mathbf{a}_u) - \mathbf{x}_v \edit{- \mathbf{a}_v(t)}\edit{\nu}(\mathbf{x}_v,\mathbf{a}_v)).
\end{equation}
\edit{where $\delta(\mathbf{x})$ denotes a function that takes value $1$ if the vector $\mathbf{x}=0$, and zero otherwise.}
In short, $\chi_u(\mathbf{X}, \mathbf{A})$ has value 1 if one or more drones move to the same cell as drone $u$, and 0 otherwise.
The collision variable depends on the moves of other agents, so the problem is distributed.
The reward function for \gls{uav} $u$ in state $s$ if the swarm takes the joint action vector $\mathbf{A}$, denoted as $r_u(s,\mathbf{A})$, is given by:
\begin{equation}
\begin{aligned}
r_u(s,\mathbf{A})= -\theta(1-\edit{\nu}(\mathbf{x}_u,\mathbf{a}_u))-\psi \chi_u(\mathbf{X},\mathbf{A})+(1-\chi_u(\mathbf{X},\mathbf{A}))\sum_{k=0}^{K-1}\delta(\mathbf{x}_u+\mathbf{a}_u\edit{\nu}(\mathbf{x}_u,\mathbf{a}_u)-\mathbf{z}_k).
\end{aligned}
\end{equation}
In our model, the state transitions and the system observations are both deterministic; therefore, neither the state evolution nor the observations are affected by random events, but only by the agents' decisions. We define a policy $\pi(\mathbf{a}_u|o_u)$ as the conditional probability for user $u$ to take action $\mathbf{a}_u$ \edit{given} an observation $o_u\in\mathcal{O}$.
Under these assumptions, the goal of each drone $u$ is to find the policy $\pi^*$ that maximizes the cumulative expected future discounted reward $ R_{u}(\pi) = \mathbb{E}\left[\sum_{\tau=0}^{+\infty} \gamma^{\tau} r_{u,\tau}|o_u,\pi\right]$, where $\gamma\in[0,1)$ is a discount factor.
\subsection{Distributed Deep Q-Learning}
In this subsection, we will describe our \gls{ddql} approach to solve the problem defined above. For the sake of readability, \edit{in the following} we omit the $u$ subscript to indicate the agent whenever possible.
Each agent leverages a \gls{dqn}, i.e., a \gls{nn} that takes as input the last observation $o_t$ and returns the Q-values of the possible actions that can be taken, i.e., $Q(o_t, \mathbf{a}),\,\forall\mathbf{a} \in \mathcal{A}$.
In Q-learning, the function $Q(o, \mathbf{a})$ is an estimate of the expected long-term reward $R$ that will be achieved by choosing $\mathbf{a}$ as the next action and \edit{then} following the learned policy.
In our case, we maintain a single \gls{dqn} during the training phase, whose values are shared by all the agents. In this work, we follow the approach from~\cite{Mnih2015} and leverage a \textit{replay memory} to store the agent experience $e_t = $ $\left( o_t, a_t, r_t, o_{t+1}\right)$. Whenever the agent carries out a training step, a batch of $B_{\text{size}}$ elements is picked from the replay memory, which separates the algorithm training from the experience acquisition. The replay memory is shared between the agents during a training phase, and a new batch is used to train the agent at every step. \edit{We highlight that, in our system, all agents are the same (single \gls{dqn}), and they need to generalize the problem from a limited number of states.
As it would be impossible for a single \gls{uav} to experience even just a non-negligible fraction of possible states in the training, shared replay is a critical factor in the network's generalization ability.
In particular, the experience replay is extremely valuable since it allows the system to improve the variety of the training samples by getting experience from the states seen by different agents.
In other scenarios, it may \edit{not be} convenient to exploit a shared memory, especially when the agents have to learn different tasks.}
\edit{Following the \gls{dqn} example from~\cite{Mnih2015}, we exploit the \textit{double Q-learning}
technique to remove biases from the Q-value estimation and speed up the algorithm's convergence~\cite{van2016deep}. This means that, during the training, we maintain a \emph{target network}, whose output $Q_t(o,a)$ is used to evaluate actions, and an \emph{update network}, whose output $Q_u(o,a)$ is used to select the policy. In particular, the bootstrap Q-value is computed as
\begin{equation}
Q^{\text{new}}(o_t, a_t) = r_t + \gamma \max_a Q_t(o_{t+1}, a).
\label{eq:double_q}
\end{equation}
The value $Q^{\text{new}}(o_t, a_t)$ is then used to perform backpropagation on the update network with a learning rate set automatically by the \gls{radam} optimizer~\cite{liu2019variance},
and every $n_q$ training steps the update network parameters are copied to the target network.}
\begin{figure}[t!]
\centering
\includegraphics[width=.7\textwidth]{NN.png}
\caption{Architecture of the \gls{dqn}.}
\label{fig:architecture}
\end{figure}
In our model, the observed state of the system for each agent can be represented by four $F \times F$ matrices, representing the agent position, the locations of the other agents, the value of explored cells, and the position of known obstacles. To simplify the state space, we consider matrices $\hat{\bm{\Phi}}$ and $\hat{\bm{\Omega}}$ jointly, by feeding the \gls{nn} with the matrix $\hat{\bm{\Phi}}-\rho\hat{\bm{\Omega}}$, \edit{where $\rho$ is a scalar parameter used to facilitate learning}.
Therefore, our system approximates the function $Q(o, a)$ by a \gls{cnn}, whose architecture is described in Fig.~\ref{fig:architecture}.
In particular, we consider a \gls{cnn} exploiting three convolutional layers followed by two fully-connected layers.
The dimension of the last layer is identical to the number of actions, so that each output element can be associated with a different action $\mathbf{a} \in \mathcal{A}$.
Hence, each agent provides training samples for the shared replay memory, which are then used in~\eqref{eq:double_q}, so that the \gls{cnn} output can converge to the Q-values $Q(o, \mathbf{a}),\,\forall \mathbf{a} \in \mathcal{A}$. We implement the well-known $\varepsilon$-greedy and softmax policies to allow the agents to explore the action space during the training phase, which is carried out by simulating a sequence of episodes.
\edit{\subsection{Computational complexity}}
\edit{We now discuss the computational complexity of performing one inference procedure with the neural network. \edit{We first analyze the complexity of fully-connected layers}. We \edit{denote by} $N_k$ the number of neurons in the generic $k$-th layer. To go from layer $i$ to layer $i+1$, we need to compute the value of $N_{i+1}$ nodes, each of which takes $N_i$ multiplications followed by $N_i$ additions and one non-linear function, thus involving $N_{i+1}(2N_i + 1)$ operations. }
\edit{We can then compute the complexity of one convolutional layer, as done in \cite{complexityCNN}, when neither batch normalization nor pooling layers are present. We denote by $(I_w, I_h, I_d)$ the shape of the input block. At layer $i$, we then have $K_i$ filters with kernel dimensions $(W_i, H_i)$, stride $S_i$ (we use the same value along the two axes), and padding $P_i$. The shape of the resulting output block will be $\left( \frac{I_w + 2P_i - W_i}{S_i} + 1, \frac{I_h + 2P_i - H_i}{S_i} + 1, K_i \right)$. The computation of each block's neuron here involves $W_i \times H_i \times I_d$ multiplications followed by the same number of additions (sum of all elements plus the bias) and one non-linearity. The total number of calculations is then $\left( \frac{I_w + 2P_i - W_i}{S_i} + 1 \right) \times \left( \frac{I_h + 2P_i - H_i}{S_i} + 1 \right) \times K_i \times \left( 2W_i H_i I_d + 1 \right)$.}
\edit{If we consider the specific architecture of our \gls{nn} reported in Fig.~\ref{fig:architecture}, the numbers of basic operations (multiplications, additions, and non-linearities) for the three convolutional layers are $440\,000$, $3\,704\,980$, and $628\,180$, respectively. The following fully-connected layers require $125\,504$ and $645$ operations, so the total number of operations for one decision is $4\,899\,309$}.
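These counts can be reproduced with two short helpers implementing the formulas above. Note that the 980-input and 64-neuron sizes used below are not stated explicitly in the text; they are inferred from the reported fully-connected totals, since $64\cdot(2\cdot980+1)=125\,504$ and $5\cdot(2\cdot64+1)=645$.

```python
def conv_ops(Iw, Ih, Id, K, W, H, S, P):
    """Operations of one convolutional layer:
    Ow * Oh * K * (2*W*H*Id + 1), counting multiplications,
    additions, and non-linearities."""
    Ow = (Iw + 2 * P - W) // S + 1
    Oh = (Ih + 2 * P - H) // S + 1
    return Ow * Oh * K * (2 * W * H * Id + 1)

def fc_ops(n_in, n_out):
    """Operations of one fully-connected layer: n_out * (2*n_in + 1)."""
    return n_out * (2 * n_in + 1)
```

For example, `fc_ops(980, 64)` returns the 125\,504 operations reported for the first fully-connected layer, and `fc_ops(64, 5)` the 645 of the output layer.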
\edit{This computational complexity allows \glspl{uav} to make decisions promptly, as even embedded processors can run much more complex architectures in real time~\cite{bianco2018benchmark}. As the physical speed of the \glspl{uav} and the much more complex vision algorithms required to identify targets are the main limiting factors for the swarm, the \gls{ndpomdp} will be performed at a relatively slow pace, with timesteps in the order of several seconds.}
\section{Simulation settings}\label{sec:setup}
In this section, we describe the simulations by which we evaluated the performance of the designed system.
All the results are derived through a Monte Carlo approach, where multiple independent simulations are carried out to obtain reliable statistical data.
In particular, the algorithms' training is executed by carrying out a total of $N_e$ episodes for each studied scenario (sparse or cluster), where each episode is given by $N_s^t$ steps.
Training episodes are far longer than test episodes, which have length $N_s^p$, since the agents need to explore the map fully.
Before training, we initialize the replay memory by executing $N_e^m = 1000$ episodes of $N_s^t$ steps each, to allow agents to immediately start the learning procedure. If the episodes are too long, many samples in which large portions of the map are already explored are added to the replay memory, and the agents will not learn properly how to move at the beginning of the episode, when the map is still unexplored. On the other hand, short episodes have the opposite problem, as the \glspl{uav} never learn to behave in the final parts of the episodes. A prioritized replay memory can solve this problem, but requires additional parameters. We then opted for adapting the episode length in the training phase. The even training episodes have 50 steps each, while the odd episodes have 150 steps. This alternating size prevents the replay memory \edit{from being} too skewed towards situations in which the map is almost completely explored or unexplored.
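The shared replay buffer and the alternating 50/150-step episode schedule can be sketched as follows; this is a toy illustration, with a placeholder buffer capacity and class name.

```python
import random
from collections import deque

class SharedReplay:
    """One buffer shared by all agents; each pushes (o, a, r, o_next) tuples."""
    def __init__(self, capacity=100_000):
        self.buf = deque(maxlen=capacity)  # old samples are evicted automatically

    def push(self, experience):
        self.buf.append(experience)

    def sample(self, batch_size):
        """Uniform random batch used for one training step."""
        return random.sample(self.buf, batch_size)

def episode_length(episode_idx):
    """Even training episodes run 50 steps, odd ones 150."""
    return 50 if episode_idx % 2 == 0 else 150
```

Because every agent pushes into the same buffer, each sampled batch mixes states seen by different drones, which is what gives the shared network its generalization ability.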
Moreover, we apply transfer learning to allow the agents trained in the sparse environment to quickly adapt to the cluster scenarios (or vice-versa); to this goal, additional $N_t$ training episodes are carried out.
Finally, the performance of the proposed strategy is tested in a total of $N_p=500$ episodes for the \gls{ddql} system. The exploration rate $\varepsilon$ follows 2 different approaches, namely, $\varepsilon$-greedy and softmax. In the former, a random action is chosen with probability $\varepsilon$, while the best action, i.e., the action with the highest Q-value, is chosen with probability $1-\varepsilon$. The value of $\varepsilon$ decreases to 0 at the end of the training, since no more exploration is needed. In the latter, at each time step the probability of each action $p_{i}$ is computed as the output of a softmax density function taking the Q-values as input. In this case, the temperature $T$ decreases during the training, reducing the randomness in the selection of the actions:
\begin{equation}
p_{i}=\frac{e^{\frac{q_i}{T}}}{\sum_{j=1}^{A} e^{\frac{q_j}{T}}}\ ,
\end{equation}
where $A=5$ is the number of actions that each drone can take. The training and testing processes are independently performed 5 times to verify the robustness of the \gls{ddql} scheme. The complete simulation settings are reported in Tab.~\ref{tab:sim_param}.
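The two exploration policies can be sketched as follows; the function names are illustrative, and the softmax includes a standard max-shift for numerical stability, which leaves the probabilities in the equation above unchanged.

```python
import numpy as np

def epsilon_greedy(q_values, eps, rng):
    """Random action with probability eps, otherwise the argmax of the Q-values."""
    if rng.random() < eps:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

def softmax_probs(q_values, T):
    """p_i = exp(q_i / T) / sum_j exp(q_j / T); shifting by max(q) avoids overflow."""
    z = (np.asarray(q_values, dtype=float) - np.max(q_values)) / T
    e = np.exp(z)
    return e / e.sum()
```

As $T \to 0$ the softmax distribution concentrates on the greedy action, while larger $T$ spreads the probability mass and increases exploration.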
\begin{table}[t]\centering
\footnotesize
\begin{tabular}[c]{ccc}
\toprule
Parameter & Value & Description \\
\midrule
$M$ &\{20, 24, 30, 40, 50\} & Map size \\
$F$ & 20 & Observed map size \\
$U$ & $\{2, 3\}$ & Number of \glspl{uav} \\
$K$ & 4 & Number of targets \\
$\sigma^{2}$ & 1 & Targets variance \\
$\zeta$ & 3 & Field of View \\
$\eta$ & \{0,0.1\} & Obstacle frequency\\
$d_{\text{sparse}}$ & 8 & Minimum target distance (sparse scenario)\\
$\theta$ & 1 & Obstacle/outside penalty \\
$\psi$ & 0.8 & Collision penalty \\
$\rho$ & 0.2 & Obstacle value\\
$\gamma$ & 0.9 & Discount factor \\
$\alpha$ & Chosen by \gls{radam} & Learning rate \\
$N_e$ & $\{250, 750, 1000, 3000\}$ & Training episodes \\
$N_s^t$ & $\{50,150\}$ & Steps per training episode \\
$N_{s}^p$ & 40 & Steps per test episode \\
$N_t$ & $\{125, 250, 375,750\}$ & Transfer learning episodes \\
$N_p$ & 100 (LA), 500 (DDQL) & Test episodes \\
$P_{\text{tx}}$ & 20 dBm & Communication power\\
$N_0$ & -76 dBm & Noise floor\\
$h$ & 40 m & \gls{uav} height\\
$R_c$ & $2/3$ & Coding rate\\
\bottomrule
\end{tabular}
\vspace{0.1cm}
\caption{Simulation settings.}
\label{tab:sim_param}\vspace{-0.8cm}
\end{table}
To assess the performance of our \gls{ddql} scheme, we compare it with a heuristic strategy inspired by \gls{mpc}, by which drones can explore the map and reach the targets.
Such a strategy is named \emph{look-ahead} and is used as a benchmark for our analysis. As its name suggests, the look-ahead strategy tries all possible combinations of future actions and evaluates the corresponding future rewards. To formalize it, we first define the look-ahead reward $r_u^{(\ell)}(\mathbf{X},\mathbf{a})$ as:
\begin{equation}
r_u^{(\ell)}(\mathbf{X},\mathbf{a}) = \begin{cases} \frac{\hat{\phi}(\mathbf{x}_u+\mathbf{a}_u)}{\xi(\mathbf{x}_u+\mathbf{a}_u)} &\text{if } \edit{\nu}(\mathbf{x}_u, \mathbf{a}_u)=1;\\
-\infty &\text{otherwise,}
\end{cases}
\end{equation}
where $\xi(\mathbf{x})$ is the number of \glspl{uav} located in $\mathbf{x}$. The look-ahead strategy never moves outside the map or onto obstacles. To decide its next action, each drone $u$ tries to maximize its expected cumulative reward over the following $n_{\ell}$ steps, assuming that none of the other drones move.
Practically, the look-ahead strategy makes each drone select the action $\mathbf{a}^*$ that maximizes
\begin{equation}
\label{eq:la_action}
\max_{\mathbb{A} \in \tilde{\mathcal{A}}^{n_\ell} }
\sum_{i=0}^{n_\ell-1}
r_u^{(\ell)}\left(\mathbf{X}+\sum_{j=0}^{i-1}\mathbb{A}^j,\mathbb{A}^i\right),
\end{equation}
where
$\tilde{\mathcal{A}}^{n_\ell}$ is the set of ordered sequences $\mathbb{A}$ of action vectors $\mathbb{A}^0$, $\mathbb{A}^1$, ..., $\mathbb{A}^{n_\ell-1}$,
so that $\mathbb{A}^0_u = \mathbf{a}^*$ and $\mathbb{A}^i_v = (0,0),\,\forall$ $i \in \{0,...,n_\ell-1\}, v \neq u$, i.e., the set of possible move sequences of $u$ while the other \glspl{uav} are static. If several action sequences have the same expected reward, the look-ahead strategy will choose one of them randomly.
At the beginning of an episode, each drone $u$ assumes that all the map values $\phi(\mathbf{m})$ outside its \gls{fov} are equal to $1$; therefore, look-ahead forces $u$ to continuously explore the map. However, as soon as it finds a target, $u$ will hover over the target center. The target is then eliminated from the other agents' value maps, as it is already covered by a \gls{uav}.
We highlight that the performance of look-ahead mainly depends on the $n_{\ell}$ parameter: as it increases, drones can make more foresighted decisions, but at a greater computational cost. In addition, the number of targets in the map also plays a key role in determining the computational performance: when more targets are present, we have to check whether other agents are on a target more often, in order to remove it from the map of available targets. As the look-ahead strategy is computationally expensive, $N_p$ for it was set to 100.
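The exhaustive search in \eqref{eq:la_action} can be sketched as follows. This is a simplified illustration under toy assumptions: the value map $\hat{\phi}$ is treated as static during the look-ahead, ties are broken deterministically rather than randomly, and $\xi$ counts the moving drone plus any static drone in the destination cell.

```python
from itertools import product

ACTIONS = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]  # hover + 4 moves

def la_reward(x, a, value_map, others, grid):
    """phi_hat(x+a) / xi(x+a) if the move is valid, else -inf."""
    y = (x[0] + a[0], x[1] + a[1])
    inside = 0 <= y[0] < len(value_map) and 0 <= y[1] < len(value_map[0])
    if not inside or grid[y[0]][y[1]]:
        return float("-inf")
    xi = 1 + sum(o == y for o in others)  # u itself plus static drones in y
    return value_map[y[0]][y[1]] / xi

def look_ahead_action(x, value_map, others, grid, n_l=4):
    """Return the first action of the best n_l-step sequence (others static)."""
    best, best_a = float("-inf"), (0, 0)
    for seq in product(ACTIONS, repeat=n_l):  # all |A|^n_l action sequences
        pos, total = x, 0.0
        for a in seq:
            r = la_reward(pos, a, value_map, others, grid)
            total += r
            if r == float("-inf"):
                break  # invalid sequences are discarded
            pos = (pos[0] + a[0], pos[1] + a[1])
        if total > best:
            best, best_a = total, seq[0]
    return best_a
```

The $|\mathcal{A}|^{n_\ell}$ enumeration makes the cost grow exponentially with the horizon, which is why $n_\ell=4$ is already expensive in practice.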
\edit{Finally, we also consider a scenario with a realistic communication model, in which the broadcast messages sent by each \gls{uav} at every step might be lost due to the wireless channel impairments.
We used the path loss and shadowing model from~\cite{liu2019measurement}, based on actual measurements from air-to-air communications, and considering a Rayleigh fading model with an error correction code with rate 2/3.
As the simulation results will show, the physical size of the cells in the map is a critical parameter when \glspl{uav} communicate directly with each other (and not through the network infrastructure on the ground).
In particular, increasing the size of the cells will impair \edit{the} performance because of communication range issues: the model has an error probability of 50\% at approximately 110 m, corresponding to 11 cells if a cell side is 10 m and 5 cells if the side is 20 m.}
\section{Simulation results}\label{sec:results}
\edit{In what follows, we evaluate the performance of our approach in various scenarios with different characteristics.}
\subsection{Training analysis}
\begin{figure}[t!]
\centering
\includegraphics[width=.7\textwidth,trim=0cm 0.5cm 1.5cm 2cm,clip]{Correct_episodes_New.pdf}
\caption{Success probability over the training phase in the \emph{cluster} scenario with 2 UAVs.}
\label{fig:training_cluster}
\end{figure}
\edit{We first consider a scenario with 2 \glspl{uav} and 4 targets in a $20\times20$ map.
In particular, we perform multiple training phases of different duration; the longer training includes 3000 episodes, for a total of 300,000 training samples, which ensures that all our algorithms achieve convergence.}
The look-ahead approach is abbreviated as LA(4), as we set $n_\ell=4$.
This setting already has a significant computational cost: in our simulations, each look-ahead decision takes approximately 15 times longer than running a trained \gls{ddql} agent.
\edit{We do not consider $n_\ell>4$, since the computational cost of such a technique becomes excessive with limited performance gains: without coordination among the \glspl{uav}, which requires a prediction of the movements of other drones in the swarm, there is a limit on the performance of the swarm even with an infinite horizon. In some brief tests (which had to be on maps of a limited size due to the computational complexity of LA with a longer horizon), we noticed that LA(8) and even LA(12) show limited gains over LA(4), as the biggest factor in determining the speed at which the \glspl{uav} find the target becomes the coordination of the swarm once the horizon reaches 3 or 4 steps.}
Fig.~\ref{fig:training_cluster} shows the success probability in the \emph{cluster} scenario as a function of the training set size and of the considered exploration profile and approach. \gls{ddql} combined with the softmax approach catches up with LA(4) in less than 900 training episodes, converging to a success probability between 0.65 and 0.7. The $\varepsilon$-greedy approach has a lower final performance and requires more time to converge than the softmax profile.
The error bars show the best and worst results over 5 test phases, showing that the performance improves as the \glspl{uav} gain more experience.
The performance boost over the look-ahead approach is due to the \gls{ddql} scheme's ability to exploit the correlation among the target positions, quickly finding the other targets after the first one has been spotted.
Instead, in the \emph{sparse} scenario, the final performance of \gls{ddql} is similar \edit{to that} of LA(4), as Fig.~\ref{fig:training_sparse} shows.
In general, both \gls{ddql} and LA(4) have more success than in the cluster scenario, as finding the scattered targets is easier than finding clusters in the limited duration of an episode.
\begin{figure}[t!]
\centering
\includegraphics[width=.7\textwidth,trim=0cm 0.5cm 1.5cm 2cm,clip]{Correct_episodes_New_Sparse.pdf}
\caption{Success probability over the training phase in the \emph{sparse} scenario with 2 UAVs.}
\label{fig:training_sparse}
\end{figure}
\begin{figure}[t]
\centering
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=\textwidth, trim=0cm 4cm 1cm 0cm,clip]{Finalsituation_Cluster.pdf}
\caption{Cluster scenario}
\label{fig:cluster_state}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=\textwidth, trim=0cm 4cm 1cm 0cm,clip]{Finalsituation_Sparse.pdf}
\caption{Sparse scenario}
\label{fig:sparse_state}
\end{subfigure}
\caption{\edit{The bars indicate the probability mass distribution of the number of \glspl{uav} that successfully accomplish their task (i.e., hover over a target) by the end of the episode, when varying the duration of the episode. Each group of bars refers to the performance achieved by \gls{ddql} (with and without softmax) and by LA, in the Cluster (a) and Sparse (b) scenarios, with a total of 4 targets and 2 \glspl{uav}.}}
\label{fig:prob_bars}
\end{figure}
\subsection{Success rate vs time}
\edit{The next set of results refers to the performance of the strategy learned by the proposed framework. Fig.~\ref{fig:prob_bars}} reports the probability of one or both drones reaching the target as a function of the number of steps. \edit{Therefore, the figure shows the trade-off between the time needed by \glspl{uav} to accomplish their task and the success rate. } In the cluster scenario (Fig.~\ref{fig:cluster_state}), \gls{ddql} is much faster than LA, but its performance plateaus, and after 40 steps the probability of the \glspl{uav} reaching their targets does not change significantly. \edit{Indeed, we observed that, in certain cases,} when a drone reaches the target, but the other one is far from any feature of the map, the latter can end up staying in place, as its Q-values for that scenario are not precise and all actions have a similar (low) value. This almost never happens before the first \gls{uav} reaches its target, \edit{since the change in the system state due to the movement of one \gls{uav} is generally enough to make the other \gls{uav} move.} This is not a problem for LA, whose performance keeps rising \edit{with time}; in the sparse scenario (Fig.~\ref{fig:sparse_state}), LA even ends up reaching more targets than \gls{ddql} after 50 steps. The solution we found to avoid \edit{this roadblock} is simply to maintain a low softmax temperature $\tau=0.1$ even during the test phase: the bar chart shows that the \gls{ddql} Soft system is slightly slower than the greedy \gls{ddql} at the beginning, but it can avoid getting stuck. This randomization allows the agent to get out of loops, as sometimes a random sub-optimal action will change the state and allow it to reconsider, while the greedy system will keep performing the same action and remain in the same state. LA essentially does the same, randomizing its action when it is unsure which one is the best.
\begin{figure}[t!]
\centering
\includegraphics[width=.7\textwidth,trim=1cm 1cm 1cm 2cm,clip]{Bad_situation.pdf}
\caption{Example of an episode where the second \gls{uav} is not able to reach the cluster.}
\label{fig:Bad_situation}
\end{figure}
Fig.~\ref{fig:Bad_situation} shows one such situation: when one \gls{uav} has reached its target and the other is far from any identified target, the latter's Q-values will be very similar to each other, and some of the time it will stay motionless or move in small loops, as its state never changes. The fact that most of the map is still unexplored increases the probability of the \gls{uav} getting stuck, as it will have limited information and its Q-values will be very similar. In the following, all the results refer to the \gls{ddql} Soft system with $\tau=0.1$ unless otherwise stated.
\begin{figure}[t!]
\centering
\includegraphics[width=.7\textwidth]{Figura6_b.pdf}
\caption{CDF of the episode duration for different algorithms in the cluster scenario with 2 \glspl{uav}.}
\label{fig:transfer_cluster_2d}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=.7\textwidth,trim=0cm 0.5cm 1cm 2cm,clip]{Figura7.pdf}
\caption{CDF of the episode duration for different algorithms in the sparse scenario with 2 \glspl{uav}.}
\label{fig:transfer_sparse_2d}
\end{figure}
\subsection{Adaptability and transfer learning}
\edit{Here we investigate the adaptability of the proposed \gls{ddql} scheme, and the potential of the transfer learning paradigm.
\edit{The} latter involves the execution of an additional training phase in a different scenario than the one seen during the initial training.
To this end, we consider a common target scenario, i.e., cluster (or sparse), and compare the results achieved when using strategies learned in the other domain, i.e., sparse (or cluster).
More specifically, we consider the following cases:
\begin{itemize}
\item "Cluster $N_e$": training on $N_e$ episodes in the cluster scenario;
\item "Sparse $N_e$ ": training on $N_e$ episodes in the sparse scenario;
\item "Cluster+TL $N_t$": pre-training on $N_e$ = 3000 episodes in the cluster scenario, followed by an additional training of $N_t$ episodes in the target scenario.
\item "Sparse+TL $N_t$": pre-training on $N_e$ = 3000 episodes in the sparse scenario, followed by an additional training of $N_t$ episodes in the target scenario.
\end{itemize}}
\edit{In Fig. \ref{fig:transfer_cluster_2d}} we show the \gls{cdf} of the \edit{\textit{episode duration}, defined as} the time until \edit{all the} drones reach targets or the testing episode limit \edit{(here fixed to 60 steps)} is reached. \edit{We also report the results for LA with four steps, LA(4), as a benchmark.} \edit{Each point is hence the probability that all drones have accomplished their task by a given number of steps.}
\edit{We observe that, as expected, the Cluster strategy achieves the highest success probability with a limited number of steps. LA(4) can equal its performance only when the episode duration reaches the limit of 60 steps (i.e., in less than 30\% of the cases).} Instead,
750 episodes of training in the cluster scenario are \edit{not sufficient to outperform LA(4), but actually enough to} outperform a model trained in the sparse scenario.
However, \edit{a short} retraining \edit{of such model} in the correct (cluster) scenario allows the algorithm to get a significant performance boost, \edit{outperforming LA(4) and getting very close to the performance of the Cluster 3000 model, which is fully trained in the correct scenario and with more than twice the number of episodes. }
\begin{figure}[t!]
\centering
\includegraphics[width=.7\textwidth,trim=0cm 0.5cm 1cm 2cm,clip]{Figura8.pdf}
\caption{CDF of the episode duration for different algorithms in the cluster scenario with 3 \glspl{uav}.}
\label{fig:transfer_cluster_3d}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=.7\textwidth,trim=0cm 0.5cm 1cm 2cm,clip]{Figura9.pdf}
\caption{CDF of the episode duration for different algorithms in the sparse scenario with 3 \glspl{uav}.}
\label{fig:transfer_sparse_3d}
\end{figure}
We repeated the experiment \edit{by swapping the role of the sparse and cluster scenarios, and changing the number of episodes during the training phase, as reflected in the legend of Fig. \ref{fig:transfer_sparse_2d}, which reports the results. Also in} this case, \edit{LA(4)} meets the performance of \gls{ddql} \edit{only for episodes of} 60 steps, \edit{i.e., in less than 15\% of the cases.} Transfer learning is \edit{again} very effective, as a 750 episode re-training significantly boosts the baseline performance compared to starting from scratch. \edit{We highlight that, in general, the number of steps necessary to reach the targets is comparatively lower than in the previous scenario since, as already discussed, it is easier for \glspl{uav} to find targets in} the sparse scenario.
\edit{In Fig. \ref{fig:transfer_cluster_3d} and Fig. \ref{fig:transfer_sparse_3d}} we show the results for a scenario with 3 \glspl{uav}: in both cases transfer learning is effective, but the performance is lower in the sparse scenario than in the cluster one. In this case, the risk of getting stuck is increased and the algorithm needs more training to perform effectively in all maps.
\subsection{Obstacles}
\begin{figure}[t]
\centering
\begin{subfigure}{0.7\textwidth}
\centering
\includegraphics[width=\textwidth, trim=0cm 0cm 0cm 0.5cm,clip]{Beginning.pdf}
\label{fig:episode_start_obs}
\vspace{-1cm}
\end{subfigure}
\begin{subfigure}{0.7\textwidth}
\includegraphics[width=\textwidth, trim=0cm 0cm 0cm 0.5cm,clip]{End.pdf}
\label{fig:episode_end_obs}
\end{subfigure}
\vspace{-0.5cm}
\caption{Drone positions (left), known map (center), real map (right). Beginning (above) and end (below) of an episode with obstacles.}
\label{fig:episode_state_obs}
\end{figure}
\begin{figure}[t]
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth,trim=0cm 0.5cm 1cm 2cm,clip]{Obstacle.pdf}
\caption{2 \glspl{uav}.}
\label{fig:Obstacle}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth,trim=0cm 0.5cm 1cm 2cm,clip]{Obstacle_3D.pdf}
\caption{3 \glspl{uav}.}
\label{fig:Obstacle_3d}
\end{subfigure}
\caption{CDF of the episode duration for different algorithms in the obstacle scenario.}
\label{fig:obs}
\end{figure}
\edit{In what follows, we consider a modified version of the cluster scenario, where some obstacles are added to the map.
In particular, we empirically set the percentage of the map occupied by obstacles to 10\%, searching for a balance between increased system complexity and the realism of the scenario. An example of the system state representation with obstacles is shown in Fig.~\ref{fig:episode_state_obs} at the beginning and at the end of an episode. The obstacles are marked in green.}
Fig.~\ref{fig:Obstacle} shows the performance of the LA approach and \gls{ddql} in the case of 2 \glspl{uav} \edit{and 4 targets}. The \gls{ddql} solution has been trained for scenarios with 2, 3 and 4 \glspl{uav} \edit{(labeled in the plots as 2D, 3D, and 4D, respectively)}, and then tested in the scenario with 2 and 3 \glspl{uav}, with and without the use of the softmax approach in the testing phase. In both cases, it is clear that the models trained with more \glspl{uav} are able to outperform those with fewer \glspl{uav} in both considered scenarios. Furthermore, as for the case without obstacles, the use of the softmax policy during the testing phase increases the performance, especially when the episodes are longer, as it keeps the \glspl{uav} from getting stuck. In the scenario with 3 \glspl{uav}, as in Fig.~\ref{fig:Obstacle_3d}, the performance is generally lower, meaning that the swarm needs more training. \edit{However, \gls{ddql} is able to outperform the LA approach in both cases, reaching targets significantly faster in the scenario with 3 drones.}
\subsection{Transfer learning on bigger maps, with larger swarms \edit{and communication impairments}}
\begin{figure}[t]
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Without_edges.pdf}
\caption{2 \glspl{uav}.}
\label{fig:Bigmap2drones}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Bigmap3_100steps.pdf}
\caption{3 \glspl{uav}.}
\label{fig:Bigmap3drones}
\end{subfigure}
\caption{Success probability as a function of the map size and the number of clusters.}
\label{fig:final_sit}
\end{figure}
We then show how well \gls{ddql} is able to generalize to bigger maps in the testing phase. For this reason, the algorithm has been trained on a map with $M=24$, maintaining $F=20$, and the testing phase included bigger maps and different numbers of clusters. All the results shown in the following figures are obtained with 100-step episodes: the longer duration is needed to allow the agents to reach the targets even in bigger maps. For similar reasons, the scenarios with more clusters are studied to maintain a similar proportion of surface occupied by targets even in the bigger maps. Fig.~\ref{fig:Bigmap2drones} and Fig.~\ref{fig:Bigmap3drones} show how the performance varies as a function of the size of the environment and the number of clusters present in the map. In both cases, \gls{ddql} shows good adaptability, outperforming LA in all cases, with a larger gain in bigger maps. In Fig.~\ref{fig:Bigmap2dronesobstacle} and Fig.~\ref{fig:Bigmap3dronesobstacle}, the same scenarios are studied with the addition of obstacles covering about 10\% of the map. In this case, \gls{ddql} \edit{will need some retraining to reach LA's performance} on smaller maps, while the performance is similar when the map is bigger. However, we recall that \gls{ddql} also has a significant advantage in terms of computational cost, so it is preferable when the performance is similar.
\begin{figure}[t]
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Bigmap2obstacle100steps.pdf}
\caption{2 \glspl{uav}.}
\label{fig:Bigmap2dronesobstacle}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{Bigmap3obstacle100steps.pdf}
\caption{3 \glspl{uav}.}
\label{fig:Bigmap3dronesobstacle}
\end{subfigure}
\caption{Success probability as a function of the map size and the number of clusters with obstacles.}
\label{fig:final_sit_obs}
\end{figure}
\edit{It is also interesting to test the transfer capabilities of the algorithms in more complex scenarios, including far larger swarms and imperfect communications: as \gls{ddql} relies on information from other \glspl{uav} to find targets and avoid collisions, a limited communication range can impair its performance significantly. As Fig.~\ref{fig:Dronescomm10} shows, 10 drones moving in a large map with obstacles (with 16 targets in 4 clusters, as above) can coordinate effectively with no retraining, outperforming the LA approach. If each cell is a square with a 10 m side, corresponding to a maximum range of about 11 cells with 50\% packet loss at the boundary of the coverage area, the performance loss with respect to the perfect communication scenario is limited, confirming the intuitive idea that information from neighbors inside the visible area is the most critical to find and reach the targets. If the cell side is doubled, effectively halving the communication range and introducing significant errors even for packets between immediate neighbors, the performance drops significantly, and becomes even worse if there is no communication at all between the \glspl{uav}. This would be true for any cooperative algorithm, as information from other \edit{agents} can be used to optimize the exploration of the map, but we highlight that \gls{ddql} has always been trained assuming ideal communication, and the communication impairments have been considered only in the test phase. Therefore, the \glspl{uav} might be confused by the lack of information, and a partial retraining might yield better results as the agents transfer their experience and learn to deal with the more limited feedback. On the other hand, the algorithm scales extremely well to larger swarms, slightly outperforming \edit{LA} even with no retraining in the new scenario.
The same pattern holds for the case with 12 drones, which is shown in Fig.~\ref{fig:Dronescomm12}.}
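The communication impairment used in these tests can be sketched as a distance-dependent packet-loss model. Only the 50\% loss point at the maximum range is stated above, so the linear loss curve and all names in the sketch below are illustrative assumptions, not the simulator's actual code:

```python
def packet_loss(dist_cells: float, max_range_cells: float = 11.0) -> float:
    """Hypothetical distance-based packet-loss model: loss grows linearly
    from 0 to the 50% point at the maximum range (the only value stated
    in the text) and is total beyond it. The linear shape is an assumption."""
    if dist_cells > max_range_cells:
        return 1.0
    return 0.5 * dist_cells / max_range_cells


def loss_for_separation(dist_m: float, cell_side_m: float) -> float:
    """Doubling the cell side (10 m -> 20 m) doubles the distance measured
    in cells for the same physical separation, i.e., it halves the range."""
    return packet_loss(dist_m / cell_side_m)
```

Under this sketch, two drones 110 m apart see 50\% loss with 10 m cells, but only 25\% loss with 20 m cells; the point is that changing the cell side rescales the whole loss curve in physical distance.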
\begin{figure}[t]
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\linewidth]{Drones_comm_10_v2.pdf}
\caption{Performance of a swarm of 10 UAVs.}
\label{fig:Dronescomm10}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\linewidth]{Drones_comm_12_v2.pdf}
\caption{Performance of a swarm of 12 UAVs.}
\label{fig:Dronescomm12}
\end{subfigure}
\caption{Effect of imperfect communications on the performance of DDQL in a large map.}
\label{fig:comms}
\end{figure}
\begin{figure}[h!]
\centering
\begin{subfigure}{0.9\textwidth}
\centering
\includegraphics[width=1\linewidth]{3d_map.png}
\label{fig:sub-first}
\end{subfigure}
\begin{subfigure}{.29\textwidth}
\centering
\includegraphics[width=1\linewidth]{real_map.png}
\label{fig:sub-second}
\end{subfigure}
\begin{subfigure}{.29\textwidth}
\centering
\includegraphics[width=1\linewidth]{obstacles_map.png}
\label{fig:sub-third}
\end{subfigure}
\begin{subfigure}{.29\textwidth}
\centering
\includegraphics[width=1\linewidth]{matrix_map.png}
\label{fig:sub-fourth}
\end{subfigure}
\caption{Extraction of the map from building height data in a 500 m by 500 m area in the downtown Chicago Loop neighborhood.}
\label{fig:chicago_maps}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=.7\textwidth]{Realmap.pdf}
\caption{Performance on the real map of Chicago.}
\label{fig:Chicagoperf}
\end{figure}
\edit{Finally, we tested an extreme transfer learning scenario, not only increasing the size of the map and the number of \glspl{uav}, but also switching from the synthetic obstacle distribution on the map to one derived from a real map. The map of obstacles was obtained from a city map of the area just east of LaSalle Street Station in downtown Chicago, in the central Loop neighborhood. As shown in Fig.~\ref{fig:chicago_maps}, we obtained the height profiles of the buildings in the area, considering as obstacles their parts with a height of over 10 stories (i.e., approximately 40 m, the minimum legal hovering altitude for \glspl{uav}). The 500 m by 500 m area was then divided into 2500 square cells with 10 m sides, converting the height profile to an obstacle map in the grid. 11\% of the map was occupied by obstacles, so the map was approximately as full as the one used in the training, which had 10\% obstacle cover, but the individual obstacles were larger and concentrated along South Wabash Avenue and South Dearborn Street. This is an additional hurdle for \gls{ddql}, which was not trained to deal with obstacles concentrated along streets, which make it more difficult to find an appropriate path. However, as Fig.~\ref{fig:Chicagoperf} shows, the \gls{ddql} system can find targets approximately as fast as LA, underperforming a little only on higher percentiles. With a modicum of retraining, \gls{ddql} should be able to adapt to the different structure, exploiting the regularities in city blocks to avoid obstacles and find targets even more quickly.}
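The grid-extraction step described above is straightforward to reproduce. The sketch below (function and variable names are illustrative, not the paper's actual code) thresholds a per-cell building-height map at the 40 m minimum hovering altitude:

```python
import numpy as np

def heights_to_obstacle_grid(heights_m, min_altitude_m=40.0):
    """Mark a cell as an obstacle when the building in it rises above the
    minimum legal hovering altitude (about 40 m, i.e., roughly 10 stories).
    Names and the sample data below are illustrative assumptions."""
    return np.asarray(heights_m, dtype=float) > min_altitude_m

# A 500 m x 500 m area at 10 m resolution gives a 50 x 50 grid (2500 cells).
heights = np.zeros((50, 50))
heights[10:20, 30:35] = 60.0            # a hypothetical block of tall buildings
obstacles = heights_to_obstacle_grid(heights)
coverage = obstacles.mean()             # fraction of cells covered by obstacles
```

With real height data in place of the synthetic array, `coverage` gives the obstacle fraction quoted above (11\% for the Chicago map).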
\edit{In conclusion, we have shown that \gls{ddql} is able to find efficient strategies for the \glspl{uav} to reach targets faster than look-ahead solutions in complex environments. The algorithm is scalable to larger maps, larger swarms, and limited communications without any retraining, and can deal with obstacles and very different target distributions with a limited amount of retraining. This shows that the solution is powerful and versatile, adapting easily to new conditions. However, there is still some margin for improvement, particularly in scenarios in which almost all the \glspl{uav} in the swarm have already reached a target, while the last stragglers are far from any feature in the map. This case represents most of the residual failures of the algorithm, and solving it is an important future objective.}
\section{Conclusions and future work} \label{sec:conclusions}
In this work, we \edit{studied} the problem of area monitoring and surveillance with a swarm of drones. We modeled the environment with a 2D grid and cast the problem into the theoretical framework of \gls{ndpomdp}. We have examined various scenarios, including obstacles and maps of different sizes, and the \edit{proposed} algorithm outperformed a computationally intensive look-ahead approach in almost all scenarios.
Important research directions include the introduction of dynamic targets, which would be an important step to increase the scenario's realism, as well as different roles for the drones, which can be assigned dynamically and would allow us to examine another interesting aspect of the \gls{marl} problem, increasing the difficulty of coordinating the \glspl{uav}' actions.
\section*{Acknowledgments}
This work has been \edit{partially} supported by the U.S. Army Research Office (ARO) under Grant no. W911NF1910232, "Towards Intelligent Tactical Ad hoc Networks (TITAN)", and \edit{by} MIUR (Italian Ministry for Education and Research) under the initiative "Departments of Excellence" (Law 232/2016).
\bibliographystyle{IEEEtran}
\section{Introduction}
We give a comprehensive study of slightly compressible Brinkman-Forch\-hei\-mer
equations in the following form:
\begin{equation}\label{bf1}
\begin{cases}
\Dt u-\Dx u+\Nx p+f(u)= g, \ \ u\big|_{\partial\Omega}=0,\ u\big|_{t=0}=u_0,
\\ \Dt p+\divv(Du)=0, \ \ p\big|_{t=0}=p_0
\end{cases}
\end{equation}
in a bounded domain $\Omega\subset\R^3$ with sufficiently smooth boundary $\p\Om$. Here $u=(u^1(t,x),u^2(t,x),u^3(t,x))$ and
$p=p(t,x)$ are unknown velocity vector field and pressure respectively, $D$ is a given positive
self-adjoint matrix, $f$ is a given nonlinearity and
$g$ is the external force.
\par
Equations of the form \eqref{bf1} arise in the mathematical theory of fluids in porous media and
are of great and permanent interest from both the theoretical and applied points of view,
see \cite{Aul,Brin,FG,GT,HR,KZ,Lad1,MTT,Mus,R,Str,Tem1,Whit} and references therein. The first equation
of \eqref{bf1} is usually interpreted as a generalization of the Darcy law:
$$
\tau\Dt v-\beta\Dx v+f(v)=-D\Nx p,
$$
where $D$ is a normalized permeability tensor, $f(v)$ is a Forchheimer nonlinearity which
typically has a form
\begin{equation}\label{0.fF}
f(v)=\alpha v+\beta (\Cal Cv.v)^{l}v+\gamma\sqrt{(\Cal Cv.v)}v,
\end{equation}
where $\Cal C$ is another positive self-adjoint matrix and $\alpha,\beta,\gamma$ and $l\ge\frac12$ are
some constants, see e.g., \cite{Aul} and references therein, $\beta\Dx v$ is a Brinkman term with
effective viscosity parameter $\beta>0$, see \cite{Brin} and $\tau\Dt v$ is time relaxation
term which is especially important in the case of non-monotone $f$ or/and presence of the
inertial term $(v,\Nx)v$ to provide the unique expression of $v$ through $\Nx p$. The second equation
$$
\Dt p+\divv(v)=0
$$
is just a standard slightly compressible approximation of the continuity equation,
see e.g., \cite{Tem1,LST,Donat} and references therein. Making the change of variables $v=Du$, we end up with a system
of the form \eqref{bf1} with a slightly unusual term $\divv(Du)$.
\par
We also mention that the equations \eqref{bf1} in 2D case naturally arise in the dynamic
theory of tides as a generalization of the classical Laplace tidal equations. In this case,
$u:=(u^1,u^2)$ is the horizontal transport vector
(the horizontal velocity averaged over the vertical axis)
and the scalar $p$ is a vertical tidal elevation,
see, e.g., \cite{Go,Ip, Li,MK,Mo} and references therein.
\par
Equations \eqref{bf1} have a non-trivial structure which is interesting also
from a purely mathematical point of view. Indeed, in the simplest case $f=g=0$, $D=1$, we may introduce
a new variable $\omega=\operatorname{curl}u$ and reduce the system to the following equations
$$
\Dt\omega-\Dx\omega=0,\ \ \Dt^2 p-\Dx \Dt p-\Dx p=0,
$$
so we see a combination of a heat equation with the so-called strongly damped wave equation. This
system is decoupled in the case of periodic boundary conditions, but in the case of Dirichlet
boundary conditions we have a non-trivial coupling already on the level of linear equations
through boundary conditions. Thus, in contrast to the incompressible case, one cannot expect instantaneous
smoothing property for $u$ and $p$, but similarly to damped wave equation, one can expect that
some components of the solution may have this property, see \cite{KZ09} for more details. Of course, for
non-zero nonlinearity $f$, we also have coupling through nonlinear terms.
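For the reader's convenience, we recall how this reduction is obtained: applying $\operatorname{curl}$ to the first equation of \eqref{bf1} (with $f=g=0$, $D=1$) annihilates the gradient $\Nx p$ and gives the heat equation for $\omega$, while applying $\divv$ and substituting $\divv u=-\Dt p$ from the second equation yields
$$
\Dt(\divv u)-\Dx(\divv u)+\Dx p=0,\ \ \text{i.e.,}\ \ \Dt^2 p-\Dx\Dt p-\Dx p=0.
$$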
\par
Another possibility is to differentiate the first equation in time and exclude
the pressure using the second equation. This gives the second order in time equation:
$$
\Dt^2 u-\Dx \Dt u+f'(u)\Dt u-\Nx\divv(Du)=0
$$
which is again a sort of strongly damped wave equation with the nonlinearity of
Van der Pol type, see e.g., \cite{KZ14} for the regularity and longtime behavior
of such equations in the scalar case. However, this form of equations \eqref{bf1} is not
convenient especially for the study of longtime behavior since the operator
$\Nx \divv(Du)$ is degenerate.
\par
The longtime behavior of solutions to {\it incompressible} Brinkman-Forch\-heimer or
Brink\-man-Forchheimer-Navier-Stokes equations (often also referred to as tamed Na\-vier-Stokes equations)
is studied in many papers, see \cite{HR,KZ,MTT,WL,YZZ} and references therein. However, the slightly
compressible case is essentially less understood. To the best of our knowledge, similar problems
have been considered, in the 2D case, only for slightly compressible Navier-Stokes equations,
see \cite{GT} and \cite{FG} for global and exponential attractors respectively, but even
in this case, Dirichlet's boundary conditions were out of consideration because of the
problems with obtaining dissipative estimates for the pressure in $H^1(\Omega)$ which are
caused by "bad" boundary terms in higher energy estimates.
\par
The aim of the present paper is to verify the global well-posedness and dissipativity of the
problem \eqref{bf1} in the initial phase space
$$
(u_0,p_0)\in E:=H^1_0(\Omega)\times \bar L^2(\Omega),\ \ \bar L^2(\Omega):=\{p_0\in L^2(\Omega), \<p_0\>=0\},
$$
where $\<v\>$ is a mean value of the function $v(x)$ as well as in the higher energy space
$$
E^1=E\cap(H^2(\Omega)\times H^1(\Omega))
$$
and to prove the existence of global and exponential attractors for the associated solution semigroup. Note that,
similarly to \cite{GT}, we are unable to verify the dissipativity in $E^1$ using the
energy-type estimates because of the appearance of "bad" boundary integrals. We overcome this problem
using the combination of partial instantaneous smoothing property and localization technique inspired
by \cite{KZ09}. Actually, the localization technique is used here in a bit non-standard way, since it
is usually applied to verify the higher regularity. In our situation, this higher regularity is more or
less straightforward and the localization is used in order to get the dissipative estimate only,
see Appendix \ref{s4} for more details.
\par
Throughout the paper, we assume that the external force $g\in L^2(\Omega)$ and the
nonlinearity $f(u)$ has the following form
\begin{equation}\label{0.fstr}
f(u):=\varphi(|u|^2)u,
\end{equation}
where $\varphi\in C^1((0,\infty))$ and satisfies the conditions:
\begin{equation}\label{0.f}
\begin{cases}
1.\ \ K-Cz^{-1/2}\le\varphi'(z)\le C_1z^{-1/2}(1+z^{3/2}),\\
2.\ \ -C+\alpha z^{l}\le \varphi(z)\le C(1+z^{l}),\ \ z\in\R_+
\end{cases}
\end{equation}
for some positive constants $\alpha,K,C,C_1$ and the exponent $l\in(0,2]$.
\par
Clearly these conditions are satisfied for the typical nonlinearity \eqref{0.fF} if $\Cal C=1$
(or $D\Cal CD=1$
if we take into the account the change of variables mentioned above). The case of a general
self-adjoint positive $\Cal C$ is completely analogous, we only need to take
$\Cal C u.u$ instead of $|u|^2$
in \eqref{0.fstr}, we assume that $\Cal C=1$ only for simplicity. In contrast to this, the extra assumption
that $D=1$ somehow oversimplifies the problem since some additional energy type identities hold in this
particular case, so we prefer to keep a general matrix $D$. We also mention that the exponent
${-1/2}$ is fixed in \eqref{0.f} in order to handle the term $\sqrt{(\Cal Cu.u)}\,u$ in \eqref{0.fF}.
Of course, if $l=\frac12$, we need to assume that $\gamma+\beta>0$ in \eqref{0.fF}
in order to get dissipativity.
Analogously, for $l>\frac12$, we need to assume that $\beta>0$.
\par
The paper is organized as follows. In \S1 we derive the basic dissipative estimate for
problem \eqref{bf1} in the energy phase space $E$, verify the existence and uniqueness of solutions
and prove some instantaneous regularization for the $u$-component of the solution. Namely,
we establish that, starting from $(u(0),p(0))\in E$, at any positive time moment $t$, we will
have $\Dt u(t)\in L^2(\Omega)$ and $\Nx u(t)\in L^2(\Omega)$. This regularization allows us,
similarly to the case of strongly damped wave equations, to truncate system \eqref{bf1} and
reduce the analysis to simpler equations:
\begin{equation}\label{0.trunc}
\Dt p+\divv(Du)=0,\ \ p\big|_{t=0}=p_0,\ \ -\Dx u+\Nx p+f(u)=g(t),
\end{equation}
where $g\in L^\infty(\R_+,L^2(\Omega))$ is a new given external force (of course, the relation of
this system to the initial equations \eqref{bf1} is given by $g(t):=g-\Dt u(t)$).
\par
The detailed analysis of this truncated system is presented in \S\ref{s2}. In particular, we
prove there that this system is well-posed and dissipative in higher energy space
$p\in \bar L^2(\Omega)\cap H^1(\Omega)$ and also establish the exponential smoothing
property for this system, namely, we check that the ball in the space $H^1(\Omega)$
attracts exponentially fast the trajectories $p(t)$ of \eqref{0.trunc} starting from
bounded sets of $\bar L^2(\Omega)$. Returning back to the full system \eqref{bf1}, we
establish after that its well-posedness and dissipativity in higher energy space $E^1$ as well as the fact that
the proper ball in $E^1$ is an exponentially attracting set for the solutions of \eqref{bf1} starting from $E$. This
fact, in turn, is crucial for our study of global and exponential attractors.
\par
Note also that the analysis
presented in this section is heavily based on the study of linear problem \eqref{0.trunc} (which
corresponds to $f=0$) presented in Appendix \ref{s4} and, in particular, on the dissipativity
of this linear problem in higher energy space $H^1(\Omega)$. This dissipativity is proved using the
localization technique and is of independent interest.
\par
In \S\ref{s3} we verify the existence of a global and exponential attractors for
the solution semigroup associated with problem \eqref{bf1}. These results are more or less
standard corollaries of the asymptotic regularity and exponential attraction proved in \S\ref{s2},
see \cite{BV,cv,EFNT,EMZ05,MZ,Tem} for more details.
\par
Finally, in \S\ref{s7}, we also consider briefly some generalizations
of the proved results, including the case of the extra convective terms in the initial
Brink\-man-Forch\-hei\-mer equation
and discuss some open problems for further research.
\section{Well-posedness, dissipativity and partial smoothing}\label{s1}
In this section, we verify the global well-posedness and dissipativity of slightly compressible
Brinkman-Forchheimer equations:
\begin{equation}\label{1.main}
\begin{cases}
\Dt u-\Dx u+\Nx p+f(u)=g,\ \ u\big|_{\partial\Omega}=0, \ \ u\big|_{t=0}=u_0,\\
\Dt p+\divv(Du)=0,\ \ p\big|_{t=0}=p_0
\end{cases}
\end{equation}
in the energy space $(u_0,p_0)\in E$ as well as establish some partial smoothing results for the solutions of this system
which are crucial for what follows.
We start with the basic a priori estimate in the phase space $E$.
\begin{theorem}\label{Th1.E-dis} Let $g\in L^2(\Omega)$, $D=D^*>0$ and the nonlinearity $f$
satisfy \eqref{0.fstr} and \eqref{0.f}. Let also $(u(t),p(t))$ be a sufficiently
smooth solution of \eqref{1.main}. Then, the following estimate holds:
\begin{multline}\label{1.E-dis}
\|(u,p)(t)\|_{E}^2+\int_t^{t+1}(\|\Nx u(s)\|^2_{L^2}+(|f(u(s))\cdot Du(s)|,1))\,ds \le \\\le Q(\|(u,p)(0)\|^2_{E})e^{-\alpha t}+Q(\|g\|^2_{L^2}),
\end{multline}
for some monotone function $Q$ and positive constant $\alpha$ independent of $u$ and $t$.
\end{theorem}
\begin{proof} We multiply the first equation of \eqref{1.main} by $Du$ and integrate over $\Om$. Then,
integrating by parts and using the second equation, we arrive at
\begin{equation}\label{1.eq}
\frac12\frac d{dt}\(\|u\|^2_{L^2_D}+\|p\|^2_{\bar L^2}\)+\|\Nx u\|^2_{L^2_D}+(f(u),Du)=(g,Du),
\end{equation}
where $\|u\|^2_{L^2_D}:=\int_\Omega Du(x).u(x)\,dx$. Here and below $\xi.\eta$ stands for
the standard dot product of vectors $\xi,\eta\in\R^3$.
\par
This energy identity is still not enough to get the dissipative estimate since it does not contain the term
$\|p\|^2_{L^2}$ without time differentiation. To get this term we use the so-called Bogovski operator:
\begin{equation}
\mathfrak B:\bar L^2(\Omega)\to H^1_0(\Omega),\
\bar L^2(\Omega):=\{p\in L^2(\Omega),\ \<p\>=0\},\ \ \divv \mathfrak Bp=p.
\end{equation}
It is well-known that such an operator exists as a linear continuous operator if $\Omega$
is smooth enough, see e.g., \cite{S}. Multiplying the first equation of \eqref{1.main} by $\mathfrak Bp$,
integrating with respect to $x$ and using the second equation, we get
\begin{multline}\label{1.eq1}
\frac d{dt}(u,\mathfrak Bp)+\|p\|^2_{\bar L^2}=-(u,\mathfrak B\divv(Du))-\\-
(\Nx u,\Nx \mathfrak Bp)-(f(u),\mathfrak Bp)+(g,\mathfrak Bp).
\end{multline}
Multiplying \eqref{1.eq1} by a small $\eb>0$ and taking a sum with equation \eqref{1.eq}, after using
the H\"older inequality and Sobolev embedding $H^1\subset L^6$, we get
\begin{multline}
\frac d{dt}\(\|u\|^2_{L^2_D}+\|p\|^2_{\bar L^2}+2\eb(u,\mathfrak Bp)\)+\\+
\|\Nx u\|^2_{L^2_D}+\eb\|p\|^2_{\bar L^2}+(f(u),Du)
\le\eb\|f(u)\|_{L^{6/5}}\|p\|_{\bar L^2}+C(\|g\|^2_{L^2}+1).
\end{multline}
Using our assumptions \eqref{0.f} on functions $f$ and $\varphi$, it is not difficult to verify that
\begin{equation}
|f(u)|^{6/5}\le C(|f(u).Du|+1).
\end{equation}
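Indeed, the second condition of \eqref{0.f} gives $|f(u)|\le C(1+|u|^{2l+1})$, while $f(u).Du\ge \alpha'|u|^{2l+2}-C'$ for some $\alpha',C'>0$ (here the positivity of $D$ is used), so
$$
|f(u)|^{6/5}\le C\big(1+|u|^{\frac{6(2l+1)}5}\big)\le C\big(1+|u|^{2l+2}\big)\le C\big(|f(u).Du|+1\big),
$$
where the middle inequality relies on $\frac{6(2l+1)}5\le 2l+2$, which holds exactly because $l\le2$.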
This gives us the following differential inequality:
\begin{equation}\label{1.Gp}
\frac d{dt}\Cal E_\eb(u,p)+\eb \Cal E_\eb(u,p)\le C\eb^6\Cal E_{\eb}^3(u,p)+C(\|g\|^2_{L^2}+1),
\end{equation}
where
\begin{equation}
\Cal E_\eb(u,p):=\|u\|^2_{L^2_D}+\|p\|^2_{\bar L^2}+2\eb(u,\mathfrak Bp).
\end{equation}
Moreover, for sufficiently small $\eb>0$, we have
$$
\frac12\|(u,p)\|_{E}^2\le\Cal E_\eb(u,p)\le\frac32\|(u,p)\|^2_{E}.
$$
Thus, applying the Gronwall lemma with a parameter to \eqref{1.Gp}, see \cite{GPZ,Pat} and also
\cite{Z07} for details, we end up with
the desired estimate \eqref{1.E-dis}, which finishes the proof of the theorem.
\end{proof}
At the next step we define a weak energy solution for problem \eqref{1.main}.
\begin{definition}\label{Def1.sol} A pair of functions $(u,p)\in C_w(0,T;E)$ is a weak energy
solution of problem \eqref{1.main} if, in addition,
\begin{equation}
u\in L^2(0,T;H^1_0(\Omega))\cap L^{2(l+1)}(0,T;L^{2(l+1)}(\Omega))
\end{equation}
and the equations \eqref{1.main} are satisfied in the sense of distributions.
\end{definition}
\begin{remark}\label{Rem1.cont} From the first equation \eqref{1.main}, we see that
\begin{multline}
\Dt u\in L^2(0,T;H^{-1}(\Omega))+L^{l'}(0,T;L^{l'}(\Omega))=\\=
[L^2(0,T;H^1_0(\Omega))\cap L^{2(l+1)}(0,T;L^{2(l+1)}(\Omega))]^*,
\end{multline}
where $\frac1{l'}+\frac1{2(l+1)}=1$, i.e., $l'=\frac{2(l+1)}{2l+1}$.
Thus, multiplication of the first equation of \eqref{1.main} by $u$ or $\mathfrak Bp$ is
justified on the level of weak
energy solutions and we have, in addition,
that $u\in C(0,T;L^2(\Omega))$, see e.g., \cite{cv} for the details. The situation with
$p$-component is even simpler
since we have
$$
\Dt p\in L^2(0,T;\bar L^2(\Omega))
$$
and multiplication on $p$ is allowed. So, we also have that $p\in C(0,T;\bar L^2(\Omega))$.
In particular, all manipulations done for the derivation of the key estimate \eqref{1.Gp} are
actually justified for weak energy solutions, so all such solutions satisfy the dissipative
estimate \eqref{1.E-dis}.
\end{remark}
We now turn to the uniqueness.
\begin{theorem}\label{Th1.un} Suppose that the assumptions of Theorem \ref{Th1.E-dis} are satisfied and let $(u_1(t),p_1(t))$
and $(u_2(t),p_2(t))$ be two weak energy
solutions of problem \eqref{1.main}. Then, the following estimate holds:
\begin{multline}\label{1.lip}
\|(u_1(t)-u_2(t),p_1(t)-p_2(t))\|_E^2\le\\\le Ce^{Kt}\|(u_1(0)-u_2(0),p_1(0)-p_2(0))\|_E^2,
\end{multline}
where the constants $C$ and $K$ depend only on $f$ and $D$.
\end{theorem}
\begin{proof} We first note that it suffices to verify \eqref{1.lip} for $t\le T$ for some small,
but positive $T$. Then, the general estimate follows by iterating the obtained one on successive time intervals.
Let $\bar u(t):=u_1(t)-u_2(t)$ and $\bar p(t):=p_1(t)-p_2(t)$. Then, these functions solve
\begin{equation}\label{1.dif}
\Dt\bar u-\Dx\bar u+\Nx\bar p+[f(u_1)-f(u_2)]=0,\ \ \Dt\bar p+\divv(D\bar u)=0.
\end{equation}
Integrating the second equation, we get
\begin{equation}\label{1.p}
\bar p(t)=\bar p(0)-\int_0^t\divv(D\bar u(s))\,ds.
\end{equation}
Multiplying now the first equation by $\bar u(t)$ and using that,
due to assumptions \eqref{0.fstr} and \eqref{0.f}, $f'(u)\ge-L$, after the standard
transformations, we end up with
\begin{multline}
\frac d{dt}\|\bar u(t)\|^2_{L^2}+\|\Nx\bar u\|^2_{L^2}+\\+2\(\int_0^t\divv(D\bar u(s))\,ds,\divv\bar u(t)\)\le
C\|\bar p(0)\|^2_{\bar L^2}+2L\|\bar u(t)\|^2_{L^2}.
\end{multline}
Assuming that $T$ is small enough, we estimate
\begin{multline}
\Big|2\(\int_0^t\divv(D\bar u(s))\,ds,\divv\bar u(t)\)\Big|\le \\
\le \int_0^t\|\divv(D\bar u(s))\|^2_{L^2}\,ds+
T\|\divv(\bar u(t))\|^2_{L^2}\le \\\le C\int_0^t\|\Nx\bar u(s)\|^2_{L^2}\,ds+\frac12\|\Nx\bar u(t)\|^2_{L^2}
\end{multline}
and, therefore,
\begin{multline}
\frac d{dt}\|\bar u(t)\|^2_{L^2}+\frac12\|\Nx\bar u\|^2_{L^2}-\\-C\int_0^t\|\Nx\bar u(s)\|^2_{L^2}\,ds\le
C\|\bar p(0)\|^2_{\bar L^2}+2L\|\bar u(t)\|^2_{L^2}.
\end{multline}
Integrating this inequality in time, we end up with
\begin{multline}
\|\bar u(t)\|^2_{L^2}+\int_0^t\(\frac12-C(t-s)\)\|\Nx \bar u(s)\|^2_{L^2}\,ds \le
\\
\le
C\|(\bar u(0),\bar p(0))\|^2_{E}+2L\int_0^t\|\bar u(s)\|^2_{L^2}\,ds.
\end{multline}
Fixing now $T$ small enough that the integral in the left-hand side is positive and applying the
Gronwall inequality,
we finally arrive at
\begin{equation}
\|\bar u(t)\|^2_{L^2}+\int_0^t\|\Nx\bar u(s)\|^2_{L^2}\,ds\le K\|(\bar u(0),\bar p(0))\|^2_{E},\ \ t\le T.
\end{equation}
The corresponding estimate for the $p$-component follows now from \eqref{1.p}. Thus, the
estimate \eqref{1.lip} is verified and the theorem is proved.
\end{proof}
\begin{corollary}\label{Cor1.sem} Let the assumptions of Theorem \ref{Th1.E-dis} hold.
Then, equations \eqref{1.main} generate a dissipative
globally Lipschitz continuous semigroup $S(t)$ in the phase space $E$:
\begin{equation}\label{1.sem}
S(t)(u_0,p_0):=(u(t),p(t)),
\end{equation}
where $(u(t),p(t))$ is a unique energy solution of \eqref{1.main}
with the initial data $(u_0,p_0)\in E$.
\end{corollary}
\begin{proof} According to Theorems \ref{Th1.E-dis} and \ref{Th1.un}, we only need to verify the
existence of a weak solution. This can be done in many standard ways; one of them is to use the vanishing
viscosity method. Namely, we may approximate \eqref{1.main} by a family of parabolic equations:
$$
\Dt u-\Dx u+\Nx p=g,\ \Dt p+\divv(Du)=\nu\Dx p,\ u\big|_{\partial\Omega}=0,\
\partial_np\big|_{\partial\Omega}=0,
$$
where $\nu>0$ is a small parameter. The solution of this parabolic problem can be obtained using e.g.,
the Galerkin approximations, and the passage to the limit as $\nu\to0$ is also straightforward since
the analogue of \eqref{1.eq} gives the necessary estimates, uniform with respect to $\nu$ (although they are non-dissipative,
this is not important for the existence of a solution on a finite time interval).
So, we omit the details here.
\end{proof}
By the analogy with strongly damped wave equation (see \cite{PV06,KZ09} and references therein),
one may expect that \eqref{1.main} partially possesses
an instantaneous smoothing property. The next result shows
that such a smoothing indeed holds.
\begin{theorem}\label{Th1.sm-ins} Let the assumptions of Theorem \ref{Th1.E-dis} hold and let $(u,p)$ be a weak
energy solution of problem \eqref{1.main}.
Then the following partial smoothing property holds:
\begin{multline}\label{1.sm}
t\|\Nx u(t)\|^2_{L^2}+t^2\|\Dt u(t)\|^2_{L^2}+t\|\Dt p(t)\|^2_{L^2}+\\+
\int_0^{t}s^2\|\Nx\Dt u(s)\|^2_{L^2}\,ds\le C(1+\|g\|^2_{L^2}+\|(u(0),p(0))\|^2_{E}),
\end{multline}
where $t\in[0,1]$ and the constant $C$ is independent of $t$ and $u$.
\end{theorem}
\begin{proof} Let us first multiply the first equation of \eqref{1.main} by $t\Dt u$ and integrate over $\Om$. Then,
using the gradient structure of nonlinearity $f$, we arrive at
\begin{multline}\label{1.dt}
\frac d{dt}\(\frac t2\|\Nx u\|^2_{L^2}+t(F(u),1)-t(p,\divv u)\)+t\|\Dt u\|^2_{L^2}=\\=
-(p,\divv u)+\frac12\|\Nx u\|^2_{L^2}+(F(u),1)+t(\divv(Du),\divv u)+t(g,\Dt u),
\end{multline}
where $F(u):=\frac12\int_0^{|u|^2}\varphi(z)\,dz$.
Integrating this identity in time and using the estimate \eqref{1.E-dis}, we arrive at the following
smoothing property:
\begin{multline}\label{1.sm1}
t\|\Nx u(t)\|^2_{L^2}+t\|u(t)\|_{L^{2(l+1)}}^{2(l+1)}+t\|\Dt p(t)\|^2_{L^2}+\\
+\int_0^ts\|\Dt u(s)\|^2_{L^2}\,ds
\le C\(\|(u(0),p(0))\|^2_{E}+1+\|g\|^2_{L^2}\),
\end{multline}
where $t\in[0,1]$ and $C$ is independent of $u$, $p$ and $t$.
\par
Let us now differentiate equations \eqref{1.main} in time and denote
$v:=\Dt u$ and $q=\Dt p$. Then, we end up with the following equations
\begin{equation}\label{1.mdif}
\Dt v-\Dx v+\Nx q+f'(u)v=0,\ \ \Dt q+\divv(Dv)=0.
\end{equation}
Multiplying the first equation of \eqref{1.mdif} by $t^2 v$ and integrating over $\Om$, we get
\begin{multline}\label{1.dtv}
\frac12\frac d{dt}\left(t^2\|v(t)\|^2_{L^2}\right)+t^2\|\Nx v\|^2_{L^2}+
\\+t^2(f'(u)v,v)= t^2(\Dt p,\divv v)+
t\|\Dt u\|^2_{L^2}.
\end{multline}
Integrating this equality in time and using \eqref{1.sm1} together with the assumption $f'(u)\ge-L$,
we get the desired smoothing property in the form
\begin{multline}
t^2\|\Dt u(t)\|^2_{L^2}+\int_0^ts^2\|\Nx \Dt u(s)\|^2_{L^2}\,ds \le \\\le
C(1+\|g\|^2_{L^2}+\|(u(0),p(0))\|_{E}^2),
\end{multline}
where $t\in[0,1]$, which finishes the proof of the theorem.
\end{proof}
Combining smoothing estimate \eqref{1.sm} with the dissipative estimate
\eqref{1.E-dis}, we get the following result.
\begin{corollary}\label{Cor1.smdt} Let the assumptions of Theorem \ref{Th1.E-dis} hold and let $(u,p)$ be a weak
energy solution of problem \eqref{1.main}. Then, we have
the following dissipative
estimate for higher norms:
\begin{multline}\label{1.ins-sm}
\|\Nx u(t)\|^2_{L^2}+\|\Dt u(t)\|^2_{L^2}+\int_t^{t+1}\|\Nx\Dt u(s)\|^2_{L^2}\,ds \le \\
\le
\frac{t^2+1}{t^2}\(Q(\|(u(0),p(0))\|_E)e^{-\alpha t}+Q(\|g\|_{L^2})\),
\end{multline}
where the positive constant $\alpha$ and monotone function $Q$ are independent of $t$, $u$ and $p$.
\end{corollary}
This estimate, in turn, allows us (analogously to the case of strongly damped
wave equations, see \cite{PV06,KZ09}) to reduce the study of the asymptotic smoothness
for solutions to the following truncated auxiliary problem
\begin{equation}\label{1.trunc}
\begin{cases}
-\Dx u+\Nx p+f(u)=g(t),\ u\big|_{\partial\Omega}=0,\\
\Dt p+\divv(Du)=0,\ p\big|_{t=0}=p_0,
\end{cases}
\end{equation}
where the external force $g(t)=g-\Dt u(t)$ satisfies the estimate
\begin{equation}\label{1.g}
\|g\|_{L^\infty(\R_+,L^2(\Omega))}\le C
\end{equation}
which will be studied in the next sections. We also mention here that, in order to restore the
$u$-component of a solution $(p,u)$ of this problem in a unique way from the $p$-component,
we need to assume in addition that
\begin{equation}\label{1.fmon}
f'(u)\ge0.
\end{equation}
This assumption, however, is not restrictive since, in the general case, the extra term $Lu$ can be added to
the nonlinearity and also to the external force $g(t)$ and the $L^2(\Omega)$-norm of this term
is under the control.
\section{Asymptotic regularity}\label{s2}
In this section, we study the asymptotic smoothing for the truncated system \eqref{1.trunc} which is also
of independent interest. We will mainly
concentrate here on the case of critical quintic growth rate of the nonlinearity ($f(u)\sim u^5$).
The subcritical case is essentially simpler since the standard linear splitting of the solution
semigroup on a contracting and compact components works. In contrast to this, we need a {\it nonlinear}
splitting in the critical case. Moreover, due to specific structure of our problem, we need a
combination of different decompositions. We start with the following splitting
$$
p=q+r, \ \ u=v+w,
$$
where
\begin{equation}\label{3.comp}
\begin{cases}
\Dt q+\divv(Dv)=0,\ \ q\big|_{t=0}=p\big|_{t=0},\\
-\Dx v+\Nx q+f(v)+Lv=0,\ \ v\big|_{\partial\Omega}=0
\end{cases}
\end{equation}
and
\begin{equation}\label{3.contr}
\begin{cases}
\Dt r+\divv(Dw)=0,\ \ r\big|_{t=0}=0,\\
-\Dx w+\Nx r+[f(u)-f(v)]=Lv+g(t),\ \ w\big|_{\partial\Omega}=0.
\end{cases}
\end{equation}
According to the results of previous section, we may assume without loss of generality that
$p(0)$ belongs to the absorbing ball in $\bar L^2(\Omega)$. Then, from the analogues of dissipative
estimates for equation \eqref{3.comp}, we conclude that
\begin{equation}\label{3.R}
\|p(t)\|_{L^2}+\|q(t)\|_{\bar L^2}+\|r(t)\|_{\bar L^2}+\|u(t)\|_{H^1}+\|v(t)\|_{H^1}+\|w(t)\|_{H^1}\le R
\end{equation}
for all $t\ge0$. We start with the contracting part $(q,v)$.
\begin{proposition}\label{Prop3.contr} Let the function $f$ satisfy \eqref{0.f},
\eqref{0.fstr} and \eqref{1.fmon}, $D=D^*>0$ and estimates \eqref{3.R} and \eqref{1.g} hold.
Then, there exists $L=L(R)$
such that the solution $(q(t),v(t))$ of problem \eqref{3.comp} satisfies the estimate:
\begin{equation}
\|q(t)\|^2_{\bar L^2}+\|v(t)\|^2_{H^1}\le Ce^{-\alpha t}\|p(0)\|^2_{\bar L^2},
\end{equation}
where positive constants $C$ and $\alpha$ are independent of $t$, $u$ and $p$.
\end{proposition}
\begin{proof} We fix $L>0$ in such a way that
$$
f(v).Dv+Lv.Dv\ge0,\ v\in\R^3
$$
(it is possible to do so since $\varphi(z)\ge-C$ and $v.Dv\ge0$, so $f(v).Dv\ge-C\,v.Dv$). Then, multiplying the first and second equations
of \eqref{3.comp} by $q$ and $Dv$ respectively and integrating over $\Om$, we end up with
\begin{equation}
\frac12\frac d{dt}\|q\|^2_{\bar L^2}+\|\Nx v\|^2_{L^2_D}\le 0.
\end{equation}
Multiplying now the second equation of \eqref{3.comp} by $\mathfrak Bq$ and using the inequality
$$
\|f(v)\|_{H^{-1}}\le C(1+\|v\|_{H^1}^4)\|v\|_{H^1}\le C_R\|\Nx v\|_{L^2_D},
$$
we infer that
$$
\|q\|_{\bar L^2}^2\le C'_R\|\Nx v\|_{L^2_D}^2
$$
and, therefore,
$$
\frac12\frac d{dt}\|q\|^2_{\bar L^2}+\alpha_R\|q\|^2_{\bar L^2}\le 0,
$$
for some positive $\alpha_R$ depending only on $R$. Applying the Gronwall inequality, we arrive
at the desired estimate for $q$:
$$
\|q(t)\|^2_{\bar L^2}\le e^{-\alpha_R t}\|p(0)\|^2_{\bar L^2}.
$$
To get the desired estimate for $\|v\|^2_{H^1}$, it remains to note that multiplication of
the second equation of \eqref{3.comp} by $Dv$ gives
$$
\|\Nx v(t)\|^2_{L^2_D}\le C\|q(t)\|^2_{\bar L^2}.
$$
Thus, the proposition is proved.
\end{proof}
We now turn to the smooth part $(w(t),r(t))$ of the solution generated by the problem \eqref{3.contr}.
At the first step, we derive an {\it exponentially growing} estimate for this part in higher norms
which will be improved later.
\begin{proposition}\label{Prop3.gr} Let the assumptions of Proposition \ref{Prop3.contr}
hold and let $\delta\in(0,\frac12)$.
Then, the following estimate for the
solution $(w(t),r(t))$ of \eqref{3.contr} is valid:
\begin{equation}\label{3.gr}
\|r(t)\|^2_{H^\delta}+\|w(t)\|^2_{H^{1+\delta}}\le Ce^{Kt},
\end{equation}
where $K>0$ and the constant $C$ depends on $g$ (through assumption \eqref{1.g})
and $R$, but is independent of $t$, $p$ and $u$.
\end{proposition}
\begin{proof} To verify this estimate we need the following standard lemma.
\begin{lemma}\label{Lem3.w} Let $a(x)\ge0$ be a symmetric measurable matrix and
the function $w\in H^1_0(\Omega)\cap L^2_a(\Omega)$ be a solution of the following problem:
\begin{equation}\label{3.ell}
-\Dx w+a(x)w=\Nx r+g,\ \ w\big|_{\partial\Omega}=0,
\end{equation}
where $L^2_a(\Omega)$ is a weighted Lebesgue space determined by the semi-norm
$$
\|w\|_{L^2_a}^2:=\int_{\Omega}a(x)w(x)\cdot w(x)\,dx<\infty,
$$
$r\in \bar H^\delta(\Omega):=H^\delta(\Omega)\cap \bar L^2(\Omega)$ for some $\delta\in(0,\frac12)$, and $g\in L^2(\Omega)$. Then,
the following estimate holds:
\begin{equation}\label{w-est}
\|w\|_{L^s}\le C(\|r\|_{\bar H^\delta}+\|g\|_{L^2}),
\end{equation}
where the constant $C$ is independent of $a$, $g$, $w$ and $r$ and $s=\frac6{1-2\delta}$ is the Sobolev embedding
exponent for $H^{1+\delta}\subset L^s$.
\end{lemma}
\begin{proof}[Proof of the lemma] Since $g$ is more regular than $\Nx r$, it suffices
to verify the estimate for $g=0$ only. We give below only the formal derivation of \eqref{w-est} which can
be justified by standard approximation arguments. To this end, we multiply equation \eqref{3.ell} by $w|w|^n$, where
the exponent $n$ will be fixed later, and integrate over $\Om$. This gives
$$
(|\Nx w|^2,|w|^n)+\|\Nx(|w|^{\frac{n+2}2})\|^2_{L^2}\le C(|r|(|\Nx w||w|^{n/2}),|w|^{n/2}).
$$
Using the proper H\"older inequality together with Sobolev embeddings, we get
$$
(|\Nx w|^2,|w|^n)+\|w\|_{L^{3(n+2)}}^{n+2}\le
C\|r\|_{\bar H^\delta}(|\Nx w|^2,|w|^n)^{1/2}\|w\|^{n/2}_{L^{mn/2}},
$$
where $\frac12-\frac\delta3+\frac12+\frac1m=1$, i.e., $m=\frac3\delta$. Therefore, we have
$$
\|w\|_{L^{3(n+2)}}^{n+2}\le C\|r\|_{\bar H^\delta}^{n+2}+\frac12\|w\|^{n+2}_{L^{\frac{3n}{2\delta}}}.
$$
Fixing now $n$ in such a way that $3(n+2)=\frac{3n}{2\delta}$, we see that $3(n+2)=s$ and the last
estimate finishes the proof of the lemma.
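For the reader's convenience, the value of $n$ determined by this condition can be computed explicitly:

```latex
% solving 3(n+2)=\frac{3n}{2\delta} for n and checking the resulting exponent:
\[
3(n+2)=\frac{3n}{2\delta}
\ \Longleftrightarrow\ n=\frac{4\delta}{1-2\delta},
\qquad\text{whence}\qquad
3(n+2)=3\cdot\frac{4\delta+2(1-2\delta)}{1-2\delta}=\frac6{1-2\delta}=s.
\]
```

In particular, $n>0$ exactly because $\delta<\frac12$.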
\end{proof}
We now return to the proof of the proposition. First, applying the lemma to the second
equation of \eqref{3.contr} with
$$
a(x):=\int_0^1f'(\kappa u(x)+(1-\kappa) v(x))\,d\kappa\ge0,
$$
we end up with
\begin{equation}
\|w(t)\|_{L^s}\le C(\|r(t)\|_{\bar H^\delta}+\|g(t)\|_{L^2}+L\|v(t)\|_{L^2})\le
C\(\|r(t)\|_{\bar H^\delta}+1\).
\end{equation}
Second, using the growth restriction on $f$ and Sobolev embedding theorems, it is not difficult to see that
\begin{equation}\label{3.ffrac}
\|f(u)-f(v)\|_{H^{-1+\delta}}\le C(1+\|u\|_{H^1}^4+\|v\|^4_{H^1})\|u-v\|_{L^s}.
\end{equation}
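For completeness, we sketch the proof of \eqref{3.ffrac}, assuming the natural growth bound $|f'(w)|\le C(1+|w|^4)$ on the derivative (this bound is an assumption here, consistent with the quintic growth of $f$ used in \eqref{3.fcont} below):

```latex
% Mean value representation of the difference:
\[
f(u)-f(v)=\Big(\int_0^1f'(\kappa u+(1-\kappa)v)\,d\kappa\Big)(u-v)=:h\,(u-v).
\]
% By duality, L^q\subset H^{-1+\delta} with q=\frac6{5-2\delta}, since
% H^{1-\delta}\subset L^{q'} with \frac1{q'}=\frac12-\frac{1-\delta}3.
% The Holder inequality with \frac1q=\frac23+\frac1s and H^1\subset L^6 then give
\[
\|h(u-v)\|_{H^{-1+\delta}}\le C\|h\|_{L^{3/2}}\|u-v\|_{L^s}
\le C(1+\|u\|_{H^1}^4+\|v\|_{H^1}^4)\|u-v\|_{L^s}.
\]
```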
Therefore,
\begin{equation}\label{3.ffrac1}
\|f(u(t))-f(v(t))\|_{H^{-1+\delta}}\le C_R\|w(t)\|_{L^s}\le C_R\(\|r(t)\|_{\bar H^\delta}+1\).
\end{equation}
Third, we multiply the second equation of \eqref{3.contr} by $(-\Dx)^\delta w$ and integrate over $\Om$. This gives
\begin{multline}
\|w\|^2_{H^{1+\delta}}=((-\Dx)^{\frac{\delta-1}2}\Nx r,(-\Dx)^{\frac{\delta+1}2}w)+\\+
((-\Dx)^{\frac{\delta-1}2}[f(u)-f(v)],(-\Dx)^{\frac{\delta+1}2}w)-(Lv(t)+g(t),(-\Dx)^{\delta}w)
\end{multline}
and therefore
\begin{equation}\label{3.wr}
\|w\|_{H^{1+\delta}}\le C(\|f(u)-f(v)\|_{H^{-1+\delta}}+\|r\|_{\bar H^{\delta}}+1)\le
C_R\(1+\|r\|_{\bar H^\delta}\).
\end{equation}
Finally, from the first equation of \eqref{3.contr}, we get
\begin{multline}
\|r(t)\|_{\bar H^\delta}\le\int_0^t\|\divv(Dw(\tau))\|_{H^\delta}\,d\tau \le \\\le
C\int_0^t\|w(\tau)\|_{H^{1+\delta}}\,d\tau\le C_R\int_0^t(\|r(\tau)\|_{\bar H^\delta}+1)\,d\tau
\end{multline}
and the Gronwall inequality finishes the proof of the proposition.
\end{proof}
At the next step, following \cite{Z04} (see also \cite{MSSZ} for some improvements), we split the
solution $u(t)$ of \eqref{1.trunc} into a uniformly small part ($\bar u(t)$) and a smooth
part ($\tilde u(t)$).
\begin{proposition}\label{Prop3.split} Let $\beta>0$ be arbitrary and $\delta\in(0,\frac12)$. Let also
$(p(t),u(t))$ be a solution of \eqref{1.trunc}
satisfying \eqref{3.R}. Then, there exists $T=T_\beta$ such that
the function $u(t)$ can be split into a sum
\begin{equation}\label{3.split}
u(t)=\bar u(t)+\tilde u(t),
\end{equation}
where for every $t\ge T$
\begin{equation}\label{3.good}
\|\bar u(t)\|_{H^1}\le\beta,\ \ \|\tilde u(t)\|_{H^{1+\delta}}\le C_\beta
\end{equation}
and the constant $C_\beta$ depends only on $\beta$, $\delta$ and $R$.
\end{proposition}
\begin{proof} This splitting is an almost immediate corollary of the proved Propositions \ref{Prop3.contr}
and \ref{Prop3.gr}. Indeed, let us fix $T=T_\beta$ from the equation
$$
Ce^{-\alpha T}R^2=\beta^2,
$$
where all of the constants are the same as in Proposition \ref{Prop3.contr}. Then, for the $v$-component
of the solution $u$, we will have the estimate
$$
\|v(t)\|_{H^1}\le\beta,\ \ t\ge T.
$$
Moreover, if we fix $C_\beta$ from $Ce^{2KT}=C_\beta^2$ where the constants
are the same as in \eqref{3.gr}, we get
$$
\|w(t)\|_{H^{1+\delta}}\le C_\beta
$$
if $t\le 2T$. Thus, the functions $v(t)$ and $w(t)$ give the desired splitting of $u(t)$ for $t\in[T,2T]$.
\par
To construct the desired splitting for all $t\ge T$, we define functions $(q_n(t),v_n(t))$
and $(r_n(t),w_n(t))$ for all $n\in\Bbb N$ as solutions of \eqref{3.comp} and \eqref{3.contr} respectively,
but starting from $t=T(n-1)$ with the initial conditions
$$
q_n\big|_{t=T(n-1)}=0,\ \ r_n\big|_{t=T(n-1)}=p\big|_{t=T(n-1)}.
$$
Then, arguing analogously, we see that $u(t)=v_n(t)+w_n(t)$ gives the required splitting on the
interval $t\in[Tn,T(n+1)]$. Finally, to get the desired splitting for all $t\ge T$, we define $\bar u$ and
$\tilde u(t)$ as piecewise continuous functions:
$$
\bar u(t)=v_n(t),\ t\in[Tn,T(n+1)),\ \ \tilde u(t)=w_n(t),\ \ t\in[Tn,T(n+1)),\ \ n\in\Bbb N.
$$
This finishes the proof of the proposition.
\end{proof}
We are now ready to refine Proposition \ref{Prop3.gr} and get the dissipative estimate for $(r(t),w(t))$.
\begin{proposition}\label{Prop3.dis} Let the assumptions of Proposition \ref{Prop3.gr} hold.
Then the solution $(r(t),w(t))$ of problem \eqref{3.contr} satisfies the estimate
\begin{equation}\label{3.notgr}
\|r(t)\|_{\bar H^{\delta}}+\|w(t)\|_{H^{1+\delta}}\le C,
\end{equation}
where the constant $C$ depends on $R$, but is independent of $u$, $p$ and $t$.
\end{proposition}
\begin{proof} Without loss of generality, we may assume that estimates \eqref{3.good} hold
for $t\ge0$. The general case is reduced to this particular one by a proper time shift. The only
difference is that we need to allow non-zero initial data for $r(t)$. Since the $H^\delta$ norm of $r(t)$ on
the interval $t\in[0,T]$ can be controlled by \eqref{3.gr}, we just need to assume that
\begin{equation}
r\big|_{t=0}=r_0,\ \ \|r_0\|_{H^\delta}\le C_\beta.
\end{equation}
This also gives that
\begin{equation}\label{3.vgood}
\|v(t)\|_{H^1}\le \beta,\ \ t\ge0.
\end{equation}
Moreover, again without loss of generality, we may assume that $f'(0)=0$. In the general case, the term
$f'(0)w(t)$ is of lower order and can be treated as a part of $g(t)$.
\par
The idea of the proof is to refine estimate \eqref{3.ffrac1}
using the result of Proposition \ref{Prop3.split}. First, we refine \eqref{3.ffrac} using the fact
that $f'(0)=0$, namely, this assumption gives us that
\begin{equation}\label{3.ffrac3}
\|f(u)-f(v)\|_{H^{-1+\delta}}\le C(\|u\|_{H^1}+\|v\|_{H^1})(1+\|u\|_{H^1}^3+\|v\|_{H^1}^3)\|u-v\|_{L^s}
\end{equation}
for some constant $C$ depending only on $f$. Second, we write
$$
f(u)-f(v)=[f(\bar u+\tilde u)-f(\bar u)]+[f(\bar u)-f(v)]
$$
and apply \eqref{3.ffrac3} to both terms on the right-hand side. Indeed,
since $H^{1+\delta}\subset L^s$ and the function $\tilde u$ is bounded in $H^{1+\delta}$, we have
$$
\|f(\bar u+\tilde u)-f(\bar u)\|_{H^{-1+\delta}}\le
C(1+\|u\|_{H^1}^4+\|\bar u\|_{H^1}^4)\|\tilde u\|_{H^{1+\delta}}\le C_1
$$
for some $C_1>0$ which depends on $\beta$ and $R$. Applying estimate
\eqref{3.ffrac3} to the second term and using inequalities \eqref{3.good} and \eqref{3.vgood}, we get
$$
\|f(\bar u)-f(v)\|_{H^{-1+\delta}}\le C\beta\|\bar u-v\|_{L^s}
$$
and using that
$$
\|\bar u-v\|_{L^s}=\|\tilde u-w\|_{L^s}\le \|w\|_{L^s}+C\|\tilde u\|_{H^{1+\delta}}\le \|w\|_{L^s}+C
$$
we get
\begin{equation}\label{3.ffrac4}
\|f(u)-f(v)\|_{H^{-1+\delta}}\le C\beta\|w\|_{L^s}+C_\beta,
\end{equation}
where the constant $C$ is independent of $\beta>0$. Together with
the result of Lemma \ref{Lem3.w}, we finally arrive at the refined estimate
\begin{equation}\label{3.ffrac5}
\|f(u(t))-f(v(t))\|_{H^{-1+\delta}}\le C\beta\|r(t)\|_{H^{\delta}}+C_\beta.
\end{equation}
Crucial for us is that the constant $C$ is independent of $\beta$, so the coefficient
in front of $\|r(t)\|_{H^\delta}$ can be made arbitrarily small by the choice of $\beta$.
\par
We are now ready to complete the proof of the proposition. To this end, we treat equation \eqref{3.contr} as
a linear equation of the form \eqref{2.au}, interpreting the term $f(u(t))-f(v(t))$ as a part of the external
force $g(t)$, and use estimate \eqref{2.bad} with $K_\delta=-\alpha<0$, see Corollary \ref{CorA.main}. This gives
\begin{equation}
\|r(t)\|_{\bar H^\delta}\le C\|r(0)\|_{\bar H^\delta}e^{-\alpha t}+
C_\beta+C\beta\int_0^t e^{-\alpha(t-\tau)}\|r(\tau)\|_{\bar H^\delta}\,d\tau.
\end{equation}
Fixing now $\beta>0$ in such a way that $C\beta=\frac\alpha2$ and applying the Gronwall inequality,
we end up with the desired estimate
$$
\|r(t)\|_{\bar H^\delta}\le C\|r(0)\|_{\bar H^\delta}e^{-\alpha t/2}+C_1.
$$
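For completeness, one way to carry out this Gronwall argument is the following (a sketch, with the same constants as above):

```latex
% Set y(t):=e^{\alpha t}\|r(t)\|_{\bar H^\delta} and multiply the integral inequality by e^{\alpha t}:
\[
y(t)\le Cy(0)+C_\beta e^{\alpha t}+\frac\alpha2\int_0^t y(\tau)\,d\tau.
\]
% The integral form of the Gronwall inequality then yields
\[
y(t)\le Cy(0)e^{\alpha t/2}+2C_\beta e^{\alpha t},
\]
% and dividing by e^{\alpha t} gives the stated decay with the halved exponent \alpha/2.
```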
Combining this estimate with \eqref{3.wr}, we end up with \eqref{3.notgr} and finish the
proof of the proposition.
\end{proof}
We now summarize our results concerning the truncated system \eqref{1.trunc} under the assumptions \eqref{1.fmon} and
\eqref{1.g} for the nonlinearity $f$ and
the external force $g(t)$. We first mention that the global well-posedness and dissipativity of
this problem in the space $\bar L^2(\Omega)$
can be obtained exactly as in Theorems \ref{Th1.E-dis} and \ref{Th1.un}, so we have the estimate
\begin{equation}\label{3.trunc-dis}
\|p(t)\|_{\bar L^2}^2+\|u(t)\|^2_{H^1}\le Q(\|p(0)\|_{\bar L^2})e^{-\alpha t}+Q(\|g\|_{L^\infty}),
\end{equation}
where the positive constant $\alpha$ and the monotone function $Q$ are independent of $p$ and $t$.
\par
Thus, problem \eqref{1.trunc} can be considered independently of problem \eqref{1.main} on the
whole phase space $\bar L^2(\Omega)$ and estimate \eqref{3.trunc-dis} gives us the existence of
an absorbing ball in $\bar L^2(\Omega)$, so the key assumption \eqref{3.R} will be
automatically satisfied if we take the initial data from this absorbing ball.
\par
Let us denote by $\Cal U(t):\bar L^2(\Omega)\to\bar L^2(\Omega)$ the solution operator for
problem \eqref{1.trunc}:
\begin{equation}
\Cal U(t)p(0):=p(t),
\end{equation}
where $p(t)$ is a solution of \eqref{1.trunc}. Then, taking into account that the $u(t)$-component
of the solution can be recovered in a unique way (due to Lemma \ref{Lem3.w}) if the $p(t)$-component
is known, we can reformulate the results of Propositions \ref{Prop3.dis} and \ref{Prop3.contr} as follows.
\begin{corollary}\label{Cor3.exp} Let the nonlinearity $f$ satisfy \eqref{1.fmon},
\eqref{0.f} and \eqref{0.fstr} and the function $g$ satisfy \eqref{1.g}.
Then, for a sufficiently large $R$, the $R$-ball
$\Cal B_R^\delta$ of radius $R$ in $\bar H^\delta(\Omega)$ is an exponentially attracting set for the solution
operator $\Cal U(t)$, i.e., there exist a positive constant $\alpha>0$ and a monotone
function $Q$ such that, for every bounded set $B\subset\bar L^2(\Omega)$,
\begin{equation}\label{3.expd}
\dist_{\bar L^2}(\Cal U(t)B,\Cal B_R^\delta)\le Q(\|B\|_{\bar L^2})e^{-\alpha t},
\end{equation}
where $\dist_H(A,B)$ stands for the non-symmetric Hausdorff distance between the sets $A$
and $B$ in a Banach space $H$.
\end{corollary}
We also have the analogue of the dissipative estimate \eqref{3.trunc-dis} in the space
$H^\delta$ for any exponent $\delta\in[0,\frac12)$.
\begin{corollary}\label{Cor3.hd} Let the assumptions of Corollary \ref{Cor3.exp} hold and let
$p(0)\in \bar H^\delta(\Omega)$ for some $\delta\in[0,\frac12)$. Then the following
dissipative estimate holds for the solution of problem \eqref{1.trunc}:
\begin{equation}\label{3.disd}
\|p(t)\|_{\bar H^\delta}^2+\|u(t)\|^2_{H^{1+\delta}}\le
Q(\|p(0)\|_{\bar H^\delta})e^{-\alpha t}+Q(\|g\|_{L^\infty(\R_+,L^2)})
\end{equation}
for some positive $\alpha$ and monotone function $Q$ which are independent of $p$ and $t$.
\end{corollary}
Indeed, this estimate can be proved analogously to Proposition \ref{Prop3.dis}; the proof is
even simpler since we may take $q(t)=v(t)=0$, so we leave the details to the reader.
\par
Thus, we have verified that the solution operator $\Cal U(t)$ is well-defined and dissipative in
$\bar H^\delta(\Omega)$ for any $0\le\delta<\frac12$. It is also worth noting that all of the
estimates obtained so far use only the fact that
\begin{equation}\label{7.g}
\|g\|_{L^\infty(\R_+,H^{-1+\delta})}\le C,\ \ \delta\in(0,\frac12).
\end{equation}
The natural next step is to extend this
result to $\delta=1$ using bootstrapping arguments. The situation here is much simpler than for
the first step since the nonlinearity $f$ is subcritical in the phase space $\bar H^\delta(\Omega)$,
so the linear splitting may be used. Moreover, due to the embedding theorem
$H^{1+\frac15}\subset L^{10}$ and the growth restrictions on $f$, we have
\begin{equation}\label{3.fcont}
\|f(u)\|_{L^2}\le C(1+\|u\|_{H^{1+\delta}}^5),\ \ \delta\ge\frac15
\end{equation}
and, therefore, only one more step of iterations is necessary to reach $\delta=1$. Namely, we split the
solution $(p,u)$ as follows:
$$
p(t)=p_1(t)+p_2(t),\ \ u(t)=u_1(t)+u_2(t),
$$
where the decaying component $(p_1(t),u_1(t))$ solves
\begin{equation}\label{3.bcon}
\Dt p_1+\divv(Du_1)=0,\ \ -\Dx u_1+\Nx p_1=0,\ \ p_1\big|_{t=0}=p\big|_{t=0}
\end{equation}
and the smooth component $(p_2(t),u_2(t))$ is a solution of
\begin{equation}\label{3.bcom}
\Dt p_2+\divv(Du_2)=0,\ \ -\Dx u_2+\Nx p_2=g(t)-f(u(t)),\ \ p_2\big|_{t=0}=0.
\end{equation}
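Let us also verify the exponent in the embedding $H^{1+\frac15}\subset L^{10}$ invoked above. In three space dimensions, $H^s(\Omega)\subset L^p(\Omega)$ with $\frac1p=\frac12-\frac s3$, so

```latex
\[
s=1+\frac15=\frac65
\quad\Longrightarrow\quad
\frac1p=\frac12-\frac{6/5}{3}=\frac12-\frac25=\frac1{10}.
\]
```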
Then, the following proposition holds.
\begin{proposition}\label{Prop3.boot} Let $\delta\in[\frac15,\frac12)$ and let the initial
data $p(0)$ belong to the absorbing ball $\Cal B_R^\delta$. Then the following estimates hold
for the solutions of \eqref{3.bcon} and \eqref{3.bcom}:
\begin{equation}
\|p_1(t)\|_{\bar H^{\delta}}+\|u_1(t)\|_{H^{1+\delta}}\le C\| p(0)\|_{\bar H^\delta}e^{-\alpha t}
\end{equation}
and
\begin{equation}
\|p_2(t)\|_{\bar H^{1}}+\|u_2(t)\|_{H^{2}}\le
C\|p(0)\|_{\bar H^\delta}e^{-\alpha t}+C_R(1+\|g\|_{L^\infty(L^2)}),
\end{equation}
where $\alpha>0$ and $C$, $C_R$ are independent of $u$, $p$ and $t$.
\end{proposition}
Indeed, these estimates follow immediately from estimate \eqref{2.bad} with $K_\delta=-\alpha<0$ for the linear
equation, dissipative estimate \eqref{3.disd} and estimate \eqref{3.fcont}.
\par
Analogously to Corollary \ref{Cor3.hd}, this result gives the dissipativity
in the phase space $\bar H^1$.
\begin{corollary}\label{Cor3.h1} Let the assumptions of Corollary \ref{Cor3.exp} hold and let
$p(0)\in \bar H^1(\Omega)$. Then the following
dissipative estimate holds for the solution of problem \eqref{1.trunc}:
\begin{equation}\label{3.dish1}
\|p(t)\|_{\bar H^1}^2+\|u(t)\|^2_{H^{2}}\le
Q(\|p(0)\|_{\bar H^1})e^{-\alpha t}+Q(\|g\|_{L^\infty(\R_+,L^2)})
\end{equation}
for some positive $\alpha$ and monotone function $Q$ which are independent of $p$ and $t$.
\end{corollary}
Indeed, to get this estimate, it is enough to estimate the $L^2$-norm of $f(u)$ using
Corollary \ref{Cor3.hd} and get the desired estimate for the $H^1$-norm from the linear
equation \eqref{2.au} treating $f(u(t))$ as a part of the external force.
\par
Analogously to Corollary \ref{Cor3.exp} the result of Proposition \ref{Prop3.boot} can be rewritten
in the following form.
\begin{corollary}\label{Cor3.exp1} Let the assumptions of Corollary \ref{Cor3.exp} hold. Then, for a
sufficiently large $R$, the $R$-ball
$\Cal B_R^1$ of radius $R$ in $\bar H^1(\Omega)$ is an exponentially
attracting set for the solution
operator $\Cal U(t)$ in $\bar H^\delta$, i.e., there exist a positive constant
$\alpha>0$ and a monotone
function $Q$ such that, for every bounded set $B\subset\bar H^\delta(\Omega)$,
\begin{equation}\label{3.exp1}
\dist_{\bar H^\delta}(\Cal U(t)B,\Cal B_R^1)\le Q(\|B\|_{\bar H^\delta})e^{-\alpha t}.
\end{equation}
\end{corollary}
Moreover, using the Lipschitz continuity of $\Cal U(t)$ in $\bar L^2(\Omega)$,
exponential attractions \eqref{3.expd} and \eqref{3.exp1} together with the transitivity
of exponential attraction (see \cite{FGMZ}), we arrive at the following result.
\begin{corollary}\label{Cor3.exph1} Let the assumptions of Corollary \ref{Cor3.exp} hold.
Then, for a sufficiently
large $R$, the $R$-ball
$\Cal B_R^1$ of radius $R$ in $\bar H^1(\Omega)$ is an exponentially attracting set for the solution
operator $\Cal U(t)$ in $\bar L^2(\Omega)$, i.e., there exist a positive constant $\alpha>0$ and a monotone
function $Q$ such that, for every bounded set $B\subset\bar L^2(\Omega)$,
\begin{equation}\label{3.exph1}
\dist_{\bar L^2}(\Cal U(t)B,\Cal B_R^1)\le Q(\|B\|_{\bar L^2})e^{-\alpha t}.
\end{equation}
\end{corollary}
We conclude this section by translating the obtained results for the truncated system \eqref{1.trunc} to the
initial problem \eqref{1.main}. The next result can be considered as the main result of this section.
\begin{theorem}\label{Th3.main} Let the assumptions of Theorem \ref{Th1.E-dis} hold. Then the $R$-ball $\Bbb B_R^1$ in
the higher energy space
$$
E^1:=[H^2(\Omega)\cap H^1_0(\Omega)]\times \bar H^1(\Omega)
$$
is an exponentially attracting set for the solution semigroup $S(t): E\to E$ generated by the problem
\eqref{1.main} if $R$ is large enough, i.e., there exist $\alpha>0$ and a monotone function $Q$
such that, for every bounded set $B\subset E$,
\begin{equation}\label{3.mainattr}
\dist_E(S(t)B,\Bbb B_R^1)\le Q(\|B\|_E)e^{-\alpha t}.
\end{equation}
Moreover, the problem \eqref{1.main} is well-posed and dissipative in the space $E^1$ as well,
i.e., if $(u(0),p(0))\in E^1$ then the following estimate holds:
\begin{equation}\label{3.maindis}
\|(u(t),p(t))\|_{E^1}\le Q(\|(u(0),p(0))\|_{E^1})e^{-\alpha t}+Q(\|g\|_{L^2})
\end{equation}
for some positive $\alpha$ and monotone $Q$.
\end{theorem}
\begin{proof} Indeed, the exponential attraction \eqref{3.mainattr} follows immediately
from Corollary \ref{Cor3.exph1} and the smoothing property of Corollary \ref{Cor1.smdt}.
\par
To get the dissipative estimate \eqref{3.maindis}, we note that if the initial data $(u(0),p(0))\in E^1$,
we have from equations \eqref{1.main} that
$$
\|u(0)\|_{C}+\|\Dt u(0)\|_{L^2}+\|\Dt p(0)\|_{\bar L^2}\le Q(\|(u(0),p(0))\|_{E^1}),
$$
so we do not need to use multiplication by $t$ and $t^2$ in the estimates given in the proof
of Theorem \ref{Th1.sm-ins} in order to remove the initial data, and this gives us a better analogue
of estimate \eqref{1.ins-sm}:
\begin{equation}\label{3.dtdis}
\|\Nx u(t)\|_{L^2}+\|\Dt u(t)\|_{L^2}\le Q(\|(u(0),p(0))\|_{E^1})e^{-\alpha t}+Q(\|g\|_{L^2}).
\end{equation}
This, in turn, allows us to use the truncated system \eqref{1.trunc} starting from $t=0$. Then the
desired dissipative estimate follows from the analogous estimate \eqref{3.dish1} for the truncated
system. Thus, the theorem is proved.
\end{proof}
\section{Attractors}\label{s3}
In this section, we use the results obtained above for constructing global and exponential
attractors for problem \eqref{1.main}. We start with a global attractor.
\begin{definition}\label{Def4.attr} Let $S(t):E\to E$, $t\ge0$ be a semigroup. Then, a set
$\Cal A\subset E$ is a global attractor for $S(t)$ in $E$ if
\par
1. $\Cal A$ is compact in $E$;
\par
2. $\Cal A$ is strictly invariant, i.e., $S(t)\Cal A=\Cal A$ for all $t\ge0$.
\par
3. $\Cal A$ is an attracting set for $S(t)$ in $E$. The latter means that for every bounded set
$B$ in $E$ and every neighbourhood $\Cal O(\Cal A)$ of the set $\Cal A$ there exists
$T=T(B,\Cal O)$ such that
\begin{equation}
S(t)B\subset \Cal O(\Cal A),\ \ \forall t\ge T.
\end{equation}
If $S(t)$ is a solution semigroup related to an evolutionary equation, then the attractor $\Cal A$ of
$S(t)$ is often called an attractor of this evolutionary equation, see \cite{BV,cv,Lad,MZ,Tem} for more details.
\end{definition}
\begin{theorem}\label{Th4.attr} Let the assumptions of Theorem \ref{Th1.E-dis} hold. Then equation \eqref{1.main} possesses a global
attractor $\Cal A$ in $E$ which is a bounded set in $E^1$. Moreover, this attractor possesses the
following description:
\begin{equation}\label{4.rep}
\Cal A=\Cal K\big|_{t=0},
\end{equation}
where $\Cal K\subset L^\infty(\R,E)$ is the set of all complete (i.e., defined for all $t\in\R$)
solutions of equation \eqref{1.main} which are bounded in $E$.
\end{theorem}
\begin{proof} According to the abstract attractor's existence theorem, see e.g., \cite{BV}, we need to
verify two properties:
\par
1. The operators $S(t)$ are continuous for every fixed $t$ as operators from $E$ to $E$;
\par
2. The semigroup $S(t)$ possesses a compact attracting set in $E$.
\par
The first property is verified in Theorem \ref{Th1.un} and the second
one follows from Theorem \ref{Th3.main}. Since the attractor is always a subset of
a compact attracting set, we get the boundedness of $\Cal A$ in $E^1$ and the
representation formula \eqref{4.rep} also follows from the abstract attractor's
existence theorem. Thus, the theorem is proved.
\end{proof}
We now turn to exponential attractors. These objects have been introduced in \cite{EFNT} in order
to overcome the major drawback of the theory of global attractors, namely, the fact that the
rate of attraction to a global attractor may be arbitrarily slow and that there is no way in general
to control this rate of attraction in terms of physical parameters of the considered equation. This makes
the global attractor sensitive to perturbations and it becomes in a sense unobservable
in finite-time simulations, see \cite{EFNT,EMZ00,EMZ05,MZ} for more details. We start with the formal definition.
\begin{definition}\label{Def4.exp} A set $\Cal M\subset E$ is an exponential attractor for
the semigroup $S(t): E\to E$, $t\ge0$, if
\par
1. $\Cal M$ is a compact set in $E$;
\par
2. $\Cal M$ is semi-invariant $S(t)\Cal M\subset\Cal M$ for $t\ge0$;
\par
3. $\Cal M$ has a finite box-counting dimension in $E$:
$$
\dim_F(\Cal M,E)\le C<\infty;
$$
\par
4. There exist a positive constant $\alpha$ and a monotone function $Q$ such that, for every
bounded set $B\subset E$, we have
\begin{equation}\label{4.expa}
\dist_E(S(t)B,\Cal M)\le Q(\|B\|_E)e^{-\alpha t}
\end{equation}
for all $t\ge0$.
\end{definition}
The next theorem can be considered as the main result of this section.
\begin{theorem}\label{Th4.main} Let the assumptions of Theorem \ref{Th1.E-dis} hold. Then equation \eqref{1.main}
possesses an exponential attractor $\Cal M$ in $E$ which is a bounded set in the space $E^1$.
\end{theorem}
\begin{proof} Following the general strategy, see \cite{EMZ00,EMZ05,FGMZ,MZ},
we first construct a {\it discrete}
exponential attractor $\Cal M_d\subset E^1$ for the discrete semigroup $S_n:=S(T)^n$ generated by the map $S(T)$
restricted to the $R$-ball $\Bbb B_R^1$ in $E^1$. Here we fix $T>0$ in such a way that
$$
S(T):\Bbb B_R^1\to\Bbb B_R^1.
$$
This is possible due to estimate \eqref{3.maindis}. Once the discrete attractor $\Cal M_d$
is constructed, its continuous analogue $\Cal M\subset E^1$ is given by the standard formula
\begin{equation}\label{4.expdc}
\Cal M:=\cup_{t\in[0,T]}S(t)\Cal M_d.
\end{equation}
This, together with \eqref{3.maindis}, gives us the attraction property in $E$ for all
bounded sets of $E^1$. Combining this with the exponential attraction \eqref{3.mainattr} and
transitivity of exponential attraction (see \cite{FGMZ}), we get the desired exponential attraction of
any bounded set in $E$. The semi-invariance follows immediately from semi-invariance of
a discrete attractor and the explicit formula \eqref{4.expdc}. The compactness and
finite-dimensionality also follow from \eqref{4.expdc} if we know, in addition,
that $(t,\xi)\to S(t)\xi$ is Lipschitz (or H\"older) continuous as a map
from $[0,T]\times\Cal M_d$ to $E$. The Lipschitz continuity with respect
to the initial data is verified in Theorem \ref{Th1.un} and the Lipschitz continuity
in time follows from the fact that $\|\Dt u(t)\|_{L^2}$ and $\|\Dt p(t)\|_{\bar L^2}$
are uniformly bounded on $\Bbb B_R^1$ (due to estimate \eqref{3.dtdis}).
Thus, we only need to verify the existence of a discrete exponential attractor
$\Cal M_d$ on a set $\Bbb B_R^1$. To this end, we need the following standard
result on the existence of exponential attractors, see \cite{EMZ00,EMZ05,MZ}.
\begin{lemma}\label{Lem4.expattr} Let $E$ and $V$ be two Banach spaces such that $V$ is compactly embedded in $E$ and
let $\Bbb B\subset E$ be a bounded set in $E$. Assume also that we are given a map
$S:\Bbb B\to\Bbb B$ such that, for every two points $\xi_1,\xi_2\in\Bbb B$, we have a splitting
\begin{equation}\label{4.sp}
S(\xi_1)-S(\xi_2)=\hat\xi+\tilde\xi,
\end{equation}
where
\begin{equation}\label{4.con}
\|\hat\xi\|_{E}\le\kappa\|\xi_1-\xi_2\|_E
\end{equation}
for some $\kappa<\frac12$ and
\begin{equation}\label{4.smo}
\|\tilde\xi\|_{V}\le K\|\xi_1-\xi_2\|_E,
\end{equation}
where $\kappa$ and $K$ are independent of $\xi_1$ and $\xi_2$. Then the discrete semigroup
generated by the iterations of the map $S$ possesses an exponential
attractor $\Cal M_d\subset \Bbb B$ on $\Bbb B\subset E$.
\end{lemma}
To apply this lemma, we need to split the solution $(\bar u(t),\bar p(t))$ of system
\eqref{1.dif} for the difference of two solutions of system \eqref{1.main} into a sum of
a contracting component $(\hat u(t),\hat p(t))$
and a smoothing component $(\tilde u(t),\tilde p(t))$. The first part will solve
the homogeneous linear system:
\begin{equation}\label{4.hat}
\begin{cases}
\Dt\hat u-\Dx\hat u+\Nx\hat p=0,\ \ \hat u\big|_{t=0}=\bar u\big|_{t=0},\\
\Dt\hat p+\divv(D\hat u)=0,\ \ \hat p\big|_{t=0}=\bar p\big|_{t=0}
\end{cases}
\end{equation}
and the smoothing component is taken as a solution of
\begin{equation}\label{4.tilde}
\Dt\tilde u-\Dx\tilde u+\Nx\tilde p=-l(t)\bar u,\ \ \Dt\tilde p+\divv(D\tilde u)=0,\ \
\tilde u\big|_{t=0}=\tilde p\big|_{t=0}=0,
\end{equation}
where $l(t):=\int_0^1f'(\tau u_1+(1-\tau) u_2)\,d\tau$. We recall that, according to Theorem~\ref{Th1.un},
\begin{equation}\label{4.lip}
\|\bar u(t)\|_{L^2}+\|\bar p(t)\|_{\bar L^2}\le Ce^{Kt}\(\|\bar u(0)\|_{L^2}+\|\bar p(0)\|_{\bar L^2}\).
\end{equation}
Moreover, since $(u_i(0),p_i(0))\in\Bbb B_R^1$, $i=1,2$, due to \eqref{3.dtdis} the $C$-norm of $u_i(t)$ is
uniformly bounded and, therefore,
\begin{equation}\label{4.per}
\|l(t)\bar u(t)\|_{L^2}\le Ce^{Kt}\(\|\bar u(0)\|_{L^2}+\|\bar p(0)\|_{\bar L^2}\),
\end{equation}
so the term $l(t)\bar u(t)$ can be treated as an external force. Estimates \eqref{4.con} and
\eqref{4.smo} are verified in the next two lemmas.
\begin{lemma} Let the above assumptions hold. Then, the solution $(\hat u(t),\hat p(t))$
of problem \eqref{4.hat} satisfies the estimate:
\begin{equation}\label{4.decc}
\|\hat u(t)\|^2_{L^2}+\|\hat p(t)\|^2_{\bar L^2}\le
Ce^{-\alpha t}\(\|\bar u(0)\|_{L^2}^2+\|\bar p(0)\|^2_{\bar L^2}\),
\end{equation}
where the positive constants $C$ and $\alpha$ are independent of $u_i$ and $p_i$.
\end{lemma}
\begin{proof}[Proof of the lemma] Indeed, multiplying the first equation of \eqref{4.hat}
by $D\hat u$, integrating with respect to $x$ and using the second equation, we arrive at
$$
\frac12\frac d{dt}\(\|\hat u(t)\|^2_{L^2_D}+\|\hat p(t)\|^2_{\bar L^2}\)+\|\Nx \hat u(t)\|^2_{L^2_D}=0.
$$
Moreover, multiplying the first equation by $-\mathfrak B\hat p(t)$ and
using again the second equation, we get
$$
-\frac d{dt}(\hat u(t),\mathfrak B\hat p(t))+\|\hat p\|^2_{\bar L^2}-(\hat u(t),\mathfrak B\divv(D\hat u(t)))=0.
$$
Multiplying this equation by small positive $\eb$ and taking a sum with the previous equation,
we finally get
$$
\frac12\frac d{dt}\(\|\hat u(t)\|^2_{L^2_D}-2\eb(\hat u(t),\mathfrak B\hat p(t))+\|\hat p(t)\|^2_{\bar L^2}\)+
\alpha\|\hat u(t)\|^2_{L^2_D}+\eb\|\hat p(t)\|^2_{\bar L^2}\le0
$$
for some positive $\alpha$. The Gronwall inequality applied to this relation gives the desired
result if $\eb>0$ is small enough. Thus, the lemma is proved.
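Let us also indicate why the perturbed functional is equivalent to the energy norm for small $\eb$. Assuming, as is implicit above, that $\mathfrak B$ is bounded from $\bar L^2(\Omega)$ to $L^2(\Omega)$ and that the $L^2_D$-norm is equivalent to the $L^2$-norm, we have $|(\hat u,\mathfrak B\hat p)|\le C\|\hat u\|_{L^2}\|\hat p\|_{\bar L^2}$ and, therefore,

```latex
% equivalence of the perturbed functional and the energy norm for \eb small enough:
\[
\frac12\(\|\hat u\|^2_{L^2_D}+\|\hat p\|^2_{\bar L^2}\)\le
\|\hat u\|^2_{L^2_D}-2\eb(\hat u,\mathfrak B\hat p)+\|\hat p\|^2_{\bar L^2}\le
2\(\|\hat u\|^2_{L^2_D}+\|\hat p\|^2_{\bar L^2}\),
\]
% so the exponential decay of the functional indeed gives estimate \eqref{4.decc}.
```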
\end{proof}
\begin{lemma}Let the above assumptions hold. Then, the solution $(\tilde u(t),\tilde p(t))$
satisfies the following estimate:
\begin{equation}\label{4.sms}
\|\tilde u(t)\|_{H^1}^2+\|\tilde p(t)\|_{\bar H^1}^2\le Ce^{Kt}
\(\|\bar u(0)\|^2_{L^2}+\|\bar p(0)\|^2_{\bar L^2}\),
\end{equation}
where the constants $C$ and $K$ depend on $R$, but are independent of $u_i$ and~$p_i$.
\end{lemma}
\begin{proof}[Proof of the lemma] Indeed, multiplying the first equation of \eqref{4.tilde}
by $-\Dx \tilde u$ and using \eqref{4.per}, we get
$$
\frac12\frac d{dt}\|\Nx\tilde u(t)\|^2_{L^2}+\|\Dx\tilde u(t)\|^2_{L^2}
\le C\|\tilde p(t)\|^2_{H^1}+
Ce^{Kt}\(\|\bar u(0)\|^2_{L^2}+\|\bar p(0)\|^2_{\bar L^2}\).
$$
Taking now $\Nx$ of both sides of the second equation of \eqref{4.tilde} and multiplying it
by $\Nx \tilde p(t)$, we arrive at
$$
\frac12\frac d{dt}\|\Nx \tilde p(t)\|^2_{L^2}\le \|\Dx\tilde u(t)\|^2_{L^2}+C\|\Nx \tilde p(t)\|^2_{L^2}.
$$
Taking a sum of the obtained inequalities, we finally infer that
\begin{multline}
\frac12\frac d{dt}\(\|\Nx \tilde u(t)\|^2_{L^2}+\|\Nx \tilde p(t)\|^2_{L^2}\)\\\le
C\|\Nx\tilde p(t)\|^2_{L^2}+Ce^{Kt}\(\|\bar u(0)\|^2_{L^2}+\|\bar p(0)\|^2_{\bar L^2}\)
\end{multline}
and the Gronwall inequality applied to this relation finishes the proof of the lemma.
\end{proof}
We are now ready to complete the proof of the theorem. Indeed, estimates \eqref{4.decc} and
\eqref{4.sms} guarantee that the assumptions of Lemma \ref{Lem4.expattr} are satisfied if we take
$$
V:=H^1_0(\Omega)\times \bar H^1(\Omega)
$$
and fix $T$ big enough that $Ce^{-\alpha T}<\frac12$. Thus, the discrete exponential
attractor $\Cal M_d$ is constructed and the desired continuous exponential attractor $\Cal M$
can be constructed via \eqref{4.expdc} as explained above. Therefore, the theorem is proved.
\end{proof}
\section{Generalizations and concluding remarks}\label{s7}
In this section, we briefly discuss the so-called Navier-Stokes-Brinkman-Forchheimer
equation in the following form:
\begin{equation}\label{7.NS}
\begin{cases}
\Dt u+B(u,u)-\Dx u+\Nx p+f(u)=g,\ u\big|_{\partial\Omega}=0,\ u\big|_{t=0}=u_0,\\
\Dt p+\divv(u)=0,\ p\big|_{t=0}=p_0,
\end{cases}
\end{equation}
where
\begin{equation}\label{7.B}
B(u,v)=(u,\Nx)v+\frac12\divv(u)v.
\end{equation}
The extra term $\frac12 \divv(u)u$ is added to the standard Navier-Stokes inertial term in
order to preserve the energy identity, see \cite{FG,GT,Tem1} and references therein. Indeed,
in this case we have
$$
(B(u,v),v)\equiv 0,\ \ \forall u,v\in H^1_0(\Omega)
$$
and we have the energy identity \eqref{1.eq} with $D=1$ exactly as in the case $B=0$
considered above, namely,
\begin{equation}\label{7.eq}
\frac12\frac d{dt}\(\|u\|^2_{L^2}+\|p\|^2_{\bar L^2}\)+\|\Nx u\|^2_{L^2}+(f(u),u)=(g,u).
\end{equation}
The theory of this equation is very similar to the case $B=0$ considered above with the only
difference that, in order to control the extra non-linearity $B$, we need to assume that $f(u)$
has a super-cubic growth rate, see \cite{HR,KZ}, but this assumption
is already incorporated in \eqref{0.f} if $l>1$.
\par
We start with the analogue of dissipative estimate \eqref{1.E-dis}. The analogue of \eqref{1.eq} is
already obtained, so in order to get the key differential inequality \eqref{1.Gp}, we
only need to estimate the extra term
\begin{multline}
|\eb(B(u,u),\mathfrak Bp)|\le C\eb\|u\|_{L^3}\|\Nx u\|_{L^2}\|p\|_{\bar L^2}\le
C\|u\|_{L^3}^3+\\+\frac12\|\Nx u\|_{L^2}^2+C\eb^6\|p\|^6_{\bar L^2}\le
\frac12(f(u),u)+C+\frac12\|\Nx u\|^2_{L^2}+C\eb^6\Cal E_\eb(u,p)^3,
\end{multline}
where the constant $C$ is independent of $\eb$. Thus, analogously to Theorem \ref{Th1.E-dis}, we have
the following result.
\begin{proposition}\label{Prop7.E-dis} Let the assumptions of Theorem \ref{Th1.E-dis} hold, $l>1$ and let
$(u,p)$ be a weak energy solution of problem \eqref{7.NS}. Then, this solution satisfies the
dissipative estimate \eqref{1.E-dis}.
\end{proposition}
Let us now turn to uniqueness. This can be proved exactly as in the incompressible case (see \cite{KZ}).
Indeed, in comparison with Theorem \ref{Th1.un}, we need to estimate the extra term
$$
(B(u_1,u_1)-B(u_2,u_2),u_1-u_2)=(B(\bar u,u_2),\bar u),\ \ \bar u=u_1-u_2,
$$
where $u_1$ and $u_2$ are two solutions of \eqref{7.NS}. Integrating by parts and
using the Cauchy-Schwarz inequality, we get
$$
|(B(\bar u,u_2),\bar u)|\le \frac14\|\Nx \bar u\|^2_{L^2}+C\|u_2\bar u\|^2_{L^2}.
$$
On the other hand, using assumptions \eqref{0.f}, analogously to \cite{KZ}, we get
$$
(f(u_1)-f(u_2),\bar u)\ge \kappa(|u_1|^{1+l}+|u_2|^{1+l},|\bar u|^2)-L\|\bar u\|^2_{L^2}\ge
C\|u_2\bar u\|^2_{L^2}-\tilde L\|\bar u\|^2_{L^2}
$$
and, therefore,
$$
(B(u_1,u_1)-B(u_2,u_2),\bar u)+(f(u_1)-f(u_2),\bar u)\ge -\tilde L\|\bar u\|^2_{L^2}.
$$
Arguing further as in the proof of Theorem \ref{Th1.un}, we get estimate \eqref{1.lip} and
verify the uniqueness of the solution for problem \eqref{7.NS}. Thus, as in the case $B=0$,
equation \eqref{7.NS} generates a dissipative semigroup $S(t)$ in the phase space $E$.
\par
We now discuss the smoothing property and start with the instantaneous smoothing
(analog of Theorem \ref{Th1.sm-ins}).
\begin{proposition}\label{Prop7.sm-ins} Let the assumptions of Theorem \ref{Th1.E-dis} hold, $l>1$ and
let $(u,p)$ be a weak
energy solution of equations \eqref{7.NS}.
Then the following partial smoothing property holds:
\begin{multline}\label{7.sm}
t^{8/3}\|\Nx u(t)\|^2_{L^2}+t^{8/3}\|\Dt u(t)\|^2_{L^2}+t^{8/3}\|\Dt p(t)\|^2_{\bar L^2}+\\+
\int_0^{t}s^{8/3}\|\Nx\Dt u(s)\|^2_{L^2}\,ds\le Q(\|(u(0),p(0))\|^2_{E})+Q(\|g\|_{L^2}),
\end{multline}
where $t\in[0,1]$ and the monotone function $Q$ is independent of $t$ and $u$.
\end{proposition}
\begin{proof}
Here, there is a small difference in comparison with the proof of Theorem \ref{Th1.sm-ins}:
multiplying the equation by $\Dt u$ does not work, since we do not have enough regularity
to control the term $(B(u,u),\Dt u)$. For this reason, again similarly to the incompressible
case (see \cite{KZ}), we first differentiate
the first equation of \eqref{7.NS} with respect to $t$ and multiply it by $v=\Dt u$.
The nonlinearity
$B(u,v)+B(v,u)$ is controlled here by the second nonlinearity $f'(u)v$, exactly as in the
proof of uniqueness, so we get the following differential inequality:
\begin{equation}\label{7.dtv}
\frac12\frac d{dt}\(\|v(t)\|^2_{L^2}+\|\Dt p\|^2_{\bar L^2}\)+\|\Nx v\|^2_{L^2}\le \tilde L\|v\|^2_{L^2}.
\end{equation}
Moreover, from the first equation of \eqref{7.NS} and the dissipative estimate \eqref{1.E-dis},
we infer after the standard estimates that
\begin{equation}\label{7.dtu}
\|v\|_{L^{6/5}(0,1;H^{-1})}\le Q(\|(u_0,p_0)\|_{E})+Q(\|g\|_{L^2}).
\end{equation}
Estimate \eqref{7.dtu} replaces the missing control of the quantity $\int_0^ts\|v(s)\|_{L^2}^2\,ds$ and
allows us to get the desired smoothing property. Indeed, multiplying \eqref{7.dtv} by $t^{8/3}$, integrating
in time and using the estimate
\begin{multline}
\int_0^t s^{5/3}\|v(s)\|^2_{L^2}\,ds \le \\\le
\int_0^t(s^{4/3}\|v(s)\|_{L^2})^{1/2}(s^{4/3}\|v(s)\|_{H^1})^{3/4}\|v(s)\|^{3/4}_{H^{-1}}\,ds \le \\\le
\eb\sup_{s\in[0,t]}\left\{s^{8/3}\|v(s)\|_{L^2}^2\right\}+\eb\int_0^ts^{8/3}\|v(s)\|^2_{H^1}\,ds+
C_\eb\|v\|_{L^{6/5}(0,1;H^{-1})}^2,
\end{multline}
where $\eb>0$ can be taken arbitrarily small, we end up with the desired smoothing property
for the derivatives
$$
t^{4/3}\|\Dt u(t)\|_{L^2}+t^{4/3}\|\Dt p(t)\|_{\bar L^2}\le Q(\|(u_0,p_0)\|_{E})+Q(\|g\|_{L^2})
$$
for $t\in[0,1]$ and some monotone function $Q$. Returning back to the first equation of \eqref{7.NS},
multiplying it by $u(t)$ and integrating in $x$, we get
$$
\|\Nx u(t)\|_{L^2}^2+(|f(u(t))\cdot u(t)|,1)\le C(\|p(t)\|^2_{\bar L^2}+\|g\|^2_{L^2}+\|\Dt u(t)\|^2_{L^2})
$$
which, together with the previous estimate and the dissipative estimate \eqref{1.E-dis}, gives the
desired smoothing property and finishes the proof of the proposition.
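We also note that the interpolation step in the multiline estimate above is based on the standard inequality
$$
\|v\|_{L^2}\le C\|v\|^{1/2}_{H^1}\|v\|^{1/2}_{H^{-1}},
$$
which gives $\|v\|^{2}_{L^2}=\|v\|^{1/2}_{L^2}\|v\|^{3/2}_{L^2}\le C\|v\|^{1/2}_{L^2}\|v\|^{3/4}_{H^1}\|v\|^{3/4}_{H^{-1}}$ (the constant is suppressed in the display), and the time weights are consistent since $\frac43\cdot\frac12+\frac43\cdot\frac34=\frac53$.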
\end{proof}
As in the case $B=0$, this instantaneous smoothing property allows us to reduce the study of the
asymptotic smoothing to the truncated problem
\begin{equation}\label{7.trunc}
\begin{cases}
-\Dx u+\Nx p+B(u,u)+f(u)=g(t),\ \ \ u\big|_{\partial\Omega}=0\\
\Dt p+\divv(u)=0,\ \ p\big|_{t=0}=p_0,
\end{cases}
\end{equation}
where $g(t):=g-\Dt u(t)$ satisfies \eqref{1.g}. Moreover, using the obvious estimate
\begin{equation}\label{7.Bhd}
\|B(u(t),u(t))\|_{H^{-1/2}}\le \|B(u(t),u(t))\|_{L^{3/2}}\le C\|\Nx u(t)\|^2_{L^2},
\end{equation}
we can assume without loss of generality (due to the dissipative estimate \eqref{1.E-dis} and
smoothing property \eqref{7.sm}) that the nonlinearity $B(u,u)$
is boun\-ded in $L^\infty(\R_+,H^{-1/2})$. Thus,
we may treat the nonlinearity $B(u,u)$ as a part of $g$ as well. Then the new function $g$ will
satisfy \eqref{7.g} and we may treat equations \eqref{7.trunc} exactly as equations \eqref{1.trunc}.
\par
This gives us the analogues of Corollaries \ref{Cor3.exp} and \ref{Cor3.hd} for the truncated
system \eqref{7.trunc}. In order to make the second step of bootstrapping, we note that
$$
\| B(u(t),u(t))\|_{L^2}\le C\|u(t)\|^2_{H^{1+\delta}}
$$
for $\delta\ge\frac14$. Therefore, once the $H^{1+\delta}$-regularity of $u(t)$ is verified
for some $\delta\ge\frac14$, the next step of bootstrapping gives us the $H^2$-regularity exactly
as in Section \ref{s2}. Thus, we have proved the following analogue of Theorem \ref{Th3.main}.
\begin{theorem}\label{Th7.main} Let the assumptions of Theorem \ref{Th1.E-dis} hold and $l>1$.
Then the $R$-ball $\Bbb B_R^1$ in
the higher energy space $E^1$
is an exponentially attracting set for the solution semigroup $S(t): E\to E$ generated by the problem
\eqref{7.NS} if $R$ is large enough, i.e., there exist $\alpha>0$ and a monotone function $Q$
such that, for every bounded set $B\subset E$,
\begin{equation}\label{7.mainattr}
\dist_E(S(t)B,\Bbb B_R^1)\le Q(\|B\|_E)e^{-\alpha t}.
\end{equation}
Moreover, the problem \eqref{7.NS} is well-posed and dissipative in the space $E^1$ as well,
i.e., if $(u(0),p(0))\in E^1$ then the following estimate holds:
\begin{equation}\label{7.maindis}
\|(u(t),p(t))\|_{E^1}\le Q(\|(u(0),p(0))\|_{E^1})e^{-\alpha t}+Q(\|g\|_{L^2})
\end{equation}
for some positive $\alpha$ and monotone $Q$.
\end{theorem}
Finally, we have the analogue of Theorem \ref{Th4.main} on exponential attractors.
\begin{theorem}\label{Th8.main} Let the assumptions of Theorem \ref{Th1.E-dis} hold and $l>1$.
Then equation \eqref{7.NS}
possesses an exponential attractor $\Cal M$ in $E$ which is a bounded set in the space $E^1$.
\end{theorem}
The proof of this result repeats word for word the proof of Theorem \ref{Th4.main} (we have more
than enough regularity of the solutions $u_1(t)$ and $u_2(t)$ to handle the extra nonlinear term)
and for this reason is omitted.
\par
We conclude the exposition by several remarks.
\begin{remark} \label{Rem7.2D} We have considered equations \eqref{1.main} and \eqref{7.NS}
in the most complicated 3D case only. The 2D case can be treated analogously, but is actually
essentially simpler. Indeed, due to the Sobolev embedding $H^1\subset L^q$ for all $q<\infty$, the
control of the $H^1$-norm of the solution $u$ gives the control of the $L^2$-norm of $f(u)$ for
any growth exponent $l$, so the restriction $l\le2$ can be removed here and any polynomial
nonlinearity is {\it subcritical} in the 2D case.
\par
Another simplification comes from the fact that in the 2D case the inertial term $B(u,u)$ can
be handled without the help of the nonlinearity $f(u)$, so we do not need to require a super-cubic
growth rate of $f(u)$. In particular, the purely Navier-Stokes case $f=0$ is also covered
by our theory and gives some new results here as well. For instance, in comparison with \cite{GT}, we
get the $E^1$-regularity of the attractor for the case of Dirichlet boundary conditions as well.
\end{remark}
\begin{remark}\label{Rem7.super} An interesting question is related to the supercritical case where
the nonlinearity grows faster than $u|u|^4$. In the case of incompressible Brinkman-Forchheimer
equations as well as in the case of strongly damped wave equations, the restriction $l\le2$
in \eqref{0.f} is not necessary as shown in \cite{KZ,KZ09}. Some methods developed
there can be extended to the case of equations \eqref{1.main} as well.
\par
Indeed, the existence of weak solutions in this case can
be verified based on the energy identity \eqref{1.eq}, and their uniqueness follows exactly as in the proof
of Theorem \ref{Th1.un}, where only the monotonicity assumption $f'(u)\ge-L$ is actually used.
Moreover, the local smoothing property and the estimates for $\Dt u$ stated in Theorem
\ref{Th1.sm-ins} also work in the supercritical case.
\par
However, there is a problem here which prevents us from treating the supercritical case, namely,
the absence of a {\it dissipative} estimate for the solution $u$ in the energy norm. Indeed,
the derivation of such an estimate in Theorem \ref{Th1.E-dis} is based on multiplication
of the equation by $\mathfrak Bp$, where $\mathfrak B$ is the Bogovskii operator, but in the
supercritical case we cannot do this, at least in a direct way, since the term
$(f(u),\mathfrak Bp)$ is out of control. We believe that this problem has a technical nature
and can be overcome, and we are planning to return to the supercritical case elsewhere.
\end{remark}
\section{Introduction}
Pre-training language models (LMs) on large-scale web text corpora (e.g., Common Crawl and OpenWebTextCorpus~\cite{OpenWeb}) has significantly improved their language generation performance~\cite{gpt19, xlnet2019Neurips, transformerXL2019ACL, shoeybi2019arxiv, li2020_Optimus, gpt-3} by allowing them to learn meaningful relations between words. However, since the models are trained on massive web-crawled text data that is not exhaustively filtered, they are prone to generating unexpected and undesired texts~\cite{sheng2019woman, Wallace2019UniversalAT}, which are often also inappropriate (see Table~\ref{table:example}).
\input{Table/Efficiency_figure}
\input{Table/concept_figure}
Specifically, LMs trained on unfiltered texts can randomly generate racial slurs and sexually explicit or violent expressions, which are highly toxic~\cite{groenwold2020EMNLP, viviano2021ACLshort, xu2021naccl, dale2021EMNLP}. This is one of the main obstacles to deploying pre-trained LMs in real-world applications (e.g., conversational agents). Furthermore, as demonstrated in~\citet{realtoxicprompt, toxichat, detox2021}, LMs are prone to generating toxic language even from non-toxic prompts or contexts. One simple and straightforward approach to tackling this problem is to eliminate toxic and biased texts by detecting them in the training dataset~\cite{zhou2021challenges,zampieri2019predicting}. However, as the size of LMs has increased, the training corpora have also expanded enormously~\cite{gpt-3,du2021arxiv}. Thoroughly removing or filtering out all toxic words or sentences from such a large-scale corpus and retraining the LM from scratch would be costly and impractical~\cite{Danger21ACM}.
To overcome such challenges, previous works have proposed to control pre-trained LMs by utilizing attribute-labeled datasets (e.g., toxic and non-toxic). They modify the decoding process either by adversarially perturbing the LM with a toxicity discriminator~\cite{PPLM2019ICLR} or by using additional LMs finetuned on targeted attribute data to suppress toxic logits and amplify non-toxic logits of the base LM~\cite{Gedi20arxiv, dexperts21ACL}. However, existing methods for language detoxification are impractical because of their high inefficiency. The perturbation-based method~\cite{PPLM2019ICLR} slows down the inference of the original GPT-2~\cite{gpt19} by 40 times due to the high cost of gradient computation. While the methods of \citet{Gedi20arxiv} and \citet{dexperts21ACL} are as fast as GPT-2, both additionally require auxiliary LMs to shift the logits toward those of non-toxic texts, which is memory-inefficient.
In this paper, we propose a novel and effective language detoxification method that utilizes a single LM and is both time- and memory-efficient. To prevent toxic language generation from the original GPT-2 latent space, we found that, without additional LMs to control the logits, simply projecting the original latent space onto a controllable, attribute-discriminative latent space is sufficient to steer the LM toward non-toxic generation. Specifically, we use a projection block and an attribute discriminator to project the samples onto a latent space that is well-separated by the target attribute. We refer to this model as an Attribute-Discriminative LM (ADLM) (Figure~\ref{fig:conceptfig}).
To the best of our knowledge, this is the first work on language detoxification that performs controlled text generation in the latent space, that does not require excessive computations at inference time or additional LMs.
\input{Table/Example_sentences}
To verify the effectiveness and efficiency of the proposed ADLM, we validate our method on two language detoxification tasks: detoxified language and dialogue generation. With 10K random prompts from the RealToxicityPrompts dataset~\cite{realtoxicprompt}, we conduct a generic language modeling experiment for detoxification. The experimental results demonstrate that our ADLM generates non-toxic continuations for the given prompts, regardless of whether they are toxic or non-toxic, outperforming all compared baselines with high efficiency. On the language detoxification task for dialogue generation~\cite{toxichat,dialoguesafety}, our ADLM generates safer responses than baselines on ToxiChat and DiaSafety datasets. Lastly, to further show the general applicability of our method to any attribute-controlled text generation tasks, we validate ADLM on a sentiment-controlled text generation task~\cite{SST-5dataset2013} on which our model also achieves impressive performance (Appendix~\ref{appendix:sentiment}). Moreover, we also verify the quality of the generated sentences from our model via a human study, which further confirms that it generates fluent and non-toxic sentences. In summary, our contributions are as follows:
\begin{itemize}[itemsep=0.5mm, parsep=1pt, leftmargin=*]
\item We propose a novel LM for language detoxification, with a projected attribute-discriminative latent space learned by training a discriminator to classify texts by their attributes.
\item We introduce a time- and memory-efficient language detoxification method using our attribute-discriminative language model (ADLM), which does not require excessive computational overhead at inference time or memory (Figure~\ref{fig:efficiency}).
\item Our method largely outperforms existing methods on both generic language detoxification and real-world dialogue detoxification tasks.
\end{itemize}
\section{Related Work}
Pre-trained language models (LMs) \cite{gpt19, shoeybi2019arxiv, gao2020arxiv, gpt-3,du2021arxiv} mostly concentrate on human-like text generation, focusing on the structure of the generated texts rather than on their content, and are often not controllable. To design LMs that can generate texts with desired properties, additional modifications are necessary~\cite{yu2017seqgan,Toward2017ICML,ziegler2019fine,seanie20ICLR}. Story generation~\cite{fan2018story, guan2020knowledge}, attribute (e.g., sentiment, topic, or emotion) controlled generation~\cite{yang2021fudge, distributional2021ICLR, cocon2021ICLR, liu2021emtions}, and summarization~\cite{meansum2019icml} are active topics of research on controlled text generation.
While the literature on controlled text generation is vast, in this paper, we mainly focus on methods for language detoxification, as it has been a critical problem in deploying LMs to real-world applications~\cite{realtoxicprompt}.
The simplest approaches to language detoxification either pre-train LMs on datasets that contain only the desired attributes, as done by Domain-Adaptive Pretraining (DAPT)~\cite{DAPT20ACL}, or conditionally prepend a prefix to each text, as done by Conditional Transformer Language (CTRL)~\cite{CTRL2019arxiv} and Attribute conditioning (ATCON)~\cite{realtoxicprompt}. Since these approaches utilize only a single attribute token in front, they do not control the generated sequences well. When these models are exposed to toxic texts in the pre-training phase, controlled language generation becomes even more difficult. Another approach to the language detoxification problem is to train auxiliary LMs that guide the base LM in the decoding phase. Generative Discriminator (GeDi)~\cite{Gedi20arxiv} employs an ATCON model as the discriminator, and Decoding-time Experts (DExperts)~\cite{dexperts21ACL} uses expert and anti-expert LMs, each of which is a DAPT model trained only on the non-toxic or toxic subset of the dataset. However, such auxiliary-LM approaches are highly memory-inefficient. On the other hand, Plug-and-Play Language Model (PPLM)~\cite{PPLM2019ICLR} uses a single LM and utilizes an attribute discriminator to generate gradient perturbations toward the given attributes. However, during inference, it takes considerably longer because it samples each word through multiple backward passes. In contrast, our method requires only a single LM and does not suffer from the memory and computational inefficiency of existing methods, while obtaining better performance.
\section{Method}
We now describe a novel language detoxification method using our \emph{\textbf{A}ttribute-\textbf{D}iscriminative \textbf{L}anguage \textbf{M}odel (\textbf{ADLM})}, which can efficiently perform controlled text generation for a given attribute using a projected discriminative-latent vector. In Section~\ref{ssec:3_background}, we first briefly describe the base LM architecture, general language modeling, previous detoxified language modeling and dialogue generation modeling. Then, in Section~\ref{ssec:3_slac}, we describe our model architecture, training objective, and sampling method.
\subsection{Background}
\label{ssec:3_background}
\paragraph{Language models.}
A Language Model (LM) predicts the next words for a given text sequence by learning the joint probability distribution over words in given texts~\cite{bengio2003neural, mikolov2010recurrent}. An LM can be trained either in an autoregressive or autoencoder manner to learn the distributed representations of words. The autoregressive approaches~\cite{gpt19,CTRL2019arxiv,transformerXL2019ACL,kitaev2020reformer,xlnet2019Neurips} learn to predict the next word given the sequence of previously generated words, whereas autoencoder approaches~\cite{BERT18NACCL,albert2020,liu2019roberta,distilbert2019,clark2020electra} learn to anticipate the missing or masked words utilizing bidirectional contexts.
In this paper, we use an autoregressive LM, GPT-2~\cite{gpt19}, as our base model. GPT-2 is composed of a Transformer and a head layer. The Transformer~\cite{transformer2017} consists of multiple blocks, each of which is composed of a position-wise feed-forward network, multi-head self-attention, and layer normalization. The Transformer encodes the contextual embedding of the given input sequence $x_{1:t-1}$, where $i:j$ denotes the $i^{th}$ through $j^{th}$ tokens of a sequence. The head layer is a linear layer that predicts the logits ($o_{t}$) of the possible next tokens $x_t$ based on the hidden states $h_{1:t-1} = [h_{1}, h_{2}, \dots, h_{t-1} ]\in \mathbb{R}^{(t-1) \times d}$, which are the outputs of the Transformer layers. Formally, we can define an LM succinctly as follows:
\vspace{-0.05in}
\begin{equation}
\label{eq:lm}
\begin{split}
h_{1:t-1} &= \texttt{Transformer}(x_{1:t-1}; \theta_\texttt{T}), \\
o_{t} &=\texttt{Head}(h_{1:t-1};\theta_\texttt{H}),
\end{split}
\end{equation}
\vspace{-0.15in}
\noindent where $o_{t}$$\in$$\mathbb{R}^{\abs{V}}$, $\abs{V}$ is the vocabulary size, $\theta_\texttt{T}$ and $\theta_\texttt{H}$ are Transformer's and head layer's parameters, respectively.
\paragraph{General language model.}
In generic language modeling, the initially given input sequence is called a \textit{prompt} $x_{1:m-1} = (x_1, \dots, x_{m-1})$ and the text sequence generated following it is called a \textit{continuation} $x_{m:n} = (x_m, \dots, x_n)$. The goal of language modeling is then to generate a coherent continuation $x_{m:n}$ for the preceding prompt $x_{1:m-1}$:
\vspace{-0.1in}
\begin{equation}
\label{eq:general_language}
P(x_{m:n}\given[\big] x_{1:m-1}) = \prod_{i=m}^{n} P(x_i \given[\big] x_{<i}),
\end{equation}
where $P$ is the softmax-normalized probability computed by the model. The model learns the distribution of the next token $x_{i}$ conditioned on the previously generated tokens, using the chain rule of probability as in Equation~\ref{eq:general_language}.
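To make the factorization in Equation~\ref{eq:general_language} concrete, here is a toy numerical sketch (not from the paper): a hypothetical first-order model over a five-token vocabulary, where the continuation probability is the product of per-step conditionals.

```python
import numpy as np

# Toy illustration of the chain-rule factorization: the probability of a
# continuation is the product of per-step conditional next-token
# probabilities. The vocabulary and the fake "model" (a fixed conditional
# table) are hypothetical; only the factorization mirrors the text.
rng = np.random.default_rng(0)
V = 5                                    # toy vocabulary size
T = np.abs(rng.standard_normal((V, V)))  # unnormalized "logits" table
cond = T / T.sum(axis=1, keepdims=True)  # P(next token | previous token)

def continuation_prob(prompt, continuation):
    """P(x_{m:n} | x_{1:m-1}) under a first-order toy model."""
    prob = 1.0
    prev = prompt[-1]
    for tok in continuation:
        prob *= cond[prev, tok]          # one P(x_i | x_{<i}) factor
        prev = tok
    return prob

p = continuation_prob(prompt=[0, 1], continuation=[2, 3])
# By construction this equals P(2|1) * P(3|2).
assert np.isclose(p, cond[1, 2] * cond[2, 3])
```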
\paragraph{Detoxified language model.}
Detoxified language modeling can be considered an attribute-controlled text generation task, with the additional requirement that the model always generate non-toxic sequences, even from toxic prompts. This task, referred to as language detoxification, is challenging since it requires strong attribute control while preserving the fluency of the LM.
For language detoxification, the objective is to learn to generate texts with the desired attribute $\texttt{a}$ (i.e., non-toxic) as follows:
\vspace{-0.1in}
\begin{equation}
\begin{gathered}
\overline{x}_{m:n} = (\overline{x}_m, \overline{x}_{m+1}, \dots, \overline{x}_n),\\
P(\overline{x}_{m:n} \given[\big] x_{1:m-1}, \texttt{a}) = \prod_{i=m}^{n} P(\overline{x}_i \given[\big] x_{<m}, \overline{x}_{m:i-1}, \texttt{a}),
\end{gathered}
\vspace{-0.1in}
\end{equation}
where $\overline{x}_{m:n}$ denotes the continuation that corresponds to the desirable attribute \texttt{a}. The objective is to learn the distribution of the sequence $\overline{x}_{m:n}$ conditioned on $\texttt{a}$ in an autoregressive manner.
\paragraph{Dialogue generation model.}
In dialogue generation, the input sequence is called a \textit{context} and the generated sequence is called a \textit{response}. The dialogue generation model learns to generate context-related, human-like responses. Since dialogue generation models interact with users, language detoxification is an essential task for their real-world application. Similar to the detoxified language model, the dialogue generation model learns, with an LM, the distribution of the response sequence $\overline{x}_{m:n}$ conditioned on the attribute $\texttt{a}$ and the context sequence $x_{1:m-1}$.
\subsection{ADLM: Attribute-Discriminative Language Model}
\label{ssec:3_slac}
Previous language detoxification methods operate only at decoding time, either by perturbing the LM or by using additional LMs, further trained on each attribute dataset, to guide the logits of the pre-trained base LM. However, these approaches are computation- and memory-inefficient. We thus propose a novel single-LM approach for language detoxification which uses a latent space to control the attributes of the generated texts. Specifically, we learn a projected latent embedding space in which texts are well-discriminated by their attributes, and use it to control the attribute of the generated text sequences. We describe ADLM's architecture, training objective, and sampling method in the following paragraphs.
\input{Table/overview}
\paragraph{Model architecture.}
Our model consists of a single LM, a projection block, and an attribute discriminator (Figure~\ref{fig:overview_training}). The projection block, \texttt{ProjB}, learns to project the original latent space onto a discriminative latent space that embeds the attribute information. The attribute is embedded onto a discriminative latent space through a single embedding layer $\texttt{AttEmb}$ followed by a projection block, as follows:
\vspace{-0.2in}
\begin{equation}
\label{eq:block}
\begin{split}
h_{1:t-1} &= \texttt{Transformer}(x_{1:t-1}; \theta_\texttt{T}), \\
z_{\texttt{a}} &= \texttt{AttEmb}(\texttt{a};\theta_\texttt{a}),\\
\overline{h}_{1:t-1} &= \texttt{ProjB}(h_{1:t-1}, z_\texttt{a}; \theta_\texttt{B}), \\
\overline{o}_{t} &=\texttt{Head}(\overline{h}_{1:t-1};\theta_\texttt{H}),
\end{split}
\end{equation}
\vspace{-0.1in}
\noindent where $\theta_\texttt{a}$ and $\theta_\texttt{B}$ are the parameters of each component, and $\overline{h}_{1:t-1}$ are the contextual embeddings projected with the attribute embedding $z_{\texttt{a}}$.
To learn a discriminative latent space $\overline{h}_{1:t-1}$ where the contextualized word embeddings are well separated by their attributes, we use an attribute discriminator (\texttt{Disc}):
\vspace{-0.1in}
\begin{equation}
\label{eq:discriminator}
\begin{split}
y &= \texttt{Disc}(\overline{h}_{1:t-1}; \theta_\texttt{D}),
\end{split}
\end{equation}
\vspace{-0.1in}
\noindent where $y \in \mathbb{R}^{\abs{A}}$ is the output logit predicting the attribute $\texttt{a}$, $\abs{A}$ is the cardinality of the attribute set, and $\theta_\texttt{D}$ are the parameters of the discriminator. The module performs average pooling of $\overline{h}_{1:t-1}$ to condense the overall representation and then passes the averaged vector through an affine transformation to determine the corresponding attribute $\texttt{a}$. The discriminator classifies $\overline{h}_{1:t-1}$, which renders the newly constructed latent space attribute-discriminative (see Figure \ref{fig:conceptfig}).
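A minimal numpy sketch of this forward pass may help fix ideas. The dimensions, the random weights, and the concatenation-based fusion of $h$ and $z_\texttt{a}$ inside \texttt{ProjB} are our own illustrative assumptions; the text specifies only that \texttt{ProjB} maps the hidden states and the attribute embedding to an attribute-conditioned latent, and that \texttt{Disc} classifies the mean-pooled projection.

```python
import numpy as np

# Illustrative ADLM forward pass: ProjB fuses each hidden state with the
# attribute embedding z_a (fusion by concatenation is an assumption), and
# Disc average-pools the projection and applies an affine map.
rng = np.random.default_rng(0)
d, n_attr, seq_len = 8, 2, 4

W_proj = rng.standard_normal((2 * d, d)) * 0.1   # ProjB weights (assumed shape)
attr_emb = rng.standard_normal((n_attr, d))      # AttEmb table of z_a vectors
W_disc = rng.standard_normal((d, n_attr)) * 0.1  # Disc affine map

def proj_block(h, a):
    """Fuse each hidden state with z_a and project to the new latent space."""
    z = np.broadcast_to(attr_emb[a], h.shape)
    return np.concatenate([h, z], axis=-1) @ W_proj

def discriminator(h_bar):
    """Average-pool over time, then an affine map to attribute logits y."""
    pooled = h_bar.mean(axis=0)
    return pooled @ W_disc

h = rng.standard_normal((seq_len, d))  # stand-in for Transformer output
h_bar = proj_block(h, a=0)             # attribute-conditioned latent
y = discriminator(h_bar)
assert h_bar.shape == (seq_len, d) and y.shape == (n_attr,)
```

Note that conditioning on different attributes yields different projected latents, which is what the discriminator loss encourages.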
\input{Table/1_main}
\paragraph{Training objective.}
We further jointly train the components of \emph{ADLM} in an end-to-end manner. Let us denote the dataset by $D = \{X, A\}$, where $x \in X$ is a training text sequence and $\texttt{a} \in A$ is its corresponding attribute label, and let the set of model parameters be $\theta =\{\theta_\texttt{a}, \theta_\texttt{B}, \theta_\texttt{D}\}$.
Our training objective consists of three terms. The first objective is the autoregressive LM loss for conditional language modeling, which learns to reconstruct the given input text $x^{i}$ conditioned on the prompt $x^{i}_{<t}$ and the attribute $\texttt{a}^{i}$:
\begin{equation}
\label{eq:lm_loss}
\begin{gathered}
\mathcal{L}_{\text{LM}}(\theta) = - \sum^{\abs{D}}_{i=1} \sum^{T^{i}}_{t=2} log P_\theta (x^{i}_t \given[\big] x^{i}_{<t}, \texttt{a}^{i}),
\end{gathered}
\end{equation}
where $T^{i}$ is the total length of the ${i}^{th}$ input $x^{i}$. The second objective directly enforces the projected embeddings to be attribute-discriminative:
\begin{equation}
\label{eq:control_loss}
\begin{gathered}
\mathcal{L}_{\text{Disc}}(\theta) = - \sum^{\abs{D}}_{i=1} log P_\theta (\texttt{a}^{i} \given[\big] \overline{h}^{i}_{1:T^{i}}).
\end{gathered}
\end{equation}
Lastly, we also propose a regularizer for the projected latent space to preserve the relationships between the word embeddings in the original latent space, which alleviates the potential negative impact of strong detoxification on fluency. To this end, we apply Elastic Weight Consolidation (EWC)~\cite{ewc2017} regularization, often used for continual learning, which uses the Fisher information matrix to put higher regularization weights on the updates of more important parameters:
\vspace{-0.25in}
\begin{equation}
\label{eq:ewc_loss}
\begin{gathered}
\mathcal{L}_{\text{EWC}}(\theta) = \sum^{\abs{\theta_{B}}}_{j=1} \frac{\lambda}{2} F_{j}(\theta_{\texttt{B}_{j}} - \theta^{*}_{\texttt{B}_{j}})^{2},
\end{gathered}
\end{equation}
\vspace{-0.1in}
\noindent where $j$ indexes the parameters of $\theta_\texttt{B}$, whose total number is $\abs{\theta_{B}}$, $\theta^*_{B}$ are the parameters of $\texttt{ProjB}$ trained without the discriminator, $F$ is the Fisher information matrix, which puts more weight on the parameters that are important for $\theta^*_{B}$, and $\lambda$ is a scale controlling how strongly $\theta^*_{B}$ is preserved in $\theta_B$.
Our final combined objective aims to minimize the sum of the two cross-entropy loss terms and an EWC regularizer term as follows:
\begin{equation}
\label{eq:total_loss}
\begin{gathered}
\argmin_{\theta} \mathcal{L} = \mathcal{L}_{\text{LM}} + \mathcal{L}_{\text{Disc}} + \mathcal{L}_{\text{EWC}}.
\end{gathered}
\end{equation}
\vspace{-0.1in}
\noindent Minimizing the total loss ($\mathcal{L}$) together allows our ADLM to control the attributes of the generated texts in the latent space.
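As an illustration, the three terms of the objective can be sketched as follows. All tensors are random stand-ins, and the EWC term is written with a positive sign so that minimizing the total loss keeps $\theta_\texttt{B}$ close to $\theta^*_\texttt{B}$; only the form of each term follows the text.

```python
import numpy as np

# Toy computation of the three training terms: the token-level LM
# cross-entropy, the attribute cross-entropy of the discriminator, and the
# EWC penalty (positive sign assumed). All tensors are random stand-ins.
rng = np.random.default_rng(0)

def cross_entropy(logits, target):
    logp = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    return -logp[np.arange(len(target)), target].mean()

V, T, A = 6, 5, 2
lm_logits   = rng.standard_normal((T, V))    # per-position next-token logits
tokens      = rng.integers(0, V, size=T)     # target tokens x_t
disc_logits = rng.standard_normal((1, A))    # per-sequence attribute logits y
attr        = np.array([1])                  # target attribute label

theta      = rng.standard_normal(10)         # ProjB parameters theta_B
theta_star = rng.standard_normal(10)         # reference parameters theta_B^*
fisher     = np.abs(rng.standard_normal(10)) # diagonal Fisher information F_j
lam = 0.5

loss_lm   = cross_entropy(lm_logits, tokens)
loss_disc = cross_entropy(disc_logits, attr)
loss_ewc  = (lam / 2) * np.sum(fisher * (theta - theta_star) ** 2)
total = loss_lm + loss_disc + loss_ewc
assert total > 0
```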
\paragraph{Sampling.}
Our model constrains the logits during text generation so that the vocabulary is steered toward the desired attribute. We can obtain both types of attribute logits from the attribute-discriminative latent space of ADLM, which uses much less memory during inference than the previous methods.
Our model computes both logits $\overline{o}_t, \neg \overline{o}_t$ for text generation, based on the desired (non-toxic; $\texttt{a}$) and undesired (toxic; $\neg \texttt{a}$) attributes, as shown in Figure~\ref{fig:overview_inference}. Each logit is computed as follows:
\begin{equation}
\begin{split}
\overline{o}_t &= \texttt{Head}(\texttt{ProjB}(h_{1:t-1}, z_{\texttt{a}})), \\
\neg \overline{o}_t &= \texttt{Head}(\texttt{ProjB}(h_{1:t-1}, z_{\neg\texttt{a}})). \\
\end{split}
\end{equation}
The non-toxic logits ($\overline{o}_t$) assign high probability to non-toxic tokens, and the toxic logits ($\neg \overline{o}_t$) assign high probability to toxic tokens. From this difference in probability, tokens that have greater probability under the toxic logits than under the non-toxic logits can be presumed to be toxic tokens, which could lead to the generation of toxic texts. Therefore, at every token generation step, we compute the difference between the logits, $\Delta o_t = \overline{o}_t - \neg \overline{o}_t$, and suppress the tokens that show higher probability under the toxic logits as follows:
\begin{equation}
\begin{gathered}
o^{\prime}_t = \left\{
\begin{array}{ll}
\overline{o}_t + \alpha \Delta o_t & \quad \Delta o_t < 0 \\
\overline{o}_t & \quad \Delta o_t \ge 0
\end{array}
\right.,
\end{gathered}
\label{eq:suppress}
\end{equation}
where $o^{\prime}_{t}$ are the final logits of our decoding and $\alpha$ is a constant suppression scale.
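The suppression rule above is straightforward to implement; the following numpy sketch uses toy logits and an assumed $\alpha=2$.

```python
import numpy as np

# Logit-suppression rule: tokens whose toxic-conditioned logit exceeds
# their non-toxic-conditioned logit (delta < 0) are pushed down by
# alpha * delta; all numeric values here are toy.
def suppress(o_pos, o_neg, alpha=2.0):
    """o'_t = o_pos + alpha*delta where delta = o_pos - o_neg < 0, else o_pos."""
    delta = o_pos - o_neg
    return np.where(delta < 0, o_pos + alpha * delta, o_pos)

o_pos = np.array([2.0, 0.5, 1.0])   # logits under desired attribute a
o_neg = np.array([1.0, 3.0, 1.0])   # logits under undesired attribute (neg a)
o_final = suppress(o_pos, o_neg)
# Token 1 looks "toxic" (0.5 < 3.0), so it is suppressed: 0.5 + 2*(-2.5) = -4.5.
assert np.allclose(o_final, [2.0, -4.5, 1.0])
```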
\section{Experimental Results}
To validate our ADLM, we conduct a language generation task on RealToxicityPrompts~\cite{realtoxicprompt} and dialogue generation tasks on ToxiChat~\cite{toxichat} and DiaSafety~\cite{dialoguesafety} for detoxification. Further, we show the general applicability of our method to attribute-controlled text generation on a sentiment-controlled text generation task (Appendix~\ref{appendix:sentiment}). The experimental details are given in Appendix~\ref{appendix:dataset}. The code is available at \url{https://github.com/jin8/ADLM}.
\subsection{Detoxification for Language Generation}
\label{sec:detox}
\paragraph{Baselines.} We compare against the following baselines for the generic language detoxification task, using GPT-2 as the base language model. All compared models, including ours, are trained on the Jigsaw Unintended Bias in Toxicity Classification Kaggle challenge dataset and evaluated on 10K random prompts from RealToxicityPrompts. The details of the hyperparameters used for each model are provided in Appendix~\ref{appendix:baseline}.
\begin{itemize}[itemsep=1.0mm, parsep=0pt, leftmargin=*]
\item \textbf{Domain-adaptive pre-training (DAPT;~\citet{DAPT20ACL}):} This baseline further trains the LM on the dataset with desired attributes (e.g., non-toxic corpus).
\item \textbf{Attribute conditioning (ATCON;~\citet{realtoxicprompt}):} This baseline learns the distribution of the generated texts conditioned on the task-specific control codes (e.g., toxic or non-toxic) prior to the texts.
\item \textbf{Plug-and-play language models (PPLM;~\citet{PPLM2019ICLR}):} This baseline consists of a classifier that backpropagates gradients to the LM multiple times to generate texts with desired attributes. Due to the high computational cost, we only sample 10 sentences per prompt, following the setting of \citet{realtoxicprompt}.
\item \textbf{Generative discriminators (GeDi; \citet{Gedi20arxiv}):} GeDi utilizes additional LM that is trained with ATCON~\cite{realtoxicprompt} to guide the base LM in the decoding time. GeDi weighs the attribute probability from ATCON using the Bayes rule on logits of the base LM.
\item \textbf{Decoding-time Experts (DExperts; \citet{dexperts21ACL}):} DExperts employs expert (non-toxic DAPT) and anti-expert (toxic DAPT) LMs to guide the base LM at decoding time. DExperts adds the expert's logits to, and subtracts the anti-expert's logits from, the base LM's logits to detoxify generation.
\end{itemize}
\paragraph{Automatic Evaluation.}
To validate our language detoxification method, we evaluate the toxicity of the generated texts as well as the efficiency of generation. Moreover, we examine the diversity of the generated texts. To automatically measure toxicity, we utilize the Perspective API\footnote{\href{https://github.com/conversationai/perspectiveapi}{Perspective API}}, which returns toxicity scores for given texts. To measure diversity, we calculate the mean number of distinct n-grams~\cite{diversity2016}, normalized by the total text length.
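For concreteness, a minimal implementation of the Dist-$n$ diversity metric described above (unique $n$-grams normalized by the total text length) could look as follows:

```python
def distinct_n(tokens, n):
    """Dist-n: number of unique n-grams, normalized by the total token count."""
    if len(tokens) < n:
        return 0.0
    ngrams = {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}
    return len(ngrams) / len(tokens)
```

Dist-1, Dist-2, and Dist-3 then correspond to `distinct_n(tokens, 1)`, `distinct_n(tokens, 2)`, and `distinct_n(tokens, 3)`.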
The results in Table~\ref{table:table1} show that ADLM largely outperforms the baselines in language detoxification performance. Compared to GeDi, ADLM lowers the toxicity of the generated texts to 0.28 with a significantly smaller number of parameters ($1/7$ as many) and $2\times$ faster inference. Moreover, our model generates more diverse texts than the baselines.
\input{Table/analyze_distribution}
\paragraph{Ablation study.} We examine the effect of each component of our ADLM, i.e., the architectural design, dataset design, and training modules, in Table~\ref{table:ablation_training}. We observe that balancing the toxic and non-toxic data is the most important factor in constructing a well-discriminated latent space. Moreover, when we utilize a discriminator, our model discriminates texts more effectively along with the attribute embedding tokens, which supports our hypothesis that obtaining a well-discriminated projected latent space is the key factor for successful detoxification.
\paragraph{Analysis of toxicity types.}
We further examine which types of toxic texts are most strongly suppressed by our model compared to GPT-2. As shown in Figure~\ref{fig:distribution}, our model suppresses all types of toxicity in the generated texts compared to the baselines. Notably, ADLM successfully suppresses toxicity of the \emph{threat} type, which DExperts fails to detoxify. Threats are among the most frequent types of toxic sentences that GPT-2 generates, with the highest probability (0.624). This explains why DExperts is vulnerable to \emph{threats}: since DExperts ultimately employs the original latent space of GPT-2, it cannot significantly change its language generation behavior. On the other hand, our ADLM modifies the original latent space into attribute-discriminative ones, and thus can effectively suppress them. Another notable point is that all models, including ADLM, cannot handle \emph{flirtation} well. However, by checking the generated examples, we found that the Perspective API assigns high flirtation scores to sentences in which words such as women, her, she, and like appear, which results in misclassification of sentences that do not contain any flirting context, since these are commonly used words.
\input{Table/3_toxichat}
\input{Table/Dialogue_safe}
\subsection{Detoxification for Dialogue Generation}
\paragraph{Baselines.} For the detoxified dialogue generation task, we use DialoGPT~\cite{dialoGPT} as the base language model. We compare against DialoGPT, DAPT, and ATCON, the baselines introduced in \citet{toxichat}, for dialogue generation on ToxiChat~\cite{toxichat} and DiaSafety~\cite{dialoguesafety}. The details of the hyperparameters used for each model are provided in Appendix~\ref{appendix:baseline}.
\paragraph{Automatic Evaluation.} To validate dialogue detoxification performance, we evaluate responses by the percentage of bad words and offensiveness, using classifiers that predict the degree of toxicity and the types of toxic sentences~\cite{toxichat, dialoguesafety}. Further, we also test the \emph{stance} of the responses, i.e., whether they agree with the context or not. Table~\ref{table:table3} shows that our model suppresses toxic responses better than the baselines. We further examine our method on another toxic dialogue dataset, DiaSafety. As shown in Figure~\ref{fig:dialogue_safe}, our method generates safer responses across different categories of toxic dialogues. The results on both datasets show that our method achieves consistent language detoxification performance on dialogue generation tasks for diverse categories of toxic language, effectively suppressing the toxicity of the generated responses even when the model is exposed to toxic data, which is essential for real-world dialogue applications.
\subsection{Perplexity of Detoxified Texts}
To examine the quality of the generated texts, perplexity (PPL) is frequently used as an automatic measure of fluency. However, since strong detoxification methods may generate texts that largely disagree with those in the test dataset (i.e., generating non-toxic continuations for toxic prompts), a higher PPL is somewhat inevitable. As shown in Table~\ref{table:ppl}, our model generates around twice as many non-toxic continuations from toxic prompts, with toxicity reduced by as much as 46.75\% compared to the baselines, but yields 109.05\% higher PPL than DExperts. However, the increased PPL mostly results from generating text sequences that diverge from the toxic prompts in order to avoid toxic language generation, and does not necessarily imply that the quality of the generated texts is degraded. This is clearly shown by the human study (Figure~\ref{fig:human}) in the next subsection, in which the participants ranked the fluency of the language generated by our method higher, and its toxicity lower.
\subsection{Human Evaluation of Generated Texts}
Although we demonstrate the effectiveness of our method with automatic evaluation, human judgment is the most important measure of language generation quality. Thus, we performed a human evaluation of texts generated by our method, comparing them to those generated by the best-performing baselines, DExperts and GeDi (Figure~\ref{fig:human}). We evaluate the toxicity and the quality of the generated texts, e.g., grammatical correctness, topic coherency, and overall fluency, by recruiting 45 participants on Mechanical Turk. The experimental details are provided in Appendix~\ref{appendix:human}.
The results show that our model achieves the best detoxification performance under human judgment as well (lower is better), with $p<0.05$ in a paired t-test. Notably, our model is evaluated as more fluent than the baselines (higher is better). The texts generated by our model are rated as more grammatically correct and fluent than those generated by GeDi and DExperts, with p-values below 0.05 in paired t-tests. As for coherency, there was no significant difference among the compared models ($p>0.05$). These results reconfirm that our model generates fluent and effectively detoxified texts.
\input{Table/ppl}
\input{Table/human_evaluation}
\section{Conclusion}
In this paper, we proposed ADLM, a novel and effective attribute-controllable language model for efficient language detoxification. ADLM learns an attribute-discriminative latent space via a projection Transformer layer on top of the original pretrained LM and an attribute discriminator that differentiates texts by their attributes. Our method effectively detoxifies texts for both language and dialogue generation tasks, outperforming all baselines in automatic and human evaluation, without the large computational and memory overhead of existing methods that use multiple LMs or additional decoding-time computations.
\section{Limitations}
Recent Transformer-based language models are prone to generating toxic texts such as insults, threats, and profanities. Therefore, ensuring safety in language generation is a crucial task, necessary for deployment to real-world applications. We achieve this goal with an efficient solution that requires neither multiple LMs nor further pretraining on a large refined corpus, which is computationally expensive. However, even with our techniques, the language model is not guaranteed to be completely safe and may still generate toxic language, albeit at a significantly lower rate. Furthermore, when toxic prompts are provided, the model may generate incoherent sequences to avoid toxic generation, which reduces fluency compared to the original language model. Yet, this is a general limitation of detoxified language modeling, which is inevitable since no method can change the given prompts.
\section{Terminology}
\label{appendix:teminology}
Here, we provide a more detailed description of the terminology used in the manuscript.
\paragraph{Attribute.} The characteristic of the sentence in terms of toxicity. Toxic and non-toxic are types of attributes in the toxicity task.
\paragraph{Latent space.} We denote the hidden space between the Transformer blocks and the head layer of the language model as the latent space.
\paragraph{Toxicity.} The score of being harmful or unpleasant in the provided texts, ranging from 0 to 1.0. A sentence with a score of at least 0.5 is considered toxic, and a sentence with a smaller score is considered non-toxic.
\paragraph{Types of toxicity.} The Perspective API detects toxic sentences with 8 different type scores: profanity, sexually explicit, identity attack, flirtation, threat, insult, severe toxicity, and \textit{toxicity}. The results reported in the main manuscript are based on the \textit{toxicity} score.
\paragraph{Toxicity probability.} The probability of generating a toxic sentence (score $\geq 0.5$) among the 25 generations from a single prompt. For example, if five of the 25 generations have scores of at least 0.5, the toxicity probability is $5/25=0.2$.
\paragraph{Expectation of max toxicity.} Expected Maximum Toxicity (Exp.\ Max Toxicity) is the mean, over the prompts in the evaluation set, of the largest toxicity score among each prompt's 25 generations.
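The two toxicity metrics above can be computed as in the following sketch, where `scores` holds the Perspective API toxicity values of the 25 generations for one prompt:

```python
def toxicity_probability(scores, threshold=0.5):
    """Fraction of a prompt's generations that are toxic (score >= threshold);
    e.g., 5 toxic generations out of 25 gives 5/25 = 0.2."""
    return sum(s >= threshold for s in scores) / len(scores)

def expected_max_toxicity(scores_per_prompt):
    """Mean over the evaluation prompts of the maximum toxicity score
    among each prompt's generations."""
    return sum(max(s) for s in scores_per_prompt) / len(scores_per_prompt)
```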
\paragraph{Fluency.} Fluency measures how fluent the continuation is. Automatic evaluation of fluency is computed as the perplexity of the generated output under GPT-2 XL.
\paragraph{Diversity.} Diversity measures how diverse the words generated by the models are. Automatic evaluation of diversity is computed by counting the unique n-grams normalized by the total text length. Dist-1, Dist-2, and Dist-3 denote the values for 1-grams, 2-grams, and 3-grams, respectively.
\section{Experimental Setup}
\label{appendix:experiment_setup}
\subsection{Dataset}
\label{appendix:dataset}
\paragraph{Toxicity dataset.}
For the training set, we use the dataset from the \emph{Jigsaw Unintended Bias in Toxicity Classification} Kaggle challenge\footnote{\href{https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification}{Kaggle dataset}}, which is annotated by humans. We assign to the toxic class the comments that more than 50\% of the annotators labeled as toxic, and to the non-toxic class the comments that no annotator labeled as toxic. The toxic and non-toxic classes consist of 160K and 1.4M comments, respectively. Since we need to control our hidden states, we duplicate the toxic comments to match the size of the non-toxic set, balancing the two classes to form a stable representation.
For the evaluation set, we use several subsets of the RealToxicityPrompts~\cite{realtoxicprompt} dataset. The 100K set comprises all evaluation prompts from RealToxicityPrompts. The random 10K prompts consist of 5K toxic and 5K non-toxic prompts randomly sampled from RealToxicityPrompts~\cite{dexperts21ACL}. We sample 25 continuations per prompt using nucleus sampling with top-p probability 0.9, temperature 1, and a maximum continuation length of 20 tokens.
\paragraph{Toxicity dataset for dialogue generation.}
We train our model on the Reddit conversation dataset from \citet{toxichat}. Each conversation consists of a title, a post, and responses with offensiveness and stance labels, indicating whether each comment is toxic and whether it agrees with the context.
\subsection{Baseline}
\label{appendix:baseline}
\paragraph{DAPT.}
For the language detoxification task, DAPT is further trained on a non-toxic corpus, OpenWebText~\cite{OpenWeb}. The results of DAPT (small) are from \citet{realtoxicprompt}, evaluated on the random 10K prompts from RealToxicityPrompts.
\paragraph{ATCON.}
ATCON is a model that learns the distribution of the generated text conditioned on task-specific control codes. For the language detoxification task, the text is prepended with the control codes $ \big \langle \texttt{toxic} \big \rangle $ and $\big \langle \texttt{nontoxic} \big \rangle $. The results of ATCON are evaluated on the random 10K prompts from RealToxicityPrompts~\cite{realtoxicprompt}.
\paragraph{PPLM.}
PPLM consists of a classifier that backpropagates gradients to the LM multiple times to generate texts with desired attributes. Because of the high computational cost of this model, 10 sentences are sampled per prompt. For the language detoxification task, the results of PPLM are those reported by \citet{realtoxicprompt} on the random 10K prompts from RealToxicityPrompts. The model is based on GPT-2 medium.
\paragraph{GeDi.}
GeDi guides the generation of each token by determining the attribute probability of the given text, obtained via the Bayes rule by normalizing over the two attribute-conditional distributions of the next tokens. To this end, it uses two LMs: a base LM and a discriminator LM. The discriminator LM is trained as in ATCON to learn the attribute-conditional distributions, and the base LM focuses on generation under the guidance of the discriminator LM. For the language detoxification task, the results of GeDi are evaluated on the random 10K prompts from RealToxicityPrompts. We utilized the model provided by \citet{Gedi20arxiv}, which is based on GPT-2 medium.
\paragraph{DExperts.}
Under the concept of experts and anti-experts, DExperts uses three LMs: a base LM, an expert, and an anti-expert. The expert and anti-expert are trained on specific subsets of the dataset: non-toxic and toxic texts, respectively, in the language detoxification task, and positive and negative texts in the sentiment-controlled task. DExperts uses the logits of both experts to amplify or suppress logit values of the base LM so that it samples the desired vocabulary. For the language detoxification task, the results of DExperts are evaluated on the random 10K prompts from RealToxicityPrompts. We reproduced DExperts with small experts (based on GPT-2 small), which achieved the best detoxification performance among the GPT-2 sizes.
\input{Table/human_evaluation_screenshot}
\subsection{Human evaluation}
\label{appendix:human}
We conduct a human evaluation with 45 participants, as shown in Figure~\ref{appendix:human_screen}. We compare against DExperts and GeDi, the two best-performing baselines in the automatic evaluation. We first randomly choose 20 prompts from the random 10K subset. Then, for each prompt, we randomly select one of the 25 generated continuations and show the texts generated by our model, DExperts, and GeDi in random order.
Thus, for language detoxification, the 45 participants evaluated 60 continuations on i) toxicity, ii) grammatical fluency, iii) topic coherency, and iv) overall fluency. For each question, the participants gave a score from 1 to 5 on whether the provided continuation is toxic or fluent. For the results, we average the scores over all 20 sequences for each question.
We provided scoring guidelines for each question. For toxicity, scores 1, 3, and 5 mean not toxic at all, feels toxic, and very toxic (contains toxic words), respectively. For grammatical correctness, scores 1 to 5 stand for grammatically poor, weak, understandable, minor mistakes, and good. For topic coherency, scores 1, 3, and 5 correspond to a totally different topic, a similar topic but not fluent, and good coherency, respectively. For fluency, scores 1 to 5 correspond to does not make any sense, weak, limited, understandable, and good.
As shown in Figure~\ref{fig:human}, our model scores 2.24, 3.60, 3.00, and 3.39 for toxicity, grammatical correctness, coherency, and fluency, respectively. In sum, our model generates texts that are rated below ``feels toxic'', grammatically close to ``minor mistakes'', on a similar topic, and of moderate fluency.
\section{ADLM Details}
\label{appendix:details}
\subsection{Modeling Details}
We use GPT-2 from HuggingFace Transformers version 4.2.0~\cite{huggingface2020}, implemented in the PyTorch framework.
For RealToxicityPrompts~\cite{realtoxicprompt}, our ADLM is trained with a block size of 128, a batch size of 32 per GPU, a learning rate of $5e^{-5}$, and 3 epochs. The same setting is used for sentiment-controlled text generation. Since the sizes of the training datasets differ for the dialogue generation tasks, those hyperparameters are determined empirically. For ToxiChat~\cite{toxichat}, our ADLM and the baselines are trained with a batch size of 32 per GPU, a learning rate of $2e^{-5}$, and 3 epochs. For DiaSafety~\cite{dialoguesafety}, our ADLM and the baselines are trained with a batch size of 8 per GPU, a learning rate of $2e^{-5}$, and 5 epochs. The block sizes for both dialogue datasets are not truncated unless they exceed 512. For all datasets, we set $\lambda$ to 0.1 for the EWC loss and use the AdamW optimizer with $1e^{-8}$ epsilon and a linear scheduler. Training is performed on a single NVIDIA RTX 2080 Ti or Quadro RTX 8000.
\subsection{Generation}
For RealToxicityPrompts~\cite{realtoxicprompt} and sentiment-controlled text generation, we use the same generation settings for all baselines and our models, except for PPLM~\cite{PPLM2019ICLR}. We perform a total of 25 generations per prompt, with a maximum generated length of 20 tokens. For PPLM~\cite{PPLM2019ICLR}, we perform 10 generations per prompt due to computational costs. For our generation, we set $\alpha$ to 4.0 for the language detoxification task. For dialogue generation, the setup differs. For ToxiChat~\cite{toxichat}, the models generate until the end-of-sequence token appears or a maximum sequence length of 500 is reached, with $\alpha$ set to $1.5$. Lastly, for DiaSafety~\cite{dialoguesafety}, the maximum generation length is set to 128 and $\alpha$ is set to $1.5$. All generations use nucleus sampling with top-p probability $0.9$ and temperature scaling of $1.0$ for the softmax.
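Since all generations rely on nucleus (top-$p$) sampling, a self-contained sketch of this standard decoding step (independent of our model) is:

```python
import numpy as np

def nucleus_sample(logits, top_p=0.9, temperature=1.0, rng=None):
    """Top-p sampling: keep the smallest set of highest-probability tokens
    whose cumulative mass reaches top_p, renormalize, and sample."""
    rng = rng or np.random.default_rng()
    scaled = logits / temperature
    probs = np.exp(scaled - np.max(scaled))
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]          # tokens by descending probability
    cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
    keep = order[:cutoff]
    return int(rng.choice(keep, p=probs[keep] / probs[keep].sum()))
```

With $p=0.9$ and temperature $1.0$, this reproduces the sampling configuration used in our experiments.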
\section{Experiments}
\label{appendix:sentiment}
\input{Table/sentiment_exp}
\subsection{Sentiment-Controlled Text Generation}
\paragraph{Sentiment dataset.}
For the sentiment-controlled text generation task, we train our model on the sentiment movie review dataset from the \emph{Stanford Sentiment Treebank (SST-5)}~\cite{SST-5dataset2013}. Each review in the dataset is rated on a scale from 1 to 5 (very negative to very positive). Reviews with ratings of 4 to 5 are assigned as positive and those with ratings of 1 to 2 as negative. For the evaluation set, there are 2.5K prompts for each sentiment, provided by \citet{dexperts21ACL} and obtained from OWTC~\cite{OpenWeb}.
\paragraph{Baselines.} For sentiment-controlled text generation, the positive and negative DAPT~\cite{DAPT20ACL} models are independently trained on the corresponding subsets of the SST-5 dataset. Similar to ATCON, CTRL~\cite{CTRL2019arxiv}, which uses $\texttt{"Reviews Rating:}$ $\texttt{5.0"}$ and $\texttt{"Reviews Rating:}$ $\texttt{1.0"}$ as control codes, is used. The results of DAPT, CTRL, GeDi, PPLM, and DExperts on the sentiment-controlled text generation task are the values reported by \citet{dexperts21ACL}.
\paragraph{Automatic Evaluation.}
To verify that our method is generally applicable to controllable text generation tasks, we further validate our model on sentiment-controlled text generation. To this end, we consider the problem of generating continuations with the opposite sentiment to the given prompts (e.g., positive continuations for negative prompts). For automatic evaluation, to validate whether the generated text matches the target sentiment, we use HuggingFace's sentiment analysis classifier~\cite{huggingface2020}.
The results in Table~\ref{table:table_sentiment} show that our model achieves impressive performance on controlled text generation as well. This suggests that our method is applicable to any attribute-controlled text generation tasks.
\input{Table/appendix_alpha_lambda}
\subsection{Ablation experiment}
To evaluate fluency, we measure the mean perplexity of the continuations according to the GPT-2 XL model. We conduct an ablation over $\alpha$ in Eq.~\ref{eq:suppress} and $\lambda$ in Eq.~\ref{eq:ewc_loss}. As shown in Figure~\ref{appendix:alpha_lambda}, as $\alpha$ decreases and $\lambda$ increases, the toxicity increases while the perplexity decreases. Toxicity control and fluency are thus in a trade-off relationship, and either can be improved at the expense of the other by tuning the values of $\alpha$ and $\lambda$.
\subsection{Generation examples}
\label{appendix:examples}
Tables~\ref{table:appendixexample} and~\ref{table:appendixexample2} show examples generated by our model for the language detoxification task. Tables~\ref{table:appendix_dialogue} and~\ref{table:appendix_dialogue2} show examples generated by our model for the dialogue detoxification task on the ToxiChat dataset.
\input{Table/appendix_examples}
\input{Table/appendix_dialogue_examples}
\section{Introduction}
Lattice methods are widely used in studies of quantum few- and many-body problems in
nuclear, hadronic, and condensed matter systems, see
{\it e.g.}~Refs.~\cite{Lee2009_PPNP63-117,Drut2013_JPG40-043101,Chen2004_PRL92-257002,Borasoy2007_EPJA31-105,Borasoy2007_EPJA34-185}.
A necessary step in such studies is the computation of scattering phase shifts and mixing angles from an underlying microscopic lattice Hamiltonian.
Remarkably, the same problem arises in the context of experiments on optical lattices. Several groups have pioneered the use of ultracold atoms in optical lattices produced by
standing laser waves, to emulate the properties of condensed matter systems and quantum field theories~\cite{Hague:2011a,Hague:2015a,Li:2015a,Banerjee:2012a,Kuno:2014a}.
The basic concept is to tune the interactions of the atoms, both with each other and with the optical lattice, to reproduce the single-particle properties and particle-particle interactions
of the ``target theory''. Such studies often require a more general setup than a simple cubic lattice, for instance in the case of the hexagonal Hubbard model~\cite{Meng:2010},
which closely resembles the physics of graphene~\cite{Buividovich:2012nx} and carbon nanotubes~\cite{Luu:2015gpl}.
Clearly, a robust and accurate method for
computing scattering parameters on arbitrary lattices is needed.
For the scattering of particles on a cubic lattice,
L\"uscher's finite-volume method~\cite{Luescher1986_CMP105-153} uses periodic boundary conditions to infer elastic scattering
phase shifts from energy eigenvalues. The method has been widely used in lattice QCD simulations with applications to different angular
momenta~\cite{Bernard2008_JHEP08-024,Luu2011_PRD83-114508,Gockeler:2012yj,Li2013_PRD87-014502,Briceno2013_PRD88-034502} as
well as partial-wave mixing~\cite{Briceno:2013bda}, see Ref.~\cite{Drut2013_JPG40-043101} for a recent review.
An important advantage of L\"uscher's method is that periodic boundary conditions are typically already used in
lattice calculations of nuclear, hadronic, ultracold atomic, and condensed matter
systems. Since no additional boundary constraints are needed, the method is easily applied to a wide class of systems.
However, L\"uscher's method requires that the finite-volume energy
levels can be accurately determined, with errors small compared
to the separation between adjacent energy levels. This is not practical in cases such as nucleus-nucleus scattering,
where the separation between finite-volume energy levels is many orders of magnitude smaller than the total energy of the system.
Fortunately, this problem has been solved using an alternative approach called the adiabatic projection
method~\cite{Pine:2013zja,Elhatisari:2014lka,Rokash:2015hra,Elhatisari:2015iga}. There, initial cluster states are
evolved using Euclidean time projection and used to calculate an effective two-cluster Hamiltonian (or transfer matrix).
In the limit of large projection time, the spectral properties of the effective two-cluster Hamiltonian coincide with those of the original
underlying theory. This method has been applied to nuclei and ultracold atoms, while
applications to lattice QCD simulations of relativistic hadronic systems are currently being investigated.
Since the adiabatic projection method reduces all scattering systems to an effective two-cluster lattice Hamiltonian,
additional boundary conditions can be applied to the effective lattice Hamiltonian in order to compute scattering properties.
This opens the door to methods more accurate than L\"uscher's by removing the effects of the periodic boundary conditions,
which are otherwise a significant source of rotational symmetry breaking. One promising approach is to place the particles
in a harmonic oscillator potential and extract phase shifts from the energy eigenvalues~\cite{Luu2010_PRC82-034003,Stetcu2010_AP325-1644}.
Another prominent example is the method used in Refs.~\cite{Carlson1984_NPA424-47,Borasoy2007_EPJA34-185}, whereby a
``spherical wall'' is imposed on the relative separation between the two scattering particles. Phase shifts are
then determined using the constraint that the wave function vanish at the wall boundary. This method has
been applied to the two-nucleon problem in lattice
effective field theory (EFT)~\cite{Borasoy2008_EPJA35-343,Epelbaum2009_EPJA41-125,Epelbaum2010_EPJA45-335,Epelbaum2010_PRL104-142501}
and to lattice simulations of nucleus-nucleus scattering using the adiabatic projection method~\cite{Pine:2013zja,Elhatisari:2014lka,Rokash:2015hra,Elhatisari:2015iga}.
In spite of such progress in lattice scattering theory, all methods are still lacking in precision, especially when partial-wave mixing
and high angular momenta are concerned. In previous work, numerical approximations were used for
the study of coupled-channel systems~\cite{Borasoy2007_EPJA34-185}.
We now describe an extension of the spherical wall method, which enables an efficient and precise determination of
two-particle scattering parameters for arbitrary energies and angular momenta. We use angular momentum projection
and solve the lattice radial equation with spherical wall boundaries, supplemented
by an ``auxiliary potential''. We test our method on a lattice model with strong tensor interactions that induce appreciable
partial-wave mixing. We expect our method to be applicable in theoretical lattice studies of nuclear, hadronic, ultracold atomic, and condensed matter systems,
as well as in the experimental design of optical lattices. While we discuss only non-relativistic wave mechanics in our examples here, the extension to
relativistic systems simply entails replacing the non-relativistic dispersion relation with the relativistic one.
\section{Benchmark system}
We begin with the eigenvalue equation
\begin{equation}
\left[-\frac{\nabla^{2}}{2\mu}+V(\bm{r},\bm{\sigma}_{1},\bm{\sigma}_{2})\right]\psi=E\psi,\label{eq:eigenvalue}
\end{equation}
where $\bm{r}$ is the relative displacement, and $\bm{\sigma}_{i}$, with $i=1,2$, are the spins of the two scattering nucleons, with $m_N^{} \equiv 2\mu = 938.92$~MeV.
Following Ref.~\cite{Borasoy2007_EPJA34-185}, we take
\begin{eqnarray}
\hspace{-.5cm}
V \!\!\!\! &=& \!\!\!\!
C\left\{ 1+\frac{r^{2}}{R_{0}^{2}}\left[3(\hat{\bm{r}}\cdot\bm{\sigma}_{1})(\hat{\bm{r}}\cdot\bm{\sigma}_{2})-\bm{\sigma}_{1}\cdot\bm{\sigma}_{2}\right]\right\}
\nonumber \\
\!\!\!\! &\times& \!\!\!\!
\exp\left(-\frac{r^{2}}{2R_{0}^{2}}\right),
\label{eq:interaction}
\end{eqnarray}
with $C=-2.00$~MeV and $R_{0}=2.00\times10^{-2}$~MeV$^{-1}$. We only consider
states of total intrinsic spin $S=1$.
The radial equation is
\begin{equation}
\left[-\frac{1}{2\mu r}\frac{\partial^{2}}{\partial r^{2}}r+\frac{L(L+1)}{2\mu r^{2}}+V_{J}(r)\right]
\psi_{J}(r)=E\psi_{J}(r),
\label{eq:radialequation}
\end{equation}
where $L$ is the orbital angular momentum and $J$ the total angular momentum. The ``effective'' potential is
\begin{equation}
V_{J}(r)=C\left(1+\frac{2r^{2}}{R_{0}^{2}}\right)\exp\left(-\frac{r^{2}}{2R_{0}^{2}}\right),
\label{eq:uncoupledradial}
\end{equation}
for uncoupled channels, and
\begin{eqnarray}
\hspace{-.5cm}
V_{J}(r) \!\!\!\! &=& \!\!\!\!
C\left[1+\frac{r^{2}}{R_{0}^{2}}\left(\begin{array}{cc}
-\frac{2(J-1)}{2J+1} & \frac{6\sqrt{J(J+1)}}{2J+1}\\
\frac{6\sqrt{J(J+1)}}{2J+1} & -\frac{2(J+2)}{2J+1}
\end{array}\right)\right]
\nonumber \\
\!\!\!\! &\times& \!\!\!\!
\exp\left(-\frac{r^{2}}{2R_{0}^{2}}\right),
\label{eq:coupledradial}
\end{eqnarray}
for coupled ones.
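For reference, the effective potentials of Eqs.~(\ref{eq:uncoupledradial}) and~(\ref{eq:coupledradial}) can be evaluated directly; a minimal sketch (with $C$ and $R_0$ as given above, and function names of our own choosing) reads:

```python
import numpy as np

C, R0 = -2.00, 2.00e-2  # MeV and MeV^{-1}, respectively (hbar = c = 1)

def v_uncoupled(r):
    """Effective potential V_J(r) for uncoupled channels, Eq. (uncoupledradial)."""
    return C * (1.0 + 2.0 * (r / R0) ** 2) * np.exp(-r**2 / (2.0 * R0**2))

def v_coupled(r, J):
    """2x2 effective potential matrix for coupled channels, Eq. (coupledradial)."""
    off = 6.0 * np.sqrt(J * (J + 1.0)) / (2.0 * J + 1.0)
    M = np.array([[-2.0 * (J - 1.0) / (2.0 * J + 1.0), off],
                  [off, -2.0 * (J + 2.0) / (2.0 * J + 1.0)]])
    return C * (np.eye(2) + (r / R0) ** 2 * M) * np.exp(-r**2 / (2.0 * R0**2))
```

At $r=0$ both expressions reduce to $C$ (times the identity in the coupled case), as expected from the equations above.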
In the continuum, phase shifts and mixing angles are obtained by solving Eq.~(\ref{eq:radialequation}) using the
potentials~(\ref{eq:uncoupledradial}) and~(\ref{eq:coupledradial}) with appropriate boundary conditions.
As rotational
symmetry is broken by the lattice, the energy eigenstates of Eq.~(\ref{eq:eigenvalue}) belong to the irreducible
representations (\textit{irreps}) $A_{1}$, $A_{2}$, $E$, $T_{1}$
and $T_{2}$ of the cubic group $SO(3,Z)$ rather than the
full $SO(3)$ rotational group~\cite{Borasoy2007_EPJA34-185,Johnson:1982yq,Lu2014_PRD90-034507}.
For cubic periodic boundary conditions, as in L\"uscher's method~\cite{Luescher1986_CMP105-153}, the cubic symmetry remains exact, thus our solutions can still be classified
by cubic~\textit{irreps}. Nevertheless, the rotational symmetry breaking due to the boundaries makes it difficult to identify states of high angular momentum and to
extract scattering parameters. In order to remove these effects, we
impose a hard spherical wall of radius $R_{W}$,
\begin{equation}
V\rightarrow V+\Lambda\theta(r-R_{W}),
\end{equation}
where $\theta$ is the Heaviside step function and $\Lambda$ is a (large) positive constant, intended to sufficiently suppress the wave function beyond $R_{W}$ (we set $\Lambda = 10^8$~MeV).
We take $R_W$ to exceed
the range of the interaction, such that the boundary is placed in the asymptotic (non-interacting) region. We also take $2R_W$ to be less than the difference of the box size and
the interaction range, which ensures that cubic boundary effects remain negligible.
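As a minimal single-channel illustration of the spherical wall method (a direct radial-mesh discretization for exposition, not the lattice code used in practice), one can solve for the wall eigenvalues of $u(r)=r\psi(r)$ and convert each energy $E$ to a phase shift via the condition that the free asymptotic solution $\cos\delta_L\, j_L(kr) - \sin\delta_L\, y_L(kr)$ vanishes at $r=R_W$, i.e.\ $\tan\delta_L = j_L(kR_W)/y_L(kR_W)$:

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

mu = 938.92 / 2.0  # reduced mass in MeV (hbar = c = 1)

def wall_spectrum(V, L, R_wall, n_pts=400):
    """Eigenvalues of the radial equation for u(r) = r psi(r) with u(0) = 0
    and a hard wall u(R_wall) = 0, via second-order finite differences."""
    r = np.linspace(0.0, R_wall, n_pts + 2)[1:-1]   # interior mesh points
    h = r[1] - r[0]
    kin = (2.0 * np.eye(n_pts) - np.eye(n_pts, k=1)
           - np.eye(n_pts, k=-1)) / (2.0 * mu * h**2)
    pot = np.diag(L * (L + 1) / (2.0 * mu * r**2) + V(r))
    return np.linalg.eigvalsh(kin + pot)

def phase_shift(E, L, R_wall):
    """delta_L from a wall eigenvalue E: the free asymptotic solution
    cos(d) j_L(k r) - sin(d) y_L(k r) must vanish at r = R_wall."""
    k = np.sqrt(2.0 * mu * E)
    return np.arctan2(spherical_jn(L, k * R_wall), spherical_yn(L, k * R_wall))
```

For $V=0$ and $L=0$ the spectrum reduces to $E_n = (n\pi/R_W)^2/(2\mu)$ and the extracted phase shifts vanish (mod $\pi$), which provides a simple consistency check.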
\section{Angular momentum decomposition}
Let $|\vec{r}\rangle
\otimes |S_z \rangle$ denote a two-body quantum state with separation $\vec r$ and $z$-component of total intrinsic spin $S_z$. We define radial lattice
coordinates $(\rho,\varphi)$ by grouping equidistant mesh points,
as shown in Fig.~\ref{fig:Schematic}.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{schematicCombined}
\end{center}
\caption{\label{fig:Schematic}(Color online) Left panel: Grouping of
mesh points according to lattice coordinates $(\rho,\varphi)$, with lattice spacing $a$. Right panel:
Spherical wall radius $R_W$, interaction regions I-III as discussed in the text and
effective potential $V_{J}(r)$ for uncoupled channels with $V_0=-25$~MeV.
}
\end{figure}
To construct radial wave functions, we
project onto states with total angular momentum $(J,J_z)$ in the continuum limit, using
\begin{eqnarray}
|m\rangle^{(J),(J_z)}_{(L)} \!\!\!\! &\equiv& \!\!\!\!
\sum_{\vec{n},L_z,S_z}C^{J,J_z}_{L,L_z,S,S_z}Y_{L,L_z}^{\,}(\hat{n}) \nonumber \\
\!\!\!\! &\times& \!\!\!\!
\delta_{\rho_m,|\vec{n}|}
\, |\vec{n}\rangle \otimes |S_z \rangle,
\label{eq:basis}
\end{eqnarray}
where the $Y_{L,L_z}$ are spherical harmonics with orbital angular momentum $(L,L_z)$. The $C^{J,J_z}_{L,L_z,S,S_z}$ are Clebsch-Gordan coefficients.
The parentheses around $J$, $J_z$ and $L$ on the left hand side signify that these quantum numbers are not exactly good quantum numbers.
Note that Eq.~(\ref{eq:basis}) is applicable to arbitrary geometries. Here,
$\vec n$ runs over all lattice points and the ``radial shell'' is given by the integer $m$. Then, $\rho_m$ is the distance from the origin in units of the lattice spacing $a$, and $\delta_{\rho_m,|\vec n |}$
picks out all lattice points for which $\rho_m = |\vec{n}|$.
It may be practical (especially for non-cubic lattices) to relax this condition to include all lattice points with $|\rho_m - |\vec{n}|| < \delta$ for small, positive $\delta$.
On the lattice, the $|m\rangle^{(J),(J_z)}_{(L)}$ form a complete (but non-orthonormal) basis. We therefore compute the
norm matrix of these states before solving for the eigenstates of the lattice Hamiltonian.
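As a concrete illustration of the shell construction in Eq.~(\ref{eq:basis}), the following Python sketch (ours, for illustration only; spin indices and Clebsch-Gordan coefficients are omitted, and only explicit low-$L$ spherical harmonics are used) groups lattice points into radial shells and evaluates the discrete inner products that enter the norm matrix:

```python
import math
from collections import defaultdict

def radial_shells(n_max):
    """Group integer lattice points n by their distance |n| from the
    origin, i.e. the radial shells rho_m of the projected basis."""
    shells = defaultdict(list)
    for nx in range(-n_max, n_max + 1):
        for ny in range(-n_max, n_max + 1):
            for nz in range(-n_max, n_max + 1):
                r = math.sqrt(nx * nx + ny * ny + nz * nz)
                if 0 < r <= n_max:
                    shells[round(r, 6)].append((nx, ny, nz))
    return dict(sorted(shells.items()))

# explicit low-L spherical harmonics, enough for an S/P-wave illustration
def Y00(n):
    return 1.0 / math.sqrt(4.0 * math.pi)

def Y10(n):
    r = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
    return math.sqrt(3.0 / (4.0 * math.pi)) * n[2] / r

def overlap(shell, Ya, Yb):
    """Inner product of two projected states on one radial shell,
    evaluated as a discrete sum over its lattice points; deviations of
    these overlaps from their continuum values signal rotational
    symmetry breaking."""
    return sum(Ya(n) * Yb(n) for n in shell)

shells = radial_shells(2)
s1 = shells[1.0]                   # the 6 nearest-neighbour points
norm_00 = overlap(s1, Y00, Y00)    # discrete norm, ~ 6 / (4 pi)
mix_01 = overlap(s1, Y00, Y10)     # vanishes here by symmetry
```

On this innermost shell the $S$-$P$ overlap vanishes exactly; on a generic lattice, nonzero off-diagonal overlaps of this kind are precisely the source of the unphysical mixing discussed below.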
We find that rotational symmetry breaking is almost entirely
due to the non-zero lattice spacing $a$.
As we take
$a \to 0$ at fixed $R_W$, rotational symmetry is exactly restored. The degree of mixing between different total angular
momenta $J$ and $J'$ is a useful indicator of rotational symmetry breaking. Such effects can be interpreted as arising
from the non-orthogonality of wave functions in different partial waves when their inner product is computed as a sum over discrete lattice points.
The degree of mixing is difficult to estimate {\it a priori}, as it depends strongly on the details of the interaction.
\begin{table*}
\caption{\label{tab:Energy-levels-calculated}Energy levels and differences $\Delta$ (in MeV) with
(w/) and without (w/o) unphysical $J$-mixing matrix elements. In the former case, we compute the eigenstates of the lattice Hamiltonian without
a spherical harmonic projection.}
\begin{center}
\begin{tabular}{|llccc>{\centering}p{0.3cm}llccc|}
\hline
\multicolumn{5}{|c}{Even parity} & & \multicolumn{5}{c|}{Odd parity}\tabularnewline
\cline{1-5} \cline{7-11}
state & \hspace{-.15cm}\textit{irrep} & w/ & w/o & $\Delta$ & & state & \hspace{-.15cm}\textit{irrep} & w/ & w/o & $\Delta$\tabularnewline
\hline
$1{}^{3}S(D)_{1}$ & $T_{1}$ & 0.037 & 0.038 & 0.001 & & $1^{3}P_{1}$ & $T_{1}$ & 0.917 & 0.918 & 0.001\tabularnewline
$1^{3}D_{2}$ & $E$ & 2.764 & 2.766 & 0.002 & & $1^{3}P(F)_{2}$ & $E$ & 1.795 & 1.796 & 0.001\tabularnewline
$1^{3}D(G)_{3}$ & $T_{1}$ & 3.347 & 3.351 & 0.004 & & $1^{3}P_{0}$ & $A_{1}$ & 3.048 & 3.053 & 0.005\tabularnewline
$1^{3}G_{4}$ & $A_{1}$ & 6.562 & 6.567 & 0.005 & & $1^{3}F_{3}$ & $A_{2}$ & 4.616 & 4.620 & 0.004\tabularnewline
$1^{3}G_{4}$ & $T_{1}$ & 6.624 & 6.637 & 0.013 & & $1^{3}F(H)_{4}$ & $A_{1}$ & 4.998 & 5.003 & 0.005\tabularnewline
\hline
\end{tabular}
\end{center}
\end{table*}
\begin{figure}
\includegraphics[width=\columnwidth]{Hamiltonian}
\caption{\label{fig:Hamiltonian}(Color online) Illustration of rotational symmetry breaking effects in the Hamiltonian matrix, given in the basis of Eq.~(\ref{eq:basis}).
The colors show the magnitude of the matrix elements. To study
unphysical mixings, we remove the tensor component of $V_J(r)$. The resulting Hamiltonian matrix should ideally be block-diagonal in the $S$-, $D$- and $G$-waves {\it etc.}
Clearly, the matrix elements that cause unphysical mixings are suppressed by several orders of magnitude.
In each block, the row and column indices represent the radial coordinates of the mesh points. For higher partial waves,
entire ``radial shells'' $\rho_m^{}$ vanish due to the angular dependence of the wave function, and such redundant rows and columns have been removed.}
\end{figure}
Given a simple cubic lattice with a cubic-invariant interaction, unphysical $J$-mixing only occurs between cubic \textit{irreps} of the same type. If the objective is to describe a
rotationally invariant system on the lattice, then we may simply drop all unphysical couplings between channels with different $J$. We find
that rotational symmetry breaking is numerically insignificant at low energies in the spherical wall method. Still, it is instructive to study the sizes of the unphysical
$J$-mixings. For this purpose, we use a simple cubic lattice with $a=(100~{\rm MeV})^{-1}$ and $R_{W}=10.02a$.
In the radial basis~(\ref{eq:basis}), the Hamiltonian matrix becomes nearly block-diagonal, with each block corresponding to a specific $J$.
The non-block-diagonal elements induce unphysical $J$-mixing.
In Table~\ref{tab:Energy-levels-calculated}, we examine the lowest energy levels with and without
$J$-mixing matrix elements. When $J$-mixing is included, we solve directly for the eigenstates of the lattice Hamiltonian without
a spherical harmonic projection.
In Fig.~\ref{fig:Hamiltonian}, we show the Hamiltonian matrix elements in the projected basis defined in Eq.~(\ref{eq:basis}). In order to focus entirely
on unphysical mixings caused by rotational symmetry breaking, we have neglected the tensor component of $V_J^{}(r)$ in Fig.~\ref{fig:Hamiltonian}.
The magnitude of such unphysical mixing matrix elements is found to be greatly suppressed.
\section{Auxiliary potential}
We first consider uncoupled channels, where
$V$ vanishes beyond an ``inner'' radius $R_{I}$.
A hard wall at $R_W$ gives access to discrete energy eigenvalues only, and a very large box is needed
at low energies. To resolve these issues, we define an ``outer'' radius $R_{O}$, between $R_{I}$ and $R_{W}$, as shown in
Fig.~\ref{fig:Schematic}.
We also introduce a Gaussian
``auxiliary'' potential in region~III,
\begin{equation}
V_{{\rm aux}}(r) \equiv
V_0 \: \exp\left[-(r-R_{W})^{2}/a^{2}\right], \label{eq:auxiliarypotential}
\end{equation}
with $R_{O}\leq r\leq R_{W}$,
where the separation between $R_{O}$ and $R_W$ is chosen such that $V_{{\rm aux}}$ is negligible at $R_{O}$.
Note that $V_{{\rm aux}}$ vanishes in regions~I and~II. The energy eigenvalues can now be adjusted continuously as a function of $V_0$.
In Fig.~\ref{fig:Schematic}, we show $V_{J}(r)$ for $V_0=-25$~MeV.
In order to extract phase shifts, we express $\psi(r)$ in region~II as
\begin{equation}
\psi(r)\cong Ah_{J}^{-}(kr)-Bh_{J}^{+}(kr), \label{eq:sphericalbesselexpan}
\end{equation}
for $R_{I}\leq r\leq R_{O}$,
where $h_{J}^{+}(kr)$ and $h_{J}^{-}(kr)$ are the outgoing and incoming spherical Hankel functions, respectively, and $k=\sqrt{2\mu E}$.
The constants $A$ and $B$ can be determined {\it e.g.} by a least-squares fit in region~II.
We note that
\begin{equation}
B=SA,
\label{eq:scateq}
\end{equation}
with $S\equiv\exp(2i\delta_{J})$,
from which $\delta_{J}$ can be obtained.
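To make the extraction of Eqs.~(\ref{eq:sphericalbesselexpan})--(\ref{eq:scateq}) concrete, the sketch below (our illustration, not the production code; restricted to $L=0$, where the Hankel functions have simple closed forms) fits $A$ and $B$ by linear least squares in region~II and recovers $\delta$ from $S=B/A$:

```python
import cmath

def h_plus(x):
    """Outgoing spherical Hankel function h_0^+ (L = 0 only)."""
    return -1j * cmath.exp(1j * x) / x

def h_minus(x):
    """Incoming spherical Hankel function h_0^-."""
    return 1j * cmath.exp(-1j * x) / x

def fit_phase_shift(r_vals, psi_vals, k):
    """Least-squares fit of psi(r) = A h^-(kr) - B h^+(kr),
    followed by delta = log(B/A) / (2i)."""
    f1 = [h_minus(k * r) for r in r_vals]      # coefficient of A
    f2 = [-h_plus(k * r) for r in r_vals]      # coefficient of B
    # 2x2 normal equations (M^H M) c = M^H psi, solved by Cramer's rule
    a11 = sum(abs(v) ** 2 for v in f1)
    a22 = sum(abs(v) ** 2 for v in f2)
    a12 = sum(v.conjugate() * w for v, w in zip(f1, f2))
    b1 = sum(v.conjugate() * p for v, p in zip(f1, psi_vals))
    b2 = sum(v.conjugate() * p for v, p in zip(f2, psi_vals))
    det = a11 * a22 - a12 * a12.conjugate()
    A = (b1 * a22 - a12 * b2) / det
    B = (a11 * b2 - a12.conjugate() * b1) / det
    S = B / A
    return (cmath.log(S) / 2j).real            # delta (mod pi)

# synthesize an asymptotic wave with a known phase shift and recover it
k, delta_true, amp = 0.7, 0.42, 1.3
r_vals = [8.0 + 0.1 * i for i in range(41)]    # sample radii in region II
psi_vals = [amp * h_minus(k * r)
            - amp * cmath.exp(2j * delta_true) * h_plus(k * r)
            for r in r_vals]
delta = fit_phase_shift(r_vals, psi_vals, k)
```

In practice the fit is applied to the radial eigenfunctions of the lattice Hamiltonian; the round trip above merely checks that the fitting machinery returns the phase shift that was put in.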
For coupled channels, $\psi$ has two components with $L = J \pm 1$.
Given Eq.~(\ref{eq:coupledradial}), both satisfy the spherical Bessel equation in region~II, and are therefore of
the form~(\ref{eq:sphericalbesselexpan}).
If we denote $A\equiv(A_{J-1},A_{J+1})^{T}$ and $B\equiv(B_{J-1},B_{J+1})^{T}$, the $S$-matrix couples
channels with $L=J\pm 1$. In the Stapp parameterization~\cite{Stapp1957_PR105-302},
\begin{eqnarray}
S &\equiv& \left[\begin{array}{cc}
\exp(i\delta_{J-1}) \\
& \exp(i\delta_{J+1})
\end{array}\right]
\nonumber \\
&\times& \left[\begin{array}{cc}
\cos(2\epsilon_{J}) & \hspace{.48cm} i\sin(2\epsilon_{J}) \\
i\sin(2\epsilon_{J}) & \hspace{.48cm} \cos(2\epsilon_{J})
\end{array}\right]
\nonumber \\
&\times& \left[\begin{array}{cc}
\exp(i\delta_{J-1})\\
& \exp(i\delta_{J+1})
\end{array}\right],\label{eq:Stapper}
\end{eqnarray}
where $\epsilon_{J}$ is the mixing angle.
When solving $S$ from Eq.~(\ref{eq:scateq}) as in the uncoupled case, we encounter a subtle
problem. For a simple hard wall boundary, only one independent solution per lattice energy eigenvalue is obtained. In order to determine $S$
unambiguously, two linearly independent vectors $A$ and $B$ are needed.
In Ref.~\cite{Borasoy2007_EPJA34-185}, this
problem was circumvented by taking two eigenfunctions with approximately
the same energy and neglecting their energy difference. However, such a procedure introduces
significant uncertainties.
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{phaseshiftsummary}
\end{center}
\caption{\label{fig:summaryphaseshifts} (Color online)
Phase shifts and mixing angles for $J\leq4$ and $S=1$. Full, open
and ``half-open'' squares correspond to $V_0=0$, $V_0=-25$~MeV and $V_0=-20$~MeV, respectively.
For $V_0=-20$~MeV, only partial results are shown in order to reduce clutter.
Solid lines denote continuum results.}
\end{figure*}
As the potential~(\ref{eq:coupledradial}) is real
and Hermitian, an exact time-reversal symmetry results. We now add to $V_{J}(r)$ an
imaginary component,
\begin{equation}
V_{J}(r)\rightarrow V_{J}(r)+\left[\begin{array}{cc}
& iU_{\rm aux}(r)\\
-iU_{\rm aux}(r)
\end{array}\right],
\end{equation}
where $U_{\rm aux}(r)$ is an arbitrary, real-valued function with
support in region~III only. This leaves $V_{J}(r)$ Hermitian and the energy eigenvalues real, while
the time-reversal symmetry is broken. Also, $\psi$
and $\psi^{*}$ are now linearly independent and satisfy Eq.~(\ref{eq:radialequation}) in regions~I and~II with identical
energy eigenvalues. In addition to Eq.~(\ref{eq:scateq}),
we have the conjugate expression,
\begin{equation}
A^{*}=SB^{*},\label{eq:stateq2}
\end{equation}
and the $S$-matrix
\begin{equation}
S=\left[\begin{array}{cc}
B & A^{*}\end{array}\right]\left[\begin{array}{cc}
A & B^{*}\end{array}\right]^{-1},\label{eq:coupledSmatrix}
\end{equation}
from~(\ref{eq:scateq}) and~(\ref{eq:stateq2}).
Phase shifts and mixing angles can then be obtained from Eq.~(\ref{eq:Stapper}).
Note that the inverse in Eq.~(\ref{eq:coupledSmatrix}) cannot be computed without $U_{\rm aux}(r)$,
since in that case $A=-B^{*}$.
We use
\begin{equation}
U_{\rm aux}(r) = U_0 \, \delta_{r, r_0}, \label{eq:deltaauxiliarypotential}
\end{equation}
for $R_{O}\leq r_{0}\leq R_{W}$,
where $r_{0}$ is a radial mesh point in region~III and $U_0$ is an
arbitrary real constant. We find that the distortion of the energy eigenvalues and radial wave function introduced by this choice
is minimal. The same methodology
we have applied here for coupled partial waves can also be applied to more general problems with different scattering constituents.
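A minimal numerical sketch (ours, not the authors' implementation) of the reconstruction in Eqs.~(\ref{eq:coupledSmatrix}) and~(\ref{eq:Stapper}): a Stapp-form $S$-matrix generates $B = SA$ for one solution vector, the complex-conjugate solution supplies the second column, and $S$ is then recovered and re-parameterized:

```python
import cmath
import math

def mat2(a, b, c, d):
    return [[a, b], [c, d]]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(X):
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / det, -X[0][1] / det],
            [-X[1][0] / det, X[0][0] / det]]

def stapp(d_minus, d_plus, eps):
    """Build the 2x2 S-matrix in the Stapp parameterization."""
    D = mat2(cmath.exp(1j * d_minus), 0, 0, cmath.exp(1j * d_plus))
    K = mat2(math.cos(2 * eps), 1j * math.sin(2 * eps),
             1j * math.sin(2 * eps), math.cos(2 * eps))
    return mul(mul(D, K), D)

def s_from_solutions(A, B):
    """S = [B  A*][A  B*]^{-1}, with A, B the coefficient vectors of one
    eigenstate; the conjugate solution supplies the second column."""
    left = mat2(B[0], A[0].conjugate(), B[1], A[1].conjugate())
    right = mat2(A[0], B[0].conjugate(), A[1], B[1].conjugate())
    return mul(left, inv(right))

# round trip: generate B = S A from a Stapp-form S, then reconstruct S
S = stapp(0.3, -0.1, 0.05)
A = [1.0 + 0.2j, 0.4 - 0.7j]
B = [S[0][0] * A[0] + S[0][1] * A[1],
     S[1][0] * A[0] + S[1][1] * A[1]]
S_rec = s_from_solutions(A, B)
delta_minus = cmath.phase(S_rec[0][0]) / 2.0
eps_J = 0.5 * math.atan2(abs(S_rec[0][1]), abs(S_rec[0][0]))
```

Because the Stapp-form $S$ is unitary and symmetric, $A^{*}=SB^{*}$ holds automatically, and the matrix $[A\;B^{*}]$ is invertible for a generic complex $A$, which is exactly what the imaginary auxiliary potential guarantees on the lattice.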
\section{Numerical results}
We benchmark our method numerically with
the interaction~(\ref{eq:interaction}) using a cubic lattice with $a = (100~{\rm MeV})^{-1}$ ($\pi/a=314$~MeV), box size $35a$, and
we take $R_{I} = 9.02a$, $R_{O} = 12.02a$, and $R_{W} = 15.02a$.
For all
channels, we use the real auxiliary potential~(\ref{eq:auxiliarypotential}), while for coupled
channels we add the complex auxiliary potential~(\ref{eq:deltaauxiliarypotential}) with $U_0=20.0$~MeV and
$r_{0} \simeq R_{W}$.
In Fig.~\ref{fig:summaryphaseshifts}, we show our lattice phase
shifts and mixing angles. We compare with continuum
results, obtained by solving the Lippmann-Schwinger equation
for each channel.
All our lattice results agree well with the continuum ones,
from threshold to a relative center-of-mass momentum of $p_{{\rm CM}} \equiv k =140$~MeV.
We note the marked improvement over Ref.~\cite{Borasoy2007_EPJA34-185} for the same benchmark system.
\section{Application to arbitrary lattices}
While L{\"u}scher's method has been extended
to asymmetric rectangular boxes~\cite{Li:2007ey}, no standard method yet exists for an arbitrary lattice.
Our method can be used to characterize particle-particle interactions on arbitrary lattices, in any number of spatial dimensions.
This is significant for optical lattices, as the lattice geometry is then engineered to reproduce the single-particle energies of a given condensed matter or
quantum field theoretical system. Anisotropic lattices exhibit more breaking of rotational
invariance than a simple cubic lattice does. This is often an essential feature, {\it e.g.}\ in the crossover from a three-dimensional
system to a layered two-dimensional one. In Fig.~\ref{fig:anisotropiclattice}, we show the $^1S_0$ phase shift on an anisotropic rectangular lattice,
where the spacing along the $z$ axis, $a_z$, exceeds those along the $x$ and $y$ axes, denoted collectively by $a$.
The unit cell volume is $(100~{\rm MeV})^{-3}$ in all cases.
While we find good agreement with the continuum up to $a_z \simeq 1.4a$, this breaks down when
$a_z$ becomes comparable to the range of the interaction, with increasing deviation at high $p_{{\rm CM}}$.
Such a crossover to two-dimensional behavior can be characterized in terms of mixing between the $^1S_0$ and
$^1D_2$ ($J_z = 0$) partial waves, an effect of rotational symmetry breaking. The low-energy particle-particle interactions of any lattice system can be similarly described.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{anisotropicLattice}
\end{center}
\caption{\label{fig:anisotropiclattice} (Color online)
Phase shift for the $^1S_0$ channel on anisotropic rectangular lattices.
Circles, triangles, diamonds and squares denote results for lattice spacings $a_z=1.2a$, $a_z=1.4a$, $a_z=3.0a$ and $a_z=5.0a$, respectively.
The dashed lines are intended as a guide to the eye.}
\end{figure}
\section{Summary and discussion}
We have described a general and systematic method for the calculation of scattering parameters on arbitrary lattices, which we
have benchmarked using a lattice model of a finite-range interaction with a strong tensor component. Extensions to more general interactions are straightforward.
The Coulomb interaction can be accounted for by replacing the spherical Bessel functions by Coulomb functions, and by defining the
distance between particles as the minimum distance on a periodic lattice. The spherical wall then removes unphysical boundary effects. When combined with the adiabatic projection method, the techniques we have discussed can be applied to any scattering system in nuclear, hadronic, ultracold atomic, or condensed matter physics. We expect our method to be applicable to optical lattice experiments, in addition to its immediate usefulness for lattice studies
in nuclear, hadronic, and condensed matter theory. In fact, the method proposed here has already been
used to significantly improve the adiabatic projection method, as detailed in Ref.~\cite{Elhatisari:2016hby}.
\section*{Acknowledgments}
We are grateful for discussions with Serdar Elhatisari, Dan Moinard and Evgeny Epelbaum.
We acknowledge partial financial support from the Deutsche Forschungsgemeinschaft
(Sino-German CRC 110), the Helmholtz Association (Contract No.\ VH-VI-417),
BMBF (Grant No.\ 05P12PDTEE), the U.S. Department of Energy (DE-FG02-03ER41260),
the Chinese Academy of Sciences (CAS) President's International Fellowship Initiative (PIFI) (Grant No.\
2015VMA076) and the Magnus Ehrnrooth Foundation
of the Finnish Society of Sciences and Letters.
\section*{References}
\section{INTRODUCTION}
With the development of artificial intelligence, autonomous driving is attracting increasingly more attention in both industry and academia.
For autonomous vehicles~(AVs), the perception module, as the most upstream part, is vital for downstream tasks like localization, prediction, decision making, and path planning.
Within the perception system, the sensor configuration plays the most crucial and fundamental role: it determines what can be perceived by the AV, i.e., the upper bound of its perception capability. Therefore, before developing an autonomous driving system, it is important to design a sensor configuration that provides powerful perception of the surrounding environment.
As sensing technologies grow rapidly, there exist various types of sensors that can be adopted~\cite{carballo2020libre}. Modern AVs usually fuse multiple sensors of different modalities, such as LiDAR, camera, and radar, to achieve robust perception of objects of different sizes under different conditions. How to select the set of sensors and place them at the correct positions is a critical problem for the AV architect.
Few studies have investigated sensor configuration evaluation and design. The mainstream solutions are heuristic approaches based on human experience combined with vehicle information and the field of view~(FOV) of the sensors.
They usually rely on straightforward manual calculations, for example, adjusting the height of a LiDAR mounted on the top of the vehicle to reduce the blind spot caused by the occluding roof~\cite{open_autonomous_driving_2019}.
However, such solutions only roughly formulate the perception potential as a binary problem, i.e., whether a region of space lies within or beyond the FOV of the sensor. Even within the FOV, the number of pixels or laser points also significantly influences the results. Thus, these approaches are too coarse to quantitatively evaluate the perception abilities of a sensor configuration.
In addition, some works focus on LiDAR configuration~\cite{mou2018optimal, liu_2019}; however, such methods cannot be applied to other kinds of sensors.
As a result, we aim to formulate a unified framework that quantitatively evaluates the perception capability of different sensor types and optimizes the entire sensor configuration design based on the proposed evaluation metric.
In this paper, the perception potential of sensors is modeled as the theoretical possibility of objects being accurately perceived in the surrounding environment.
We address the sensor configuration optimization problem within the conditional entropy framework of Bayesian probability theory and formulate the detected targets as a Gaussian distribution conditioned on the sensor configurations.
Based on this formulation, a new metric, \emph{perception entropy}, is introduced to measure the performance of different sensor configurations. It is calculated by estimating the uncertainty of the detected targets under different configurations. The lower the entropy is, the lower the uncertainty is and, correspondingly, the better the specific configuration will be.
Therefore, our goal becomes finding the configuration with the lowest perception entropy.
Additionally, our method covers sensors of different modalities elegantly by utilizing different reasonable sensor fusion strategies.
The main contributions of this paper are four-fold:
\textit{(i)} We propose a novel metric, perception entropy, to evaluate the perception potential of a particular sensor configuration, which can be used for sensor configuration design containing multiple sensors of different modalities.
\textit{(ii)} We find that the average precision~(AP) of the perception algorithms has a specific relationship with the number of pixels or cloud points on the targets. Then we use this feature to estimate the uncertainty of the targeted objects based on different sensor configurations.
\textit{(iii)} Reasonable sensor fusion strategies are proposed to unify different types of sensors into a single framework, which makes our method suitable for multi-modal sensor configuration design.
\textit{(iv)} To the best of our knowledge, this is the first method to tackle the multi-modal sensor configurations problem for AVs.
\section{RELATED WORK}
\subsection{Optimize Sensor Configuration}
Existing methods of optimizing sensor configurations only focus on sensors of a single modality.
\cite{Dybedal2017OptimalPO} found the optimal 3D sensors mounting positions to achieve the largest field of view through mixed-integer linear programming.
\cite{Rahimian2017OptimalCP} optimized the camera's pose for motion capture systems by computing target point visibility in the presence of dynamic occlusion from cameras.
To decrease the computation burden of previous mixed-integer programming methods, ~\cite{Dybedal2020GPUBasedOO} tried to solve the optimal placement of 3D sensors by exploiting the parallel processing and 3D architecture of a CUDA-based GPU.
In the field of autonomous driving, Mou et al.~\cite{mou2018optimal} first proposed a lattice-based approach combined with a cylinder-based approximation to solve the LiDAR configuration model. However, this method suffers from the curse of dimensionality and high computation cost, which makes it impractical for real-world applications.
Liu et al. ~\cite{liu_2019} extended this work and proposed a new bionic metric named volume to surface area ratio (VSR) to analyze the trade-offs between perception capability and design cost, but their modeling structure is still complex for real-world application and hardly reaches the global optimum despite simplifying the representation.
Besides, there seems little relationship between their bionic metric VSR and the performance of perception algorithms, which also severely restricts the deployment of this approach to practical AVs' design.
Researchers have also studied optimal sensor placement in the general setting of solving a linear inverse problem, which aims at determining the model parameters that produce the sensor measurement, mainly through greedy algorithms~\cite{ranieri2014near,jiang2016sensor,jiang2019group}. However, for sensor placement on AVs, it is hard to represent the sensor measurements and configurations as state vectors, so those algorithms are not applicable.
\subsection{Information Theory}
Approaches have also been proposed using standard Bayesian information theory to find the optimal sensor placement in other applications. Wang et al. modeled a sensor's potential of reducing the target state uncertainty using the entropy of the projection of the current state on the sensor subtracted by the entropy of the sensor noises for the task of target localization and tracking~\cite{wang2004entropy}. Their method only deals with the selection of one sensor in a 2D environment with only one object. Yang et al. extended this work by considering the placement of multiple heterogeneous sensors~\cite{yang2013optimal}. They developed an optimal strategy that maximizes the determinant of the Fisher information matrix by assuming the target has a Gaussian Prior. Furthermore, Krause et al. addressed the problem of optimal sensor placement in Gaussian processes of spatial phenomena by maximizing the mutual information between sensor locations~\cite{krause2008near}. \cite{naeem2009cross} applied cross-entropy optimization to the problem of selecting a subset of sensors to minimize parameter estimation error. These methods assume the perception results are deterministic by a fixed function. However, the perception algorithms used in AVs are usually based on deep neural networks, and hence the perception potential can be hardly optimized by those techniques. In this work, we propose a novel metric that incorporates the information theory, the detection performance of the neural networks, and sensor measurement together to formulate a unified framework for multiple sensors configuration design and evaluation.
\section{METHODOLOGY}
In this section, we first introduce the two concepts, \textit{perception space} and \textit{sensor measurement}, and review the basic knowledge of conditional entropy.
Perception space defines the space we care about in the perception problem. Sensor measurement refers to the information of the target we can obtain for the sensor configuration, like the cloud points or pixels.
Then we formulate the sensor configuration optimization problem within the conditional entropy framework. Under this formulation, perception entropy is introduced to measure the uncertainty of the perception results conditioned on the sensor configuration and corresponding sensor measurement of the targets.
Thus, we can get the perception entropy for a single sensor.
Finally, the fusion strategy is proposed to achieve a reasonable evaluation for a sensor configuration containing different types of sensors.
\subsection{Perception Space}
To simplify the sensor configuration problem and simulation calculation, it is common to set up a perception space where the perception system is operated~\cite{mou2018optimal, liu_2019}. We only consider the perception potential within this space. Here we follow the conventional ground coordinate system $\mathrm{G}$ with the $x$-axis pointing forwards, the $y$-axis pointing leftwards, and the $z$-axis pointing upwards, with the origin lying at the center of the $x-y$ plane, where the AV is placed. As shown in Fig.~\ref{fig:space}, the AV is visualized in the perception space to determine exact sensor installation candidates, for more fine-grained simulation especially nearby the vehicle.
\figspace
\subsection{Sensor Measurement}
\label{subsec:meas}
We introduce a new concept, sensor measurement, to represent the perceptual signals from the targeted object to the candidate sensor configuration. It is an observation of the target for the specific setting, which depends on two factors, the target and the sensor configuration.
For LiDAR, it denotes the 3D scanning point clouds on a targeted object with the location under LiDAR coordinate and intensity information of each reflected point.
For camera, it denotes a cluster of pixels that covers the target with color or darkness information.
We notice that mainstream 3D detection algorithms for LiDAR and monocular cameras usually use 3D bounding boxes outlining the points or pixels to define where the candidate object is.
Empirically, compared with fine-grained information such as the intensity of the cloud points, the RGB values of pixels, or their distributions, the sensor configuration mainly influences their numbers.
Therefore, we make the assumption\footnote{We mark all the assumptions used in our work with footnote.} that the sensor measurement only cares about the number of the pixels or cloud points.
In this paper, when using \emph{sensor measurement}, it is equivalent to the number of pixels or cloud points on the targeted objects.
The impact of sensor configuration on perception potential is reflected in the sensor measurement of a specific object. For example, if we adopt a camera with higher resolutions, it obtains more pixels in the captured image for the targeted object, which makes it more possible to detect the object. Similarly, compared to a 64-channel LiDAR, the fewer laser beams of a 16-channel LiDAR would produce more sparse and incomplete scanning results, especially at distant ranges.
Therefore, the configuration consists of two parts, sensor selections, and their mounting positions. The former relies on the type of the sensor, \textit{i.e.}, different laser beam numbers and maximum range of LiDAR, different FOV, and active resolution of the camera.
The latter can be characterized by a sensor's extrinsic parameter with respect to the ground coordinate system $\mathrm{G}$, which consists of the translation $\mathbf{t} = [t_x,t_y,t_z]^T$, and the rotation $\mathbf{R}$ represented by three Euler angles: $($\textit{roll, pitch, yaw}$)$.
For the LiDAR measurement, we need to compute the formula of each beam to determine whether it goes through the target. Hence the zenith angle $\theta$ and the azimuth angle $\phi$ of each beam, as well as its maximum range, are required. Its direction $\mathbf{v}^{\mathrm{G}}$ in the ground coordinate system $\mathrm{G}$ can be computed as
\begin{equation}
\label{eqn:1}
\mathbf{v}^{\mathrm{G}} = \mathbf{R}^{-1}\cdot [\sin\theta\cos\phi,\;\sin\theta\sin\phi,\;\cos\theta]^T .
\end{equation}
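For instance, Eq.~(\ref{eqn:1}) can be evaluated as in the following sketch (ours, for illustration; the mounting rotation is taken as the identity, and $\mathbf{R}^{-1}=\mathbf{R}^{T}$ is used for a proper rotation):

```python
import math

def beam_direction(R, theta, phi):
    """Unit direction of a laser beam in the ground frame, Eq. (1);
    R is the sensor's mounting rotation matrix, and its inverse is
    applied as the transpose (valid for proper rotations)."""
    v = [math.sin(theta) * math.cos(phi),
         math.sin(theta) * math.sin(phi),
         math.cos(theta)]
    # (R^T v)_i = sum_j R_ji v_j
    return [sum(R[j][i] * v[j] for j in range(3)) for i in range(3)]

# identity mounting rotation: a horizontal beam (theta = 90 deg, phi = 0)
# points along the ground-frame x axis, i.e. forwards
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
v_fwd = beam_direction(I3, math.pi / 2, 0.0)
```

Intersecting each such ray with the voxelized target then yields the number of cloud points, i.e. the LiDAR sensor measurement.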
For the camera measurement, we need its intrinsic matrix $\mathbf{K}$ to project a target to the image, which could be approximately estimated through the horizontal FOV~(HFOV) $r_h$ of camera lens and its active resolution $H \times W$ as follows:
\begin{equation}
\label{eqn:2}
\mathbf{K} = \begin{bmatrix}
\frac{W}{2\tan(r_h/2)} & 0 & \frac{W}{2}\\
0 & \frac{W}{2\tan(r_h/2)} & \frac{H}{2} \\
0 & 0 & 1
\end{bmatrix}.
\end{equation}
For any point $\mathbf{p}^\mathrm{G}$ in the ground coordinate system $\mathrm{G}$, its projected pixel location $\mathbf{p}^\mathrm{C}$ (in homogeneous coordinates, defined up to a scale factor) can hence be obtained as:
\begin{equation}
\label{eqn:3}
\mathbf{p}^\mathrm{C} \equiv \mathbf{K} \cdot (\mathbf{R} \cdot \mathbf{p}^\mathrm{G} + \mathbf{t}),
\end{equation}
and the pixel numbers of the target could be easily calculated subsequently.
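The camera model of Eqs.~(\ref{eqn:2})--(\ref{eqn:3}) can be sketched as follows (our illustration; identity extrinsics are an assumption made here so that the camera looks along its own $z$ axis, and the homogeneous projection is normalized by the depth component):

```python
import math

def intrinsics(hfov_rad, W, H):
    """Approximate pinhole intrinsic matrix from the horizontal FOV and
    the active resolution, following Eq. (2)."""
    f = W / (2.0 * math.tan(hfov_rad / 2.0))
    return [[f, 0.0, W / 2.0],
            [0.0, f, H / 2.0],
            [0.0, 0.0, 1.0]]

def project(K, R, t, p_g):
    """Project a ground-frame point to pixel coordinates via Eq. (3);
    the homogeneous result is divided by its depth component."""
    p_cam = [sum(R[i][j] * p_g[j] for j in range(3)) + t[i]
             for i in range(3)]
    u = [sum(K[i][j] * p_cam[j] for j in range(3)) for i in range(3)]
    return u[0] / u[2], u[1] / u[2]

# toy setting: a 90-degree HFOV, 1920x1080 camera with identity
# extrinsics; a point on the optical axis projects to the image center
K = intrinsics(math.radians(90.0), 1920, 1080)
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 0.0]
u, v = project(K, R, t, [0.0, 0.0, 10.0])
```

Projecting every visible voxel of a target in this way and counting the covered pixels gives the camera sensor measurement.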
Additionally, we simplify the problem by voxelizing each type of object into small cubes with a side length of $0.1$~m; the perception potential of each object can then be estimated from that of the voxels belonging to the object, for both LiDAR and camera, as shown in Fig.~\ref{fig:voxelize}.
For each voxel, its state is a three-dimensional vector $\mathbf{s} = [s_x,s_y,s_z]^T$ representing its position in the ground coordinate system $\mathrm{G}$. Here we ignore the rotation part, since it would barely affect the sensor measurement.
\figvoxelize
We denote the configuration of a sensor as $q$, which includes both the sensor selection and its placement. The measurement of a voxel is denoted as $m$, which is determined by the state of the voxel $\mathbf{s}$ and the sensor configuration $q$, as well as noise in the environment such as the light condition and dust. Here we assume\footnotemark{} the effect of noise on sensor measurement $m$ is very small. Hence the noise is ignored in our calculation and the sensor measurement can be modeled as follows:
\begin{equation}
m = f(\mathbf{s}, q),
\end{equation}
where $f$ is a function that outputs the accurate sensor measurement when the object state and sensor configurations are given, using \eqref{eqn:1}\eqref{eqn:2}\eqref{eqn:3}.
\subsection{Conditional Entropy Theory and Formulation}
The conditional entropy of a random variable $U$ on another random variable $V$ is a measure of the amount of uncertainty in $U$ if we have some knowledge about $V$. The conditional entropy is defined as follows using the probability distribution $p_V$ and the joint probability distribution $p_{(U,V)}$
\begin{equation}
H(U|V) = -\int_{\mathcal{V}} \int_{\mathcal{U}} p_{(U,V)}(u,v) \mathrm{ln}\left(\displaystyle\frac {p_{(U,V)}(u,v)}{p_V(v)}\right)\,du\, dv .
\end{equation}
We can further rewrite the formulation of the conditional entropy $H(U|V)$ as follows:
\begin{equation}
\label{eqn:CE}
\begin{split}
H(U|V) &= -\int_{\mathcal{V}} \int_{\mathcal{U}} p(u|v) \mathrm{ln}(p(u|v))\, du \;p(v)\,dv \\
& = E_{v\sim p_{V}} H(U|v) .
\end{split}
\end{equation}
Hence in this formulation, the conditional entropy can be interpreted as the expectation of the uncertainty of $U$ given $v$ with $v$ taken from the distribution $p_{V}$.
In other words, the smaller the conditional entropy is, the more certain we are about $U$ after observing $V$.
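A discrete analogue may help build intuition (our illustration; the paper works with continuous densities). With sums replacing the integrals, observing a variable that determines $U$ leaves zero residual entropy, while an independent observation leaves $H(U|V)=H(U)$:

```python
import math

def conditional_entropy(p_joint):
    """H(U|V) for a discrete joint distribution with
    p_joint[u][v] = p(u, v), mirroring the continuous definition."""
    p_v = [sum(row[v] for row in p_joint)
           for v in range(len(p_joint[0]))]
    H = 0.0
    for row in p_joint:
        for v, p_uv in enumerate(row):
            if p_uv > 0.0:
                H -= p_uv * math.log(p_uv / p_v[v])  # p(u|v) = p(u,v)/p(v)
    return H

# observing V pins down U exactly -> zero residual uncertainty
H_determined = conditional_entropy([[0.5, 0.0], [0.0, 0.5]])
# U independent of V -> H(U|V) = H(U) = ln 2
H_independent = conditional_entropy([[0.25, 0.25], [0.25, 0.25]])
```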
\subsection{Perception Entropy}
Our method is based on a conditional entropy framework, by considering sensor measurement $M$ and the state of a voxel $\mathbf{S}$ as random variables.
First of all, $\mathbf{S}$ is defined as the position of a voxel from the pre-defined perception space whose prior distribution is obtained from the real data.
Next, given the target configurations $q$, the conditional entropy $H(\mathbf{S}|M,q)$ of the state of the voxel $\mathbf{S}$ on sensor measurement $M$ is adopted as our metric, and we call it \textit{perception entropy}. The smaller the perception entropy is,
the more certain we are about the voxel position $\mathbf{S}$ after taking measurement $M$, and the better perception potential the sensor configuration has. Following \eqref{eqn:CE}, we have:
\begin{equation}
H(\mathbf{S}|M,q) = E_{m\sim p_{M|q}} H(\mathbf{S}|m,q).
\end{equation}
As $m$ is represented by a function $f(\mathbf{s},q)$ from our sensor measurement model, we can change the random variable of the expectation from $M$ to $\mathbf{S}$ using the law of the unconscious statistician, so the conditional entropy becomes:
\begin{equation}
H(\mathbf{S}|M,q) = E_{\mathbf{s}\sim p_\mathbf{S}} H(\mathbf{S}|m=f(\mathbf{s},q),q).
\end{equation}
To compute this conditional entropy, the prior distribution of $p_\mathbf{S}$ is required, which describes the probability density that a voxel at position $\mathbf{S}$ is occupied by an object. $p_\mathbf{S}$ should depend on the actual distribution of objects in the environment. In the presence of a large dataset, this can be directly estimated by voxelizing each type of object in the dataset, and we denote the estimated distribution of $\mathbf{S}$ for an object type $c$ by $p_{\mathbf{S}_c}$.
Moreover, the actual distribution of objects in the environment does not always match the perception requirements. For example, in tourist-attraction scenarios, perceiving objects in front of the vehicle is more important than behind it, since the vehicle mostly drives forward; alternatively, the vehicle may care more about perceiving small objects such as cones for obstacle avoidance.
Therefore, we add a weighting factor: $w(\mathbf{s},c)$, to represent the perception focus on different areas and types of objects. The final distribution $p_\mathbf{S}$ can be expressed as:
\begin{equation}
p_\mathbf{S}(\mathbf{s}) = \eta \sum_c w(\mathbf{s},c) p_{\mathbf{S}_c}(\mathbf{s}),
\end{equation}
where $c$ is the specific object type, and $\eta$ is the normalizing factor.
Finally, we uniformly sample $\mathbf{s}$ within the perception space with an interval of $0.1$ m and compute the sensor measurement $m = f(\mathbf{s},q)$ as well as the entropy of $\mathbf{S}$ given $m$ and $q$ at each position. The final perception entropy can thus be calculated with a weighted average of all entropy using distribution $p_\mathbf{S}$.
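The sampling procedure just described amounts to a weighted average over grid positions. A sketch follows; `measure` and `voxel_entropy` are hypothetical stand-ins for the sensor model $f$ and the per-voxel entropy $H(\mathbf{S}|m,q)$, which are not specified here:

```python
import math

def perception_entropy(samples, prior_weight, measure, voxel_entropy, q):
    """E_{s ~ p_S} H(S | m = f(s, q), q) over sampled voxel positions;
    prior_weight(s) plays the role of an unnormalized p_S(s)."""
    num, den = 0.0, 0.0
    for s in samples:
        w = prior_weight(s)
        if w <= 0.0:
            continue
        m = measure(s, q)                # sensor measurement at this voxel
        num += w * voxel_entropy(m, q)   # weighted entropy contribution
        den += w
    return num / den                     # normalization by the total weight

# Toy stand-ins: a 1D grid at 0.1 m spacing, a measurement that decays
# with range, and a per-voxel entropy that shrinks as m grows.
grid = [i * 0.1 for i in range(1, 801)]      # 0.1 m .. 80 m
prior = lambda s: math.exp(-s / 40.0)        # hypothetical p_S
measure = lambda s, q: q / (s * s)           # hypothetical f
voxel_h = lambda m, q: -math.log(m + 1e-9)   # hypothetical H(S|m,q)
print(perception_entropy(grid, prior, measure, voxel_h, q=100.0))
```

With these stand-ins, a "stronger" sensor (larger $q$) produces larger measurements everywhere and hence a lower perception entropy, consistent with the interpretation of the metric.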
\subsection{Gaussian approximation}
For simplicity, we model the probability distribution $p(\mathbf{S}|m,q)$ as a Gaussian distribution. The mean of this Gaussian is unbiased and lies at the true position $\mathbf{\tilde{s}} = [\tilde{s}_x,\tilde{s}_y,\tilde{s}_z]^T$ in the distribution $p_\mathbf{S}$. As for the covariance, objects on the ground plane have constant elevation, and for most perception algorithms the error of the perception results in the $z$ direction is much smaller than in the $x$ and $y$ directions, so we fix $\mathbf{S_z}$ at $\tilde{s}_z$. Furthermore, we treat the $x$ and $y$ directions independently with the same standard deviation $\sigma$, so the distribution is symmetric. The resulting distribution is a 2D Gaussian:
\begin{equation}
p((S_x,S_y)|m , q) = \mathcal{N}(\bm{\mu} = (\tilde{s}_x,\tilde{s}_y), \mathbf{\Sigma} = \sigma^2 \mathbf{I}).
\end{equation}
The entropy of this Gaussian distribution is calculated as
\begin{equation}
\label{eqn:sigma}
H(\mathbf{S}|m,q) = 2\ln(\sigma) +1+\ln(2\pi),
\end{equation}
which is only determined by the standard deviation $\sigma$.
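Equation \eqref{eqn:sigma} is the standard differential entropy of a 2D isotropic Gaussian, $\tfrac{1}{2}\ln\!\big((2\pi e)^2 |\mathbf{\Sigma}|\big)$ with $|\mathbf{\Sigma}| = \sigma^4$. A quick numerical check (an illustration, not part of the method):

```python
import math

def gaussian_entropy_2d(sigma):
    # Closed form used in the text: 2 ln(sigma) + 1 + ln(2*pi)
    return 2.0 * math.log(sigma) + 1.0 + math.log(2.0 * math.pi)

def gaussian_entropy_general(cov_det, k):
    # General k-dim Gaussian entropy: 0.5 * ln((2*pi*e)^k * |Sigma|)
    return 0.5 * math.log((2.0 * math.pi * math.e) ** k * cov_det)

sigma = 0.7
print(gaussian_entropy_2d(sigma))
print(gaussian_entropy_general(sigma ** 4, k=2))  # same value
```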
Up to this point, our discussion has remained within the framework of probability theory, where entropy represents a difference in information and is only a \emph{relative} measure of uncertainty. The next step is therefore to give the perception entropy a practical meaning that can be used to distinguish the performance of different sensor configurations. Moreover, since it is meaningless to directly compare the number of laser points with the pixel size, we also need to unify measurements from multi-modality sensors under the same metric. As LiDARs and cameras are both mainly used for 3D object detection, we can relate them through their 3D detection performance and use that performance to reflect their perception potential. Therefore, we adopt the most popular evaluation metric for 3D detection: \emph{average precision~(AP)}, to bridge the gap between measurement and perception potential. Since $\sigma$ represents the uncertainty of the target estimates, it is tightly connected to the detection performance on that target. Intuitively, the higher AP is, the smaller $\sigma$ should be. Specifically, when AP reaches its maximum of 1,
the uncertainty $\sigma$ is close to its minimum; when AP equals 0, the algorithm can never detect the target and hence $\sigma$ approaches infinity. For simplicity, we assume\footnotemark{} $\sigma$ and AP have the following relationship:
\begin{equation}
\label{eqn:i}
\sigma = \frac{1}{\text{AP}}-1.
\end{equation}
Next, we investigate the connection between measurement $m$ and AP. Naturally, $m$ has a positive correlation with AP: the more laser points and the larger pixel size the voxel has, the better the perception performance should be.
However, the exact relationship between AP and $m$ is not straightforward and can hardly be captured by pure hypothesis. For example, if we double the number of laser points on an object at near range to obtain a denser result, its AP may only increase by a small margin, as it saturates at its upper limit of 1. In terms of sensor configuration, this means the improvement in perception potential from using more and better sensors may not always justify its cost. AP also depends on the specific 3D detection algorithm, which affects the perception potential as well. In this work, we assume\footnotemark{} that the sensor configuration design is bound to the algorithm used for 3D detection, and hence the relationship between AP and $m$ can be estimated from the performance of that specific perception algorithm. We adopt PointPillars~\cite{lang2019pointpillars} for 3D LiDAR perception and RTM3D~\cite{RTM3D} for 3D monocular camera perception, evaluate AP on the KITTI moderate dataset \cite{Geiger2012CVPR} for each type of object, and meanwhile compute the measurement $m$ for each type of object given the sensor configuration used in KITTI. As the size of different types of objects varies, we further normalize $m$ by the ratio between the surface area of the object and the surface area of the voxel.
After that, we fit a curve between the measurement of the voxel $m$ and AP as shown in Fig.~\ref{fig:curve}.
As the fitted curve indicates that AP has a linear relationship with the natural logarithm of $m$ for both LiDAR and camera, we model the correlation between AP and $m$ as follows:
\begin{equation}
\label{eqn:ap}
\begin{aligned}
\text{AP} \approx a \ln (m) + b .
\end{aligned}
\end{equation}
We further assume\footnotemark{} that once fitted, this linear relationship does not change much as long as the underlying principle of the algorithm does not change dramatically, and that it is universal to the same type of sensor. For other detection algorithms, the correlation can then be established by solving a regression problem for the coefficients $(a, b)$.
Combining \eqref{eqn:sigma}\eqref{eqn:i}\eqref{eqn:ap}, we can finally compute $H(\mathbf{S}|m,q)$ for a single sensor as:
\begin{equation}
H(\mathbf{S}|m,q) = 2\ln \left(\frac{1}{a\ln(m) + b} -1\right) +1+\ln(2\pi).
\end{equation}
In practice, we clamp AP to be in the range of $[0.001,0.999]$ for numerical stability.
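The composition of \eqref{eqn:sigma}, \eqref{eqn:i}, and \eqref{eqn:ap}, with the AP clamp just described, fits in a few lines. The default coefficients below are the fitted PointPillars values quoted later in Sec.~\ref{sec:experiments}, used here purely for illustration:

```python
import math

def voxel_entropy(m, a=0.152, b=0.659):
    """H(S|m,q) for a single sensor: measurement -> AP -> sigma -> entropy."""
    ap = a * math.log(m) + b            # eqn (ap): AP ≈ a ln(m) + b
    ap = min(max(ap, 0.001), 0.999)     # clamp AP for numerical stability
    sigma = 1.0 / ap - 1.0              # eqn (i):  sigma = 1/AP - 1
    return 2.0 * math.log(sigma) + 1.0 + math.log(2.0 * math.pi)  # eqn (sigma)

# More measurement (laser points / pixel size) -> lower entropy.
for m in (2.0, 8.0, 32.0):
    print(m, voxel_entropy(m))
```

Note that the clamp makes the entropy saturate for very large measurements, mirroring the AP saturation discussed above.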
\figcurve
\subsection{Sensor Fusion}
The perception systems of AVs usually contain both cameras and LiDARs. To fuse the measurement of different sensors, state-of-the-art perception systems mainly adopt two strategies: \emph{early fusion} and \emph{late fusion}.
The former mixes data from all sensors together and can be regarded as forming a \textit{super sensor}, while the latter operates on each sensor's independent perception results. The choice of fusion strategy depends on the specific types of sensors and the principle of the perception algorithms. For different LiDARs, early fusion is preferred, as point clouds can be combined directly into a merged point cloud. For fusion between LiDARs and cameras, or between different cameras, late fusion is more reasonable, as point clouds and images are in different forms and images cannot be merged directly.
For the early fusion strategy, suppose we have $n$ sensors in total with configurations $\mathbf{q} = (q_1,q_2,...,q_n)$ and measurements $\mathbf{m} = (m_1,m_2,...,m_n)$, where $m_i = f(\mathbf{s},q_i)$. When the early fusion strategy is adopted, the sensor measurements $m_i$ are accumulated:
\begin{equation}
m_{fused} = \sum_i m_i,
\end{equation}
\textit{i.e.}, the numbers of laser points from different LiDARs are summed together, and the distribution of $\mathbf{S}$ is calculated using the accumulated measurement $m_{fused}$.
For the late fusion strategy,
we compute the product of the distributions from all sensors and normalize it into a new Gaussian distribution. Suppose the standard deviations of the Gaussian distributions of the $n$ sensors are $\bm{\sigma} = (\sigma_1,\sigma_2,...,\sigma_n)$;
then their normalized product is another Gaussian distribution with standard deviation
\begin{equation}
\small
\sigma_{fused} = \sqrt{\displaystyle\frac{1}{\sum_i \frac{1}{\sigma_i^2}}}.
\end{equation}
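Both fusion rules are simple to state in code; a sketch, assuming Gaussian per-sensor distributions as above:

```python
import math

def early_fusion(measurements):
    """Early fusion: raw measurements (e.g. laser point counts) add up."""
    return sum(measurements)

def late_fusion(sigmas):
    """Late fusion: the normalized product of n Gaussians is another
    Gaussian with 1/sigma_fused^2 = sum_i 1/sigma_i^2."""
    return math.sqrt(1.0 / sum(1.0 / s ** 2 for s in sigmas))

print(early_fusion([120, 45, 30]))  # 195 merged laser points
print(late_fusion([1.0, 1.0]))      # ≈ 0.7071: two equal sensors halve the variance
print(late_fusion([0.2, 5.0]))      # ≈ 0.1998: dominated by the better sensor
```

The last example reflects the expected behavior of late fusion: a very uncertain sensor barely changes the fused uncertainty of a confident one.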
In practice, there are multiple sensors with different modalities, and the two fusion strategies are used simultaneously in the perception algorithm. Therefore, to incorporate both strategies in our calculation, we first sum the sensor measurements for the early-fusion part and formulate each early-fused distribution, and then compute the final entropy using all distributions for the late-fusion part. These fusion strategies lead to meaningful differences between sensor configurations; details can be found in Sec. \ref{sec:experiments}.
Finally, we use this whole process to calculate the perception entropy and iteratively search for the optimal configuration with high efficiency. For each sensor selection, the sensor placements are optimized by random search using Algorithm~\ref{alg:1}, and the optimal sensor configuration is obtained by comparing the sensor selections and choosing the best one.
\alg
\section{EXPERIMENTS}
\label{sec:experiments}
The perception spatial space is set as a rectangular region, with $x \in [-80\text{ m}, 80\text{ m}]$, $y \in [-40\text{ m}, 40\text{ m}]$, and $z \in [0\text{ m}, 5\text{ m}]$. We consider five types of objects to perceive: cars, pedestrians, cyclists, trucks, and cones. For the prior distribution $p_\mathbf{S}$, we collect the per-class distribution $p_{\mathbf{S}_c}$ from our road test and the perception weights $w(c,\mathbf{s})$ is set to be 1 for all objects in all areas by default and will be adjusted for comparison. In our experiment, we also adopt PointPillars~\cite{lang2019pointpillars} for 3D LiDAR perception and RTM3D~\cite{RTM3D} for 3D monocular camera perception and the coefficients $(a,b)$ for \eqref{eqn:ap} are $(0.152,0.659)$ and $(0.055,0.155)$ respectively for those two algorithms.
The specifications of different LiDARs used in our experiments are summarized in Table~\ref{table:spec}.
For cameras, the active resolution is fixed to be $1920\times 1080$ and we will only adjust the HFOV.
In the following experiments, we only search for $t_x,t_y,t_z$, and the \textit{pitch} angle of sensor placement, while the \textit{roll} and \textit{yaw} angle are fixed to be $0^{\circ}$ for symmetry of the placement.
Our implementation is in C++ and all experiments are conducted on an Intel Core i7-8700 desktop CPU. The initial neighborhood $N$ is set to $[-1\text{ m},1\text{ m}]$ for translation and $[-30^\circ,30^\circ]$ for rotation. The final neighborhood $N_0$ is set to $[-0.01\text{ m},0.01\text{ m}]$ for translation and $[-0.3^\circ,0.3^\circ]$ for rotation. For each neighborhood, we randomly generate $1000$ sensor placements, and the decay factor $\mathrm{k}$ is set to $\frac{1}{2}$. The running time of our algorithm is less than a second for a specific configuration and the whole optimization takes around an hour, which is much faster than previous methods~\cite{mou2018optimal, liu_2019}.
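Algorithm~\ref{alg:1} itself is not reproduced here, but the neighborhood-shrinking random search just described can be sketched as follows; the objective function is a hypothetical stand-in for the perception entropy:

```python
import random

def shrinking_random_search(objective, x0, radius, radius_min,
                            n_samples=1000, decay=0.5, seed=0):
    """Sample n_samples candidates uniformly in a neighborhood of the
    current best, shrink the neighborhood radius by `decay`, and repeat
    until the radius falls below radius_min."""
    rng = random.Random(seed)
    best_x, best_f = x0, objective(x0)
    r = radius
    while r >= radius_min:
        for _ in range(n_samples):
            cand = [xi + rng.uniform(-r, r) for xi in best_x]
            f = objective(cand)
            if f < best_f:
                best_x, best_f = cand, f
        r *= decay
    return best_x, best_f

# Stand-in objective with its optimum at (0.3, -0.2):
obj = lambda x: (x[0] - 0.3) ** 2 + (x[1] + 0.2) ** 2
x, f = shrinking_random_search(obj, [0.0, 0.0], radius=1.0, radius_min=0.01)
print(x, f)
```

The shrinking neighborhood trades early exploration for late refinement, which is why the final neighborhood can be two orders of magnitude smaller than the initial one.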
\tablelidar
\figresult
\subsection{Velodyne HDL-64E v.s. Hesai Pandar64}
Here we consider two types of 64-channel LiDAR installing on a car.
The optimal installation placement for both LiDARs is listed in experiment \#1 of Table~\ref{table:result} and visualized in Fig.~\ref{fig:result}(a) and (b). The perception entropy of Pandar64 is $1.6429$, which is much smaller than HDL-64E, $2.1212$.
Compared to HDL-64E, we noticed that Pandar64 has a non-uniform vertical resolution, which is $0.167\degree$ in the middle channels.
Although the bottom laser channel of Pandar64 is blocked by the car hood, this design results in more laser beams and points on targets at farther distances; a consistent result from field testing is reported in~\cite{carballo2020libre}.
This further demonstrates that the perception entropy can accurately reflect even small differences between sensors.
\subsection{60\degree HFOV camera v.s. 120\degree HFOV camera}
A camera with $60^{\circ}$ HFOV and another one with $120^{\circ}$ HFOV are compared in this experiment. We also double the weight $w(\mathbf{s},c)$ for $|s_x|>40\text{ m}$ to focus more on perceiving distant objects. The optimal installation placement for both cameras is shown in experiment \#2 of Table~\ref{table:result} and simulated in Fig. \ref{fig:result}(c) and (d).
The result shows the perception entropy of the 60\degree HFOV camera ($2.0055$) outperforms that of the 120\degree HFOV camera ($2.0237$). Intuitively, the smaller FOV camera has a larger focal length, resulting in a higher resolution of objects which is crucial for perception from a long distance but sacrifices a lot of horizontal views at a near distance.
This also validates that the perception entropy can help balance a sensor's perception potential between \textit{seeing far} and \textit{seeing wide}.
\subsection{Early Fusion of LiDARS}
\label{subsec:ef}
A single LiDAR mounted on the roof is unrealistic for an autonomous bus, so we explore the placement of multiple LiDARs; two Pandar40 LiDARs are chosen for this experiment.
In experiment \#3 of Table~\ref{table:result}, the first set uses the default setting, while in the second set the weight $w(\mathbf{s},c)$ for $s_x>0$ is doubled to focus more on the front area; the optimal placements are shown in Fig.~\ref{fig:result}(e) and (f), respectively.
The former has two LiDARs mounted on the diagonal of the bus, which indicates that completely covering the 360\degree{} surrounding environment is the main factor for LiDAR configuration.
After we increase the weight of the front area, the second set has two LiDARs mounted on the front A-pillar of the bus.
Note that the optimal placement is not exactly symmetric to the bus center, and there is a small \textit{pitch} angle offset between these two LiDARs.
Intuitively, this not only prevents laser beams from different LiDARs from overlapping, but also lets one LiDAR's beams fill the gaps between the other's adjacent beams, contributing more laser points on the targets. This characteristic is captured by our perception entropy, as shown in Fig. \ref{fig:pitch}.
\figpitch
\subsection{LiDAR-Camera Late Fusion}
In the final part, we consider real-world settings where multiple sensors of different modalities are used. As shown in Fig. \ref{fig:result}(g), we extend the second configuration of Sec.~\ref{subsec:ef} with two cameras. The front camera has a 30\degree{} HFOV to benefit perception at far ranges, and the rear camera with a 60\degree{} HFOV perceives the areas that neither LiDAR can reach. The resulting perception entropy is much better than that of the LiDAR-only configuration.
\tableresult
\section{CONCLUSIONS}
In this paper, we propose a metric named perception entropy to calculate the perception potential of a sensor configuration and iteratively optimize the installation configurations of sensors with very high efficiency. Perception entropy could accurately reflect the difference between any two sensors or sensor configurations,
and provide the theoretical quantitative upper bound of a specific sensor configuration's perception potential.
Thanks to the thoughtful sensor fusion strategies, our method covers multiple sensors elegantly, so that the trade-off between perception ability and redundancy is well-considered and the most cost-effective configuration can be reached.
Finally, the simulation results, extensive comparisons, and analysis demonstrate the superior performance of our proposed approach.
As the perception entropy in our calculation is tied to the actual perception algorithms, if
the principle of those algorithms changes fundamentally, the definition of the sensor measurement may need to change, as well as its relationship with the algorithms' performance.
In the future, the perception entropy could incorporate the impact of environmental conditions, e.g., rains and fogs would weaken the signal of LiDAR and produce much more noise.
In addition, we could model the motion of different types of objects to better simulate the perception potential by introducing the concepts of tracking algorithms.
\bibliographystyle{IEEEtran}
2,869,038,156,302 | arxiv | \section{Introduction}\label{sec:intro}
Type Ia supernovae (SNe Ia) are the thermonuclear explosions of
white dwarf (WD) stars that are destabilized by mass accretion from a close
binary companion. They are important for a wide range of topics in astrophysics,
e.g. galactic chemical evolution \citep{Ko06, An16,Pr18}, studies of dark energy
\citep{Ri98,Pe99} and constraints on $\Lambda$CDM parameters \citep{Be14,Res14}.
Yet, basic aspects of SN Ia physics, such as
the nature of their stellar progenitors and the
triggering mechanism for the thermonuclear runaway, still remain obscure.
Most proposed scenarios for the progenitor systems of SNe Ia fall into two broad categories:
the single degenerate (SD), where the WD companion
is a nondegenerate star, and the double degenerate (DD), where the WD companion
is another WD \citep[see] [for recent reviews]{Wa12,Ma14,Liv18,Sok18,Wa18}.
In the SD scenario, the WD accretes material from its companion over a relatively
long timescale (t$ \, {\sim} \, 10^{6}$ years) and explodes
when its mass approaches the Chandrasekhar limit
M$_{\rm{Ch}} \, {\simeq} \, 1.4 \, M_{\odot}$ \citep{No84,Th86,Ha96,Ha04}.
Conversely, in most DD scenarios, the WD becomes unstable after a merger or a collision
on a dynamical timescale \citep{Ib84} and explodes with a mass
that is not necessarily close to M$_{\rm Ch}$ $\,$
\citep[e.g.,][]{Ras09,vaK10,Ku13}. In theory, distinguishing between SD
and DD systems should be feasible, given that some observational probes
are sensitive to the duration of the accretion process or to the total mass prior to the explosion
\citep[e.g.,][]{Ba07,Ba08a,Sei13a,Mrg14,Sca14,Ya15,Chom16,MR16}.
Sub-Chandrasekhar models \citep[e.g.,][]{Wo94,Si10,Wo11} are a
particular subset of both SD and DD SN Ia progenitors. To first order,
the mass of $\rm{^{56}Ni}$ synthesized, and therefore the brightness of the supernova,
is determined by the mass of the exploding WD.
A sub-M$_{\rm Ch}$\ WD cannot detonate spontaneously without some kind of external compression --
double-detonations are frequently invoked \citep[e.g.,][]{Sh13,Sh14a,Sh14b,Sh18}.
Here, a carbon-oxygen (C/O) WD accretes material from a companion
and develops a helium-rich layer that eventually becomes unstable,
ignites, and sends a shock wave into the core. This blast wave converges and
creates another shock that triggers a carbon detonation, which
explodes the WD. Violent mergers \citep[e.g.,][]{Pak12,Pak13} are an alternative
scenario where, right before the secondary WD is disrupted, carbon burning
starts on the surface of the primary WD and a detonation propagates through the
whole merger, triggering a thermonuclear runaway.
Other studies present pure detonations of sub-M$_{\rm Ch}$\ C/O
WDs with different masses without addressing the question of how they were
initiated. However, these studies are still able to reproduce many
observables such as light curves, nickel ejecta masses, and isotopic mass
ratios \citep{Si10,Pi14,Ya15,Blo17,MR17,Gol18,Sh18}.
After the light from the supernova (SN) fades away, the
ejecta expand and cool down until their density becomes comparable to that of
the ambient medium, either the interstellar medium (ISM)
or a more or less extended circumstellar medium (CSM) modified by the
SN progenitor. At this point, the supernova remnant (SNR) phase begins.
The ejecta drive a blast wave into the ambient medium
(``forward shock'', FS), and the pressure gradient
creates another wave back into the ejecta
\citep[``reverse shock'', RS;][]{McK95,Tru99}.
\begin{table*}
\footnotesize
\begin{center}
\caption{ Total yields for the sub-M$_{\rm Ch}$\ and M$_{\rm Ch}$\ progenitor models. See \citet{Br18} for details and extended yields. \label{table:progenitor_models}}
\hskip-3cm
\makebox[1 \textwidth][c]{
\begin{tabular}{ccccccccccccc}
\tableline
\noalign{\smallskip}
\tableline
\noalign{\smallskip}
Progenitor & $M_{\rm{C}}$ & $M_{\rm{O}}$ & $M_{\rm{Ne}}$ & $M_{\rm{Mg}}$ & $M_{\rm{Si}}$ & $M_{\rm{S}}$ & $M_{\rm{Ar}}$ & $M_{\rm{Ca}}$ & $M_{\rm{Cr}}$ & $M_{\rm{Mn}}$ & $M_{\rm{Fe}}$ & $M_{\rm{Ni}}$ \\
\noalign{\smallskip}
$\mathrm{}$ & $(M_{\odot})$ & $(M_{\odot})$ & $(M_{\odot})$ & $(M_{\odot})$ & $(M_{\odot})$ & $(M_{\odot})$ & $(M_{\odot})$ & $(M_{\odot})$ & $(M_{\odot})$ & $(M_{\odot})$ & $(M_{\odot})$ & $(M_{\odot})$ \\
\noalign{\smallskip}
\tableline
\noalign{\smallskip}
SCH088 & 3.95E-03 & 1.40E-01 & 2.54E-03 & 1.99E-02 & 2.79E-01 & 1.66E-01 & 3.70E-02 & 3.72E-02 & 6.90E-03 & 2.68E-03 & 1.82E-01 & 1.19E-03 \\
SCH097 & 1.62E-03 & 7.66E-02 & 8.24E-04 & 7.80E-03 & 2.09E-01 & 1.36E-01 & 3.26E-02 & 3.52E-02 & 1.12E-02 & 4.24E-03 & 4.50E-01 & 3.18E-03 \\
SCH106 & 6.91E-04 & 3.74E-02 & 2.80E-04 & 2.61E-03 & 1.38E-01 & 9.62E-02 & 2.39E-02 & 2.63E-02 & 9.11E-03 & 3.46E-03 & 7.01E-01 & 1.54E-02 \\
SCH115 & 2.75E-04 & 1.47E-02 & 8.99E-05 & 6.34E-04 & 7.66E-02 & 5.66E-02 & 1.47E-02 & 1.66E-02 & 6.31E-03 & 2.40E-03 & 9.25E-01 & 2.71E-02 \\
\noalign{\smallskip}
\tableline
\noalign{\smallskip}
DDT12 & 4.88E-03 & 1.75E-01 & 3.88E-03 & 2.64E-02 & 3.84E-01 & 2.34E-01 & 5.29E-02 & 5.32E-02 & 1.50E-02 & 7.12E-03 & 3.84E-01 & 3.15E-02 \\
DDT16 & 2.52E-03 & 1.19E-01 & 1.83E-03 & 1.55E-02 & 3.05E-01 & 1.98E-01 & 4.79E-02 & 5.20E-02 & 2.02E-02 & 8.76E-03 & 5.70E-01 & 3.16E-02 \\
DDT24 & 1.26E-03 & 7.15E-02 & 7.06E-04 & 7.26E-03 & 2.10E-01 & 1.42E-01 & 3.54E-02 & 3.98E-02 & 2.20E-02 & 1.00E-02 & 8.00E-01 & 3.23E-02 \\
DDT40 & 5.33E-04 & 3.80E-02 & 2.62E-04 & 2.88E-03 & 1.35E-01 & 9.43E-02 & 2.38E-02 & 2.66E-02 & 1.59E-02 & 7.51E-03 & 9.69E-01 & 5.03E-02 \\
\noalign{\smallskip}
\tableline
\end{tabular}
}
\end{center}
\end{table*}
\placefigure{f1}
\begin{figure*}
\centering
\includegraphics[scale=0.45]{f1a.pdf}
\hspace{1.0 cm}
\includegraphics[scale=0.45]{f1b.pdf}
\caption{Chemical composition for our SN Ia models listed in Table
\ref{table:progenitor_models}. The vertical, dashed lines
indicate the outer surface of each ejecta model. The arrows depict the
locations of the RS at 538 years for
$\rho_{\rm{amb}} \, = \, 2 \times 10^{-24} \, \rm{g \, cm^{-3}}$
(see the discussion in Section \ref{subsec:syn_spectra}).}
\label{fig:Chemprof}
\end{figure*}
The X-ray emission from young (${\sim}$ a few 1000 years) SNRs is often
dominated by strong emission lines from the shocked ejecta that can be used to
probe the nucleosynthesis of the progenitor. These thermal
(${\sim} \, 10^{7}\,$K) X-ray spectra are as
diverse as their SN progenitors, and not even remnants of similar ages are
alike. Their evolution and properties depend on various factors such as
the structure and composition of the ejecta, the energy of the explosion, and
the structure of the CSM that is left behind by the progenitor
\citep[e.g.,][]{Ba03,Ba07,Pat12,Pat17,Woo17,Woo18}.
Therefore, young SNRs offer unique insights into both the supernova explosion
and the structure of the ambient medium.
They are excellent laboratories to study the SN phenomenon
\citep[e.g.,][]{Ba05,Ba06,Ba08b,Ba10,Vi12,Lee13,Lee14,Lee15,Sla14,Pat15}.
The X-ray spectra of SNRs, unlike the optical spectra of SNe Ia, allow us to
explore these issues without having to consider the complexities of radiative
transfer \citep[e.g.,][]{Ste05,Ta11,Ash16,Wilk18}, because the plasma is at
low enough density to be optically thin to its own radiation.
It is known that M$_{\rm Ch}$\ models interacting with a uniform ambient medium can
successfully reproduce the bulk properties of SNRs, such as ionization
timescales \citep{Ba07}, Fe K$\alpha$ centroid energies,
Fe K$\alpha$ luminosities \citep{Ya14a},
and radii \citep{PatB17}. However, there has been no exploration
of the parameter space associated with the evolution of sub-M$_{\rm Ch}$\ explosion
models during the SNR stage for various dynamical ages.
Here, we develop the first model grid of sub-M$_{\rm Ch}$\ explosions in the SNR
phase. We compare the bulk spectral and dynamical
properties of M$_{\rm Ch}$\ and sub-M$_{\rm Ch}$\ models to the observed
characteristics of Galactic and Magellanic Cloud Ia SNRs.
This paper is organized as follows. In Section \ref{sec:method}, we describe
our hydrodynamical SNR models and the derivation of synthetic X-ray spectra. In
Section \ref{sec:discussion}, we compare the bulk properties predicted by
our model grid with observational data of Type Ia SNRs. Finally, in Section
\ref{sec:conclusions}, we summarize our results and outline future analyses
derived from our work.
\section{Method}\label{sec:method}
\subsection{Supernova explosion models}\label{subsec:sn_models}
We use the spherically symmetric M$_{\rm Ch}$\ and sub-M$_{\rm Ch}$\ explosion models
introduced in \citet{Ya15}, \citet{MR17} and \citet{McW18}, which are
calculated with a version of the code described in \citet{Br12}, updated
to account for an accurate coupling between hydrodynamics and nuclear
reactions \citep{Br16, Br18}.
The M$_{\rm Ch}$\ models are delayed detonations
\citep{Kh91} with a central density
$\rho_{\rm{c}} = 3 \times 10^{9} \, \rm{g \, cm^{-3} }$,
deflagration-to-detonation densities
$\rho_{\rm{DDT}}$ $\rm{[10^{7} \, g \, cm^{-3}]} =$
$1.2, 1.6, 2.4, 4.0$ and kinetic energies
$ E_{k} \, [10^{51} \, \rm{erg}] = 1.18, 1.31, 1.43, 1.49$.
They are similar to the models DDTe, DDTd, DDTb, and DDTa
($\rho_{\rm{DDT}}$ $\rm{[10^{7} \, g \, cm^{-3}]} =$
$1.3, 1.5, 2.6, 3.9$) by \citet{Ba03,Ba05,Ba06,Ba08b}. We label these
explosions as DDT12, DDT16, DDT24, and DDT40.
The sub-M$_{\rm Ch}$\ models are central
detonations of C/O WDs with a core temperature
$T_{\rm{c}} \, [\rm{K}] = 10^{8}$,
masses $ M_{\rm{WD}} \, [M_{\odot}] = 0.88, 0.97, 1.06, 1.15$,
and kinetic energies $ E_{k} \, [10^{51} \, \rm{erg}] = 0.92, 1.15, 1.33, 1.46$,
similar to the models by \citet{Si10}. We label these explosions as
SCH088, SCH097, SCH106, and SCH115. For both sets of models, the progenitor
metallicity is $Z = 0.009$ ($0.64 \, Z_{\odot}$ taking
$Z_{\odot} = 0.014$, \citealt{As09}).
We choose this value because it is close to
the metallicity $Z = 0.01$ employed by \citet{Ba03,Ba05,Ba06,Ba08b} in
their M$_{\rm Ch}$\ progenitors. The intermediate-mass elements (Si, S, Ar, Ca)
are produced in the outer region of the exploding WDs, whereas the
iron-peak elements (Cr, Mn, Fe, Ni) are synthesized in the inner layers.
Table \ref{table:progenitor_models} presents the total yields for some
representative elements in these M$_{\rm Ch}$\ and sub-M$_{\rm Ch}$\ models. Figure
\ref{fig:Chemprof} shows the chemical profiles as a function of the
enclosed mass for each model.
\subsection{Supernova remnant models}\label{subsec:snr_models}
\placefigure{f2}
\begin{figure}
\centering
\hspace{-1.5 cm}
\includegraphics[scale=0.65]{f2.pdf}
\caption{Log-normal probability distribution functions (PDFs) for
the diffuse gas in the Milky Way \citep{Ber08}. The shaded contours
represent the 2$\sigma$ regions for each PDF. The six $n_{\rm{amb}}$
values used in this work ($0.024, 0.06, 0.12, 0.60,
1.20, 3.01 \, \rm{cm^{-3}}$)
are depicted along a black, horizontal line.}
\label{fig:ISM}
\end{figure}
We study the time evolution of these SN Ia models with a self-consistent
treatment of the nonequilibrium ionization (NEI) conditions in young SNRs
performed by the cosmic ray-hydro-NEI code, hereafter \texttt{ChN}
\citep{Ell07,Pat09,Ell10,Pat10,Cas12,Lee12,Lee14,Lee15}.
\texttt{ChN}\ is a one-dimensional Lagrangian hydrodynamics code based on the multidimensional
code \texttt{VH-1} \citep[e.g.,][]{Blo93}. \texttt{ChN}\ simultaneously calculates
the thermal and nonthermal emission at the FS and RS in the expanding SNR
models. It couples hydrodynamics, NEI calculations, plasma emissivities,
time-dependent photoionization, radiative cooling, forbidden-line emission,
and diffusive shock acceleration, though we do not include diffusive shock
acceleration in our calculations. \texttt{ChN}\ is a tested, flexible code that
has successfully been used to model SNRs in several settings
\citep[e.g.][]{Sla14,Pat15}.
Young Ia SNRs are in NEI because, at the low
densities involved ($n \, {\sim} \, 1 \, \rm{cm}^{-3}$), not enough time
has elapsed since the ejecta were shocked to equilibrate the ionization and
recombination rates \citep{Itoh77,Ba10}. Consequently, these NEI plasmas are
underionized when compared to collisional ionization equilibrium plasmas
\citep{Vi12}.
The shock formation and initial plasma heating do not stem from Coulomb interactions,
but from fluctuating electric and magnetic fields in these so-called collisionless
shocks \citep[][]{Vi12}. In the ISM, the mean free paths and typical timescales
for particle-to-particle interactions are larger than the sizes and ages of SNRs
($\approx 1-10 \, \rm{pc}$, $\approx 10^{2}-10^{3} \, \rm{years}$).
The efficiency of electron heating at the shock transition,
i.e., the value of $\beta = T_{e} / T_{i}$ at the shock, is not well determined
\citep[see, e.g.,][]{Bo01}. In principle, the value of $\beta$ can range between
$\beta = \beta_{\rm{min}} = m_{e} / m_{i}$ and full equilibration ($\beta = 1$),
with partial equilibration being the most likely situation
\citep[$\beta_{\rm{min}} < \beta < 1$,][]{Bo01,Gha07,Ya14b}. Here we set
$\beta = \beta_{\rm{min}}$ for illustration purposes, even though previous studies
\citep[e.g.,][]{Ba05,Ba06,Ya14a} have shown that $\beta$ has an important effect on
the Fe K$\alpha$ luminosities. This can be critical when trying to fit an SNR spectrum
with a specific model, but here we are just interested in the bulk properties of the
models, and we defer detailed fits to future work.
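As a point of reference, the floor value adopted here, $\beta_{\rm{min}} = m_{e}/m_{i}$, is a very small number for a hydrogen plasma; a quick check (illustrative only, using rounded CODATA masses):

```python
# Electron-to-proton mass ratio: the "minimal collisionless equilibration"
# floor beta_min = m_e / m_i adopted in the text for a hydrogen plasma.
m_e = 9.10938e-28   # electron mass, g
m_p = 1.67262e-24   # proton mass, g
beta_min = m_e / m_p
print(beta_min)  # ≈ 5.446e-4
```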
We consider uniform ambient media composed of hydrogen
\citep[$\rho_{\rm{amb}} = m_{H} \, n_{\rm{amb}}$, e.g.][]{Ba03,Ba06,Ba08b,PatB17}
with a range of densities:
$\rho_{\rm{amb}} \, [10^{-24} \, \rm{g \, cm^{-3}}] = 0.04, 0.1, 0.2, 1.0, 2.0, 5.0
\equiv $ $n_{\rm{amb}} \, [\rm{cm^{-3}}] = 0.024,0.06, 0.12, 0.60, 1.20, 3.01$.
We label each SNR model from the SN model and ambient medium density, e.g.
SCH115\_0p04, SCH115\_0p1, SCH115\_0p2,
SCH115\_1p0, SCH115\_2p0, and SCH115\_5p0. We have chosen these ambient
densities to be in the same range considered by \citet{Pat15}. The three
highest densities were used in the studies by \citet{Pat12} and
\citet{Ya14a}, so we will be able to compare our results to theirs. This
makes a total of 48 SNR models that we evolve up to an expansion age of
5000 years. For each SNR model, we record a total of 30 time epochs,
starting at 105 years. The time bins are linearly spaced at young ages
and smoothly become logarithmically spaced at late ages. We also
record 30 Lagrangian profiles in linearly spaced time bins for each model.
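The correspondence between the mass and number densities quoted above follows from the pure-hydrogen assumption $\rho_{\rm{amb}} = m_{H} \, n_{\rm{amb}}$; taking $m_{H} \approx 1$ amu $\approx 1.6605 \times 10^{-24}$ g, a short sketch verifies the quoted pairs:

```python
M_H = 1.6605e-24  # g, approximately one atomic mass unit

rho_amb = [0.04e-24, 0.1e-24, 0.2e-24, 1.0e-24, 2.0e-24, 5.0e-24]  # g/cm^3
n_amb = [rho / M_H for rho in rho_amb]                              # cm^-3
print([round(n, 3) for n in n_amb])
# → [0.024, 0.06, 0.12, 0.602, 1.204, 3.011]
```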
Our choice of ambient medium densities is motivated by observations of the
ISM in the Milky Way. Interstellar gas can be found in five different phases
\citep{Fe98,Fe01}: molecular
($T_{mol} \, {\sim} \ 10-20 \ \rm{K}$, $n_{mol} \ {\sim} \ 10^{2}-10^{6} \ \rm{cm}^{-3}$),
cold neutral
($T_{cold} \, {\sim} \ 50-100 \ \rm{K}$, $n_{cold} \ {\sim} \ 20-50 \ \rm{cm}^{-3}$),
warm neutral
($T_{warm,n} \, {\sim} \ 6000-10000 \ \rm{K}$, $n_{warm,n} \ {\sim} \ 0.2-0.5 \ \rm{cm}^{-3}$),
warm ionized
($T_{warm,i} \, {\sim} \ 8000 \ \rm{K}$, $n_{warm,i} \ {\sim} \ 0.2-0.5 \ \rm{cm}^{-3}$), and
hot ionized
($T_{hot} \, {\sim} \ 10^{6} \ \rm{K}$, $n_{hot} \ {\sim} \ 0.0065 \ \rm{cm}^{-3}$).
Among these, the warm ionized phase
has the highest filling factor and therefore is the most likely environment for
Type Ia SNRs. \citet{Wol03} gives a mean value of $\left<n_{H I}\right> \, = 0.57 \ \rm{cm}^{-3}$
for the neutral hydrogen density in the Galactic disk. More
recently, \citet{Ber08} fit log-normal distributions to the diffuse gas in the
MW centered on $\left<n_{H I}\right> \, \approx 0.3 \ \rm{cm}^{-3}$ (cold and
warm ionized) and $\left<n_{H I}\right> \, \approx 0.1 \ \rm{cm}^{-3}$ (warm
ionized). We compare these distributions to our uniform density values
in Figure \ref{fig:ISM}.
\placefigure{f3}
\begin{figure}
\centering
\hspace{-1.5 cm}
\includegraphics[scale=0.37]{f3.pdf}
\caption{Time evolution of the electron temperature $T_e$, density $\rho$,
ionization timescale $\tau = n_{e} t$, average efficiency of
post-shock equilibration $T_{e} / \left<T_{i}\right>$
and average iron effective charge state
$\left<z_{\rm{Fe}}\right>$ profiles
as a function of the enclosed mass for model SCH115\_2p0.
The CD between the ejecta (thick lines)
and the ambient medium swept up by the FS (thin lines)
is depicted as a dashed, black vertical line, located at $1.15 \, M_{\odot}$.
The spatial location of the RS can be
appreciated in the navy ($\sim 0.55 \, M_{\odot}$), the crimson
($\sim 0.1 \, M_{\odot}$) and the turquoise
($\sim 0.002 \, M_{\odot}$) profiles.}
\label{fig:Hydro}
\end{figure}
Figure \ref{fig:Hydro} shows the profile time evolution for
a fiducial model, explosion progenitor SCH115 with an ambient density
$\rho_{\rm{amb}} \, = \, 2 \times 10^{-24} \, \rm{g \, cm^{-3}}$.
The profiles for 186 (navy), 518 (crimson), and 1016 (turquoise)
years show the RS propagation toward the center of the SNR. After reaching
the center, the RS bounces back and moves outwards into the previously
shocked ejecta, creating more reflected shocks when it reaches the contact
discontinuity (CD). This effect can be seen in the first and the second
panel of Figure \ref{fig:Hydro} ($T_{e}$ versus $M$, $\rho$ versus $M$)
around $M \, {\sim} \, 0.05 \, M_{\odot}$ and
$M \, {\sim} \, 20 \, M_{\odot}$ at 5000 years (brown).
$T_e$ increases with time in the inner layers after they are swept by
the RS. As the SNR expands, the density $\rho$
of the shocked ejecta and ISM decreases steadily, and therefore the
electron density $n_{e}$ diminishes with time.
In \texttt{ChN}, the unshocked plasma is assumed to be 10\% singly ionized.
The salient features in the evolution of this particular SNR model are
representative of the entire grid. The ejecta with the highest ionization
state are always found close to the CD, since they were
shocked at an earlier age and higher density. Because this is also the
densest region at all times, it has the highest emission measure and
thus will dominate the spatially integrated X-ray emission. However, since the
chemical composition of SN Ia ejecta is markedly stratified, it is often
the case that different chemical elements sample different parts of the
SNR structure, and therefore show different ionization timescales and
electron temperatures \citep[see the discussions in][]{Ba03,Ba05}. This
feature of the models is in good agreement with observations of young
SNRs \citep[e.g.][]{Ba07}.
\placefigure{f4}
\begin{figure*}
\centering
\includegraphics[scale=0.70]{f4.pdf}
\caption{Integrated RS synthetic spectra normalized to $D = 10$ kpc for the model shown in Figure
\ref{fig:Hydro} at the nearest time snapshots (see the explanation in the text).
The relevant atomic transitions are labeled. The zoomed boxes depict different energy regions:
Mg (up left), Si, S (up right), Ar, Ca (low left), and Fe (low right). The latter shows
the time evolution of the Fe K$\alpha$ centroid energy (dashed, vertical lines).}
\label{fig:Spectra_ages}
\end{figure*}
\placefigure{f5}
\begin{figure*}
\centering
\includegraphics[scale=0.70]{f5.pdf}
\caption{Integrated RS synthetic spectra normalized to $D = 10$ kpc for model SCH115,
for the four highest ambient densities ($\rho_{0p2}$, $\rho_{1p0}$, $\rho_{2p0}$, $\rho_{5p0}$)
and a fixed expansion age of 538 years.
The zoomed boxes are identical to those of Figure \ref{fig:Spectra_ages}.}
\label{fig:Spectra_densities}
\end{figure*}
\placefigure{f6}
\begin{figure*}
\centering
\includegraphics[scale=0.70]{f6.pdf}
\caption{Integrated RS synthetic spectra normalized to $D = 10$ kpc for models SCH088,
SCH097, SCH106, and SCH115 at a fixed expansion age of 538 years and a fixed ambient
density $\rho_{\rm{amb}} \, = \, 2 \times 10^{-24} \, \rm{g \, cm^{-3}}$.
The zoomed boxes are identical to those of Figure \ref{fig:Spectra_ages}.}
\label{fig:Spectra_models_SCH}
\end{figure*}
\placefigure{f7}
\begin{figure*}
\centering
\includegraphics[scale=0.70]{f7.pdf}
\caption{Integrated RS synthetic spectra normalized to $D = 10$ kpc for models DDT12,
DDT16, DDT24, and DDT40 at a fixed expansion age of 538 years and a fixed ambient
density $\rho_{\rm{amb}} \, = \, 2 \times 10^{-24} \, \rm{g \, cm^{-3}}$.
The zoomed boxes are identical to those of Figure \ref{fig:Spectra_ages}.}
\label{fig:Spectra_models_DDT}
\end{figure*}
\subsection{Synthetic spectra}\label{subsec:syn_spectra}
Our ejecta models determine the masses, chemical abundances, and initial
velocities for each mass layer. We consider 19 elements: H, He, C, N, O,
Ne, Na, Mg, Al, Si, P, S, Ar, Ca, Ti, Cr, Mn, Fe, and Ni, with a total of
297 ions. For each ion species $I$ corresponding to an element $X$,
we calculate the differential emission measure (DEM) in 51 equally log-spaced
$T_{e}$ bins between $10^{4}$ and $10^{9}$ K, normalized
to a distance of $D = 10$ kpc \citep{Ba03,Ba06}:
\begin{equation}
(\rm{DEM})_{I,X} = n_{I} \, n_{e} \times \dfrac{dV}{dT_{e}}
\times \dfrac{1}{4 \pi D [cm]^{2}} \times \dfrac{10^{-14}}{\rm{angr}(X)}
\end{equation}
where $n_{I}$ and $n_{e}$ are the ion and electron densities, $dV$ is the
volume element for each layer, $\rm{angr}(X)$ are the \texttt{XSPEC} \citep{Ar96}
default conversion factors for the solar abundances \citep{AG89} and $10^{-14}$ is
a normalization applied to the emissivities in \texttt{XSPEC}. We couple
these DEMs to the atomic emissivity code \texttt{PyAtomDB}
\citep[\texttt{AtomDB} version 3.0.9; see, e.g.,][]{Fo12,Fo14} in order to
calculate the emitted flux for each model at a given photon energy. We separate
the RS and the FS contribution and generate nonconvolved photon spectra in
10000 equally spaced bins of size 1.2 eV between 0.095 and 12.094 keV.
Thermal broadening and line splitting due to bulk motions are ignored in this
version of the synthetic spectra, but we plan to include them in future versions.
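As a schematic illustration of how the DEM in the equation above is assembled from the Lagrangian layers, consider the following sketch; the layer quantities and the $\rm{angr}(X)$ factor are placeholders, not values from our models.

```python
# Schematic sketch of the DEM binning in the equation above: each Lagrangian
# layer contributes n_I * n_e * dV to the log-spaced T_e bin it falls into,
# with the distance and abundance normalizations applied afterwards.
# Layer inputs are placeholders, not values from the actual models.
import numpy as np

D_CM = 10.0 * 3.086e21           # 10 kpc in cm
T_EDGES = np.logspace(4, 9, 52)  # edges of 51 log-spaced T_e bins, 1e4-1e9 K

def dem(n_ion, n_e, dV, T_e, angr_X):
    """Emission measure per T_e bin, normalized as in the equation above."""
    weights = n_ion * n_e * dV   # cm^-3 * cm^-3 * cm^3 = cm^-3
    em, _ = np.histogram(T_e, bins=T_EDGES, weights=weights)
    return em / (4.0 * np.pi * D_CM**2) * 1e-14 / angr_X
```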
\begin{table*}
\begin{center}
\caption{Data corresponding to the Ia SNRs in our sample. \label{table:remmants}}
\begin{tabular}{cccccccc}
\tableline
\noalign{\smallskip}
\tableline
\noalign{\smallskip}
Name
& $E_{\, \rm{Fe}_{K\alpha}}$\footnote{Centroid energies and fluxes from \cite{Ya14a}, except for G1.9+0.3 \citep{Bo13} and DEM L71 \citep{Mag16}, who report luminosities.\label{foot:Yam}}
& $F_{\, \rm{Fe}_{K\alpha}}$\footref{foot:Yam} & Distance & $L_{\, \rm{Fe}_{K\alpha}}$ & Radius\footnote{For remnants with distance uncertainties, we calculate their radii using the angular diameters listed in Table 1 from \cite{Ya14a}.} & Age & References\footnote{Representative references:
(1) \cite{Reyn99}; (2) \cite{San05}; (3) \cite{Rey07}; (4) \cite{Park13}; (5) \cite{Sa05}; (6) \cite{Lea16}; (7) \cite{Ba06}; (8) \cite{Tia11}; (9) \cite{Wi11a}; (10) \cite{Ya12a}; (11) \cite{Cas13};
(12) \cite{Ya08}; (13) \cite{Ra06}; (14) \cite{Ya12b}; (15) \cite{Gia09}; (16) \cite{Pan14}; (17) \cite{Le03}; (18) \cite{Res05}; (19) \cite{Wi14}; (20) \cite{War04}; (21) \cite{Res08}; (22) \cite{Kos10}; (23) \cite{Rey08}; (24) \cite{Bo13}; (25) \cite{Hu03}; (26) \cite{vaH03}; (27) \cite{Mag16}.} \\
\noalign{\smallskip}
& $\mathrm{eV}$ & ($\mathrm{10^{-5}\,ph\,cm^{-2}\,s^{-1}}$) & ($\mathrm{kpc}$) & ($\mathrm{10^{40}\,ph\,s^{-1}}$) & ($\mathrm{pc}$) & ($\mathrm{years}$) & \\
\noalign{\smallskip}
\tableline
\noalign{\smallskip}
Kepler & $6438 \pm 1$ & $34.6 \pm 0.2$ & $3.0-6.4$ & $91 \pm 66$ & $2.3 \pm 0.9$ & $414$ & 1, 2, 3, 4 \\
3C 397 & $6556 ^{+ 4} _{-3}$ & $13.7 \pm 0.4$ & $6.5-9.5$ & $105 \pm 39$ & $5.3 \pm 0.5$ & $1350-5300$ & 5, 6 \\
Tycho & $6431 \pm 1$ & $61.0 \pm 0.4$ & $2.5-3.0$ & $55 \pm 10$ & $3.3 \pm 0.3$ & $446$ & 7, 8 \\
RCW 86 & $6408 ^{+ 4} _{-5}$ & $14.0 \pm 0.7$ & $2.5$ & $10.5 \pm 0.5$ & $16$ & $1833$ & 9, 10, 11 \\
SN 1006 & $6429 \pm 10$ & $2.55 \pm 0.43$ & $2.2$ & $1.5 \pm 0.3$ & $10$ & $1012$ & 12 \\
G337.2$-$0.7 & $6505 ^{+ 26} _{-31}$ & $0.21 \pm 0.06$ & $2.0-9.3$ & $0.8 \pm 1.1$ & $4.9 \pm 3.2$ & $5000-7000$ & 13 \\
G344.7$-$0.1 & $6463 ^{+ 9} _{-10}$ & $4.03 \pm 0.33$ & $14$ & $95 \pm 8$ & $16$ & $3000-6000$ & 14 \\
G352.7$-$0.1 & $6443 ^{+ 8} _{-12}$ & $0.82 \pm 0.08$ & $7.5$ & $5.5 \pm 0.5$ & $6$ & ${\sim} \, 1600$ & 15, 16 \\
N103B & $6545 \pm 6$ & $2.15 \pm 0.10$ & 50\footnote{Distance to the Large Magellanic Cloud (LMC) from \cite{Piet13}.
\label{foot:LMC} \\} & $643 \pm 30$ & $3.6$ & ${\sim} \, 860$ & 17, 18, 19 \\
0509$-$67.5 & $6425 ^{+ 14} _{-15}$ & $0.32 \pm 0.04$ & 50\footref{foot:LMC} & $96 \pm 12$ & $3.6$ & ${\sim} \, 400$ & 18, 20, 21 \\
0519$-$69.0 & $6498 ^{+ 6} _{-8}$ & $0.93 \pm 0.05$ & 50\footref{foot:LMC} & $278 \pm 15$ & $4.0$ & ${\sim} \, 600$ & 18, 21, 22 \\
G1.9+0.3 & $6444$ & - & ${\sim} \, 8.5$ & 1 & ${\sim} \, 2.0$ & ${\sim} 150$ & 23, 24 \\
DEM L71 & $6494 \pm 58$ & - & 50\footref{foot:LMC} & $26 ^{+ 8} _{-9}$ & 8.6 & ${\sim} \, 4700$ & 25, 26, 27 \\
\noalign{\smallskip}
\tableline
\end{tabular}
\end{center}
\end{table*}
We generate synthetic spectra for both RS and FS convolved with the \textit{Suzaku}
spectral and ancillary responses \citep{Mit07}. We choose
\textit{Suzaku} over \textit{Chandra} or \textit{XMM--Newton} for illustration
purposes, given its superior spectral resolution around the K$\alpha$ transitions
from Fe-peak elements ($\approx 5.5-8.0$ keV). For simplicity, we do not include the
effect of interstellar absorption (relevant below $\sim$ 1 keV). In any case, most Ia
SNRs have column densities smaller than $10^{22} \, \rm{cm^{-2}}$
\citep[e.g.,][]{Le03,War04,Ba06,Rey07,Kos10,Ya14b}. All the convolved and nonconvolved
spectra are publicly available in a repository (\url{https://github.com/hector-mr}).
\placefigure{f8}
\begin{figure*}
\centering
\includegraphics[scale=0.37]{f8a.pdf}
\includegraphics[scale=0.268]{f8b.pdf}
\caption{Left: Photon, \textit{Suzaku} and \textit{XRISM} spectra for model
SCH115\_2p0 at a fixed expansion age of 538 years (Top: Reverse shock. Bottom:
Forward shock). Right: Zoomed-in reverse shock spectra around the Fe K$\alpha$
complex. The relevant atomic transitions are labeled.}
\label{fig:Sp_phSuXA}
\end{figure*}
Figure \ref{fig:Spectra_ages} shows the time evolution of the X-ray flux from the RS
for the fiducial model shown in Figure \ref{fig:Hydro}. We do not show the thermal
spectrum from the FS because it is very weak or absent in many young Type Ia SNRs,
often being replaced by nonthermal synchrotron emission \citep[e.g.,][]{War04,War05,Ca08}.
While the \texttt{ChN}\ code has the capability to model the modification of the FS dynamics
and spectrum due to particle acceleration processes \citep[e.g.,][]{Sla14},
this falls outside the scope of the present work. The thermal RS flux shown in Figure
\ref{fig:Spectra_ages} decreases with time because the ejecta density decreases steadily,
and the emission measure scales as $n_{e}^{\, 2}$. This effect usually dominates over the
steady increase in $T_{e}$ due to electron-ion collisions in the shocked plasma
(see Figure \ref{fig:Hydro}), which tends to increase the emitted flux. The centroids of
the K$\alpha$ transitions move to higher energies with time, especially for Ca, Fe, and Ni,
because those elements have a large range of charge states. For elements with lower
atomic numbers, like Si and S, the centroid energies saturate when the He-like ions become
dominant, and then the Ly$\alpha$ transitions from H-like ions begin to appear. For
this fiducial model, the spectrum at 5000 years (brown) shows a Ti K$\alpha$ feature at
$\approx 4.5$ keV.
Figure \ref{fig:Spectra_densities} shows the effect of varying the ambient medium density
on the RS spectra for the same explosion model (SCH115) at a fixed expansion age of 538
years. A higher $\rho_{\rm{amb}}$ translates into higher ejecta densities due to a slower
ejecta expansion. This yields higher fluxes and centroid energies for all transitions due
to the increased rate of ionizing collisions. As $\rho_{\rm{amb}}$ increases, the Fe L-shell
transitions dominate the flux around ${\sim} \,$1 keV. Figures \ref{fig:Spectra_models_SCH} and
\ref{fig:Spectra_models_DDT} show the RS spectra for all sub-M$_{\rm Ch}$\ and M$_{\rm Ch}$\ progenitor
models with the same $\rho_{\rm{amb}} \, \left(2 \times 10^{-24} \, \rm{g \, cm^{-3}}\right)$
and expansion age (538 years). The differences between the models are largest in the bands
dominated by the Fe L-shell and K-shell transitions. This is due to the different
distribution of Fe-peak elements in the inner ejecta region for different models. In
sub-M$_{\rm Ch}$\ models with larger masses and M$_{\rm Ch}$\ models with higher DDT transition densities,
the Fe-peak elements extend further out in Lagrangian mass coordinate (see Figure \ref{fig:Chemprof}).
This translates into very different shocked masses of each element at a given age and
ambient medium density for different explosion models, and therefore into large differences
in the X-ray spectra.
For Si and S, on the other hand, most of the ejected mass is already shocked at 538 years
in all models ($M_{\rm{shocked}} = 0.81, 0.90, 0.98, 1.06 \, M_{\odot}$ for models
SCH088\_2p0, SCH097\_2p0, SCH106\_2p0, SCH115\_2p0, and
$M_{\rm{shocked}} = 1.16, 1.18, 1.20, 1.21 \, M_{\odot}$ for models DDT12\_2p0,
DDT16\_2p0, DDT24\_2p0, DDT40\_2p0, shown in Figure \ref{fig:Chemprof}), which translates into
a smaller dynamic range of X-ray emitting masses and therefore smaller differences for the
corresponding lines in the spectra. Elements like Mg and O are also fully shocked at this age,
but their spectral blends show larger variations than those of Si and S because the dynamic
range in ejected masses is much larger (see Table \ref{table:progenitor_models}).
Our spectral models can also be convolved with the response matrices for future facilities,
like the X-Ray Imaging and Spectroscopy Mission
\citep[\textit{XRISM}, a.k.a. X-Ray Astronomy Recovery Mission, \textit{XARM},][]{Tas18} or
\textit{Athena} \citep{Na13}. The left panel of Figure \ref{fig:Sp_phSuXA} shows the RS and FS spectra for
model SCH115\_2p0 at 538 years, unconvolved (photon flux) and after convolution with both
\textit{Suzaku} and \textit{XRISM} responses. It is worth noting that \textit{XRISM} will not be
able to separate the FS and RS for the remnants in our sample.
The improved energy resolution of \textit{XRISM} reveals a wealth of transitions that
cannot be seen with \textit{Suzaku}, as shown in the
right panel of Figure \ref{fig:Sp_phSuXA}. There are two transitions at
$\approx$ 5.4 and $\approx$ 5.65 keV in both the \textit{Suzaku} and the \textit{XRISM}
synthetic spectra that do not appear in real \textit{Suzaku} observations. We defer the
investigation of this discrepancy to a future study.
The one-dimensional nature of our models deserves some comments. Multidimensional hydrodynamics
coupled with NEI calculations \citep{War13,Or16} are computationally expensive, and do not allow us to produce
extensive model grids for an exhaustive exploration of parameter space like the one we present here. The
results from \citet{War13}, who studied the impact of clumping and Rayleigh-Taylor instabilities on the
morphology and ionization state (but not the emitted spectra) of Type Ia SNRs in 3D,
do not show major deviations from one-dimensional calculations.
\section{Discussion}\label{sec:discussion}
\subsection{Type Ia SNRs: Bulk properties}\label{subsec:bulk}
Here we describe the bulk properties (expansion age, radius, Fe K$\alpha$ centroid, and
Fe K$\alpha$ luminosity) of our M$_{\rm Ch}$\ and sub-M$_{\rm Ch}$\ models and compare them with the available
observational data for Ia SNRs. We use the Fe K$\alpha$ blend because it is sensitive to the
electron temperature and ionization timescale in SNRs, with the centroid energy being a strong
function of mean charge state \citep{Vi12,Ya14a,Ya14b}. This results in a clear division between
Ia SNRs, which tend to interact with a low-density ambient medium, and core collapse (CC) SNRs,
which often evolve in the high density CSM left behind by their massive and short-lived
progenitors (first noted by \citealt{Ya14b}, see also \citealt{Pat15,PatB17}). In their analysis,
\citet{Ya14a} already found that the bulk properties of the SNRs identified as Ia in their sample
(those with Fe K$\alpha$ centroid energies below 6.55 keV) were well reproduced by the M$_{\rm Ch}$\,
uniform ambient medium models of \citet{Ba03,Ba05}. Here, we perform a more detailed comparison
to our models, which also assume a uniform ambient medium, but are based on an updated code and
atomic data, and include both M$_{\rm Ch}$\ and sub-M$_{\rm Ch}$\ progenitors. We also comment briefly on some
individual objects of interest.
We calculate the Fe K$\alpha$ centroid energy $E_{\, \rm{Fe}_{K\alpha}}$ and luminosity
$L_{\, \rm{Fe}_{K\alpha}}$ for each model as
\begin{equation}
E_{\, \rm{Fe}_{K\alpha}} = \dfrac{ \mathlarger{\int_{E_{\rm{min}}}^{E_{\rm{max}}} \left( F \times E \right) \, dE} } { \mathlarger{\int_{E_{\rm{min}}}^{E_{\rm{max}}} F \, dE} }
= \dfrac{ \mathlarger{\sum_{i}} \, F_{i} \times E_{i} \times dE_{i} } { \mathlarger{\sum_{i}} \, F_{i} \times dE_{i} }
\end{equation}
\vspace{-0.3cm}
\begin{equation}
F_{\, \rm{Fe}_{K\alpha}} = \mathlarger{\int_{E_{\rm{min}}}^{E_{\rm{max}}}} F \, dE = \mathlarger{\sum_{i}} \, F_{i} \times dE_{i}
\end{equation}
\vspace{-0.3cm}
\begin{equation}
L_{\, \rm{Fe}_{K\alpha}} = 4 \pi D[\textrm{cm}]^{2} \times \, F_{\, \rm{Fe}_{K\alpha}}
\end{equation}
where $F$ is the differential flux from the nonconvolved spectrum after continuum subtraction,
$dE_{i}$ is the constant (1.2 eV) energy step, $E_{\rm{min}} - E_{\rm{max}}$ is an energy interval
that covers the entire Fe K$\alpha$ complex (6.3 $-$ 6.9 keV), and the sums run over the energy
bins within this interval. We only compute these quantities
when the Fe K$\alpha$ emission is clearly above the continuum.
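The sums above amount to a flux-weighted mean over the uniform 1.2 eV grid; a minimal sketch (with a placeholder flux array, not actual model output) is:

```python
# Minimal sketch of the Fe K-alpha centroid and line-flux sums defined above,
# evaluated on the uniform 1.2 eV energy grid. The continuum-subtracted flux
# array F is a placeholder, not actual model output.
import numpy as np

DE = 1.2e-3  # constant energy step in keV (1.2 eV)

def fe_kalpha_centroid_flux(E, F, emin=6.3, emax=6.9):
    """Flux-weighted centroid [keV] and integrated flux over 6.3-6.9 keV."""
    m = (E >= emin) & (E <= emax)
    flux = np.sum(F[m] * DE)
    centroid = np.sum(F[m] * E[m] * DE) / flux
    return centroid, flux
```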
Table \ref{table:remmants} summarizes the relevant observational properties of the 13 Type Ia SNRs in our
sample. The data are taken from \citet{Ya14a} (\textit{Suzaku} observations). We also include the
\textit{Chandra} measurements for G1.9+0.3 \citep{Bo13} and the \textit{XMM--Newton} results for
DEM L71 \citep{Mag16}. The contours in Figures \ref{fig:LvsE}$-$\ref{fig:Rvst} show the parameter
space spanned by our models, with symbols indicating the observed properties of individual SNRs.
We display $L_{\, \rm{Fe}_{K\alpha}}$ versus $E_{\, \rm{Fe}_{K\alpha}}$ (Figure \ref{fig:LvsE}),
$E_{\, \rm{Fe}_{K\alpha}}$ versus FS radius ($R_{\rm{FS}}$, Figure \ref{fig:EvsR}),
$E_{\, \rm{Fe}_{K\alpha}}$ versus expansion age (Figure \ref{fig:Evst}),
and $R_{\rm{FS}}$ versus expansion age (Figure \ref{fig:Rvst}).
The main features of the models shown in these plots merit some comments. In Figures
\ref{fig:LvsE}$-$\ref{fig:Evst}, for the models with $\rho_{1p0}$, $\rho_{2p0}$ and
$\rho_{5p0}$, $E_{\, \rm{Fe}_{K\alpha}}$ decreases for a short time
$\approx 1000-2000$ years after the explosion instead of increasing monotonically with time.
This is due to the reheating of the shocked ejecta after the RS bounces at the SNR center.
The reshocked material becomes denser and hotter, and therefore more luminous. This results in
a lower luminosity-weighted ionization state for the shocked ejecta, which prior to RS bounce
was dominated by the dense, highly ionized material close to the CD. As time goes on and the
entire ejecta is reshocked, the material close to the CD dominates the spectrum again,
and the ionization state continues to increase monotonically.
The strength of this feature is, at least to some extent, due to the spherical symmetry of
our models, but we expect a qualitatively similar (if weaker) effect in reality. We note that,
although our model predictions are qualitatively similar to those from \citet{Ba03,Ba05,Ba06},
\citet{Ya14a} and \citet{Pat15}, there are small deviations -- for instance, we predict a
slightly higher $E_{\, \rm{Fe}_{K\alpha}}$ for the same ambient medium density and age
(${\sim}$ 6.6 keV versus ${\sim}$ 6.5 keV). This is likely due to differences in the hydrodynamic code,
atomic data, and explosion models. In addition, \citet{Pat15} stopped their calculations when
the RS first reached the center of the SNR, while we continue ours until the models reach an age
of 5000 years.
\placefigure{f9}
\begin{figure*}
\centering
\hspace{-1.1 cm}
\includegraphics[scale=0.31]{f9a.pdf}
\includegraphics[scale=0.31]{f9b.pdf}
\vspace{1 cm}
\hspace{-1.1 cm}
\includegraphics[scale=0.31]{f9c.pdf}
\includegraphics[scale=0.31]{f9d.pdf}
\caption{Left: Centroid energies and line luminosities of Fe K$\alpha$ emission from various Type Ia
SNRs in our Galaxy (circles) and the LMC (squares). The shaded regions depict the Fe
K$\alpha$ centroids and luminosities predicted by our theoretical sub-M$_{\rm Ch}$\ and M$_{\rm Ch}$\ models
with various uniform ISM densities (SCH088: gray; SCH097: magenta; SCH106: orange;
SCH115: blue; DDT12: pink; DDT16: green; DDT24: light brown; DDT40: purple). Right:
Individual tracks for each model. The
$L_{\, \rm{Fe}_{K\alpha}}$ $-$ $E_{\, \rm{Fe}_{K\alpha}}$ tracks corresponding to the
two lowest ambient densities ($\rho_{0p04}$, $\rho_{0p1}$) do not appear in the plots
because their $L_{\, \rm{Fe}_{K\alpha}}$ values are too low to appear on these scales.}
\label{fig:LvsE}
\end{figure*}
\placefigure{f10}
\begin{figure*}
\centering
\hspace{-1.1 cm}
\includegraphics[scale=0.31]{f10a.pdf}
\includegraphics[scale=0.31]{f10b.pdf}
\vspace{1 cm}
\hspace{-1.1 cm}
\includegraphics[scale=0.31]{f10c.pdf}
\includegraphics[scale=0.31]{f10d.pdf}
\caption{Fe K$\alpha$ centroid energy versus forward shock radius for the Type Ia SNRs
in our sample. The shaded regions correspond to the models shown in Figure
\ref{fig:LvsE}.}
\label{fig:EvsR}
\end{figure*}
\placefigure{f11}
\begin{figure*}
\centering
\hspace{-1.1 cm}
\includegraphics[scale=0.31]{f11a.pdf}
\includegraphics[scale=0.31]{f11b.pdf}
\vspace{1 cm}
\hspace{-1.1 cm}
\includegraphics[scale=0.31]{f11c.pdf}
\includegraphics[scale=0.31]{f11d.pdf}
\caption{Fe K$\alpha$ centroid energy versus expansion age for the Type Ia SNRs in
our sample. The shaded regions correspond to the models shown in Figures \ref{fig:LvsE}
and \ref{fig:EvsR}.}
\label{fig:Evst}
\end{figure*}
\placefigure{f12}
\begin{figure*}
\centering
\hspace{-1.1 cm}
\includegraphics[scale=0.31]{f12a.pdf}
\includegraphics[scale=0.31]{f12b.pdf}
\vspace{1 cm}
\hspace{-1.1 cm}
\includegraphics[scale=0.31]{f12c.pdf}
\includegraphics[scale=0.31]{f12d.pdf}
\caption{Forward shock radius versus expansion age for the Type Ia SNRs in our sample. The shaded
regions correspond to the models shown in Figures \ref{fig:LvsE}, \ref{fig:EvsR}
and \ref{fig:Evst}.}
\label{fig:Rvst}
\end{figure*}
Figures \ref{fig:LvsE}$-$\ref{fig:Rvst} show that the parameter space covered by
our spherically symmetric, uniform ambient medium models is in good agreement with
the observed data. While there are exceptions, which we discuss in detail below,
it is clear that our models are a good first approximation to interpret the bulk
dynamics of real Type Ia SNRs, and can be used to infer their fundamental physical
properties. For example, denser ambient media and more energetic progenitor models
predict higher $E_{\, \rm{Fe}_{K\alpha}}$ and $L_{\, \rm{Fe}_{K\alpha}}$ at a given
expansion age, as seen in Figure \ref{fig:LvsE}. Thus, the SNRs with the highest
$L_{\, \rm{Fe}_{K\alpha}}$, like 0519$-$69.0 and 0509$-$67.5, are only compatible
with the brightest, most Fe-rich progenitor models (SCH106, SCH115, DDT16, and DDT24).
The Fe K$\alpha$ emission from SNR N103B, in particular, can only be reproduced by
model DDT40 at the highest ambient medium density. As shown in Figures \ref{fig:EvsR}
and \ref{fig:Rvst}, $R_{\rm{FS}}$ has a weak dependence on the ejecta mass,
but it is quite sensitive to the ambient density because
$R_{\rm{FS}} \propto M^{1/3} \rho^{-1/3}$ \citep{McK95}. Therefore, objects
surrounded by low-density media (e.g. RCW 86, SN 1006, and G344.7$-$0.1) clearly
stand apart from those evolving in high density media (e.g. 3C 397, N103B, and
Kepler): the former have large $R_{\rm{FS}}$ and low $E_{\, \rm{Fe}_{K\alpha}}$
centroids, while the latter have small $R_{\rm{FS}}$ and high $E_{\, \rm{Fe}_{K\alpha}}$.
We note that the ages of these remnants differ from one another.
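The $R_{\rm{FS}} \propto M^{1/3} \rho^{-1/3}$ scaling quoted above can be used to compare characteristic sizes across the density grid; a minimal sketch (dropping the proportionality constant, which cancels in ratios) is:

```python
# Minimal sketch of the characteristic-radius scaling quoted above,
# R_FS ~ (M_ej / rho_amb)^(1/3); the proportionality constant cancels
# when comparing two remnants, so only ratios are computed here.
def radius_ratio(M1, rho1, M2, rho2):
    """R_FS,1 / R_FS,2 under the M^(1/3) * rho^(-1/3) scaling."""
    return (M1 / M2) ** (1.0 / 3.0) * (rho2 / rho1) ** (1.0 / 3.0)

# For identical ejecta masses, a medium 125x less dense gives a
# characteristic radius 5x larger:
ratio = radius_ratio(1.4, 0.04, 1.4, 5.0)
```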
In general, the densities we infer from simple comparisons to our models are in good
agreement with detailed studies of individual objects. For instance, \citet{So14}
and \cite{Wi14} determined $n_{\rm{amb}} \gtrsim 2.0 \, \rm{cm^{-3}}$ for N103B, and
\citet{Lea16} found $n_{\rm{amb}} \ {\sim} \ 2-5 \, \rm{cm}^{-3}$ for 3C 397, which
are close to the highest value of $\rho_{\rm{amb}}$ in our grid
($n_{\rm{amb}} = 3.01 \, \rm{cm^{-3}}$).
For all the observables shown in Figures \ref{fig:LvsE}$-$\ref{fig:Rvst}, the main
sources of variation in the models are the ambient density and the expansion age.
This implies that the details of the energetics and chemical composition in the
supernova model, and in particular whether the progenitor was M$_{\rm Ch}$\ or sub-M$_{\rm Ch}$,
are not the main drivers for the bulk dynamics of Type Ia SNRs. This does not imply
that our SNR models do not have the power to discriminate Type Ia SN explosion
properties -- detailed fits to the X-ray spectra of individual objects have shown
that they can do this very well \citep[e.g.,][]{Ba06,Ba08a,Pat12}. However, the bulk
SNR properties \textit{on their own} are not very sensitive to the explosion
properties, especially for objects whose expansion ages or distances are not well
determined. To discriminate explosion properties, additional information needs to
be taken into account, like specific line flux ratios
(e.g. $\rm{Si \,\, K\alpha \, / \, Fe \,\, K\alpha}$,
$\rm{S \,\, K\alpha \, / \, Fe \,\, K\alpha}$, and
$\rm{Ar \,\, K\alpha \, / \, Fe \,\, K\alpha}$), which can distinguish M$_{\rm Ch}$\
from sub-M$_{\rm Ch}$\ progenitors, or even better, detailed fits
to the entire X-ray spectrum, which can reveal a wealth of information about the
explosion \citep[e.g.,][]{Ba06,Ba08a,Pat12}. We defer these applications of our models
to future work.
To evaluate the degree to which a particular model works well for a given SNR, it is
important to examine \textit{all} its bulk properties at the same time. By doing this,
we can single out individual objects whose bulk dynamics cannot be reproduced by our
models, modulo any uncertainties in the expansion age and distance. Not surprisingly,
the SNR that shows the largest deviation from our models is RCW 86. This remnant is
known to be expanding into a low-density cavity, presumably excavated by a fast,
sustained outflow from the SN progenitor \citep{Ba07,Wi11a,Bro14}, and therefore its
$R_{\rm{FS}}$ is too large for its expansion age and $E_{\, \rm{Fe}_{K\alpha}}$.
In addition, its classification as a Type Ia SNR is still under debate \citep{Gva17}.
The Galactic SNR G344.7$-$0.1 also shows a similar deviation,
albeit less strong, but this might be related to an overestimated distance and $R_{\rm{FS}}$
\citep[][and references therein]{Ya12b}.
Among the objects interacting with low-density
media, the size of SN 1006 is compatible with our lowest-density models, which agrees with
the value $n_{\rm{amb}} \ {\sim} \ 0.03 \, \rm{cm}^{-3}$ found by \citet{Ya08}, and its
$E_{\, \rm{Fe}_{K\alpha}}$ and $L_{\, \rm{Fe}_{K\alpha}}$ are within the parameter space
covered by the models. We examine the case of SN 1006 in more detail in Section
\ref{subsec:historical}. Among the objects interacting with high density media, 3C 397
and N103B have $E_{\, \rm{Fe}_{K\alpha}}$ values that are too high for their physical
sizes and expansion ages. This has been pointed out by \citet{PatB17}, and could be due
to some sort of interaction with dense material, possibly (but not necessarily) a CSM
modified by the SN progenitor \citep{Sa05,Wi14,Li17}. Remarkably, the bulk dynamics of the
Kepler SNR, which is often invoked as an example of CSM interaction in Type Ia SNRs
\citep[e.g.,][]{Rey07,Chi12,Bu13} are compatible with a uniform ambient medium interaction,
although a detailed spectral analysis suggests the presence of a small cavity
around its progenitor system \citep{Pat12}. Finally, the
Galactic SNR G337.2$-$0.7 appears to be underluminous for its relatively high
$E_{\, \rm{Fe}_{K\alpha}}$, but this could be due to the large uncertainty in its distance
\citep{Ra06}.
\placefigure{f13}
\begin{figure*}
\centering
\includegraphics[scale=0.55]{f13.pdf}
\caption{Fe K$\alpha$ luminosity, radius and expansion age as a function of the
Fe K$\alpha$ centroid energy for Ia (red) and CC (blue) SNRs
\citep[][and references therein]{Lov11,Vo11,Park12,Tia14,Ya14a}.
For a more updated sample and further discussion, see \citet{Mag17}.
The shaded regions depict the predictions from our theoretical
M$_{\rm Ch}$\ (khaki) and sub-M$_{\rm Ch}$\ (dark orange) models with uniform ISM densities. }
\label{fig:Contours_Ia}
\end{figure*}
\placefigure{f14}
\begin{figure*}
\centering
\includegraphics[scale=0.55]{f14.pdf}
\caption{Fe K$\alpha$ luminosity, radius and expansion age as a function of the
Fe K$\alpha$ centroid energy for G1.9+0.3, 0509$-$67.5, Kepler, Tycho, and SN 1006.
The shaded regions depict the predictions from our theoretical
M$_{\rm Ch}$\ and sub-M$_{\rm Ch}$\ models with uniform ISM densities for different expansion ages:
150 (black), 416$-$444 (light coral), and 1012 (blue) years. }
\label{fig:Historical_contours_Ia}
\end{figure*}
We summarize our comparisons between models and data in Figure \ref{fig:Contours_Ia},
which shows $L_{\, \rm{Fe}_{K\alpha}}$, $R_{\rm{FS}}$ and expansion age for our M$_{\rm Ch}$\
and sub-M$_{\rm Ch}$\ models and for the SNRs as a function of $E_{\, \rm{Fe}_{K\alpha}}$, the only
property that can be determined from the observations alone. We re-emphasize that our
uniform ambient medium, spherically symmetric models can reproduce the bulk dynamics
of most Type Ia SNRs quite well. This suggests that, unlike CC SN progenitors,
most Type Ia SN progenitors do not
strongly modify their circumstellar environments, as previously noted by
\citet{Ba07}, \citet{Ya14a}, \citet{PatB17}, and other authors.
This conclusion is in good agreement with
the (hitherto unsuccessful) attempts to detect prompt X-ray and radio emission from
extragalactic Type Ia SNe \citep{Mrg14,Chom16}, but we note that SNR studies probe spatial
and temporal scales \citep[${\sim}$ pc and ${\sim} \, 10^{5}$ years,][]{PatB17} that are more
relevant for the pre-SN evolution of Type Ia progenitor models. In this sense, the lack
of a strongly modified CSM sets Type Ia SNRs clearly apart from CC SNRs \citep{Ya14a},
which we also include in Figure \ref{fig:Contours_Ia} for comparison. The only two SNRs
with well-determined properties that are clearly incompatible with our uniform ambient
medium models are RCW 86 and N103B. These SNRs are probably expanding into some sort
of modified CSM. In the case of RCW 86, the modification is very strong, and clearly
due to the formation of a large cavity by the progenitor. In the case of N103B (and
perhaps also 3C 397), the modification could be due to some dense material left behind
by the progenitor, but detailed models with nonuniform ambient media are required to
verify or rule out this claim. In any case, it is clear from Figure \ref{fig:Contours_Ia}
that the modification of the CSM by the progenitor in N103B must be much weaker than what
is seen around typical CC SNRs.
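In practice, the comparison in Figure \ref{fig:Contours_Ia} amounts to asking whether each remnant's measured Fe K$\alpha$ centroid and luminosity fall within the envelope spanned by the model grids. A minimal Python sketch of such a membership test follows; the numbers below are placeholders chosen for illustration, not actual outputs of our models.

```python
import numpy as np

# Placeholder Fe K-alpha centroid energies (keV) and luminosities (erg/s)
# standing in for a grid of uniform-ISM models; NOT actual model outputs.
model_E = np.array([6.40, 6.43, 6.47, 6.52, 6.55])
model_L = np.array([1e40, 3e40, 8e40, 2e41, 4e41])

def consistent_with_grid(E_obs, L_obs, tol_E=0.02, tol_logL=0.5):
    """Crude envelope test: does the observed (centroid, luminosity) pair lie
    within the span of the model grid, allowing small tolerances?"""
    in_E = (model_E.min() - tol_E) <= E_obs <= (model_E.max() + tol_E)
    logL = np.log10(L_obs)
    in_L = (np.log10(model_L.min()) - tol_logL) <= logL \
           <= (np.log10(model_L.max()) + tol_logL)
    return bool(in_E and in_L)

ok = consistent_with_grid(6.45, 5e40)       # point inside the envelope
outlier = consistent_with_grid(6.90, 5e40)  # centroid far above the grid
```

A remnant expanding into a strongly modified CSM would fail such an envelope test for any uniform-density grid, which is what motivates dedicated CSM-interaction models.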
\subsection{Type Ia SNRs: Remnants with well-determined expansion ages}\label{subsec:historical}
A reduced subset of Type Ia SNRs have well-determined ages, either because they are
associated with historical SNe (Kepler, Tycho, and SN 1006 have ages of 414, 446, and
1012 years, respectively), because they have well-observed light echoes \citep[0509$-$67.5
has an age of ${\sim} \,$ 400 years,][]{Res08}, or because their dynamics put very strong
constraints on their age
\citep[G1.9+0.3 has an age of ${\sim} \,$ 150 years,][]{Rey08,Car11,DeH14,Sar17b}.
These objects are particularly valuable benchmarks for our models, because their known
ages remove an important source of uncertainty in the interpretation of their bulk
dynamics.
We perform more detailed comparisons for this set of objects by taking our
models at 150 years (G1.9+0.3), 416$-$444 years
(0509$-$67.5, Kepler, and Tycho) and 1012 years (SN 1006). Figure
\ref{fig:Historical_contours_Ia} shows the same quantities as Figure \ref{fig:Contours_Ia},
but here we display the parameter space covered by our M$_{\rm Ch}$\ and sub-M$_{\rm Ch}$\ models at all
densities for each of the three age ranges mentioned above. The models at 416$-$444 years
can reproduce the observed properties of Kepler, Tycho, and 0509$-$67.5 quite well, even
with the added constraints from the known expansion ages, but we stress that detailed fits
to the entire X-ray spectra might reveal additional information (see \citealt{Pat12} for
Kepler, \citealt{Sla14} for Tycho).
In any case, we can say that the bulk dynamics of these three objects
disfavor variations from a uniform medium interaction as large as those
seen in typical CC SNRs. We note
that we have made no attempt to quantify the extent of the deviation from a uniform
ambient medium that could be accommodated while still yielding results that are consistent
with the observations, as it is beyond the scope of the present work.
For SN 1006, $R_{\rm{FS}}$, $E_{\, \rm{Fe}_{K\alpha}}$, and $L_{\, \rm{Fe}_{K\alpha}}$
are well reproduced by our models at 1012 years, although, given its surrounding ambient
density and physical size, $E_{\, \rm{Fe}_{K\alpha}}$ is larger than can be
explained by a uniform ambient medium interaction. For G1.9+0.3, $R_{\rm{FS}}$ and
$L_{\, \rm{Fe}_{K\alpha}}$ are close to the values predicted by our models at 150 years,
but $E_{\, \rm{Fe}_{K\alpha}}$ is too high to be reconciled with a uniform ambient medium
interaction. In both cases, the bulk properties of the SNRs might indicate an early interaction
with some sort of modified CSM. For SN 1006, this might be a low-density cavity, perhaps smaller
in size than the SNR. For G1.9+0.3, a thin, dense shell that changed the ionization state without
strongly affecting the dynamics might have been involved, as suggested by \cite{Ch16}. In both
cases, a detailed exploration of the parameter space for CSM interaction in Type Ia SNRs is
required to confirm or rule out specific scenarios.
\section{Conclusions}\label{sec:conclusions}
We have presented a new grid of one-dimensional models for young SNRs arising from the
interaction between Type Ia explosions with different M$_{\rm Ch}$\ and sub-M$_{\rm Ch}$\ progenitors
and a uniform ambient medium. We have generated synthetic X-ray spectra for each model
at different expansion ages, separating the reverse and forward shock contributions.
Our model spectra are publicly available, and can easily be convolved with the spectral
responses of current and future X-ray missions like \textit{Chandra}, \textit{XRISM},
and \textit{Athena}. We have studied the bulk spectral and dynamical properties of our
models (Fe K$\alpha$ centroid energies and luminosities, radii, and expansion ages), and
have found that they provide an excellent match to the observations of most known Type Ia
SNRs, indicating that the majority of SN Ia progenitors do not seem to substantially modify
their surroundings on scales of a few parsecs, at least in comparison with CC SN progenitors.
In our models, the ambient medium density and expansion
age are the main contributors to the diversity of the bulk SNR properties, but detailed fits
to X-ray spectra can discriminate progenitor properties. We have also identified a few
objects that cannot be easily reproduced by SNR models with a uniform ambient medium
interaction, notably RCW 86, which is known to be a cavity explosion, and N103B, which is
probably interacting with dense material of some sort. A detailed exploration of the
parameter space for CSM interaction in Type Ia SNRs is required to gain further insight from
these objects.
\acknowledgments Support for this work has been provided by the Chandra Theory award TM8-19004X.
H.M.-R., C.B., and S.P. are funded by the NASA ADAP grant NNX15AM03G S01.
H.M.-R. also acknowledges support from a PITT PACC and a Zaccheus Daniel Predoctoral
Fellowship. D.J.P. acknowledges support from the Chandra Theory Program NASA/TM6-17003X and the
NASA contract NAS8-03060. S.-H.L. is supported by the Kyoto University Foundation
(grant No. 203180500017). E.B. acknowledges funding from the MINECO-FEDER grant AYA2015-63588-P.
The authors wish to thank the Lorentz Center and the organizers and participants
of the workshop ``Observational Signatures of Type Ia Supernova Progenitors (III)'' for
stimulating discussions that helped finish this work. We also thank Karin Sandstrom and
Rachel Bezanson for assistance with references regarding the Galactic hydrogen density probability
distribution function. This research has made extensive use of NASA's Astrophysics Data
System (ADS, \url{http://adswww.harvard.edu/}).
\software{\texttt{ChN}\ \citep{Ell07,Pat09,Ell10,Pat10,Cas12,Lee12,Lee13,Lee14,Lee15}, \texttt{Matplotlib}
\citep{Hun07}, \texttt{IPython} \citep{PeG07}, \texttt{Numpy} \citep{vaW11}, \texttt{AtomDB}
\citep{Fo12, Fo14}, \texttt{PyAtomDB} (\url{http://atomdb.readthedocs.io/en/master/}),
\texttt{Astropy} \citep{Astro13,Astro18}, \texttt{Python} (\url{https://www.python.org/}),
\texttt{SciPy} (\url{https://www.scipy.org/}).}
\section{Introduction}
\label{sect:intro}
In the field of wave-based elastography, shear waves are used to characterize biomechanical properties of tissues \cite{Parker_2010}. While isotropy is a common assumption, many tissues (e.g. muscle, heart, tendon, kidney, and possibly the brain) have an underlying principal direction of structure. Such a principal direction is also known as the axis-of-symmetry in a transverse isotropic model of elasticity in solids \cite{Feng_2013,LEVINSON_1987}, or the crystal/optic axis for electromagnetic wave propagation in anisotropic crystals \cite{Yariv:book,Aleman2016}. Therefore, the study of anisotropy of tissues in elastography is important and continues to be an emerging field.
Historically, contributions in the measurement of tissue anisotropy have been made for transient mechanical wave propagation in ultrasound elastography (USE) \cite{Gennisson_2010, Royer_2011, Wang_2013}, magnetic resonance elastography (MRE) \cite{Schmidt_2016, Chatelin_2016}, and optical coherence elastography (OCE) \cite{Singh_2016, Singh_2019}. Recently, developments in reverberant elastography have been conducted in USE \cite{Parker_2017,Ormachea_2018,Ormachea_2019} and OCE \cite{Zvietcovich_2019}. A reverberant shear wave (RSW) field is a limiting case of a statistically uniform distribution of plane shear waves propagating in all directions within a 3D elastic medium. Although reverberant elastography has been proven to be very effective in the biomechanical characterization of tissues with complex boundary conditions \cite{Zvietcovich_2019}, and highly attenuating media \cite{Ormachea_2019}, the theoretical derivation still relies on the assumption of an isotropic media.
In this paper, we present, for the first time, closed-form solutions to the case of RSW in anisotropic media using key concepts in the analysis of anisotropic crystals with electromagnetic waves. We derive analytical expressions to the complex autocorrelation of RSW fields in materials exhibiting a transverse isotropic model of elasticity for variable directions of: (1) the material's axis-of-symmetry, (2) the motion measurement vector direction (sensor), and (3) the complex autocorrelation function. Moreover, we develop a general solution for the isotropic model which includes the previous specific solutions derived in \cite{Zvietcovich_2019}. Analytical results are compared with finite element simulations for further validation. Finally, experimental results in chicken tibialis muscle are conducted using an optical coherence tomography (OCT) acquisition system for the characterization of degree of anisotropy using RSW fields and the proposed analytical solutions.
A different approach to random waves in media is passive elastography \cite{Catheline_2008,Brum_2008,Gallot_2011,Benech_2013}, also known as time-reversal elastography. This is a fundamentally separate method: the autocorrelation used in RSW is a complex autocorrelation in both time and space derived from the limiting case of a distribution of waves across all directions, rather than a real autocorrelation only in time. In neither case, passive or RSW elastography, has anisotropy been considered before.
The organization of this paper is as follows. In Section 2, we recall the theory behind electromagnetic waves in anisotropic media and its direct extension to mechanical shear waves in the reverberant case. In Section 3, the different combinations of shear wave polarizations, the material's axis-of-symmetry, and sensor directivity are examined, leading to a general treatment of the complex autocorrelation of RSW fields and the estimators that can characterize the anisotropy of tissues. In Section 4, numerical simulation results using finite elements are compared to the analytical equations for validation. In Section 5, OCE experiments are conducted in \emph{ex vivo} chicken muscle samples for the characterization of the shear modulus along the plane-of-isotropy (in-plane) and in the transverse plane parallel to the axis-of-symmetry (out-of-plane), assuming a transverse isotropic elastic model. Finally, in Section 6, we summarize the contributions of this paper to the field of reverberant elastography and, more generally, the elastography of anisotropic tissues.
\section{Electromagnetic waves in anisotropic media}
\subsection{Introduction}
The behavior and propagation of electromagnetic waves, as well as mechanical waves, differs strongly from isotropic to anisotropic materials. In isotropic media, the wave encounters the same response from the material, no matter its propagation and polarization (oscillation or perturbation) directions, resulting in a homogeneous and singular speed of propagation. However, in anisotropic media the response will depend on the direction of the perturbation, which is linked to the propagation direction in the case of shear waves. Hence, the propagation speed or effective optical index perceived by the wave will vary within a range depending on its characteristics.
The treatment of light in anisotropic crystals has long been a subject of interest, and modern theories include a formal dielectric tensor and an ellipsoid of wave normals \cite{Jenkins:book,Born:book,Yariv:book,Aleman2016}. In such crystals, a given plane transversal wave can be decomposed in two eigenmodes of propagation, generally called in uniaxial materials \textit{ordinary} and \textit{extraordinary}. These have orthogonal polarization states, however not necessarily the same speed of propagation. We use plane waves since any field can be expressed using plane-wave decomposition and because they are compatible with the reverberant studies done previously \cite{Parker_2017,Ormachea_2018,Ormachea_2019,Zvietcovich_2019}. The following approach concerns electromagnetic waves, and it will be extended directly to mechanical shear waves in Section 3.
\subsection{Theory}
We will assume a homogeneous and non-magnetic (or at least magnetically isotropic) medium without free charges or currents. Given these assumptions, we can focus only on the electric field $\mathbf{E}=\mathbf{E_0}e^{i(\mathbf{k}\cdot \mathbf{r}-\omega t)}$,
where $\mathbf{k}$ is the wave vector, $\mathbf{r}$ is the 3D position vector, and $\omega$ its frequency. For light, we have that $\mathbf{k}=\frac{\omega n_{\text{eff}}}{c}\mathbf{\hat{g}}$, in which $c$ is the speed of light in vacuum, $n_{\text{eff}}$ is the effective refractive index perceived by the wave inside the medium, and $\mathbf{\hat{g}}$ is the unitary wave vector direction. Besides Maxwell's equations, the constitutive relations describe how a medium responds to electromagnetic fields; in our case, the electric field produces an electric displacement field inside the material of $\mathbf{D}=\boldsymbol{\epsilon}\cdot\mathbf{E}$,
where $\boldsymbol{\epsilon}$ is the dielectric tensor, a second order tensor. As opposed to isotropic materials, $\mathbf{D}\nparallel\mathbf{E}$, which leads to a \textit{walk-off angle}\cite{Born:book,Yariv:book} between the Poynting vector (energy propagation direction) and the wave vector (phase acquisition direction).
In general there exists a coordinate system such that the dielectric tensor becomes a diagonal matrix where the entries are the principal dielectric responses:
\begin{equation}
\boldsymbol{\epsilon}=\begin{pmatrix}
\epsilon_x &0 &0 \\
0& \epsilon_y & 0 \\
0& 0& \epsilon_z
\end{pmatrix}.
\end{equation}
Another expression for non-magnetic media is given in terms of the principal optical indices and the vacuum electric permittivity, $\epsilon_0$, using $n_i^2=\epsilon_i/\epsilon_0$, so the tensor becomes $\boldsymbol{\epsilon}=\epsilon_0\boldsymbol{n}$, where $\boldsymbol{n}$ is the diagonal tensor whose entries are the squared principal optical indices $n_i^2$.
Both the principal directions and their values depend on the structure of the medium, so dielectric materials can be grouped into three categories:
\begin{itemize}
\item \textit{Isotropic,} where $\epsilon_x=\epsilon_y=\epsilon_z$, so $\boldsymbol{\epsilon}$ can be reduced to a scalar and $\mathbf{D}\parallel\mathbf{E}$.
\item \textit{Uniaxial birefringent}, where two principal dielectric responses are equal, meaning that there is a plane in which all the directions of perturbation are equivalent. In the literature, the repeated response corresponds to the ordinary index $n_o$, and the remaining one to the extraordinary index $n_e$. For these materials there is one unique propagation direction in which the optical index is independent of polarization, hence the name uniaxial. This direction is referred to as the crystal axis direction\cite{Aleman2016} (we refrain from the term \textit{optic axis}\cite{Yariv:book} to avoid confusion with the system's \textit{optical axis}), which we indicate as $\mathbf{A}$. For example, if the crystal axis were in the $\mathbf{z}$ direction, then the coefficients would be $\epsilon_x=\epsilon_y=\epsilon_0n_o^2$ and $\epsilon_z=\epsilon_0n_e^2$.
\item \textit{Biaxial birefringent}, where $\epsilon_x\neq\epsilon_y\neq \epsilon_z$.
Here there are two directions of propagation in which the optical index is independent of the polarization, hence the name \textit{biaxial}.
\end{itemize}
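For the uniaxial case, a convenient coordinate-free form of the tensor is $\boldsymbol{\epsilon}=\epsilon_0\left[n_o^2\,\mathbb{I}+(n_e^2-n_o^2)\,\mathbf{\hat{A}}\mathbf{\hat{A}}^{T}\right]$. The Python sketch below (with arbitrary illustrative indices) builds this tensor for any axis orientation and checks that its eigenvalues remain $\epsilon_0 n_o^2$ (twice) and $\epsilon_0 n_e^2$ regardless of the tilt.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity (SI units)

def uniaxial_tensor(n_o, n_e, axis):
    """Dielectric tensor of a uniaxial medium with crystal axis `axis`:
    eps = eps0 [ n_o^2 I + (n_e^2 - n_o^2) A A^T ], valid in any frame."""
    A = np.asarray(axis, float)
    A = A / np.linalg.norm(A)
    return EPS0 * (n_o**2 * np.eye(3) + (n_e**2 - n_o**2) * np.outer(A, A))

n_o, n_e = 1.5, 1.7  # illustrative ordinary/extraordinary indices
# Axis along z: recovers the diagonal form of Eq. (1).
eps_z = uniaxial_tensor(n_o, n_e, [0.0, 0.0, 1.0])
# Tilted axis: no longer diagonal, but the eigenvalues are unchanged.
eps_tilted = uniaxial_tensor(n_o, n_e, [1.0, 1.0, 1.0])
evals = np.sort(np.linalg.eigvalsh(eps_tilted))
```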
Nevertheless, the experimental geometry does not generally correspond to this specific coordinate system in which $\boldsymbol{\epsilon}$ is diagonal, and we are interested in what happens when light neither oscillates nor propagates along any of the principal directions.
To address this condition, in both an analytical and graphical way, we need to use the wave equation, which in the $k$-domain (or using our plane-wave assumption) is
\begin{equation}
\mathbf{k}\times\left(\mathbf{k}\times\mathbf{E}\right)=-\omega^2 \mu \boldsymbol{\epsilon}\mathbf{E},
\end{equation}
where $\mu$ is the magnetic permeability of the material. Then, using $c^2\approx1/(\mu\epsilon_0)$ since we are assuming non-magnetic materials\cite{Born:book,Yariv:book}, the equation is reduced to a homogeneous system
\begin{equation} \label{eq:wave_eq__plane}
\left((\mathbf{\hat{g}}\mathbf{\hat{g}})-\mathbb{I}+\frac{1}{n_{\text{eff}}^2}\mathbf{n}\right)\mathbf{E}=0,
\end{equation}
where $\mathbb{I}$ is the identity matrix, and $(\mathbf{\hat{g}}\mathbf{\hat{g}})$ is the dyadic product (i.e. the tensor whose entries are of the form $g_ig_j$, where $g_{i}$ are the components of the normalized wave vector). Note that the determinant of Eq. (\ref{eq:wave_eq__plane}) must vanish in order to obtain non-trivial solutions. Hence we obtain
\begin{equation}\label{eq:normal_surfaces}
\mathcal{G}:~~~\begin{vmatrix}
\frac{n^2_x}{n^2_{\text{eff}}}-(1-g_x^2) & g_xg_y & g_xg_z\\
g_yg_x & \frac{n^2_y}{n^2_{\text{eff}}}-(1-g_y^2) & g_yg_z\\
g_zg_x & g_zg_y& \frac{n^2_z}{n^2_{\text{eff}}}-(1-g_z^2) \\
\end{vmatrix}=0.
\end{equation}
The surfaces defined by this equation consist of two shells in $k$-space, also called normal-surfaces, which have a nice interpretation: they are the surfaces made by all the eigenvectors of the material, meaning that in any direction there are two eigenmodes of wave propagation that have different wave vector magnitudes and orthogonal polarizations with respect to each other. In other words, a plane wave propagating inside the material in a given direction will be decomposed into two parallel-propagating plane waves which perceive, in general, different effective optical indices. These two shells intersect at a discrete set of points (four in biaxial materials, two in uniaxial ones), and the lines that pass through them and the origin define the crystal axes previously discussed, see Figure \ref{fig:normal_surface}.
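The determinant condition in Eq. (\ref{eq:normal_surfaces}) can also be evaluated numerically: for a fixed propagation direction, one scans $n_{\text{eff}}$ until the determinant vanishes. The following Python sketch (with arbitrary illustrative indices, not tied to any material in this paper) recovers the closed-form extraordinary index of a uniaxial medium at $\psi=45^\circ$.

```python
import numpy as np

def normal_surface_det(n_eff, g, n_prin):
    """Determinant of Eq. (4) for a unit propagation direction g and
    principal indices n_prin = (n_x, n_y, n_z)."""
    g = np.asarray(g, float) / np.linalg.norm(g)
    M = np.outer(g, g) - np.eye(3) + np.diag(np.square(n_prin)) / n_eff**2
    return np.linalg.det(M)

def bisect_root(f, a, b, n_iter=200):
    """Plain bisection; assumes f changes sign on [a, b]."""
    fa = f(a)
    for _ in range(n_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0.0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

# Uniaxial example (n_x = n_y = n_o, n_z = n_e) propagating at 45 degrees to
# the crystal axis: the ordinary root sits exactly at n_o, and the numerical
# extraordinary root should match n_o n_e / sqrt(n_o^2 sin^2 + n_e^2 cos^2).
n_o, n_e = 1.5, 1.7
psi = np.pi / 4
g = np.array([np.sin(psi), 0.0, np.cos(psi)])  # crystal axis along z
f = lambda n: normal_surface_det(n, g, (n_o, n_o, n_e))
n2_expected = n_o * n_e / np.sqrt(n_o**2 * np.sin(psi)**2 + n_e**2 * np.cos(psi)**2)
n2 = bisect_root(f, n_o + 1e-6, n_e - 1e-6)
```

In the uniaxial case the determinant factorizes into the product form of the next subsection, so the numerical root simply reproduces the ellipsoid branch.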
\begin{figure}[!ht]
\centering
\includegraphics[width=0.9\linewidth]{NewFig1.pdf}
\caption{Hypothetical normal-surfaces with their respective crystal axes shown as thick black lines. (a) Biaxial case, here $n_x<n_y<n_z$, so the crystal axes lie in the $x$-$z$ plane. The surfaces cannot be expressed in terms of simple geometrical objects. (b) Uniaxial case with $n_o=n_x=n_y$ and $n_x<n_z=n_e$. One of the surfaces is always a sphere, while the other is an ellipsoid that touches the sphere along the crystal axis. (c) Isotropic case, for which both shells become one single sphere and there is no definite crystal axis.}
\label{fig:normal_surface}
\end{figure}
To determine the polarization (oscillation or perturbation) direction of each propagating plane wave eigenmode, one needs to solve the full eigenvalue/eigenvector problem, i.e. solve the wave equation given by Eq. (\ref{eq:wave_eq__plane}).
This problem can also be handled via the Fresnel equations \cite{Born:book}. In the following subsections the isotropic and uniaxial scenarios are discussed, and their polarization states are described. The biaxial case is far more cumbersome; nevertheless, as expected, the eigenmodes are orthogonally polarized, i.e. $\mathbf{D_1}\cdot\mathbf{D_2}=0$, where the subscripts 1 and 2 are their corresponding labels.
\subsubsection{Isotropic media}
As shown in Figure \ref{fig:normal_surface}c, in isotropic materials the normal surface becomes a single sphere centered at the origin, so in each direction of propagation the wave vector will have the same magnitude, i.e., perceive the same effective index.
Additionally, since the eigenmode problem is degenerate, any plane wave with a given polarization can be regarded as an eigenmode of propagation.
\subsubsection{Uniaxial media}
Uniaxial birefringent materials (analogous to cornea and muscles for mechanical waves) can be described by a crystal axis direction $\mathbf{\hat{A}}$ and two optical indices: the ordinary $n_o$ and the extraordinary $n_e$ indices. It follows that Eq. (\ref{eq:normal_surfaces}) simplifies to
\begin{equation}\label{eq:unniaxial_normal}
\mathcal{G}:~~~ \left(\frac{n^2_{\text{eff}}}{n_o^2}-1\right)\left(\frac{n^2_{\text{eff}}\sin^2\psi}{n_e^2}+\frac{n^2_{\text{eff}}\cos^2\psi}{n_o^2}-1\right)=0,
\end{equation}
where $\psi$ is the angle between the wave vector and the crystal axis, i.e. $\cos\psi=\mathbf{\hat{g}}\cdot\mathbf{\hat{A}}$. The normal-surfaces correspond to a sphere (first term) and an ellipsoid (second term), both centered at the origin and touching along the crystal axis direction. Note that the ellipsoid is symmetric with respect to the crystal axis. According to Eq. (\ref{eq:unniaxial_normal}), one of the two eigenmodes of wave-propagation perceives the same effective index no matter its direction of propagation, while the other eigenmode, with index $n_2$, depends on the angle of the wave vector with respect to the crystal axis, varying between the two extreme values $n_o$ and $n_e$. Explicitly,
defining $k_{\text{eff}}=2\pi n_{\text{eff}}/\lambda$, we have
\begin{equation}\label{eq:k_dependnecy}
\begin{aligned}
k_{\text{eff},1}&=k_o, & k_{\text{eff},2}&=\frac{k_ok_e}{\sqrt{k_o^2\sin^2\psi +k_e^2\cos^2\psi}},
\end{aligned}
\end{equation}
which is shown for two hypothetical $k_o$ and $k_e$ in Figure \ref{fig:normal_surface}b.
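The limiting behavior of Eq. (\ref{eq:k_dependnecy}) is easy to verify numerically: along the crystal axis ($\psi=0$) both modes share $k_o$, while perpendicular to it ($\psi=\pi/2$) the extraordinary wave-number reaches $k_e$. A minimal Python sketch with arbitrary illustrative values:

```python
import numpy as np

def k_extraordinary(psi, k_o, k_e):
    """Extraordinary wave-number of Eq. (6) as a function of the angle psi
    between the wave vector and the crystal axis."""
    return k_o * k_e / np.sqrt(k_o**2 * np.sin(psi)**2 + k_e**2 * np.cos(psi)**2)

k_o, k_e = 2.0, 3.0  # arbitrary illustrative wave-numbers
k_par = k_extraordinary(0.0, k_o, k_e)         # along the axis -> k_o
k_perp = k_extraordinary(np.pi / 2, k_o, k_e)  # perpendicular -> k_e
k_mid = k_extraordinary(np.pi / 4, k_o, k_e)   # between the two extremes
```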
As mentioned previously, the effective index will vary depending on the wave propagation direction, and also on its polarization state, which can be obtained from solving the eigenvalue/eigenvector problem in Eq. (\ref{eq:wave_eq__plane}). The resulting polarization directions (normalized)\cite{Yariv:book} are
\begin{equation}\label{eq:optical_polarizations}
\begin{aligned}
\mathbf{\hat{D}}_1&=\frac{\mathbf{\hat{g}}\times\mathbf{\hat{A}}}{|\mathbf{\hat{g}}\times\mathbf{\hat{A}}|}=\frac{\mathbf{\hat{g}}\times\mathbf{\hat{A}}}{\sin\psi}, \\ \mathbf{\hat{D}}_2&=\frac{(\mathbf{\hat{g}}\times\mathbf{\hat{A}})\times\mathbf{\hat{g}}}{|\mathbf{\hat{g}}\times(\mathbf{\hat{g}}\times\mathbf{\hat{A}})|}=
\frac{\mathbf{\hat{A}}-\mathbf{\hat{g}}\cos\psi }{\sin\psi},
\end{aligned}
\end{equation}
meaning that an ordinary mode does not have any component along the crystal axis direction.
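The geometric content of Eq. (\ref{eq:optical_polarizations}), namely transversality, mutual orthogonality, and the vanishing component of the ordinary mode along the crystal axis, can be checked numerically. A short Python sketch with randomly drawn unit vectors:

```python
import numpy as np

def eigen_polarizations(g, A):
    """Normalized polarization directions of Eq. (7) for a unit propagation
    direction g and a unit crystal axis A."""
    D1 = np.cross(g, A)   # ordinary mode, up to normalization
    D1 = D1 / np.linalg.norm(D1)
    D2 = np.cross(D1, g)  # equals (g x A) x g after normalization
    D2 = D2 / np.linalg.norm(D2)
    return D1, D2

rng = np.random.default_rng(0)
g = rng.normal(size=3); g /= np.linalg.norm(g)
A = rng.normal(size=3); A /= np.linalg.norm(A)
D1, D2 = eigen_polarizations(g, A)
# Transversality, orthogonality, and no ordinary component along the axis.
residuals = (abs(D1 @ g), abs(D2 @ g), abs(D1 @ D2), abs(D1 @ A))
```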
\section{Reverberant elastography in anisotropic media}
\subsection{Introduction}
The generalization of the case involving mechanical waves is far more complicated than the electromagnetic case (Section 2). While in electromagnetism waves are only transversal, in the mechanical case elastic media support the propagation of three types of waves: two shear waves with orthogonal and transversal motion polarization, and one compression wave with longitudinal motion polarization. Furthermore, the role of the $3\times3$ electromagnetic tensor is now played by the stiffness tensor $\mathbf{c}$, a $3\times3\times3\times3$ tensor. Fortunately, given symmetry and energy-conservation conditions, $\mathbf{c}$ has only 21 independent elements\cite{Graff:book} instead of 81 -- compared with $\boldsymbol{\epsilon}$, which has three independent elements.
In this section, the reverberant theory is extended to anisotropic materials, specifically to uniaxial birefringent media which, in elastic solids, are equivalent to the transverse isotropic model \cite{Feng_2013}. The expressions for the wave-number $\mathbf{k}$ (equivalent to the effective index for electromagnetic waves) and for the motion direction of the mechanical perturbation (the polarization state for electromagnetic waves) carry over from Section 2 as we pass from electromagnetic to mechanical shear waves. We extend the transversal wave dynamics of light into elastic bodies, completely ignoring the compression waves, which in any case propagate at much higher speeds and are not considered in this paper. In Section 3.2, we revisit the isotropic reverberant case, generalizing the specific-case equations of previous works\cite{Parker_2017,Ormachea_2018,Ormachea_2019,Zvietcovich_2019}, and we finalize with the derivation for the anisotropic case in Section 3.3.
\subsection{Isotropic media}
A spatio-temporal particle velocity (motion) reverberant field is defined as $\mathbf{V}(\mathbf{r},t)$, where $\boldsymbol{r}$ represents the 3D position vector and $t$ is time. This field
is the superposition of all possible plane shear waves traveling in random directions with the same wave-number, $k=|\mathbf{k}|$, and frequency, $\omega_0$,
\begin{equation}
\mathbf{V}(\mathbf{r},t)=\sum_{q,l}\mathbf{\hat{V}}_{ql} v_{ql}e^{i\left(k\mathbf{\hat{g}}_q\cdot\boldsymbol{r}-\omega_0t\right)}.
\end{equation}
The subscript $q$ specifies a realization of $\mathbf{\hat{g}}_q$, a random unit vector indicating the direction of wave propagation, and the index $l$ indicates a realization of $\mathbf{\hat{V}}_{ql}$, the random vector describing the direction of perturbation (particle velocity for mechanic waves, corresponding to polarization of light for the field $\mathbf{\hat{D}}$). Since we are dealing with transversal waves, $\mathbf{\hat{V}}_{ql}\cdot \mathbf{\hat{g}}_q=0$. Lastly, $v_{ql}$ is an independent, identically-distributed random variable describing the magnitude of the particle velocity within a realization. The summation over $q$ is understood to be taken over the $4\pi$ solid angle, while over $l$ it is taken over a $2\pi$ angle within a disk perpendicular to the wave direction given by $\mathbf{\hat{g}}_q$.
Here we proceed differently than in previous works \cite{Parker_2017,Ormachea_2018,Ormachea_2019,Zvietcovich_2019}, using the fact that any oscillation can be decomposed in a vector basis consisting of two directions orthogonal to the wave propagation direction, $\mathbf{\hat{g}}$. Thus our sampling consists of independent realizations of these two directions, instead of sampling over all possible directions of oscillation.
These two approaches are equivalent and arrive at the same expressions; nevertheless, we opt for the decomposition method since it can be extended directly to tackle the anisotropic problem.
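This decomposition sampling can be made concrete by synthesizing one realization of a reverberant field numerically. The following Python sketch (with arbitrary parameters, and not representing our acquisition or processing pipeline) sums plane shear waves over random directions, drawing the two transversal modes of each wave independently:

```python
import numpy as np

rng = np.random.default_rng(1)

def reverberant_field(points, k, n_waves=500):
    """One realization (t = 0) of a reverberant shear-wave field: a sum of
    plane waves with uniformly random directions, each carrying two
    independently sampled transversal modes (illustrative sketch)."""
    g = rng.normal(size=(n_waves, 3))
    g /= np.linalg.norm(g, axis=1, keepdims=True)    # uniform on the sphere
    z = np.array([0.0, 0.0, 1.0])
    v1 = np.cross(g, z)                              # azimuthal-like mode
    v1 /= np.linalg.norm(v1, axis=1, keepdims=True)  # (g exactly parallel to
    v2 = np.cross(v1, g)                             #  z is improbable here)
    amp = rng.normal(size=(n_waves, 2))              # random mode amplitudes
    phase = rng.uniform(0.0, 2.0 * np.pi, size=(n_waves, 2))
    V = np.zeros((len(points), 3), dtype=complex)
    for q in range(n_waves):
        carrier = np.exp(1j * k * (points @ g[q]))
        V += np.outer(carrier * amp[q, 0] * np.exp(1j * phase[q, 0]), v1[q])
        V += np.outer(carrier * amp[q, 1] * np.exp(1j * phase[q, 1]), v2[q])
    return V, g, v1, v2

pts = np.linspace(0.0, 1.0, 8)[:, None] * np.array([0.0, 0.0, 1.0])
V, g, v1, v2 = reverberant_field(pts, k=2.0 * np.pi)
# Transversality of both sampled modes for every plane wave in the sum.
max_dot = max(np.abs((v1 * g).sum(axis=1)).max(),
              np.abs((v2 * g).sum(axis=1)).max())
```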
Let us use spherical coordinates to express the direction of wave propagation. For simplicity we choose the following basis, note the resemblance to Eq. (\ref{eq:optical_polarizations}),
\begin{equation}\label{eq:decmposition_iso}
\begin{aligned}
\mathbf{\hat{V}}_{1}&=\frac{\mathbf{\hat{g}}\times\mathbf{\hat{z}}}{\sqrt{1-(\mathbf{\hat{g}}\cdot\mathbf{\hat{z}})^2}}=\boldsymbol{\hat{\varphi}}, \\ \mathbf{\hat{V}}_2&=\frac{(\mathbf{\hat{g}}\times\mathbf{\hat{z}})\times\mathbf{\hat{g}}}{\sqrt{1-(\mathbf{\hat{g}}\cdot\mathbf{\hat{z}})^2}}=\boldsymbol{\hat{\theta}},
\end{aligned}
\end{equation}
where $\boldsymbol{\hat{\theta}}=\cos(\theta)\cos(\varphi)\mathbf{\hat{x}}+\cos(\theta)\sin(\varphi)\mathbf{\hat{y}}-\sin(\theta)\mathbf{\hat{z}}$ and $\boldsymbol{\hat{\varphi}}=\cos(\varphi)\mathbf{\hat{y}}-\sin(\varphi)\mathbf{\hat{x}}$ are the unit vectors in the polar and azimuthal directions at ($\theta$,$\varphi$), respectively, and consequently $\mathbf{\hat{x}},~\mathbf{\hat{y}}$ and $\mathbf{\hat{z}}$ are the unitary Cartesian coordinate vectors, see Figure \ref{fig:geometry}. Therefore we have
\begin{equation}\begin{split}
\mathbf{V}(\mathbf{r},t)=\sum_{q_1,l_1}\mathbf{\hat{V}}_{q_1,l_1} v_{q_1,l_1}e^{i\left(k_1\mathbf{\hat{g}}_{q_1}\cdot\boldsymbol{r}-\omega_0t\right)}+ \\
\sum_{q_2,l_2}\mathbf{\hat{V}}_{q_2l_2} v_{q_2l_2}e^{i\left(k_2\mathbf{\hat{g}}_{q_2}\cdot\boldsymbol{r}-\omega_0t\right)},
\end{split}\end{equation}
where both contributions come from independent realizations.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.9\linewidth]{Figure2.pdf}
\caption{Mode decomposition for any shear wave with direction given by $\mathbf{\hat{g}}_q$ (radial direction, defined by the angles $\theta$ and $\varphi$). Any perturbation direction $\mathbf{\hat{V}}_{ql}$, since it is transversal, can be expressed in terms of its $\boldsymbol{\hat{\theta}}$ and $\boldsymbol{\hat{\varphi}}$ components.
Therefore, instead of randomly sampling this perturbation direction and obtaining the projections, we sample each mode independently, i.e. we sample $\mathbf{\hat{V}}_{q_1,l_1}$ and $\mathbf{\hat{V}}_{q_2,l_2}$. }
\label{fig:geometry}
\end{figure}
Given that ultrasound and OCT systems typically measure the particle velocity in one direction, which we denote as the sensor axis, $\mathbf{\hat{e}_s}$, we will project this resulting particle velocity, $ V_s(\mathbf{r},t)=\mathbf{V}(\mathbf{r},t)\cdot\mathbf{\hat{e}}_s$, according to the desired geometry,
\begin{equation}\label{eq:V_projected}\begin{split}
V_s(\mathbf{r},t)=\sum_{q_1,l_1}V_{q_1,l_{1s}} v_{q_1,l_1}e^{i\left(k\mathbf{\hat{g}}_{q_1}\cdot\boldsymbol{r}-\omega_0t\right)} + \\
\sum_{q_2,l_2}V_{q_2l_{2s}}v_{q_2l_2}e^{i\left(k\mathbf{\hat{g}}_{q_2}\cdot\boldsymbol{r}-\omega_0t\right)},
\end{split}\end{equation}
where $V_{ql_s}=\mathbf{\hat{V}}_{ql}\cdot\mathbf{\hat{e}}_s$ becomes a scalar random variable. We are interested in the autocorrelation function of Eq. (\ref{eq:V_projected}) in both space and time, which we denote as $B_{V_sV_s}$, and is defined as
\begin{equation}\label{eq:full_autocorrelation}
B_{V_sV_s}(\boldsymbol{\Delta r},\Delta t)=\mathbb{E}\left\{V_s(\mathbf{r},t)V_s^*(\mathbf{r}+\boldsymbol{\Delta r},t+\Delta t)\right \}
\end{equation}
where $\mathbb{E}$ represents an ensemble average and the asterisk represents conjugation. Many of the resulting terms are cross terms, which vanish because they correspond to independent realizations, so Eq. (\ref{eq:full_autocorrelation}) simplifies to
\begin{equation}\begin{split}\label{eq:full_autocorrelation1}
B_{V_sV_s}(\boldsymbol{\Delta r},\Delta t)=\frac{\overline{v^2}}{2}e^{i\omega_0\Delta t}~ \times~~~~~~~~~ \\ \mathbb{E}\left\{ \sum_{q_1,l_1} V_{{q_1l_1}_s}^2 e^{-ik\mathbf{\hat{g}}_{q_1}\cdot\boldsymbol{\Delta r}}+
\sum_{q_2,l_2} V_{{q_2l_2}_s}^2 e^{-ik\mathbf{\hat{g}}_{q_2}\cdot\boldsymbol{\Delta r}} \right\},
\end{split}\end{equation}
in which we renamed the expected value of the squared particle velocity in each direction, i.e. $\expval{v_{q_1l_1}^2}_{q_1l_1}=\expval{v_{q_2l_2}^2}_{q_2l_2}=\overline{v^2}/2$, assuming that each component carries half the energy. Note that the expectation in Eq. (\ref{eq:full_autocorrelation1}) could be factored in this way given the independence between $v_{q l}$ and $\{\hat{g}_q,V_{ql_s}\}$. In an ideal reverberant field this ensemble average becomes the average over all possible directions of wave propagation (over $4\pi$), specified in spherical coordinates with $(\theta,\varphi)$. Therefore, renaming $B_{V_sV_s}:=B_{\text{iso}}$, we have
\begin{equation}\label{eq:B_isotropic_integral}\begin{split}
B_{iso}(\boldsymbol{\Delta r},\Delta t)=\frac{\overline{v^2}}{8\pi}e^{i\omega_0\Delta t}\int_{0}^{2\pi}\int_{0}^{\pi}\left[V_{1,s}^2(\theta,\varphi) + \right. \\ \left.
V_{2,s}^2(\theta,\varphi)\right]e^{-ik\mathbf{\hat{g}}\cdot\boldsymbol{\Delta r}}\sin\theta d\theta d\varphi.
\end{split}\end{equation}
To solve this integral, we choose the direction of correlation that results in the greatest simplification, i.e. along the $z$-axis, $\boldsymbol{\Delta r}=(\Delta z) \boldsymbol{\hat{z}}$, so
\begin{equation}\label{eq:correlationdirection}
k\mathbf{\hat{g}}\cdot(\Delta z)\mathbf{\hat{z}}=k\Delta z \cos\theta.
\end{equation}
We must set the direction along which the particle velocity will be measured (also called sensor axis), and, given the symmetry around the $z$-axis, we choose it to be somewhere along the $xz$ plane, so $\mathbf{\hat{e}}_s=\cos\theta_s \mathbf{\hat{z}}+\sin\theta_s \mathbf{\hat{x}}$, where $\theta_s$ is the angle of the sensor with respect to the $z$ axis:
\begin{equation}\label{eq:velocitydecomposition}
\begin{aligned}
V_{1,s}(\theta,\varphi)&=-\sin\varphi\sin\theta_s, \\
V_{2,s}(\theta,\varphi)&=\cos\varphi\cos\theta\sin\theta_s-\sin\theta\cos\theta_s.
\end{aligned}
\end{equation}
Note that whenever $\theta_s=0$ the sensor is parallel to the correlation direction, while when $\theta_s=\pi/2$ the sensor and correlation directions become perpendicular. These canonical scenarios are the two cases that have been studied previously \cite{Parker_2017,Ormachea_2018,Ormachea_2019,Zvietcovich_2019}. Substituting Eqs. (\ref{eq:correlationdirection}-\ref{eq:velocitydecomposition}) into Eq. (\ref{eq:B_isotropic_integral}) and solving the integral leads to
\begin{equation}\label{eq:isotropic_result}
\begin{split}
B_{\text{iso}}(\Delta z,\Delta t)=\overline{v^2}e^{i\omega_0 \Delta t} \left\{ \frac{\sin^2\theta_s}{2} \left[ j_0 (k \Delta z)- \right. \right. \\ \left. \left.
\frac{j_1(k \Delta z)}{k \Delta z} \right]+\cos^2\theta_s \frac{j_1(k \Delta z)}{k \Delta z} \right\},
\end{split}
\end{equation}
where $j_n(x)$ are the spherical Bessel functions of order $n$. Analogously, considering the setup frame in which the sensor is generally fixed, we can define the sensor axis to be the $z'$ axis and interpret $\theta_s$ as the autocorrelation direction angle with respect to $z'$ (sensor axis). Then, note that Eq. (\ref{eq:isotropic_result}) is a linear combination of the two canonical cases: correlation parallel or perpendicular to the sensor reported in \cite{Parker_2017, Zvietcovich_2019}. It follows that the width of the central region is related to the wave-number, $k$, and so its value can be estimated by fitting the measurements, see Figure \ref{fig:Bvv_FullIsotropic}.
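As a numerical sanity check (not part of the original derivation), Eq. (\ref{eq:isotropic_result}) can be verified against direct quadrature of Eq. (\ref{eq:B_isotropic_integral}). The Python sketch below assumes $\overline{v^2}=1$ and $\Delta t=0$; all function names are illustrative.

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import spherical_jn

def b_iso_closed(k_dz, theta_s):
    # Eq. (isotropic result) with v2bar = 1 and dt = 0
    j0 = spherical_jn(0, k_dz)
    j1x = spherical_jn(1, k_dz) / k_dz
    return 0.5 * np.sin(theta_s)**2 * (j0 - j1x) + np.cos(theta_s)**2 * j1x

def b_iso_numeric(k_dz, theta_s):
    # direct 4*pi average of (V1s^2 + V2s^2) e^{-ik dz cos(theta)};
    # only the real part survives the angular integration
    def integrand(theta, phi):
        v1 = -np.sin(phi) * np.sin(theta_s)
        v2 = np.cos(phi) * np.cos(theta) * np.sin(theta_s) - np.sin(theta) * np.cos(theta_s)
        return (v1**2 + v2**2) * np.cos(k_dz * np.cos(theta)) * np.sin(theta) / (8 * np.pi)
    val, _ = dblquad(integrand, 0, 2 * np.pi, 0, np.pi)
    return val
```

For any intermediate sensor angle the two evaluations agree, confirming that Eq. (\ref{eq:isotropic_result}) is indeed the announced linear combination of the parallel and perpendicular canonical cases.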
\begin{figure}[!ht]
\centering
\includegraphics[width=\linewidth]{Figure3.pdf}
\caption{Autocorrelation functions for an isotropic material. The functions are normalized, and the actual maximum at the origin is $1/3$. (a) 2D autocorrelation map. The sensor direction corresponds to the $z'$ axis. (b) 1D profiles for different correlation directions, with an angle $\theta_s$ with respect to the sensor axis. Only one half of each plot is shown, given that they are symmetric.}
\label{fig:Bvv_FullIsotropic}
\end{figure}
\subsection{Anisotropic media: uniaxial case}
Unlike the isotropic case, for uniaxial materials we cannot select any two vectors to decompose the oscillation, but instead we have to use the natural decomposition in ordinary and extraordinary modes. Our assumption is that both eigenmodes are equally represented and that each carries half of the energy, given the reverberant chamber condition. Therefore, the reverberant field is given by the summation of ordinary and extraordinary waves,
\begin{equation}\begin{split}
\mathbf{V}(\mathbf{r},t)=\sum_{q_1,l_1}\mathbf{\hat{V}}_{q_1l_1} v_{q_1l_1}e^{i\left(k_1\mathbf{\hat{g}}_{q_1}\cdot\boldsymbol{r}-\omega_0t\right)}+ \\
\sum_{q_2,l_2}\mathbf{\hat{V}}_{q_2l_2} v_{q_2l_2}e^{i\left(k_2\mathbf{\hat{g}}_{q_2}\cdot\boldsymbol{r}-\omega_0t\right)},
\end{split}\end{equation}
where the labels $1$ and $2$ stand for ordinary and extraordinary modes, respectively. Similarly to the isotropic case, both contributions are random and independent of each other, so the cross terms vanish. Consequently, the autocorrelation ends up being the average of both the ordinary and extraordinary contributions over $4\pi$, so
\begin{equation}\label{eq:B_general_integral}\begin{split}
B_{\text{aniso}}(\boldsymbol{\Delta r},\Delta t)=\frac{\overline{v^2}}{8\pi}e^{i\omega_0\Delta t}\int_{0}^{2\pi}\int_{0}^{\pi}\left(V_{1,s}^2 e^{-ik_1\mathbf{\hat{g}}\cdot\boldsymbol{\Delta r}} \right. \\ \left.
+V_{2,s}^2 e^{-ik_2\mathbf{\hat{g}}\cdot\boldsymbol{\Delta r}}\right)\sin\theta d\theta d\varphi.
\end{split}\end{equation}
Before proceeding, we need to revisit the corresponding oscillation directions for each eigenmode. Unlike the electromagnetic case, in which the electric field $\mathbf{E}$ may oscillate along any arbitrary direction and the dielectric tensor responds differently to each direction, for mechanical shear waves the stiffness tensor and the stress are defined on planes rather than directions. Let us consider only the shearing dynamics of a transversely isotropic linear-elastic medium and write the corresponding part of the compliance (inverse stiffness) tensor in the coordinate system which diagonalizes it \cite{Graff:book},
\begin{equation}
\begin{pmatrix}
\gamma_{X'Y'}\\
\gamma_{X'Z'}\\
\gamma_{Y'Z'}
\end{pmatrix}=\begin{pmatrix}
1/G_e & 0 & 0\\
0& 1/G_o &0\\
0 &0& 1/G_o
\end{pmatrix} \begin{pmatrix}
\sigma_{X'Y'}\\
\sigma_{X'Z'}\\
\sigma_{Y'Z'}
\end{pmatrix}.
\end{equation}
Since it is still a $3\times 3$ tensor, the mathematics remains the same as in the electromagnetic case; however, the physical interpretation changes dramatically. Here the eigenvalue corresponding to the extraordinary mode (multiplicity of one) is related to shear deformations along the plane perpendicular to the axis-of-symmetry, $\boldsymbol{\hat{A}}$ ($\boldsymbol{\hat{z}'}$), whereas the ordinary eigenvalue is related to components that include this axis. Therefore, the oscillation directions of the eigenmodes propagating along $\mathbf{\hat{g}}$ are swapped with respect to the electromagnetic case, i.e.
\begin{equation}\label{eq:mechanical_polarizations}
\begin{aligned}
\mathbf{\hat{V}}_1&=\frac{(\mathbf{\hat{g}}\times\mathbf{\hat{A}})\times\mathbf{\hat{g}}}{\sqrt{1-(\mathbf{\hat{g}}\cdot\mathbf{\hat{A}})^2}}, & \mathbf{\hat{V}}_2&=\frac{\mathbf{\hat{g}}\times\mathbf{\hat{A}}}{\sqrt{1-(\mathbf{\hat{g}}\cdot\mathbf{\hat{A}})^2}}.
\end{aligned}
\end{equation}
Additionally, when considering anisotropy, not only do the calculations become more involved, but more cases appear, since the axis-of-symmetry (crystal axis in optics) $\mathbf{\hat{A}}$ has to be considered along with the correlation direction and the sensor axis. Nevertheless, there is an immediate conclusion obtained from Eq. (\ref{eq:mechanical_polarizations}): the extraordinary mode oscillation has no component along the axis-of-symmetry, see Figure \ref{fig:Polarization_Uniaxial}. As a result, whenever the sensor is along $\mathbf{\hat{A}}$, only the ordinary contribution will be measured, and so we expect $k$ to be related only to $k_o$. In the remaining cases we expect the extraordinary contribution to spread the range of values of $k$ between $k_o$ and $k_e$.
\begin{figure}[!ht]
\centering
\includegraphics[width=1\linewidth]{NewFig4.pdf}
\caption{The polarization eigenmodes are shown in $k$-space; the extraordinary modes have no component along the axis-of-symmetry (black arrow).}
\label{fig:Polarization_Uniaxial}
\end{figure}
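The geometric claims above are easy to verify numerically: both directions in Eq. (\ref{eq:mechanical_polarizations}) are unit vectors transverse to $\mathbf{\hat{g}}$, and the extraordinary one is orthogonal to $\mathbf{\hat{A}}$. A short Python sketch (function names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def eigenmodes(g, A):
    # Eq. (mechanical polarizations): ordinary (V1) and extraordinary (V2) directions
    s = np.sqrt(1.0 - np.dot(g, A)**2)
    V1 = np.cross(np.cross(g, A), g) / s
    V2 = np.cross(g, A) / s
    return V1, V2

def unit(v):
    return v / np.linalg.norm(v)

# random propagation direction and axis-of-symmetry
g = unit(rng.normal(size=3))
A = unit(rng.normal(size=3))
V1, V2 = eigenmodes(g, A)
```

The identity $(\mathbf{\hat{g}}\times\mathbf{\hat{A}})\times\mathbf{\hat{g}}=\mathbf{\hat{A}}-(\mathbf{\hat{g}}\cdot\mathbf{\hat{A}})\mathbf{\hat{g}}$ makes the normalization by $\sqrt{1-(\mathbf{\hat{g}}\cdot\mathbf{\hat{A}})^2}$ evident.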
To solve the integral in Eq. (\ref{eq:B_general_integral}) analytically, an approximation must be made about the amount of anisotropy: we assume that it is small. In other words,
\begin{equation} \label{eq:Delta_e}
|\delta_e|=\left|\frac{k_e^2-k_o^2}{k_e^2}\right|\ll1,
\end{equation}
such that the exponential can be expanded as a Taylor series around $k_o$ and with respect to $\delta_e$, i.e.
\begin{equation}\begin{split}
e^{-ik_2\mathbf{\hat{g}}\cdot\boldsymbol{\Delta r}}\approx e^{-ik_o\mathbf{\hat{g}}\cdot\boldsymbol{\Delta r}}\left(1-i\frac{k_o\delta_e}{2}\left(\mathbf{\hat{g}}\cdot\boldsymbol{\Delta r}\right) \right. \\ \left. \left[1-(\mathbf{\hat{g}}\cdot\mathbf{\hat{A}})^2\right]\right).
\end{split}\end{equation}
Given the expansion, the autocorrelation can be rearranged such that the resulting expressions can be regarded as adding corrections to the isotropic results derived
in Eq. (\ref{eq:isotropic_result}). Explicitly:
\begin{equation}\begin{split}
B_{\text{aniso}}(\boldsymbol{\Delta r},\Delta t;k_o,k_e)=B_{\text{iso}}(\boldsymbol{\Delta r},\Delta t;k_o)+\\
\delta B(\boldsymbol{\Delta r},\Delta t;k_o,k_e),
\end{split}\end{equation}
where the anisotropic correction, assuming $\mathbf{\hat{e}_s}$ as sensor axis, becomes
\begin{equation}\label{eq:anisotropic_correction}\begin{split}
\delta B=-i\frac{k_o\delta_e}{2}\frac{\overline{v^2}}{8\pi}e^{i\omega_0\Delta t}\int_{0}^{2\pi}\int_{0}^{\pi}\left(\mathbf{\hat{g}}\cdot\boldsymbol{\Delta r}\right) \\
(\mathbf{\hat{e}_s}\cdot [\mathbf{\hat{g}}\times \mathbf{\hat{A}}])^2 e^{-ik_o\mathbf{\hat{g}}\cdot\boldsymbol{\Delta r}} \sin\theta d\theta d\varphi,
\end{split}\end{equation}
and in which the explicit dependency of $\delta B$ on $\boldsymbol{\Delta r}$ and $\Delta t$ was dropped.
There are several studies characterizing anisotropic samples such as muscles \cite{LEVINSON_1987,Wang_2013} and tendons \cite{Brum_2014,Aubry2013}.
Although assuming small anisotropy is acceptable in many optical materials\cite{Yariv:book}, for mechanical waves it may not be, e.g. muscles with weight loads. In these mechanical cases, it may be safer to define $k_m=(k_o+k_e)/2$ and $k_d=(k_e-k_o)/2$, so the expansion can be done around $k_m$ and with respect to the relative anisotropy $\delta=k_d/k_m$. However, the resulting expressions become longer since the zeroth order terms cannot be grouped to retrieve the known isotropic results.
Finally, we proceed with the calculation of the anisotropic correction. As in the isotropic case, we choose the correlation direction along $z$ to simplify the integration. We consider two cases: correlation perpendicular to the sensor direction, and correlation parallel to it. For both cases, an arbitrary axis-of-symmetry of the medium is given by its spherical coordinates $(\theta_A,\varphi_A)$ or in Cartesian coordinates by $\mathbf{\hat{A}}=\alpha \mathbf{\hat{x}}+\beta\mathbf{\hat{y}}+\gamma\mathbf{\hat{z}}=\sin\theta_A\cos\varphi_A\mathbf{\hat{x}}+\sin\theta_A\sin\varphi_A\mathbf{\hat{y}}+\cos\theta_A\mathbf{\hat{z}}$.
\begin{enumerate}
\item \textbf{Perpendicular correlation and sensor directions, i.e. $\boldsymbol{\theta_s=\pi/2}$.}
Given that the correlation direction is along $z$, for $\theta_s=\pi/2$ we choose the sensor axis to lie along $\mathbf{\hat{x}}$. Then, for an arbitrary axis-of-symmetry direction $\mathbf{\hat{A}}$, the integration of Eq. (\ref{eq:anisotropic_correction}) leads to
\begin{equation}\begin{split}
\delta B_{\perp}=-\frac{\delta_e}{4}\left\{\gamma^2j_2(k_o \Delta z)-\beta^2 \left[2j_2(k_o \Delta z)- \right.\right. \\ \left. \left.
j_0(k_o \Delta z) + \cos(k_o \Delta z)\right]\right\}.
\end{split}\end{equation}
There is a harmonic term which does not decay with correlation distance, contrary to what would be expected. This is not a contradiction, but rather an artifact of the Taylor expansion: we are expanding the exponential, and as the correlation distance increases this first-order approximation fails and more terms are needed.
Note that the component of the axis-of-symmetry along the sensor direction does not appear explicitly. This was expected, since along the sensor axis the correction vanishes (the extraordinary contribution becomes zero). Therefore varying $\alpha$ changes the magnitude of the correction but does not alter its shape, which depends solely on the ratio between $\beta$ and $\gamma$. The complete autocorrelation function becomes
\begin{equation}\label{eq:CaseA}
\begin{split}
B_{V_sV_s}&=\frac{1}{2}\left( j_0(k_o\Delta z)-\frac{j_1(k_o\Delta z)}{k_o\Delta z}\right) \\
-&\frac{\delta_e}{4}\bigg\{\cos^2\theta_A j_2(k_o \Delta z)- \\
\sin^2\theta_A\sin^2\varphi_A \big[&2j_2(k_o \Delta z)- j_0(k_o \Delta z)+ \cos(k_o \Delta z)\big]\bigg\}.
\end{split}\end{equation}
Figure \ref{fig:GroupCaseA} shows the anisotropic result for different axis-of-symmetry orientations. When the axis-of-symmetry lies in the $yz$-plane ($\varphi_A=\pi/2$), i.e. $\mathbf{\hat{A}}=\sin\theta_A\mathbf{\hat{y}}+\cos\theta_A\mathbf{\hat{z}}$, the sensor is perpendicular to both the axis-of-symmetry and the correlation direction. This case corresponds to the maximum anisotropic contribution for any given $\theta_A$. Even though the departure of the central lobes is not pronounced, their difference becomes significant after the first zero. On the other hand, whenever $\varphi_A=0$, the axis-of-symmetry lies in the $xz$-plane as $\mathbf{\hat{A}}=\sin\theta_A\mathbf{\hat{x}}+\cos\theta_A\mathbf{\hat{z}}$, and the anisotropic contribution is the smallest (since the projection of the axis-of-symmetry on the sensor direction is the highest for a given $\theta_A$). As seen in Fig. \ref{fig:GroupCaseA}, the autocorrelation function does not vary strongly for weak anisotropy in this configuration.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.85\linewidth]{NewFig5.pdf}
\caption{Autocorrelation function obtained with the sensor perpendicular to the correlation direction. Comparison of different axis-of-symmetry directions $(\theta_A,\varphi_A)$, with a normalized anisotropy constant $\delta_e\approx0.23$. A scaling factor of 3 is used for all curves.}
\label{fig:GroupCaseA}
\end{figure}
\item \textbf{Parallel correlation and sensor directions, i.e. $\boldsymbol{\theta_s=0}$.}
In this case, both the sensor and the correlation directions are along $z$; then $\mathbf{\hat{e}}_s=\mathbf{\hat{z}}$ and the integration of Eq. (\ref{eq:anisotropic_correction}) leads to
\begin{equation}
\delta B_{\parallel}=-\frac{\delta_e}{4}(1-\gamma^2)j_2(k_o\Delta z).
\end{equation}
Hence the complete expression of the autocorrelation becomes
\begin{equation}\label{eq:CaseB}
B_{V_sV_s}=\frac{j_1(k_o\Delta z)}{k_o\Delta z}-\frac{\delta_e}{4}\sin^2(\theta_A)\,j_2(k_o\Delta z),
\end{equation}
where, again, $\theta_A$ is the angle between the axis-of-symmetry and the correlation direction. Figure \ref{fig:GroupCaseB} shows the resulting autocorrelation for three different $\theta_A$ values, and $\delta_e\approx 0.23$. The central lobe width, given by the first zero position, exhibits a small but noticeable change, greater than those in Figure \ref{fig:GroupCaseA}.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\linewidth]{NewFig6.pdf}
\caption{Autocorrelation function obtained for the sensor parallel to the correlation direction. Comparison of three different $\theta_A$ when $\delta_e\approx0.23$. A scaling factor of 3 is used for all curves.}
\label{fig:GroupCaseB}
\end{figure}
\end{enumerate}
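As a consistency check, the closed forms of Eqs. (\ref{eq:CaseA}) and (\ref{eq:CaseB}) can be compared against direct numerical integration of the isotropic term plus the first-order correction of Eq. (\ref{eq:anisotropic_correction}). A Python sketch, assuming $\overline{v^2}=1$, $\Delta t=0$, and illustrative function names:

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import spherical_jn

def case_a(x, th_A, ph_A, de):
    # Eq. (Case A): correlation along z, sensor along x (theta_s = pi/2); x = k_o dz
    j0, j1, j2 = (spherical_jn(n, x) for n in range(3))
    return (0.5 * (j0 - j1 / x)
            - (de / 4) * (np.cos(th_A)**2 * j2
                          - np.sin(th_A)**2 * np.sin(ph_A)**2
                          * (2 * j2 - j0 + np.cos(x))))

def case_b(x, th_A, de):
    # Eq. (Case B): correlation and sensor along z (theta_s = 0)
    return (spherical_jn(1, x) / x
            - (de / 4) * np.sin(th_A)**2 * spherical_jn(2, x))

def first_order_numeric(x, A, e_s, de):
    # B_iso(k_o) + delta B integrated directly over the sphere (v2bar = 1, dt = 0)
    def integrand(th, ph):
        g = np.array([np.sin(th) * np.cos(ph), np.sin(th) * np.sin(ph), np.cos(th)])
        iso = 1.0 - np.dot(e_s, g)**2                      # = V1s^2 + V2s^2
        corr = -1j * (x * de / 2) * g[2] * np.dot(e_s, np.cross(g, A))**2
        return np.real((iso + corr) * np.exp(-1j * x * g[2])) * np.sin(th) / (8 * np.pi)
    val, _ = dblquad(integrand, 0, 2 * np.pi, 0, np.pi)
    return val
```

For a generic axis-of-symmetry in the $yz$-plane, the quadrature reproduces both closed forms to numerical precision.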
\subsection{Practical cases in USE and OCE}
In reverberant OCE \cite{Zvietcovich_2019}, the motion measurement (sensor) direction is typically fixed along an axis, let us say the $x'$ axis, and 2D autocorrelations are taken along a plane perpendicular to it, the $y'z'$-plane. Then, Case A with $\varphi_A=\pi/2$ is of particular interest when the axis-of-symmetry of the material (e.g., the orientation of fibers in muscle tissue) lies in the $y'z'$-plane at a certain angle $\theta_A$. Here, $\theta_A$ is interpreted as the angle between the axis-of-symmetry and the correlation direction when the axis-of-symmetry is fixed to the $z'$ axis. Then, when $\theta_A=0$, the correlation direction corresponds to the $z'$ axis ($\Delta z'$), and when $\theta_A=\pi/2$, the correlation direction corresponds to the $y'$ axis ($\Delta y'$).
In Figures \ref{fig:2DCaseA2}.d and \ref{fig:2DCaseA2}.e, the full 2D autocorrelation maps are shown for two different axis-of-symmetry angles: parallel to the $z'$ axis, and at 45$^\circ$ from both the $z'$ and the $y'$ axes. Then, by detecting the major and minor axes of the ellipses, not only can the direction of fibers in muscle be detected, but also their corresponding ordinary and extraordinary wave-numbers, which are related to the shear moduli parallel and perpendicular to the fibers, respectively. When the axis-of-symmetry of the material is parallel to the sensor along the $x'$ axis and 2D autocorrelations are taken along the $y'z'$-plane, Case A with $\varphi_A=0$ and $\theta_A=\pi/2$ is useful. As expected, in Figure \ref{fig:2DCaseA2}.f, the autocorrelation obtained is rotationally symmetric, since the $y'z'$ plane, in this case, is the plane-of-isotropy in the transverse isotropic model of elasticity.
\begin{figure}[!ht]
\centering
\includegraphics[width=\linewidth]{Figure7.pdf}
\caption{(a-c) Comparison between three different sample-sensor geometries having the sensor axis fixed along $x'$. Material's axis-of-symmetry: (a) parallel to $z'$; (b) at 45$^\circ$ from both $z'$ and $y'$; (c) along the sensor axis. (d-f) Resulting 2D autocorrelation maps in the $y'z'$-plane using the same scaling factor and $\delta_e\approx0.55$, corresponding to each geometry (a)-(c), i.e. axis-of-symmetry pointing at: (d) $z'$; (e) 45$^\circ$ from both $z'$ and $y'$; (f) parallel to $x'$.}
\label{fig:2DCaseA2}
\end{figure}
We have derived the autocorrelation function for two cases, (A) correlation perpendicular to the sensor, and (B) correlation parallel to the sensor, given by Eqs. (\ref{eq:CaseA}) and (\ref{eq:CaseB}), respectively. Nevertheless, it is of interest to compare our results to earlier isotropic equations, since that has been the strategy used in previous work \cite{Ormachea_2018, Zvietcovich_2019}. Figure \ref{fig:CaseA90} shows the case for the sensor perpendicular to the axis-of-symmetry and correlation directions as in Figures \ref{fig:2DCaseA2}.a, and \ref{fig:2DCaseA2}.b. The comparison is made for the orthogonal cases $\theta_A=0$ (along fibers), and $\theta_A=\pi/2$ (perpendicular to fibers) for different values of anisotropy $\delta_e$ including the isotropic case using $k_o$. As shown, for a constant $k_o$, the larger the anisotropy, the larger the separation of the second lobe in the $\theta_A=0$ case with respect to the $\theta_A=\pi/2$ case.
In reverberant USE \cite{Ormachea_2018}, where the motion measurement direction is typically located along the $x$ axis, 2D autocorrelations are taken along the XY or XZ plane, owing to the ability of USE to image larger depths. Then, Case B is relevant. Figure \ref{fig:CaseB90} shows the comparison between the anisotropic result and three isotropic equations using $k_o$, $k_e$, and $k_m=(k_o + k_e)/2$ for parallel sensor and correlation directions, and an orthogonal axis-of-symmetry. Here, the isotropic equation using $k_m$ fits the central and first side lobes very well. Therefore, $k_m$, in conjunction with the estimation of $k_o$ in Case A of Figure \ref{fig:CaseA90}, allows for the calculation of $k_e$.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.87\linewidth]{NewFig8.pdf}
\caption{Comparison of Eq. (\ref{eq:CaseA}) ($\theta_s=\pi/2$, $\varphi_A=\pi/2$) for two canonical cases of axis-of-symmetry angles: $\theta_A=0$, and $\theta_A=\pi/2$ when the material has three different levels of anisotropy $\delta_e$. Curves are compared to the isotropic case using $k_o$.}
\label{fig:CaseA90}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\linewidth]{NewFig9.pdf}
\caption{Comparison of three isotropic functions using $k_o$, $k_e$, and $k_m$, and the anisotropic expression up to first order. Here $\delta_e\approx0.23$ and $\theta_A=\pi/2$. Same scaling factor used for all the curves. Here the fitting by $k_m$ has a broader region of validity.}
\label{fig:CaseB90}
\end{figure}
Thus, as seen, the central region of the autocorrelation function can be fitted quite well using the isotropic expression. If the anisotropy is small, a more extended range is required to observe stronger differences, both in the zero positions and in the relative magnitude of the side lobes. This explains why the isotropic theory was used successfully in the past for cornea \cite{Zvietcovich_2019}, although it is not isotropic \cite{PINSKY_2005,Singh_2016}.
To fully implement the derived anisotropic autocorrelation, for example, one must first select the geometry of sensor-correlation (which in principle can always be chosen, although in practice may be restricted) and then fit the expression using 4 parameters: $k_o$, $\delta_e$, $\theta_A$, and $\varphi_A$. One measurement grants access to three different correlation directions (ideally many more since the correlation is done in 3D and interpolation could be employed to obtain profiles at other angles) which can be used together to determine the anisotropy of the system as well as the axis orientation without any \emph{a priori} assumption of the axis-of-symmetry direction.
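A minimal sketch of such a fit, using a synthetic noiseless Case A profile with the axis angles held fixed and illustrative parameter values (not taken from any measurement):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import spherical_jn

def case_a_model(dz, k_o, de, th_A, ph_A):
    # Eq. (Case A) as a fitting model along one correlation direction
    x = k_o * dz
    j0, j1, j2 = (spherical_jn(n, x) for n in range(3))
    return (0.5 * (j0 - j1 / x)
            - (de / 4) * (np.cos(th_A)**2 * j2
                          - np.sin(th_A)**2 * np.sin(ph_A)**2
                          * (2 * j2 - j0 + np.cos(x))))

# synthetic noiseless profile with illustrative ground-truth values
dz = np.linspace(0.2e-3, 6e-3, 80)          # correlation lags in meters
y = case_a_model(dz, 2150.0, 0.33, 0.0, np.pi / 2)

# fit k_o and delta_e with the axis angles held fixed (theta_A = 0, phi_A = pi/2)
fit = lambda d, k_o, de: case_a_model(d, k_o, de, 0.0, np.pi / 2)
(k_o_hat, de_hat), _ = curve_fit(fit, dz, y, p0=[2000.0, 0.2])
```

In practice the four-parameter fit would run over several correlation directions simultaneously, with noise, so good initial guesses (e.g., from the isotropic central-lobe fit) become important.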
\section{Numerical simulations}
\subsection{Simulation setup}
Numerical simulations of a reverberant shear wave field produced by multiple shear-displacement contacts applied to the surface of a 3D solid volume were conducted using finite elements in Abaqus/CAE version 6.14-1 (Dassault Systems, Velizy-Villacoublay, France). The 30 x 30 x 30 mm 3D solid is subjected to a spatially-uniform (square shape) and temporally-harmonic (2700 Hz) displacement field at different surface locations, as shown in Figure \ref{fig:CaseAbaqus}a. Zero displacement and rotation were applied at the base of the cube. The solid was meshed with an approximate grid size of 0.1 mm using linear hexahedral dominant elements (C3D8R). The type of simulation was selected to be steady-state dynamic direct. After the simulation, a 3D complex-valued displacement field along the $x$ axis (sensor axis) is extracted, as shown in Figure \ref{fig:CaseAbaqus}b. Finally, the complex autocorrelation is evaluated in regions of interest (ROI) of 18 mm x 18 mm along the YZ plane throughout the 3D displacement volume.
\begin{figure}[!ht]
\centering
\includegraphics[width=1\linewidth]{Figure10.pdf}
\caption{Numerical simulation of a reverberant shear wave field in anisotropic media. (a) Dimensions and boundary conditions of a 3D solid subjected to multiple shear sources vibrating at 2700 Hz. (b) Displacement magnitude field (color bar in $\mu$m) measured along the $x$ axis after simulation. (c) Cases of axis-of-symmetry orientation of the material along the $z$ axis (left), and $x$ axis (right).}
\label{fig:CaseAbaqus}
\end{figure}
\subsection{Material properties}
The solid material is represented using a linear and transverse isotropic model of elasticity with a density of $\rho$ = 1000 kg/m$^{3}$ and parameters defined in Table \ref{tab:Table1}. In this model, the material properties are symmetric within the plane-of-isotropy ($p$), which is perpendicular to the axis-of-symmetry ($t$) direction (also called the direction of fibers in muscle). The compliance tensor of a transverse isotropic material can be represented with the following 7 parameters: $E_p$ and $E_t$, corresponding to the Young's moduli in the plane-of-isotropy and along the axis-of-symmetry, respectively; $G_p$ and $G_t$, corresponding to the shear moduli in the plane-of-isotropy and in a transverse plane parallel to the axis-of-symmetry, respectively; and $\nu_p$, $\nu_{pt}$ (and $\nu_{tp}$), corresponding to the Poisson's ratios in the plane-of-isotropy and two transverse planes parallel to the axis-of-symmetry, respectively. Finally, these variables can be reduced to 3 independent parameters if the material is considered incompressible (such as soft tissues) \cite{Itskov_2002}.
\begin{table}[h!]
\caption{Material parameters using the transverse isotropic model defined in Abaqus/CAE version 6.14-1. Elastography parameters are also calculated for further comparison.}
\label{tab:Table1}
\centering\includegraphics[width=1\linewidth]{Tables1.pdf}
\end{table}
In dynamic elastography, we are interested in the propagation of shear waves, leaving $G_p$ and $G_t$ as the most important parameters, since they can be related to the shear wave speeds $c_p$ and $c_t$ using $c_p=\sqrt{G_p/\rho}$ and $c_t=\sqrt{G_t/\rho}$, respectively \cite{Royer_2011}. On the other hand, in reverberant elastography \cite{Ormachea_2018, Zvietcovich_2019}, for a vibration frequency $f$, wave-numbers are typically estimated. Then, $G_p$ and $G_t$ can be related to the extraordinary $k_e$ and ordinary $k_o$ wave-numbers using $k_e=2\pi f/\sqrt{G_p/\rho}$ and $k_o=2\pi f/\sqrt{G_t/\rho}$, respectively. Calculations of these wave-numbers for $f$ = 2700 Hz, and shear wave speeds, based on the simulation parameters, are also reported in Table \ref{tab:Table1}. In reverberant OCE \cite{Zvietcovich_2019}, the sensor is usually fixed along one axis, and autocorrelations are taken along a plane perpendicular to the sensor. Then, we define the $x$ axis as the sensor direction, and the YZ plane as the autocorrelation plane. Two cases are explored: (Case 1) when the axis-of-symmetry is oriented along the $z$ axis (Figure \ref{fig:CaseAbaqus}c-left), and (Case 2) when the axis-of-symmetry is oriented along the $x$ axis (Figure \ref{fig:CaseAbaqus}c-right, in which the autocorrelation plane is also the plane-of-isotropy).
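The conversions above can be collected in a small helper; the moduli used below are illustrative placeholders, not the Table \ref{tab:Table1} values:

```python
import numpy as np

def shear_wavenumbers(G_p, G_t, rho, f):
    # k = 2*pi*f / c with c = sqrt(G/rho); ordinary <-> G_t, extraordinary <-> G_p
    c_p, c_t = np.sqrt(G_p / rho), np.sqrt(G_t / rho)
    k_e, k_o = 2 * np.pi * f / c_p, 2 * np.pi * f / c_t
    return k_o, k_e

# illustrative, muscle-like values at f = 2700 Hz
k_o, k_e = shear_wavenumbers(G_p=40e3, G_t=62e3, rho=1000.0, f=2700.0)
```

Note that a stiffer transverse shear modulus ($G_t>G_p$) makes the ordinary wave faster and hence $k_o<k_e$.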
\subsection{Results and discussion}
In Case 1, the average 2D autocorrelation calculated from ROIs along the YZ plane of the 3D displacement volume is fitted to Eq. (\ref{eq:CaseA}) ($\theta_s=\pi/2$) when $\varphi_A=\pi/2$ (Figure \ref{fig:SimFit_A}a). Here, $\theta_A$ is interpreted as the angle between the axis-of-symmetry and the correlation direction when the axis-of-symmetry is fixed to the $z$ axis. Then, when $\theta_A=0$, the correlation direction corresponds to the $z$ axis ($\Delta z$), and when $\theta_A=\pi/2$, the correlation direction corresponds to the $y$ axis ($\Delta y$). An elliptical shape in the plot is clearly observed in Figure \ref{fig:SimFit_A}a indicating that the anisotropic properties of the material are different parallel ($\Delta z$) and perpendicular ($\Delta y$) to the axis-of-symmetry. The major and minor axes of the ellipse corresponding to the $\Delta z$ and $\Delta y$ autocorrelation axes, respectively, are shown with Eq. (\ref{eq:CaseA}) ($\theta_s=\pi/2$) curve fittings in Figure \ref{fig:SimFit_A}b. Fitting parameters $k_o$ and $\delta_e$ are shown and compared against simulation ground truth parameters in Table \ref{tab:Table2}.
\begin{figure}[!ht]
\centering
\includegraphics[width=1\linewidth]{Figure11.pdf}
\caption{Fitting of Eq. (\ref{eq:CaseA}) with simulation results in Case 1. (a) 2D average autocorrelation along the YZ plane, obtained from the simulated 3D displacement volume, is fitted to Eq. (\ref{eq:CaseA}) for $\theta_s=\pi/2$ and $\varphi_A=\pi/2$ (discontinuous red line representing the zeros of Eq. (\ref{eq:CaseA})). Colorbar represents normalized autocorrelation in arbitrary units. (b) Major and minor axes of the ellipse corresponding to $\Delta z$ and $\Delta y$ autocorrelation axes, respectively, are compared against simulation results. Fitting parameters $k_o$ = 2147.7 rad/m and $\delta_e$ = 0.334 were estimated providing a close match to the ground truth.}
\label{fig:SimFit_A}
\end{figure}
\begin{table}[h!]
\caption{Estimated ordinary and extraordinary wave-numbers based on the fitting parameters $k_o$, and $\delta_e$ in Case 1 and 2. Average parameters are compared against ground truth parameters set in the simulation (Table \ref{tab:Table1}).}
\label{tab:Table2}
\centering\includegraphics[width=1\linewidth]{Tables2.pdf}
\end{table}
Similarly, in Case 2, the average 2D autocorrelation is taken along the YZ plane when the axis-of-symmetry is oriented along the $x$ axis and fitted to Eq. (\ref{eq:CaseA}) ($\theta_s=\pi/2$) when $\varphi_A=0$ and $\theta_A=\pi/2$ (Figure \ref{fig:SimFit_A2}a). Here, the interpretation of $\theta_A$ is the same as in Section 3.3. As expected, the plot shape is circular and symmetric, since Eq. (\ref{eq:CaseA}) in this case is the same for any correlation direction perpendicular to the sensor and axis-of-symmetry directions. Autocorrelation axes along $\Delta z$ and $\Delta y$ are shown with Eq. (\ref{eq:CaseA}) ($\theta_s=\pi/2$, $\varphi_A=0$, and $\theta_A=\pi/2$) curve fittings in Figure \ref{fig:SimFit_A2}b. Fitting parameters $k_o$ and $\delta_e$ are shown and compared against simulation ground truth parameters in Table \ref{tab:Table2}.
\begin{figure}[!ht]
\centering
\includegraphics[width=1\linewidth]{Figure12.pdf}
\caption{Fitting of Eq. (\ref{eq:CaseA}) with simulation results in Case 2. (a) 2D average autocorrelation along the YZ plane, obtained from the simulated 3D displacement volume, is fitted to Eq. (\ref{eq:CaseA}) for $\theta_s=\pi/2$, $\varphi_A=0$, and $\theta_A=\pi/2$ (discontinuous red line representing the zeros of Eq. (\ref{eq:CaseA})). Colorbar represents normalized autocorrelation in arbitrary units. (b) Autocorrelation axes $\Delta z$ and $\Delta y$ are compared against simulation results. Fitting parameters $k_o$ = 2091.2 rad/m and $\delta_e$ = 0.320 were estimated, providing a good assessment of the material properties used in the simulation.}
\label{fig:SimFit_A2}
\end{figure}
Estimations of $k_o$ and $\delta_e$ are used in Eq. (\ref{eq:Delta_e}) for the calculation of $k_e$ in each case as reported in Table \ref{tab:Table2}. Average estimations are compared against ground truth parameters set in the simulation (Table \ref{tab:Table1}). We found a maximum accuracy error of 3.54\% and a minimum of 0.06\%, validating the effectiveness of the anisotropic derivation in reverberant shear wave fields. This has important implications in the elastography of transverse isotropic elastic tissues: (1) the axis-of-symmetry of tissues (for example the fiber direction in muscle) can be estimated by finding the major axis of the elliptical plot of Eq. (\ref{eq:CaseA}) in Case 1; (2) the complete characterization of shear moduli in every direction ($G_p$, and $G_t$) can be estimated based on $k_o$ and $\delta_e$ provided by Eq. (\ref{eq:CaseA}) in Cases 1 and 2; and (3) more complex situations in which the axis-of-symmetry of the tissue is not parallel to one of the axes can be fully characterized by building libraries of cases using Equations (\ref{eq:CaseA}) and (\ref{eq:CaseB}) and machine learning tools.
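The inversion of Eq. (\ref{eq:Delta_e}) used in this step is $k_e=k_o/\sqrt{1-\delta_e}$; a minimal sketch using the Case 1 fitted values:

```python
import numpy as np

def k_extraordinary(k_o, delta_e):
    # invert Eq. (Delta_e): delta_e = (k_e^2 - k_o^2)/k_e^2  =>  k_e = k_o/sqrt(1 - delta_e)
    return k_o / np.sqrt(1.0 - delta_e)

# fitted values reported for Case 1
k_e = k_extraordinary(2147.7, 0.334)
```

Since $0<\delta_e<1$ for the simulated material, the recovered $k_e$ is always larger than $k_o$.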
\section{Reverberant OCE experiments}
\subsection{Sample preparation}
Using a surgical scalpel, three ($n$ = 3) cubical samples (2 x 2 x 2 cm) were dissected from a fresh roaster chicken tibialis anterior muscle. Each cube was sectioned so that the fiber orientation of the muscle was parallel to one of the axes of the cube. The epithelium was removed from all sides of each cubic sample, since OCE measurements are usually constrained to the surface of the sample. During experiments, the side of the cubical sample with all fibers oriented along one of the cube axes was measured (Figure \ref{fig:ExpSetup}a). The muscle was not subjected to any external force in order to prevent a passive muscle resistance effect.
\begin{figure}[!ht]
\centering
\includegraphics[width=1\linewidth]{Figure13.pdf}
\caption{Experimental opto-mechanical setup for the generation and measurements of reverberant shear wave fields in chicken muscle tissue. (a) Orientation of the chicken muscle sample with respect to the OCT scanning probe. Average orientation of fibers was aligned along the $z$ axis, while the motion measurement (sensor) was oriented along the $x$ axis (depth). (b) Phase-sensitive OCT system based on a swept source laser. A 2 kHz mechanical excitation was generated in the sample using a 3D printed pronged ring allowing for motion measurement along the $yz$-plane within the ROI (9 mm x 9 mm). }
\label{fig:ExpSetup}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=\linewidth]{Figure14.pdf}
\caption{Experimental reverberant OCE results in chicken muscle. (a) 3D structural OCT volume of one of the muscle samples. (b) Structural \emph{en face} OCT image of the muscle along the $yz$-plane. Color map represents normalized intensity. (c) Motion snapshot of a 2 kHz reverberant field measured at the surface of the muscle sample at the $t_0$ = 2 ms instant. Color bar represents normalized particle velocity in arbitrary units. (d) 2D autocorrelation of the reverberant field extracted from a 6 mm x 6 mm region (white discontinuous line) in (c). Color bar represents the normalized real part of the complex autocorrelation in arbitrary units. Discontinuous red line represents the zeros of Eq. (\ref{eq:CaseA}) for $\theta_s=\pi/2$ and $\varphi_A=\pi/2$. (e) Major ($\Delta z$) and minor ($\Delta y$) autocorrelation axes of the ellipse in (d) fitted to Eq. (\ref{eq:CaseA}) for cases $\theta_A=0$ and $\theta_A=\pi/2$, respectively. Fitting parameters $k_o$ = 2512.3 rad/m and $\delta_e$ = 0.42 were estimated for muscle sample 1. Fitting quality: $r^2$ = 0.962.}
\label{fig:ExpResults}
\end{figure}
\subsection{Experimental setup and processing scheme}
The experimental setup consists of a phase-sensitive optical coherence tomography (PhS-OCT) system implemented with a swept source laser (HSL-2100-WR, Santec, Aichi, Japan) with a center wavelength of 1318 nm and a bandwidth of 125 nm (Figure \ref{fig:ExpSetup}b). The frequency sweep rate of the light source was 20 kHz, and the optical resolution was measured to be 30 $\mu$m laterally and 10 $\mu$m axially. The system was used to acquire 3D motion frames of the chicken samples within a ROI of 9 x 9 mm in the $yz$-plane. The mechanical excitation system begins with a function generator (AFG320, Tektronix, Beaverton, OR, USA) whose output signal is connected to an ultra-low noise power amplifier (PDu150, PiezoDrive, Callaghan, NSW, Australia) feeding a piezoelectric bender poled in a parallel configuration with 10 x 45 mm surface dimensions (BA4510, PiezoDrive, Callaghan, NSW, Australia). A 3D printed pronged ring containing eight vertical, equidistant, circularly distributed rods is attached to one of the ends of the piezoelectric bender (Figure \ref{fig:ExpSetup}b). The rods lightly touch the sample surface in a concentric configuration and produce a reverberant field when the piezoelectric bender is excited at 2 kHz. The ring shape allows imaging of the sample with the OCT system, while the rods introduce the mechanical excitation. Reverberant particle velocity (motion) fields along the $x$ axis (sensor axis) were analyzed in the $yz$-plane in order to calculate complex 2D autocorrelations for further fitting with Eq. (\ref{eq:CaseA}). Anisotropic properties of the $n = 3$ chicken muscle samples were characterized by estimating the parameters $k_o$ and $\delta_e$, as done in Section 4.3 for the simulated case.
\subsection{Results and discussion}
Figure \ref{fig:ExpResults}a shows the 3D structural OCT volume of one of the chicken samples. The average direction of the muscle fibers is aligned toward the $z$ axis as shown in the \emph{en face} structural image of Figure \ref{fig:ExpResults}b taken along the $yz$-plane. A motion snapshot (normalized particle velocity in arbitrary units) of the 2 kHz reverberant field produced in the chicken sample is shown in Figure \ref{fig:ExpResults}c. Here, a 6 x 6 mm region was selected for the calculation of the 2D autocorrelation (normalized units) and fitted to Eq. (\ref{eq:CaseA}) ($\theta_s=\pi/2$) when $\varphi_A=\pi/2$ (Figure \ref{fig:ExpResults}d). An elliptical shape in Figure \ref{fig:ExpResults}d highlights the anisotropic properties of muscle tissue when comparing autocorrelation plots parallel ($\Delta z$) and perpendicular ($\Delta y$) to the $z$ axis. The major and minor axes of the ellipse corresponding to $\Delta z$ and $\Delta y$ autocorrelation axes, respectively, are fitted to Eq. (\ref{eq:CaseA}) in Figure \ref{fig:ExpResults}e. Fitting parameters $k_o$ and $\delta_e$ are estimated and shown for all samples in Table \ref{tab:Table3}.
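The autocorrelation step just described can be sketched numerically. The snippet below is a minimal illustration (not the actual analysis code) of computing a normalized 2D autocorrelation of a motion snapshot via the Wiener--Khinchin theorem and extracting the $\Delta z$ and $\Delta y$ profiles that are fitted to Eq. (\ref{eq:CaseA}); the synthetic plane-wave field, grid size, and wavenumber value are illustrative assumptions.

```python
import numpy as np

def autocorr2d(field):
    """Normalized 2-D autocorrelation of a motion frame via the
    Wiener-Khinchin theorem (inverse FFT of the power spectrum)."""
    F = np.fft.fft2(field)
    ac = np.fft.ifft2(np.abs(F) ** 2)
    ac = np.fft.fftshift(ac)          # zero lag moved to the array center
    return ac / np.abs(ac).max()

# Toy reverberant-like field: a sum of random plane waves on a 6 x 6 mm grid
rng = np.random.default_rng(0)
n, pitch = 128, 6e-3 / 128            # grid points, spacing in meters
Z, Y = np.meshgrid(np.arange(n) * pitch, np.arange(n) * pitch)
k = 2512.3                            # wavenumber [rad/m], value from Table 3
field = sum(np.exp(1j * k * (np.cos(a) * Z + np.sin(a) * Y))
            for a in rng.uniform(0, 2 * np.pi, 200))
ac = autocorr2d(field.real)
# Profiles through the zero-lag point, parallel (dz) and perpendicular (dy)
# to the fiber axis -- these are the curves fitted to Eq. (CaseA).
prof_dz = ac[n // 2, :].real
prof_dy = ac[:, n // 2].real
```

In the experiments, the measured particle-velocity frame plays the role of `field`, and the two extracted profiles correspond to the major and minor autocorrelation axes in Figure \ref{fig:ExpResults}e.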
\begin{table}[h!]
\caption{Estimated shear moduli in the plane-of-isotropy (XY-plane) $G_p$ and in the transverse plane parallel to the axis-of-symmetry ($z$ axis) $G_t$, based on the fitting parameters $k_o$, and $\delta_e$ in $n = 3$ chicken muscle samples. Shear wave speed was also calculated along the same directions for further comparison. SE: standard error.}
\label{tab:Table3}
\centering\includegraphics[width=1\linewidth]{Tables3.pdf}
\end{table}
As explained in Section 4.2, for a transverse isotropic medium, the shear moduli in the plane-of-isotropy $G_p$ and in the transverse plane parallel to the axis-of-symmetry (direction of the fibers) $G_t$ can be calculated from the shear wave speeds $c_p$ and $c_t$, respectively, using the $k_o$ and $\delta_e$ parameters. Table \ref{tab:Table3} shows $c_p$, $c_t$, $G_p$, and $G_t$ for all chicken samples, indicating a marked anisotropy in agreement with other studies \cite{Zvietcovich_2020,Koo_2013}. The fitting quality of Eq. (\ref{eq:CaseA}) to autocorrelation plots tends to degrade as sample points move further away from the center of the autocorrelation (Figure \ref{fig:ExpResults}e). This is because Eq. (\ref{eq:CaseA}) comes from a theoretical formulation of the autocorrelation that assumes an infinite-space field (numerous spatial waves within a region). In practice, due to the attenuation of waves in tissues, only a limited number of cycles can be captured within a ROI, as shown in Figure \ref{fig:ExpResults}c, constraining the effectiveness of the fitting of Eq. (\ref{eq:CaseA}) to the center of the autocorrelation map in Figure \ref{fig:ExpResults}d. Finally, this study demonstrates that reverberant elastography can be used in practical cases for the characterization of transverse isotropic tissues such as muscle. Future work will focus on extending this method to other anisotropic tissues such as cornea and brain.
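For orientation, the conversion from a fitted wavenumber to a speed and a modulus follows $c=\omega/k$ and $G=\rho c^2$. The mapping from $(k_o, \delta_e)$ to the two principal wavenumbers used below is a hypothetical simplification for illustration only (the actual relations follow from the derivation of Eq. (\ref{eq:CaseA})), and the density is a typical soft-tissue assumption.

```python
import numpy as np

f = 2000.0      # excitation frequency [Hz]
rho = 1000.0    # assumed tissue density [kg/m^3], typical soft-tissue value
k_o = 2512.3    # fitted wavenumber [rad/m] (muscle sample 1)
delta_e = 0.42  # fitted anisotropy parameter (muscle sample 1)

omega = 2 * np.pi * f
# Hypothetical mapping: the wavenumber in the plane-of-isotropy equals k_o,
# and the wavenumber along the fibers is reduced by the factor (1 - delta_e).
k_p = k_o
k_t = k_o * (1.0 - delta_e)
c_p, c_t = omega / k_p, omega / k_t       # phase speeds [m/s], c = omega / k
G_p, G_t = rho * c_p**2, rho * c_t**2     # shear moduli [Pa], G = rho * c^2
```

With these sample-1 values the sketch gives $c_p \approx 5$ m/s and a several-fold larger $G_t$ than $G_p$, i.e., the tissue is stiffer along the fibers, which is the qualitative behavior reported in Table \ref{tab:Table3}.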
\section{Conclusion}
The major concepts from electromagnetic fields in anisotropic media are reviewed and found to be helpful in deriving closed-form solutions to the problem of reverberant elastography in anisotropic media. We found Equations (\ref{eq:CaseA}) and (\ref{eq:CaseB}) describing the complex autocorrelation of reverberant fields in materials exhibiting a transverse isotropic model of elasticity for variable directions of: (1) the material's axis-of-symmetry, (2) the direction of motion measurement (sensor), and (3) the complex autocorrelation. Results were validated with numerical simulations using finite elements, achieving accuracy within 4\%. Moreover, Equation (\ref{eq:CaseA}) was used for the anisotropic characterization of chicken tibialis anterior muscle in OCE experiments, demonstrating its use in the non-destructive elastography of tissues. Finally, we developed a general solution for the isotropic model in Eq. (\ref{eq:isotropic_result}) consistent with previously reported results for particular configurations. Limitations of this work include the assumption of small anisotropic ratios and the consequent simplification of terms within the complex autocorrelation function. Future work will focus on the application of this approach to the elastography of other well-known anisotropic tissues such as cornea and brain.
\section*{Acknowledgment}
The authors would like to thank Prof. Miguel Alonso for his perspective. L. A. Alem{\'a}n-Casta{\~n}eda is supported by a CONACyT Doctoral Fellowship, and F. Zvietcovich was supported by the Fondo para la
Innovacion, la Ciencia y la Tecnologia FINCyT--Peru (097-FINCyT-BDE-2014).
\bibliographystyle{ieeetr}
\section{Introduction}
The hard X-ray structure of the Galactic Center region is quite unusual,
as are the structures seen at other wavelengths.
In particular, thin hot thermal plasma
pervades the entire region (\cite{Koyama89}; \cite{Yamauchi90};
\cite{Koyama96}; \cite{Maeda98}).
It has several surprising properties. First, the temperature is quite high,
about 10~keV.
Second, the total energy is also large, about $10^{54}$ erg.
Third, the plasma is spread over a region larger than 1\degr$\times$1\degr,
which corresponds to about 150~pc$\times$150~pc at the distance of
the Galactic Center.
Fourth, the spectrum of the plasma is surprisingly uniform from field
to field, except for slight changes in the surface brightness.
The origin of the plasma is still an enigma.
Supernova remnants (SNRs) are one possible candidate for the origin
of the hot plasma (e.g., \cite{Koyama86}), along with other candidates:
the past activity of the central massive black hole
(e.g., \cite{Koyama96}), many unresolved cataclysmic
variables (e.g., \cite{Mukai93}), and
global magnetic activity which heats the interstellar matter
(e.g., \cite{Yokoyama98}).
If SNRs explain all the hot plasma, then, first, the total energy
($\sim 10^{54}$ erg) requires about 10$^3$ supernovae in a narrow region
a few hundred pc across within the last 5\,10$^4$ years,
which is the age of the plasma (\cite{Koyama96}).
This would imply a supernova rate much higher than the Galaxy-wide
average of a few supernovae per century. However,
possible starburst activity in the Galactic Center region has
often been discussed.
In particular, recent {\it COMPTEL} observations
of the \element[][26]{Al} 1.8~MeV line suggest that 10$^5$ supernovae occurred
in the last 4\,10$^6$ years (\cite{Chen95}; \cite{Hartmann95}),
which is consistent with the required supernova rate.
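The rate comparison is one line of arithmetic; both estimates correspond to roughly two supernovae per century concentrated in the central few hundred pc, i.e., comparable to the rate of the entire Galaxy (all input values are taken from the text above):

```python
# Sanity check of the supernova rates quoted above.
n_required, t_plasma = 1e3, 5e4        # ~10^3 SNe within the plasma age [yr]
n_comptel, t_comptel = 1e5, 4e6        # COMPTEL 26Al estimate [SNe, yr]
rate_required = n_required / t_plasma  # SNe per year in the central region
rate_comptel = n_comptel / t_comptel   # SNe per year from the 26Al estimate
print(rate_required * 100, rate_comptel * 100)  # ~2 and ~2.5 SNe per century
```

Since the Galaxy-wide average is itself only a few supernovae per century, packing a comparable rate into a region a few hundred pc across is what makes the required rate locally extreme.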
Second, the supernova origin hypothesis
requires a mean temperature of about 10 keV.
This is much higher than the usual temperature of SNRs
($kT\leq$ a few keV). However, if SNRs exist densely
in the region, the plasma may be heated up to about 10 keV
by the mutual interaction of the shocks.
Thus, SNRs remain one of the candidates for the origin
of the Galactic Center plasma.
In this paper we concentrate on the X-ray properties of the SNRs
in the Galactic Center region from the observational point of view.
The {{\it ASCA}} capability of imaging spectroscopy in the 0.5--10 keV band (\cite{Tanaka94})
is quite suitable for this study. In fact, owing to
the hard X-ray sensitivity of {{\it ASCA}}, particularly above 2~keV,
we can possibly detect new SNRs which have been hidden
by the heavy interstellar absorption, and hence obtain information
on whether SNRs can account for the origin of
the Galactic Center plasma.
Using all the available {{\it ASCA}} pointing observations in this region
($-$1\degr$<l<$1\degr, $-$1\degr$<b<$1\degr)
and a part of the data of the ongoing {{\it ASCA}} Galactic Center survey,
we studied the X-ray emission from the known SNRs
(\cite{Green98}) and searched for new SNRs.
We assume the distance to the Galactic Center to be 8.5 kpc.
\section{Results on the cataloged SNRs}
We searched for X-ray emission from the cataloged SNRs (\cite{Green98}),
and then investigated the nature of each source from which significant X-ray emission was detected.
\subsection{G359.1$-$0.5}
\begin{figure}
\centerline{\psfig{file=fig_1.ps,width=8.8cm,clip=} }
\vspace*{0.5cm}
\centerline{\psfig{file=G359_05_spec_p.ps,width=8.8cm,clip=} }
\caption{
(Upper) GIS contour map with 1.6--2.1 keV band superposed on a schematic
diagram of the radio structures; the radio shell of G359.1$-$0.5
and the radio non-thermal filament, the Snake.
Contour levels are linearly
spaced and are saturated for A1742$-$294 and 1E~1740.7$-$2942.
The accumulated regions for the spectra are also noted.
(Lower) The background-subtracted GIS spectrum.
We also show the best-fit model where we fit the spectrum with
the model of the thermal bremsstrahlung and two narrow Gaussians
with interstellar absorption.
These two figures are adopted from Yokogawa {et al.} (1999).
\label{fig:359.1-0.5}}
\end{figure}
Fig.~\ref{fig:359.1-0.5} (upper panel) shows the {{\it ASCA}} GIS image of
G359.1$-$0.5 in the 1.6--2.1 keV band.
We detected center-filled X-rays from this source, whereas
the radio image shows a clear shell-like structure (e.g., \cite{Uchida92}).
The spectrum is found to have emission lines from highly ionized ions
(Fig.~\ref{fig:359.1-0.5} (lower panel));
hence the X-rays come from a thin thermal plasma. The two most distinct lines
are the K$\alpha$ lines from helium-like silicon and hydrogen-like sulfur.
They imply that the plasma has a multi-temperature structure.
The absorption column density is estimated at $N_\mathrm{H}$$\sim 8\,10^{22}$H~cm$^{-2}$,
suggesting that G359.1$-$0.5 is located near the Galactic Center,
which is consistent with the radio observations (\cite{Uchida92}).
This column density is larger by a factor of 2 or 3 than
the {{\it ROSAT}} measurement (\cite{Egger98}).
However, we are still confident of our estimate because the {{\it ASCA}}
energy band is the most suitable for measuring such heavy absorption.
Details of the analysis are given in Yokogawa {et al.} (1999).
\subsection{G0.9$+$0.1}
\begin{figure}
\centerline{\psfig{file=G0.9+0.1_img.eps,width=8.8cm,clip=} }
\vspace*{-0.5cm}
\centerline{\psfig{file=G0.9+0.1_spec_p.eps,width=8.8cm,clip=} }
\caption{
(Upper) GIS2$+$GIS3 contour map of G0.9$+$0.1 with 3--10 keV band.
The image is smoothed with a Gaussian filter of $\sigma=0.75$\arcmin,
and corrected for exposure, vignetting and the GIS grid structure
after subtraction of non-X-ray background (NXB).
Contour levels are linearly spaced.
The coordinate is in galactic ($l_{\mathrm{II}}$, $b_{\mathrm{II}}$),
and the north is up.
(Lower) The same as Fig.~\ref{fig:359.1-0.5} lower panel, but of G0.9$+$0.1.
The fitting model is an absorbed power-law.
\label{fig:img:0.9+0.1}}
\end{figure}
Fig.~\ref{fig:img:0.9+0.1} (upper panel) shows the GIS image of G0.9$+$0.1
in the 3--10 keV band. We detected significant X-ray emission
from this source in this hard energy band, whereas no significant X-rays
are detected in the softer energy band.
The X-ray emitting region is compact
and not resolved with the {{\it ASCA}} GIS. The radio size of the source
is 2{\arcmin} in diameter (\cite{Helfand87}), hence consistent
with the X-ray image.
The spectrum is found to be hard: $kT>$2 keV (best fit 40 keV)
for a thermal model, or $\Gamma\sim$1.5 for a power-law model,
with heavy absorption of $N_\mathrm{H}$$\sim 10^{23}$H~cm$^{-2}$,
although the statistics are not good. This hardness may imply
that the emission is of non-thermal origin.
The flux is 2\,10$^{-12}$ {erg~s$^{-1}$~cm$^{-2}$} in 2--10 keV band.
It is consistent with the upper limit by {{\it Einstein}} (\cite{Helfand87}),
but significantly lower than the {{\it SAX}} result (\cite{Mereghetti98}).
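The behavior described above — detection only above a few keV for a hard, heavily absorbed spectrum — can be sketched numerically. The cross-section below is a crude $E^{-3}$ order-of-magnitude stand-in, not a real photoelectric absorption model (such as Morrison and McCammon), and the normalization is arbitrary.

```python
import numpy as np

def absorbed_powerlaw(E, K, gamma, NH):
    """Photon spectrum K * E^-gamma attenuated by photoelectric absorption.
    sigma(E) ~ 2e-22 * E^-3 cm^2 per H atom is only a rough stand-in for a
    real cross-section model; K is an arbitrary normalization."""
    sigma = 2e-22 * E ** -3.0          # cm^2 per hydrogen atom (illustrative)
    return K * E ** -gamma * np.exp(-NH * sigma)

E = np.linspace(0.7, 10.0, 200)                      # energy grid [keV]
f = absorbed_powerlaw(E, K=1.0, gamma=1.5, NH=1e23)  # values quoted above
E_peak = E[np.argmax(f)]  # absorption suppresses the soft band entirely
```

With $N_\mathrm{H}\sim 10^{23}$ H~cm$^{-2}$ the optical depth at 1 keV is tens, so essentially no soft photons survive and the observed spectrum peaks at a few keV, consistent with the detection only in the hard band.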
\subsection{G359.1$+$0.9}
\begin{figure}
\centerline{\psfig{file=G359p0.9_im.eps,width=8.8cm,clip=} }
\centerline{\psfig{file=G359p0.9_rprof.eps,width=8.8cm,clip=} }
\caption{
(Upper) The same as Fig.~\ref{fig:img:0.9+0.1} upper panel, but of G359.1$+$0.9
with 0.7--3.0 keV band.
The positions of the detected X-ray sources
(AX~J1738.4$-$2903, AX~J1739.3$-$2924, AX~J1739.6$-$2911, AX~J1740.3$-$2904),
the radio shell of G359.1$+$0.9,
and the radio pulsar PSR B1736$-$29 are indicated.
(Lower) The radial profile with the center at the peak of AX~J1739.6$-$2911.
The profile is fitted with the point spread function (see text).
The best-fit model is given with the dashed line, whereas the dotted line
shows only the background component in the best-fit model.
\label{fig:img:359.1+0.9}}
\end{figure}
Fig.~\ref{fig:img:359.1+0.9} (upper panel) shows the GIS image
around G359.1$+$0.9. X-ray emission from the position corresponding
to G359.1$+$0.9 is detected with the significance of 9.8$\sigma$
in 0.7--3 keV band (AX~J1739.6$-$2911 in Fig.~\ref{fig:img:359.1+0.9}),
whereas the significance in 3--10 keV band is 2.1$\sigma$.
Faint extended emission around AX~J1739.6$-$2911 also can be seen
in the 0.7--3 keV band image.
We made the radial profile with the center at the peak of AX~J1739.6$-$2911
and fitted it with the model of the point spread function (PSF) and
the background (NXB$+$CXB(cosmic X-ray background)) where the normalizations
of the PSF and the background were allowed to be free.
The model is found to be rejected at the 97.4\% confidence level;
\emph{i.e.}, slightly extended emission with a radius of
4{\arcmin}$\sim$5{\arcmin} was marginally detected at a significance of
2.2$\sigma$ (Fig.~\ref{fig:img:359.1+0.9} lower panel).
The size is consistent with the radius of the radio shell, $r\sim$5{\arcmin}.
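A minimal numerical sketch of this type of test (a point-spread function plus a flat background, with both normalizations free, compared to the data by $\chi^2$) is given below; the Gaussian PSF shape, its width, and all synthetic numbers are illustrative assumptions, not the actual GIS response or data.

```python
import numpy as np

SIGMA = 0.8  # assumed PSF width [arcmin]; a crude Gaussian stand-in

def model(r, a, b):
    """Point-source radial profile: PSF plus a flat background level."""
    return a * np.exp(-r**2 / (2 * SIGMA**2)) + b

# Synthetic observed profile: point source + background + an extended halo
r = np.linspace(0, 8, 17)                       # radius bins [arcmin]
obs = model(r, 10.0, 1.0) + 2.0 * np.exp(-r / 4.0)
err = 0.3 * np.ones_like(r)                     # per-bin uncertainties

# Weighted linear least squares for the two free normalizations (a, b)
A = np.column_stack([np.exp(-r**2 / (2 * SIGMA**2)), np.ones_like(r)])
coef, *_ = np.linalg.lstsq(A / err[:, None], obs / err, rcond=None)
chi2 = np.sum(((obs - A @ coef) / err) ** 2)    # dof = len(r) - 2
```

When the data contain a genuinely extended component, the best-fit point-source-plus-background model leaves systematic residuals and an inflated $\chi^2$, which is the basis for rejecting the point-source hypothesis.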
The spectrum of the central region of AX~J1739.6$-$2911
is found to be well fitted with a thin thermal plasma model
with $kT\sim$0.7 keV ($\chi^2$/d.o.f.$=$2.67/5) and $N_\mathrm{H}$$\sim 0$,
although the statistics are not good.
The flux is 3\,10$^{-13}$ {erg~s$^{-1}$~cm$^{-2}$} in the total energy band,
which converts to a luminosity of 2\,10$^{33}$ {erg~s$^{-1}$} if we assume
a distance of 8.5 kpc. Note that the soft spectrum with no absorption
may imply that this source is not located in the Galactic Center region
but is a foreground source; in that case the above luminosity estimate
would be reduced by a factor of several or more.
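The flux-to-luminosity conversion here is the standard isotropic relation $L = 4\pi d^2 F$; as a quick check with the numbers quoted above:

```python
import math

kpc_cm = 3.086e21                    # 1 kpc in cm
flux = 3e-13                         # erg s^-1 cm^-2, total band
d = 8.5 * kpc_cm                     # assumed Galactic Center distance
L = 4.0 * math.pi * d**2 * flux      # isotropic luminosity [erg/s]
L_foreground = L * (1.0 / 8.5) ** 2  # same flux seen from a 1 kpc source
```

This reproduces the $\sim$2\,10$^{33}$ {erg~s$^{-1}$} quoted in the text, and shows that a hypothetical 1 kpc foreground location would lower the luminosity by nearly two orders of magnitude.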
The extended emission, positionally coincident with the radio structure,
strongly supports the interpretation that it is the X-ray counterpart
of the SNR, if the detection of the extended emission is real.
The soft spectrum also supports it.
The luminosity may be rather low for an SNR. However, the {{\it ASCA}}
Galactic Plane survey has failed to detect many radio SNRs, suggesting
that their X-ray luminosities are quite low (\cite{Yamauchi99}). Therefore,
the low X-ray luminosity of this source may be acceptable for an SNR.
Future observations with higher sensitivity are encouraged.
\subsection{The other cataloged SNRs}
For the other cataloged SNRs (G359.0$-$0.9, Sgr A East (G0.0$+$0.0),
G0.3$+$0.0, Sgr D SNR (G1.0$-$0.1)), we detected no significant X-ray emission
associated with the SNRs. This may simply be because no strong X-rays are
actually emitted. However, we should be cautious in claiming non-detections,
because the positions of these SNRs are heavily contaminated by nearby
bright X-ray sources:
SLX~1744$-$299/300 near G359.0$-$0.9,
Sgr~A and AX~J1745.6$-$2901 near Sgr~A East,
1E~1743.1$-$2843 near G0.3$+$0.0, and GX3$+$1 near G1.0$-$0.1.
\section{Results on the new candidates of SNRs}
We discovered two new SNR candidates.
In this section, we report preliminary results on them.
\subsection{G0.0$-$1.3 (AX~J1751$-$29.6)}
\begin{figure}
\centerline{\psfig{file=g00_13_img.eps,width=8.8cm,clip=} }
\vspace*{-0.5cm}
\centerline{\psfig{file=g00_13_spec_p.eps,width=8.8cm,clip=} }
\caption{
(Upper) The same as Fig.~\ref{fig:img:0.9+0.1} upper panel, but of G0.0$-$1.3
(AX~J1751$-$29.6) with 0.7--3.0 keV band.
(Lower) The same as Fig.~\ref{fig:359.1-0.5} lower panel, but of G0.0$-$1.3.
For the background spectrum, we accumulated the photons from the elliptical
region surrounding the source, excluding the source region,
in the same GIS field of view.
The fitting model is the thin thermal plasma model with absorption.
\label{fig:0.0-1.3}}
\end{figure}
Fig.~\ref{fig:0.0-1.3} (upper panel) shows the GIS image of G0.0$-$1.3
(AX~J1751$-$29.6) in the 0.7--3 keV band. We discovered clearly extended
emission on a scale of about 40\arcmin$\times$10\arcmin.
The spectrum is found to have emission lines from highly ionized ions,
and hence is of thermal origin (Fig.~\ref{fig:0.0-1.3} lower panel).
In fact, the spectrum is well fitted with a thin thermal plasma model
with $kT=0.5\pm 0.08$ keV and $N_\mathrm{H}$$= (1.3\pm 0.2)$\,10$^{22}$ {H~cm$^{-2}$}.
The X-ray flux is $\sim 10^{-11}$ {erg~s$^{-1}$~cm$^{-2}$} in the 0.5--3 keV band.
The column density suggests that this source may lie in front of
the Galactic Center region according to Sakano {et al.} (1999),
at a distance of about 4~kpc if we assume
a mean interstellar density of 1 H~cm$^{-3}$.
The obtained flux then converts to a luminosity
of 3\,$10^{35}$ {erg~s$^{-1}$} under the assumption of a distance of 4~kpc.
Even in the case of a distance as small as 1~kpc, the luminosity
is larger than $10^{34}$ {erg~s$^{-1}$}.
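The distance estimate follows from dividing the fitted column density by the assumed mean density, $d \approx N_\mathrm{H}/n_\mathrm{H}$:

```python
N_H = 1.3e22                 # best-fit column density [H cm^-2]
n_H = 1.0                    # assumed mean interstellar density [H cm^-3]
kpc_cm = 3.086e21            # 1 kpc in cm
d_kpc = N_H / n_H / kpc_cm   # path length that accumulates the column
```

This gives $d \approx 4.2$ kpc, i.e., the "about 4 kpc" quoted above; the estimate scales inversely with the assumed mean density.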
We now consider the classification of the source.
The clearly extended emission of thin thermal plasma with $kT\sim$0.5 keV
implies that this source is an SNR or a star-forming region.
The luminosity is also within the typical range of SNRs but much higher
than that of a star-forming region.
Therefore, AX~J1751$-$29.6 is a strong candidate for a new SNR.
\subsection{G0.56$-$0.01 (AX~J1747.0$-$2828)}
\begin{figure}
\centerline{\psfig{file=G0.5_feline.eps,width=8.8cm,clip=} }
\vspace*{-0.5cm}
\centerline{\psfig{file=G056_001_spec_ray.eps,width=8.8cm,clip=} }
\caption{
(Upper) The same as Fig.~\ref{fig:img:0.9+0.1} upper panel,
but of G0.56$-$0.01 (AX J1747.0$-$2828) with 6.0--7.0 keV band,
which is dominated by iron K$\alpha$ line.
Note that the image was corrected only for exposure, NXB was not subtracted,
and the region around a bright source 1E~1743.1$-$2843 was excluded
before the smoothing in order to reduce the contamination from 1E~1743.1$-$2843.
The positions of the X-ray reflection nebula Sgr~B2 cloud
(e.g., \cite{Murakami99}) and 1E~1743.1$-$2843
are also indicated.
(Lower) The same as Fig.~\ref{fig:359.1-0.5} lower panel, but of G0.56$-$0.01.
The fitting model is the thin thermal plasma model
and 6.4-keV narrow line, both with interstellar absorption.
\label{fig:G0.56-0.01}}
\end{figure}
Fig.~\ref{fig:G0.56-0.01} (upper panel) shows the GIS image of G0.56$-$0.01
(AX~J1747.0$-$2828) in the 6--7 keV band, in which this source is seen
most clearly. The X-ray emitting region is compact
and not resolved with the GIS.
The background subtracted spectrum is given in Fig.~\ref{fig:G0.56-0.01}
(lower panel).
We accumulated the source X-ray photons from the $3'$-radius
circular region around AX~J1747.0$-$2828, and the background photons,
from an elliptical region with the major
axis parallel to the Galactic Plane, excluding the $3'$-radius circular
regions around AX~J1747.0$-$2828 itself and Sgr B2
(see Murakami {et al.} (1999)).
The spectrum is found to be characterized by a quite strong line
between 6 and 7 keV, which is also implied by the X-ray image.
We fitted the spectrum with thermal
bremsstrahlung plus a Gaussian line, and found the equivalent width
of the line to be quite large, $\sim$2 keV, and the center energy of
the Gaussian to be 6.63$\pm$0.06 keV, consistent with the K$\alpha$
line from helium-like iron. Hence, the spectrum is definitely
of thin thermal origin, with a high temperature of several keV or higher.
We then fitted the spectrum with a thin thermal plasma model.
The model well represents the total spectral shape. The best-fit
temperature is $kT=6.0^{+1.9}_{-1.5}$ keV, the abundance, $Z>2$ solar,
and the hydrogen column density, $N_\mathrm{H}$$=(6.1^{+1.6}_{-1.1})\,10^{22}$ {H~cm$^{-2}$}.
The flux is 1.6\,10$^{-12}$ {erg~s$^{-1}$~cm$^{-2}$} in the 0.7--10 keV band.
The large column density suggests that this source is located
near the Galactic Center.
Thus, the absorption-corrected X-ray luminosity is estimated
at $\sim$3.6\,10$^{34}$ {erg~s$^{-1}$} under the assumption of
a distance of 8.5 kpc.
This high temperature and overabundance suggest that AX~J1747.0$-$2828 is
a possible new candidate for a young SNR.
The luminosity is also within the range typical of SNRs.
Although a supernova remnant is the most probable interpretation,
other possibilities, such as a cataclysmic variable,
cannot be excluded (e.g., \cite{Terada99}).
In any case, it is a new class of object in the Galactic Center region
and a good candidate for the origin of the Galactic Center plasma.
\section{Discussion}
SNRs are one of the possible candidates for the origin
of the Galactic Center hot plasma.
With {{\it ASCA}}, we completely surveyed the region of $|l|<1$\degr and $|b|<0.3$\degr,
where a large fraction of the SNRs in this region are expected
to exist, and partially surveyed some areas
at larger galactic latitude.
The number of SNRs detected with {{\it ASCA}}
is two or three out of the 7 cataloged SNRs, plus two new
candidates in the Galactic Center region.
On the other hand, the number required to account
for the Galactic Center plasma is
about 10$^3$ in the region of $|l|\leq 1$\degr and $|b|\leq 0.5$\degr.
Therefore, the number of detected SNRs is far too small.
From recent radio molecular line observations, Hasegawa {et al.} (1998)
found over 300 shell-like structures, possibly SNRs,
in the region of $|l|\leq 0.5$\degr, corresponding to
about 20 shell-like structures along any line of sight in the region.
Therefore, the number of SNRs may simply be too large
to resolve with {{\it ASCA}}.
Future observations with higher angular resolution would be required
to settle this problem.
The spectra of four of the five detected SNRs (or candidates) are
relatively soft (G359.1$-$0.5, G359.1$+$0.9, and G0.0$-$1.3 (AX~J1751$-$29.6))
or have no line-like feature (G0.9$+$0.1). Thus they cannot explain
the spectrum of the Galactic Center plasma. On the other hand,
the new SNR candidate G0.56$-$0.01 (AX~J1747.0$-$2828) is notable
for a strong iron line similar to that of the Galactic Center plasma.
Such a class of objects may be a key to understanding the origin
of the plasma. Searches for such objects are strongly encouraged.
\vskip 0.4cm
\begin{acknowledgements}
The authors express their thanks to all the members of the {{\it ASCA}} team.
We are grateful to Drs. J. P. Hughes and P. Slane for their valuable
comments.
MS and YM acknowledge the support from Research Fellowships of
the Japan Society for the Promotion of Science.
\end{acknowledgements}
\section{Introduction}\label{sec:intro}
Knots are formed by branches or limbs during the growth of a tree and commonly appear as dark ellipses on the surfaces of sawn lumber. As they significantly affect both the aesthetic quality and mechanical properties of lumber, knots have a central role in determining the commercial value of lumber. Such determination is performed by inspecting and measuring the visual characteristics on the surface of the piece. The cost associated with manual defect detection is prohibitive in industrial fabrication, and thus automated systems are needed. However, lumber is a highly variable material: sizes and shapes of knots and other defects, as well as color and texture of sawn lumber, have much variability from tree to tree. The automatic classification of sawn lumber is therefore more challenging than domains where computer vision algorithms are already applied routinely, such as for air bubble detection in glass or plastic casts.
Simple inspection systems that monitor the wood surface with color cameras are already deployed in the lumber industry, exploiting the fact that knots are usually of darker color than the background. Most commonly, color thresholding is used to mark all knot pixels with a `1'. However, this requires appropriate tuning either by a domain expert or threshold selection methods \cite{otsu1979threshold}. As a result, the performance of these detection methods is highly sensitive to the choice of threshold and associated geometric features selected for a particular wood type and camera setup. Moreover, ellipse detection in imperfectly binarized images remains a difficult and error-prone problem \cite{libuda2007ellipse,wang2014fast,chia2007ellipse,muammar1991tristage,tsuji1978detection,aguado1995ellipse}. We provide a comparison of our proposed deep-learning-based ellipse detection method to one of the geometric detection methods in \cite{jia2017fast} and to other learning-based variants on the lumber knot images in the experiment section.
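For reference, Otsu's method \cite{otsu1979threshold} picks the threshold that maximizes the between-class variance of the gray-level histogram. The following self-contained sketch applies it to a synthetic image with a darker elliptical "knot" on a bright background (all pixel statistics here are made up for illustration):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the gray level maximizing between-class
    variance of the histogram (Otsu, 1979)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                   # class-0 probability
    mu = np.cumsum(p * np.arange(256))     # class-0 cumulative mean
    mu_t = mu[-1]                          # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.nanargmax(sigma_b2))

# Synthetic "lumber" image: bright background, darker elliptical knot
rng = np.random.default_rng(1)
img = rng.normal(200, 10, (64, 64))
yy, xx = np.mgrid[:64, :64]
knot = ((xx - 32) / 12) ** 2 + ((yy - 32) / 6) ** 2 <= 1.0
img[knot] = rng.normal(80, 10, knot.sum())
img = np.clip(img, 0, 255)

t = otsu_threshold(img)
mask = img < t    # knot pixels marked with a '1'
```

Even on this idealized image the threshold must sit between the two intensity modes; on real lumber, variable color and texture shift those modes from board to board, which is precisely why fixed or globally tuned thresholds are fragile.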
In the literature, knots are typically modeled as elliptical cones when a piece of lumber is treated as a 3-dimensional object \cite{guindos2013three}. Hence, we model knot faces on a 2-dimensional surface as ellipses (conic sections). The proposed knot detection method takes color images of lumber captured by high-definition cameras as the input and returns a 5-dimensional parameter vector, $(cx, cy, rx, ry, \theta)$, representing the position of the center of the knot face on the $x$- and $y$-axes, the lengths of the semi-diameters along the $x$- and $y$-axes, and the rotation angle $\theta$ of the ellipse, respectively.
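The 5-parameter vector maps to ellipse boundary points by rotating and translating an axis-aligned ellipse; a small sketch (the parameter values below are arbitrary):

```python
import numpy as np

def ellipse_points(cx, cy, rx, ry, theta, n=100):
    """Boundary points of the knot ellipse parameterized by
    (cx, cy, rx, ry, theta): center, semi-diameters, rotation angle."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    x, y = rx * np.cos(t), ry * np.sin(t)        # axis-aligned ellipse
    c, s = np.cos(theta), np.sin(theta)
    return np.column_stack([cx + c * x - s * y,  # rotate by theta,
                            cy + s * x + c * y]) # then translate to center

pts = ellipse_points(cx=120, cy=45, rx=30, ry=12, theta=np.pi / 6)
```

Such sampled boundaries are convenient both for overlaying predicted knots on lumber images and for computing overlap between a predicted and a labeled ellipse.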
The available data for this paper consist of images of 113 large sawn Douglas-fir lumber specimens with 4894 knots in total. The data are collected in a collaborative research project between the Department of Statistics at the University of British Columbia and FPInnovations. Figure~\ref{fig: knot_example} shows an example of knots on a piece of sawn lumber.
Four images are taken for each board: two for the wide surfaces and another two for the narrow surfaces.
For training and testing purposes, we manually labeled each knot.
\begin{figure}
\centering
\includegraphics[width= 0.47\textwidth]{fig/knot_example_reduced.pdf}
\caption{Sample lumber with the four surfaces scanned. The wide surfaces are shown on the first and the third rows and the narrow surfaces are shown on the second and last rows. Knots can be seen from their darker color and the noticeable grain distortion around them.}
\label{fig: knot_example}
\end{figure}
In this paper, we address the knot detection and localization problem by adapting the Faster R-CNN framework \cite{ren2015faster} for rectangular object detection. Our main contributions are as follows:
\begin{itemize}
\item We adapt the Faster R-CNN to identify elliptical knots on the sawn lumber surfaces. Specifically, we replace the Region Proposal Network in the Faster R-CNN framework with the Gaussian Proposal Network to adapt to the ellipse detection problem, and propose an alternative loss function for ellipse regression.
\item We propose an effective algorithm to fix the misalignment of raw images of lumber specimens.
\item We prepare a labeled dataset of elliptical lumber knots, which can be used by future researchers to train and evaluate methods for knot detection and localization.
\end{itemize}
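As background for the Gaussian Proposal Network mentioned above, an ellipse can be encoded as a 2-dimensional Gaussian whose 1-sigma contour is the ellipse. The sketch below shows one standard such correspondence, $\Sigma = R\,\mathrm{diag}(r_x^2, r_y^2)\,R^\top$; this is an illustrative assumption, not necessarily the exact parameterization used inside the network.

```python
import numpy as np

def ellipse_to_gaussian(cx, cy, rx, ry, theta):
    """Encode an ellipse as (mean, covariance) of a 2-D Gaussian whose
    1-sigma contour is the ellipse: Sigma = R diag(rx^2, ry^2) R^T."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return np.array([cx, cy]), R @ np.diag([rx**2, ry**2]) @ R.T

def gaussian_to_ellipse(mu, Sigma):
    """Recover (cx, cy, rx, ry, theta) via the covariance eigendecomposition."""
    vals, vecs = np.linalg.eigh(Sigma)          # eigenvalues in ascending order
    rx, ry = np.sqrt(vals[::-1])                # major semi-diameter first
    theta = np.arctan2(vecs[1, 1], vecs[0, 1])  # major-axis direction
    return mu[0], mu[1], rx, ry, theta % np.pi  # theta defined modulo pi

mu, S = ellipse_to_gaussian(120, 45, 30, 12, np.pi / 6)
roundtrip = gaussian_to_ellipse(mu, S)
```

The round trip recovers the original parameters (up to the inherent $\pi$-periodicity of the rotation angle), which is what makes Gaussian proposals a natural fit for regressing elliptical knots.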
Knot detection is vital for assessing the strength and value of sawn lumber in a non-destructive manner. In particular, knot detection and localization based on lumber images is an integral step that generates input for existing knot matching algorithms that reconstruct the 3-dimensional structure of knots, e.g., \cite{jun2019sequential}. Thus, this paper provides an important step in the pipeline towards advancing the state-of-the-art in automatic strength prediction of lumber and quality assessment of other materials containing elliptical defects.
This paper is organized as follows. Related work regarding lumber knot localization and elliptical object detection is reviewed in Section~\ref{sec:relatedwork}. The data preparation procedures and the lumber knot dataset are discussed in Section~\ref{sec:dataprep}. Section~\ref{sec:approach} introduces our proposed method to solve the knot detection and localization problem. Experiment results and model performance evaluations are presented in Section~\ref{sec:exp}. Section~\ref{sec:conc} concludes this paper.
\section{Related Work}
\label{sec:relatedwork}
In this section, we discuss existing lumber defect detection methods, including active sensor-based solutions and vision-based solutions using geometry, machine learning (ML), and combinations thereof.
\paragraph{Active, using the laser tracheid effect.}
An existing method for knot location and size measurements in veneer (a thin layer of wood) surfaces using the laser tracheid effect is introduced in \cite{tormanen2009detection}. This method is based on the scattering patterns formed by laser spots. The laser light that penetrates the lumber surface is scattered mainly in the grain direction; this is often referred to as the ``tracheid effect''. The major features used to detect knots are the deviation of wood grain obtained by analyzing the amount of scattering. Areas with knots generally indicate the existence of large wood grain deviations and can be detected by thresholding. Methods based on the ``tracheid effect'' focus on finding improved approaches to computing the grain deviation. For example, Daval \textit{et al.}\cite{daval2015automatic} use the thermal conduction properties of lumber captured by a thermal camera to extract information such as the slope of grain and the presence of knots. The performance of the tracheid effect-based methods depends on the analysis of multiple individual laser spots projected on the surface. A practical limitation is in the resolution at which surface characteristics are captured. Empirical results show that tracheid effect-based methods are incapable of identifying knots of smaller sizes since they are usually located between adjacent laser spots. Therefore, these small knots are difficult to detect as they do not cause sufficiently large changes in the scattering patterns.
\paragraph{Passive geometric localization.}
Computer vision-based ellipse detection methods often use a multi-stage filtering process that finds geometric features, such as lines, curves, arcs, and extended arc patterns, as intermediate representations~\cite{libuda2007ellipse,teutsch2006real,kim2002fast,mai2008hierarchical,chia2010split,prasad2010ellipse,chia2011object,fornaciari2014fast,prasad2014deb,jia2017fast,dong2018accurate}. Improvements on the contour extraction and arc combination algorithms have been recognized in many real-world applications~\cite{wang2014fast,lu2019arc,jin2019ellipse}. Previous work applied to knot detection used image processing, morphological processing, and feature extraction procedures to detect the sizes and locations of knots on lumber surfaces \cite{todoroki2010automated,yang2017}. Global thresholding is used to segment images through morphological operations to isolate regions that are likely to contain knots. Adaptive thresholding is then applied to suspected areas to improve the accuracy of knot segmentation. However, a lack of robustness to noise and to blurry shape edges persists as a major limitation of knot detection applications: the shape detection accuracy often deteriorates substantially as the noise and blurriness of knot edges increase.
\paragraph{ML-based detection and localization.}
To identify and detect the locations of knots based on images, an alternative to modeling the knots as ellipses is to use object detection methods to find bounding boxes around knots. Bounding box regression and object detection have been a long-standing problem in the field of computer vision. The aim of bounding box regression is to refine or predict the minimum localization boxes within which objects of interest lie. With the developments in deep learning in recent years, many effective methods have been proposed to detect objects from an image. Methods such as R-CNN \cite{girshick2014rich}, Fast R-CNN \cite{girshick2015fast}, and Faster R-CNN \cite{ren2015faster} apply different methods to select possible regions containing objects. In particular, Faster R-CNN implements a Region Proposal Network (RPN) to propose plausible regions that are likely to contain objects. YOLO \cite{redmon2016you} segments images into smaller pieces and detects objects within and across the smaller pieces. SSD \cite{liu2016ssd} is another single shot detection method similar to YOLO, but it uses feature maps from different layers to detect objects of different sizes. However, these object-detection methods are proposed to use bounding boxes to detect rectangular objects and have limitations when operating on elliptical objects with inherent symmetries.
\paragraph{Elliptical object localization.}
More recently, machine learning-based methods have been proposed to detect elliptical objects. For example,
Dong \textit{et al.} \cite{dong2020ellipse} propose Ellipse R-CNN to detect clustered and occluded ellipses and suggest an objective function for performing ellipse regression. However, their method has a complicated pipeline which includes many image cropping and scaling operations tailored for detecting occluded ellipses, which is not the case for lumber knots. Moreover, Wang \textit{et al.} \cite{wang2019ellipse} propose an Ellipse Proposal Network to detect optic boundaries in fundus images. However, the technical details regarding the objective functions and network architecture were not elaborated in the paper. Another application to pupil localization based on a region proposal network and Mask R-CNN is proposed with a new computational schedule of anchor ellipse regression in \cite{lin2019pupil}. Li \cite{li2019detecting} proposes a Gaussian Proposal Network (GPN) to replace the usual region proposal network in object detection frameworks such as Faster R-CNN. The elliptical proposals are parameterized as Gaussian distributions, and the GPN is trained to minimize the Kullback-Leibler (KL) divergence between the ground truths and the proposed ellipses. However, the proposed GPN is only designed to be a replacement for the RPN and does not have the Region of Interest (RoI) pooling, classification, and regression components. Therefore, using GPN alone cannot complete the ellipse detection pipeline. Moreover, we improve on the KL divergence loss by using the Wasserstein distance.
\section{Data Preparation}
\label{sec:dataprep}
In this section, we discuss the procedures for preparing the lumber knot dataset. Specifically, Section~\ref{subsec:fixing} introduces the proposed algorithm to fix the misalignment in the raw images, while Section~\ref{subsec:knot_data} gives a brief introduction to the annotated lumber knot dataset.
\subsection{Preprocessing Algorithm}
\label{subsec:fixing}
Similar to the lumber image samples in Figure~\ref{fig: knot_example}, we have a total of $113$ pieces of lumber with four sides being scanned. The raw data do not contain any labels for knot faces, and the quality of the images is not ideal for object detection purposes. For example, it can be seen from Figure~\ref{fig: knot_example} that for each sample lumber image, there is one edge that is much longer than the other edge. Moreover, a major issue with the data is the pixel misalignment in the images of narrow surfaces.
An example of pixel misalignment in the image data is illustrated in Figure~\ref{fig:process}. To effectively improve the data quality, the preprocessing step needs to simultaneously recognize and address two challenges:
\begin{itemize}
\item The backgrounds do not have uniformly black color and contain noise pixels. Specifically, the dark background color exhibits an arbitrary change pattern close to the lumber edges.
\item The color of knot faces can be very similar to the background. If a knot lies on either edge of a lumber board, it is often difficult to distinguish it from the background purely based on its color.
\end{itemize}
These features make it impractical to fix the misalignment issue by simply setting up a threshold to distinguish the background from the lumber area. Preliminary analysis shows that methods based on thresholds produce non-smooth edges of the lumber region and may fail when the knots lie on either edge of the surface.
\begin{figure}[!ht]
\centering
\includegraphics[width= 0.47\textwidth]{fig/illustrations_reduced.pdf}
\caption{Illustration of fixing misalignment for one piece of lumber board. The RGB image is first converted to greyscale and fed to our algorithm to get the optimal displacement, which is used to shift the RGB image column-by-column to get the ideal result.}
\label{fig:process}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width= 0.47\textwidth]{fig/labels_opt_reduced.pdf}
\caption{Bounding ellipse annotations for the visible knots on the wide surface of one sawn lumber.}
\label{fig:labelexample}
\end{figure}
Taking these considerations into account, we propose an iterative algorithm that fixes the misalignment problem through column-by-column alignment of pixels. We illustrate the procedure in Figure~\ref{fig:process}. An RGB image is first converted to greyscale. For each pixel column, the algorithm determines its optimal shift by comparing the current column with its previously shifted neighbouring columns, using a sum of vector norms weighted by the inverse of the distance between columns raised to the power $p$. Denote the pixel values in the $i$th column by $\boldsymbol{c}_i$. Let the number of neighbours be $n$ and the norm order be $k$. The optimal shift $\widehat{s}$ for column $i$, $i > 1$, is obtained by searching among all possible shifts $s\in \{s_{\min}^i, \dots, -1, 0, 1, \dots, s_{\max}^i\}$ such that
\begin{equation}
\widehat{s} = \argmin_{s\in\{s_{\min}^i, \dots, -1, 0, 1, \dots, s_{\max}^i\}} \sum_{j = \max(1, i - n)}^{i - 1}\frac{1}{|i-j|^p}||\boldsymbol{c}_i^{s} - \boldsymbol{c}_j||_k,
\end{equation}
where $\boldsymbol{c}_i^s$ represents pixels of the $i$th column shifted by $s$.
The values of $n$, $p$, and $k$ are fine-tuned to achieve optimal results. Taking the extent of misalignment into account, we set $n = 100$, $k = 2$, and $p = 1$ to search for the optimal shifts for each pixel column. This method proves to be effective in resolving the misalignment in the lumber images. A simplified example is depicted in Figure~\ref{fig:process}. It can be seen that our proposed greedy method is robust to the nonuniform dark background and edge-lying knots.
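The column-wise shift search can be sketched in a few lines. The following minimal NumPy version is an illustration under simplifying assumptions: the circular shift via `np.roll` stands in for whatever boundary handling the production code uses, and the fixed bounds `s_min`/`s_max` replace the per-column bounds $s_{\min}^i, s_{\max}^i$.

```python
import numpy as np

def optimal_shift(cols, i, n=100, p=1, k=2, s_min=-10, s_max=10):
    """Return the vertical shift for pixel column i that minimizes the
    inverse-distance-weighted sum of norms to its previous neighbours.

    cols: greyscale image as a (height, width) array whose columns
    0..i-1 are assumed to be aligned already.
    """
    best_s, best_cost = 0, float("inf")
    for s in range(s_min, s_max + 1):
        shifted = np.roll(cols[:, i], s)      # candidate shift of column i
        cost = 0.0
        for j in range(max(0, i - n), i):     # up to n previous columns
            weight = 1.0 / abs(i - j) ** p    # inverse-distance weight
            cost += weight * np.linalg.norm(shifted - cols[:, j], ord=k)
        if cost < best_cost:
            best_s, best_cost = s, cost
    return best_s
```

Columns are processed left to right, so each new column is aligned against neighbours that have already been shifted, which is what makes the greedy scheme robust to gradual drift.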
\subsection{Lumber Knot Dataset}
\label{subsec:knot_data}
After removing the dark background, lumber images are manually annotated using the free image annotation tool VGG Image Annotator \cite{dutta2019vgg}, with each ellipse parameterized by five parameters: the $x$ and $y$ coordinates of the center of the ellipse, denoted by $cx$ and $cy$; the semi-diameters along the $x$- and $y$-axes, denoted by $rx$ and $ry$; and the counterclockwise rotation angle in radians, denoted by $\theta$.
Figure~\ref{fig:labelexample} shows an example of the bounding ellipses for all the visible knots on the wide surface of a piece of lumber. It can be seen that all the visible knots on the board are accurately and tightly annotated. We then randomly crop and resize the annotated images to generate square images for the detection task. This step also re-computes the parameters of each bounding ellipse based on its relative position in the cropped image. The dimension of each cropped and resized image is 512$\times$512 pixels. For each piece of lumber, an average of $57.4$ cropped images that partially contain at least one elliptical knot are generated. The complete dataset contains 4894 annotated lumber knots and is freely accessible to the public at \url{https://forestat.stat.ubc.ca/tiki-index.php?page=Databases+-+Products} for future research.
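The re-computation of ellipse parameters after cropping and resizing amounts to a translation plus a uniform scaling, which leaves the rotation angle unchanged. A minimal sketch follows; the function name and the square-crop arguments `x0`, `y0`, `crop_size` are illustrative, not from our implementation.

```python
def recompute_ellipse(cx, cy, rx, ry, theta, x0, y0, crop_size, out_size=512):
    """Recompute (cx, cy, rx, ry, theta) for a knot after cropping a
    square window with top-left corner (x0, y0) and side crop_size,
    then resizing the crop to out_size x out_size pixels."""
    s = out_size / crop_size          # uniform scale factor
    return ((cx - x0) * s,            # translate center, then scale
            (cy - y0) * s,
            rx * s, ry * s,           # semi-diameters scale uniformly
            theta)                    # angle is invariant under uniform scaling
```

Note that a non-square crop or anisotropic resize would also change the angle and axis lengths nontrivially; restricting to square crops keeps the update this simple.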
\section{Proposed Method}
\label{sec:approach}
In this section, we formulate our elliptical detection and localization method.
The input to our method is a single image. The outputs are detected ellipses parameterized by $(cx, cy, rx, ry, \theta)$.
Since we are interested in a single class of knots, there is no need to further classify the category to which the objects contained in the bounding ellipses belong.
Our approach combines Faster R-CNN and its region proposal networks with the GPN introduced in \cite{li2019detecting}. The overview of the model pipeline is discussed in Section~\ref{subsec:overview}. We introduce the GPN in Section~\ref{subsec:gpn}. Our extensions in terms of the region proposal branch and ellipse regression loss functions are explained in Sections~\ref{subsec:region_prop}~and~\ref{subsec:alterloss}, respectively.
\subsection{Overview of the Model Pipeline}
\label{subsec:overview}
As an illustration, Figure~\ref{fig:pip} shows the architecture of our proposed model. We adopt the basic architecture of the Faster R-CNN, which was originally designed to solve rectangular object detection problems. An image is first passed to the convolutional layers for feature map extraction. Instead of proposing bounding boxes, we use the GPN to propose bounding ellipses as 2D equi-probability contours of Gaussian distributions on the image plane. With the RoI pooling branch, the feature maps corresponding to the proposed regions are then obtained. The feature maps and proposals are finally fed to the ellipse regression and classification branch for final ellipse prediction.
\begin{figure}[!ht]
\centering
\includegraphics[width= .45\textwidth]{fig/pipline_reduced.pdf}
\caption{Overview of the ellipse localization and prediction model pipeline.}
\label{fig:pip}
\end{figure}
\subsection{Gaussian Proposal Network}
\label{subsec:gpn}
In this section, we introduce the adaptations made by the GPN to accommodate Faster R-CNN for localizing ellipses, as required for the lumber knot detection problem.
\subsubsection{Parameterizing Ellipses by Gaussian Distributions}
In \cite{li2019detecting}, ellipses are reparameterized as 2-dimensional contours of Gaussian distributions. As a result, the usual objective function for minimizing the L1 or L2 loss used in performing bounding box regression can be replaced by minimizing the distance metrics between two Gaussian distributions for ellipse regression. This section shows how ellipses can be represented using 2D Gaussian distributions.
An ellipse in a 2D coordinate system without rotation can be represented by
\begin{equation}
\frac{(x-\mu_x)^2}{\sigma_x^2} + \frac{(y-\mu_y)^2}{\sigma_y^2} = 1, \label{eq: ellipse_2d}
\end{equation}
where $\mu_x$ and $\mu_y$ are coordinates representing the centers of the ellipse, and $\sigma_x$ and $\sigma_y$ are the lengths of the semi-axes along the $x$ and $y$ axes.
The probability density function of a 2D Gaussian distribution is given by
\begin{equation}
f(\boldsymbol{x}|\boldsymbol{\mu},\boldsymbol{\Sigma}) = \frac{\exp\left(-\frac{1}{2}(\boldsymbol{x}-\boldsymbol{\mu})^T \boldsymbol{\Sigma}^{-1} (\boldsymbol{x}-\boldsymbol{\mu}) \right)}{2\pi|\boldsymbol{\Sigma}|^{\frac{1}{2}}},
\end{equation}
where $\boldsymbol{x}$ denotes the coordinate vector $(x, y)$, while $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$ are the mean vector and covariance matrix of the Gaussian distribution. If we assume the off-diagonal entries of $\boldsymbol{\Sigma}$ are 0 and parameterize $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$ as
\begin{equation}
\boldsymbol{\mu} =
\begin{bmatrix}
\mu_x \\
\mu_y
\end{bmatrix}
\text{ and }
\boldsymbol{\Sigma} =
\begin{bmatrix}
\sigma_x^2 & 0 \\
0 & \sigma_y^2
\end{bmatrix},
\end{equation}
the ellipse in Equation~\eqref{eq: ellipse_2d} corresponds to the density contour of the 2D Gaussian distribution when
\begin{equation}
(\boldsymbol{x}-\boldsymbol{\mu})^T \boldsymbol{\Sigma}^{-1} (\boldsymbol{x}-\boldsymbol{\mu}) = 1.
\end{equation}
When the major axis of the ellipse is rotated by an angle $\theta$ with respect to the $x$-axis, a rotation matrix $R(\theta)$ can be defined as
\begin{equation}
R(\theta) =
\begin{bmatrix}
\cos\theta & \sin\theta \\
-\sin\theta & \cos\theta
\end{bmatrix}.
\end{equation}
This matrix can be used to map the coordinates in the original $(x, y)$ system into a new $(x', y')$ system, i.e.,
\begin{equation}
\begin{bmatrix}
x' \\
y'
\end{bmatrix} =
R(\theta)
\begin{bmatrix}
x \\
y
\end{bmatrix}.
\end{equation}
Denote the lengths of the semi-major and semi-minor axes of the ellipse by $\sigma_l$ and $\sigma_s$. It can be shown that the ellipse centered at $(\mu_x, \mu_y)$ with semi-major and semi-minor axes of lengths $\sigma_l$ and $\sigma_s$, and a rotation angle of $\theta$ between its major axis and the $x$-axis where $\theta\in[-\frac{\pi}{2}, \frac{\pi}{2}]$ can be parameterized by a 2D Gaussian distribution with
\begin{equation}
\boldsymbol{\mu} =
\begin{bmatrix}
\mu_x \\
\mu_y
\end{bmatrix}
\text{ and }
\boldsymbol{\Sigma}^{-1} = R^T(\theta)
\begin{bmatrix}
1/\sigma_l^2 & 0 \\
0 & 1/\sigma_s^2
\end{bmatrix}
R(\theta).\label{eq:cov}
\end{equation}
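To make the parameterization concrete, the following sketch converts the five ellipse parameters into the Gaussian mean and covariance via Equation~\eqref{eq:cov}. It is an illustration, not code from our implementation.

```python
import numpy as np

def ellipse_to_gaussian(mu_x, mu_y, sigma_l, sigma_s, theta):
    """Map an ellipse (center, semi-major/semi-minor axis lengths,
    rotation angle) to the mean and covariance of the 2D Gaussian
    whose unit Mahalanobis contour is that ellipse."""
    mu = np.array([mu_x, mu_y])
    R = np.array([[np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])
    # Sigma^{-1} = R(theta)^T diag(1/sigma_l^2, 1/sigma_s^2) R(theta)
    sigma_inv = R.T @ np.diag([1.0 / sigma_l**2, 1.0 / sigma_s**2]) @ R
    return mu, np.linalg.inv(sigma_inv)
```

For an axis-aligned ellipse ($\theta = 0$) this reduces to $\boldsymbol{\Sigma} = \operatorname{diag}(\sigma_l^2, \sigma_s^2)$, matching the unrotated case above.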
\subsubsection{Replacing Region Proposal Network with Gaussian Proposal Network}
The goal of the GPN is to propose bounding ellipses such that the Gaussian parameters $(\mu_x, \mu_y, \sigma_l, \sigma_s, \theta)$ are close to those of the ground truth ellipses under a distance metric. In \cite{li2019detecting}, the Kullback-Leibler (KL) divergence is used as the distance measure. The KL divergence between a proposed 2D Gaussian distribution $\mathcal{N}_p(\boldsymbol{\mu}_p, \boldsymbol{\Sigma}_p)$ and a target 2D Gaussian distribution $\mathcal{N}_t(\boldsymbol{\mu}_t, \boldsymbol{\Sigma}_t)$ has the closed form
\begin{align}
D_{KL}(\mathcal{N}_p || \mathcal{N}_t) &= \frac{1}{2}[\text{tr}(\boldsymbol{\Sigma}_t^{-1}\boldsymbol{\Sigma}_p) + (\boldsymbol{\mu}_p-\boldsymbol{\mu}_t)^T\boldsymbol{\Sigma}_t^{-1}(\boldsymbol{\mu}_p-\boldsymbol{\mu}_t) \nonumber\\
&\quad + \log\frac{|\boldsymbol{\Sigma}_t|}{|\boldsymbol{\Sigma}_p|} - 2],
\end{align}
where tr$(\cdot)$ is the trace of a matrix. With two modifications, predicting five ellipse parameters instead of four bounding-box parameters and minimizing the KL divergence instead of the smooth L1 loss, the RPN module in the Faster R-CNN framework can be replaced by the GPN to propose bounding ellipses.
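As an illustration, the closed-form KL divergence above can be evaluated directly from the two mean vectors and covariance matrices (a sketch, not our training code):

```python
import numpy as np

def kl_gauss2d(mu_p, cov_p, mu_t, cov_t):
    """KL(N_p || N_t) for two 2D Gaussians, matching the closed form:
    0.5 * [tr(St^-1 Sp) + (mp-mt)^T St^-1 (mp-mt) + log(|St|/|Sp|) - 2]."""
    cov_t_inv = np.linalg.inv(cov_t)
    d = mu_p - mu_t
    term_tr = np.trace(cov_t_inv @ cov_p)
    term_maha = d @ cov_t_inv @ d
    term_logdet = np.log(np.linalg.det(cov_t) / np.linalg.det(cov_p))
    return 0.5 * (term_tr + term_maha + term_logdet - 2.0)
```

Evaluating both orderings on unequal covariances makes the asymmetry discussed later directly visible.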
\subsection{Region Proposal and Offset Regression}
\label{subsec:region_prop}
GPN was originally only designed to replace the RPN in the Faster R-CNN framework to generate elliptical proposals with the remaining components removed. Further detecting and predicting the exact locations of the ellipses in an image is out of the scope of GPN.
We re-introduce the necessary RoI pooling as well as the ellipse classification and regression components to complete the ellipse detection pipeline, in analogy to Faster R-CNN. Faster R-CNN pools the feature maps in a rectangular region around each detection to a fixed resolution, applies a CNN to them, and predicts offsets between the proposed and ground-truth parameters for each detection using a fully-connected layer. Similarly, given each ellipse proposal output by the GPN, we find the tightest axis-aligned bounding box covering the proposal. Feature maps in this rectangular region are pooled in the same manner as in Faster R-CNN. A fully-connected layer is then used to predict the parameter offsets for the proposal center as well as the ellipse direction and shape of each detected ellipse.
\subsection{Loss Function for Ellipse Regression}
\label{subsec:alterloss}
In Faster R-CNN, L1 or L2 losses are used as the loss function to predict the offsets between the four pairs of predicted and ground-truth parameters defining a rectangular object. However, these losses do not work well for elliptical object detection problems since the angle parameter needs to be treated differently than the other four parameters.
A natural choice for the ellipse regression loss is the KL divergence in GPN, which is used as the distance measure between two Gaussian distributions in generating ellipse proposals. Nevertheless, KL divergence has a few non-negligible drawbacks. Firstly, KL divergence is asymmetric, i.e., $D_{KL}(\mathcal{D}_1||\mathcal{D}_2)$ does not always equal $D_{KL}(\mathcal{D}_2||\mathcal{D}_1)$ given two distributions $\mathcal{D}_1$ and $\mathcal{D}_2$. Secondly, KL divergence can sometimes be numerically unstable and it tends to be very large in magnitude when the two distributions are far apart. This may cause problems in gradient evaluation, hindering convergence of the neural network. Lastly, KL divergence does not satisfy the triangle inequality and is not a valid mathematical metric in that sense.
Wasserstein distance is another popular distance measure defined between two probability distributions. In contrast to KL divergence, Wasserstein distance is symmetric and satisfies the triangle inequality. In recent years, Wasserstein distance has been proposed to replace other asymmetric losses to improve the results generated by neural network models. For example, Arjovsky \textit{et al.} \cite{arjovsky2017wasserstein} propose Wasserstein GAN to stabilize the training of GANs.
The $p$-Wasserstein distance between two probability measures $\mu$ and $\nu$ is
\begin{equation}
W_p(\mu, \nu) = \left(\inf_{\gamma\in\Gamma(\mu,\nu)} \mathbb{E}_{(X,Y)\sim\gamma} \left[d(X,Y)^p\right]\right)^{1/p},
\end{equation}
where $d(\cdot, \cdot)$ is a distance function and $\Gamma(\mu,\nu)$ denotes the set of couplings, i.e., joint distributions of $(X, Y)$ whose marginals are $\mu$ and $\nu$. For two 2D Gaussian distributions, the 2-Wasserstein distance with respect to the usual Euclidean norm has a convenient form. For a proposed 2D Gaussian distribution $\mathcal{N}_p(\boldsymbol{\mu}_p, \boldsymbol{\Sigma}_p)$ and a target 2D Gaussian distribution $\mathcal{N}_t(\boldsymbol{\mu}_t, \boldsymbol{\Sigma}_t)$, it is
\begin{align}
[W_2(\mathcal{N}_p, \mathcal{N}_t)]^2 &= ||\boldsymbol{\mu}_p-\boldsymbol{\mu}_t||^2_2 +\text{tr}\Big[\boldsymbol{\Sigma}_p + \boldsymbol{\Sigma}_t \nonumber\\
&\quad- 2\left(\boldsymbol{\Sigma}_p^{\frac{1}{2}}\boldsymbol{\Sigma}_t\boldsymbol{\Sigma}_p^{\frac{1}{2}}\right)^{\frac{1}{2}}\Big]
\end{align}
according to the results in \cite{olkin1982distance}, where $\text{tr}(\cdot)$ is the trace of a matrix.
In the commutative case where $\boldsymbol{\Sigma}_p \boldsymbol{\Sigma}_t = \boldsymbol{\Sigma}_t \boldsymbol{\Sigma}_p$, the 2-Wasserstein distance between two 2D Gaussian distributions can be further simplified to
\begin{equation}
[W_2(\mathcal{N}_p, \mathcal{N}_t)]^2 = ||\boldsymbol{\mu}_p-\boldsymbol{\mu}_t||^2_2 + ||\boldsymbol{\Sigma}_p^{\frac{1}{2}}-\boldsymbol{\Sigma}_t^{\frac{1}{2}}||^2_F,
\end{equation}
where $||\cdot||_F$ is the Frobenius norm of a matrix. The two covariance matrices can be computed based on the inverses in Equation~\eqref{eq:cov} from the ellipse parameters. The square root matrices $\boldsymbol{\Sigma}_p^{\frac{1}{2}}$ and $\boldsymbol{\Sigma}_t^{\frac{1}{2}}$ have a closed-form solution:
\begin{equation}
\boldsymbol{\Sigma}_p^{\frac{1}{2}} = R^T(\theta_p)
\begin{bmatrix}
\sigma_{p,l} & 0 \\
0 & \sigma_{p,s}
\end{bmatrix}
R(\theta_p)
\end{equation}
and
\begin{equation}
\boldsymbol{\Sigma}_t^{\frac{1}{2}} = R^T(\theta_t)
\begin{bmatrix}
\sigma_{t,l} & 0 \\
0 & \sigma_{t,s}
\end{bmatrix}
R(\theta_t).
\end{equation}
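For illustration, the 2-Wasserstein distance above can be computed from the two means and covariances as follows; using an eigendecomposition for the matrix square root of a symmetric positive-definite matrix is one possible implementation choice (a sketch, not our training code):

```python
import numpy as np

def sqrtm_spd(S):
    """Matrix square root of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.sqrt(w)) @ V.T

def w2_gauss2d(mu_p, cov_p, mu_t, cov_t):
    """Squared 2-Wasserstein distance between two 2D Gaussians:
    ||mu_p - mu_t||^2 + tr[Sp + St - 2 (Sp^{1/2} St Sp^{1/2})^{1/2}]."""
    sp = sqrtm_spd(cov_p)
    cross = sqrtm_spd(sp @ cov_t @ sp)
    return np.sum((mu_p - mu_t) ** 2) + np.trace(cov_p + cov_t - 2.0 * cross)
```

Unlike the KL divergence, this quantity is symmetric in its arguments and stays bounded as the two distributions move apart at a fixed rate, which is what makes it better behaved as a regression loss.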
With all these modifications and adaptations, the overall loss function is the weighted sum of three components: the GPN ellipse proposal loss, the ellipse regression loss, and the cross entropy for classifying ellipses versus background.
\section{Experimental Results}
\label{sec:exp}
Comprehensive evaluations of our proposed method for detecting elliptical knots in the lumber knot dataset are presented in this section. We introduce the experimental setup in Section~\ref{subsec:exp_setup} and summarize the quantitative detection performance across different experiment settings in Section~\ref{subsec:detection_quan}. Visualizations of the detection results are provided in Section~\ref{subsec:detection_qual}. We also compare our proposed method against the baseline deep learning-based solution and geometric ellipse detection in Section~\ref{subsec:comp}.
\subsection{Experiment Setup}
\label{subsec:exp_setup}
Among all the annotated lumber specimens in the lumber knot dataset, 70\% are randomly chosen as the training set, 10\% as the validation set, and the remaining 20\% as the test set. We trained the model on the training set for 20 epochs; the model with the lowest total validation loss was saved for testing. As in the original Faster R-CNN model, the pretrained VGG-16 network of \cite{simonyan2014very} is used as the base model for ellipse proposal generation and feature map extraction. We implemented and trained our proposed model with PyTorch 1.0.
The average intersection over union (IoU) between the detected ellipses and the ground-truth ellipses is computed and used as the metric to evaluate the performance of a detection method. Since there is no closed-form formula for the overlapping area between two ellipses, we draw a grid of points covering both ellipses and use a discretized sampling method to compute the IoU. The width and height of the point grid are the same as those of the tightest axis-aligned rectangle covering both ellipses, with one point assigned at each pixel location.
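The discretized IoU computation can be sketched as follows. For simplicity, this illustration builds the sampling grid from a conservative axis-aligned box of half-width $\max(rx, ry)$ around each ellipse rather than the exact tight box, which only adds empty grid points and does not change the IoU.

```python
import numpy as np

def ellipse_mask(xx, yy, cx, cy, rx, ry, theta):
    """Boolean mask of grid points lying inside a rotated ellipse."""
    dx, dy = xx - cx, yy - cy
    # rotate grid coordinates into the ellipse's own frame
    u = np.cos(theta) * dx + np.sin(theta) * dy
    v = -np.sin(theta) * dx + np.cos(theta) * dy
    return (u / rx) ** 2 + (v / ry) ** 2 <= 1.0

def ellipse_iou(e1, e2):
    """Discretized IoU of two ellipses (cx, cy, rx, ry, theta),
    sampling one point per pixel on a grid covering both."""
    boxes = []
    for cx, cy, rx, ry, _ in (e1, e2):
        r = max(rx, ry)  # conservative half-width, rotation-safe
        boxes.append((cx - r, cx + r, cy - r, cy + r))
    x0 = min(b[0] for b in boxes); x1 = max(b[1] for b in boxes)
    y0 = min(b[2] for b in boxes); y1 = max(b[3] for b in boxes)
    xs = np.arange(np.floor(x0), np.ceil(x1) + 1)
    ys = np.arange(np.floor(y0), np.ceil(y1) + 1)
    xx, yy = np.meshgrid(xs, ys)
    m1 = ellipse_mask(xx, yy, *e1)
    m2 = ellipse_mask(xx, yy, *e2)
    inter = np.logical_and(m1, m2).sum()
    union = np.logical_or(m1, m2).sum()
    return inter / union if union > 0 else 0.0
```

The one-point-per-pixel grid matches the resolution of the 512$\times$512 inputs, so the discretization error is negligible at the knot sizes in this dataset.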
\subsection{Quantitative Evaluation of Detection Performance}
\label{subsec:detection_quan}
To evaluate the performance of our proposed method, four experimental settings are considered as follows:
\begin{enumerate}
\item \textit{RPN with L2 loss for ellipse offset regression.} RPN in the Faster R-CNN is used to generate bounding ellipse proposals. Instead of the four parameters characterizing a rectangle, the RPN outputs the five parameters characterizing an ellipse. L2 loss is used for the ellipse offset regression. This setting directly modifies the RPN for the ellipse detection problem and serves as the baseline model.
\item \textit{GPN with L2 loss for ellipse offset regression.} RPN in the Faster R-CNN is replaced with the GPN to generate ellipse proposals. L2 loss is used for ellipse offset regression.
\item \textit{GPN with KL divergence for ellipse offset regression.}
\item \textit{GPN with 2-Wasserstein distance for ellipse offset regression.}
\end{enumerate}
Note that GPN uses KL divergence while RPN uses smooth L1 loss in the proposal network. We trained the model five times under each setting. The IoUs between the detected and ground-truth ellipses under each setting in each repeated experiment along with the mean IoU and standard error are reported in Table~\ref{tab: exp}.
\setlength{\tabcolsep}{4pt}
\begin{table}
\centering
\begin{tabular}{*{5}{l}}
\noalign{\smallskip}
\toprule
Setting & RPN, L2 & GPN, L2 & GPN, KLD & GPN, 2-WD \\\midrule
Exp 1 & 60.16 & 64.74 & 73.37 & 73.00 \\
Exp 2 & 65.59 & 66.14 & 73.13 & 73.32 \\
Exp 3 & 64.54 & 65.82 & 72.48 & 72.98 \\
Exp 4 & 65.98 & 66.97 & 71.07 & 72.85 \\
Exp 5 & 61.87 & 65.40 & 73.26 & 73.10 \\\midrule
Mean IoU & 63.63 & 65.81 & 72.66 & 73.05 \\
s.e. & 2.52 & 0.83 & 0.95 & 0.18\\
\bottomrule
\end{tabular}
\caption{The IoUs between the detected and ground-truth ellipses under each of the four settings in each repeated experiment along with the mean IoU and the standard error. L2, KLD, and 2-WD represent L2 loss, KL divergence, and 2-Wasserstein distance, respectively. All numbers are in percentages.
}
\label{tab: exp}
\end{table}
\setlength{\tabcolsep}{1.4pt}
From Table~\ref{tab: exp}, it can be seen that our proposed method, which generates elliptical proposals with GPN and uses the 2-Wasserstein distance as the loss function for ellipse offset regression, improves the mean IoU by nearly 10 percentage points (63.63\% to 73.05\%) compared with a general detector such as Faster R-CNN, which generates proposals with RPN and uses the L2 distance as the loss function for offset regression. Directly replacing RPN in Faster R-CNN with GPN improves the mean IoU by around 2 percentage points. Replacing the general-purpose L2 loss with either the KL divergence or the 2-Wasserstein distance substantially improves the ellipse detection performance. In particular, models using the 2-Wasserstein distance outperform models using the KL divergence by around 0.4 percentage points. Moreover, using the Wasserstein distance instead of the KL divergence leads to a much lower standard error across experiments (0.18 vs. 0.95).
\subsection{Qualitative Evaluation of Detection Performance}
\label{subsec:detection_qual}
In the previous section, we quantitatively evaluated the performance of our proposed ellipse detection method against the baseline Faster R-CNN model. In this section, visualizations of the detected ellipses in the lumber knot dataset are provided to qualitatively assess our proposed method.
Figure~\ref{fig: detection} shows the ground-truth knots from three pieces of lumber as well as the detected knots using the baseline method (RPN, L2) and our proposed method (GPN, 2-WD). It can be seen that the elliptical bounds for knots are drawn more tightly and accurately using our proposed method compared with the baseline method. This holds particularly true for knots close to the board boundary, which are only partially visible. Nonetheless, it can be seen that the rotation angles of the detected ellipses have relatively large errors in some of the examples shown in Figure~\ref{fig: detection}. Future research can be done to further improve the prediction performance for the rotation angles, possibly by introducing oriented instead of axis-aligned pooling regions in the GPN.
\begin{figure}[!ht]
\centering
\begin{subfigure}{.115\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{fig/test_results_reduced/083A_1453TLine.png_crop27.png_baseline.png}
\end{subfigure}
\begin{subfigure}{.115\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{fig/test_results_reduced/083A_1453TLine.png_crop94.png_baseline.png}
\end{subfigure}
\begin{subfigure}{.115\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{fig/test_results_reduced/083A_1453TLine.png_crop98.png_baseline.png}
\end{subfigure}
\begin{subfigure}{.115\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{fig/test_results_reduced/083B_1453TLine.png_crop41.png_baseline.png}
\end{subfigure}
\begin{subfigure}{.115\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{fig/test_results_reduced/083A_1453TLine.png_crop27.png_wd.png}
\end{subfigure}
\begin{subfigure}{.115\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{fig/test_results_reduced/083A_1453TLine.png_crop94.png_wd.png}
\end{subfigure}
\begin{subfigure}{.115\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{fig/test_results_reduced/083A_1453TLine.png_crop98.png_wd.png}
\end{subfigure}
\begin{subfigure}{.115\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{fig/test_results_reduced/083B_1453TLine.png_crop41.png_wd.png}
\end{subfigure}
\begin{subfigure}{.115\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{fig/test_results_reduced/093A_1442TLine.png_crop16.png_baseline.png}
\end{subfigure}
\begin{subfigure}{.115\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{fig/test_results_reduced/093A_1442TLine.png_crop6.png_baseline.png}
\end{subfigure}
\begin{subfigure}{.115\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{fig/test_results_reduced/093B_1442TLine.png_crop0.png_baseline.png}
\end{subfigure}
\begin{subfigure}{.115\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{fig/test_results_reduced/093B_1442TLine.png_crop96.png_baseline.png}
\end{subfigure}
\begin{subfigure}{.115\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{fig/test_results_reduced/093A_1442TLine.png_crop16.png_wd.png}
\end{subfigure}
\begin{subfigure}{.115\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{fig/test_results_reduced/093A_1442TLine.png_crop6.png_wd.png}
\end{subfigure}
\begin{subfigure}{.115\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{fig/test_results_reduced/093B_1442TLine.png_crop0.png_wd.png}
\end{subfigure}
\begin{subfigure}{.115\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{fig/test_results_reduced/093B_1442TLine.png_crop96.png_wd.png}
\end{subfigure}
\begin{subfigure}{.115\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{fig/test_results_reduced/113A_1426TLine.png_crop12.png_baseline.png}
\end{subfigure}
\begin{subfigure}{.115\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{fig/test_results_reduced/113B_1426TLine.png_crop14.png_baseline.png}
\end{subfigure}
\begin{subfigure}{.115\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{fig/test_results_reduced/113B_1426TLine.png_crop54.png_baseline}
\end{subfigure}
\begin{subfigure}{.115\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{fig/test_results_reduced/113B_1426TLine.png_crop76.png_baseline.png}
\end{subfigure}
\begin{subfigure}{.115\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{fig/test_results_reduced/113A_1426TLine.png_crop12.png_wd.png}
\end{subfigure}
\begin{subfigure}{.115\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{fig/test_results_reduced/113B_1426TLine.png_crop14.png_wd.png}
\end{subfigure}
\begin{subfigure}{.115\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{fig/test_results_reduced/113B_1426TLine.png_crop54.png_wd}
\end{subfigure}
\begin{subfigure}{.115\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{fig/test_results_reduced/113B_1426TLine.png_crop76.png_wd.png}
\end{subfigure}
\caption{Examples of detected ellipses from three specimens of lumber using the baseline method (RPN, L2) and our proposed method (GPN, 2-WD). Images in every two rows are from the same specimen. Green, blue, and red ellipses are the ground-truth ellipses, the detected ellipses using the baseline method, and the detected ellipses using our proposed method, respectively.}
\label{fig: detection}
\end{figure}
\subsection{Comparisons to Geometric Ellipse Detection} \label{subsec:comp}
To compare with our proposed method, we applied the geometric fast ellipse detector using the projective invariant pruning method proposed in \cite{jia2017fast} to the lumber knot dataset (their code is available at \url{https://github.com/dlut-dimt/ellipse-detector}). Jia's method relies on geometric features to find ellipses and performs extremely poorly on lumber knot images: among all the test images, less than 1\% of the ellipses can be detected using their method. In particular, Jia's method failed to detect any ellipses in all 12 images in Figure~\ref{fig: detection}. Furthermore, Jia's method is also sensitive to the positioning of ellipses. For example, Figure~\ref{fig: detection2} visualizes the detection results using our proposed method (GPN + 2-WD) versus Jia's method on the same knot across different cropped images of a lumber board. Among these five positions, our method consistently generates accurate position estimates, while Jia's method detects this ellipse, and only inaccurately, at just one of the five positions. We therefore conclude that our method is more robust and reliable than non-learning-based ellipse detectors and works much better for detecting knots in lumber images. In terms of runtime, for an image of 512$\times$512 pixels, Jia's method takes around 8.8 milliseconds to generate predictions, while our proposed method (GPN + 2-WD) takes around 226 milliseconds in the test stage. Although Jia's method is faster than ours, both methods are sufficiently fast for real-world applications.
\begin{figure}[!ht]
\centering
\begin{subfigure}{.091\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{fig/test_results_reduced/007A_1426TLine.png_crop95.png_pred.png}
\end{subfigure}
\begin{subfigure}{.091\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{fig/test_results_reduced/007A_1426TLine.png_crop96.png_pred.png}
\end{subfigure}
\begin{subfigure}{.091\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{fig/test_results_reduced/007A_1426TLine.png_crop97.png_pred.png}
\end{subfigure}
\begin{subfigure}{.091\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{fig/test_results_reduced/007A_1426TLine.png_crop98.png_pred.png}
\end{subfigure}
\begin{subfigure}{.091\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{fig/test_results_reduced/007A_1426TLine.png_crop99.png_pred.png}
\end{subfigure}
\begin{subfigure}{.091\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{fig/test_results_reduced/007A_1426TLine.png_crop95.png}
\end{subfigure}
\begin{subfigure}{.091\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{fig/test_results_reduced/007A_1426TLine.png_crop96.png}
\end{subfigure}
\begin{subfigure}{.091\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{fig/test_results_reduced/007A_1426TLine.png_crop97.png}
\end{subfigure}
\begin{subfigure}{.091\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{fig/test_results_reduced/007A_1426TLine.png_crop98.png}
\end{subfigure}
\begin{subfigure}{.091\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{fig/test_results_reduced/007A_1426TLine.png_crop99.jpg}
\end{subfigure}
\caption{Examples of detecting the same ellipse in different cropped images using our method (GPN, 2-WD) and Jia's method. Green and blue ellipses in the first row are the ground-truth ellipses and the detected ellipses using our proposed method. Green ellipses in the second row are the detected ellipses using Jia's method. Note that Jia's method only detects the knot in the last cropped image.}
\label{fig: detection2}
\end{figure}
\section{Conclusion}
\label{sec:conc}
In this paper, we propose a method tailored to detect and localize elliptical objects in images. Our method adapts the Region Proposal Network in Faster R-CNN to model elliptical objects. We also extend the existing Gaussian Proposal Network by adding RoI pooling as well as the ellipse classification and regression branches to complete the elliptical object detection pipeline. Furthermore, we propose the Wasserstein distance as the loss function for ellipse offset predictions. Experiments on the sawn lumber images show that our proposed method improves the mean IoU of the detected ellipses by 10\% compared with the baseline method. This is a major improvement in this illustrative application insofar as the result will be used in models that predict lumber strength and hence determine the grade class into which a piece of lumber will be placed. That in turn benefits consumers who use these grades in selecting lumber for their specific application, and it benefits producers who will use machine grading techniques to classify their lumber. For the forest products industry, this can result in products of better quality for their intended use and, in turn, improvements to the manufacturer's bottom line. In addition, specific to the lumber example, we propose an algorithm to correct the misalignment of raw images of lumber specimens and create the first open-source lumber knot dataset with labeled elliptical knots, which can be used for future research. While experiments in this paper focus on the knot detection problem in sawn lumber images, our proposed method can easily be applied to detect ellipses in datasets containing other types of elliptical objects. In future work, we will attempt to predict the 3D knot shapes given 2D ellipse supervision from both sides. It would also be interesting to predict the entire elliptical cone given images of all three sides, supervised by the 2D ellipse/Gaussian formulation.
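As a pointer for readers who wish to experiment with the Wasserstein loss, the following sketch (our illustration only, not the implementation used in this paper; all function names are ours) encodes an ellipse as a 2D Gaussian and evaluates the closed-form squared 2-Wasserstein distance between two such Gaussians:

```python
import numpy as np
from scipy.linalg import sqrtm

def ellipse_to_gaussian(cx, cy, a, b, theta):
    """Encode an ellipse (center (cx, cy), semi-axes a, b, rotation theta)
    as a 2D Gaussian: mean = center, covariance built from axes and rotation."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    mean = np.array([cx, cy], dtype=float)
    cov = R @ np.diag([a**2, b**2]) @ R.T
    return mean, cov

def gaussian_w2_squared(m1, S1, m2, S2):
    """Closed-form squared 2-Wasserstein distance between N(m1,S1), N(m2,S2):
    |m1 - m2|^2 + tr(S1 + S2 - 2 (S2^{1/2} S1 S2^{1/2})^{1/2})."""
    r2 = sqrtm(S2)
    cross = sqrtm(r2 @ S1 @ r2)  # sqrtm may return a tiny imaginary part
    return float(np.sum((m1 - m2) ** 2)
                 + np.trace(S1 + S2 - 2.0 * np.real(cross)))
```

For two ellipses of identical shape whose centers differ by $(3,4)$, the squared distance is $3^2+4^2=25$ regardless of the shared shape parameters, which illustrates why this distance provides a smooth, interpretable training signal for ellipse offsets.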
\noindent
\textbf{Acknowledgements.} The authors thank Conroy Lum, along with FPInnovations and its technical support staff, for facilitating the experimental work that was done to produce the data used in this paper. We also thank Terry Chen, Zongjun Liu, Angela Wang-Lin, and Caitlin Zhang for their assistance in data processing. The work reported in this paper was partially supported by FPInnovations and a Collaborative Research and Development grant from the Natural Sciences and Engineering Research Council of Canada.
\section*{Appendix}
\section{Introduction}
\label{s:intro}
In the setting of free discontinuity and free boundary problems, a well-established technique for investigating the structure of the gradient of a function $u$ is the so-called \emph{slicing}. Such a technique allows one to reconstruct the properties of the gradient by studying the slices of $u$, which are lower-dimensional and simpler to handle. One of the most challenging steps consists in establishing a precise relation between the one-dimensional slices of the jump set $J_u$ of $u$ and the jump sets of its one-dimensional slices. More precisely, given a direction $\xi \in \mathbb{S}^{n-1}$ and $y \in \xi^\bot$, one is interested in showing that the restriction of $J_u$ to the line $\ell:= \{y+t\xi : t \in \mathbb{R}\}$ coincides with the jump set of $u |_\ell$. Such a result can be obtained in the $BV$-setting via the Coarea Formula (see, e.g.,~\cite[Theorem~3.108]{afp}). The $BD$-case, as well as the more general $BV^{\mathcal{A}}$-case, required a new approach mainly relying
on the following two facts: the rectifiability of $\Theta_u$, namely, the set of points of strictly positive $(n-1)$-dimensional upper density of $|Eu|$ or $|\mathcal{A}u|$, and a combination of a codimension-one slicing analysis with the so-called parallelogram law (cf.~\cite[Formula (5.4)]{MR1480240} for the $BD$-space and \cite[Formula (30)]{arr} for the $BV^{\mathcal{A}}$-space). The proof of the rectifiability of $\Theta_u$ is originally due to Kohn \cite{Kohn} and relies on the celebrated Federer's Structure Theorem, while the relation between the jump points of the codimension-one slices and the set $\Theta_u$ was first pointed out by Ambrosio-Coscia-Dal Maso in~\cite{MR1480240}.
The aim of this paper is to provide a unifying slicing criterion for the jump set $J_u$ of a measurable function $u$, which also admits applications in a non-Euclidean framework. With this motivation we will not only deal with slices along straight lines, as done in \cite{MR1480240, afp, arr}, but we will rather work with one-dimensional slices along solutions~$\gamma$ of a second order ODE driven by a sufficiently smooth field $F \colon \mathbb{R}^{n} \times \mathbb{R}^{n} \to \mathbb{R}^{n}$ which is $2$-homogeneous in the second variable (cf.~\eqref{e:quadratic}). In order to better explain the difficulties due to our general approach, let us introduce some notation. Fix an open set $\Omega$ in $\mathbb{R}^n$ and a measurable function $u \colon \Omega \to \mathbb{R}^m$. The first novelty of our work is the notion of \emph{families of curvilinear projections} $(P_\xi)_{\xi \in \mathbb{S}^{n-1}}$, where $P_\xi \colon \Omega \to \xi^\bot$ (cf.~Definition~\ref{d:CP}). The main feature of
such a family is that it satisfies a transversality property in the sense of Definition~\ref{d:transversal} and it admits a parametrization $\varphi \colon \{(y+t\xi,\xi) \in \mathbb{R}^n \times \mathbb{S}^{n-1} \ : \ (y,t) \in [\xi^\bot \cap \mathrm{B}_{\rho}(0)] \times (-\tau,\tau) \} \to \mathbb{R}^n$ (cf.~Definition~\ref{d:param}). In particular, the level sets~$P_\xi^{-1}(y)$ are the images of the maps $t \mapsto \varphi_\xi(y + t\xi):= \varphi((y+t\xi,\xi))$, which are solutions of
\begin{equation}
\label{e:ODE}
\ddot{\gamma} = F(\gamma,\dot{\gamma}).
\end{equation}
For $E \subseteq \Omega$, $\xi \in \mathbb{S}^{n-1}$, and $y \in \xi^\bot$, we define the one-dimensional slices
\begin{align}
\label{e:int1}
E^{\xi}_{y} &:= \{ t \in \mathbb{R}: \gamma(t) \in E\}\,,\\
\label{e:int2}
\hat{u}^{\xi}_{y} (t) &:= u(\gamma(t)) \cdot g(\gamma(t),\dot{\gamma}(t)) \ \ \text{for $t \in \Omega^{\xi}_{y}$}\,,
\end{align}
where $\gamma(t)=\varphi_\xi(y+t\xi)$ and $g \colon \Omega \times \mathbb{R}^n \to \mathbb{R}^m$ is a given continuous map. For later use we further denote by $\dot{\varphi}_\xi(y+t\xi)$ the time derivative of $t \mapsto \varphi_\xi(y + t\xi)$ and define the velocity field $\xi_\varphi \colon \Omega \to \mathbb{R}^n$ as
\[
\xi_\varphi(x):= \dot{\varphi}_\xi(P_\xi(x)+t_x \xi),
\]
where $t_x$ is the unique time satisfying $x= \varphi_\xi(P_\xi(x)+t_x \xi)$. We notice that with the choice
\begin{align}
\label{e:int1100}
F(x,v) & := - \bigg( \sum_{i,j=1}^n\Gamma^1_{ij}(x)v_iv_j,\dotsc,\sum_{i,j=1}^n\Gamma^n_{ij}(x)v_iv_j \bigg) \qquad (x,v) \in \Omega \times \mathbb{R}^n\,,
\end{align}
the solutions $\gamma$ of \eqref{e:ODE} are the geodesics in coordinates of a Riemannian manifold with Christoffel symbols equal to $\Gamma^\ell_{ij}$.
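As a concrete illustration of \eqref{e:int1100} (added here as an example; the particular choice is ours), consider the Poincar\'e half-plane $\{x \in \mathbb{R}^2 : \, x_2 > 0\}$ endowed with the metric $g_{ij}(x) = \delta_{ij}/x_2^2$. The non-vanishing Christoffel symbols are $\Gamma^1_{12}=\Gamma^1_{21}=\Gamma^2_{22}=-1/x_2$ and $\Gamma^2_{11}=1/x_2$, so that \eqref{e:int1100} becomes
\[
F(x,v) = \Big( \frac{2\, v_1 v_2}{x_2}, \, \frac{v_2^2 - v_1^2}{x_2} \Big) \qquad x_2 > 0, \ v \in \mathbb{R}^2\,,
\]
which is manifestly $2$-homogeneous in $v$; the corresponding solutions of \eqref{e:ODE} are the hyperbolic geodesics, i.e., vertical half-lines and half-circles centered on $\{x_2 = 0\}$.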
The non-linear nature of our setting leads to a crucial point: the lack of the symmetry required to exploit the parallelogram law technique and of the possibility of performing codimension-one slicing, on which the $BD$ and $BV^{\mathcal{A}}$ theories hinge. In order to overcome this difficulty, we develop a new approach which circumvents the rectifiability of $\Theta_u$ as well as Ambrosio-Coscia-Dal Maso's argument. To the best of our knowledge, this is the first time in which one-dimensional slices involving projections on non-constant vector fields are considered.
The two fundamental conditions upon which our slicing criterion is based read as follows:
\begin{enumerate}[label=(\roman*)]
\item the finiteness of a suitable measure $\mathscr{I}_{u,p}$ related to the jump points of $\hat{u}^\xi_y$;
\item a control on the size of the set of points $x \in \Omega$ around which certain \emph{oscillations} of $u$ do not vanish.
\end{enumerate}
For $1 \leq p \leq \infty$ the measure $\mathscr{I}_{u,p}$ is defined by a Carath\'eodory construction \cite[Section~2.10.1]{fed}. Namely, we set
\begin{equation*}
\eta_\xi(B) := \int_{\xi^\bot} \sum_{ t \in B^\xi_y \cap J_{\hat{u}^\xi_y} } \big( |[\hat{u}^\xi_y(t)]| \wedge 1 \big) \, \mathrm{d} \mathcal{H}^{n-1}(y) \qquad B \in \mathcal{B}(\Omega),
\end{equation*}
where $[\hat{u}^\xi_y(t)]$ denotes the jump of $\hat{u}^\xi_y$ at $t$, $\mathcal{B}(\Omega)$ denotes the Borel $\sigma$-algebra of $\Omega$, and we use as a gauge function $\zeta_p \colon \mathcal{B}(\Omega) \to [0,\infty]$ the $L^p$-norm of the map $\xi \mapsto \eta_\xi(B)$ with respect to the $(n-1)$-dimensional Hausdorff measure $\mathcal{H}^{n-1}$. As for (ii), for $\rho>0$ we define the set $\text{Osc}_u (\rho)$ (see also Definitions~\ref{d:oscillation} and~\ref{d:oscillation2}) as the set of those points $x \in \Omega$ such that
\[
\limsup_{r \searrow 0} \int_{\mathbb{S}^{n-1}} \inf_{\{\theta: {\rm Lip}(\theta) \leq 1\}} \bigg(\int_{-\rho/4}^{\rho/4} |\mathring{u}^\xi_x(rt) -\theta (t) |t^{n-1} \, \mathrm{d} t \bigg) \mathrm{d} \mathcal{H}^{n-1}(\xi) >0\,,
\]
where $\mathring{u}^\xi_x$ is given by \eqref{e:int2} with $\gamma$ being the unique solution of
\begin{equation}
\label{e:int1000}
\begin{cases}
\ddot{\gamma} =F(\gamma,\dot{\gamma}), &\\
\gamma(0)=x,&\\
\dot{\gamma}(0)=\xi\,.&
\end{cases}
\end{equation}
We will then ask for the set $\text{Osc}_u (\rho)$ to be $\sigma$-finite with respect to a curvilinear version $\tilde{\mathcal{I}}^{n-1}$ of Favard's integralgeometric measure (see Definition~\ref{d:Favard} for the precise definition).
With the above notation at hand we can state the main contribution of this paper.
\begin{theorem}
\label{t:int1}
Let $u \colon \Omega \to \mathbb{R}^m$ be measurable, let $F\in C^{\infty}(\mathbb{R}^n \times \mathbb{R}^n;\mathbb{R}^n)$ be $2$-homogeneous in the second variable, let $g \in C(\Omega \times \mathbb{R}^n;\mathbb{R}^m)$, and let $(P_\xi)_{\xi \in \mathbb{S}^{n-1}}$ be a family of curvilinear projections on $\Omega$. Suppose that
\begin{enumerate}
\item There exists $p \in (1, +\infty]$ such that $\mathscr{I}_{u,p}(\Omega) < +\infty$;
\item There exists $\rho>0$ such that $\emph{Osc}_u (\rho)$ is $\sigma$-finite w.r.t. $\tilde{\mathcal{I}}^{n-1}$.
\end{enumerate}
Then, there exists a countably $(n-1)$-rectifiable set $R \subseteq \Omega$ such that
\begin{equation}
\label{e:int6.1}
J_{\hat{u}^\xi_y} \subseteq R^\xi_y \qquad \text{ for $\mathcal{H}^{n-1}$-a.e.~$\xi \in \mathbb{S}^{n-1}$, $\mathcal{H}^{n-1}$-a.e.~$y \in \xi^\bot$}.
\end{equation}
In particular, for every $x \in R$
\begin{equation}
\label{e:mainslicepro9}
\mathcal{H}^{n-1} \big( \big\{ \xi \in \mathbb{S}^{n-1}: \, t_x^{\xi} \in J_{\hat u^\xi_{P_\xi(x)}} \big\} \big) >0\,.
\end{equation}
\end{theorem}
The proof of Theorem \ref{t:int1} is obtained by exploiting a fundamental property of the measure $\mathscr{I}_{u,p}$. Namely, condition (1) implies that $\mathscr{I}_{u,p}$ is concentrated on the points $x \in \Omega$ such that \eqref{e:mainslicepro9} holds true (cf.~Proposition \ref{p:keyprop}). The combination of this property with condition (2) allows us to apply the rectifiability criterion given in Theorem \ref{t:rectheorem} (see also \cite[Theorem 1.5]{Tas22}) and to infer the $(n-1)$-rectifiability of the measure $\mathscr{I}_{u,1}$. In particular, this provides a countably $(n-1)$-rectifiable set $R$ such that \eqref{e:int6.1} holds true.
The inclusion in \eqref{e:int6.1} can be improved to an equality when $R$ is replaced by the jump set $J_\mathfrak{u}$ of $\mathfrak{u} \colon \Omega \to \mathbb{R}^m$, where $\mathfrak{u}(x):=\pi(x,u(x))$ and $\pi(x,\cdot)$ is the projection of~$\mathbb{R}^m$ onto the vector space generated by the image of $g(x,\cdot)$. The precise statement reads as follows.
\begin{corollary}
\label{c:int2}
Let $u \colon \Omega \to \mathbb{R}^m$ be measurable, let $F\in C^{\infty}(\mathbb{R}^n \times \mathbb{R}^n;\mathbb{R}^n)$ be $2$-homogeneous in the second variable, let $g \in C(\Omega \times \mathbb{R}^n;\mathbb{R}^m)$ satisfy \eqref{G2}, and let $(P_\xi)_{\xi \in \mathbb{S}^{n-1}}$ be a family of curvilinear projections on $\Omega$. Suppose that
\begin{enumerate}
\item There exists $p \in (1, +\infty]$ such that $\mathscr{I}_{u,p}(\Omega) < +\infty$;
\item There exists $\rho>0$ such that $\emph{Osc}_u(\rho)$ is $\sigma$-finite w.r.t. $\tilde{\mathcal{I}}^{n-1}$;
\item There exists a diffeomorphism $\tau \colon \mathbb{R} \to (-1,1)$ such that, setting $u_\xi(x):= u(x) \cdot g(x,\xi_\varphi(x))$ for $(x,\xi) \in \Omega \times \mathbb{S}^{n-1}$, we have
\[
D_\xi (\tau (u_\xi \circ \varphi_\xi )) \in \mathcal{M}^+_b(\varphi_\xi^{-1}(\Omega)) \qquad \text{for $\mathcal{H}^{n-1}$-a.e. $\xi \in \mathbb{S}^{n-1}$}.
\]
\end{enumerate}
Then, it holds
\begin{equation}
\label{e:int6}
J_{\hat{u}^\xi_y}=(J_{\mathfrak{u}})^\xi_y \qquad \text{ for $\mathcal{H}^{n-1}$-a.e.~$\xi \in \mathbb{S}^{n-1}$, $\mathcal{H}^{n-1}$-a.e.~$y \in \xi^\bot$}.
\end{equation}
\end{corollary}
Corollary \ref{c:int2} is obtained as a consequence of \eqref{e:int6.1} together with the following two facts: \emph{(i)} by an adaptation of the argument in the proof of \cite[Theorem 5.2]{dal}, hypothesis~(3) of Corollary \ref{c:int2} leads to a precise relation between the traces of $u_\xi$ and the traces of its one-dimensional slices $\hat{u}^\xi_y$ for $y \in \xi^\bot$ on any $(n-1)$-rectifiable set $R \subseteq \Omega$ (cf.~\eqref{e:corollary-relje}), and~\emph{(ii)} the jump set of any measurable function $u \colon \mathbb{R}^n \to \mathbb{R}^m$ is countably $(n-1)$-rectifiable, as was remarkably pointed out in~\cite{del}. Combining all these ingredients in the proper way, we end up with Corollary~\ref{c:int2}. We refer to Propositions~\ref{c:relje} and~\ref{p:prodmeas}.
Let us comment on the applicability of Theorem~\ref{t:int1} and Corollary~\ref{c:int2}. Conditions~(1) and~(3) of Corollary~\ref{c:int2} are usually satisfied in spaces admitting a slicing representation. For instance, by choosing~$g$ equal to the orthogonal projection on the second component, the $BD$-theory can be recovered and property~(1) is valid with $p=\infty$. This is guaranteed by the inequality $\eta_\xi(B) \leq |Eu|(B)$, valid for every $\xi \in \mathbb{S}^{n-1}$ and every Borel set~$B$. Moreover,~(3) follows by a control on $|D_\xi u_\xi|$. A similar argument leads to the validity of~(1) and~(3) in the $BV^{\mathcal{A}}$-framework with a different choice of $g$ dictated by the notion of spectral pairs (see, e.g.,~\cite{arr}).
The most delicate assumption is item (2) in Corollary \ref{c:int2}. In this regard we show in Section~\ref{s:applications} that the $\sigma$-finiteness of $\text{Osc}_u (\rho) $ follows from the $\sigma$-finiteness of the set of points $x \in \Omega$ satisfying
\begin{align}
\label{e:voscillation20000}
\limsup_{r \searrow 0 } \bigg(\inf_{a \in \Xi (\mathrm{B}_1(0))} \int_{\mathrm{B}_{1}(0)} |u_{r,x}-a| \wedge 1 \, \mathrm{d} z \bigg) >0,
\end{align}
where $u_{r,x} \colon \mathrm{B}_1(0) \to \mathbb{R}^m$ is defined as $u_{r,x}(z):=u(x+rz)$ and, loosely speaking, the space $\Xi (\mathrm{B}_1(0))$ consists of all functions $a \in C^{\infty}(\mathrm{B}_1(0); \mathbb{R}^m)$ whose one-dimensional restrictions $a(\gamma(t)) \cdot g(\gamma(t),\dot{\gamma}(t))$ are $1$-Lipschitz continuous whenever $\gamma$ solves $\ddot{\gamma}= F(\gamma,\dot{\gamma})$. Given a first-order constant-coefficient operator $\mathcal{A}$, when investigating the $\sigma$-finiteness of \eqref{e:voscillation20000} with respect to $\mathcal{H}^{n-1}$ for $u \in BV^{\mathcal{A}}$ one can make use of Poincar\'e-type inequalities (cf.~\cite[Theorem~3.1]{MR1480240} and \cite{Gme19}) and appeal to the standard density estimates for Radon measures (cf.~\cite{afp}).
This implies in particular the (weaker) $\sigma$-finiteness required in (2).
In Section \ref{s:GBD} we apply the above strategy to a novel space $GBD_F(\Omega)$ of functions with generalised bounded deformation, containing those maps $u$ whose one-dimensional slices~$t \mapsto u (\gamma(t)) \cdot \dot{\gamma} (t)$ have bounded variation when computed on solutions~$\gamma$ of the ODE \eqref{e:int1000}. Inspired by \cite{dal}, we say that $u \in GBD_F(\Omega)$ if there exists $\lambda \in \mathcal{M}^+_b(\Omega)$ such that for every Borel subset $B\subseteq \Omega$ and every curvilinear projection $P_\xi \colon \Omega \to \xi^\bot$
\begin{align*}
\int_{\xi^{\bot}} \Big( |{\rm D} \hat{u}^{\xi}_{y} | (B^{\xi}_{y} \setminus J^{1}_{\hat{u}^{\xi}_{y}}) & + \mathcal{H}^{0} (B^{\xi}_{y} \cap J^{1}_{\hat{u}^{\xi}_{y} } ) \Big)\, \mathrm{d} \mathcal{H}^{n-1}(y)
\leq \|\dot{\varphi}_\xi\|^2_{L^\infty}{\rm Lip}(P_{\xi};\Omega)^{n-1} \lambda(B)\,,
\end{align*}
where $J^1_{\hat{u}^{\xi}_{y}}:= \{t \in \Omega^\xi_y : \, |[\hat{u}^{\xi}_y (t)]| > 1 \}$ (see Definition \ref{d:GBD}). In order to conclude the $\sigma$-finiteness of \eqref{e:voscillation20000}, we show in Theorem \ref{t:poincare} a weak Poincar\'e inequality involving functions in $\Xi ({\rm B}_{1}(0))$ and the measure $\lambda$. To this purpose, we have to make a stronger assumption on the field~$F$. Namely, we suppose that $F$ satisfies a condition which we call \emph{Rigid Interpolation} \eqref{hp:F2}. Such a condition requires a (local) control on the $L^\infty$-norm of the curvilinear symmetric gradient, seen as an operator acting on smooth vector fields, in terms of a discrete semi-norm which takes into account the values of vector fields on the vertices of $n$-dimensional simplexes of $\mathbb{R}^n$ (see \eqref{e:rip5}). In the Riemannian setting, where~$F$ takes the form~\eqref{e:int1100}, in order to ensure the validity of~\eqref{hp:F2} we have to perform a careful analysis based on a blow-up argument for the Christoffel symbols around a point, allowing for a (local) control of the kernel of the curvilinear symmetric gradient (see Section \ref{sub:Riemann}). This analysis will also serve as a starting point for investigating further structural properties of the space $GBD_F(\Omega)$ as well as compactness and lower semicontinuity issues. This will be the subject of investigation in a forthcoming paper~\cite{AT_23}.
\subsection*{Plan of the paper.} In Section~\ref{s:preliminaries} we present the basic notation and preliminaries, as well as recall the definition of integralgeometric measure and a related rectifiability criterion from~\cite{Tas22}. In Section~\ref{s:curvilinear} we discuss the notion of (family of) curvilinear projections and present its main properties. Section~\ref{s:directional} is devoted to the proofs of Theorem~\ref{t:int1} and Corollary~\ref{c:int2}. In Section~\ref{s:applications} we investigate the relation between~\eqref{e:voscillation20000} and condition~(2) of Theorem~\ref{t:int1}. Finally, in Section \ref{s:GBD} we define the space $GBD_F(\Omega)$ and prove a Poincar\'e inequality in such setting (see Theorem \ref{t:poincare}), which, in turn, guarantees the applicability of Corollary \ref{c:int2}.
\section{Preliminaries and notation}
\label{s:preliminaries}
\subsection{Basic notation}
For $n , k \in \mathbb{N}$, we denote by~$\mathcal{L}^{n}$ and by~$\mathcal{H}^{k}$ the Lebesgue and the $k$-dimensional Hausdorff measure in~$\mathbb{R}^{n}$, respectively. The symbol $\mathbb{M}^n$ stands for the space of square matrices of order~$n$ with real coefficients, while~$\mathbb{M}^n_{sym}$ denotes its subspace of symmetric matrices. The set $\{e_{i}\}_{i=1}^{n}$ denotes the canonical basis of~$\mathbb{R}^{n}$ and $| \cdot|$ is the Euclidean norm on~$\mathbb{R}^{n}$. For every $\xi \in \mathbb{R}^{n}$, the map~$\pi_{\xi} \colon \mathbb{R}^{n} \to \mathbb{R}^{n}$ is the orthogonal projection over the hyperplane orthogonal to~$\xi$, which will be indicated with~$\xi^{\bot}$. For $x \in \mathbb{R}^{n}$ and~$\rho>0$, ${\rm B}_{\rho}(x)$ stands for the open ball of radius~$\rho$ and center~$x$ in~$\mathbb{R}^{n}$.
Given a sequence~$U_{j}$ of open subsets of~$\mathbb{R}^{n}$, an open subset $\Omega$ of~$\mathbb{R}^{n}$, and~$f_{j} \in C^{\infty}(U_{j}; \mathbb{R}^{k})$, we say that $f_{j} \to f$ in~$C^{\infty}_{loc} (\Omega; \mathbb{R}^{k})$ if~$f \in C^{\infty}(\Omega; \mathbb{R}^{k})$, $U_{j} \nearrow \Omega$, and $f_{j} \to f$ in~$C^{\infty}(W; \mathbb{R}^{k})$ for every $W \Subset \Omega$. For a Lipschitz function $f\colon \Omega \to \mathbb{R}^{k}$, we denote by ${\rm Lip} (f;\Omega)$ the least Lipschitz constant of~$f$ on~$\Omega$. We will drop the dependence on the set whenever it is clear from the context.
Given an open subset~$U$ of~$\mathbb{R}^{n}$,~$\mathcal{M}_{b}(U)$ (resp.~$\mathcal{M}_{b}^{+}(U)$) is the space of bounded Radon measures on~$U$ (resp. bounded and positive Radon measures on~$U$). Given a Borel map~$\psi\colon U \to V \subseteq \mathbb{R}^{k}$ and a measure~$\mu \in \mathcal{M}_{b}(U)$, the push-forward measure of~$\mu$ through~$\psi$ is denoted by~$\psi_{\sharp}(\mu) \in \mathcal{M}_{b}(V)$. The set of all Borel subsets of~$U$ is indicated by~$\mathcal{B}(U)$. For every $A \subseteq \mathbb{R}^{n} \times \mathbb{S}^{n-1}$, every $\xi \in \mathbb{S}^{n-1}$, and every $x \in \mathbb{R}^{n}$ we will denote
$$ A_{\xi} := \{ x \in \mathbb{R}^{n}: (x, \xi) \in A\} \qquad A_{x} := \{ \xi \in \mathbb{S}^{n-1}: (x, \xi) \in A\}\,.$$
For every $p \in [1, +\infty]$, $L^{p}(U; \mathbb{R}^{k})$ stands for the space of $p$-summable functions from~$U$ with values in~$\mathbb{R}^{k}$. The usual $L^{p}$-norm is denoted by~$\| \cdot\|_{L^{p}(U)}$. We will drop the set~$U$ in the notation of the norm when there is no chance of misunderstanding.
\subsection{A rectifiability criterion for a class of integralgeometric measures}
This section is based on the techniques developed in \cite{Tas22}.
\begin{definition}[Countably rectifiable set]
We say that a set $R \subseteq \Omega$ is countably $(n-1)$-rectifiable if and only if $R$ equals a countable union of images of Lipschitz maps~$(f_i)_i$ from some bounded sets $E_i \subseteq \mathbb{R}^{n-1}$ to $\Omega$.
\end{definition}
\begin{definition}[Rectifiable measure]
Let $\mu$ be a measure on $\Omega$. We say that~$\mu$ is $(n-1)$-rectifiable if there exist a countably $(n-1)$-rectifiable set $R$ and a real-valued measurable function~$\theta$ such that
\begin{equation*}
\mu = \theta \, \mathcal{H}^{n-1} \restr R\,.
\end{equation*}
\end{definition}
The notion of a \emph{transversal family of maps} will play a fundamental role throughout this section. The following definition is an adaptation of \cite[Definition 2.4]{hov}.
\begin{definition}[Transversality]
\label{d:transversal}
Let $\Omega \subseteq \mathbb{R}^n$ be open and let $S_i:=\{\xi \in \mathbb{S}^{n-1} : |\xi\cdot e_i| \geq 1/\sqrt{n} \}$ for $i=1,\dotsc,n$. We say that a family of Lipschitz maps $P_\xi \colon \Omega \to \xi^\bot$ for $\xi \in \mathbb{S}^{n-1}$ is a transversal family of maps on~$\Omega$ if for every $i=1,\dotsc,n$ the maps
\begin{align*}
P^i_\xi(x) & := \pi_{e_i}\circ P_\xi(x) \qquad \text{for } \xi \in S_i, \ x \in \Omega\,,
\\
T^{i}_{xx'}(\xi) & := \frac{P^i_\xi(x) - P^i_\xi(x')}{|x-x'|} \qquad \text{for } \xi \in S_{i}, \ x, x' \in \Omega \text{ with $x \neq x'$}
\end{align*}
satisfy the following properties:
\begin{enumerate}[label=(H.\arabic*),ref=H.\arabic*]
\item For every $x \in \Omega$ the map $\xi \mapsto P^i_\xi(x)$ belongs to $C^2(S_i;\mathbb{R}^{n-1})$ and
\begin{equation}
\label{e:h1}
\sup_{(\xi,x) \in S_i \times \Omega} |D^j_\xi P^i_\xi(x)| < \infty \qquad \text{for }j=1,2\, ;
\end{equation}
\item \label{hp:H2} There exists a constant $C' >0$ such that for every $\xi \in S_i$ and $x,x' \in \Omega$ with $x \neq x'$
\begin{equation}
\label{e:h2}
|T^{i}_{xx'}(\xi)| \leq C' \ \ \ \text{ implies } \ \ \
|\text{J}_\xi T^{i}_{xx'}(\xi)| \geq C';
\end{equation}
\item \label{hp:H3} There exists a constant $C'' >0$ such that
\begin{equation}
\label{e:h3}
| D^j_\xi T^{i}_{xx'}(\xi) | \leq C''\qquad \text{for }j=1,2\,
\end{equation}
for $\xi \in S_i$ and $x,x' \in \Omega$ with $x \neq x'$.
\end{enumerate}
\end{definition}
\begin{remark}\label{r:'}
Hypothesis \eqref{hp:H2} is equivalent to the following:
\begin{enumerate}[label=(H.2'), ref=H.2']
\item \label{hp:'}
There exist two constants $C'_1,C'_2 >0$ such that for every $\xi \in S_i$ and $x,x' \in \Omega$ with $x \neq x'$
\begin{equation}
\label{e:h2'}
|T^{i}_{xx'}(\xi)| \leq C'_1 \ \ \ \text{ implies } \ \ \
|\text{J}_\xi T^{i}_{xx'}(\xi)| \geq C'_2;
\end{equation}
\end{enumerate}
Indeed, \eqref{hp:H2} clearly implies \eqref{hp:'}. Vice versa, if \eqref{hp:'} holds true with $C'_1 \leq C'_2$, we can simply replace $C'_2$ with $C'_1$ in \eqref{e:h2'} and the implication remains true. Similarly, if $C'_1 > C'_2$, any triple $(\xi,x,x')$ satisfying $|T^{i}_{xx'}(\xi)| \leq C'_2$ satisfies $|T^{i}_{xx'}(\xi)| \leq C'_1$ as well; therefore, the implication \eqref{e:h2'} remains true by replacing $C'_1$ with $C'_2$.
\end{remark}
\begin{remark}
\label{r:observation}
The definition of transversality above slightly differs from that in \cite{Tas22}. Indeed, it turns out that for every $i=1,\dotsc,n$ $(P^i_\xi)_{\xi \in S_i }$ is transversal in the sense of \cite[Definition 3.3]{Tas22}, while the entire family $(P_\xi)_{\xi \in \mathbb{S}^{n-1}}$ is not. However, the relevant property of transversality contained in \cite{Tas22} can be easily transferred to $(P_\xi)_{\xi \in \mathbb{S}^{n-1}}$ simply by writing $(P_\xi)_{\xi \in \mathbb{S}^{n-1}} = \bigcup_{i=1}^n (P^i_\xi)_{\xi \in S_i }$.
\end{remark}
We are now in a position to introduce a curvilinear version of the codimension-one Favard integralgeometric measure.
\begin{definition}
\label{d:Favard} Let $(P_\xi)_{\xi \in \mathbb{S}^{n-1}}$ be a family of transversal maps on~$\Omega$. We define the Borel regular measure $\tilde{\mathcal{I}}^{n-1}$ on $\Omega$ as
\begin{align}
\label{e:farvard1}
\tilde{\mathcal{I}}^{n-1}(B)&:= \int_{\mathbb{S}^{n-1}} \bigg( \int_{\xi^\bot} \mathcal{H}^0(B \cap P^{-1}_\xi(y)) \, \mathrm{d} \mathcal{H}^{n-1}(y) \bigg) \mathrm{d} \mathcal{H}^{n-1}(\xi) \qquad B \in \mathcal{B}(\Omega)\,, \\
\label{e:farvard2}
\tilde{\mathcal{I}}^{n-1}(E) &:= \inf \{ \tilde{\mathcal{I}}^{n-1}(B) : E \subset B, \ B \in \mathcal{B}(\Omega) \} \qquad E \subseteq \Omega\,.
\end{align}
Notice that the required measurability for \eqref{e:farvard1} can be obtained by means of the measurable projection theorem \cite[Section 2.2.13]{fed} with a similar argument as in \cite[Section 5]{Tas22}.
\end{definition}
Furthermore we need to define the following additional class of measures.
\begin{definition}
Let $(P_\xi)_{\xi \in \mathbb{S}^{n-1}}$ be a family of transversal maps on~$\Omega$ and let $(\eta_\xi)_{\xi \in \mathbb{S}^{n-1}}$ be a family of Borel regular measures on~$\mathbb{R}^n$ satisfying
\begin{equation}
\label{e:condigm2000}
\xi \mapsto \eta_\xi(A_\xi) \text{ is $\mathcal{H}^{n-1}$-measurable for every $A \in \mathcal{B} ( \Omega \times \mathbb{S}^{n-1} )$.}
\end{equation}
Let moreover $f_B(\xi):= \eta_\xi(B)$ for every $\xi \in \mathbb{S}^{n-1}$ and $B \in \mathcal{B}( \Omega)$. For $p \in [1, +\infty]$, we define the set function
\begin{equation}
\label{e:condigm1000}
\zeta_p(B) := \|f_B \|_{L^p(\mathbb{S}^{n-1})}.
\end{equation}
Via the classical Carath\'eodory construction we define the measure
\begin{equation}
\label{e:caratheodoryc2}
\mathscr{I}_p^{n-1}(E) := \sup_{\delta>0} \, \inf_{G_\delta} \sum_{B \in G_\delta} \zeta_p(B),
\end{equation}
whenever $E \subseteq \Omega$ and where $G_\delta$ is the family of all countable Borel coverings of $E$ made of sets having diameter less than or equal to~$\delta$.
\end{definition}
\begin{definition}
Let $(P_\xi)_{\xi \in \mathbb{S}^{n-1}}$ be a family of transversal maps on~$\Omega$ and let $(\eta_\xi)_{\xi \in\mathbb{S}^{n-1}}$ be as in \eqref{e:condigm2000}. We define the set functions
\begin{align}
\label{e:condigm1.0.0}
\hat{\zeta}(A) & := \int_{\mathbb{S}^{n-1}} \eta_\xi(A_\xi) \, \mathrm{d} \mathcal{H}^{n-1}(\xi) \qquad \text{for $A \in \mathcal{B}( \Omega \times \mathbb{S}^{n-1})$}\,,
\\
\label{e:caratheodoryc2.1.0}
\hat{\mathscr{I}}_{n-1}(F) & := \sup_{\delta>0} \, \inf_{G'_\delta} \sum_{B \in G'_\delta} \hat{\zeta}(B) \qquad \text{for $F \subseteq \Omega \times \mathbb{S}^{n-1}$}\,,
\end{align}
where~$G'_\delta$ is the family of all countable Borel coverings of~$F$ made of sets having diameter less than or equal to $\delta$.
\end{definition}
We have the following general representation result for $\mathscr{I}^{n-1}_{1}$ and~$\hat{\mathscr{I}}_{n-1}$.
\begin{proposition}
\label{p:coincidence}
Under the previous assumption the measures $\mathscr{I}^{n-1}_1$ and $\hat{\mathscr{I}}_{n-1}$ satisfy
\begin{align*}
\mathscr{I}^{n-1}_1(E) &= \inf_{\substack{E \subseteq B \\ B \in \mathcal{B}(\Omega)}} \int_{\mathbb{S}^{n-1}} \eta_\xi(B) \, \mathrm{d} \mathcal{H}^{n-1}(\xi) \qquad \text{for every $E \subseteq \Omega$\,,} \\
\hat{\mathscr{I}}_{n-1}(F) &= \inf_{\substack{F \subseteq A \\ A \in \mathcal{B} ( \Omega \times \mathbb{S}^{n-1}) }} \int_{\mathbb{S}^{n-1}} \eta_\xi(A_\xi) \, \mathrm{d} \mathcal{H}^{n-1}(\xi) \qquad \text{for every $F \subseteq \Omega \times \mathbb{S}^{n-1}$}.
\end{align*}
\end{proposition}
In the next proposition we state a disintegration property of~$\hat{\mathscr{I}}_{n-1}$ w.r.t.~$\mathscr{I}^{n-1}_1$ and the measures~$\eta_{\xi}$.
\begin{proposition}
\label{p:fproposition}
Let $(P_{\xi})_{\xi \in \mathbb{S}^{n-1}}$ be a transversal family of maps in~$\Omega$, let $(\eta_{\xi})_{\xi \in \mathbb{S}^{n-1}}$ be a family of Radon measures as in~\eqref{e:condigm2000}--\eqref{e:caratheodoryc2.1.0}.
Assume that there exists $p \in (1, +\infty]$ such that $\mathscr{I}_p^{n-1}$ is finite and that
\begin{equation}
\label{e:abscont}
(P_{\xi})_{\sharp} \, \eta_\xi \ll \mathcal{H}^{n-1} \restr \xi^{\bot} \ \text{for $\mathcal{H}^{n-1}$-a.e. } \xi \in \mathbb{S}^{n-1}\,.
\end{equation}
Then, the measure $\hat{\mathscr{I}}_{n-1}$ can be disintegrated as
\begin{equation*}
\hat{\mathscr{I}}_{n-1} = (f_x \, \mathcal{H}^{n-1} \restr \mathbb{S}^{n-1}) \otimes \mathscr{I}^{n-1}_1 ,
\end{equation*}
where $f_x \colon \mathbb{S}^{n-1} \to \mathbb{R}$ are $\mathcal{H}^{n-1}$-measurable functions with $\int_{\mathbb{S}^{n-1}}f_x \, \mathrm{d} \mathcal{H}^{n-1}=1$ for $\mathscr{I}^{n-1}_{1}$-a.e. $x \in \Omega$. Moreover, the family $(f_x)_{x \in \Omega}$ can be chosen in such a way that the function $f \colon \Omega \times \mathbb{S}^{n-1} \to \mathbb{R}$ defined by $f(x,\xi):=f_x(\xi)$ is Borel.
\end{proposition}
\begin{proof}
It follows by combining the disintegration theorem with \cite[Proposition 4.7]{Tas22} and \cite[Remark 4.3]{Tas22}.
\end{proof}
We present the definition of \emph{integralgeometric measure}.
\begin{definition}
\label{d:igm}
Let $(P_\xi)_{\xi \in \mathbb{S}^{n-1}}$ be a family of transversal maps on~$\Omega$, let the family~$(\eta_{\xi})_{\xi \in \mathbb{S}^{n-1}}$ be as in~\eqref{e:condigm2000}, and let the measure $\mathscr{I}^{n-1}_p$ be as in~\eqref{e:condigm1000}--\eqref{e:caratheodoryc2}. We say that~$\mathscr{I}^{n-1}_p$ is integralgeometric if and only if \eqref{e:abscont} holds true and there exists $E \in \mathcal{B}( \Omega)$ such that
\begin{align}
\label{e:condigm5}
&\eta_\xi(\Omega \setminus E)=0 \ \ \text{for $\mathcal{H}^{n-1}$-a.e. $\xi \in \mathbb{S}^{n-1}$}\\
\label{e:condigm6}
&\mathcal{H}^0(E \cap P^{-1}_\xi(y)) < +\infty \text{ for $\mathcal{H}^{n-1}$-a.e. }\xi \in \mathbb{S}^{n-1} \text{, $(P_{\xi})_{\sharp}\eta_\xi$-a.e. }y \in \xi^\bot.
\end{align}
\end{definition}
We conclude this section with a fundamental rectifiability criterion for integralgeometric measures (see \cite[Theorem 1.5]{Tas22}).
\begin{theorem}
\label{t:rectheorem}
Let $\Omega$ be an open subset of~$\mathbb{R}^{n}$, let $(P_\xi)_{\xi \in \mathbb{S}^{n-1}}$ be a family of transversal maps on~$\Omega$, and let $(\eta_{\xi})_{\xi \in \mathbb{S}^{n-1}}$ and $\mathscr{I}^{n-1}_{p}$ be as in~\eqref{e:condigm2000}--\eqref{e:caratheodoryc2} for $p \in [1, +\infty]$. Assume that there exists $p \in (1, +\infty]$ such that $\mathscr{I}_p^{n-1}$ is a finite integralgeometric measure. Then, $\mathscr{I}^{n-1}_1$ is $(n-1)$-rectifiable.
\end{theorem}
\section{Curvilinear projections}
\label{s:curvilinear}
This section is dedicated to the basic definitions of curvilinear projections (cf.~Definitions~\ref{d:param} and~\ref{d:CP}) and to the local construction of a family of curvilinear projections (see Section~\ref{sub:curvpro}). Further properties of curvilinear projections are studied in Section~\ref{s:technical}.
\subsection{Assumptions and basic definitions}
\label{sub:basic}
From now on we fix a field $F \in C^{\infty} (\mathbb{R}^{n} \times \mathbb{R}^{n}; \mathbb{R}^{n})$ which is $2$-homogeneous in the second variable, namely, for every $x,v \in \mathbb{R}^n$ and $\alpha \in \mathbb{R}$ we have
\begin{equation}
\label{e:quadratic}
F(x,\alpha v) = \alpha^2 F(x,v) \,.
\end{equation}
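A short computation shows how~\eqref{e:quadratic} interacts with time rescaling, a fact used repeatedly below: if $u$ solves $\ddot{u} = F(u, \dot{u})$ and $s \in \mathbb{R}$, then $u_{s}(t):= u(st)$ solves the same equation, since by~\eqref{e:quadratic}
\begin{equation*}
\ddot{u}_{s}(t) = s^{2} \ddot{u}(st) = s^{2} F\big(u(st), \dot{u}(st)\big) = F\big(u(st), s\, \dot{u}(st)\big) = F\big(u_{s}(t), \dot{u}_{s}(t)\big)\,.
\end{equation*}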
We now give the definitions of parametrized maps and of curvilinear projections.
\begin{definition}[Parametrized maps]
\label{d:param-maps}
Let $\Omega$ be an open subset of~$\mathbb{R}^n$ and $\xi \in \mathbb{S}^{n-1}$. We say that a map $P \colon \Omega \to \xi^{\bot}$ is {\em parametrized} on~$\Omega$ if and only if there exist $\rho,\tau>0$ and a smooth Lipschitz map $ \varphi \colon \{ y + t\xi: (y, t) \in [\xi^{\bot} \cap \mathrm{B}_{\rho}(0)] \times (-\tau, \tau)\} \to \mathbb{R}^{n}$ such that
\begin{enumerate}[label=(\arabic*), ref=(\arabic*)]
\item $\Omega \subseteq \text{Im}(\varphi )$;
\item $\varphi^{-1} \restr \Omega$ is a bi-Lipschitz diffeomorphism with its image;
\item $ P(\varphi ( y + t\xi )) = y$ for every $(y, t) \in [\xi^{\bot} \cap \mathrm{B}_{\rho}(0)] \times (-\tau, \tau)$ such that $y + t\xi \in \varphi^{-1} (\Omega)$.
\end{enumerate}
\end{definition}
\begin{remark}
\label{r:notation-velocity}
Given $\varphi \colon \{ y + t\xi: (y, t) \in [\xi^{\bot} \cap \mathrm{B}_{\rho}(0)] \times (-\tau, \tau)\} \to \mathbb{R}^{n}$ as in Definition~\ref{d:param-maps}, for simplicity of notation we denote by $\dot{\varphi}$ and $\ddot{\varphi}$ the first and second derivatives w.r.t.~$t$ of the function $t \mapsto \varphi (y + t\xi)$.
\end{remark}
\begin{remark}
Conditions (2) and (3) of Definition~\ref{d:param-maps} imply
\begin{enumerate}[label=(\arabic*), ref=(\arabic*), ]
\setcounter{enumi}{3}
\item
$ \varphi \big( P (x) + (\varphi^{-1}(x) \cdot \xi) \xi \big) = x$ for every $x \in \Omega$.
\end{enumerate}
We will more compactly denote by~$t^\xi_x$ the real number $ \varphi^{-1}(x) \cdot \xi$ for every $x \in \Omega$ and every $\xi \in \mathbb{S}^{n-1}$. Whenever~$\xi$ is fixed and there is no misunderstanding, we simply drop the index~$\xi$ and therefore write~$t_x$ instead of~$t^\xi_x$.
\end{remark}
\begin{definition}[Velocity field]
Let~$\Omega$ be a bounded open subset of~$\mathbb{R}^{n}$, $\xi \in \mathbb{S}^{n-1}$, and let $P \colon \Omega \to \xi^{\bot}$ be a map parametrized by~$\varphi$ on~$\Omega$. For every $x \in \Omega$ we define the {\rm velocity field}
\begin{equation*}
\xi_{\varphi}(x):= \dot{\varphi}(P (x) +t_x \xi).
\end{equation*}
\end{definition}
\begin{definition}[Curvilinear projections]
\label{d:CP-maps}
Let $\Omega$ be an open subset of~$\mathbb{R}^n$ and $\xi \in \mathbb{S}^{n-1}$. We say that a smooth Lipschitz map $P \colon \Omega \to \xi^{\bot}$ is a \emph{curvilinear projection} (with respect to $F$) on~$\Omega$ if the following holds:
\begin{enumerate}
\item $P$ is parametrized on~$\Omega$ by $\varphi \colon \{ y + t\xi: (y, t) \in [\xi^{\bot} \cap \mathrm{B}_{\rho}(0)] \times (-\tau, \tau)\} \to \mathbb{R}^{n}$;
\item For every $(y, t) \in [\xi^{\bot} \cap {\rm B}_\rho(0)] \times (-\tau,\tau)$ we have
\begin{equation*}
\ddot{\varphi} (y + t\xi) = F(\varphi(y + t\xi) ,\dot{\varphi}(y + t\xi) )\,.
\end{equation*}
\end{enumerate}
\end{definition}
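To fix the ideas, in the model case $F \equiv 0$ (which trivially satisfies~\eqref{e:quadratic}) the map $\varphi(y + t\xi) := y + t\xi$ fulfills conditions (1)--(2) of Definition~\ref{d:CP-maps}, and the associated curvilinear projection reduces to the orthogonal projection
\begin{equation*}
P(x) = x - (x \cdot \xi)\, \xi \qquad \text{for every } x \in \Omega\,.
\end{equation*}
In this sense, curvilinear projections can be regarded as a nonlinear counterpart of the family of orthogonal projections onto the hyperplanes~$\xi^{\bot}$.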
In our analysis, we will further need the following notions of parametrized family and of family of curvilinear projections.
\begin{definition}[Parametrized family]
\label{d:param}
Let $\Omega$ be an open subset of~$\mathbb{R}^n$. We say that a family $P_\xi \colon \Omega \to \xi^\bot$ for $\xi \in \mathbb{S}^{n-1}$ is {\em parametrized} on~$\Omega$ if and only if there exist $\rho,\tau>0$, an open subset~$A$ of~$\mathbb{R}^{n} \times \mathbb{S}^{n-1}$, and a smooth Lipschitz map $\varphi \colon A \to \mathbb{R}^{n}$ such that
\begin{enumerate}[label=(\arabic*), ref=(\arabic*)]
\item for every $\xi \in \mathbb{S}^{n-1}$ we have $A_{\xi} = \{ y + t\xi: (y, t) \in [\xi^\bot \cap {\rm B}_\rho(0)] \times (-\tau,\tau) \} $;
\item for every $\xi \in \mathbb{S}^{n-1}$, $P_{\xi}$ is parametrized on~$\Omega$ by the map $\varphi_{\xi} := \varphi (\cdot, \xi) \colon A_{\xi} \to \mathbb{R}^{n}$.
\end{enumerate}
\end{definition}
We also give the definition of families of curvilinear projections.
\begin{definition}[Family of curvilinear projections]
\label{d:CP}
Let $\Omega$ be an open subset of~$\mathbb{R}^n$. We say that a family of maps $P_\xi \colon \Omega \to \xi^\bot$ for $\xi \in \mathbb{S}^{n-1}$ is a family of \emph{curvilinear projections} on~$\Omega$ if the following conditions hold:
\begin{enumerate}[label=(\arabic*), ref=(\arabic*)]
\item the family $(P_{\xi})_{\xi \in \mathbb{S}^{n-1}}$ is parametrized by $\varphi \colon A \to \mathbb{R}^{n}$;
\item for every $\xi \in \mathbb{S}^{n-1}$, $P_\xi$ is a curvilinear projection on~$\Omega$ with parametrization $\varphi_{\xi} = \varphi(\cdot, \xi)$;
\item $(P_\xi)_{\xi \in \mathbb{S}^{n-1}}$ is a transversal family of maps on~$\Omega$;
\item for every $x \in \Omega$, the map $\xi \mapsto \xi_{\varphi}(x)/|\xi_{\varphi}(x)|$ is a diffeomorphism from $\mathbb{S}^{n-1}$ onto itself.
\end{enumerate}
\end{definition}
We conclude this section by defining suitable slices of a measurable function~$u \colon \Omega \to \mathbb{R}^{m}$ and of a subset~$B$ of~$\Omega$ w.r.t.~a curvilinear projection. For this purpose we fix a map $g \colon \Omega \times \mathbb{R}^n \to \mathbb{R}^m$ satisfying the following properties:
\begin{enumerate}[label=(G.\arabic*),ref=G.\arabic*]
\item \label{G1} $g \in C(\Omega \times \mathbb{R}^{n} ; \mathbb{R}^{m})$;
\item \label{G2}
For every $x \in \Omega$ and $\Sigma \subset \mathbb{S}^{n-1}$ with $\mathcal{H}^{n-1}(\Sigma)>0$ we find an open neighborhood $U$ of $x$, an integer $0 \leq k \leq m$, and vectors $\{\xi_1,\dotsc,\xi_k\} \subset \Sigma$ such that
\begin{align*}
\text{span}\{g(z,\xi_1), \dotsc,g(z,\xi_k)\} = \text{span}\{g(z,v) : v \in \mathbb{R}^n\} \qquad \text{for every $z \in U$}.
\end{align*}
\end{enumerate}
\begin{remark}
We notice that for most of our arguments we only need~\eqref{G1}, while \eqref{G2} will be used in Corollary~\ref{c:int2} and in the corresponding definitions.
\end{remark}
We now give two definitions.
\begin{definition}
\label{d:mathfraku}
Let $\Omega$ be an open subset of~$\mathbb{R}^{n}$ and $g \colon \Omega \times \mathbb{R}^n \to \mathbb{R}^m$ satisfying~\eqref{G1}--\eqref{G2}. Then, we define a continuous map $\pi \colon \Omega \times \mathbb{R}^m \to \mathbb{R}^m$ in such a way that $\pi(x,\cdot)$ coincides with the orthogonal projection of $\mathbb{R}^m$ onto $\text{span}\{ \text{Im}(g(x,\cdot))\}$ for every $x \in \Omega$. Moreover, for every $u \colon \Omega \to \mathbb{R}^m$ we define $\mathfrak{u} \colon \Omega \to \mathbb{R}^m$ as
\begin{equation*}
\mathfrak{u} (x):=\pi(x,u(x))\,.
\end{equation*}
\end{definition}
\begin{definition}[Slices]
\label{d:slices}
Let $\Omega$ be an open subset of~$\mathbb{R}^{n}$, $\xi \in \mathbb{S}^{n-1}$, let $P \colon \Omega \to \xi^{\bot}$ be a curvilinear projection on~$\Omega$ parametrized by $\varphi \colon \{ y + t\xi: (y, t) \in [\xi^{\bot} \cap {\rm B}_{\rho} (0)] \times (-\tau, \tau)\} \to \mathbb{R}^{n}$, and let $g \colon \Omega \times \mathbb{R}^{n} \to \mathbb{R}^{m}$ satisfy~\eqref{G1}. For every $B \subseteq \Omega$ and every $y \in \xi^{\bot} \cap {\rm B}_{\rho} (0)$ we define
\begin{displaymath}
B^{\xi}_{y}:= \{ t \in \mathbb{R}: \, \varphi (y + t\xi ) \in B\}\,.
\end{displaymath}
For every measurable function $u \colon \Omega \to \mathbb{R}^{m}$, we define $\hat{u}^{\xi}_y \colon \Omega^{\xi}_{y} \to \mathbb{R}$ as
\begin{equation*}
\hat{u}^{\xi}_{y} (t) := u(\varphi ( y+ t\xi )) \cdot g(\varphi (y + t\xi ),\dot{\varphi} (y + t\xi))\,.
\end{equation*}
We further define $u_\xi \colon \Omega \to \mathbb{R}$ by
\begin{equation*}
u_\xi(x) := u(x) \cdot g(x, \xi_\varphi(x)) \,,
\end{equation*}
and we notice the following identity
\begin{equation}
\label{e:sliceide}
u_\xi(\varphi ( y + t \xi ) ) = \hat{u}^\xi_y(t) \qquad \text{for }\xi \in \mathbb{S}^{n-1} \text{ and } (y,t) \in [\xi^\bot \cap \mathrm{B}_\rho(0)] \times (-\tau,\tau).
\end{equation}
Eventually, for a measurable function $v \colon \Omega \to \mathbb{R}^{m}$ we also set $v^{\xi}_{y} (t) := v(\varphi (y + t\xi))$ for $t \in \Omega^{\xi}_{y}$.
\end{definition}
\subsection{A technical result on curvilinear projections}
\label{s:technical}
This section is devoted to a technical result concerning curvilinear projections.
We start by introducing the \emph{exponential map}.
\begin{definition}
\label{d:exp}
Let $F \in C^{\infty} (\mathbb{R}^{n} \times \mathbb{R}^{n}; \mathbb{R}^{n})$ satisfy~ \eqref{e:quadratic}. For every $x \in \mathbb{R}^{n}$ we define, where it exists, the {\em exponential map} $\text{exp}_{x} \colon \mathbb{R}^n \to \mathbb{R}^n$ as $\text{exp}_{x}(\xi):= v_{\xi, x} (1)$, where $t\mapsto v_{\xi, x} (t)$ solves
\begin{equation}
\label{e:system-exponential}
\begin{cases}
\ddot{u}(t) = F(u(t),\dot{u}(t)), \ &t \in \mathbb{R}\,, \\
u(0)=x\,,&\\
\dot{u}(0)=\xi\,.&
\end{cases}
\end{equation}
\end{definition}
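A guiding example is the geodesic case: if $F^{k}(x,v) = - \Gamma^{k}_{ij}(x)\, v^{i} v^{j}$, where $\Gamma^{k}_{ij}$ are the (smooth) Christoffel symbols of a Riemannian metric on~$\mathbb{R}^{n}$, then~\eqref{e:quadratic} is satisfied and system~\eqref{e:system-exponential} becomes the geodesic equation
\begin{equation*}
\ddot{u}^{k}(t) + \Gamma^{k}_{ij}(u(t))\, \dot{u}^{i}(t)\, \dot{u}^{j}(t) = 0\,, \qquad u(0) = x\,, \quad \dot{u}(0) = \xi\,,
\end{equation*}
so that $\text{exp}_{x}$ coincides with the Riemannian exponential map at~$x$.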
In the next definition we introduce the concept of injectivity radius.
\begin{definition}
\label{d:inj}
For every $x \in \mathbb{R}^n$ we define the injectivity radius $\text{inj}_{x}\in [0, + \infty]$ as the supremum of all $r>0$ for which $\text{exp}_{x} \restr {\rm B}_{r}(0)$ is well defined and $\text{exp}^{-1}_{x} \restr {\rm B}_{r}(x)$ is a diffeomorphism with its image.
\end{definition}
The definition of~$\text{exp}_{x}$ in a small ball ${\rm B}_{r} (0)$ is justified by the following lemma.
\begin{lemma}
\label{l:exp}
Let $F \in C^{\infty}( \mathbb{R}^n \times \mathbb{R}^n ; \mathbb{R}^n)$ satisfy~ \eqref{e:quadratic}.
For every~$x \in \Omega$ we have $\emph{inj}_{x}>0$.
\end{lemma}
\begin{proof}
By the $2$-homogeneity of $F(x,\cdot)$ we get that
\begin{equation}
\label{e:retr17}
v_{s\xi, x} (t) = v_{\xi, x} (st)\qquad \text{for $s,t \in [0, + \infty)$, $\xi \in \mathbb{R}^n$.}
\end{equation}
Hence, by the local well-posedness of ODEs we have that there exists~$r>0$ such that~$\text{exp}_{x}$ is well-defined on~${\rm B}_{r}(0)$. For every $i \in \{1,\dotsc,n \}$ we have that
\[
D\exp_{x}(0) e_i = \lim_{t \to 0^+} \frac{v_{te_{i}, x} (1) - v_{0, x} (1)}{t} = \lim_{t \to 0^+} \frac{v_{e_{i}, x} (t) - v_{e_{i}, x} (0) }{t} = \dot{v}_{e_{i}, x} ( 0) = e_i\,.
\]
Thus, the differential of $\text{exp}_{x}$ at $0$ is the identity. Applying the inverse function theorem, we find a sufficiently small $\tilde r>0$ such that $\text{exp}^{-1}_{x} \restr {\rm B}_{\tilde r}(x)$ is a diffeomorphism with its image. Therefore $\text{inj}_{x} \geq \min\{ r, \tilde{r}\} >0$.
\end{proof}
\begin{definition}
Let $F \in C^{\infty}( \mathbb{R}^n \times \mathbb{R}^n ; \mathbb{R}^n)$ satisfy~\eqref{e:quadratic} and let $(P_\xi)_{\xi \in \mathbb{S}^{n-1}}$ be a family of curvilinear projections on~$\Omega$. Thanks to Lemma~\ref{l:exp}, for every $x \in \Omega$ and every $0 < \overline{r} < \text{inj}_{x}$ we may define the map $\phi_{x}\colon \mathrm{B}_{\overline{r}}(x) \setminus\{x\} \to \mathbb{S}^{n-1}$ as
\begin{equation}
\label{e:retr12}
\phi_{x}(z):=\psi_{x}^{-1}( \phi_0(\text{exp}^{-1}_{x}(z)))\qquad\text{for every }z \in \mathrm{B}_{\overline{r}}(x) \setminus \{x\}\,,
\end{equation}
where we set $\phi_0(z):=\frac{z}{|z|}$ and $\psi_{x}(\xi):=\phi_0(\xi_\varphi(x))$.
\end{definition}
The following proposition holds.
\begin{proposition}
\label{p:retr}
Let $F \in C^{\infty}( \mathbb{R}^n \times \mathbb{R}^n ; \mathbb{R}^n)$ satisfy~ \eqref{e:quadratic} and let~$(P_\xi)_{\xi \in \mathbb{S}^{n-1}}$ be a family of curvilinear projections on~$\Omega$. For every $x \in \Omega$, let $0 < \overline{r} < \emph{inj}_{x}$. Then, $\phi_{x} \in C^{1}( \mathrm{B}_{\overline{r}}(x) \setminus \{x\} ; \mathbb{S}^{n-1})$ and
\begin{align}
\label{e:chi1.1}
&P_{\xi}(z)=P_{\xi}(x) \ \ \text{if and only if $\xi = \phi_{x}(z)$ for every $z \in \mathrm{B}_{\overline{r}}(x)\setminus \{x\}$.}\\
\label{e:retr2}
& |\emph{J}\phi_{x}(z)| \leq \frac{C_{x}}{|z-x|^{n-1}} \qquad \text{for every $z \in \mathrm{B}_{\overline{r}}(x) \setminus \{x\}$},
\end{align}
for some constant $C_{x}>0$. Moreover, if we assume that
\begin{equation}
\label{e:retr9.1.3}
\inf_{(z,\xi) \in {\rm B}_{\overline{r}}(x)\times \mathbb{S}^{n-1}}|\text{J}_z P_{\xi}(z)|>0 \ \,
\end{equation}
then we find a constant $C'_{x}>0$ such that
\begin{equation}
\label{e:retr2.1}
\frac{C'_{x}}{|z-x|^{n-1}}\leq |\emph{J}\phi_{x}(z)| \qquad \text{for every $z \in \mathrm{B}_{\overline{r}}(x) \setminus \{x\}$.}
\end{equation}
\end{proposition}
\begin{proof}
For simplicity of notation, we drop the index~$x$ in the function~$\phi_{x}$. For every $z \in \mathrm{B}_{\overline{r}}(x) \setminus \{x\}$, choosing $\xi_{z} = \text{exp}^{-1}_{x}(z)$ in~\eqref{e:system-exponential} we get that
\begin{equation}
\label{e:retr18}
v_{\xi_{z}, x} (1) = z\,.
\end{equation}
Therefore, arguing as in~\eqref{e:retr17}, the solution~$u$ of
\begin{equation*}
\begin{cases}
\ddot{u}(t) = F(u(t),\dot{u}(t)) &t \in \mathbb{R}\,, \\
u(0)=x\,,&\\
\dot{u}(0)=\phi_0(\text{exp}^{-1}_{x}(z))\,,&
\end{cases}
\end{equation*}
satisfies $u(| \xi_{z} |t)= v _{\xi_{z}, x} (t)$. Setting $\eta:=\psi_{x}^{-1}(\phi_0(\text{exp}^{-1}_{x}(z)))$, from the definition of velocity field we have that
\[
\varphi_{\eta}( P_{\eta} ( x ) + t_{x}^{\eta} \eta) = x \ \ \text{ and } \ \ \phi_0(\text{exp}^{-1}_{x}(z)) = \phi_0(\dot{\varphi}_{\eta}( P_{\eta} (x) + t_{x}^{\eta} \eta)).
\]
Since the curve $\gamma(t):= \varphi_{\eta}(P_{\eta} (x) + t\eta)$ satisfies $\ddot{\gamma}(t) = F(\gamma(t),\dot{\gamma}(t)), \ \ t \in \mathbb{R}$, we infer from the $2$-homogeneity of $F(z,\cdot)$ the existence of a constant $\alpha$ for which $u(|\xi_z|t)=\varphi_{\eta}(P_{\eta}(x) + \alpha (t + t_{x}^{\eta}) \eta)$ for every~$t$. By~\eqref{e:retr18} and by the definition of~$\phi(z)$ we thus deduce
\[
\varphi_{\phi(z)} ( P_{\phi(z) }(x) + \alpha( 1 + t_{x}^{\phi(z)}) \phi(z))= z\,.
\]
Therefore, we can make use of (3) of Definition~\ref{d:param-maps} to infer the validity of the \emph{if} part of implication \eqref{e:chi1.1}. The \emph{only if} part of \eqref{e:chi1.1} can be obtained in a similar way.
From property \eqref{hp:H2} of transversal maps we have the existence of a constant $C>0$ such that
\begin{equation}
\label{e:retr3}
|\text{J}_\xi( P_{\phi(z)}(z)- P_{\phi(z)}(x))| \geq C |z-x|^{n-1} \qquad \text{for every } z \in {\rm B}_{\overline{r}}(x) \setminus \{x\}.
\end{equation}
Therefore, we are in a position to apply the implicit function theorem and deduce that $z \mapsto \phi(z)$ is $C^1$-regular on the open set ${\rm B}_{\overline{r}}(x) \setminus \{x\}$. We can thus compute the jacobian of $\phi$ as
\begin{equation}
\label{e:retr15}
\text{J}\phi(z) = \frac{\text{J}_z P_{\phi(z)}(z)}{\text{J}_\xi( P_{\phi(z)}(z)- P_{\phi(z)}(x))} \qquad \text{for every }z \in {\rm B}_{\overline{r}}(x)\setminus \{x\}.
\end{equation}
Since the maps $ z \mapsto P_{\xi}(z)$ are Lipschitz continuous in~$\Omega$ we can make use of property~\eqref{e:h3} to deduce that their Lipschitz constants are uniformly bounded with respect to $\xi \in \mathbb{S}^{n-1}$. Therefore, properties~\eqref{e:retr3}--\eqref{e:retr15} yield
\begin{equation*}
|\text{J}\phi(z)| \leq \frac{C_{x}}{|z-x|^{n-1}}\qquad \text{ for every } z \in {\rm B}_{\overline{r}}(x) \setminus \{x\},
\end{equation*}
for some constant $C_{x}>0$ depending on~$x$. This proves~\eqref{e:retr2}.
Now using property \eqref{hp:H3} of transversal maps, we have
\begin{equation}
\label{e:retr9.1.2}
|\text{J}_\xi( P_{\phi(z)}(z)- P_{\phi(z)}(x))| \leq C'|z-x|^{n-1}\qquad \text{for every }z \in {\rm B}_{\overline{r}}(x)\setminus \{x\}.
\end{equation}
Therefore, by virtue of~\eqref{e:retr9.1.3} and of~\eqref{e:retr15}--\eqref{e:retr9.1.2}, we find $C'_{x}>0$ such that
\begin{equation*}
|\text{J} \phi(z)| \geq \frac{C'_{x}}{|z-x|^{n-1}} \qquad \text{for every }z \in {\rm B}_{\overline{r}}(x) \setminus \{x\}.
\end{equation*}
This concludes the proof of~\eqref{e:retr2.1} and of the proposition.
\end{proof}
\subsection{Local existence of families of curvilinear projections}
\label{sub:curvpro}
In this subsection we show that it is always possible to locally construct a family of curvilinear projections. This is done by considering suitable flows of solutions of second order ODEs associated to the field $F$. A crucial point in our analysis is that $F$ is $2$-homogeneous in the second variable, which allows us to properly rescale the solution to the ODEs system driven by~$F$.
\begin{definition}
\label{d:curvpro}
Let $x_0 \in \mathbb{R}^n$ and $\rho_{0}>0$. For every $\xi \in {\rm B}_{2}(0)$ and every $y \in \xi^{\bot}\cap {\rm B}_{\rho_{0}}(0)$ we consider the solution $t \mapsto u_{\xi, y}(t)$ of the ODE system
\begin{equation*}
\begin{cases}
\ddot{u}(t) = F(u(t), \dot{u}(t)) & t \in \mathbb{R},\\
u(0)=y+x_0\,,\\
\dot{u}(0)=\xi\,,
\end{cases}
\end{equation*}
which is well-defined for $t \in (-\tau, \tau)$, for a suitable $\tau >0$ depending only on~$x_{0}$ and~$\rho_{0}$, but not on~$\xi$ and~$y$. Then, we define $\varphi_{\xi, x_{0}} \colon \mathbb{R}^n \to \mathbb{R}^n$ as follows: for every $x \in \mathbb{R}^{n}$, if $x = y + t\xi$ with $y \in \xi^{\bot}\cap {\rm B}_{\rho_{0}}(0)$ and $t \in (-\tau, \tau)$, we set $\varphi_{\xi, x_{0}} (x) := u_{\xi, y}(t)$.
We further define $\varphi_{x_{0}} \colon \mathbb{R}^{n} \times \mathbb{S}^{n-1} \to \mathbb{R}^{n}$ as $\varphi_{x_{0}} (x, \xi) := \varphi_{\xi, x_{0}} (x)$ for $x \in \mathbb{R}^{n}$ and $\xi \in \mathbb{S}^{n-1}$.
\end{definition}
\begin{remark}\label{r:rx0}
Under the assumptions of Definition~\ref{d:curvpro}, the map $\varphi_{\xi, x_{0}}$ is well defined on the open ball ${\rm B}_{r_{x_{0}}} (0)$, for a suitable $r_{x_{0}}>0$ which only depends on~$x_{0}$, but not on~$\xi \in {\rm B}_{2}(0)$.
\end{remark}
\begin{remark}
\label{r:Axi}
Let $\tau$ and~$\rho_{0}$ be as in Definition~\ref{d:curvpro}. For every $\xi \in \mathbb{S}^{n-1}$ we may set
\begin{align*}
A_{\xi} & := \{ y + t\xi: (t, y) \in (-\tau, \tau) \times (\xi^{\bot}\cap {\rm B}_{\rho_{0}}(0))\} \subseteq \mathbb{R}^{n}\,,
\\
A&:= \{(x, \xi) \in \mathbb{R}^{n} \times \mathbb{S}^{n-1} : \, x \in A_{\xi}\}\,.
\end{align*}
Then, $\varphi_{x_{0}}$ is well defined on~$A$.
\end{remark}
\begin{definition}
\label{d:varphi-xi-r}
Let $x_{0} \in \mathbb{R}^{n}$, let $r_{x_{0}}>0$ be as in Remark~\ref{r:rx0}, and let $r>0$. For every $\xi \in {\rm B}_{2}(0)$ and every $x \in {\rm B}_{\frac{r_{x_{0}}}{r}} (0)$ we define
\begin{align}
\label{e:varphi-xi-r}
\varphi_{\xi,x_0,r}(x) & :=r^{-1}(\varphi_{ \xi, x_{0}} (rx) - x_0)\,,\\
\label{e:varphi-xi-r-2}
\Phi_{x_{0}, r} (x, \xi) &:= \big( \varphi_{\xi,x_0,r}(x) , \xi \big)\,.
\end{align}
\end{definition}
We prove a basic convergence property of~$\Phi_{ x_0, r}$.
\begin{lemma}
\label{l:curvpro1.3}
For every $x_{0} \in \Omega$ it holds
\begin{align}
\label{e:curvpro1.3}
& \Phi_{x_0,r} \to \mathrm{id}
\end{align}
in $C^{\infty}_{loc} (\mathbb{R}^{n} \times {\rm B}_{2}(0) ; \mathbb{R}^{n}\times \mathbb{R}^{n})$ as $r\searrow0$.
\end{lemma}
\begin{proof}
Let $r_{x_{0}}>0$ be as in Remark~\ref{r:rx0}. We notice that for every $\xi \in \mathbb{S}^{n-1}$, every $y \in \xi^{\bot}$, and every $r>0$ the function $u_r(t):=\varphi_{\xi,x_0,r}(y+t\xi)$ satisfies
\begin{equation}
\label{e:curvpro12}
\begin{cases}
\ddot{u}_r(t) =F_{r, x_{0}} (u_{r} (t) , \dot{u}_{r} (t)) & t \in \mathbb{R}\,,\\
u_r(0)=y \,,\\
\dot{u}_r(0)=\xi \,,
\end{cases}
\end{equation}
where $F_{r, x_{0}} (x, v):= rF(rx + x_{0} , v)$ for every $(x,v) \in \mathbb{R}^n \times \mathbb{R}^n$. Denoting by
\begin{align*}
& U_{r}:= \big \{ (t, y, \xi) \in \mathbb{R} \times \mathbb{R}^{n} \times \mathbb{S}^{n-1}: y \in \xi^{\bot} \text{ and system~\eqref{e:curvpro12} admits solution }
\\
&
\qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \text{in~$[-|t|, |t|]$ with initial conditions~$(y, \xi)$} \big\}\,,
\\
& U_{\infty}:= \{ (t, y, \xi) \in \mathbb{R} \times \mathbb{R}^{n} \times \mathbb{S}^{n-1}: \, y \in \xi^{\bot}\}\,,
\end{align*}
we have that $U_{r} \nearrow U_{\infty}$ as $r \searrow 0$. Let $G_r \colon U_{r} \to \mathbb{R}^{n}$ be the flow relative to system~\eqref{e:curvpro12}, i.e., the map that to each $(t, y, \xi) \in U_{r}$ associates~$v_{r}(t)$, where $v_{r}$ is the unique solution of~\eqref{e:curvpro12} with initial data $(y, \xi)$. Since $F \in C^\infty(\mathbb{R}^n \times \mathbb{R}^n; \mathbb{R}^{n})$, we have $F_{r, x_{0}} \to 0$ in $C^{\infty}_{loc}(\mathbb{R}^n \times \mathbb{R}^n; \mathbb{R}^{n})$ as $r \searrow 0$. Thus, by the continuous and differentiable dependence of solutions of ODEs on the data of the system, we have that
\begin{equation}
\label{e:curvpro14}
G_{r} \to G \qquad \text{ in $C^{\infty}_{loc} (U_{\infty}; \mathbb{R}^{n})$ as $r \searrow 0$,}
\end{equation}
where $G \colon U_{\infty} \to \mathbb{R}^{n}$ is defined as $G(t, y, \xi) := y + t\xi$. For every $x \in {\rm B}_{\frac{r_{x_{0}}}{r}} (0)$ we can write
\begin{equation}
\label{e:curvpro13}
\Phi_{x_{0}, r} (x, \xi) = \big( \varphi_{\xi,x_0,r}(x) , \xi \big) = \big( G_r(\xi \cdot x, \pi_\xi(x),\xi), \xi\big) \,.
\end{equation}
From \eqref{e:curvpro14} and \eqref{e:curvpro13} we immediately deduce~\eqref{e:curvpro1.3}.
\end{proof}
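For the reader's convenience, we record the chain-rule computation behind~\eqref{e:curvpro12}. Writing $u_r(t) = \varphi_{\xi,x_0,r}(y + t\xi) = r^{-1} \big( u_{\xi, ry}(rt) - x_{0} \big)$, which is admissible for $r$ small enough that $ry \in \xi^{\bot} \cap {\rm B}_{\rho_{0}}(0)$, we obtain $\dot{u}_r(t) = \dot{u}_{\xi, ry}(rt)$, $u_{r}(0) = y$, $\dot{u}_{r}(0) = \xi$, and
\begin{equation*}
\ddot{u}_r(t) = r\, \ddot{u}_{\xi, ry}(rt) = r\, F\big( u_{\xi, ry}(rt), \dot{u}_{\xi, ry}(rt) \big) = r\, F\big( r u_r(t) + x_{0}, \dot{u}_r(t) \big) = F_{r, x_{0}} \big( u_r(t), \dot{u}_r(t) \big)\,.
\end{equation*}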
\begin{corollary}
\label{c:curvpro1.3}
For every $x_{0} \in \Omega$ there exists $R_{x_{0}}>0$ such that for every $\xi \in {\rm B}_{2}(0)$ the map $\varphi_{\xi, x_{0}}^{-1} \restr {\rm B}_{R_{x_{0}}} (x_{0})$ is a diffeomorphism with its image.
\end{corollary}
\begin{proof}
The thesis follows directly from Lemma~\ref{l:curvpro1.3} and from Definition~\ref{d:varphi-xi-r} (see~\eqref{e:varphi-xi-r}).
\end{proof}
Corollary~\ref{c:curvpro1.3} justifies the following definition of~$P_{\xi, x_{0}}$ for $x_{0} \in \Omega$ and~$\xi \in \mathbb{S}^{n-1}$.
\begin{definition}
\label{d:P-xi}
Let $x_{0} \in \Omega$, $\xi \in {\rm B}_{2}(0)$, and $R_{x_{0}}>0$ be as in Corollary~\ref{c:curvpro1.3}. We define the map $P_{\xi,x_0} \colon {\rm B}_{R_{x_{0}}} (x_{0}) \to \xi^\bot$ as
\begin{equation}
\label{e:curvproj}
P_{\xi,x_0}:=\pi_\xi \circ \varphi_{\xi, x_{0}} ^{-1},
\end{equation}
where $\pi_\xi \colon \mathbb{R}^n \to \xi^\bot$ denotes the orthogonal projection onto the hyperplane~$\xi^\bot$ orthogonal to~$\xi$. Furthermore, for $r>0$ we define $P_{\xi, x_{0}, r}\colon {\rm B}_{\frac{R_{x_{0}}}{r}} (0) \to \xi^{\perp}$ as
\begin{equation}
\label{e:curvproj-2}
P_{\xi,x_0, r}:=\pi_\xi \circ \varphi_{\xi, x_{0},r} ^{-1}.
\end{equation}
\end{definition}
The rest of this section is devoted to proving that, up to taking a smaller~$R_{0} \in (0, R_{x_{0}}]$ depending only on~$x_{0}$,~$(P_{\xi, x_{0}})_{\xi \in \mathbb{S}^{n-1}}$ is a family of curvilinear projections on~${\rm B}_{R_{0}} (x_{0})$ (see Theorem~\ref{p:curvpro}). To this aim, we start by showing that $(\pi_{\xi})_{\xi \in \mathbb{S}^{n-1}}$ is transversal in~$\mathbb{R}^{n}$.
\begin{proposition}
\label{p:trapro}
The family of orthogonal projections $\pi_{\xi} \colon \mathbb{R}^n \to \xi^\bot$ with $\xi \in \mathbb{S}^{n-1}$ is transversal.
\end{proposition}
\begin{proof}
We only have to check \eqref{hp:H2} of Definition~\ref{d:transversal}. For this purpose we first claim that for every $x,x' \in \mathbb{R}^{n}$ with $x\neq x'$, every $\xi \in \mathbb{S}^{n-1}$, and every $C\in (0,1)$ it holds
\begin{equation}
\label{e:trapro1}
\frac{|\pi_\xi(x)-\pi_\xi(x')|}{|x-x'|} \leq C \ \ \text{ implies } \ \ \bigg|\text{J}_\xi \frac{\pi_\xi(x)-\pi_\xi(x')}{|x-x'|}\bigg| \geq (1-C^2)^{\frac{n-1}{2}}.
\end{equation}
Since for every $\mathrm{O} \in SO(n)$ we have
\begin{equation}
\label{e:trapro4}
\pi_\xi(\mathrm{O}x)=\mathrm{O} \, \pi_{\mathrm{O}^t\xi}(x) \qquad \text{for } x \in \mathbb{R}^{n}\,,
\end{equation}
condition \eqref{e:trapro1} is invariant under rotations. We can thus reduce to proving \eqref{e:trapro1} for $\xi=e_1$. Moreover, since $\xi \mapsto x -(x \cdot \xi)\xi$ is a $C^\infty$-map on~$\mathbb{R}^n$ and since $\pi_\xi(x) = x -(x \cdot \xi)\xi$ for $\xi \in \mathbb{S}^{n-1}$, the tangential jacobian appearing in~\eqref{e:trapro1} can be computed as the jacobian of the map
\[
f_{xx'}(\eta) := \frac{x -(x \cdot (e_1+\eta))(e_1 +\eta)-[x' -(x' \cdot (e_1+\eta))(e_1 +\eta)]}{|x-x'|} \qquad \eta \in e_1^\bot
\]
evaluated at $\eta=0$. The $(j-1)$-th column of the $n \times (n-1)$ matrix representing the differential of $f_{xx'}$ at~$0$ with respect to the basis $\{e_1,\dotsc,e_n\}$ can be explicitly written as
\[
-\frac{(x_j-x'_j)e_1 + (x_1-x'_1)e_j }{|x-x'|}\qquad \text{for $j=2,\dotsc,n$.}
\]
A direct computation shows that
\begin{equation}
\label{e:trapro3}
\text{J}_\eta f_{xx'}(0)=(-1)^{n-1}\frac{(x_1-x'_1)^{n-1}}{|x-x'|^{n-1}} \qquad \text{ for $x,x' \in \mathbb{R}^n$, $x \neq x'$.}
\end{equation}
The rotational invariance~\eqref{e:trapro4} allows us to deduce from~\eqref{e:trapro3} that for every $x, x' \in \mathbb{R}^{n}$ with $x \neq x'$ and every $\xi \in \mathbb{S}^{n-1}$
\begin{equation*}
\text{J}_\xi \frac{\pi_\xi(x)-\pi_\xi(x')}{|x-x'|} = (-1)^{n-1} \frac{((x -x') \cdot \xi)^{n-1}}{|x-x'|^{n-1}}\,.
\end{equation*}
Since $|\pi_\xi(x)-\pi_\xi(x')|/|x-x'| \leq C$ implies $|(x -x') \cdot \xi|/|x-x'| \geq \sqrt{1-C^2}$,
we conclude~\eqref{e:trapro1}.
For $i=1,\dotsc,n$ we recall the notation $S_i:=\{\xi \in \mathbb{S}^{n-1} :\, |\xi\cdot e_i| \geq 1/\sqrt{n} \}$. For every~$\xi \in S_i$ and $x,x' \in \Omega$ with $x \neq x'$ let
\begin{equation*}
T_{xx'}(\xi) := \frac{\pi_{e_i} \circ \pi_\xi(x) - \pi_{e_i} \circ \pi_\xi(x')}{|x-x'|}.
\end{equation*}
Since $|\xi \cdot e_i| \geq 1/\sqrt{n}$ for $\xi \in S_{i}$, there exists a constant $c(n) \in (0,1)$ such that for every~$\xi \in S_i$ and for every $z,z' \in \xi^\bot$,
\[
|\pi_{e_i}(z)-\pi_{e_i}(z')| \geq c(n)|z-z'|.
\]
Therefore, for every $\xi \in S_i$ and for every $x,x' \in \Omega$ with $x \neq x'$ we have
\begin{equation}
\label{e:T-1}
c(n)\frac{|\pi_\xi(x)-\pi_\xi(x')|}{|x-x'|} \leq |T_{xx'}(\xi)| \,.
\end{equation}
Moreover, since $|\text{J} (\pi_{e_i})(z)| = |\xi \cdot e_i|$ for every $\xi \in S_i$ and every $z \in \xi^\bot$, we further have that
\begin{equation}
\label{e:T-2}
|\text{J}_\xi(T_{xx'}(\xi))| = |\xi \cdot e_i| \, \bigg| \text{J}_\xi \frac{\pi_\xi(x)-\pi_\xi(x')}{|x-x'|} \bigg|.
\end{equation}
Hence, from~\eqref{e:trapro1} and~\eqref{e:T-1}--\eqref{e:T-2} we infer that for every $x,x' \in \Omega$ with $x \neq x'$, every $\xi \in S_{i}$, and every $C \in (0,1)$:
\begin{equation}
\label{e:trapro2}
|T_{xx'}(\xi)| \leq c(n)C\ \ \text{implies} \ \ |\text{J}_\xi(T_{xx'}(\xi))| \geq \frac{1}{\sqrt{n}} (1-C^2)^{\frac{n-1}{2}}.
\end{equation}
Set $g_1(C):=c(n) C$, $g_2(C):= (1-C^2)^{\frac{n-1}{2}}/\sqrt{n}$, and $g(C):= g_1(C)-g_2(C)$. Since~$g$ is continuous, $g(0)<0$, and $g(1)>0$, we deduce that there exists $C_0 \in (0,1)$ such that $g(C_0)=0$. Finally, condition \eqref{hp:H2} of Definition~\ref{d:transversal} follows by setting $C:=C_0$ in~\eqref{e:trapro2}.
\end{proof}
The next lemma is a direct consequence of Definition~\ref{d:P-xi} and of Lemma~\ref{l:curvpro1.3}.
\begin{lemma}
\label{l:curvpro1.3-2}
For every $x_{0} \in \Omega$ it holds
\begin{align}
\label{e:curvpro1.3P}
& P_{\xi,x_0,r} \to \pi_{\xi}
\end{align}
in $C^{\infty}_{loc} ( \mathbb{R}^{n} \times {\rm B}_{2}(0) ; \mathbb{R}^{n})$ as $r\searrow0$, where the maps in~\eqref{e:curvpro1.3P} are intended as functions of~$(x, \xi)$.
\end{lemma}
\begin{proof}
It is enough to combine the definition of~$P_{\xi, x_{0}, r}$ given in~\eqref{e:curvproj-2} with the convergence shown in Lemma~\ref{l:curvpro1.3}.
\end{proof}
In order to show that~$(P_{\xi, x_{0}})_{ \xi \in \mathbb{S}^{n-1}}$ is a family of curvilinear projections, we will make use of the functions
\begin{align*}
T^i_{xx'}(\xi,r) & :=\frac{\pi_{e_i} \circ P_{\xi,x_0,r}(x) - \pi_{e_i}\circ P_{\xi,x_0,r}(x')}{|x-x'|}\,,\\
T^i_{xx'}(\xi,0)& :=\frac{\pi_{e_i} \circ \pi_\xi(x) - \pi_{e_i}\circ\pi_\xi(x')}{|x-x'|}\,,
\end{align*}
defined for every $i=1,\dotsc,n$, every $r>0$, and every $x,x' \in {\rm B}_1(0)$ with $x \neq x'$. Notice that for $r>0$ sufficiently small, $\frac{R_{x_{0}}}{r}>1$, so that the maps $T^i_{xx'}(\xi,r)$ are well defined for $x,x' \in {\rm B}_1(0)$ with $x \neq x'$.
\begin{theorem}
\label{p:curvpro}
Let $F \in C^{\infty}( \mathbb{R}^n \times \mathbb{R}^n ; \mathbb{R}^n)$ satisfy condition~\eqref{e:quadratic}. Then, for every $x_0 \in \Omega$ there exists $R_0>0$ such that the family of maps $\{ P_{\xi,x_0} \colon {\rm B}_{R_0}(x_0) \to \xi^\bot: \xi \in \mathbb{S}^{n-1}\}$ is a family of curvilinear projections on~${\rm B}_{R_{0}}(x_{0})$. Moreover, for every $k \in \mathbb{N}$, every $R>0$, and every $\epsilon>0$ there exists~$r_{\epsilon}>0$ such that
\begin{equation}
\label{e:curvpro1.1}
\|T^i_{xx'}(\cdot,r_{\epsilon}) -T^i_{xx'}(\cdot,0)\|_{C^{k} ({\rm B}_{2}(0))} \leq \epsilon \qquad \text{for $x,x' \in {\rm B}_1(0)$ with $x \neq x'$.}
\end{equation}
\end{theorem}
\begin{proof}
To shorten the notation we further set $f_r(x,\xi):=\pi_{e_i} \circ P_{\xi,x_0,r}(x)$ and $f(x,\xi):=\pi_{e_i} \circ \pi_\xi(x)$. In view of Lemma~\ref{l:curvpro1.3-2} we have that
\begin{equation}
\label{e:curvpro1.2}
f_{r}(\cdot,\cdot) \to f(\cdot,\cdot) \qquad \text{in $C^\infty_{loc}(\mathbb{R}^n \times {\rm B}_{2}(0); \mathbb{R}^{n})$ as $r \searrow 0$.}
\end{equation}
Let us denote by $\alpha$ a generic multi-index of order $k$ and by $\partial^\alpha_{\xi}$ the associated partial derivative. Then, for every $\xi \in {\rm B}_{2}(0)$, every $r>0$, and every $x, x' \in {\rm B}_{1}(0)$ with $x \neq x'$ we have that
\begin{align*}
|\partial^{\alpha}_{\xi} T^i_{xx'}(\xi,r) - \partial^{\alpha}_{\xi} T^i_{xx'} (\xi,0)| &\leq \bigg| \frac{\partial^{\alpha}_{\xi} f_r( x, \xi ) -\partial^{\alpha}_{\xi} f_r ( x', \xi )}{|x-x'|} -D_x(\partial^{\alpha}_{\xi}f_r (x, \xi )) \cdot \frac{x'-x}{|x-x'|} \bigg| \\
&\quad + \bigg| D_x(\partial^{\alpha}_{\xi} f_r (x, \xi )) \cdot \frac{x'-x}{|x-x'|} - D_x(\partial^{\alpha}_{\xi} f (x, \xi )) \cdot \frac{x'-x}{|x-x'|}\bigg| \\
&\quad + \bigg|\frac{\partial^{\alpha}_{\xi}f(x, \xi ) -\partial^{\alpha}_{\xi}f(x', \xi)}{|x-x'|}- D_x(\partial^{\alpha}_{\xi}f(x, \xi)) \cdot \frac{x'-x}{|x-x'|}\bigg| \\
&\leq \|D^2_x(\partial^{\alpha}_{\xi} f_r)\|_{L^\infty({\rm B}_{1}(0) \times {\rm B}_{2}(0))}|x-x'|
\\
&\quad+ \|D_x(\partial^{\alpha}_{\xi}f_r) - D_x(\partial^{\alpha}_{\xi}f)\|_{L^\infty({\rm B}_{1}(0) \times {\rm B}_{2}(0))}\\
&\quad +\|D^2_x(\partial^{\alpha}_{\xi} f)\|_{L^\infty({\rm B}_{1}(0) \times {\rm B}_{2}(0))} |x-x'|.
\end{align*}
The previous chain of inequalities together with~\eqref{e:curvpro1.2} implies that for every $\epsilon>0$ there exist $r_\epsilon, \rho_{\epsilon} >0$ (depending also on $\Xi$ and $k$) such that $|\partial^{\alpha}_{\xi} T^i_{xx'}(\xi,r_\epsilon) - \partial^{\alpha}_{\xi} T^i_{xx'}(\xi,0)| \leq \epsilon$ whenever $|x-x'|\leq \rho_\epsilon$. For points $x,x' \in {\rm B}_1(0)$ with $|x-x'|>\rho_\epsilon$ we use instead the following bound
\begin{align*}
|\partial^{\alpha}_{\xi} T^i_{xx'}(\xi,r) - \partial^{\alpha}_{\xi} T^i_{xx'}(\xi,0)| &\leq \frac{2 \|\partial^{\alpha}_{\xi} f_r -\partial^{\alpha}_{\xi} f\|_{L^\infty ({\rm B}_{1}(0) \times {\rm B}_{2}(0)) }}{|x-x'|}
\\
&
\leq 2\rho^{-1}_\epsilon \|\partial^{\alpha}_{\xi} f_r -\partial^{\alpha}_{\xi} f\|_{L^\infty({\rm B}_{1}(0) \times {\rm B}_{2}(0)) }.
\end{align*}
Thus, by eventually choosing a smaller $r_\epsilon$ we also have $|\partial^{\alpha}_{\xi} T^i_{xx'}(\xi,r_\epsilon) - \partial^{\alpha}_{\xi} T^i_{xx'}(\xi,0)| \leq \epsilon$ for every $\xi \in {\rm B}_{2}(0)$ and every $x,x' \in {\rm B}_1(0)$ with $|x-x'|>\rho_\epsilon$. This implies~\eqref{e:curvpro1.1}.
The arbitrariness of $\epsilon>0$ together with Remark~\ref{r:'} and Proposition~\ref{p:trapro} provides $R_0>0$ for which the family $(P_{\xi,x_0,R_0})_{\xi \in \mathbb{S}^{n-1}}$ is transversal on ${\rm B}_1(0)$. Since
\begin{equation*}
P_{\xi,x_0}(x)=R_0 P_{\xi,x_0,R_0}\bigg(\frac{x-x_0}{R_0}\bigg) \qquad \text{for $x \in {\rm B}_{R_0}(x_0)$,}
\end{equation*}
we deduce that $(P_{\xi,x_0})_{\xi \in \mathbb{S}^{n-1}}$ is a transversal family on ${\rm B}_{R_0}(x_0)$. Furthermore, $(P_{\xi,x_0})_{\xi \in \mathbb{S}^{n-1}}$ is parametrized by $\varphi_{x_{0}}$ by construction (cf.~Definitions~\ref{d:curvpro} and~\ref{d:P-xi}, Remark~\ref{r:Axi}, and Corollary~\ref{c:curvpro1.3}), so that conditions (1)--(3) of Definition~\ref{d:CP} are satisfied. Property~(4) follows directly from the convergence~\eqref{e:curvpro1.3}.
\end{proof}
\section{Proofs of Theorem~\ref{t:int1} and of Corollary~\ref{c:int2}}
\label{s:directional}
Throughout this section we assume that $\Omega$ is an open subset of $\mathbb{R}^n$ and we fix $F \in C^{\infty} ( \mathbb{R}^{n} \times \mathbb{R}^{n}; \mathbb{R}^{n})$ fulfilling~\eqref{e:quadratic} and a map $g \colon \Omega \times \mathbb{R}^n \to \mathbb{R}^m$ satisfying~\eqref{G1}. Furthermore, we rely on the notation introduced in Section~\ref{s:curvilinear}. We start by recalling the definition of the jump set.
\begin{definition}
Let $u \colon \Omega \to \mathbb{R}^m$ be measurable, then $x \in \Omega$ belongs to $J_u$ if and only if there exists $(u^+(x),u^-(x),\nu(x)) \in \mathbb{R}^m \times \mathbb{R}^m \times \mathbb{S}^{n-1}$ such that
\[
\aplim_{\substack{z \to x \\ \pm(z-x) \cdot \nu (x) >0}} u(z)=u^\pm(x).
\]
\end{definition}
Here we present a fundamental property of the jump set (cf.~\cite{del}).
\begin{theorem}
\label{t:delnin}
Let $u \colon \Omega \to \mathbb{R}^m$ be measurable. Then~$J_u$ is countably $(n-1)$-rectifiable.
\end{theorem}
We continue our analysis with two measurability results. Before doing so, it is convenient to introduce the notion of \emph{directional jump set}.
\begin{definition}
\label{d:dirjump}
Let $(P_\xi)_{\xi \in \mathbb{S}^{n-1}}$ be a family of curvilinear projections on $\Omega$ and let $u \colon \Omega \to \mathbb{R}^m$ be a measurable function. We introduce the directional jump set of $u$ in the direction~$\xi$ as
\begin{equation*}
J_{\hat u_\xi}:=\{x \in \Omega :\, t^{\xi}_{x} \in J_{\hat{u}^\xi_{P_\xi(x)}}\}\,.
\end{equation*}
In addition, we define the collection of all directional jump sets of~$u$, as $\xi$ varies in $\mathbb{S}^{n-1}$, as a subset of the product space $\Omega \times \mathbb{S}^{n-1}$ as follows
\begin{equation}
\label{e:Au}
A_{\hat{u}} := \{ (x , \xi ) \in \Omega \times \mathbb{S}^{n-1}: \, x \in J_{\hat{u}_{\xi}}\}\,.
\end{equation}
\end{definition}
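\begin{remark}
As an elementary illustration (in a model situation considered only for orientation), suppose that $P_\xi = \pi_\xi$ with $\varphi_\xi = \mathrm{id}$ for every $\xi \in \mathbb{S}^{n-1}$, and that the scalar slices reduce to $\hat{u}^\xi_y(t) = u(y+t\xi)$, which corresponds, e.g., to $m=1$ and $g \equiv 1$. For $u := \mathbbm{1}_{H}$ with $H := \{x \in \mathbb{R}^n : \, x \cdot \nu > 0\}$ and $\nu \in \mathbb{S}^{n-1}$, every line $t \mapsto y + t\xi$ with $\xi \cdot \nu \neq 0$ crosses the hyperplane $\partial H$ exactly once, and the corresponding slice jumps precisely at the crossing parameter. Hence, $J_{\hat{u}_\xi} = \partial H \cap \Omega$ if $\xi \cdot \nu \neq 0$, while $J_{\hat{u}_\xi} = \emptyset$ if $\xi \cdot \nu = 0$, so that $A_{\hat{u}} = (\partial H \cap \Omega) \times \{\xi \in \mathbb{S}^{n-1} : \, \xi \cdot \nu \neq 0\}$.
\end{remark}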
\begin{lemma}
\label{l:meas10000}
Let $u \colon \Omega \to \mathbb{R}^m$ be Borel measurable and let $(P_\xi)_{\xi \in \mathbb{S}^{n-1}}$ be a family of curvilinear projections on $\Omega$. Then, the sets~$J_{\hat u_\xi}$ and~$ A_{\hat{u}}$
are Borel measurable.
\end{lemma}
\begin{proof}
We limit ourselves to proving that $A_{\hat{u}}$ is Borel, since the remaining assertion can be proved in the same way. By replacing the function~$u_{\xi}$ with $v_{\xi} := \arctan (u_{\xi})$, we may further work with bounded functions. Hence, we define $s^\pm \colon \Omega \times \mathbb{S}^{n-1} \to \mathbb{R}$ and $i^\pm \colon \Omega \times \mathbb{S}^{n-1} \to \mathbb{R}$ as
\begin{align*}
s^\pm(x,\xi) & := \limsup_{r \searrow 0} \Xint-_{0}^{r} v_{\xi} ( \varphi_{\xi} ( P_{\xi} (x) \pm t\xi ) ) \, \mathrm{d} t \,,
\\
i^\pm(x,\xi) & := \liminf_{r \searrow 0} \Xint-_{0}^{r} v_{\xi} ( \varphi_{\xi} ( P_{\xi} (x) \pm t\xi ) ) \, \mathrm{d} t \,.
\end{align*}
We notice that~$s^{\pm}$ and~$i^{\pm}$ are Borel measurable in~$\Omega \times \mathbb{S}^{n-1}$. Indeed, the integrand functions are Borel measurable in the triple~$(x, \xi, t)$. Hence, an application of Fubini's theorem implies that the integral functions are Borel measurable in~$\Omega \times \mathbb{S}^{n-1}$. Finally, both liminf and limsup can be computed by restricting $r \in \mathbb{Q}$ because of the continuity of the integrals with respect to~$r$. Since it holds that
\begin{align*}
A_{\hat{u}} = \Big\{(x,\xi) \in \Omega \times \mathbb{S}^{n-1} \, : & \, s^+(x,\xi)=i^+(x,\xi), \ s^-(x,\xi)=i^-(x,\xi),
\\
&
s^+(x,\xi)\neq s^-(x,\xi),\ s^{\pm} (x, \xi) \in \Big(-\frac{\pi}{2} , \frac{\pi}{2} \Big) \Big\}\,,
\end{align*}
we immediately infer the Borel measurability of~$A_{\hat{u}}$.
\end{proof}
\begin{lemma}
\label{l:meas1000}
Let $u \colon \Omega \to \mathbb{R}^m$ be $\mathcal{L}^n$-measurable and let $(P_\xi)_{\xi \in \mathbb{S}^{n-1}}$ be a family of curvilinear projections on $\Omega$. Then, for every $B \in \mathcal{B} ( \mathbb{R}^n)$ and $A \in \mathcal{B} ( \mathbb{R}^n \times \mathbb{S}^{n-1})$ we have that
\begin{align}
\label{e:meas1000.1}
&y \mapsto \sum_{ t \in B^\xi_y } (|[\hat{u}^\xi_y(t)]| \wedge 1)\qquad \text{ is $\mathcal{H}^{n-1}$-measurable} \\
\label{e:meas1000.2}
&\xi \mapsto \int_{\xi^\bot} \sum_{ t \in (A_\xi)^\xi_y } \big ( |[\hat{u}^\xi_y(t)]| \wedge 1 \big) \, \mathrm{d} \mathcal{H}^{n-1}(y) \qquad\text{ is $\mathcal{H}^{n-1}$-measurable}.
\end{align}
\end{lemma}
\begin{proof}
We focus on~\eqref{e:meas1000.2}, since the measurability in~\eqref{e:meas1000.1} can be obtained by repeating the same argument with obvious modifications. Given $\delta>0$ we consider $N_\delta \in \mathbb{N}$ and $\{\eta_1,\dotsc,\eta_{N_\delta}\}\subseteq \mathbb{S}^{n-1}$ for which the sets
\[
\Sigma_i:=\{\xi \in \mathbb{S}^{n-1} \, : \, \xi \cdot \eta_i \geq (1-\delta) \} \qquad \text{for } i=1,\dotsc,N_\delta
\]
form a covering of~$\mathbb{S}^{n-1}$. We notice that it suffices to show the measurability of~\eqref{e:meas1000.2} restricted to~$\Sigma_i$ for every $i=1,\dotsc,N_\delta$.
Let $U$ be an open subset of~$\mathbb{R}^{n} \times \mathbb{S}^{n-1}$ and let $\varphi \colon U \to \mathbb{R}^{n}$ be the parametrization of~$(P_\xi)_{\xi \in \mathbb{S}^{n-1}}$, according to Definition~\ref{d:param}. In particular, there exist $\rho, \tau>0$ such that for every $\xi \in \mathbb{S}^{n-1}$ the map $\varphi_{\xi} = \varphi(\cdot, \xi) \colon \{ y + t\xi: \, (y, t) \in [ \xi^{\bot} \cap {\rm B}_{\rho} (0)] \times (-\tau, \tau)\} \to \mathbb{R}^{n}$ is a parametrization of~$P_{\xi}$. For every $\xi \in \Sigma_i$ we consider the map $P^i_\xi:= \pi_{\eta_i} \circ P_\xi$. Since for every $\xi \in \Sigma_i$ the map $\pi_{\eta_i} \restr \xi^\bot$ is an isomorphism between $\xi^\bot$ and $\eta_i^\bot$, the family $(P^i_\xi)_{\xi \in \Sigma_i}$ can be parametrized by
\[
\varphi^i_{\xi} ( y + t \xi):= \varphi_{\xi} ((\pi_{\eta_i}\restr \xi^\bot)^{-1}(y) +t\xi), \ \text{for } (y,t) \in [\eta_i^\bot \cap \mathrm{B}_{\tilde{\rho}}(0)] \times (-\tau,\tau),
\]
where $\tilde{\rho} > 0$ is such that $\eta_i^\bot \cap \mathrm{B}_{\tilde{\rho}}(0) \subset \pi_{\eta_i}(\xi^\bot \cap \mathrm{B}_{\rho}(0))$ for every $\xi \in \Sigma_i$. Up to taking a smaller $\delta>0$, we may also assume that $\Omega \subseteq \text{Im}(\varphi^i_\xi)$ for every $i=1,\dotsc, N_\delta$. With this choice the family $(P^i_{\xi})_{\xi \in \Sigma_i}$ is parametrized on $\Omega$ for every $i=1,\dotsc,N_\delta$. The advantage of doing so is that we are now working with a family of maps taking values in a common space. For the rest of the proof we drop the dependence on the index $i$.
With this notation, the function in \eqref{e:meas1000.2} restricted to $\Sigma$ coincides with
\begin{equation}
\label{e:meas1000.2.1}
\xi \mapsto \int_{\eta^\bot} \sum_{t \in (A_\xi )^\xi_y} \big ( |[\hat{u}^\xi_y(t)]| \wedge 1 \big) \, \mathrm{d} \mathcal{H}^{n-1}(y),
\end{equation}
where the slicing of sets and functions is now considered with respect to the new family $(P_\xi)_{\xi \in \Sigma}$, according to Definition \ref{d:slices}. Notice that the $\mathcal{H}^{n-1}$-equivalence class of the function in \eqref{e:meas1000.2.1} does not depend on the Lebesgue equivalence class of $u$. Therefore, we may assume with no loss of generality that $u$ is Borel measurable. As in the proof of Lemma \ref{l:meas10000}, letting $s^\pm \colon \Omega \times \Sigma \to \overline{\mathbb{R}}$ and $i^\pm \colon \Omega \times \Sigma \to \overline{\mathbb{R}}$ be defined as
\[
s^\pm(x,\xi):= \aplims_{s \to t^\pm_x} \, \hat{u}^\xi_{P_\xi(x)}(s), \qquad \text{and} \qquad i^\pm(x,\xi):= \aplimi_{s \to t^\pm_x} \, \hat{u}^\xi_{P_\xi(x)}(s)
\]
and arguing as in the proof of Proposition~\ref{p:prodmeas}, we infer that~$s^\pm$ and~$i^\pm$ are Borel measurable functions. Moreover, the identity
\[
|[\hat{u}^\xi_{y}(t)]|= |s^+(\varphi_\xi(y+t\xi),\xi)-s^-(\varphi_\xi(y+t\xi),\xi)|
\]
holds whenever $(y,\xi,t) \in \eta^\bot \times \Sigma \times \mathbb{R}$ is such that $s^+(\varphi_\xi(y+t\xi),\xi)=i^+(\varphi_\xi(y+t\xi),\xi) \in \mathbb{R}$, $s^-(\varphi_\xi(y+t\xi),\xi)=i^-(\varphi_\xi(y+t\xi),\xi) \in \mathbb{R}$, and $t \in \Omega^\xi_{y}$. Therefore, from the Borel measurability of the maps $s^\pm$ and $i^\pm$ we infer the Borel measurability of $j \colon \eta^\bot \times \Sigma \times \mathbb{R} \to \mathbb{R}$ defined as
\[
j(y,\xi,t):=
\begin{cases}
|[\hat{u}^\xi_{y}(t)]| \wedge 1, &\text{ if }(y,\xi,t) \in \eta^\bot \times \Sigma \times \mathbb{R} \text{ and } t \in \Omega^\xi_{y}, \\
0, &\text{ otherwise on } \eta^\bot \times \Sigma \times \mathbb{R}.
\end{cases}
\]
Now fix $A \in \mathcal{B}(\Omega \times \Sigma)$. By construction we have for every $\xi \in \Sigma$
\[
\int_{\eta^\bot} \sum_{t \in (A_\xi )^\xi_y} \big ( |[\hat{u}^\xi_y(t)]| \wedge 1 \big) \, \mathrm{d} \mathcal{H}^{n-1}(y)= \int_{\eta^\bot} \sum_{t \in (A_\xi )^\xi_y} j(y,\xi,t) \, \mathrm{d} \mathcal{H}^{n-1}(y).
\]
For every $m=1,2,\dotsc$ and $k=0,1,\dotsc$ let $A^m_k$ be the Borel subset of $\Omega \times \Sigma$ defined by $\psi \big( j^{-1}([k2^{-m},(k+1)2^{-m})) \big)$, where $\psi \colon \eta^\bot \times \Sigma \times \mathbb{R} \to \Omega \times \Sigma$ is defined as $\psi(y,\xi,t):= (\varphi_\xi(y+t\xi),\xi)$. Notice that
\begin{equation}
\label{e:meas1000.3}
\text{$(x,\xi) \in A^m_k$ \ and \ $\varphi_{\xi}(y+t\xi)=x$ implies \ $0 <j(y,\xi,t)-k2^{-m} \leq 2^{-m}$}.
\end{equation}
Furthermore, we consider a sequence of countable Borel partitions of $A \cap A_{\hat{u}}$, say $(\mathcal{B}_m)_m$, such that $B' \in \mathcal{B}_m$ implies $\text{diam}(B') \leq 2^{-m}$ for every $m =1,2,\dotsc$. We claim that
\begin{equation}
\label{e:meas1000.5}
\sum_{k=1}^\infty \sum_{B' \in \mathcal{B}_m} k2^{-m}\mathbbm{1}_{P_\xi((A^m_k \cap B')_\xi)}(y)\nearrow \sum_{t \in (A_\xi)^\xi_y} j(y,\xi,t), \ \ \text{for $(y,\xi) \in \eta^\bot \times \Sigma$}
\end{equation}
as $m \to \infty$. The fact that the sequence is monotonically increasing follows by construction. Moreover, setting $s(\xi,y):= \sum_{t \in (A_\xi)^\xi_y} j(y,\xi,t)$, it is not difficult to show that for a fixed pair $(y,\xi)$ and for a given positive integer $M$ there exists a finite subset of $(A_\xi \cap J_{\hat{u}_\xi})^\xi_y = ((A \cap A_{\hat{u}})_\xi)^\xi_y$, say $\tilde{A}^\xi_y$, such that
\[
\begin{cases}
|\sum_{t \in (A_\xi)^\xi_y} j(y,\xi,t) - \sum_{t \in \tilde{A}^\xi_y} j(y,\xi,t)| \leq M^{-1}, &\text{ if } s(\xi,y) < \infty,\\
\sum_{t \in \tilde{A}^\xi_y} j(y,\xi,t) \geq M, &\text{ otherwise}.
\end{cases}
\]
Letting
\[
d:= \min_{\substack{t,t' \in \tilde{A}^\xi_y \\ \ t\neq t'}} |\varphi_\xi(y+t\xi)-\varphi_\xi(y+t'\xi) |,
\]
if $m$ is such that $2^{-m} < d$, then, using also that $(A^m_k \cap B')_{k,B'}$ forms a partition, to every $t \in \tilde{A}^\xi_y$ we can injectively associate $B' \in \mathcal{B}_m$ for which $\psi(y,\xi,t) \in (A^m_k \cap B')_\xi \times \{\xi \}$ for some $k=0,1,\dotsc$. Using also \eqref{e:meas1000.3} we infer
\[
\begin{split}
\lim_{m \to \infty} &\sum_{k=1}^\infty \sum_{B' \in \mathcal{B}_m} k2^{-m}\mathbbm{1}_{P_\xi((A^m_k \cap B')_\xi)}(y) \geq \sum_{t \in \tilde{A}^\xi_y}(1-2^{-m}) j(y,\xi,t)\\
&\geq \sum_{t \in (A_\xi)^\xi_y} j(y,\xi,t) -2^{-m}s(\xi,y) - M^{-1},
\end{split}
\]
whenever $s(\xi,y) < \infty$ and
\[
\lim_{m \to \infty} \sum_{k=1}^\infty \sum_{B' \in \mathcal{B}_m} k2^{-m}\mathbbm{1}_{P_\xi((A^m_k \cap B')_\xi)}(y) \geq \sum_{t \in \tilde{A}^\xi_y}(1-2^{-m}) j(y,\xi,t) \geq (1-2^{-m})M,
\]
otherwise. Thanks to the arbitrariness of $M$ we finally deduce for every $(y,\xi) \in \eta^\bot \times \Sigma$
\begin{equation}
\label{e:meas1000.4}
\lim_{m \to \infty} \sum_{k=1}^\infty \sum_{B' \in \mathcal{B}_m} k2^{-m}\mathbbm{1}_{P_\xi((A^m_k \cap B')_\xi)}(y) \geq \sum_{t \in (A_\xi)^\xi_y} j(y,\xi,t),
\end{equation}
whenever $s(\xi,y) < \infty$ and
\begin{equation*}
\lim_{m \to \infty} \sum_{k=1}^\infty \sum_{B' \in \mathcal{B}_m} k2^{-m}\mathbbm{1}_{P_\xi((A^m_k \cap B')_\xi)}(y) = \infty,
\end{equation*}
otherwise. In order to prove the opposite inequality, we simply observe that, since $(A^m_k \cap B')_{k,B'}$ forms a partition, for every $m=1,2,\dotsc$, every $k=0,1,\dotsc$, and every $B' \in \mathcal{B}_m$ for which $\mathbbm{1}_{P_\xi((A^m_k \cap B')_\xi)}(y)=1$ we can injectively associate $t \in (A_\xi \cap J_{\hat{u}_\xi})^\xi_y=((A \cap A_{\hat{u}})_\xi)^\xi_y$ such that $\psi(y,\xi,t) \in (A^m_k \cap B')_\xi \times \{ \xi\}$; thanks to \eqref{e:meas1000.3} such a $t$ satisfies $k2^{-m} < j(y,\xi,t)$, from which we immediately deduce the validity of the opposite inequality in \eqref{e:meas1000.4}. Our claim is thus proved.
To conclude, we define the continuous map $P \colon \Omega \times \Sigma \to \eta^\bot \times \Sigma$ as $P(x,\xi):=(P_\xi(x),\xi)$ and notice that
\[
\mathbbm{1}_{P_\xi((A^m_k \cap B')_\xi)}(y) = \mathbbm{1}_{P(A^m_k \cap B')}(y,\xi), \ \ \text{for every $(y,\xi) \in \eta^\bot \times \Sigma$}.
\]
Therefore, we are in a position to apply the measurable projection theorem \cite[2.2.13]{fed} and infer the $(\mathcal{H}^{n-1} \restr \eta^\bot \otimes \mathcal{H}^{n-1} \restr \mathbb{S}^{n-1})$-measurability of $(y,\xi) \mapsto \mathbbm{1}_{P_\xi((A^m_k \cap B')_\xi)}(y)$. We can thus integrate both sides of \eqref{e:meas1000.5} with respect to $\mathcal{H}^{n-1} \restr \eta^\bot$ and apply Fubini's theorem together with the monotone convergence theorem for every $\xi \in \Sigma$ to finally infer the $\mathcal{H}^{n-1}$-measurability of \eqref{e:meas1000.2.1}. This concludes the proof.
\end{proof}
\subsection{Rectifiability of the directional jump set}
Given an $\mathcal{L}^n$-measurable function $u \colon \Omega \to \mathbb{R}^m$, with the help of Lemma \ref{l:meas1000} we consider for every $\xi \in \mathbb{S}^{n-1}$ the Borel regular measure $\eta_\xi$ on $\mathbb{R}^n$ given by
\begin{align}
\label{e:defeta1}
\eta_\xi(B) & := \int_{\xi^\bot} \sum_{ t \in B^\xi_y} \big( |[\hat{u}^\xi_y(t)]| \wedge 1 \big) \, \mathrm{d} \mathcal{H}^{n-1}(y) \qquad B \in \mathcal{B} ( \mathbb{R}^n) \,,
\\
\label{e:defeta2}
\eta_\xi(E) & := \inf \, \{\eta_\xi(B) : \, E \subseteq B, \ B \in \mathcal{B} (\mathbb{R}^n)\}\,.
\end{align}
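\begin{remark}
In the model case of straight projections $P_\xi = \pi_\xi$ with $\varphi_\xi = \mathrm{id}$ and scalar slices $\hat{u}^\xi_y(t) = u(y+t\xi)$ (a simplification made only for illustration), let $u := \mathbbm{1}_{\{x \cdot \nu > 0\}}$ for some $\nu \in \mathbb{S}^{n-1}$. For $\xi \cdot \nu \neq 0$ each line $t \mapsto y + t\xi$ meets the hyperplane $\{x \cdot \nu = 0\}$ exactly once, with unit jump; since $\pi_\xi$ restricted to $\{x \cdot \nu = 0\}$ is injective with tangential Jacobian $|\xi \cdot \nu|$, the area formula gives
\begin{equation*}
\eta_\xi(B) = \mathcal{H}^{n-1}\big( \pi_\xi( B \cap \{x \cdot \nu = 0\} ) \big) = |\xi \cdot \nu|\, \mathcal{H}^{n-1}( B \cap \{x \cdot \nu = 0\} ) \qquad \text{for } B \in \mathcal{B}(\mathbb{R}^n)\,,
\end{equation*}
so that each $\eta_\xi$ is carried by the jump set of~$u$.
\end{remark}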
\begin{definition}
\label{d:defiu}
Let $u \colon \Omega \to \mathbb{R}^m$ be measurable, let $(P_{\xi})_{\xi \in \mathbb{S}^{n-1}}$ be a family of curvilinear projections on some open subset~$U$ of~$\Omega$, and let $(\eta_\xi)_{\xi \in \mathbb{S}^{n-1}}$ be the family of measures in~\eqref{e:defeta1}--\eqref{e:defeta2}. Then, for $1 \leq p \leq \infty $ we define~$\mathscr{I}_{u,p}$ as the resulting measure on~$U$ according to~\eqref{e:caratheodoryc2} and~$\hat{\mathscr{I}}_{u}$ as the resulting measure on $U \times \mathbb{S}^{n-1}$ according to~\eqref{e:caratheodoryc2.1.0}.
\end{definition}
We show that $\mathscr{I}_{u,p}$ is concentrated on points $x \in \Omega$ such that $\mathcal{H}^{n-1}(\{\xi \in \mathbb{S}^{n-1} : [\hat{u}^\xi_{P_\xi(x)}(t_x)] \neq 0 \}) >0$.
\begin{proposition}
\label{p:keyprop}
Let $u \colon \Omega \to \mathbb{R}^m$ be measurable and let $(P_{\xi})_{\xi \in \mathbb{S}^{n-1}}$ be a family of curvilinear projections on $\Omega$. Assume that there exists $p \in (1, +\infty]$ such that $\mathscr{I}_{u, p}$ is finite. Then,
\begin{equation}
\label{e:keyprop1000}
\mathscr{I}_{u,1} \big ( \{x \in \Omega : \, \text{$ x \notin J_{\hat u_\xi} $ for $\mathcal{H}^{n-1}$-a.e.~$\xi \in \mathbb{S}^{n-1}$} \} \big) = 0\,.
\end{equation}
\end{proposition}
\begin{proof}
First of all, the set appearing in \eqref{e:keyprop1000} can be rewritten as
\begin{equation}
\label{e:keyprop7000}
E:= \big\{x \in \Omega : \, \text{$t_x \notin J_{\hat{u}^\xi_{P_\xi(x)}}$ for $\mathcal{H}^{n-1}$-a.e.~$\xi \in \mathbb{S}^{n-1}$} \big\}\,.
\end{equation}
We claim that the set $E$ does not depend on the Lebesgue representative of $u$. In order to verify this claim, it is enough to prove that, given $u_1,u_2$ in the same Lebesgue equivalence class, for $\mathcal{H}^{n-1}$-a.e.~$\xi$ the restrictions $u_1 \restr P_{\xi}^{-1}(P_\xi(x))$ and $u_2 \restr P_{\xi}^{-1}(P_\xi(x))$ are $\mathcal{H}^1$-equivalent. Let $\phi_x \colon \mathrm{B}_{\overline{r}}(x) \setminus \{x\} \to \mathbb{S}^{n-1}$ be the map given by Proposition~\ref{p:retr}. In particular, it holds that
\begin{equation}
\label{e:keyprop99999}
P_\xi(z)=P_\xi(x) \ \ \text{if and only if $\xi =\phi_x(z)$, for every $z \in {\rm B}_{\overline{r}}(x) \setminus \{x\}$}.
\end{equation}
Therefore, an application of the Coarea Formula, together with the fact that estimate~\eqref{e:retr2} gives $\phi_{x} \in L^1(\mathrm{B}_{\overline{r}}(x))$, implies that
\[
\int_{\mathbb{S}^{n-1}} \bigg( \int_{\phi^{-1}_x(\eta) \cap \mathrm{B}_{\overline{r}}(x)} |u_1-u_2| \wedge 1 \, \mathrm{d} \mathcal{H}^1 \bigg) \mathrm{d} \mathcal{H}^{n-1}(\eta)=0\,.
\]
By combining this last information with~\eqref{e:keyprop99999} we obtain the desired claim.
Using the disintegration theorem, we write $\eta_\xi = \eta^\xi_y \otimes (P_{\xi})_{\sharp} \eta_\xi$ for a suitable family of probability measures $(\eta^\xi_y)_{y \in \xi^{\bot}}$ concentrated for $\mathcal{H}^{n-1}$-a.e.~$y \in \xi^\bot$ on the level set~$P^{-1}_{\xi}(y)$. From the definition~\eqref{e:defeta1} of~$\eta_\xi$ we deduce that
\begin{equation}
\label{e:keyprop6000}
(P_{\xi})_{\sharp} \eta_\xi \ll \mathcal{H}^{n-1} \restr \xi^{\bot} \qquad \text{ and } \qquad \eta^\xi_y= \eta^\xi_y \restr J_{\hat u_\xi} \,,
\end{equation}
for $\mathcal{H}^{n-1}$-a.e.~$\xi \in \mathbb{S}^{n-1}$ and for $\mathcal{H}^{n-1}$-a.e.~$y \in \xi^\bot$. From~\eqref{e:keyprop6000} we deduce in particular that the measures $\eta^\xi_y$ are atomic. Furthermore, since $\mathscr{I}_{u,p}$ is finite for some $1<p\leq \infty$, using Proposition~\ref{p:fproposition} we find a disintegration of $\hat{\mathscr{I}}_{u}$ of the form
\begin{equation}
\label{e:keyprop5000}
\hat{\mathscr{I}}_{u} = (f_x\, \mathcal{H}^{n-1} \restr \mathbb{S}^{n-1}) \otimes \mathscr{I}_{u,1}\, ,
\end{equation}
for a Borel measurable real-valued function $(x,\xi) \mapsto f_x(\xi)$.
Defining $S_\xi := \{x \in \Omega : \, x \notin J_{\hat{u}_\xi} \}$ and using $\eta_\xi = \eta^\xi_y \otimes (P_\xi)_\sharp \eta_\xi$ together with \eqref{e:keyprop6000}, we have that
\begin{equation}
\label{e:keyprop3000}
\eta_\xi(S_\xi) =0 \qquad \text{for $\mathcal{H}^{n-1}$-a.e. $\xi \in \mathbb{S}^{n-1}$}.
\end{equation}
Recalling the notation~\eqref{e:Au} and the identities
\begin{align*}
(A_{\hat{u}})_\xi & =\{ x \in \Omega : \, (x, \xi) \in A_{\hat{u}} \} = \{ x \in \Omega : \, x \in J_{\hat u_{\xi}} \} \qquad \text{for $\xi \in \mathbb{S}^{n-1}$}\,,
\\
(A_{\hat{u}})_x & = \{\xi \in \mathbb{S}^{n-1} : \, (x, \xi) \in A_{\hat{u}}\} = \{ \xi \in \mathbb{S}^{n-1}: x \in J_{\hat u_{\xi}} \} \qquad \text{for $x \in \Omega$}\,,
\end{align*}
equality~\eqref{e:keyprop3000} can be rewritten as
\begin{equation*}
\eta_\xi(\Omega\setminus (A_{\hat{u}})_\xi) =0 \qquad \text{for $\mathcal{H}^{n-1}$-a.e. $\xi \in \mathbb{S}^{n-1}$}.
\end{equation*}
By Lemma~\ref{l:meas10000} the set $A_{\hat{u}}$ is Borel. Thus, Proposition~\ref{p:coincidence} yields that
\[
\hat{\mathscr{I}}_{u}([\Omega \times \mathbb{S}^{n-1}] \setminus A_{\hat{u}}) = \int_{\mathbb{S}^{n-1}} \eta_\xi(\Omega \setminus (A_{\hat{u}})_\xi) \, \mathrm{d} \mathcal{H}^{n-1}(\xi) =0\,.
\]
Using disintegration~\eqref{e:keyprop5000} we obtain that
\begin{equation}
\label{e:keyprop4001}
\int_{\mathbb{S}^{n-1} \setminus (A_{\hat{u}})_x} f_x(\xi) \, \mathrm{d} \mathcal{H}^{n-1}(\xi) =0 \qquad \text{for $\mathscr{I}_{u,1}$-a.e.~$x \in \Omega$}.
\end{equation}
We notice that the set~$E$ in~\eqref{e:keyprop1000}--\eqref{e:keyprop7000} can be rewritten as
\[
E = \{x \in \Omega : \, \mathcal{H}^{n-1}((A_{\hat{u}})_x) = 0 \}\,.
\]
Hence, by~\eqref{e:keyprop4001}, for $\mathscr{I}_{u,1}$-a.e.~$x \in E$ we have $f_x = 0$ $\mathcal{H}^{n-1}$-a.e.~on $\mathbb{S}^{n-1}$. Since we know from Proposition \ref{p:fproposition} that $\int_{\mathbb{S}^{n-1}} f_x \, \mathrm{d} \mathcal{H}^{n-1}=1$ for $\mathscr{I}_{u,1}$-a.e.~$x \in \Omega$, we finally infer $\mathscr{I}_{u,1}(E)=0$. This concludes the proof of~\eqref{e:keyprop1000}.
\end{proof}
We now give the notion of one-dimensional radial oscillation around~$x$ and rigorously define the set $\text{Osc}_{u}( \rho)$.
\begin{definition}[One-dimensional radial oscillation around~$x$]
\label{d:oscillation}
Let $f \colon \mathbb{R} \to \mathbb{R}$ be measurable. We introduce the oscillation of $f$ at scale $r>0$ around the origin as
\begin{equation*}
\text{Osc}_r(f,\rho) := \inf_{\text{Lip}(\theta)\leq 1}\int_{-\rho/4}^{\rho/4} (|f(rt)-\theta(t)| \wedge 1) \, |t|^{n-1} \, \mathrm{d} t \, .
\end{equation*}
For $\Omega \subseteq \mathbb{R}^n$ open and $u \colon \Omega \to \mathbb{R}^{m}$ measurable, setting $\exp_{x,\xi}(t):= \exp_x(t\xi)$ and
\begin{displaymath}
\mathring{u}^\xi_x (t) := u( \exp_{x,\xi}(t) ) \cdot g( \exp_{x,\xi}(t), \dot{\exp}_{x,\xi}(t))\,,
\end{displaymath}
we define the \emph{oscillation of $u$ around $x \in \Omega$} as
\begin{equation*}
\label{e:defosc}
\text{Osc}(u,x,\rho):= \limsup_{r \searrow 0} \int_{\mathbb{S}^{n-1}} \text{Osc}_r(\mathring{u}^\xi_x,\rho) \, \mathrm{d} \mathcal{H}^{n-1} (\xi) \,.
\end{equation*}
\end{definition}
\begin{definition}
\label{d:oscillation2}
Given $\Omega \subseteq \mathbb{R}^n$ open, $u \colon \Omega \to \mathbb{R}^m$ measurable, and $\rho >0$ we define
\begin{equation*}
\text{Osc}_u(\rho) := \{x \in \Omega : \text{Osc}(u,x,\rho)>0 \}.
\end{equation*}
\end{definition}
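\begin{remark}
To illustrate Definition~\ref{d:oscillation}, consider the model one-dimensional jump $f := \mathbbm{1}_{(0,+\infty)}$, for which $f(rt)$ does not depend on $r>0$, and read the weight in the definition of $\text{Osc}_r$ as $|t|^{n-1}$. For every competitor $\theta$ with $\text{Lip}(\theta) \leq 1$ and every $t \in (0,\rho/4)$ we have $|\theta(t)-\theta(-t)| \leq 2t$, whence
\[
\big( |f(rt)-\theta(t)| \wedge 1 \big) + \big( |f(-rt)-\theta(-t)| \wedge 1 \big) \geq \big( |1-\theta(t)| + |\theta(-t)| \big) \wedge 1 \geq (1-2t)^{+}\,.
\]
Integrating against $t^{n-1}$ on $(0,\rho/4)$ we deduce $\text{Osc}_r(f,\rho) \geq \int_0^{\rho/4} (1-2t)^{+}\, t^{n-1}\, \mathrm{d}t > 0$ uniformly in $r>0$, which indicates how one-dimensional jumps are detected by the oscillation.
\end{remark}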
We are now in a position to prove Theorem~\ref{t:int1}.
\begin{proof}[Proof of Theorem \ref{t:int1}]
We observe the validity of the following implication for every $x \in \Omega$:
\begin{equation}
\label{e:rectiu1.1}
\mathcal{H}^{n-1}(\{\xi \in \mathbb{S}^{n-1} : x \in J_{\hat{u}_\xi} \})>0 \ \text{ implies } x \in \text{Osc}_{u} (\rho) \,.
\end{equation}
Indeed, setting $\psi_x(\xi):= \xi_{\varphi}(x)/|\xi_{\varphi}(x)|$, we know from property (4) of a family of curvilinear projections (see Definition~\ref{d:CP}) that the Jacobian of~$\psi_x$ is bounded away from zero on $\mathbb{S}^{n-1}$. Thus, we can write
\[
\int_{\mathbb{S}^{n-1}} \text{Osc}_r(\mathring{u}^\xi_x, \rho) \, \mathrm{d} \mathcal{H}^{n-1} (\xi) = \int_{\mathbb{S}^{n-1}} \text{Osc}_r(\mathring{u}^{\psi_{x} (\eta)}_x, \rho) J_\xi\psi_x(\eta) \, \mathrm{d} \mathcal{H}^{n-1} (\eta)\,.
\]
In addition, if we denote by $\Lambda_x := \{\xi \in \mathbb{S}^{n-1} : \, \xi = \psi_x(\eta) \text{ and } x \in J_{\hat{u}_\eta} \text{ for some }\eta \in \mathbb{S}^{n-1}\}$, the condition $\xi \in \Lambda_x$ implies, by the Ascoli--Arzel\`a theorem, that
\begin{equation}
\label{e:rectiu4000.1}
\lim_{r \searrow 0 } \text{Osc}_r(\mathring{u}^\xi_x, \rho) >0 \,.
\end{equation}
Since $\psi_x$ is a diffeomorphism of $\mathbb{S}^{n-1}$ onto itself, the condition $\mathcal{H}^{n-1}(\{\xi \in \mathbb{S}^{n-1} :\, x \in J_{\hat{u}_\xi} \})>0$ implies $\mathcal{H}^{n-1}(\Lambda_x)>0$. Therefore, inequality \eqref{e:rectiu4000.1} together with Fatou's lemma allows us to infer
\begin{equation*}
\liminf _{r \searrow 0} \int_{\mathbb{S}^{n-1}} \text{Osc}_r(\mathring{u}^\xi_x, \rho) \, \mathrm{d} \mathcal{H}^{n-1} (\xi) >0\,,
\end{equation*}
which yields $\text{Osc} (u, x, \rho) >0$, that is, $x \in \text{Osc}_u( \rho)$. We have thus proved~\eqref{e:rectiu1.1}.
Finally, we notice that \eqref{e:rectiu1.1} in combination with Proposition \ref{p:keyprop} tells us that $\mathscr{I}_{u,1}(\Omega \setminus \text{Osc}_u (\rho))=0$. This implies that $\mathscr{I}_{u, q}$ is integralgeometric for every $q \in [1, +\infty]$. Since we assumed the finiteness of~$\mathscr{I}_{u,p}$, the $(n-1)$-rectifiability of~$\mathscr{I}_{u,1}$ now follows from Theorem~\ref{t:rectheorem}.
In particular, we know that there exists a countably $(n-1)$-rectifiable subset $R$ of~$\Omega$ such that
\begin{equation*}
\label{e:euju1000}
\mathscr{I}_{u,1}(\Omega \setminus R)=0\,.
\end{equation*}
This condition together with the formula for integralgeometric measures given in Proposition \ref{p:coincidence} and applied to the Borel set $\Omega \setminus R$, yields that
\begin{equation*}
\label{e:euju6.1}
\eta_\xi(\Omega \setminus R)=0\qquad \text{for $\mathcal{H}^{n-1}$-a.e. $\xi \in \mathbb{S}^{n-1}$.}
\end{equation*}
From the definition of $\eta_\xi$ (cf. \eqref{e:defeta1}) we immediately infer
\begin{align*}
\label{e:euju1}
J_{\hat u^\xi_y} \cap \big ( \Omega^{\xi}_{y} \setminus R^{\xi}_{y} \big) =\emptyset \qquad \text{for $\mathcal{H}^{n-1}$-a.e. $\xi \in \mathbb{S}^{n-1}$, for $\mathcal{H}^{n-1}$-a.e. $y \in \xi^\bot$.}
\end{align*}
Hence, we have that~\eqref{e:int6.1} holds.
This concludes the proof of the theorem.
\end{proof}
\subsection{Slicing the jump set}
We start with a general proposition which relates the trace on rectifiable sets of a measurable function $u\colon \Omega \to \mathbb{R}^{m}$ with the traces of its one-dimensional slices. Since it is an adaptation of a rather standard argument, we postpone its proof to Appendix~\ref{appendix}.
\begin{proposition}
\label{c:relje}
Let $u \colon \Omega \to \mathbb{R}^m$ be measurable, let $\xi \in \mathbb{S}^{n-1}$, and let $P \colon \Omega \to \xi^\bot $ be a parametrized map. Assume that there exists a diffeomorphism $\tau \colon \mathbb{R} \to (-1,1)$ such that
$D_\xi \tau (u_\xi \circ \varphi) \in \mathcal{M}_b(\varphi^{-1}(\Omega))$. Then, for every countably $(n-1)$-rectifiable set $R \subseteq \Omega$ it holds true
\begin{equation}\label{e:corollary-relje}
\aplim_{\substack{z \to x \\ \pm(z-x) \cdot \nu_{R} (x) >0}} u_\xi(z) = \aplim_{s \to t^{ \pm \sigma(x) }}\hat{u}^\xi_y(s) \qquad \text{for $\mathcal{H}^{n-1}$-a.e.~$y \in \xi^{\bot}$ and for every $t \in R^\xi_y$}\,,
\end{equation}
whenever at least one of the two approximate limits above exists, and where $x = \varphi(y+t\xi)$, $\nu_{R} \colon R \to \mathbb{S}^{n-1}$ is a Borel measurable orientation of $R$, and $\sigma(x) := \text{\emph{sign}}(\nu_{R}(x) \cdot \xi_{\varphi} (x))$.
\end{proposition}
\begin{remark}
We notice that the equality \eqref{e:corollary-relje} does not depend on the choice of the orientation~$\nu_{R}$.
\end{remark}
Combining Theorem~\ref{t:int1} and Proposition~\ref{c:relje} we infer the following general structure result for the jump set of a measurable function~$u \colon \Omega \to \mathbb{R}^{m}$.
\begin{theorem}
\label{t:slicecoe}
Let $\Omega$ be an open subset of~$\mathbb{R}^{n}$, let $F \in C^{\infty} (\mathbb{R}^{n} \times \mathbb{R}^{n}; \mathbb{R}^{n})$ satisfy~\eqref{e:quadratic}, let $g \colon \Omega \times \mathbb{R}^{n} \to \mathbb{R}^{m}$ satisfy \eqref{G1}, let $u \colon \Omega \to \mathbb{R}^m$ be measurable, let $\tau \colon \mathbb{R} \to (-1,1)$ be a diffeomorphism, and let $(P_{\xi})_{\xi \in \mathbb{S}^{n-1}}$ be a family of curvilinear projections on $\Omega$. Suppose that the following conditions hold:
\begin{enumerate}
\item There exists~$p \in (1, +\infty]$ such that~$\mathscr{I}_{u,p}$ is finite;
\item There exists $\rho>0$ such that $\emph{Osc}_u (\rho)$ is $\sigma$-finite w.r.t.~$\tilde{\mathcal{I}}^{n-1}$;
\item $D_\xi \tau(u_\xi \circ \varphi_\xi) \in \mathcal{M}_b(\varphi_\xi^{-1}(\Omega))$ for $\mathcal{H}^{n-1}$-a.e. $\xi \in \mathbb{S}^{n-1}$.
\end{enumerate}
Then, we have that
\begin{equation}
\label{e:slicing1}
J_{\hat{u}^\xi_y} = (J_{u_\xi})^\xi_y, \ \ \text{for $\mathcal{H}^{n-1}$-a.e. $\xi \in \mathbb{S}^{n-1}$, $\mathcal{H}^{n-1}$-a.e. $y \in \xi^\bot$}.
\end{equation}
\end{theorem}
\begin{proof}
We start by showing that
\begin{equation}
\label{e:slicing2}
J_{\hat{u}^\xi_y} \subseteq (J_{u_\xi})^\xi_y \qquad \text{for $\mathcal{H}^{n-1}$-a.e.~$\xi \in \mathbb{S}^{n-1}$, $\mathcal{H}^{n-1}$-a.e.~$y \in \xi^\bot$}.
\end{equation}
By Theorem~\ref{t:int1} we know that there exists a countably $(n-1)$-rectifiable subset~$R$ of~$\Omega$ such that~\eqref{e:int6.1} holds.
By~\eqref{e:corollary-relje} applied to~$R$ we have that
\begin{equation}
\label{e:slicing3}
J_{\hat{u}^\xi_y} \cap R^\xi_y \subseteq (J_{u_\xi})^\xi_y, \ \ \text{for $\mathcal{H}^{n-1}$-a.e. $\xi \in \mathbb{S}^{n-1}$, $\mathcal{H}^{n-1}$-a.e. $y \in \xi^\bot$}.
\end{equation}
Hence,~\eqref{e:int6.1} and~\eqref{e:slicing3} imply~\eqref{e:slicing2}. In order to show the opposite inclusion we first make use of Theorem~\ref{t:delnin} to infer the countably $(n-1)$-rectifiability of $J_{u_\xi}$. Thus, by applying again~\eqref{e:corollary-relje} to~$J_{u_\xi}$ we immediately obtain that
\begin{equation*}
J_{\hat{u}^\xi_y} \supseteq (J_{u_\xi})^\xi_y, \ \ \text{for $\mathcal{H}^{n-1}$-a.e. $\xi \in \mathbb{S}^{n-1}$, $\mathcal{H}^{n-1}$-a.e. $y \in \xi^\bot$}.
\end{equation*}
This concludes the proof.
\end{proof}
The remaining part of this section is devoted to the relation between the one-dimensional jump set~$J_{\hat{u}^{\xi}_{y}}$ and the slices~$(J_{\mathfrak{u}})^{\xi}_{y}$ of the jump set of the function~$\mathfrak{u}$ introduced in Definition~\ref{d:mathfraku}.
\begin{definition}
Let~$\Omega$ be an open subset of~$\mathbb{R}^{n}$ and let $(P_{\xi})_{\xi \in \mathbb{S}^{n-1}}$ be a family of curvilinear projections on~$\Omega$. Given an $(n-1)$-rectifiable set $R \subseteq \Omega$ and $\xi \in \mathbb{S}^{n-1}$ we denote
\begin{equation*}
R^\xi := \left\{x \in R : \text{ there exists }\nu_{R}(x) \text{ and } \nu_{R}(x) \cdot \xi_{\varphi}(x) \neq 0 \right\}.
\end{equation*}
\end{definition}
We state two technical propositions whose proofs can be found in Appendix~\ref{appendix}.
\begin{proposition}
\label{p:r=rxi}
Let~$\Omega$ be an open subset of~$\mathbb{R}^{n}$ and let $(P_{\xi})_{\xi \in \mathbb{S}^{n-1}}$ be a family of curvilinear projections on~$\Omega$. Assume that $R \subseteq \Omega$ is $(n-1)$-rectifiable. Then, we have that
\begin{equation}
\label{e:r=rxi}
\mathcal{H}^{n-1} \big( \{\xi \in \mathbb{S}^{n-1} : \, \mathcal{H}^{n-1}(R \setminus R^\xi)>0 \} \big) = 0\,.
\end{equation}
\end{proposition}
\begin{proposition}
\label{p:prodmeas}
Let $u \colon \Omega \to \mathbb{R}^m$ be Borel measurable, let $(P_\xi)_{\xi \in \mathbb{S}^{n-1}}$ be a family of curvilinear projections on $\Omega$, let $R \subseteq \Omega$ be countably $(n-1)$-rectifiable, and let $\nu \colon R \to \mathbb{S}^{n-1}$ be a Borel measurable orientation. Assume that there exists a diffeomorphism $\tau \colon \mathbb{R} \to (-1,1)$ such that $D_\xi\tau(u_\xi \circ \varphi_\xi) \in \mathcal{M}_b(\varphi_\xi^{-1}(\Omega))$ for $\mathcal{H}^{n-1}$-a.e. $\xi \in \mathbb{S}^{n-1}$. If we set
\begin{equation}
\label{e:nrelje1}
\Delta := \bigg\{ (x,\xi) \in R \times \mathbb{S}^{n-1} : \ \aplim_{\substack{z \to x \\ \pm(z-x) \cdot \nu_{R} (x) >0}} u_\xi(z) = \aplim_{s \to t^{\pm\sigma(x)}_x}\hat{u}^\xi_{P_\xi(x)}(s) \bigg\}\,,
\end{equation}
(the existence of at least one of the above approximate limits in \eqref{e:nrelje1} is tacitly assumed) then
\begin{align}
\label{e:nrelje100}
(\mathcal{H}^{n-1} \restr R \otimes \mathcal{H}^{n-1} \restr \mathbb{S}^{n-1}) \big ( (R \times \mathbb{S}^{n-1}) \setminus \Delta \big) = 0\,.
\end{align}
\end{proposition}
Finally, we are in a position to prove Corollary \ref{c:int2}. We recall that, besides~\eqref{G1}, we now assume also condition~\eqref{G2} for $g$.
\begin{proof}[Proof of Corollary \ref{c:int2}]
By virtue of Theorem \ref{t:slicecoe} it is enough to prove that
\begin{equation}
\label{e:mainslicepro2}
(J_{u_\xi})^\xi_y = ( J_{\mathfrak{u}} )^\xi_y \qquad \text{for $\mathcal{H}^{n-1}$-a.e.~$\xi \in \mathbb{S}^{n-1}$, $\mathcal{H}^{n-1}$-a.e.~$y \in \xi^\bot$}.
\end{equation}
Since condition~\eqref{e:mainslicepro2} does not depend on the Lebesgue representative of~$u$ we may suppose that~$u$ is a Borel measurable function. Let $R \subseteq \Omega$ be the countably $(n-1)$-rectifiable set provided by Theorem~\ref{t:int1}.
Thanks to Proposition~\ref{p:prodmeas} and to Fubini's theorem we know that for $\mathcal{H}^{n-1}$-a.e.~$x \in R$ we have (remember that the existence of both approximate limits below is tacitly guaranteed)
\begin{equation}
\label{e:mainslicepro3}
\aplim_{\substack{z \to x \\ \pm(z-x) \cdot \nu_{R} (x) >0}} u_\xi(z) = \aplim_{s \to t^{\pm\sigma(x)}_x}\hat{u}^\xi_{P_\xi(x)}(s) \qquad \text{for $\mathcal{H}^{n-1}$-a.e.~$\xi \in \mathbb{S}^{n-1}$}.
\end{equation}
Therefore, we infer from \eqref{e:mainslicepro9} and \eqref{e:mainslicepro3} that for $\mathcal{H}^{n-1}$-a.e.~$x \in R$ we have
\begin{equation}
\label{e:mainslicepro5}
\mathcal{H}^{n-1}(\{\xi \in \mathbb{S}^{n-1} \, : \, x \in J_{u_\xi} \})>0.
\end{equation}
Using property \eqref{G2} of~$g$ and property (4) of the family of curvilinear projections, we infer from~\eqref{e:mainslicepro5} that for $\mathcal{H}^{n-1}$-a.e.~$x \in R$ we find $\{\xi^1,\dotsc,\xi^k\}$ (for some $1 \leq k \leq m$) and an open neighborhood $U$ of $x$ such that $x \in J_{u_{\xi^j}}$ for $j=1,\dotsc,k$ and such that
\begin{align}
\label{e:mainslice99999}
\text{span}\{g(z,\xi^1_{\varphi}(x)),\dotsc,g(z,\xi^k_{\varphi}(x))\} = \text{span}\{g(z,v) : v \in \mathbb{R}^n \}, \ \ \text{for }z \in U\,.
\end{align}
With no loss of generality, we may assume $k=\text{dim} (\text{span}\{g(x,\xi^1_{\varphi}(x)),\dotsc, g(x,\xi^k_{\varphi}(x))\})$; otherwise, we would just remove some of the $\xi^j$'s. Therefore, using the continuity of $g(\cdot,\cdot)$ and $ \xi_{\varphi}(\cdot)$, up to considering a smaller neighborhood $U$, we infer from \eqref{e:mainslice99999} that
\[
k=\text{dim} (\text{span}\{g(z,\xi^1_{\varphi}(z)),\dotsc, g(z,\xi^k_{\varphi}(z))\})=
\text{dim}(\text{span}\{g(z,v) : v \in \mathbb{R}^n \}), \ \ \text{for }z \in U.
\]
In particular we deduce
\begin{equation}
\label{e:mainslice99999.1}
\text{span}\{g(z,\xi^1_{\varphi}(z)),\dotsc, g(z,\xi^k_{\varphi}(z))\} = \text{span}\{g(z,v) : v \in \mathbb{R}^n \}, \ \ \text{for }z \in U.
\end{equation}
Using again the continuity of~$g$, condition \eqref{e:mainslice99999.1} gives continuous coefficients $\alpha_j \colon U \to \mathbb{R}$ such that $\mathfrak{u} (z) = \sum_j \alpha_j(z) u(z) \cdot g(z,\xi_{\varphi}^j(z))$ for $z \in U$. Therefore, we can write
\[
\begin{split}
\aplim_{\substack{z \to x \\ \pm(z-x) \cdot \nu_{R} (x) >0}} \mathfrak{u} (z) &= \sum_{j=1}^k \aplim_{\substack{z \to x \\ \pm(z-x) \cdot \nu_{R} (x) >0}} \alpha_j(z) \, u(z) \cdot g(z,\xi^j_\varphi(z))\\
&=\sum_{j=1}^k \alpha_j(x) \aplim_{\substack{z \to x \\ \pm(z-x) \cdot \nu_{R} (x) >0}} \, u(z) \cdot g(z,\xi^j_\varphi(z)).
\end{split}
\]
This gives $\mathfrak{u}^\pm (x) \in \mathbb{R}^m$ for which
\[
\aplim_{\substack{z \to x \\ \pm(z-x) \cdot \nu_{R} (x) >0}} \mathfrak{u} (z)= \mathfrak{u}^\pm (x)\,.
\]
Moreover, we cannot have $\mathfrak{u}^+(x)= \mathfrak{u}^-(x)$, as this would contradict the fact that $x \in J_{u_{\xi^j}}$ for $j=1,\dotsc,k$. Hence, we have that $x \in J_{\mathfrak{u}}$. Therefore, we have obtained that
\[
x \in J_{\mathfrak{u}} \qquad \text{for $\mathcal{H}^{n-1}$-a.e.~$x \in R$}.
\]
As a consequence we can infer that
\begin{equation}
\label{e:mainslicepro6}
(J_{u_\xi} \cap R)^\xi_y \subseteq (J_{\mathfrak{u}})^\xi_y \qquad \text{for $\mathcal{H}^{n-1}$-a.e.~$\xi \in \mathbb{S}^{n-1}$, $\mathcal{H}^{n-1}$-a.e.~$y \in \xi^\bot$}.
\end{equation}
Furthermore, the set~$R$ also satisfies~\eqref{e:int6.1}, which together with~\eqref{e:slicing1} gives
\begin{equation}
\label{e:mainslicepro7}
(J_{u_\xi} \cap R)^\xi_y = (J_{u_\xi})^\xi_y \qquad \text{for $\mathcal{H}^{n-1}$-a.e.~$\xi \in \mathbb{S}^{n-1}$, $\mathcal{H}^{n-1}$-a.e.~$y \in \xi^\bot$}.
\end{equation}
Combining \eqref{e:mainslicepro6} with \eqref{e:mainslicepro7} yields
\begin{equation}
\label{e:mainslicepro8}
(J_{u_\xi})^\xi_y \subseteq (J_{\mathfrak{u}})^\xi_y \qquad \text{for $\mathcal{H}^{n-1}$-a.e. $\xi \in \mathbb{S}^{n-1}$, $\mathcal{H}^{n-1}$-a.e. $y \in \xi^\bot$}.
\end{equation}
It remains to prove the opposite inclusion. We claim that
\begin{equation}
\label{e:mainslicepro10}
x \in J_{u_\xi} \qquad \text{for $\mathcal{H}^{n-1}$-a.e. $x \in J_{\mathfrak{u}}$, $\mathcal{H}^{n-1}$-a.e. $\xi \in \mathbb{S}^{n-1}$}.
\end{equation}
Indeed, suppose by contradiction that \eqref{e:mainslicepro10} does not hold true. Then, we find a set $B \subseteq J_{\mathfrak{u}}$ such that $\mathcal{H}^{n-1}(B)>0$ and
\begin{equation}
\label{e:mainslicepro13}
\mathcal{H}^{n-1}(\{ \xi \in \mathbb{S}^{n-1} \, : \, x \notin J_{u_\xi} \}) >0 \qquad \text{for $x \in B$}.
\end{equation}
In view of~\eqref{e:mainslicepro3} applied to the rectifiable set~$J_{\mathfrak{u}}$ (cf.~Theorem~\ref{t:delnin}), we find $x \in B$ and $\Sigma \subseteq \mathbb{S}^{n-1}$ with $\mathcal{H}^{n-1}(\Sigma)>0$ such that $x$ is a Lebesgue point of $u_\xi$ for every $\xi \in \Sigma$. Arguing as above, we find $\{\xi^1,\dotsc,\xi^k\}$ and continuous coefficients $\alpha_j \colon U \to \mathbb{R}$ such that $\mathfrak{u} (z) = \sum_j \alpha_j(z) u(z) \cdot g(z,\xi_{\varphi}^j(z))$ for every~$z$ in some open neighborhood of~$x$. Therefore, we can write
\[
\begin{split}
\aplim_{z \to x} \mathfrak{u} (z) &= \sum_{j=1}^k \aplim_{z \to x } \alpha_j(z) \, u(z) \cdot g(z,\xi^j_\varphi(z))\\
&=\sum_{j=1}^k \alpha_j(x) \aplim_{z \to x} \, u(z) \cdot g(z,\xi^j_\varphi(z)),
\end{split}
\]
from which we immediately deduce that $x$ is a Lebesgue point of~$\mathfrak{u}$. This gives a contradiction with the assumption $x \in J_{\mathfrak{u}}$ and proves claim~\eqref{e:mainslicepro10}.
Since~$u$ is assumed to be Borel measurable, arguing as in Proposition~\ref{p:prodmeas} we infer that the set $\{(x,\xi) \in J_{\mathfrak{u}} \times \mathbb{S}^{n-1} \, : \, x \in J_{u_\xi} \}$ is Borel measurable. Therefore, we infer from~\eqref{e:mainslicepro10} and from Fubini's theorem the following property
\begin{equation}
\label{e:mainslicepro11}
x\in J_{u_\xi} \qquad \text{for $\mathcal{H}^{n-1}$-a.e.~$\xi \in \mathbb{S}^{n-1}$, $\mathcal{H}^{n-1}$-a.e.~$x \in J_{\mathfrak{u}}$}.
\end{equation}
Condition~\eqref{e:mainslicepro11} immediately gives the opposite inclusion in~\eqref{e:mainslicepro8} and concludes the proof.
\end{proof}
\section{An example}
\label{s:applications}
In this section we show how the hypotheses of Corollary \ref{c:int2} are satisfied in the $BV^{\mathcal{A}}$-setting. In particular, we show how condition (2) can be ensured by means of Poincar\'e-type inequalities.
We start with some general preliminaries, which will be useful also in Section~\ref{s:GBD}. We consider a field $F \in C^\infty(\Omega \times \mathbb{R}^n; \mathbb{R}^n)$ satisfying~\eqref{e:quadratic}, a function $g \in C(\Omega \times \mathbb{R}^n; \mathbb{R}^m)$ fulfilling \eqref{G1}--\eqref{G2}, and an operator $\mathcal{E} \colon C^\infty(\Omega; \mathbb{R}^m) \to C^\infty(\Omega; \mathbb{R}^k)$. Suppose that $\mathcal{E}$ satisfies the following condition: there exists an increasing continuous function $c_{\mathcal{E}} \colon [0,+\infty) \to [0,+\infty)$ such that for every $a \in C^\infty(\Omega; \mathbb{R}^m)$ and every solution $\gamma \colon (-\tau,\tau) \to \mathbb{R}^n$ of $\ddot{\gamma} = F(\gamma,\dot{\gamma})$ it holds
\begin{equation}
\label{e:operator1}
\frac{\mathrm{d}}{\mathrm{d} t} [a(\gamma(t)) \cdot g(\gamma(t),\dot{\gamma}(t) )] \leq c_{\mathcal{E}}(|\dot{\gamma}(t)|) |\mathcal{E}(a)(\gamma(t))|, \qquad \text{for every } t \in (-\tau,\tau).
\end{equation}
We introduce the function space $\Xi(U)$. Given an open set $U \subset \Omega$ we define
\begin{equation*}
\Xi(U) := \Big\{a \in C^{\infty}(U;\mathbb{R}^m) :\, \|\mathcal{E}(a)\|_{L^\infty(U;\mathbb{R}^{k})} \leq \frac{1}{c_{\mathcal{E}} (2)} \Big\}.
\end{equation*}
In verifying condition (2) of Theorem \ref{t:int1}, instead of looking at $\text{Osc}_u (\rho)$ it is usually easier to control the size of the following set
\begin{equation*}
[\text{Osc}]_u (\rho) := \{ x \in \Omega : [\text{Osc}](u,x,\rho)>0\},
\end{equation*}
where
\begin{equation*}
[\text{Osc}](u,x,\rho):= \limsup_{r \to 0^+} \inf_{a \in \Xi(\mathrm{B}_{\rho/2} (0))} \int_{\mathrm{B}_{\rho/2}(0)} |u_{r,x}-a| \wedge 1 \, \mathrm{d} z,
\end{equation*}
and $u_{r,x} \colon \mathrm{B}_1(0) \to \mathbb{R}^m$ is defined as $u_{r,x}(z):=u(x+rz)$. The following proposition holds.
\begin{proposition}
\label{p:application1}
Let $\rho>0$ and $(P_\xi)_{\xi \in \mathbb{S}^{n-1}}$ be a family of curvilinear projections on~$\Omega$. Then, we have
\begin{equation*}
\emph{Osc}_u (\rho) \subseteq [\emph{Osc}]_u (\rho)\,.
\end{equation*}
\end{proposition}
\begin{proof}
By applying the Coarea Formula with the map $\phi_x$ given in \eqref{e:retr12} and property \eqref{e:operator1}, it is not difficult to verify that for every sufficiently small $r>0$ we have
\begin{equation*}
\int_{\mathbb{S}^{n-1}} \text{Osc}_r(\mathring{u}^\xi_x, \rho)\, \mathrm{d} \mathcal{H}^{n-1}(\xi) \leq \inf_{a \in \Xi(\mathrm{B}_{\rho/2}(0))} \int_{\mathrm{B}_{\rho/2}(0)} |u_{r,x} - a| \wedge 1 \, \mathrm{d} z,
\end{equation*}
where we have used that $|\dot{\exp}_{r,x}(t\xi)| \leq 2$ for $(\xi,t) \in \mathbb{S}^{n-1} \times (-1,1)$ whenever $r$ is sufficiently small.
\end{proof}
The previous proposition tells us that the $\sigma$-finiteness of $\text{Osc}_u (\rho) $ can be deduced from the $\sigma$-finiteness of $[\text{Osc}]_u (\rho)$. This latter condition is typically guaranteed by the validity of Poincar\'e-type inequalities.
\subsection{Complex-elliptic operators satisfying a mixing condition} A first example is provided by choosing $\mathcal{E}:= \mathcal{A}$, where $\mathcal{A}$ is a (first-order) complex-elliptic operator. We briefly recall that an operator of the form
\[
\mathcal{A}(a)(x)=\sum_{i=1}^n A_i \partial_i a(x) \in C^\infty(\Omega;\mathbb{R}^k), \ \ \text{for }a \in C^\infty(\Omega;\mathbb{R}^m),
\]
for suitable linear maps $A_i \colon \mathbb{R}^m \to \mathbb{R}^k$, is called complex-elliptic if and only if the complexification of its principal symbol $\mathbb{A}(\zeta) := \sum_{i=1}^n \zeta_i A_i $ ($\zeta \in \mathbb{C}^n$) satisfies the inequality
\[
|\mathbb{A}(\zeta)v| \geq c |\zeta|\,|v|, \ \ \text{for every } \zeta \in \mathbb{C}^n \text{ and } v \in \mathbb{C} \otimes \mathbb{R}^m,
\]
for some constant $c>0$. In this case the kernel of $\mathcal{A}$ can be completely characterized, in the sense that there exists a positive integer $\ell = \ell(\mathcal{A})$ such that whenever $\mathcal{A}(u)=0$ holds true in the sense of distributions on $\Omega$, then $u$ is a polynomial map from $\Omega$ with values in $\mathbb{R}^m$ of degree at most $\ell$ (cf.~\cite{Arr-Sk, smith}). As shown in \cite{Gme19}, such a characterization leads to the following Poincar\'e inequality: for any $x \in \Omega$ we find $\ell = \ell(\mathcal{A}) \in \mathbb{N} \setminus \{0\}$ and a polynomial $p_{x,r}$ of degree at most $\ell -1$ such that
\[
\|u -p_{x,r}\|_{L^{\frac{n}{n-1}}({\rm B}_r(x))} \leq c |\mathcal{A}(u)|({\rm B}_r(x)), \ \ \text{for $u \in BV^{\mathcal{A}}(\Omega)$ and ${\rm B}_r(x) \subset \Omega$},
\]
for some constant $c=c(n,\mathcal{A})>0$. In particular, by investigating the asymptotic behaviour of the coefficients of $p_{x,r}$ for $r \to 0^+$ it is possible to prove the following proposition regarding the oscillation of $u$ (cf.~\cite{Arr-Sk}).
\begin{proposition}
\label{p:application2}
Let $\mathcal{A}$ be a first-order complex-elliptic operator and let $u \in BV^\mathcal{A}(\Omega)$. Then for every $x \in \Omega$ satisfying $\Theta^{*n-1}(|\mathcal{A}(u)|,x)=0$ we have
\begin{equation*}
\lim_{r \searrow 0} \, \inf_{a \in \mathbb{R}^m} \int_{\mathrm{B}_1(0)} |u_{r, x} - a |^{\frac{n}{n-1}} \, \mathrm{d} z =0\,,
\end{equation*}
where $\Theta^{*n-1}$ denotes the $(n-1)$-dimensional upper-density and $u_{r, x}(z):=u(x+rz)$.
\end{proposition}
In addition, if $\mathcal{A}$ satisfies the \emph{mixing condition} introduced in~\cite{arr, Spe-VS}, it is possible to prove that for every $\xi \in \mathbb{R}^{n}$ there exist $e \in \mathbb{R}^m$ and $w \in \mathbb{R}^{k}$ such that $w$ is a rank-one covector and $(\xi,e)$ forms a \emph{spectral pair} (cf.~\cite{arr}). In particular, this implies that for every $x \in \Omega$ and $t \in \mathbb{R}$ for which $x +t\xi \in \Omega$ we have
\begin{equation}
\label{e:spectral1}
\frac{\mathrm{d}}{\mathrm{d} t} [a(x+t\xi) \cdot e] = w \cdot \mathcal{A}(a)(x +t\xi), \qquad \text{for } a \in C^\infty(\Omega;\mathbb{R}^m).
\end{equation}
By possibly dividing both sides of \eqref{e:spectral1} by the product $(|e| \vee 1) \, (|w| \vee 1)$, we may suppose with no loss of generality that $|e| \leq 1$ and $|w| \leq 1$. Therefore, we can consider a map $p \colon \mathbb{R}^n \to \mathbb{R}^m$ which selects for each $\xi \in \mathbb{R}^n$ a vector $e_\xi \in \mathbb{R}^m$ with $|e_\xi| \leq 1$ for which \eqref{e:spectral1} holds true for some $w \in \mathbb{R}^k$ with $|w| \leq 1$. By choosing the field $F :=0$ and the map $g(x,\xi):=p(\xi)$, we see that condition~\eqref{e:operator1} is satisfied with $c_{\mathcal{E}}(\cdot)$ constantly equal to~$1$. This means that condition~(2) in Corollary~\ref{c:int2} is guaranteed by Propositions~\ref{p:application1} and~\ref{p:application2} together with the $\sigma$-finiteness of $\{x \in \Omega : \Theta^{*n-1}(|\mathcal{A}(u)|,x)>0 \}$ w.r.t.~$\mathcal{H}^{n-1}$ and the general formula (cf.~\cite[Corollary 2.10.11]{fed})
\[
\int_{\xi^\bot} \mathcal{H}^0(E \cap P^{-1}(y))\, \mathrm{d} \mathcal{H}^{n-1}(y) \leq \text{Lip}^{n-1}(P)\, \mathcal{H}^{n-1}(E) \qquad \text{for every } E \subseteq \Omega\,.
\]
Conditions~(1) and~(3) are instead a direct consequence of the slicing representation~\cite{arr}. In particular, Corollary~\ref{c:int2} applies for every (first-order) complex-elliptic operator satisfying the above mentioned mixing condition. We further notice that in this case we can make use of \cite[Remark~2.2]{arr} to infer that the map~$\mathfrak{u}$ introduced in Definition~\ref{d:mathfraku} coincides with~$u$.
\section{Generalised bounded deformation}
\label{s:GBD}
In this section we show how Corollary \ref{c:int2} can be applied to the space of vector fields having generalised bounded deformation on manifolds. We assume that $\Omega$ is an open subset of $\mathbb{R}^n$, $\{e_1,\dotsc,e_n\}$ is the canonical basis of $\mathbb{R}^n$, and that $g \colon \Omega \times \mathbb{R}^n \to \mathbb{R}^n$ is the projection on the second component, namely, $g(x,z):=z$. We point out that with this choice of $g$ the local constant rank property \eqref{G2} is trivially satisfied. Moreover we fix a field $F \in C^{\infty} (\mathbb{R}^{n} \times \mathbb{R}^{n}; \mathbb{R}^{n})$ satisfying
\begin{enumerate}[label=(Q),ref=Q]
\item \label{hp:F} $F$ is a quadratic form in the second variable, that is, for every $x \in \mathbb{R}^{n}$ and every $v_1,v_2 \in \mathbb{R}^n$
\begin{equation*}
F(x,v_1 + v_2) + F(x,v_1 - v_2) = 2 F(x,v_1) +2 F(x,v_2)\,.
\end{equation*}
\end{enumerate}
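Before proceeding, let us illustrate~\eqref{hp:F} with a model example from Riemannian geometry (our illustration; it is not needed in what follows): the field governing the geodesic equation of a smooth metric.

```latex
% Illustration: geodesic field of a smooth Riemannian metric on R^n,
% with Christoffel symbols \Gamma^k_{ij} = \Gamma^k_{ji}.
F^{k}(x,v) := -\sum_{i,j=1}^{n} \Gamma^{k}_{ij}(x)\, v^{i} v^{j},
\qquad k = 1,\dotsc,n\,.
```

Each component of $F$ is a quadratic form in $v$, and the parallelogram identity in~\eqref{hp:F} holds because the mixed terms $\mp 2\,\Gamma^{k}_{ij}(x)\, v_1^{i} v_2^{j}$ produced by $F(x,v_1+v_2)$ and $F(x,v_1-v_2)$ cancel upon summation.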
For later convenience we associate to $F$ a map $F^q \colon \mathbb{R}^n \to \text{Lin}(\mathbb{R}^n \otimes \mathbb{R}^n \otimes \mathbb{R}^n;\mathbb{R})$ in the following way
\begin{equation*}
F^q(x)(v_1 \otimes v_2 \otimes v_3) := \frac{v_3}{2} \cdot (F(x,v_1+v_2) -F(x,v_1) -F(x,v_2) ) \qquad v_1,v_2,v_3 \in \mathbb{R}^n.
\end{equation*}
It is worth noting that, under our hypothesis~\eqref{hp:F}, for every $v_3 \in \mathbb{R}^n$ the map $(v_1,v_2) \mapsto F^q(x)(v_1 \otimes v_2 \otimes v_3)$ is symmetric and hence can be represented as an element of $\mathbb{M}^{n \times n}_{sym}$. For this reason we can write
\begin{equation*}
F^q(x)(v_1 \otimes v_2 \otimes v_3)= (v_3 \cdot F^q(x))v_1 \cdot v_2, \ \ v_1,v_2,v_3 \in \mathbb{R}^n,
\end{equation*}
for a suitable $(v_3 \cdot F^q(x))\in \mathbb{M}^{n \times n}_{sym}$ depending on $v_3$. Given $r>0$ and a point $x \in \mathbb{R}^n$ we define $F_{r, x} \colon \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^n$ as $F_{r, x}(z,v):= r F ( x + r z , v )$ and analogously $F^q_{r, x} \colon \mathbb{R}^n \to \text{Lin}(\mathbb{R}^n \otimes \mathbb{R}^n \otimes \mathbb{R}^n; \mathbb{R})$ as $F^q_{r, x}(z):= rF^q(x+rz)$.
\subsection{The Rigid Interpolation condition}
As will be shown in Section \ref{sub:poincare}, in order to apply Corollary \ref{c:int2} we need to assume a further condition on the field $F$, which we call \emph{Rigid Interpolation}. At this point it is convenient to introduce some notation. Given $z \in {\rm B}_1(0)$ and $r>0$, the symbol $\mathcal{S}_{0,z}$ denotes the set $\{z+e_0, \dotsc,z+e_n \}$, where we have set $e_0:=0$, and for $0 \leq i < j \leq n$ we define $t \mapsto \ell_{z, r, ij}(t)$ as the curve $\gamma(\cdot)$ (whenever it is well defined) satisfying
\begin{equation}
\label{e:poincare15000}
\begin{cases}
\ddot{\gamma}(t) = F_{r, x}(\gamma(t),\dot{\gamma}(t)), \ t \in [0,t_{ij}], \ \text{for some }t_{ij}>0 & \\
\gamma(0)=z+e_i, \ \gamma(t_{ij})=z+e_j &\\
|\dot{\gamma}(0)|= 1, &
\end{cases}
\end{equation}
where $F_{r,x}(z,v):=rF(x+rz,v)$.
\begin{remark}
The existence of the curves $\ell_{z,r,ij}$ for sufficiently small $r$ depending on $x$ can be made rigorous along the lines of Lemma \ref{l:exp}. More precisely, if we denote by $\text{inj}_{r,x}(z)$ the injectivity radius of Definition \ref{d:inj} with $F$ replaced by $F_{r,x}$, we have that $\text{inj}_{r,x}(z) \to \infty$ as $r \to 0^+$ uniformly for $z \in \mathrm{B}_1(0)$.
\end{remark}
The symbol $\mathcal{S}_{r,1,z}$ denotes the 1-dimensional geodesic skeleton of $\mathcal{S}_{0,z}$, namely,
\[
\mathcal{S}_{r,1,z} := \{ h \in \mathbb{R}^n : \, h = \ell_{z, r, ij}(t) \ \text{for some }t \in [ 0,t_{ij}] \text{ and } \ i \neq j \}.
\]
Moreover for $0 \leq i <j \leq n$ we define
\[
\xi_{r,ij}(z):= \dot{\ell}_{z, r,ij}(0) \ \ \text{ and } \ \ \xi_{r,ji}(z):= \dot{\ell}_{z, r,ij}(t_{ij}).
\]
We consider the semi-norm $E_{r, z} \colon \mathbb{R}^{(n+1) \times n} \to [0,\infty)$ defined as follows
\[
E_{r, z} (w) := \sum_{0 \leq i < j \leq n} |w^j \cdot \xi_{r,ji}(z) - w^i \cdot \xi_{r,ij}(z)| \qquad \text{for $w \in \mathbb{R}^{(n+1) \times n}$,}
\]
where $w^{i}$ denotes the $i$-th column of the matrix~$w$. Finally, we denote by $\mathcal{S}_{n,z}$ the convex hull of $\mathcal{S}_{0,z}$. Observing that every~$z \in \mathrm{B}_1(0)$ with $z \cdot e_i < 0$ for every $i=1,\dotsc,n$ satisfies $\mathcal{S}_{n,z}\subset \mathrm{B}_1(0)$ and that $
\mathcal{L}^n(\{ z \in \mathrm{B}_1(0) : \, z \cdot e_i < 0, \ i=1,\dotsc,n \})=2^{-n}\omega_n$, we infer from elementary geometric considerations that there exists a dimensional constant $0<\rho(n) \leq 1$ such that $2^{n+1}\mathcal{L}^n(Q(n)) \geq \omega_n$, where
\begin{equation}
\label{e:rip1}
Q(n):= \{z \in \mathrm{B}_1(0) :\, \mathrm{B}_{\rho(n)}(0) \subset \mathring{\mathcal{S}}_{n,z} \subset \mathcal{S}_{n,z} \subset \mathrm{B}_1(0) \} \,.
\end{equation}
We are now in a position to state the required Rigid Interpolation property of $F$:
\begin{enumerate}[label=(RI),ref=RI]
\item \label{hp:F2} Given $x \in \mathbb{R}^n$ there exists a radius $r_x>0$ such that for every $z \in Q(n)$, $w \in \mathbb{R}^{(n+1) \times n}$, and $0 < r \leq r_x$, we find a smooth map $a_r \colon \mathrm{B}_1(0) \to \mathbb{R}^n$ for which
\begin{align}
\label{e:rip4}
& \ \ \ \ \ \ \ \ a_r(z+e_i)=w^i \qquad \text{ for every $i=0,\dotsc,n$,} \\
\label{e:rip5}
&\|\tilde{e}(a_r) - a_r \cdot F^q_{r , x} \|_{L^{\infty}(\mathcal{S}_{n,z}; \mathbb{M}^{n}_{sym})} \leq c(n) E_{r, z} (w)\,,
\end{align}
where $c(n)>0$ is a dimensional constant and $\tilde{e}(a_r)$ denotes the symmetric gradient of $a_r$.
\end{enumerate}
\begin{remark}
In the manifold setting considered in the next section, we will see that the operator $\mathcal{E}:= \tilde{e}(\cdot) - (\cdot) \cdot F^q \colon C^{\infty}(\Omega;\mathbb{R}^n) \to C^\infty(\Omega;\mathbb{M}^n_{sym})$ coincides with the curvilinear symmetric gradient. We further point out that $\mathcal{E}$ satisfies~\eqref{e:operator1}.
\end{remark}
\subsection{Definition of the space}
We start with a preliminary proposition.
\begin{proposition}
Let $\Omega$ be an open bounded subset of~$\mathbb{R}^{n}$, let $\xi \in \mathbb{S}^{n-1}$, let $P_\xi\colon \Omega \to \xi^{\bot}$ be a curvilinear projection on~$\Omega$, and let $u \colon \Omega \to \mathbb{R}^n$ be a measurable function. Then, for every $B \in \mathcal{B}( \Omega)$ the function
\[
y \mapsto |{\rm D} \hat{u}^{\xi}_{y} | (B^{\xi}_{y} \setminus J^{1}_{\hat{u}^{\xi}_{y}}) + \mathcal{H}^{0} (B^{\xi}_{y} \cap J^{1}_{\hat{u}^{\xi}_{y} })
\]
is $\mathcal{H}^{n-1}$-measurable on $\xi^\bot$.
\end{proposition}
\begin{proof}
Letting $v(x):= (u_\xi \circ \varphi_\xi)(x)$ for every $x \in \varphi_\xi^{-1}(\Omega)$ and using identity \eqref{e:sliceide}, namely, $v(y+t\xi)=\hat{u}^\xi_y(t)$ for $y \in \xi^\bot$ and $t \in \Omega^\xi_y$ (notice that $\Omega^\xi_y= \{t \in \mathbb{R} : \ y+t\xi \in \varphi_\xi^{-1}(\Omega)\}$), the claimed measurability follows from~\cite[Lemma 3.6]{dal}.
\end{proof}
We are now in a position to define~$GBD_{F}(\Omega)$.
\begin{definition}[The space $GBD_{F}(\Omega)$]
\label{d:GBD}
Let $\Omega$ be an open subset of~$\mathbb{R}^{n}$. We say that a measurable function~$u \colon \Omega \to \mathbb{R}^{n}$ belongs to $ GBD_{F}(\Omega)$ if there exists $\lambda \in \mathcal{M}_{b}^{+}(\Omega)$ such that for every open subset $U$ of~$\Omega$, every $\xi \in \mathbb{S}^{n-1}$, and every curvilinear projection~$P_{\xi}\colon U \to \xi^{\bot}$ on~$U$, the following facts hold true:
\begin{enumerate}
\item\label{e:slice-1} $\hat{u}^{\xi}_{y} \in BV_{loc} (U^{\xi}_{y})$ for $\mathcal{H}^{n-1}$-a.e.~$y \in \xi^{\bot}$;
\vspace{1mm}
\item\label{e:slice-2} for every Borel subset $B \in \mathcal{B}(U)$
\begin{align*}
\int_{\xi^{\bot}} \Big( |{\rm D} \hat{u}^{\xi}_{y} | (B^{\xi}_{y} \setminus J^{1}_{\hat{u}^{\xi}_{y}}) & + \mathcal{H}^{0} (B^{\xi}_{y} \cap J^{1}_{\hat{u}^{\xi}_{y} } ) \Big)\, \mathrm{d} \mathcal{H}^{n-1}(y)
\leq \|\dot{\varphi}_\xi\|^2_{L^\infty}{\rm Lip}(P_{\xi};U)^{n-1} \lambda(B)\,.
\end{align*}
\end{enumerate}
\end{definition}
\begin{remark}
\label{r:nontrivial}
We notice that, thanks to the construction in Section~\ref{sub:curvpro} (see Theorem~\ref{p:curvpro}), for every open set $\Omega \subseteq \mathbb{R}^{n}$ there exists an at most countable family
\[
\big\{ \big(x_{i}, r_{i}, (P_{\xi, x_{i}})_{\xi \in \mathbb{S}^{n-1}}\big): \, i \in I \big\}
\]
such that $\{ {\rm B}_{r_{i}} (x_{i})\}_{i \in I}$ is a cover of~$\Omega$ and $(P_{\xi, x_{i}})_{\xi \in \mathbb{S}^{n-1}}$ is a family of curvilinear projections in~${\rm B}_{r_{i}} (x_{i})$. This justifies requirements~$(1)$ and~$(2)$ in Definition~\ref{d:GBD} and shows that the set~$GBD_{F}(\Omega)$ is nontrivial.
\end{remark}
\subsection{A weak Poincar\'e's inequality}
\label{sub:poincare}
In this subsection we show that a Poincar\'e inequality holds true in $GBD_F$ as a consequence of \eqref{hp:F2}.
\begin{theorem}
\label{t:poincare}
Let $F \in C^{\infty} ( \mathbb{R}^n \times \mathbb{R}^n ; \mathbb{R}^n)$ satisfy~\eqref{hp:F}--\eqref{hp:F2} and let $u \in GBD_F(\Omega)$ with $\lambda \in \mathcal{M}^+_b(\Omega)$ given by Definition \ref{d:GBD}. Then, there exists a dimensional constant $c(n)>0$ such that if $\Theta^{*n-1}(\lambda,x)=0$ for some $x \in \Omega$, then we find a radius $r_x>0$ and, for every $0<r\leq r_x$, a smooth map $a_r \colon \mathrm{B}_1(0) \to \mathbb{R}^n$ satisfying
\begin{align}
\label{e:poincare1000}
&\|\tilde{e}(a_r) - a_r \cdot F^q_{r, x} \|_{L^{\infty}({\rm B}_{\rho(n)/2}(0); \mathbb{M}^{n}_{sym})} \leq c(n) r^{1-n}\lambda_r ({\rm B}_1(0)) \\
\label{e:poincare2000}
& \ \ \ \ \int_{{\rm B}_{\rho(n)/2}(0)} |u_{r, x}-a_r| \wedge 1 \, \mathrm{d} z \leq c(n) r^{1-n} \lambda_r ({\rm B}_1(0))\, ,
\end{align}
where $\rho(n)>0$ is the dimensional constant defined by condition \eqref{e:rip1}, $\lambda_r := \psi_{r, x \sharp}\lambda$, and $\psi_{r, x} \colon \mathrm{B}_r(x) \to \mathrm{B}_1(0)$ is defined as $\psi_{r, x}(z):= (z-x)/r$.
\end{theorem}
\begin{proof}
We notice that $u_{r, x} \in GBD_{F_{r, x}}({\rm B}_1(0))$ and that $\lambda_r$ can be chosen as the corresponding measure in (2) of Definition~\ref{d:GBD}.
It is convenient to fix some notation. For $z \in Q(n)$ (cf.~\eqref{e:rip1}) and $v \colon {\rm B}_1(0) \to \mathbb{R}^n$ we define $w_{v(z)} \in \mathbb{R}^{(n+1) \times n}$ as
\[
w_{v(z)}:= (v(z+e_0), v(z+e_1), \dotsc, v(z+e_n))
\]
and for every $i=0,\dotsc,n$ we set
\[
w^i_{v(z)}:= v(z+e_i).
\]
For $z \in {\rm B}_{1}(0)$, $r>0$, and $\xi \in \mathbb{R}^{n}$, similarly to Definition~\ref{d:exp} we set $\exp_{r, z}(\xi) := \gamma(1)$, where $\gamma$ is the unique solution of
\begin{displaymath}
\left\{\begin{array}{lll}
\ddot{\gamma} (t) = F_{r, x} (\gamma(t), \dot{\gamma} (t)) \,,\\[1mm]
\gamma(0) = z\,,\\[1mm]
\dot{\gamma} (0) = \xi\,.
\end{array}\right.
\end{displaymath}
As $F_{r, x} \to 0$ in $C^{\infty}_{loc} (\mathbb{R}^{n} \times \mathbb{R}^{n}; \mathbb{R}^{n})$ as $r \searrow 0$, there exists $r_{x}>0$ such that~$\exp_{r, z} (\xi)$ is well defined for every $\xi \in {\rm B}_{4} (0)$ and every $z \in {\rm B}_{1}(0)$. Arguing as in Lemma~\ref{l:exp}, we may as well assume that $\exp_{r, z}$ is a diffeomorphism of~${\rm B}_{4}(0)$ onto its image, and that $\exp_{r, z}({\rm B}_{4}(0)) \supseteq {\rm B}_{2}(z)$ for $0 < r < r_{x}$ and $z \in {\rm B}_{1}(0)$. Hence, we may define for $w \in {\rm B}_{2}(z)$
\begin{align*}
\phi_{r, z} (w) := \frac{ \exp^{-1}_{r, z} (w)}{ |\exp^{-1}_{r, z} (w)|} \,, \qquad
\chi_{r, z} (w) := \dot{\exp}_{r, z} (t \phi_{r, z} (w) ) |_{t = |\exp^{-1}_{r, z} (w)|} \,.
\end{align*}
Finally, we set
\begin{align}
\label{e:not1}
\hat{u}^\xi_{r, z }(t)&:=u_{r, x}(\exp_{r, z}(t\xi)) \cdot \dot{\exp}_{r, z}(t\xi) \qquad \text{for $t >0$ and $\xi \in \mathbb{S}^{n-1}$,} \\
B^\xi_{r, z} &:= \{t >0 : \, \exp_{r, z}(t\xi) \in B \} \qquad \text{for $B \in \mathcal{B}( {\rm B}_1(0) )$.} \label{e:not2}
\end{align}
We notice that, up to fixing a smaller $r_{x}>0$, $\hat{u}^{\xi}_{r, z}(t)$ is well defined for $t \in {\rm B}_{1}(0)^{\xi}_{r, z}$.
We subdivide the proof of the theorem into three main steps.
\noindent\underline{{\em Step 1}.} We now prove that, up to redefining $r_{x}>0$, there exists a constant $c(n) >0$ such that for every $0 < r \leq r_{x}$
\begin{equation}
\label{e:poincare3000}
\int_{Q(n)} E^1_{r, z}(w_{u_{r, x}(z)}) \, \mathrm{d} z \leq c(n) \lambda_r({\rm B}_1(0))\,,
\end{equation}
where $E^1_{r,z}$ denotes the truncated version of $E_{r,z}$, namely,
\[
E^1_{r, z} (w) := \sum_{0 \leq i < j \leq n} |w^j \cdot \xi_{r,ji}(z) - w^i \cdot \xi_{r,ij}(z)| \wedge 1 \qquad \text{for $w \in \mathbb{R}^{(n+1) \times n}$}.
\]
Fix $i,j \in \{0,\dotsc,n \}$ with $i < j$ and define $\xi := (e_j -e_i)/|e_j-e_i|$. Given $s \in (-1,1)$, we consider the $(n-1)$-dimensional affine space $\xi^\bot_s:= \xi^\bot + s\xi$ and the vector field $v_{s, r} \colon \xi^\bot \to \mathbb{S}^{n-1}$ given by $v_{s,r}(y):= \phi_{r, y+s \xi}(y+s\xi+(e_j-e_i))$. If we consider the flow $G_r \colon \mathbb{R}^n \times \mathbb{R} \times \mathbb{R}^n \to \mathbb{R}^n \times \mathbb{R}^n$ relative to the system
\begin{equation}
\label{e:poincare14000}
\begin{cases}
\ddot{\gamma}(t) = F_{r, x}(\gamma(t),\dot{\gamma}(t))&\\
\gamma(0)= x, \ x \in \mathbb{R}^n &\\
\dot{\gamma}(0)= v, \ v \in \mathbb{R}^n,
\end{cases}
\end{equation}
then because of the convergence $F_{r, x} \to 0$ in $C^\infty_{loc}(\mathbb{R}^n \times \mathbb{R}^{n} ; \mathbb{R}^{n})$ as $r \searrow 0$, we see that
\begin{equation}
\label{e:Gr}
G_r( w,t,v) \to (w+tv,v) \qquad \text{in } C^\infty_{loc}(\mathbb{R}^n \times \mathbb{R} \times \mathbb{R}^n ; \mathbb{R}^{n} \times \mathbb{R}^{n} ) \text{ as $r \searrow 0$}.
\end{equation}
As a consequence, if we define $\varphi_{\xi, r} \colon \mathbb{R}^n \to \mathbb{R}^n$ in such a way that $\varphi_{ \xi, r}(y+(t+s)\xi) = \pi_1(G_r(y+s\xi,t,v_{ s, r}(y)))$ for $y \in \xi^\bot \cap {\rm B}_1(0)$ and $t \in (-\tau,\tau)$ (for a suitable $\tau>0$ which can be chosen independently of~$r$ and where $\pi_1 \colon \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^n$ denotes the orthogonal projection on the first coordinate), then for every sufficiently small $r>0$ we find a Lipschitz map $P_{\xi,s,r} \colon {\rm B}_1(0) \to \xi^\bot$ whose Lipschitz constant converges to $1$ as $r \searrow 0$ and whose level sets are exactly the curves $t \mapsto \varphi_{\xi , r }(y+(t+s)\xi)$ for every $y \in \xi^\bot \cap {\rm B}_1(0)$. Arguing as in Section~\ref{sub:curvpro} and using~\eqref{e:Gr}, we get that for every sufficiently small $r>0$, every $s \in (-1,1)$, and every $\xi \in \mathbb{S}^{n-1}$ the map~$P_{\xi,s,r}$ is a curvilinear projection with respect to
$F_{r, x}$ parametrized by~$\varphi_{\xi, r}$. In addition, writing~$z \in Q(n)$ as $z = y + s\xi - e_{i}$ for some $y \in \xi^{\bot} \cap {\rm B}_{1}(0)$ we deduce from the uniqueness of the solution of~\eqref{e:poincare15000} and~\eqref{e:poincare14000} that $\varphi_{\xi, r}(y+(t+s)\xi)= \ell_{z, r,ij}(t)$ for every $t \in [0,t_{ij}]$. In particular, we have
\begin{align*}
\varphi_{\xi, r }(y+s\xi)=z+e_i \qquad &\text{ and } \qquad \varphi_{\xi,r}(y+(t_{ij}+s)\xi)= z+e_j \\
\xi_{r,ij}(z) = \dot{\varphi}_{\xi, r}(y+s\xi) \qquad &\text{ and } \qquad \xi_{r,ji}(z) = \dot{\varphi}_{\xi,r}(y+(t_{ij}+s)\xi).
\end{align*}
Therefore, from the definition of $GBD_{F_{r, x}}({\rm B}_1(0))$ we can write by Fubini's Theorem
\[
\begin{split}
&\int_{Q(n)}|w_{u_{r, x}(z)}^j \cdot \xi_{r,ji}(z) - w_{u_{r, x}(z)}^i \cdot \xi_{r,ij}(z)| \wedge 1 \, \mathrm{d} z\\
&= \int_{-1}^1 \bigg( \int_{\xi^\bot+s\xi-e_i} |w_{u_{r, x}(h)}^j \cdot \xi_{r,ji} ( h ) - w_{u_{r, x} (h)}^i \cdot \xi_{r,ij} ( h) | \wedge 1 \, \mathrm{d} \mathcal{H}^{n-1}(h) \bigg) \mathrm{d} s \\
& \ \ \ \ \ \ \ \ \ \ \leq 2 \|\dot{\varphi}_{\xi, r }\|^2_{L^\infty} \text{Lip}(P_{\xi, s, r} ; {\rm B}_1(0))^{n-1} \lambda_r({\rm B}_1(0)) \,.
\end{split}
\]
The arbitrariness of the indices $i$ and $j$ leads to the existence of a radius $r_x >0$ and a constant $c(n) >0$ for which \eqref{e:poincare3000} holds true.
\noindent \underline{\em Step 2.} Let us compactly write
\[
\begin{split}
|D\hat{u}^\xi_{r, z}|:=&|D\hat{u}^\xi_{r, z}|({\rm B}_1(0)^\xi_{r, z} \setminus J^1_{\hat{u}^\xi_{r, z}} ) + \mathcal{H}^0({\rm B}_1(0)^\xi_{r, z} \cap J^1_{\hat{u}^\xi_{r, z}} ) \\
&O_{r, z}(u):= \int_{\mathbb{S}^{n-1}} |D\hat{u}^\xi_{r, z}| \, \mathrm{d} \mathcal{H}^{n-1}(\xi) \,.
\end{split}
\]
We claim that, up to redefining~$r_{x}>0$ and~$c(n)>0$, it holds
\begin{align}
\label{e:poincare4000}
\int_{{\rm B}_1(0)} O_{r, z}(u) \, \mathrm{d} z &\leq c(n) \lambda_r({\rm B}_1(0)) \qquad \text{for every $0 < r \leq r_{x}$}\,.
\end{align}
For $z \in {\rm B}_{1}(0)$, $i = 1, \ldots, n$, and $j \in \mathbb{N}$ we define the open sets
\[
U^\pm_{z,i,j} := \bigg\{w \in {\rm B}_{2^{-j+1}}(z) \setminus \overline{{\rm B}}_{2^{-j}}(z) : \, (w-z) \cdot e_i > \pm \frac{|w-z|}{2\sqrt{n}} \bigg\}\,.
\]
For every $j \in \mathbb{N}$ and $z \in {\rm B}_1(0)$ the map $P_{r, i, j} \colon U^\pm_{z,i,j} \to e_i^\bot$ defined as $P_{r, i, j}(w) := (\pi_{e_i} \circ \phi_{r, z})(w) $ is a curvilinear projection on both open sets $U^+_{z,i,j}$ and $U^-_{z,i,j}$ with respect to~$F_{r, x}$ for every sufficiently small $r >0$ (uniformly in~$j$ and~$z$). In order to verify this, we have to find a parametrization $\varphi_{r, i, j} \colon \{y + te_i : (y,t) \in [e_i^\bot \cap {\rm B}_{\rho}(0)] \times (-\tau,\tau) \} \to \mathbb{R}^n$ of~$P_{r, i, j}$ (for suitable $\tau,\rho>0$) in the sense of Definition~\ref{d:param-maps}, for every $0 < r < r_{x}$.
We focus on $U^+_{z,i,j}$. We set $\rho := \sqrt{1-(4\sqrt{n})^{-2}}$, $\tau := 2^{-j}$, and
$$\xi_y := (\pi_{e_i} \restr \{\eta \in \mathbb{S}^{n-1} : \, \eta \cdot e_i >(4\sqrt{n})^{-1} \})^{-1}(y)
\qquad \text{for $y \in e_i^\bot \cap {\rm B}_{\rho}(0)$}\,.
$$
For $(y,t) \in [e_i^\bot \cap {\rm B}_{\rho}(0)] \times (-\tau,\tau)$ we let
\[
\varphi_{r, i, j} (y + te_i)= \exp_{r, z} \big((t + \tau +2^{-j-2}) \xi_y \big)\,.
\]
To verify (1) of Definition \ref{d:param-maps} we notice that $U^{+}_{z,i,j} \subset \text{Im}(\varphi_{r, i, j})$ is equivalent to
\begin{equation}
\label{e:radialslice1}
\begin{split}
& \{w \in {\rm B}_1(0) \setminus \overline{{\rm B}}_{2^{-1}}(0) : \, w \cdot e_i > |w|/(2\sqrt{n}) \}
\\
&
\subset \left\{\frac{\exp_{r, z}(2^{-j}(t +1 + 2^{-2})\xi_y) - z}{2^{-j}} : \, (y,t) \in [e_i^\bot \cap {\rm B}_{\rho}(0)] \times (-1,1) \right\}\,.
\end{split}
\end{equation}
Since $(\exp_{r, z} ( 2^{-j} \, \cdot ) - z) / 2^{-j} \to \text{Id}$ in $C^\infty_{loc} (\mathbb{R}^{n}; \mathbb{R}^{n})$ as $r \searrow 0$ uniformly w.r.t.~$j \in \mathbb{N}$, by our choice of~$\rho$ and~$\tau$ we deduce that, up to redefining $r_{x}>0$, inclusion \eqref{e:radialslice1} holds true for every $0 < r < r_{x}$, every $z \in {\rm B}_{1}(0)$, and every $j \in \mathbb{N}$. In a similar way we get that $\varphi_{r, i, j}^{-1} \restr U^+_{z,i,j}$ is a bi-Lipschitz diffeomorphism for every $i$, $j$, and $0 < r < r_{x}$. This gives property~(2) of Definition~\ref{d:param-maps}. Property~(3) follows by construction. To conclude, we have to show that~$\varphi_{r, i, j}$ satisfies $\ddot{\varphi}_{r, i, j} = F_{r, x}(\varphi_{r, i, j}, \dot\varphi_{r, i, j})$. Again this follows by construction, since the level sets~$\phi_{r, z}^{-1}(\xi)$ are described exactly by the curve $t \mapsto \exp_{r, z}(t\xi)$.
Let us fix~$j \in \mathbb{N}$ and $i = 1 , \ldots, n$. Applying the definition of~$GBD_{F_{r,x}} ({\rm B}_{1}(0))$ (see Definition~\ref{d:GBD}) with curvilinear projection $P_{r, i, j} \colon U^+_{z,i,j} \to e_i^\bot$ we get for every $B \in \mathcal{B}(U_{z, i, j}^{+})$
\begin{align}
\label{e:1000}
\int_{e_i^\bot}\!\!\! \big( | D (\hat{u}_{r, x})^{e_i}_y|(B^{e_i}_y \setminus J^1_{(\hat{u}_{r, x})^{e_i}_y}) & + \mathcal{H}^0(B^{e_i}_y \cap J^1_{(\hat{u}_{r, x})^{e_i}_y}) \big) \, \mathrm{d} \mathcal{H}^{n-1}(y)
\\
&
\leq \| \dot{\varphi}_{r, i, j} \|_{L^\infty}^2 \text{Lip}(P_{r, i, j} ; U^+_{z,i,j})^{n-1} \lambda_{r}(B) \,. \nonumber
\end{align}
Since $F_{r, x} \to 0$ in $C^{\infty}_{loc} (\mathbb{R}^{n} \times \mathbb{R}^{n}; \mathbb{R}^{n})$, we have that, up to redefining $r_{x}>0$ and $c(n)>0$, it holds
\begin{align*}
\text{Lip}(P_{r, i, j} ;U^+_{z,i,j})^{n-1} \leq c(n) 2^{(n-1)(j+1)} \qquad \text{and} \qquad \|\dot{\varphi}_{r, i, j}\|_{L^\infty}^2 \leq 2\,.
\end{align*}
Hence, it follows from~\eqref{e:1000} that
\begin{align}
\label{e:radialslice2}
\int_{e_i^\bot} \big( |D(\hat{u}_{r, x})^{e_i}_y | (B^{e_i}_y \setminus J^1_{(\hat{u}_{r, x})^{e_i}_y}) & + \mathcal{H}^0(B^{e_i}_y \cap J^1_{(\hat{u}_{r, x})^{e_i}_y}) \big) \, \mathrm{d} \mathcal{H}^{n-1}(y)
\\
&
\leq 2 c(n) 2^{(n-1)(j+1)} \lambda_{r}(B) \leq 2 \int_{B} \frac{c(n)}{|w-z|^{n-1}} \, \mathrm{d} \lambda_{r} (w)\,, \nonumber
\end{align}
for every $B \in \mathcal{B}( U^+_{z,i,j})$ and every $j\in \mathbb{N}$. The same inequality can be obtained for the set $U^-_{z,i,j}$.
Let us set $S_{i}(z):= \{w \in \mathrm{B}_1(0) : \, |(w-z) \cdot e_i| > |w-z|/(2\sqrt{n}) \}$. Choosing $B = U^+_{z,i,j} \cap {\rm B}_1(0) $ and $B= U^-_{z,i,j} \cap {\rm B}_1(0) $ in~\eqref{e:radialslice2} and summing both sides with respect to $j \in \mathbb{N}$, we obtain that
\begin{equation}
\label{e:radialslice3}
\begin{split}
\int_{e_i^\bot} \big( |D(\hat{u}_{r, x})^{e_i}_y|(S_{i}(z)^{e_i}_y \setminus J^1_{(\hat{u}_{r, x})^{e_i}_y}) &+ \mathcal{H}^0(S_{i}(z)^{e_i}_y \cap J^1_{(\hat{u}_{r, x})^{e_i}_y}) \big) \, \mathrm{d} \mathcal{H}^{n-1}(y)\\
&\leq 2\int_{S_{i}(z)} \frac{c(n)}{|w - z|^{n-1}} \, \mathrm{d} \lambda_{r}(w)\,.
\end{split}
\end{equation}
Recalling the notation~\eqref{e:not1}--\eqref{e:not2},
we can perform the change of variable induced by the map $\pi_{e_i} \colon \{ \xi \in \mathbb{S}^{n-1} : |\xi \cdot e_i| > 1/(2\sqrt{n}) \} \to e_i^\bot$ on the integral on the left-hand side of~\eqref{e:radialslice3} and, up to redefining $c(n)>0$, we get
\begin{align}
\label{e:radialslice4}
\int_{\mathbb{S}^{n-1}} \big( |D\hat{u}^{\xi}_{ r, z} |(S_{i}(z)^{\xi}_{r, z} \setminus J^1_{\hat{u}^{\xi}_{r, z}}) &+ \mathcal{H}^0(S_{i}(z)^{\xi}_{r, z} \cap J^1_{\hat{u}^{\xi}_{r, z}}) \big) \, \mathrm{d} \mathcal{H}^{n-1}(\xi)\\
&\leq 2\int_{S_{i}(z)} \frac{c(n)}{|w-z|^{n-1}} \, \mathrm{d} \lambda_{r}(w)\,. \nonumber
\end{align}
By possibly redefining again $c(n)>0$ and by summing both sides of~\eqref{e:radialslice4} with respect to $i=1,\dotsc,n$, we obtain
\begin{align}
\label{e:radialslice}
\int_{\mathbb{S}^{n-1} } \big( | D\hat{u}^{\xi}_{r, z} | ( \mathrm{B}_{1}(0)^{\xi}_{r, z} \setminus J^1_{\hat{u}^{\xi}_{r, z}}) &+ \mathcal{H}^0( \mathrm{B}_{1}(0)^{\xi}_{r, z} \cap J^1_{\hat{u}^{\xi}_{r, z}}) \big) \, \mathrm{d} \mathcal{H}^{n-1}(\xi)\\
&\leq \int_{{\rm B}_{1}(0)} \frac{c(n)}{|w-z|^{n-1}} \, \mathrm{d} \lambda_{r}(w)\,. \nonumber
\end{align}
Then, inequality~\eqref{e:poincare4000} follows by integrating both sides of~\eqref{e:radialslice} with respect to $z \in {\rm B}_{1}(0)$ and by using Fubini's Theorem.
\noindent{\underline{\em Step 3: proof of~\eqref{e:poincare1000} and~\eqref{e:poincare2000}.}}
We claim that, up to redefining $c(n) >0$, for every $0 < r \leq r_x$ we find $z_r \in Q(n)$ satisfying for every $z \in S_{0,z_r}$
\begin{align}
\label{e:poincare5000}
& E^1_{r, z_r}(w_{u_{r, x}(z_r)}) \leq c(n) \lambda_r({\rm B}_1(0)) \,, \\
\label{e:poincare6000}
& O_{r, z}(u)\leq c(n) \lambda_r({\rm B}_1(0))\,, \\
\label{e:poincare7000}
&|u_{r, x}(h) \cdot \chi_{r, z}(h)- u_{r, x}(z) \cdot \phi_{r, z}(h)| \wedge 1 \leq c(n) |D\hat{u}^{\phi_{r, z}(h)}_{r, z}| \quad \text{for a.e. } h \in {\rm B}_{\rho(n)/2}(0)\,,
\end{align}
where in \eqref{e:poincare7000} we are also assuming that every $z \in S_{0,z_r}$ is a Lebesgue point of~$u_{r, x}$. Indeed,~\eqref{e:poincare5000}--\eqref{e:poincare6000} can be obtained from \eqref{e:poincare3000}--\eqref{e:poincare4000} via Chebyshev's inequality and by appealing to the lower bound on the measure of $Q(n)$ in \eqref{e:rip1}. Notice also that, in view of $\Theta^{*n-1}(\lambda,x)=0$, inequality \eqref{e:poincare5000} becomes for every sufficiently small $r>0$
\begin{equation}
\label{e:poincare5000.1}
E_{r, z_r}(w_{u_{r, x}(z_r)}) \leq c(n) \lambda_r({\rm B}_1(0)) \,.
\end{equation}
As for~\eqref{e:poincare7000}, we notice that if $z \in {\rm B}_1(0)$ satisfies $O_{r, z}(u) < \infty$, then for $\mathcal{H}^{n-1}$-a.e. $\xi \in \mathbb{S}^{n-1}$ the function $t \mapsto \hat{u}^\xi_{r, z}(t)$ belongs to $BV_{loc} ({\rm B}_1(0)^\xi_{r, z})$. In addition, if~$z$ is also a Lebesgue point of $u_{r, x}$, we can apply the Fundamental Theorem of Calculus to write
\[
\begin{split}
|u_{r, x}(h) \cdot \chi_{r, z}(h)- u_{r, x}(z) \cdot \phi_{r, z}(h)| \wedge 1 \leq |D\hat{u}^{\phi_{r, z}(h)}_{r, z}| \,,
\end{split}
\]
where we used the concavity of the truncation function when restricted to the positive real line. Finally, for every $0 < r \leq r_x$ we associate to each $h \in {\rm B}_{\rho(n)/2}(0)$ the $n$-tuple $(\chi_{r, z_r+e_i}(h))_{i=1}^n$ and notice that, since $F_{r, x} \to F_{0, x} = 0$ in $C^\infty_{loc}(\mathbb{R}^n \times \mathbb{R}^n; \mathbb{R}^{n})$ as $r \searrow 0$, the following convergence holds true for every $i=1,\dotsc,n$
\begin{equation}
\label{e:poincare8000}
\lim_{r \searrow0} \bigg\| \chi_{r, z_r+e_i} - \frac{(\cdot) -(z_r+e_i)}{|(\cdot) -(z_r+e_i)|} \bigg\|_{L^\infty({\rm B}_{\rho(n)/2}(0))} = 0\,.
\end{equation}
Condition \eqref{e:poincare8000} implies that, up to redefining $r_x>0$ and $c(n)>0$, we have
\begin{equation}
\label{e:poincare9000}
\inf_{h \in {\rm B}_{\rho(n)/2}(0)} |\chi_{r, z_r+e_1}(h) \wedge \dotsc \wedge \chi_{r, z_r+e_n } ( h ) | \geq \frac{1}{c(n)} \qquad \text{ for every } 0<r\leq r_x \,.
\end{equation}
By the Rigid interpolation property~\eqref{hp:F2}, up to redefining again $r_x>0$ and $c(n)>0$, we find a smooth map $a_r \colon {\rm B}_{\rho(n)/2}(0) \to \mathbb{R}^n$ satisfying for every $0 < r \leq r_x$
\begin{align}
\label{e:poincare10000}
&a_r(z)=u_{r, x}(z) \qquad \text{ for every $z \in S_{0,z_r}$,} \\
\label{e:poincare11000}
&\|e(a_r) - a_r \cdot F^q_{r, x} \|_{L^{\infty}(S_{n,z_r}; \mathbb{M}^{n}_{sym})} \leq c(n) E_{r, z_r}(w_{u_{r, x}(z_r)}) \,.
\end{align}
In particular, \eqref{e:poincare11000} in combination with \eqref{e:poincare5000} gives immediately \eqref{e:poincare1000}.
It remains to prove \eqref{e:poincare2000}. To this purpose, we define the vector field $X \colon {\rm B}_{\rho(n)/2}(0) \to \mathbb{R}^n$ as
\[
X(h):=((u_{r, x}(h) - a_r(h)) \cdot \chi_{r, z_r+e_1}(h), \dotsc, (u_{r, x}(h)-a_r(h)) \cdot \chi_{r, z_r+e_n}(h)).
\]
By virtue of \eqref{e:poincare7000} and \eqref{e:poincare10000} we can write for every $i=1,\dotsc,n$ and for a.e.~$h \in {\rm B}_{\rho(n)/2}(0)$
\[
\begin{split}
|X_i(h)| \wedge 1 &= |(u_{r, x } ( h ) - a_r( h ) ) \cdot \chi_{r, z_r+e_i} ( h ) - (u_{r, x} (z_r+e_i) - a_r ( z_r + e_i ) ) \cdot \phi_{r, z_r+e_i} ( h ) | \wedge 1 \\
& \leq | D\hat{u}^{\phi_{r, z_r+e_i}(h)}_{r, z_r+e_i}| + \|e(a_r) - a_r \cdot F^q_{r, x} \|_{L^{\infty}(S_{n,z_r}; \mathbb{M}^{n}_{sym})} \sup_{\xi,t } |\dot{\exp}_{r, z_r+e_i} (t\xi)|\,,
\end{split}
\]
where the supremum is considered for all $\xi \in \mathbb{S}^{n-1}$ and all positive~$t$ for which the map $\exp_{r, z_r+e_i}((\cdot)\xi)$ takes value in~${\rm B}_1(0)$. In view of the previous inequality and of~\eqref{e:poincare11000}, we can redefine $r_x$ and $c(n)>0$ in such a way that
\begin{equation}
\label{e:poincare12000}
|X_i(h)| \wedge 1 \leq |D\hat{u}^{\phi_{r, z_r+e_i}(h)}_{r, z_r+e_i} | + c(n) E_{r, z_r}(w_{u_{r, x}(z_r)}) \,,
\end{equation}
for a.e.~$h \in {\rm B}_{\rho(n)/2}(0)$, for every $0< r \leq r_x$, and for every $i=1,\dotsc,n$.
Notice that the coarea formula together with estimate~\eqref{e:retr2} implies
\[
\begin{split}
\int_{{\rm B}_{\rho(n)/2}(0)} & |D\hat{u}^{\phi_{r, z_r+e_i}(h)}_{r, z_r+e_i} | \, \mathrm{d} h
\leq c'(n) \int_{{\rm B}_{\rho(n)/2}(0)} |D\hat{u}^{\phi_{r, z_r+e_i}(h)}_{r, z_r+e_i}| J\phi_{r, z_r+e_i}(h) \, \mathrm{d} h \\
&=c''(n)\int_{\mathbb{S}^{n-1}} |D\hat{u}^{\xi}_{r, z_r+e_i}|\big( \mathcal{H}^1(\phi^{-1}_{r, z_r+e_i} (\xi) \cap {\rm B}_{\rho(n)/2}(0))\big) \, \mathrm{d} \mathcal{H}^{n-1}(\xi) \\
&\leq c'''(n) \int_{\mathbb{S}^{n-1}} |D\hat{u}^{\xi}_{r, z_r+e_i}| \, \mathrm{d} \mathcal{H}^{n-1}(\xi)
\end{split}
\]
for suitable constants $c'(n),c''(n),c'''(n)$ and for every sufficiently small $r>0$. Therefore, by~\eqref{e:poincare5000}--\eqref{e:poincare6000} we can possibly redefine $r_x>0$ and $c(n)>0$ so that, by integrating both sides of \eqref{e:poincare12000}, we obtain
\begin{equation}
\label{e:poincare13000}
\int_{{\rm B}_{\rho(n)/2}(0)} |X_i(h)| \wedge 1 \, \mathrm{d} h \leq c(n) \lambda_r({\rm B}_1(0)) \qquad \text{ for every } 0 <r \leq r_x\,.
\end{equation}
Finally, we notice that condition \eqref{e:poincare9000} implies that, up to redefining $c(n)>0$, we have also
\[
|u_{r, x}(h) - a_r(h)| \leq c(n) |X(h)| \qquad \text{ for every $h \in {\rm B}_{\rho(n)/2}(0)$ and $0 <r \leq r_x$},
\]
which together with \eqref{e:poincare13000} immediately gives the validity of \eqref{e:poincare2000}.
\end{proof}
We conclude this subsection with the following proposition.
\begin{proposition}
\label{p:lll}
Let $F \in C^{\infty} ( \mathbb{R}^n \times \mathbb{R}^n ; \mathbb{R}^n)$ satisfy~\eqref{hp:F}--\eqref{hp:F2}, let $u \in GBD_F(\Omega)$, and let $x \in \Omega$ be such that $\Theta^{*n-1}(\lambda,x)=0$. Then
\begin{equation*}
\lim_{r \searrow 0} \, \inf_{a \in \Xi ({\rm B}_{\rho(n)/2}(0))} \int_{\mathrm{B}_{\rho(n)/2}(0)} |u_{r, x} - a | \wedge 1 \, \mathrm{d} z =0\,.
\end{equation*}
\end{proposition}
As a consequence of Proposition \ref{p:lll}, the set $[{\rm Osc}]_{u} (\rho(n))$ is $\sigma$-finite w.r.t.~$\mathcal{H}^{n-1}$.
\subsection{The case of Riemannian manifolds}
\label{sub:Riemann}
In this subsection we verify that condition \eqref{hp:F2} is satisfied when $F$ is related to the geometry of a Riemannian manifold. For this purpose, consider an $n$-dimensional manifold $({\rm M},g)$ which for simplicity we suppose to be embedded in $\mathbb{R}^m$ (this is not restrictive in view of the celebrated Nash embedding theorem \cite{Nash}). Let $(U,\psi)$ denote a chart with $\psi \colon U \to \Omega \subset \mathbb{R}^n$. Given $q \in \Omega$ and $0 < \delta < \text{dist}(q, \partial \Omega )$, we denote by $\Psi$ a $C^\infty$-regular extension of $\psi \restr \psi^{-1}(\mathrm{B}_\delta(q))$ to the whole of $\mathbb{R}^m$ such that $\mathrm{B}_\delta(q) \subset \Psi(\mathbb{R}^m) \subset \Omega$. Letting $\varphi \colon \Omega \to U$ denote the inverse of $\psi$, we define $g_i \colon \Omega \to \mathbb{R}^m$ as $g_i(h):= \partial_i \varphi(h)$ for every $i=1,\dotsc,n$. Since
\begin{equation*}
\text{rank}\{g_1(h),\dotsc,g_n(h)\} = n \qquad \text{for every $h\in \Omega$},
\end{equation*}
we can define the $i$-th element of the dual basis $g^i \colon \Omega \to \mathbb{R}^{m*}$ in such a way that
\begin{equation}
\label{e:appendix2000}
\langle g^i(h), g_j(h)\rangle = \delta_{ij} \qquad \text{for every }i,j=1,\dotsc,n \ \text{ and } \ h \in \Omega\,,
\end{equation}
where $\langle \cdot, \cdot \rangle$ denotes the duality pairing between $\mathbb{R}^{m*}$ and $\mathbb{R}^m$.
Now we want to write the covariant derivative of $a \in \mathscr{D}^1({\rm M})$, namely, a differential one-form on ${\rm M}$, locally in terms of the dual basis $\{g^1,\dotsc,g^n\}$. For this purpose we denote by $a_i \colon \Omega \to \mathbb{R}$ the \emph{curvilinear coordinates} of $a$, namely, $a(h)= \sum_i a_i(h) g^i(h)$ for every $h \in \Omega$. For $i=1,\dotsc,n$, the $i$-th \emph{covariant derivative} $\nabla_i a(h) $ of~$a$ at $h \in \Omega$ is the element of $\mathbb{R}^{m*}$ defined as
\begin{align}
\label{e:A6}
&\langle \nabla_i a(h), g_j(h) \rangle := \partial_i a_j(h) -\sum_k a_k(h) \Gamma^k_{ij}(h) \qquad j=1,\dotsc,n \,, \\
\label{e:A6.1}
& \ \ \ \ \ \ \ \ \ \langle \nabla_i a(h), v \rangle :=0 \qquad \text{if } v \notin \text{span} \{g_1(h),\dotsc,g_n(h)\},
\end{align}
where $\Gamma^k_{ij}$ denote the Christoffel symbols with respect to the basis $\{g_1,\dotsc,g_n\}$; we recall that they can be computed in coordinates as
\begin{equation*}
\Gamma^k_{ij}(h):=-\langle \partial_i g^k(h), g_j(h) \rangle \qquad \text{for } i,j,k=1,\dotsc,n.
\end{equation*}
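As a quick consistency check of the above coordinate formula (not needed in the sequel), one may take $n=m=2$ and polar coordinates, i.e.~$\varphi(h_1,h_2)=(h_1 \cos h_2, h_1 \sin h_2)$ for $h_1>0$. Then $g_1 = (\cos h_2, \sin h_2)$, $g_2 = h_1(-\sin h_2, \cos h_2)$, $g^1 = (\cos h_2, \sin h_2)$, $g^2 = h_1^{-1}(-\sin h_2, \cos h_2)$, and the formula above returns the well-known values
\[
\Gamma^1_{22}(h) = -h_1\,, \qquad \Gamma^2_{12}(h)=\Gamma^2_{21}(h)=\frac{1}{h_1}\,,
\]
all the remaining symbols being zero.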
The gradient of $a$ can be locally represented as an operator $\nabla \colon C^{\infty}(\Omega;\mathbb{R}^n) \to C^\infty(\Omega;\mathbb{M}^n)$ called \emph{curvilinear gradient} and defined as
\[
\begin{split}
[\nabla(a)]_{ij}(h) &:= \langle \nabla_i a(h), g_j(h) \rangle \\ &=\partial_i a_j(h) - \sum_{\ell=1}^n a_\ell(h) \Gamma^\ell_{ij}(h), \qquad i,j=1,\dotsc,n.
\end{split}
\]
Analogously, the symmetric gradient of $a$ can be locally represented as an operator $e \colon C^{\infty}(\Omega;\mathbb{R}^n) \to C^\infty(\Omega;\mathbb{M}^n_{sym})$ called \emph{curvilinear symmetric gradient} and defined as
\[
\begin{split}
2[e(a)]_{ij}(h) &:= \langle \nabla_i a(h), g_j(h) \rangle + \langle \nabla_j a(h), g_i(h) \rangle\\ &=\partial_i a_j(h) + \partial_j a_i(h) - 2\sum_{\ell=1}^n a_\ell(h) \Gamma^\ell_{ij}(h), \qquad i,j=1,\dotsc,n.
\end{split}
\]
We continue by fixing~$(\tilde{e}_i)_{i=1}^n$ and~$(e_i)_{i=1}^m$ two orthonormal bases of~$\mathbb{R}^n$ and of~$\mathbb{R}^m$, respectively. We further denote by $(\tilde{e}^i)_{i=1}^n$ and $(e^i)_{i=1}^m$ their dual bases. For convenience of notation we also set~$\tilde{e}_0:=0$.
By definition of~$\Psi$ and~$\varphi$, for $h \in \mathrm{B}_\delta(q)$ we have the identity
\[
\partial_i (\Psi \circ \varphi)(h) \cdot \tilde{e}_j = \delta_{ij} \qquad \text{for }i,j=1,\dotsc,n\,.
\]
Thus, for every $i,j=1,\dotsc,n$ and every $h \in \mathrm{B}_\delta(q)$ we get that
\begin{equation}
\label{e:appendix1000}
\delta_{ij}=\sum_{\ell=1}^m (\partial_\ell \Psi(\varphi(h)) \cdot \tilde{e}_j) (\partial_i \varphi(h) \cdot e_\ell) =\sum_{\ell=1}^m ( \partial_\ell \Psi(\varphi(h)) \cdot \tilde{e}_{j}) (g_i (h ) \cdot e_\ell)\,.
\end{equation}
Combining \eqref{e:appendix2000} and~\eqref{e:appendix1000} we obtain for every $h \in \mathrm{B}_\delta(q)$
\begin{equation}
\label{e:appendix3000}
\partial_\ell\Psi(\varphi(h)) \cdot \tilde{e}_j = \langle g^j(h),e_\ell \rangle \qquad \text{for }\ell=1,\dotsc,m \text{ and } j=1,\dotsc,n\,.
\end{equation}
We notice that, since ${\rm M}$ is isometrically embedded in $\mathbb{R}^m$, there is a natural homomorphism $\mathrm{i} \colon \mathscr{D}^1(U) \to C^\infty(\Omega;\mathbb{R}^{m*})$ acting as
\[
\mathrm{i}(a)(x) := \sum_i a_i(h)g^i(h), \qquad \text{ for } x \in U \text{ and }h = \psi(x).
\]
Therefore we can consider the coefficients $\tilde{a}_k:= \langle a,e_k \rangle$, so that $a(h)= \sum_{k=1}^m \tilde{a}_k(h) \, e^k$ and the (Euclidean) gradient $\tilde{\nabla}(a)(h) \in \mathbb{M}^{m \times n}$ of~$a$ at $h \in \Omega$ is defined as
\begin{equation}
\label{e:appendix1.1}
[\tilde{\nabla}(a)]_{ij}(h) := \langle \partial_i \tilde{a}(h), e^j \rangle = \partial_i \tilde{a}_j(h) \qquad \text{for $i=1,\dotsc,n$ and $j=1,\dotsc,m$.}
\end{equation}
We want to find a precise relation between the gradient and the curvilinear gradient. In order to do so, we first write the following identity for every smooth function $w \colon \mathbb{R}^n \to \mathbb{R}$ and every $j=1,\dotsc,m$
\begin{equation}
\label{e:appendix4000}
\partial_j (w \circ\Psi)(z) = \sum_{\ell=1}^n \partial_\ell w(\Psi(z)) \partial_j \Psi(z) \cdot \tilde{e}_\ell\,.
\end{equation}
Now let $v \colon \mathbb{R}^m \to \mathbb{R}^{m*}$ be defined as $v(z) := a(\Psi(z))$. By~\eqref{e:appendix3000}, by~\eqref{e:appendix4000}, and by definition of~$\Gamma^{k}_{ij}$, for every $z \in \psi^{-1} ({\rm B}_{\delta} (q))$ and $i,j=1,\dotsc,m$ we compute
\begin{align}
\label{e:appendix10099}
\langle \partial_j v(z),e_i\rangle & = \sum_{k=1}^n\partial_j (a_k \circ \Psi)(z)\langle g^k(\Psi(z)) ,e_i\rangle +\sum_{k=1}^n a_k(\Psi(z))\langle \partial_j(g^k \circ \Psi)(z),e_i \rangle\\
&= \sum_{k,\ell=1}^n\partial_\ell a_k(\Psi(z)) (\partial_j \Psi(z) \cdot \tilde{e}_\ell) \langle g^k(\Psi(z)) ,e_i\rangle \nonumber
\\
&
\qquad +\sum_{k,\ell=1}^n a_k(\Psi(z))\langle \partial_\ell g^k(\Psi(z)),e_i \rangle (\partial_j \Psi(z) \cdot \tilde{e}_\ell) \nonumber \\
&= \sum_{k,\ell=1}^n\partial_\ell a_k(\Psi(z)) \langle g^\ell(\Psi(z)), e_j \rangle \langle g^k(\Psi(z)) ,e_i\rangle \nonumber
\\
&
\qquad +\sum_{k,\ell=1}^n a_k(\Psi(z))\langle \partial_\ell g^k(\Psi(z)),e_i \rangle \langle g^\ell(\Psi(z)), e_j \rangle \nonumber \\
&= \sum_{k,\ell=1}^n\partial_\ell a_k(\Psi(z)) \langle g^\ell(\Psi(z)), e_j \rangle \langle g^k(\Psi(z)) ,e_i\rangle \nonumber
\\
&
\qquad -\sum_{k,\ell,p=1}^n a_k(\Psi(z)) \Gamma^k_{\ell p}(\Psi(z)) \langle g^p(\Psi(z)),e_i\rangle \langle g^\ell(\Psi(z)),e_j \rangle \nonumber \\
&= \sum_{k,\ell=1}^n\big[\partial_\ell a_k(\Psi(z)) - \sum_{p=1}^n a_p(\Psi(z))\Gamma^p_{\ell k}(\Psi(z))\big]\langle g^\ell(\Psi(z)), e_j \rangle \langle g^k(\Psi(z)) ,e_i\rangle. \nonumber
\end{align}
Hence, for every $h \in \mathrm{B}_\delta(q)$ and $i,j=1,\dotsc,m$ we can compactly write
\begin{equation}
\label{e:appendix5000}
\langle \partial_j v(\varphi(h)),e_i\rangle= \sum_{k,\ell=1}^n\langle \nabla_{\ell} a(h),g_k(h) \rangle\langle g^\ell(h), e_j \rangle \langle g^k(h) ,e_i\rangle.
\end{equation}
Therefore, letting $G \colon \mathrm{B}_\delta(q) \to \mathbb{M}^{n \times m}$ be defined as $G_{ij}(h):= \langle g^i(h), e_j \rangle$ for $i=1,\dots,n$ and $j=1,\dotsc,m$, in view of~\eqref{e:appendix1.1} and of~\eqref{e:appendix5000}, the gradient~$\tilde{\nabla} (v)$ and the curvilinear gradient $\nabla(a)$ are related by
\begin{equation}
\label{e:appendix3.1}
\tilde{\nabla}(v)(\varphi(h)) =G^\top(h) \, \nabla(a)(h) \, G(h) \qquad \text{for every }h \in \mathrm{B}_\delta(q)\,.
\end{equation}
Since the (Euclidean) symmetric gradient $\tilde{e}(v)$ of~$v$ and the curvilinear symmetric gradient $e(a)$ of $a$ can be compactly written as
\[
\tilde{e}(v) := \frac{\tilde{\nabla}(v)+\tilde{\nabla}(v)^{\top}}{2} \qquad \text{and} \qquad e(a) := \frac{\nabla(a)+\nabla(a)^{\top}}{2},
\]
we may similarly relate $\tilde{e}(v)$ with the curvilinear symmetric gradient~$e(a)$ of~$a$ by
\begin{equation}
\label{e:appendix100999}
\tilde{e}(v)(\varphi(h)) = G^{\top}(h) \, e(a)(h) \, G(h) \qquad \text{for every }h \in \mathrm{B}_\delta(q)\,.
\end{equation}
Furthermore, it follows from~\eqref{e:appendix1000} that~$G(h)$ admits a right-inverse~$G^{-1}(h) \in \mathbb{M}^{n \times m}$ such that $G^{-1}_{ij}(h)=g_{j}(h) \cdot e_i$ for $i=1,\dotsc,n$ and $j=1,\dotsc,m$. Hence,~\eqref{e:appendix100999} can be inverted as
\begin{equation}
\label{e:appendix100.1}
G^{-\top}(h) \, \tilde{e}(v)(\varphi(h)) \, G^{-1}(h) = e(a)(h) \qquad \text{for every }h \in \mathrm{B}_\delta(q)\,.
\end{equation}
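As a sanity check for relations~\eqref{e:appendix3.1}--\eqref{e:appendix100.1} (again, only an aside), consider the flat case ${\rm M} = \mathbb{R}^n$ with $m=n$ and $\psi = \mathrm{id}$. Then $g_i = e_i$, $G = G^{-1} = \mathrm{Id}$, and $\Gamma^k_{ij}=0$, so that the curvilinear operators coincide with the Euclidean ones,
\[
\nabla(a) = \tilde{\nabla}(a) \qquad \text{and} \qquad e(a) = \tilde{e}(a)\,,
\]
and the above relations reduce to the identity $\tilde{e}(v) = e(a)$.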
Now we let the field $F$ be defined as
\begin{equation}
\label{e:f=christoffel}
F(h,v):= -\bigg(\sum_{i,j=1}^n\Gamma^1_{ij}(h)v_iv_j,\dotsc,\sum_{i,j=1}^n\Gamma^n_{ij}(h)v_iv_j \bigg), \qquad (h,v) \in \Omega \times \mathbb{R}^n.
\end{equation}
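With this choice of~$F$, the ODE $\ddot{\gamma} = F(\gamma,\dot{\gamma})$ appearing in the definition of curvilinear projections is nothing but the geodesic equation of $({\rm M},g)$ written in the chart: componentwise,
\[
\ddot{\gamma}^k(t) + \sum_{i,j=1}^n \Gamma^k_{ij}(\gamma(t))\, \dot{\gamma}^i(t)\, \dot{\gamma}^j(t) = 0 \qquad \text{for $k=1,\dotsc,n$}\,.
\]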
Choosing the function $g \colon \Omega \times \mathbb{R}^n \to \mathbb{R}^n$ to be the projection onto the second component, a straightforward computation shows that \eqref{e:operator1} is satisfied with $\mathcal{E}=e$ and with function $c_{\mathcal{E}}(\cdot) = |\cdot|^2$. With the above geometric preliminaries in mind, we want to verify that \eqref{hp:F2} is satisfied whenever $F$ has the form in \eqref{e:f=christoffel}. In particular, thanks to the weak Poincar\'e inequality, condition (2) of Corollary \ref{c:int2} will be satisfied. For this purpose we fix $x \in \mathrm{B}_\delta(q)$ and set $\varphi_{r,x}(h):= \varphi(x+rh)$ for $h \in {\rm B}_{1}(0)$ and for every sufficiently small $r>0$. Then, we have that
\begin{equation}\label{e:appendix7}
g_{i,r,x}(h) := r g_i(x+rh) = \partial_i \varphi_{r,x}(h) \qquad \text{for $i=1,\dotsc,n$ and $h \in \mathrm{B}_1(0)$.}
\end{equation}
Similarly to~\eqref{e:A6} we set
\begin{equation}
\label{e:appendix6}
\Gamma^k_{ij,r,x}(h):= -\langle \partial_i g_{r,x}^k(h), g_{j,r,x}(h)\rangle \qquad \text{for $i,j,k=1,\dotsc,n$ and $h \in \mathrm{B}_1(0)$,}
\end{equation}
so that the function $F_{r,x}(h,v)= rF(x+rh,v)$ satisfies for $h \in \mathrm{B}_1(0)$
\begin{equation*}
F_{r,x}(h,v)= -\Big( \sum_{i,j = 1}^{n} \Gamma^1_{ij,r,x}(h)v_iv_j, \dotsc, \sum_{i,j= 1}^{n} \Gamma^n_{ij,r,x}(h)v_iv_j\Big)\,.
\end{equation*}
Notice that the computation presented in~\eqref{e:appendix1000}--\eqref{e:appendix100.1} can be repeated with~$\varphi$ replaced by~$\varphi_{r, x}$. Thus, there exists $G_{r,x} \colon {\rm B}_1(0) \to \mathbb{M}^{n \times m}$ satisfying relation \eqref{e:appendix3.1} with $\{g_1,\dotsc,g_n\}$ replaced by $\{g_{1,r,x},\dotsc,g_{n,r,x}\}$. Since by definition we have that
\begin{align}
\label{e:appendix23.1}
& r(G_{r,x})_{ij} \to \langle g^i(x), e_j \rangle \qquad r^{-1}(G^{-1}_{r,x})_{ji}\to \langle g_i(x), e_j \rangle \ \ \text{ in } C^\infty({\rm B}_1(0))\,, \\
\label{e:appendix23}
& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \Gamma^k_{ij,r,x} \to 0 \qquad \text{in } C^\infty({\rm B}_1(0))\,,
\end{align}
as $r\searrow 0$, we find a radius $r_x>0$ such that for $0< r \leq r_x$
\begin{align}
\label{e:appendix16}
& \|\nabla G^{-\top}_{r,x}\|_{L^{\infty} (B_{1}(0))} \| G^{\top}_{r,x} \|_{L^{\infty} (B_{1}(0))}
\\
&
\qquad \qquad + \| G_{r,x} \|_{L^{\infty} (B_{1}(0))} \|\nabla G^{-1}_{r,x}\|_{L^{\infty} (B_{1}(0))} \leq \frac{1}{64(n^2+n)}\,. \nonumber
\end{align}
Let $z \in Q(n)$ and $r\in (0,r_x]$. To each $w \in \mathbb{R}^{(n+1)\times n} $ we associate $\underline{w}_{r, z} \in \mathbb{R}^{(n+1)\times m}$ by
\begin{equation}
\label{e:appendix15}
\underline{w}^i_{r, z} := \sum_{k=1}^n w^i_k \, g_{r,x}^k(z+\tilde{e}_i) \qquad \text{for $i=0,\dotsc,n$.}
\end{equation}
Let $v_r \colon \mathbb{R}^m \to \mathbb{R}^{m*}$ be any affine interpolation of the values $\{\underline{w}^0_{r, z},\dotsc,\underline{w}^n_{r, z} \}$, namely, a function fulfilling
\begin{equation}
\label{e:appendix14}
\tilde{\nabla}v_r(\zeta) \text{ is constant in $\zeta \in \mathbb{R}^m$} \ \ \text{and} \ \ v_r(\varphi_{r, x } ( z + \tilde{e}_i ) ) = \underline{w}^i_{r, z} \qquad \text{for $i=0,\dotsc,n$.}
\end{equation}
Let us define $a_r \colon \mathrm{B}_1(0) \to \mathbb{R}^{m*}$ as $a_r(h):= v_r(\varphi_{r, x}(h))$. We infer from~\eqref{e:appendix15}--\eqref{e:appendix14} that the curvilinear coordinates~$\{(a_{r})_{j}\}_{j =1}^{n}$ of~$a_r$ satisfy
\begin{equation}
\label{e:appendix14.1}
(a_r)_j( z + \tilde{e}_i) = w^i_j \qquad \text{for $i=0,\dotsc,n$ and $j=1,\dotsc,n$.}
\end{equation}
In view of \eqref{e:appendix14.1}, to show the validity of \eqref{hp:F2} we are left to prove that $a_r$ fulfills \eqref{e:rip5}.
\begin{theorem}
\label{t:RI-manifold}
The function $a_{r}$ satisfies~\eqref{e:rip5} of the Rigid interpolation property~\eqref{hp:F2}.
\end{theorem}
\begin{proof}
We notice that the function~$a_{r}$ appearing in~\eqref{e:rip4}--\eqref{e:rip5} is the vector $((a_{r})_{1}, \ldots, (a_{r})_{n})$ of curvilinear coordinates of~$a_{r}$. However, below we will keep working with the map $a_{r} \colon {\rm B}_{1}(0) \to \mathbb{R}^{m*}$ in order to use relations~\eqref{e:appendix3.1}--\eqref{e:appendix100.1}. In particular, for fixed~$x$ let us start with~$r_{x}>0$ such that~\eqref{e:appendix16} holds.
We claim that, up to redefining the value of $r_x>0$, we have for every $(z,w,r) \in Q(n) \times\mathbb{R}^{(n+1)\times n} \times (0,r_x]$
\begin{equation}
\label{e:appendix21}
|e(a_r)(h)-e(a_r)(h')| \leq \frac{\|e(a_r)\|_{L^\infty(\mathrm{B}_1(0))}}{ 16 (n+1)n} \qquad \text{for } h,h' \in \mathrm{B}_1(0)\,.
\end{equation}
Let us fix $h, h' \in {\rm B}_{1}(0)$ and let $\sigma_{h, h'} (s) := sh+(1-s)h'$ for $s \in [0,1]$. By~\eqref{e:appendix100999}--\eqref{e:appendix100.1} and by the fact that~$v_r$ is affine, we estimate
\begin{align*}
| e(a_r)(h) & - e(a_r)(h') | \leq \int_0^1 \bigg|\frac{\mathrm{d}}{\mathrm{d} s} e(a_r)(\sigma_{h,h'} (s) )\bigg| \, \mathrm{d} s
\\
&
= \int_0^1 \bigg|\frac{\mathrm{d} }{\mathrm{d} s} G_{r,x}^{-\top}(\sigma_{h,h'} (s)) \tilde{e}(v_r)(\varphi_{r, x} ( \sigma_{h,h'} (s))) G_{r,x}^{-1} ( \sigma_{h,h'} (s) )\bigg| \, \mathrm{d} s \\
&\leq \int_0^1 | \nabla G_{r,x}^{-\top} ( \sigma_{h,h'} (s) ) \cdot (h-h') \tilde{e}(v_r) ( \varphi_{r, x} (\sigma_{h,h'} (s)) ) G_{r,x}^{-1} ( \sigma_{h,h'} (s) ) | \, \mathrm{d} s \\
&\qquad + \int_0^1 | G_{r,x}^{-\top} ( \sigma_{h,h'} (s) ) \tilde{e}(v_r)(\varphi_{r, x} (\sigma_{h,h'} (s)) ) \nabla G_{r,x}^{-1}(\sigma_{h,h'} (s) )\cdot (h-h') | \, \mathrm{d} s \\
&= \int_0^1 | \nabla G_{r,x}^{-\top}(\sigma_{h,h'} (s)) \cdot (h-h') G_{r,x}^{\top} ( \sigma_{h,h'} (s) ) e(a_r)(\sigma_{h,h'} (s)) | \, \mathrm{d} s \\
&\qquad + \int_0^1 | e ( a_r ) ( \sigma_{h,h'} (s) )G_{r,x}(\sigma_{h,h'} (s)) \nabla G_{r,x}^{-1} ( \sigma_{h,h'} (s) )\cdot (h-h') | \, \mathrm{d} s \\
&\leq 4 \| e(a_r) \|_{L^\infty(\mathrm{B}_1(0))} \big( \|\nabla G_{r,x}^{-\top}\|_{L^\infty(\mathrm{B}_1(0))} \|G_{r,x}^{\top} \|_{L^\infty(\mathrm{B}_1(0))}
\\
&
\qquad + \|G_{r,x}\|_{L^\infty(\mathrm{B}_1(0))}\|\nabla G_{r,x}^{-1}\|_{L^\infty(\mathrm{B}_1(0))} \big)\,.
\end{align*}
The above inequality, together with~\eqref{e:appendix16}, implies~\eqref{e:appendix21}.
We can thus infer from~\eqref{e:appendix21} the validity of the following estimate
\begin{equation}
\label{e:appendix22}
\|e(a_{r})\|_{L^\infty(\mathrm{B}_1(0))} \leq |e(a_{r} (h))| + \frac{\|e(a_{r})\|_{L^\infty(\mathrm{B}_1(0))}}{ 16 (n+1)n} \qquad \text{for every }h \in \mathrm{B}_1(0)\,.
\end{equation}
We further claim that, up to redefining $r_x>0$, the following inequality holds true for $(z,w,r) \in Q(n) \times\mathbb{R}^{(n+1)\times n} \times (0,r_x]$
\begin{equation}
\label{e:appendix19}
|e(a_{r} ( z ) ) | \leq c(n) \Big( E_{r,z}(w) + \frac{ 5 }{8} \|e(a_{r})\|_{L^\infty(\mathrm{B}_1(0))} \Big)\,,
\end{equation}
for a dimensional constant~$c(n)>0$. Indeed, given $i,j=0,1,\dotsc, n$, let $\ell_{z,r,ij} \colon [0,t_{ij}] \to \mathbb{R}^n$ and $\xi_{r,ij} \colon \mathrm{B}_1(0) \to \mathbb{R}^n$ be as defined in Section~\ref{s:curvilinear} (see \eqref{e:poincare15000}). By~\eqref{e:appendix21} we infer that for every $0 <r\leq r_x$
\begin{align}
\label{e:appendix-120}
w^j \cdot \xi_{r,ji}(z) - w^i \cdot \xi_{r,ij}(z) & = \int_0^{t_{ij}} \frac{\mathrm{d}}{\mathrm{d} t} \big( a_{r} (\ell_{z,r,ij}(t)) \cdot \dot{\ell}_{z,r,ij}(t) \big) \, \mathrm{d} t
\\
&
=\int_0^{t_{ij}} e(a_{r} (\ell_{z,r,ij}(t))) \dot{\ell}_{z,r,ij}(t) \cdot \dot{\ell}_{z,r,ij}(t) \, \mathrm{d} t \nonumber
\\
&
\geq t_{ij} e(a_{r} (z+e_i))\xi_{r,ij}(z) \cdot \xi_{r,ij}(z) \nonumber
\\
& \qquad - \|e(a_{r})\|_{L^\infty(\mathrm{B}_1(0))}\bigg(\int_0^{t_{ij}} |\xi_{r,ij}(z) -\dot{\ell}_{z,r,ij}(t)| \, \mathrm{d} t \nonumber
\\
&
\qquad + \int_{0}^{t_{ij}} | \dot{\ell}_{z, r, ij} (t) | \, | \xi_{r,ij}(z) -\dot{\ell}_{z,r,ij}(t)| \, \mathrm{d} t +\frac{t_{ij}}{16(n+1)n} \bigg)\,. \nonumber
\end{align}
Setting $\tilde{e}_{ij}:= (\tilde{e}_j-\tilde{e}_i)/|\tilde{e}_j-\tilde{e}_i|$ we have, by~\eqref{e:appendix23}, that
\[
\begin{split}
&t_{0j} \to 1 \ \ \text{ for } 1\leq j \leq n\\
&t_{ij} \to \sqrt{2} \ \ \text{ for } 1\leq i< j \leq n\\
&\dot{\ell}_{z,r,ij}(t) \to \tilde{e}_{ij} \ \ \text{ for } 0\leq i < j \leq n
\end{split}
\]
as $r \searrow 0$, uniformly w.r.t.~$t \in [0, t_{ij}]$ and $z \in Q(n)$. Hence, up to further redefining $r_x>0$, we infer from~\eqref{e:appendix-120} that
\begin{equation}
\label{e:appendix24}
|e(a_{r} (z+e_i)) \xi_{r,ij}(z) \cdot \xi_{r,ij}(z)| \leq |w^j \cdot \xi_{r,ji}(z)-w^i \cdot \xi_{r,ij}(z)| + \frac{\|e(a_{r} ) \|_{L^\infty(\mathrm{B}_1(0))}}{2(n+1)n},
\end{equation}
for every $(z,w,r) \in Q(n) \times\mathbb{R}^{(n+1)\times n} \times (0,r_x]$ and every $0 \leq i < j \leq n$. Thanks to~\eqref{e:appendix21} and~\eqref{e:appendix24}, we may further estimate for $(z,w,r) \in Q(n) \times\mathbb{R}^{(n+1)\times n} \times (0,r_x]$,
\begin{align*}
&|e(a_{r}(z))\tilde{e}_{ij} \cdot \tilde{e}_{ij}| \leq |e(a_{r} ( z + e_i) ) \xi_{r,ij} (z) \cdot \xi_{r,ij}(z)| \\
\vphantom{\frac12}&\quad + | [ e ( a_{r} ( z + e_i) ) - e (a_{r} ( z ) ) ] \xi_{r,ij} (z) \cdot \xi_{r,ij}(z) |
+ | e ( a_{r} ( z ) ) [ \xi_{r,ij}(z) \cdot \xi_{r,ij}(z) - \tilde{e}_{ij} \cdot \tilde{e}_{ij} ]|\\
& \leq |w^j \cdot \xi_{r,ji}(z)-w^i \cdot \xi_{r,ij}(z)| + \frac{\|e(a_{r})\|_{L^\infty(\mathrm{B}_1(0))}}{2(n+1)n} + \frac{\|e(a_{r})\|_{L^\infty(\mathrm{B}_1(0))}}{16(n+1)n}
\\
\vphantom{\frac12}&\quad +2\|e(a_{r})\|_{L^\infty(\mathrm{B}_1(0))}|\xi_{r,ij}(z)-\tilde{e}_{ij}|,
\end{align*}
which, together with the convergences in \eqref{e:appendix23}, gives, up to possibly redefining $r_x>0$ once again, that
\begin{equation}
\label{e:appendix25}
\begin{split}
|e(a_{r} (z))\tilde{e}_{ij} \cdot \tilde{e}_{ij}| &\leq |w^j \cdot \xi_{r,ji}(z)-w^i \cdot \xi_{r,ij}(z)|
+ \frac{5\|e(a_{r})\|_{L^\infty(\mathrm{B}_1(0))}}{8(n+1)n}\,,
\end{split}
\end{equation}
whenever $(z,w,r) \in Q(n) \times\mathbb{R}^{(n+1)\times n} \times (0,r_x]$ and $0 \leq i < j \leq n$. Noticing that
\[
|e(a_{r}(z))| \leq c(n) \sum_{0 \leq i < j \leq n} |e(a_{r}(z))\tilde{e}_{ij} \cdot \tilde{e}_{ij}|\,,
\]
for some dimensional constant $c(n)>0$, combining \eqref{e:appendix24} with \eqref{e:appendix25} we infer the validity of \eqref{e:appendix19}. Using \eqref{e:appendix19} in \eqref{e:appendix22} we obtain for $(z,w,r) \in Q(n) \times\mathbb{R}^{(n+1)\times n} \times (0,r_x]$
\begin{equation*}
\|e(a_{r})\|_{L^\infty(\mathrm{B}_1(0))} \leq \frac{32c(n)}{11} \, E_{r,z}(w)
\end{equation*}
which is exactly~\eqref{e:rip5}.
\end{proof}
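For the reader's convenience we sketch the elementary polarization argument behind the inequality $|e(a_{r}(z))| \leq c(n) \sum_{0 \leq i < j \leq n} |e(a_{r}(z))\tilde{e}_{ij} \cdot \tilde{e}_{ij}|$ used in the proof above (here we use that $\tilde{e}_{0}, \ldots, \tilde{e}_{n}$ are affinely independent, so that the vectors $v_{i}:=\tilde{e}_{i}-\tilde{e}_{0}$, $i=1,\dotsc,n$, form a basis of $\mathbb{R}^{n}$). Every symmetric matrix $A$ satisfies
\begin{equation*}
2\, A v_{i}\cdot v_{j} = A v_{i}\cdot v_{i} + A v_{j}\cdot v_{j} - A(v_{i}-v_{j})\cdot(v_{i}-v_{j})\,,\qquad v_{i}-v_{j}=\tilde{e}_{i}-\tilde{e}_{j}\,,
\end{equation*}
and each quadratic form on the right-hand side is a multiple of some $A\tilde{e}_{kl}\cdot\tilde{e}_{kl}$, with a factor depending only on~$n$. Hence all entries of $A$ in the basis $(v_{i})_{i}$, and therefore $|A|$ itself, are controlled by $c(n)\sum_{0\leq i<j\leq n}|A\tilde{e}_{ij}\cdot\tilde{e}_{ij}|$; the claim follows upon taking $A=e(a_{r}(z))$.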
\label{sec I}
The concept of frustration plays an important role
in the search for novel quantum states of condensed matter, see,
e.g., \cite{lnp04,buch2,moessner01,frust1,frust2}. The
investigation of frustrated quantum spin systems is a challenging
task. Exact statements about the properties of quantum spin
systems are known only in exceptional cases. The simplest known
exact eigenstate is the fully polarized ferromagnetic state.
Furthermore, the one- and two-magnon excitations above the fully
polarized ferromagnetic state can also be calculated exactly,
see, e.g., \cite{mattis81,Kuzian07,Zhitomirsky10,nishimoto2011}. An
example for non-trivial eigenstates is Bethe's famous solution for
the one-dimensional (1D) Heisenberg antiferromagnet (HAFM)
\cite{bethe}.
The investigation of strongly frustrated magnetic systems
surprisingly led to the discovery of several new exact
eigenstates. Some of the eigenstates found for frustrated quantum
magnets are of quite simple nature and for several physical
quantities, e.g., the spin correlation functions, analytical
expressions can be found. Hence such exact eigenstates may play an
important role either as groundstates of real quantum magnets or
at least as groundstates of idealized models which can be used as
reference states for more complex quantum spin systems.
A well-known class of exact eigenstates are dimerized singlet
states, where a direct product of pair singlet states is an
eigenstate of the quantum spin system. Such states become
groundstates for certain values/regions of frustration. The most
prominent examples are the Majumdar-Ghosh state of the 1D $J_1-J_2$
spin-half HAFM \cite{majumdar} and the orthogonal dimer state of
the Shastry-Sutherland model, see, e.g.,
\cite{shastry81,Mila,Miyahara,Lauchli,uhrig2004,darradi2005}. Many other frustrated spin
models in one, two or three dimensions are known which have also
dimer-singlet product states as groundstates, see, e.g.,
\cite{pimpinelli,ivanov97,japaner3d,koga,schul02}. A systematic
investigation of systems with dimerized eigenstates can be found
in \cite{schmidt05}. Note that these dimer-singlet product
groundstates have gapped magnetic excitations and lead therefore
to a plateau in the magnetization at $m=0$.
Recently it has been demonstrated for the 1D counterpart of the
Shastry-Sutherland model \cite{ivanov97,koga,schul02}, that more
general product eigenstates containing chain fragments of finite
length can lead to an infinite series of magnetization plateaus
\cite{schul02}.
Other examples of product ground states are the single-spin product
states of the 1D XYZ model \cite{mueller85} and the highly
degenerate ground-state manifold of localized-magnon states found
for antiferromagnetic quantum spin systems on various frustrated
lattices \cite{lm}. Finally, we mention the so-called central-spin
model or Heisenberg star where also exact statements on the
groundstate are known \cite{starI}.
Although at first glance such singlet-product states seem to
exist only for `exotic' lattice models, it has turned out that such
models are not only a playground of theoreticians but may become
relevant for experimental research. The most prominent example is
the above mentioned Shastry-Sutherland model introduced in 1981
\cite{shastry81} for which only in 1999 the corresponding
quasi-two-dimen\-sional compound SrCu$_2$(BO$_3$)$_2$ was found
\cite{Kage,srcubo}. Other examples are the quasi-1D spin-Peierls
compound $CuGeO_3$, see, e.g., \cite{cugeo}, or the star-lattice
compound
[Fe$_3$($\mu_3$-O)($\mu$-OAc)$_6$(H$_2$O)$_3$][Fe$_3$($\mu_3$-O)($\mu$-OAc)$_{7.5}$]$_2
\cdot$7 H$_2$O.\cite{star-exp,star-theor}
In the present paper we combine the ideas of Shastry and
Sutherland \cite{shastry81} and our recent findings on exact
trimerized singlet product ground states (TSPGS's) for 1D
integer-spin Heisenberg systems \cite{schmidt10} and discuss such
TSPGS's on a two-dimensional modified Shastry-Sutherland
square-lattice model. Section \ref{sec_egs} shortly recapitulates
the theory of TSPGS's and section \ref{sec_model} defines the
modified Shastry-Sutherland model and its finite realizations
that will be analyzed in what follows. In our numerical studies
we have concentrated on the size of the gap above the exact ground
state for finite lattices of $N=12$ (for spin quantum numbers
$s=1$, $s=2$), as well as $N=18$ and $N=24$ (for $s=1$) and on
the magnetization curves for selected values of $J_2$, see section
\ref{sec_nr}. The analytical results in section \ref{sec ar}
mainly concern upper and lower bounds of the gap function. These
results depend on a slightly generalized statement and proof of
the gap theorem, first formulated in \cite{schmidt10}, which is
done in appendix~\ref{app}. Finally, appendix~\ref{class} contains exact results on
classical magnetization curves for the model under consideration.
\section{Exact ground states}
\label{sec_egs} The anti-ferromagnetic uniform spin trimer
\begin{equation}\label{egs1}
H_1=J(\op{\bf{s}}_0\cdot\op{\bf{s}}_1+
\op{\bf{s}}_0\cdot\op{\bf{s}}_2+\op{\bf{s}}_1\cdot\op{\bf{s}}_2)
\end{equation}
has, for $J>0$ and integer $s$, a unique $S=0$ ground state, denoted
by $[0,1,2]$, with ground state energy
\begin{equation}\label{egs2}
E_0=-\frac{3}{2}J s(s+1) \;.
\end{equation}
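This value follows immediately from rewriting the trimer Hamiltonian in terms of the total spin $\op{\bf{S}}=\op{\bf{s}}_0+\op{\bf{s}}_1+\op{\bf{s}}_2$:
\begin{equation*}
H_1=\frac{J}{2}\left(\op{\bf{S}}^2-3s(s+1)\right),\qquad
E(S)=\frac{J}{2}\Big(S(S+1)-3s(s+1)\Big)\,,
\end{equation*}
so that the $S=0$ level reproduces (\ref{egs2}), while the lowest excitations of the isolated trimer are the $S=1$ levels at $E_0+J$, independently of $s$.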
The corresponding product state
\begin{equation}\label{egs3}
\Phi=\bigotimes_{i=1}^{\mathcal N}[i0,i1,i2]
\end{equation}
will be an eigenstate of a system of ${\mathcal N}$ coupled spin
trimers indexed by $i=1,\ldots,{\mathcal N}$ with Hamiltonian
\begin{equation}\label{egs4}
H=\sum_{i\delta,\,
j\epsilon}J_{i\delta,j\epsilon}\,\op{\bf{s}}_{i\delta}\cdot\op{\bf{s}}_{j\epsilon}
\;,
\end{equation}
if and only if the coupling between different trimers is ``balanced"
in the following sense:
\begin{equation}\label{egs5}
J_{i\delta,j\delta}+J_{i\epsilon,j\epsilon}=J_{i\delta,j\epsilon}+J_{i\epsilon,j\delta}
\end{equation}
for all $1\le i<j\le{\mathcal N}$ and $\delta,\epsilon=0,1,2$, see
\cite{schmidt10}. Moreover, (\ref{egs3}) will be a ground state of
(\ref{egs4}), a TSPGS, if the intra-trimer coupling is almost
uniform and the inter-trimer coupling is not too strong
\cite{schmidt10}.
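To make the role of condition (\ref{egs5}) transparent, consider the special (manifestly balanced) case where each spin $\delta$ of trimer $i$ couples with one and the same strength $c_{i\delta}$ to all three spins of trimer $j$, i.e.~$J_{i\delta,j\epsilon}=c_{i\delta}$ for $\epsilon=0,1,2$. The inter-trimer coupling then only involves the total spin of trimer $j$,
\begin{equation*}
\sum_{\delta,\epsilon}c_{i\delta}\,\op{\bf{s}}_{i\delta}\cdot\op{\bf{s}}_{j\epsilon}
=\sum_{\delta}c_{i\delta}\,\op{\bf{s}}_{i\delta}\cdot\op{\bf{S}}_{j}\,,
\qquad
\op{\bf{S}}_{j}:=\sum_{\epsilon}\op{\bf{s}}_{j\epsilon}\,,
\end{equation*}
and annihilates $\Phi$ because $\op{\bf{S}}_{j}\,[j0,j1,j2]=0$; hence $\Phi$ is an eigenstate of $H$ with purely intra-trimer energy. The general balanced case (\ref{egs5}) is treated in \cite{schmidt10}.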
The domain of coupling constants where this is the case will be called the ``TSPGS-region".\\
If the system of trimers has a periodic lattice structure, the
difference $\Delta E$ between the energy of the first excited state
and that of the ground state can be shown \cite{schmidt10} to be
bounded from below by a positive constant independent of the system size. In other
words, the TSPGS is ``gapped".
\begin{figure}
\resizebox{0.9\columnwidth}{!}{
\includegraphics{24_2d_trimer_a.eps}
} \caption{ The modified Shastry-Sutherland model on the decorated
square lattice for $N= 24$ sites (periodic conditions imposed) used for exact diagonalization.}
\label{fig1}
\end{figure}
\begin{figure}
\resizebox{1.0\columnwidth}{!}{
\includegraphics{12_18.eps}
} \caption{Two finite decorated square lattices of $N= 12$ and
$N=18$ sites used for exact diagonalization.} \label{fig2}
\end{figure}
\section{The model}
\label{sec_model}
We consider the integer-spin Heisenberg Hamiltonian on a decorated
square-lattice (see figure~\ref{fig1}). It results from the
well-known Shastry-Sutherland model by replacing its diagonals by
equilateral triangles with uniform intra-trimer interaction strength
$J_1>0$. The set of triangles is divided in a bi-partite fashion
into two disjoint subsets of triangles of type I and type II,
corresponding to diagonals with positive slope resp.~negative ones,
see figure~\ref{fig1}. Each triangle of, say, type I is surrounded
by four
triangles of type II and connected to each of them with three bonds of strength $J_2$.\\
It follows that the inter-trimer coupling satisfies the balance
condition (\ref{egs5}) and hence the theory of TSPGS's applies. In
particular, two questions arise which will be addressed in the
following sections: What is the size of the TSPGS-region and of what
kind are the lowest excitations? The latter question is also
connected to the issue of magnetization plateaus which will be
shortly discussed below.
\section{Results}
\subsection{Numerical results}
\label{sec_nr} In what follows we set $J_1=1$ and consider $J_2$ as
the variable bond strength. To study the region where the TSPGS is
the ground state of the model (\ref{egs4}) we use the Lanczos exact
diagonalization (ED) technique. Since for spin quantum numbers $s >
1/2$ considered here the size of the Hamiltonian matrix grows much
faster with system size $N$ than for $s=1/2$, we are restricted to
finite lattices of $N= 12,18$ and $24$ for $s=1$ and $N=12$ for
$s=2$. The largest lattice is shown in figure~\ref{fig1}, whereas
the smaller lattices are shown in figure~\ref{fig2}. Although the
criterion for the existence of TSPGS's (see section \ref{sec_model})
is fulfilled, we have to mention that for the small lattices of
$N=12$ and $N=18$ the exchange pattern of the $J_1$ diagonal bonds
in the squares does not match that of the infinite system. Nevertheless, we
have included the data for $N= 12$ and $18$ to get an impression on
finite-size effects and on the influence of the spin quantum number
$s$.
According to \cite{schmidt10} the TSPGS is gapped. Hence we use the
spin gap, see figure~\ref{fig3}, to detect the critical points
$J^{c1}_2$ and $J^{c2}_2$, where the TSPGS gives way for other
ground states. We find for $s=1$ the values $J^{c1}_2
=-0.570,-0.578$, and $-0.587$ and $J^{c2}_2 =0.434, 0.446$, and
$0.454 $ for $N=12,18$, and $24$, respectively (cf.
figure~\ref{fig3}(a)). For $s=2$ and $N=12$ we have $J^{c1}_2
=-0.400$ and $J^{c2}_2 =0.322$, cf. figure~\ref{fig3}(b). These
values lie between the upper and lower bounds which will be derived
for $J^{c1}_2$ and $J^{c2}_2$ in the next section for $N \to
\infty$. The nature of the lowest excited state depends on $J_2$.
Around $J_2=0$ it is a triplet state with strong antiferromagnetic
correlations along the trimer bonds and weak correlations between
the trimers. Near $J^{c1}_2$ the lowest excitation is a
ferrimagnetic state, i.e. the total spin is $S=Ns/3$ and the system
splits into two ferromagnetically correlated sublattices containing
on the one hand the $2N/3$ square-lattice sites (i.e. sites
$0,1,\ldots,15$ in figure~\ref{fig1}) and on the other hand the
$N/3$ additional sites (i.e. sites $16,17,\ldots,23$ in
figure~\ref{fig1}). The spin correlations between both sublattices
are anti-ferromagnetic. The ferrimagnetic state is the ground state
for $-1.5 < J_2 < J^{c1}_2$. Near $J^{c2}_2$ the lowest excitation
is a collective singlet state with strong correlations along all
bonds, and, this state becomes
the ground state at $J_2=J^{c2}_2$.
\begin{figure}
\vspace*{5cm}
\scalebox{0.7}{\includegraphics{gap_multi_mit_schranken_vert.eps}}
\caption{Numerical exact data for $N=12$, $18$, and
$24$ (symbols) as well as upper (black solid line) and lower bounds
(red solid line) for the excitation gap $\Delta E $. (a) spin
quantum number $s=1$;
(b) spin quantum number $s=2$.
Note that the labels $S=1$, $S=0$, $S=2N/3$ (ferri), and $S=8$
(ferri) characterize the total spin of the excited state.
}
\label{fig3}
\end{figure}
\begin{figure}
\vspace*{5cm}
\scalebox{0.7}{\includegraphics{m_h_plat_multi_vertical.eps}}
\caption{(a) Magnetization curve $m(h)$ for selected
values of $J_2$ and $s=1$ (thick lines $N=24$, thin lines $N=18$);
(b) Plateau widths $\Delta h$ of the $m=1/3$ and the $m=2/3$ plateaus as
a function of $J_2$ for $N=24$ and $N=18$ and $s=1$.
}
\label{fig4}
\end{figure}
It is well known that the magnetization curve of the
Shastry-Sutherland model (as well as that of the corresponding
material SrCu$_2$(BO$_3$)$_2$) possesses a series of pla\-teaus, see,
e.g., \cite{Kage,kodama,misguich,mila}. Motivated by this, we study
now briefly the magnetization curve $M(h)$ (where $M$ is the total
magnetization and $h$ is the strength of the external magnetic
field) for the considered model for $s=1$ using ED
for $N=18$ and $N=24$ sites. ED results for the
relative magnetization $m=M/M_{sat}$ versus magnetic field $h$ for
$N=18$ and $N=24$ sites are shown in figure~\ref{fig4}a. Again the
finite-size effects seem to be small. Trivially, in the limit
$J_2=0$ the $m(h)$ curve consists of three equidistant plateaus and
jumps according to the magnetization curve of an individual
triangle. Switching on a ferromagnetic inter-triangle bond $J_2 < 0$
the general shape of the magnetization curve is preserved. However,
the saturation field as well as the end points of the plateaus
decrease almost linearly with $|J_2|$ and become zero at $J_2=-1.5$,
where the ground state becomes the fully polarized ferromagnetic
state.
In case of a moderate antiferromagnetic inter-triangle bond $J_2 >
0$ the plateaus at $m=1/3$ and $m=2/3$ still exist, however the
discontinuous transition between plateaus becomes smooth. Note that
a $m=1/3$ plateau was also found for the standard Shastry-Sutherland
model \cite{misguich,mila}. The plateau widths $\Delta h$ of the $m=1/3$
and $m=2/3$ plateaus in dependence on $J_2$ are shown in
figure~\ref{fig4}b. Obviously, both widths shrink monotonically with
increasing $|J_2|$.
If $J_2$ approaches the critical value $J^{c1}_2$ we find
indications for additional plateaus, e.g., at $m=5/6$. Note, however,
that our finite-size analysis of the plateaus naturally could miss
other plateaus present in infinite systems, see, e.g., the discussion
of the ED data of the $m(h)$ curve of the standard
Shastry-Sutherland model in \cite{wir04}. Hence, the study of
the magnetization process of the considered quantum spin model needs
further attention based on alternative methods.
One might expect that the presence of these plateaus and jumps may
be linked purely to quantum effects because they are often not
observed in equivalent classical models at $T=0$
\cite{lm,kawamura,zhito,cabra}. However, for the present model the
plateau at $m=1/3$ survives in the classical limit for $J_2 < 0$ as
we will show in appendix~\ref{class}.
\subsection{Analytical results}\label{sec ar}
\subsubsection{$s=1$}
\begin{figure}
\vspace*{5cm}
\scalebox{0.85}{\includegraphics{HR69_vertical.eps}}
\caption{Two possible subsystems of the modified
Shastry-Sutherland lattice, see e.~g.~figure \ref{fig1}. The upper
one, $H_6$, consists of two coupled triangles; the lower one, $H_9$,
of three triangles.
}
\label{figH69}
\end{figure}
In order to obtain analytical results about the TSPGS-region we have
adapted theorem $3$ of \cite{schmidt10} to the present situation. A
slightly more general version of this theorem is stated and proven
in appendix~\ref{app}. It yields lower bounds for the gap $\Delta E$ of the
form $\Delta E\ge f(4J)$ and the TSPGS-region in terms of properties
of simpler spin systems of which the lattice can be composed, see
figure~\ref{fig3}. These subsystems are chosen here as systems
isomorphic to $H_6$, see figure \ref{figH69}, consisting of two
neighboring triangles. For $s=1$ the gap function $x\equiv \delta_6
E=f(J),\,J\equiv\frac{J_2}{J_1}$ of $H_6$ is obtained as a special
case of equation (\ref{ar2a}) given below.
This yields the corresponding bounds for the
TSPGS-region $(J^{c1},J^{c2})$
\begin{equation}\label{ar3}
J^{c1}<\frac{3-\sqrt{73}}{16}\approx -0.3465<\frac{1}{4}<J^{c2}\quad
\mbox{for }s=1\;.
\end{equation}
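The numbers in (\ref{ar3}) can be cross-checked numerically. The following standalone \texttt{numpy} sketch (our own illustration, not code used for this work) sets $x=0$ and $r=s(s+1)=2$ in the cubic (\ref{ar2a}), whose roots in $J$ mark the zeros of the gap function $f$; since $\Delta E\ge f(4J_2)$, dividing the two relevant roots by $4$ reproduces the inner bounds $(3-\sqrt{73})/16$ and $1/4$ of (\ref{ar3}).

```python
import numpy as np

# Eq. (ar2a) at x = 0 with r = s(s+1) = 2 reduces to the cubic in J:
#   2 J^3 - 5 J^2 - 5 J + 8 = 0
roots = np.sort(np.roots([2.0, -5.0, -5.0, 8.0]).real)

# analytic factorization: (J - 1)(2 J^2 - 3 J - 8) = 0
expected = np.sort([(3 - np.sqrt(73)) / 4, 1.0, (3 + np.sqrt(73)) / 4])
assert np.allclose(roots, expected)

# Delta E >= f(4 J_2), so the TSPGS region contains (roots / 4):
J_lo, J_hi = roots[0] / 4, roots[1] / 4
print(round(J_lo, 4), round(J_hi, 4))  # -0.3465 0.25
```

The third root, $(3+\sqrt{73})/4$, lies beyond the relevant branch and plays no role for the bound.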
The function $\delta_6 E=f(J)$ according to (\ref{ar2a}) also
provides an upper bound for the gap function of the lattice, since it
represents the energy of a state orthogonal to the TSPGS, albeit not
an eigenstate of $H$. This bound is very close to the numerically
determined gap function in the case of $N=12$, see figure \ref{fig3}a,
but considerably deviates in the cases of $N=18$ and $N=24$. This
indicates that, in general, the lowest excitations of the lattice
are different from the excitations of $H_6$.\\
\subsubsection{General $s$}
It is possible to analytically calculate the energy of the lowest
excitations of $H_6$ for general integer $s$. The corresponding gap
$\delta_6 E=x=f(J)$ is obtained as the lowest root of the following
cubic equation
\begin{eqnarray}\nonumber
&-&(x-4) (x-2) (x-1 )
-(x-1 ) (2x-5 ) J\\ \label{ar2a}
&+&( 1 - 3 r - x + r x)J^2 +r J^3=0
\end{eqnarray}
where we have set $r\equiv s(s+1)$. From this result one derives the
lower bound
\begin{equation}\label{ar2b}
\Delta E\ge f(4J) \quad \mbox{for general }s
\end{equation}
and a lattice of arbitrary size, see theorem $1$ in appendix~\ref{app}
adapted to the system under consideration.
The corresponding curves shrink in the $J$-direction with
increasing $s$ and yield inner bounds for the TSPGS-region
$(J^{c1},J^{c2})$ of the form
\begin{equation}\label{ar2b1}
J^{c1} < J_{L}^{(1)}<0<J_{L}^{(2)}<J^{c2}
\;,
\end{equation}
see figure \ref{figG1} (green curves).
Upon scaling w.~r.~t.~the new variable
$j\equiv\sqrt{r}J$ the graphs of (\ref{ar2a}) asymptotically approach the curve given by
\begin{equation}\label{ar2c}
j^2=\frac{(x-4)(x-2)(x-1)}{16(x-3)}\;,
\end{equation}
with Taylor expansion
\begin{equation}\label{ar2d}
x=1-\frac{32}{3}j^2+{\mathcal O}(j^3)\;,
\end{equation}
see figure \ref{figG3b}.
Hence $J_{L}^{(i)}$ assumes for $s\to\infty$ asymptotically the form
\begin{equation}\label{ar2d1}
|J_{L}^{(i)}|\sim\frac{1}{\sqrt{6s(s+1)}},\;\;i=1,2
\;.
\end{equation}
\\
\begin{figure}
\resizebox{1.0\columnwidth}{!}{
\includegraphics{FIGLB.eps}
} \caption{Lower bounds of the scaled gap function
$\delta_6 E(j),\;j=\sqrt{s(s+1)}J$ of the modified
Shastry-Sutherland spin lattice for $s=1,\ldots,10$ (thin curves)
obtained from Eq.~(\ref{ar2a}). The curves approach the asymptotic
(\ref{ar2c}) for $s\rightarrow\infty$ (thick red curve) which has a
simple quadratic approximation (\ref{ar2d}) (thick green curve).} \label{figG3b}
\end{figure}
\begin{figure}
\resizebox{1.0\columnwidth}{!}{
\includegraphics{FIGUB.eps}
} \caption{Upper bounds of the scaled gap function
$\delta_0 E(j),\;j=\sqrt{s(s+1)}J$ of the modified
Shastry-Sutherland spin lattice for $s=1,\ldots,10$ (thin curves)
obtained from Eq.~(\ref{ar6}). The curves approach the asymptotic
(\ref{ar7}) for $s\rightarrow\infty$ (thick red curve) which has a
simple quadratic approximation (\ref{ar8}) (thick green curve).} \label{figG3a}
\end{figure}
In order to obtain close upper bounds $g(J)$ of the gap $\Delta E$
in the case $N\ge 18$, we calculate
the energy of a certain (degenerate) state that involves three
triangles for arbitrary integer $s$, say, one triangle of type $I$
and two neighboring triangles of type $II$, see figure \ref{figH69}.
This state is obtained as an exact eigenstate of $H_0$, which is the
full Hamiltonian $H$, restricted to a $4^3=64$-dimensional subspace
spanned by product states of the form
\begin{equation}\label{ar4}
\phi_i\otimes\phi_j\otimes\phi_k,\quad i,j,k=0,\ldots,3\;.
\end{equation}
The $\phi_n$ live in the $(2s+1)^3$-dimensional Hilbert spaces
belonging to one of the three triangles. $\phi_0=[0,1,2]$ denotes
the TSPGS of the corresponding triangle and
\begin{equation}\label{ar5}
\phi_i\equiv \frac{\op{s}_0^{(i)}\phi_0}{||
\op{s}_0^{(i)}\phi_0||},\;i=1,2,3\;,
\end{equation}
where $\op{s}_0^{(i)}$ is the $i$-th component of the spin operator
$\op{\bf s}_0$ pertaining to the spin site number $0$, an
arbitrarily chosen spin site of the corresponding triangle. The gap
function of $H_0$ will be denoted by $x\equiv\delta_0 E=g(J)$ and
constitutes an upper bound for $\Delta E$.
It has the following implicit form, using $r\equiv s(s+1)$:
\begin{eqnarray}\nonumber
0&=& -12(x-3)^2(x-2)(x-1)-6J(x-3)(x-1)(4x-9)\\ \nonumber
&& +J^2
(x-3)(9-9x+4r(7x-15))+16 J^3 r (2x-5)\;.\\
&& \label{ar6}
\end{eqnarray}
Again, the function $g$ belongs to the lowest branch of (\ref{ar6}).
The corresponding curves shrink in the $J$-direction with
increasing $s$ and yield outer bounds for the TSPGS-region
$(J^{c1},J^{c2})$ of the form
\begin{equation}\label{ar6a}
J_{U}^{(1)}<J^{c1} < 0<J^{c2}<J_{U}^{(2)}
\;,
\end{equation}
see figure \ref{figG1} (red curves).
Upon scaling w.~r.~t.~the new variable
$j\equiv\sqrt{r}J$ the graphs of (\ref{ar6})
asymptotically approach the curve given by
\begin{equation}\label{ar7}
j^2=\frac{3(x-3)(x-2)(x-1)}{7x-15}\;,
\end{equation}
with Taylor expansion
\begin{equation}\label{ar8}
x=1-\frac{4}{3}j^2+{\mathcal O}(j^3)\;,
\end{equation}
see figure \ref{figG3a}.
Hence $J_{U}^{(i)}$ assumes for $s\to\infty$ asymptotically the form
\begin{equation}\label{ar8a}
|J_{U}^{(i)}|\sim\sqrt{\frac{6}{5s(s+1)}},\;\;i=1,2
\;.
\end{equation}
\\
\begin{figure}
\resizebox{1.0\columnwidth}{!}{
\includegraphics{FIGG1.eps}
} \caption{Exact bounds for the TSPGS-region $(J^{c1},J^{c2})$ for $s=1,\ldots,10$
of the form $J_{U}^{(1)}<J^{c1}< J_{L}^{(1)}< 0<J_{L}^{(2)}<J^{c2}<J_{U}^{(2)}$.
These are derived
from (\ref{ar2a}) (green curves, inner bounds) and (\ref{ar6}) (red curves, outer bounds).
In the classical limit $s\rightarrow\infty$
the TSPGS-region shrinks to zero according to (\ref{ar2d1}) and (\ref{ar8a}).}
\label{figG1}
\end{figure}
Although these curves constitute only upper
bounds of the true gap functions, the comparison with the numerical
results for $N=18$ and $N=24$ reveals that they closely approximate
the numerical gap curves, see figure \ref{fig3}. This supports our conjecture that
(\ref{ar6}) indeed may serve as an analytical approximation of the
gap functions for large $N$ and arbitrary integer $s$. This would
mean that the excitations from the TSPGS can be viewed as local
excitations essentially concentrated on three neighboring triangles.
Numerically determined spin correlation functions seem to be in accordance with
this conjecture. Of course, the corresponding excited state will be
largely degenerate due to the translational symmetry of the lattice.
We expect an almost flat $\bf{k}$-dependence of the energy band
$E(\bf{k})$. This expectation is also supported by our numerical
results. We have found that the lowest excitations close to $J=0$
have the total spin quantum number $S=1$ in accordance with our
model.\\
In the case $N=12$, where we have performed numerical calculations for
$s=1$ and $s=2$, it is not possible to embed a subsystem of type $H_9$ into the
lattice, and the above results do not apply. However,
an analogous method can be applied to two
coupled triangles of type $H_6$ and yields an upper bound of the gap function of the form
\begin{equation}\label{ar2e}
\Delta E\le \frac{1}{12} (18 - 3 J - \sqrt{9(J-2)^2 + 96 J^2
s(s+1)}).
\end{equation}
The numerically determined gap together with the bounds (\ref{ar2b})
and (\ref{ar2e}) is represented in figure \ref{fig3} b.\\
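At $J_2=0$ the bounds collapse to a common value: the lowest root of (\ref{ar2a}) at $J=0$ is $x=1$, and both (\ref{ar6}) and (\ref{ar2e}) likewise give $\Delta E\le 1$ there. This agrees with the fact that the gap of an isolated trimer, between its $S=0$ ground state and the lowest $S=1$ levels, equals $J$ for every integer $s$. The following standalone \texttt{numpy} exact-diagonalization sketch (our own illustration; names and structure are not taken from any code used for this work) confirms the ground state energy (\ref{egs2}) and the $s$-independent gap for $s=1,2$:

```python
import numpy as np

def spin_ops(s):
    """Spin matrices (Sx, Sy, Sz) for spin quantum number s."""
    d = int(round(2 * s)) + 1
    m = np.arange(s, -s - 1, -1.0)        # magnetic quantum numbers s, ..., -s
    sp = np.zeros((d, d))                 # raising operator S+
    for k in range(d - 1):
        sp[k, k + 1] = np.sqrt(s * (s + 1) - m[k + 1] * (m[k + 1] + 1))
    return (sp + sp.T) / 2, (sp - sp.T) / 2j, np.diag(m)

def trimer_levels(s, J=1.0):
    """Spectrum of H_1 = J (s0.s1 + s0.s2 + s1.s2), eq. (egs1)."""
    comps = spin_ops(s)
    eye = np.eye(comps[0].shape[0])
    def site(op, i):                      # embed a one-site operator on site i
        mats = [eye, eye, eye]
        mats[i] = op
        return np.kron(np.kron(mats[0], mats[1]), mats[2])
    H = J * sum(site(a, i) @ site(a, j)
                for i, j in [(0, 1), (0, 2), (1, 2)] for a in comps)
    return np.linalg.eigvalsh(H)

for s in (1, 2):
    ev = trimer_levels(s)
    assert abs(ev[0] + 1.5 * s * (s + 1)) < 1e-9   # eq. (egs2)
    assert abs(ev[1] - ev[0] - 1.0) < 1e-9         # gap = J for any integer s
```

This reproduces the value $\Delta E=1$ seen at $J_2=0$ in figure \ref{fig3} for both spin quantum numbers.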
\section*{Acknowledgement}
The numerical calculations were performed using J.~Schulenburg's
{\it spinpack}.
Understanding how quarks and gluons are distributed within hadrons remains an overarching goal of modern-day nuclear physics. Among the physical processes used to assess the internal structure of such states is Compton scattering, which has been proposed as a tool to obtain generalized parton distributions of hadrons~\cite{Ji:1996ek,Radyushkin:1996nd}. For this reason, the Compton scattering process is particularly relevant for the 12 GeV upgrade at Jefferson Lab~\cite{Dudek:2012vr} as well as for the future electron-ion collider~\cite{Accardi:2012qut}.
Lattice quantum chromodynamics (lattice QCD) is the only known systematically improvable method for making non-perturbative predictions based on the fundamental theory of the strong nuclear force. In order to be numerically tractable, lattice QCD calculations must be defined in a finite Euclidean spacetime, which inherently limits the classes of observables that are accessible. For example, the Compton amplitude, together with a wide class of other scattering and decay amplitudes, requires physical, Minkowski time evolution. It can therefore only be accessed from Euclidean correlators via analytic continuation or finite-volume methods. While the first method has received considerable attention recently (see e.g.~Refs.~\cite{Tripolt:2018xeo,Bailas:2020qmv,Bulava:2021fre}), the second is well established and has proven very useful for extracting hadronic scattering and decay amplitudes~\cite{Luscher:1986pf,Hansen:2019nir,Briceno:2017max,Rusetsky:2019gyk}.
Another promising numerical approach for evaluating real-time QCD quantities involves using quantum computing techniques (for a review of these ideas see~\cite{Georgescu:2013oza,Banuls:2019bmf}, and for recent applications see~\cite{Jordan:2014tma, Jordan:2011ci, Jordan:2011ne, Davoudi:2019bhy, Kuno:2014npa,Martinez:2016yna,Mueller:2019qqj,Lamm:2019uyc, Kaplan:2018vnj, Kaplan:2017ccd, Gustafson:2019mpk, Marshall:2015mna,Lu:2018pjk,Ciavarella:2020vqm}).
In this work, we discuss the prospects of accessing Compton-like amplitudes from Minkowski correlation functions. We define the finite-volume Minkowski correlator with a non-infinitesimal $i \epsilon$ (implemented via the Fourier transform) and note that the desired infinite-volume amplitude is given by the ordered double limit: $L \to \infty$ (where $L$ is the spatial periodicity) followed by $\epsilon \to 0$ (see also Refs.\cite{Hansen:2015zga,Bulava:2019kbi,Agadjanov:2016mao}). We then use the finite-volume formalism derived in~\cite{Briceno:2019opb} to predict the size of finite-volume systematic errors for given values of $\epsilon$ and $L$.
We additionally provide a prescription, based on averaging over redundant kinematics, that significantly reduces the finite-volume errors. For the theories that we consider this typically reduces the finite-volume effects by several orders of magnitude. The expectation is that the improvement provided by this procedure is universal. Although a proof remains outstanding, here we provide empirical evidence supporting this conjecture. The first evidence, published in Ref.~\cite{Briceno:2020rar}, showed that this procedure reduces finite-volume effects for kinematics where a single two-particle channel is kinematically open. In addition to reviewing these findings, in the present work we show preliminary results demonstrating that the same conclusion may be drawn for kinematics where multiple two-particle channels are open.
\section{Infinite and finite volume amplitudes in 1+1D}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=1\textwidth]{./figs/iM_iT}
\caption{Diagrammatic representations of $(a)$ the $2\to2$ scattering amplitude, $\M$, and $(b)$ the Compton-like amplitude, $\T$. \label{fig:iM_iT}}
\end{center}
\end{figure}
Ultimately, we are interested in the study of Compton scattering for arbitrary kinematics. However, at this stage, the formalism needed to describe the finite-volume artifacts for arbitrary kinematics has yet to be developed. Therefore, as a start, we restrict our attention to the kinematic region where two-particle states may go on-shell.
As shown in Ref.~\cite{Briceno:2019opb}, the kinematic singularities and finite-volume effects of these amplitudes are parametrized in terms of the infinite-volume amplitudes describing all physical subprocesses. Thus, in the present context, in order to understand the behavior of the Compton amplitude, we must first understand the $2\to2$ and $1+\mathcal{J}\to 2$ scattering amplitudes. The on-shell representation of these amplitudes is well known in 3+1D [see, for example, Ref.~\cite{Briceno:2020vgp} for a recent detailed derivation]. Given that the first quantum computations are most likely to be performed in 1+1D, here we only consider 1+1D theories. In Ref.~\cite{Briceno:2020rar}, we first derived the expressions for these amplitudes, including the Compton-like amplitude and its finite-volume analogue, in which a non-zero $\epsilon$ parameter is introduced in the definition. In what follows we briefly review the main necessary results.
\subsection{Amplitudes for a single two-particle channel in 1+1D}
We begin by considering the scattering of two hadrons of mass $m$ in 1+1D. The two-vector $P^\mu = (E, \boldsymbol P)$ denotes the total energy and momentum of the two-hadron state. In the center of mass frame the total energy ($E^\star$) is given by,
\begin{equation}
E^{\star2} = P^\mu P_\mu = E^2-\boldsymbol P^2 = s\,,
\end{equation}
where $s$ is the Mandelstam variable. In this section, we restrict our discussion to energies where two-particle systems may go on-shell. First, we assume there is a single two-particle channel open that is composed of two identical particles of mass $m$. This then implies that the energies considered will satisfy the condition $2m < E^\star < 3m$. We will later partly lift this assumption by allowing for multiple two-particle channels to dynamically couple.
The $2\to2$ hadronic amplitude, denoted by $\M $, is defined diagrammatically in Figure~\ref{fig:iM_iT}(a). By isolating the singularities and summing these contributions to all orders, the amplitude can be written as
\begin{align}
\M(s)
=\frac{1}{\K(s)^{-1}-i\rho(s)} \,,
\label{eq:Mdef}
\end{align}
where the K-matrix, $\K $, is a real quantity for $s > 4 m^2$, and $\rho $ is the phase-space factor. For identical particles in 1+1D the phase-space factor is $\rho(s) = \frac{1 }{8 E^\star q^\star}$, where $q^\star$ is the relative momentum in the center-of-mass frame, $q^\star \equiv \sqrt{s/4 - m^2}$.
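As a quick numerical check of Eq.~\eqref{eq:Mdef}, the Python sketch below evaluates $\M(s)$ for an illustrative pole-type K-matrix (our own parameter choice, anticipating the parametrization of Eq.~\eqref{eq:Kmatpar} with $h(s)=0$) and verifies the unitarity constraint $\text{Im}\,\M(s)^{-1} = -\rho(s)$ above threshold.

```python
import numpy as np

m = 1.0  # common particle mass, setting the units

def rho(s):
    # identical-particle phase space in 1+1D: rho = 1 / (8 E* q*)
    q_star = np.sqrt(s / 4.0 - m**2 + 0j)
    return 1.0 / (8.0 * np.sqrt(s + 0j) * q_star)

def k_matrix(s, g=2.5, m_r=2.5 * m):
    # illustrative pole-plus-polynomial K-matrix with h(s) = 0
    q_star2 = s / 4.0 - m**2
    return m**2 * q_star2 * g**2 / (m_r**2 - s)

def amplitude(s):
    # M(s) = 1 / (K(s)^{-1} - i rho(s))
    return 1.0 / (1.0 / k_matrix(s) - 1j * rho(s))

s = 6.5 * m**2  # a point above threshold, s > 4 m^2
assert abs((1.0 / amplitude(s)).imag + rho(s).real) < 1e-12
```

Any real parametrization of $\K$ passes the same check, since the $-i\rho$ term alone carries the imaginary part of $\M^{-1}$.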
In Figure~\ref{fig:iM_iT}(a), it is possible to replace one of the initial hadron states with an external current. In this case, the
$1+\mathcal{J}\to 2$ transition amplitude at all orders, analogous to $\M$, can be written as,
\begin{align}
\mathcal{H}(s)
=\M(s) \mathcal{A}(s ,Q^2) \,,
\label{eq:calHdef}
\end{align}
where $\mathcal{A}(s ,Q^2)$ is a {\it generalized} transition form factor and is a smooth function in $s$~\cite{Briceno:2020vgp}.
Now, we consider the presence of two external currents to study Compton-like amplitudes. In particular, we focus on matrix elements involving the time-ordered product of two identical scalar currents, $\mathcal J(x)$, between an initial and final single-particle external state,
\begin{equation}
\T(s,Q^2,Q^2_{if}) \equiv i \int d^2 x \, e^{i\omega t - i\bm{q} \cdot \bm{x}} \, \langle \boldsymbol p_f \vert \, \text{T} \{ \mathcal J (x) \mathcal J'(0) \} \, \vert \boldsymbol p_i \rangle_{\text{c}}\,,
\label{eq:TAdef}
\end{equation}
where the subscript ``$\text{c}$'' indicates that the definition of $\mathcal{T}$ includes only connected contributions, and $\text{T}$ denotes the time ordering. The process kinematics is defined in the first line of Figure~\ref{fig:iM_iT}(b), where $q=(\omega, \bm{q})$. The Lorentz invariants relevant for this process are the Mandelstam variable $s = (p_f+q)^2$,
together with $Q^2_{if}=-(p_f+q-p_i)^2$ and $Q^2=-q^2$, the incoming and outgoing current virtualities, respectively.
The Compton-like amplitude $\mathcal{T}$ is diagrammatically defined in Figure~\ref{fig:iM_iT}(b).
By isolating the possible singularities associated with intermediate two-particle states, one can write this amplitude in terms of $\M$, $\mathcal{A}$, and a new smooth real function, $\mathbf{S} $~\footnote{In Ref.~\cite{Briceno:2020rar}, $\T$ was written explicitly in terms of $ \mathbf{T}$ and $\bH$. Here we instead use the equivalent expression obtained following the steps sketched in Refs.~\cite{Briceno:2020vgp,Briceno:tbp}.},
\begin{align}
\mathcal{T} (s,Q^2 ,Q^2_{if})
&= \mathbf{S}(s,Q^2 ,Q^2_{if})
+ \mathcal{A}(s ,Q^2)\mathcal{M}(s)\mathcal{A}(s ,Q^2_{if}) +
[s\longleftrightarrow u]
\,
\,,
\label{eq:compton}
\end{align}
where $[s\longleftrightarrow u]$ denotes the exchange of the Mandelstam variables $s$ by $u$.
Having established the relevant expressions for the Compton-like amplitude $\mathcal T $, we now focus on a finite-volume estimator for this quantity, $\mathcal T_L$, defined as
\begin{equation}
\T_L(p_f, q, p_i) \equiv 2i \sqrt{\omega_{\boldsymbol p_f} \omega_{\boldsymbol p_i}} \, L\int dx^0 \int_0^L dx^1 \, e^{i \omega x^0 -\epsilon \vert x^0 \vert -i \bm{q} \cdot \bm{x}}\, \langle \boldsymbol p_f \vert \, \text{T} \{ \mathcal J(x) \mathcal J'(0) \} \, \vert\boldsymbol p_i \rangle_{\text{c}, L} \,,
\label{eq:TAdefFV}
\end{equation}
where the proportionality factor arises from the normalization of one-particle states in a finite volume. The $\epsilon$ regulates the singularities in both the $s$- and $u$-channel diagrams. Here we only consider this effect explicitly in the $s$-channel diagrams, where it can be understood as a shift $q_0\to q_0+i\epsilon$. Going forward, we simply ignore the contribution from the $u$-channel diagrams.
In order to understand how to recover the infinite-volume amplitude, $\T$, from its finite-volume counterpart, $\T_L$, we consider the finite-volume long-range formalism derived in Ref.~\cite{Briceno:2019opb} for 3+1D. The finite-volume analog of the Compton-scattering amplitude, $\mathcal T_L$, can be written in terms of the infinite-volume amplitudes and one finite-volume function, $F$. Ignoring exponentially suppressed volume effects, one finds
\begin{align}
\T_{L}(p_f,q,p_i) =
\T (s,Q^2 ,Q^2_{if}) -
\cH(s,Q^2)
\,
\frac{1}{F^{-1}(E^\star,\boldsymbol P, L) + \M(s) }
\,
\cH(s ,Q^2_{if})
\,,
\label{eq:TL}
\end{align}
and one can show that in $1+1$D the $F$ function can be written as
\begin{align}
F(E^\star,\boldsymbol P, L )
&= i \rho(s)
+
\frac{\rho(s)}{2}
\left[
\cot\left(\frac{L\gamma(q^\star+\omega_q^\star\beta)}{2}\right)
+
\cot\left(\frac{L\gamma(q^\star-\omega_q^\star\beta)}{2}\right)
\right]
\,,
\label{eq:Fcot}
\end{align}
where $\gamma$ and $\beta$ define a Lorentz boost in the $\boldsymbol P$ direction, $\gamma = E/E^\star$ and $\beta = \boldsymbol P /E$, and $\omega_q^\star=\sqrt{q^{\star 2}+m^2} = E^\star/2$.
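The behaviour of $F$ under the $i\epsilon$ prescription can be made explicit numerically. The following Python sketch (kinematic values are illustrative choices of ours) evaluates Eq.~\eqref{eq:Fcot} at the complexified energy $E\to E+i\epsilon$ and shows that, at fixed $\epsilon>0$, $F$ is driven to zero as $L$ grows.

```python
import numpy as np

def f_function(e_star, p, l, m=1.0, eps=0.0):
    # Eq. (F): the epsilon prescription shifts the lab-frame energy E -> E + i eps
    e = np.sqrt(e_star**2 + p**2) + 1j * eps
    s = e**2 - p**2
    q_star = np.sqrt(s / 4.0 - m**2)
    omega_star = np.sqrt(s) / 2.0          # = E*/2 for equal masses
    gamma, beta = e / np.sqrt(s), p / e
    cot = lambda z: np.cos(z) / np.sin(z)
    return 1j * rho_ps(s) + 0.5 * rho_ps(s) * (
        cot(0.5 * l * gamma * (q_star + omega_star * beta))
        + cot(0.5 * l * gamma * (q_star - omega_star * beta)))

def rho_ps(s, m=1.0):
    # identical-particle phase space, rho = 1 / (8 E* q*)
    return 1.0 / (8.0 * np.sqrt(s) * np.sqrt(s / 4.0 - m**2))

# At fixed eps > 0, |F| decreases rapidly with increasing L
vals = [abs(f_function(2.5, 0.0, l, eps=0.1)) for l in (20.0, 60.0, 200.0)]
assert vals[2] < vals[1] < vals[0] and vals[2] < 1e-6
```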
It is easy to show that $F$ satisfies,
\begin{align}
\lim_{\epsilon\to 0} \lim_{L \to \infty}F(E^\star +i\epsilon,\boldsymbol P, L )= 0 \, .
\label{eq:F_lim}
\end{align}
Thus the physical Compton-like amplitude can be recovered from the ordered double limit:
\begin{align}
\lim_{\epsilon \to 0} \lim_{L \to \infty} \T_{L}(p_f,q,p_i) = \T (s,Q^2 ,Q^2_{if}) \,.
\label{eq:IV_lim}
\end{align}
In practice, one cannot take this limit numerically. Instead, one must estimate the amplitude from various values of $\epsilon$ and $L$, assigning a systematic uncertainty due to the non-zero $\epsilon$ and finite $L$, respectively, or due to the extrapolation ansatz.
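As an illustration of this ordered limit, the self-contained Python sketch below (restricted to $\boldsymbol P = 0$, with $\mathbf{S}=0$, the $u$-channel dropped, and parameter values that are illustrative choices of ours) assembles $\T_L$ from Eq.~\eqref{eq:TL} and checks that the finite-volume correction term decreases as $L$ grows at fixed $\epsilon$.

```python
import numpy as np

m, g, m_r, q2 = 1.0, 2.5, 2.5, 2.0       # illustrative parameters (ours)

def rho(s):
    return 1.0 / (8.0 * np.sqrt(s) * np.sqrt(s / 4.0 - m**2))

def m_amp(s):
    k = m**2 * (s / 4.0 - m**2) * g**2 / (m_r**2 - s)   # K-matrix with h(s) = 0
    return 1.0 / (1.0 / k - 1j * rho(s))

def a_ff():
    return 1.0 / (1.0 + q2 / m_r**2)      # transition form factor, Eq. (Hparam)

def f_func(s, l):
    # Eq. (Fcot) at P = 0, where gamma = 1 and beta = 0
    q_star = np.sqrt(s / 4.0 - m**2)
    return 1j * rho(s) + rho(s) * np.cos(0.5 * l * q_star) / np.sin(0.5 * l * q_star)

def t_amp(s):
    return a_ff() * m_amp(s) * a_ff()     # s-channel part of Eq. (compton), S = 0

def t_l(s, l):
    h = m_amp(s) * a_ff()                 # H = M A, Eq. (calHdef)
    return t_amp(s) - h * h / (1.0 / f_func(s, l) + m_amp(s))

s_eps = (2.6 + 0.1j) ** 2                 # (E* + i eps)^2 at P = 0
corr = [abs(t_l(s_eps, l) - t_amp(s_eps)) for l in (30.0, 120.0)]
assert corr[1] < corr[0]                  # correction shrinks as L grows at fixed eps
```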
\subsection{Extension for multiple open channels}
For energies where $n$ two-body channels may interact, the infinite volume scattering amplitude can be written as~\cite{Briceno:2012yi,Hansen:2012tf}
\begin{align}
\mathcal{M}_{ab}(s)
= \left[\left( 1 - i\mathcal{K}(s)\rho(s) \right)^{-1}\right]_{ab'}\mathcal{K}_{b'b}(s),
\end{align}
where the indices run over the possible channels. The K-matrix is now a square matrix of dimension $n\times n$, and $\rho$ is a diagonal matrix defined by $\rho_{ab}(s) = \frac{\delta_{ab}}{8E^\star q_a^\star}$. For simplicity, we consider that the two particles in each given channel are identical with mass $m_a$, and as a result the relative momentum for the $a$-th channel can be written as $q_a^\star \equiv \sqrt{s/4 - m_a^2}$. We choose $m_1$ to be the mass of the lightest particle, $m_1<m_{a\neq1}$.
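A minimal numerical sketch of the coupled-channel amplitude (Python; the diagonal $h_{ab}$ term and all parameter values are illustrative choices of ours, included to keep $\K$ invertible) verifies two exact properties above all thresholds: the symmetry $\M_{ab}=\M_{ba}$, and coupled-channel unitarity, $\text{Im}\,[\M^{-1}]_{ab} = -\rho_{ab}$.

```python
import numpy as np

def coupled_m(s, masses, g, m_r, h=0.2):
    # M = (1 - i K rho)^{-1} K for n coupled two-body channels in 1+1D;
    # h_ab(s) = h * delta_ab is our illustrative choice, keeping K invertible
    masses, g = np.asarray(masses, float), np.asarray(g, float)
    q = np.sqrt(s / 4.0 - masses**2 + 0j)                 # q_a* per channel
    rho = np.diag(1.0 / (8.0 * np.sqrt(s + 0j) * q))
    k = np.outer(masses, masses) * q[0]**2 * (
        np.outer(g, g) / (m_r**2 - s) + h * np.eye(len(masses)))
    return np.linalg.solve(np.eye(len(masses)) - 1j * k @ rho, k), rho

s = 8.0                                    # above both thresholds
m_mat, rho_mat = coupled_m(s, [1.0, 1.3], [2.5, 1.5], m_r=2.5)
assert np.allclose(m_mat, m_mat.T)                          # M_ab = M_ba
assert np.allclose(np.linalg.inv(m_mat).imag, -rho_mat.real)  # Im M^{-1} = -rho
```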
The infinite volume Compton amplitude remains a scalar quantity,
\begin{align}
\mathcal{T}(s,Q^2,Q^2_{if})
&= \mathbf{S}(s,Q^2,Q^2_{if})
+ \mathcal{A}_a(s,Q^2)\mathcal{M}_{ab}(s)\mathcal{A}_b(s,Q_{if}^2),
\end{align}
where now the transition form factors, $\mathcal{A}$, are vectors in channel space, and $\mathbf{S}$ remains a smooth scalar function. For the finite-volume Compton amplitude with multiple open channels,
\begin{align}
\T_{L}(p_f,q,p_i)
&= \mathcal{T}(s,Q^2,Q^2_{if})
- \mathcal{H}_a(s,Q^2) \left[\left(F^{-1}(E^\star,\boldsymbol P, L )+\mathcal{M}(s)\right)^{-1}\right]_{ab}
\mathcal{H}_b(s,Q^2_{if}),
\label{eq:TL_coupled}
\end{align}
the transition amplitudes, $\mathcal{H}$, are vectors in channel space which may be expressed as $\mathcal{H}_a = \mathcal{M}_{ab} \mathcal{A}_b $ and $F$ is a finite-volume diagonal matrix whose elements are the geometric functions for each channel, as given in Eq.~\eqref{eq:Fcot} with the appropriate relative momentum $q_a^\star$.
\section{Numerical Results}
\label{sec:Numerical_results}
In this section, we explore how to numerically recover the infinite-volume Compton amplitude, $\T$, from its finite-volume analog, $\T_L$. To achieve this, one must choose reasonable functional forms for the infinite-volume real functions $\K$, $\mathcal{A}$, and $\mathbf{S}$, since they enter in $\T_L$.
We will first discuss the results for a single channel open, and then consider multiple open channels.
\subsection{Single channel open}
\label{sec:single_num}
We use a flexible parametrization of the K matrix,
\begin{gather}
\K (s) = m^2 q^{\star 2} \bigg(\frac{g^2}{m_{R}^2 - s}+h(s) \bigg) \,,
\label{eq:Kmatpar}
\end{gather}
where $g$ is a dimensionless coupling constant, $m_R$ is a parameter with units of energy, and $h(s)$ is a polynomial in $s$ with dimensions $1/m^2$. The dimensions of these parameters are chosen such that
$\K $ has dimensions of $m^2$. For the transition form factor, $\mathcal{A}$, and the smooth function $\mathbf{S}$ we choose:
\begin{equation}
\mathcal{A}(s, Q^2) =
\frac{1}{1+{Q^2}/m_R^2} \,, \qquad
\mathbf{S}(s,Q^2,Q_{if}^2) = 0 \,.
\label{eq:Hparam}
\end{equation}
\subsubsection*{Naive analysis}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=1\textwidth]{./figs/iTL_disc}
\caption{Infinite-volume amplitude, $\T$ (black curve) vs. finite-volume estimator $\overline {\T_L}$ (defined in Ref.~\cite{Briceno:2020rar}) (colored points), for a single channel open. The photon virtualities are $Q^2=Q^2_{if}=2 m^2$, and the binning resolution is $\Delta_{Q^2}=0.01 m^2$ (see Eq.~\eqref{eq:BinSet1}). The smaller plots below each panel represent the percent deviation, quantified by $\sigma_L$ as defined in Eq.~\eqref{eq:sigmaL}. \label{fig:iTL_disc}}
\end{center}
\end{figure}
Using the parametrization above for the K matrix, we consider a set of resonant amplitudes given by $m_R = 2.5m$, $g=2.5$ and $h(s)=0$. We then evaluate the finite-volume dependence of $\mathcal{T}_L$ numerically in Figure~\ref{fig:iTL_disc} for three different values of $L$, $mL = 20,\, 50,\, 100$, and two values of $\epsilon L = 1,\,4$. The black lines represent the infinite-volume Compton amplitude, $\mathcal{T}$, while the colored dots represent an estimator for its finite-volume analog, denoted by $\overline {\T_L}$ (for details see Section~IV.B in Ref.~\cite{Briceno:2020rar}). This estimator, $\overline {\mathcal T_L} $, is computed within a suitable kinematic bin defined by,
\begin{equation}
\big |\overline{Q^2}-{Q}^2\big| < \Delta_{Q^2} \qquad \text{and} \qquad \big|{Q}_{if}^2-{Q}^2\big| < \Delta_{Q^2} \,,
\label{eq:BinSet1}
\end{equation}
where we fixed the target virtuality $\overline{Q^2} = 2m^2$. We also fix the virtuality resolution to $\Delta_{Q^2}=0.01 m^2$. The deviation of $\mathcal{T}_L(E+i\epsilon)$ from $\mathcal{T}(E^\star)$ can be quantified using,
\begin{align}
\sigma_L(E^\star, \boldsymbol P, \epsilon)=100\times\left|
\frac{\T_L(E + i \epsilon, \boldsymbol P)-\T(E^\star)}{\T(E^\star)}
\right| \,,
\label{eq:sigmaL}
\end{align}
plotted in the panels below $\overline {\mathcal T_L}/m^2$. From Figure~\ref{fig:iTL_disc} we note that in general $\overline {\mathcal T_L}$ shows substantial deviations from the infinite-volume amplitude, in particular around the peak of the amplitude, which can be attributed to a nearby resonance. Only for volumes as large as $mL = 100$ and values of $\epsilon$ as small as $\epsilon L = 1$ are these deviations reduced, and even then they remain far from the percent level.
\subsubsection*{Boost averaging}
As discussed in detail in Ref.~\cite{Briceno:2020rar}, the scenario shown in Figure~\ref{fig:iTL_disc} can be improved by exploiting the fact that $\mathcal{T}$ depends only on Lorentz scalars, while $\mathcal{T}_L$ depends on the total momentum of the system. Therefore, binning and averaging over similar kinematic points makes the finite-volume estimator $\overline {\T_L}$ converge faster to the physical amplitude. We also consider an average over several volumes, since this largely cancels the fluctuations associated with a single value of $L$. To perform this average we sample ${\T_L}$ in bins centered at a fixed value $\overline E^\star$, each bin with a width $\Delta_{E^\star}$. We then average all the values of $\T_L$ lying in the 3D bin defined by:
\begin{equation}
\big |\overline{Q^2}-{Q}^2\big| < \Delta_{Q^2}\, , \qquad \qquad \big |{Q}_{if}^2-{Q}^2\big| < \Delta_{Q^2} \qquad \text{and} \qquad |\overline {E^\star} - E^\star| \leq \Delta_{E^\star} \,.
\label{eq:BinSet2}
\end{equation}
In the upper panels of Figure~\ref{fig:iTL_binning}, we consider {\it Model 1}, defined by $m_R = 2.5m$, $g=2.5$ and $h(s)=0$, and compute the average $\overline{\T_L}$ over three different volumes, $mL = 20\,, 25\,, 30$, for three target virtualities, $\overline{Q^2} = 2m^2, 5m^2, 10m^2$. The size of the energy bins is $\Delta_{E^\star} = 0.08m$, while the virtuality bins have a size of $\Delta_{Q^2}=0.05\,m^2$, and our choice of $\epsilon$ is $\epsilon(L)=1/[L (mL)^{1/2}]$. In the bottom panels of Figure~\ref{fig:iTL_binning} we consider {\it Model 2}, defined by $m_R=5.5m$, $g=6$, $h(s) = 0.2/m^2$. In both cases, we note that the proposed averaging provides the best reconstruction of the infinite-volume Compton amplitude.
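A compact version of the averaging procedure, sampling $\T_L$ at lattice momenta $\boldsymbol P = 2\pi n/L$ over several volumes at fixed $E^\star$ and taking a simple mean, is sketched below in Python; all parameter values are illustrative choices of ours, and the binning of the actual analysis is reduced to a single fixed-$E^\star$ bin.

```python
import numpy as np

m, g, m_r, q2, eps_c = 1.0, 2.5, 2.5, 2.0, 1.0   # illustrative parameters (ours)

def rho(s):
    return 1.0 / (8.0 * np.sqrt(s) * np.sqrt(s / 4.0 - m**2))

def m_amp(s):
    k = m**2 * (s / 4.0 - m**2) * g**2 / (m_r**2 - s)
    return 1.0 / (1.0 / k - 1j * rho(s))

a_ff = 1.0 / (1.0 + q2 / m_r**2)

def t_amp(s):
    return a_ff * m_amp(s) * a_ff           # s-channel amplitude with S = 0

def f_func(e, p, l):
    # Eq. (Fcot) with a boost: gamma = E/E*, beta = P/E, omega* = E*/2
    s = e**2 - p**2
    q_s = np.sqrt(s / 4.0 - m**2)
    gam, bet = e / np.sqrt(s), p / e
    cot = lambda z: np.cos(z) / np.sin(z)
    return 1j * rho(s) + 0.5 * rho(s) * (
        cot(0.5 * l * gam * (q_s + 0.5 * np.sqrt(s) * bet))
        + cot(0.5 * l * gam * (q_s - 0.5 * np.sqrt(s) * bet)))

def t_l(e_star, p, l):
    eps = eps_c / (l * np.sqrt(m * l))      # the eps(L) scaling used in the text
    e = np.sqrt(e_star**2 + p**2) + 1j * eps
    s = e**2 - p**2
    h = m_amp(s) * a_ff
    return t_amp(s) - h * h / (1.0 / f_func(e, p, l) + m_amp(s))

# sample T_L at fixed E* over lattice momenta P = 2 pi n / L and several volumes
e_star = 2.6
samples = [t_l(e_star, 2.0 * np.pi * n / l, l)
           for l in (20.0, 25.0, 30.0) for n in range(1, 6)]
t_exact = t_amp(e_star**2 + 0j)
devs = [abs(t - t_exact) for t in samples]
assert abs(np.mean(samples) - t_exact) < np.mean(devs)  # averaging cancels fluctuations
```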
\begin{figure}[t!]
\begin{center}
\includegraphics[width=1\textwidth]{./figs/iTL_binning}
\caption{
Infinite-volume amplitude, $\T$ (black curve) vs. the finite-volume estimator (defined in Ref.~\cite{Briceno:2020rar}) (red points) for a single channel open. $(a)$ Data are generated using the \emph{Model 1} set of parameters (used in Figure~\ref{fig:iTL_disc}): $m_R=2.5m$, $g=2.5$, $h(s) = 0$. $(b)$ Data are generated using the \emph{Model 2} set of parameters: $m_R=5.5m$, $g=6$, $h(s) = 0.2/m^2$. The light grey points in the two panels on the left correspond to the values of $\mathcal T_L$ obtained from points with similar kinematics (see Eq.~\eqref{eq:BinSet2}). These light grey points are then used to compute $\overline{\mathcal T_L}$. Although the formalism used strictly holds only below the three-particle threshold, we take the liberty to extend to energies well above this threshold.
\label{fig:iTL_binning}}
\end{center}
\end{figure}
\subsection{Multiple open channels}
\label{sec:Multiple_channels_numerical}
\begin{figure}[ht]
\centering
\subfigure{\includegraphics[width=0.32\linewidth]{./figs/FV_CC_Compton_2CC}}
\subfigure{\includegraphics[width=0.32\linewidth]{./figs/FV_CC_Compton_3CC}}
\subfigure{\includegraphics[width=0.32\linewidth]{./figs/FV_CC_Compton_4CC}}
\subfigure{\includegraphics[width=0.32\linewidth]{./figs/FV_CC_Compton_2CC5}}
\subfigure{\includegraphics[width=0.32\linewidth]{./figs/FV_CC_Compton_3CC5}}
\subfigure{\includegraphics[width=0.32\linewidth]{./figs/FV_CC_Compton_4CC5}}
\caption{The red points represent the binned Compton amplitude obtained from the points with similar kinematics, shown in gray, for volumes $m_1L=20$, $25$, and $30$. The black solid line is the infinite-volume Compton amplitude. From left to right there are 2, 3, and 4 open channels with masses $m_2=1.3m_1$, $m_3=1.35m_1$, and $m_4=1.4m_1$ and coupling constants $g_1=2.5$, $g_2=1.5$, $g_3=1.35$, and $g_4=0.985$. Here we consider $\Delta_{Q^2}=0.05m_1^2$ and $\Delta_{E^\star}=0.08m_1$ for the binning conditions, as well as the smooth functions $\mathbf{S}(s,Q^2, Q_{if}^2)=0$ and $h_{ab}(s)=0$. The incoming and outgoing virtualities are $Q^2=Q_{if}^2=2m_1^2$ for the top, and $Q^2=Q_{if}^2=5m_1^2$ for the bottom.}
\label{Fig:CC_Bin_Compton}
\end{figure}
In Ref.~\cite{Briceno:2020rar} we outlined a procedure for accessing the Compton amplitude given arbitrary values of $s$. However, evidence that this procedure works was only shown explicitly for kinematics where a single channel composed of two particles may go on-shell. In this section we provide preliminary empirical evidence that these observations persist even for kinematics where multiple two-body channels may go on-shell. In this case, our parametrization for the K-matrix is given by a simple generalization of Eq.~\eqref{eq:Kmatpar}
\begin{align}
\mathcal{K}_{ab}(s)
&= m_a m_bq_1^{\star 2}\left(
\frac{g_a g_b}{m_R^2-s}
+ h_{ab}( s )\right),
\end{align}
where $g_a$ is the coupling constant to the $a$-th channel and $h_{ab}(s)$ is a matrix whose elements are polynomials in $s$. The transition form factors, meanwhile, we fix to
\begin{align}
\mathcal{A}_a(s,Q^2)
&= \frac{1}{1+Q^2/m_R^2}.
\end{align}
For equal incoming and outgoing virtualities, $Q^2 = Q^2_{if}$, binning conditions $\Delta_{Q^2}=0.05m_1^2$ and $\Delta_{E^\star}=0.08m_1$, smooth function $\mathbf{S}_{ab} (s,Q^2,Q_{if}^2)=0$, and matrix $h_{ab}(s)=0$, we find the results shown in Fig.~\ref{Fig:CC_Bin_Compton} considering two, three, and four open channels with corresponding masses $m_2=1.3m_1$, $m_3=1.35m_1$, and $m_4=1.4m_1$ and coupling constants $g_1=2.5$, $g_2=1.5$, $g_3=1.35$, and $g_4=0.985$. These results support the hypothesis that the method outlined in Ref.~\cite{Briceno:2020rar} holds for an arbitrary number of open channels.
\section{Final Remarks}
In this work we have explored the prospects of accessing Compton-like amplitudes in real-time calculations of a 1+1-dimensional theory with periodicity $L$ in the single spatial direction. A finite-volume, non-zero $\epsilon$ estimator for the Compton amplitude can be defined, which coincides with the physical amplitude in the ordered double limit: first $L \to \infty$ followed by $\epsilon \to 0$. Having defined this quantity, the practical issue arises of whether values of $\epsilon$ and $L$ can be identified to give a predicted value that is not dominated by systematic uncertainties.
To explore this question we have taken the formalism of Ref.~\cite{Briceno:2019opb} for extracting finite-volume long-range matrix elements as a diagnostic tool. It is worth stressing that the formalism is used here in a manner completely distinct from the main focus of that work. Instead of using finite-volume information from lattice QCD calculations to predict infinite-volume amplitudes, here we take an ansatz for the infinite-volume amplitudes to predict the finite-volume, non-zero-$\epsilon$ estimator. This allows us to quantify finite-volume effects that might be seen by future real-time simulations, assuming that the latter do not make use of the formalism of Ref.~\cite{Briceno:2019opb}.
For the systems we consider, in particular those with a resonant peak of width comparable with typical low-lying QCD resonances, the value of $\epsilon$ must be taken sufficiently small to not distort the amplitude. But taking values in the regime where the $\epsilon \to 0$ extrapolation is feasible, we find that
finite-volume effects become significant, to the extent that one requires volumes of order $mL = \mathcal{O}(10^2)- \mathcal{O}(10^3)$ to reduce these systematics to the $5-10\, \%$ level. We present a practical solution to overcome this issue which relies on exploiting symmetries of the infinite-volume amplitudes, binning over similar kinematics and averaging over each bin. The proposed average converges faster to the infinite-volume amplitude and requires volumes of order $mL = 20- 30$. Here we provide first evidence that this procedure also works for kinematics in which two or more two-body channels are kinematically open.
\section{Acknowledgments}
RAB and JVG are partly supported by the USDOE grant under which Jefferson Science Associates, LLC, manages and operates Jefferson Lab, No.~DE-AC05-06OR23177. Additionally, RAB acknowledges support from the USDOE Early Career award, contract No.~DE-SC0019229. MCB and JVG are also supported by the Jefferson Lab LDRD project LD2117. MTH is supported by UK Research and Innovation Future Leader Fellowship MR/T019956/1, and also in part by UK STFC grant ST/P000630/1.
\bibliographystyle{JHEP}
\section{Introduction} \label{sec:intro}
\par The existence of magnetohydrodynamic (MHD) waves in the solar atmosphere was predicted long before their actual detection \citep{uch-68, hab-79}. Contrary to the historically theoretical character of solar MHD research, nowadays, a wide variety of both space-based and ground-based instruments capable of unprecedented spatial and temporal resolution is available. Alfv{\'e}n (see e.g. \citealt{jess-09}), fast \citep{mor-12}, and slow MHD waves \citep{fre-16} have all been detected in the various features (e.g. coronal loops, prominences, sunspots) of the solar atmosphere and interpreted as oscillations in cylindrical (e.g. \citealt{asch-99}) or slab-like magnetised plasma configurations \citep{all-19}. This, in turn, motivates further analytical and numerical modelling in order to gain a better understanding of solar phenomena. This is the aim of solar magneto-seismology (SMS), which extends the scope of examinations by means of MHD waves from the corona (coronal seismology) to the lower parts of the magnetically coupled solar atmosphere \citep{somaseis1, somaseis2, obstrends}.
\par Specifically, the study of slab geometry has a long history in SMS. A comprehensive discussion of the topic in a form popular today was given in three seminal articles of \citeauthor{roberts1} (\citeyear{roberts1, roberts2}) and \citeauthor{roberts3} (\citeyear{roberts3}). They revealed the details of linear wave propagation in a non-gravitational, (in)compressible, inviscid and ideal plasma. Their analysis found that the presence of a single interface may, under appropriate conditions, give rise to both the slow and the fast magnetoacoustic surface modes \citep{roberts1}. By introducing another interface, they constructed the model of a magnetic slab, which they examined first in a field-free \citep{roberts2}, and then in a magnetic environment \citep{roberts3}. Some key steps and results in constructing and developing these slab models are summarised in \citet{all-18, asymag} and \citet{all-19}.
\par The model described by \citeauthor{roberts3}, which one may now label classical, was symmetric about the centre of the slab. However, the solar atmosphere is a highly inhomogeneous medium with plenty of structuring, in which one cannot expect perfect symmetry to be present in the environment of MHD waveguides. Therefore, it was an important step forward in theoretical modelling when, as a generalisation of classical models, \citeauthor{asymm} (\citeyear{asymm}) introduced asymmetry into the slab geometry, by examining a magnetic slab embedded in a non-magnetic but asymmetric environment. A further generalisation of the model was reached by dividing up the internal region into an arbitrary number $N$ of homogeneous slabs, as detailed by \citeauthor{shu-18} (\citeyear{shu-18}) and \citeauthor{all-19} (\citeyear{all-19}). In our previous paper \citep{asymag}, we have explored the complexity and applicability of the slab model to a greater extent, by further generalising it in a different manner, through embedding it in a magnetically asymmetric environment. We derived the general dispersion relation for linear perturbations and explored the fundamental effects of asymmetry on the nature of eigenmodes. We also carried out an application to magnetic bright points in the incompressible limit in order to demonstrate how powerful the analytical insight may be.
\par In the current paper, after a brief summary of the general results obtained in \citeauthor{asymag} (\citeyear{asymag}) necessary for the present work, we turn our attention to limiting cases that may be applicable to a number of solar and plasma-astrophysical structures. We suggest a few examples of such features that can be considered for magneto-seismological studies using the asymmetric slab model, however, the applicability of the model has to be evaluated on a case-by-case basis. First, the approximation of the equilibrium as a thin, and then as a wide slab are explored. Afterwards, the effect of the relationship between plasma parameters and the magnetic field is considered by examining the limits of zero (i.e. cold plasma), low, high, and infinite plasma-$\beta$ values. Finally, we explore the interesting phenomenon of avoided crossings shown by quasi-sausage and quasi-kink surface modes in response to varying key equilibrium parameters, such as e.g. density or magnetic field strength ratios between the slab and its environment.
\section{MHD waves in an asymmetric magnetic environment} \label{sec:general}
\par We investigate the magnetic waveguide model comprised of an unbounded, three-di\-men\-sio\-nal, inviscid and ideal plasma embedded in an equilibrium magnetic field $B_0(x)\mathbf{\hat{z}}$, where $\mathbf{\hat{z}}$ is the unit vector in the vertical direction. In order to make the influence of the magnetic asymmetry itself clear, only magneto-acoustic waves are studied, and therefore, the effects of gravity and background bulk motions are not considered. The volume is divided by two surfaces of discontinuity, defining three domains of uniform plasma, with different densities, $\rho$, pressures, $p$, temperatures, $T$, and magnetic field strengths, $B$, across the domains:
\begin{equation}
N(x)=\begin{cases}
N_1 & \qquad x<-x_0,\\
N_0 & \qquad |x|<x_0, \\
N_2 & \qquad x_0< x,\\
\end{cases}
\end{equation}
where $N_i$ denotes any of the physical parameters listed above, namely $N_i = \text{ constant }$ (for $i=0,1,2$). An illustration of this equilibrium configuration can be found in Figure \ref{fig:eq}.
\par Disturbances in the slab and its environment are governed by the ideal MHD equations. By performing a linearisation, and constraining the study to plane-wave solutions propagating in the $z$-direction (i.e. along the slab), we determined that each domain ($i=0,1,2$) is governed by an ordinary differential equation of the form
\begin{equation}
\hat{v}_{x}''-m_i^2 \hat{v}_{x} = 0, \label{theode}
\end{equation}
where $\hat{v}_{x}$ is the amplitude of the $x$-component of the velocity perturbation introduced, and
\begin{equation}
m_i^2=\frac{\left( k^2 v_{Ai}^2 - \omega^2\right)\left( k^2 c_i^2 - \omega^2\right)}{\left(v_{Ai}^2 + c_i^2\right)\left( k^2 c_{Ti}^2 - \omega^2\right)}.
\end{equation}
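Assuming the standard definitions of the Alfv{\'e}n speed $v_{Ai}$, sound speed $c_i$ and tube speed $c_{Ti}^2 = v_{Ai}^2 c_i^2/(v_{Ai}^2 + c_i^2)$ (not restated in this excerpt), the sign of $m_i^2$, which separates spatially evanescent from oscillatory solutions, can be checked numerically; the parameter values below are illustrative choices of ours.

```python
import numpy as np

def m_squared(omega, k, v_a, c_s):
    # magnetoacoustic parameter m_i^2; c_T is the usual tube (cusp) speed
    c_t2 = v_a**2 * c_s**2 / (v_a**2 + c_s**2)
    return ((k**2 * v_a**2 - omega**2) * (k**2 * c_s**2 - omega**2)
            / ((v_a**2 + c_s**2) * (k**2 * c_t2 - omega**2)))

# Surface (evanescent) behaviour requires m_i^2 > 0, e.g. for a phase speed
# below c_T; between c_T and the slower of (v_A, c_s) the solutions oscillate
k, v_a, c_s = 1.0, 2.0, 1.0
c_t = v_a * c_s / np.hypot(v_a, c_s)
assert m_squared(0.5 * k * c_t, k, v_a, c_s) > 0           # below c_T: evanescent
assert m_squared(0.5 * k * (c_t + c_s), k, v_a, c_s) < 0   # between c_T and c_s: oscillatory
```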
\begin{figure}[H]
\centering
\resizebox{0.75\linewidth}{!}{%
\begin{tikzpicture}
\path [fill=white!30!yellow!50!orange, opacity=0.65] (3.5,0) -- (3.5,4) -- (6.5,4) -- (6.5,0) -- (3.5,0);
\shade[left color=white!30!yellow!50!orange,right color=white!30!yellow!40!orange!25, opacity=0.5] (6.5,0) -- (6.5,4) -- (8.25,5) -- (8.25,0.85) -- (6.5,0);
\shade[top color=white!30!yellow!50!orange,bottom color=white!30!yellow!40!orange!25, opacity=0.65] (3.5,0) -- (6.5,0) -- (6.5,-0.3) -- (3.5,-0.3) -- (3.5,0);
\shade[top color=white!30!yellow!40!orange!25,bottom color=white!30!yellow!40!orange, opacity=0.65] (3.5,4) -- (5.25,5) -- (8.25,5) -- (6.5,4) -- (3.5,4);
\shade[left color=white!30!yellow!50!orange,right color=white!30!yellow!40!orange!25, opacity=0.5] (6.5,0) -- (6.5,-0.3) -- (8.25,0.55) -- (8.25,0.85) -- (6.5,0.);
\path [fill=white!30!yellow!50!orange, opacity=0.25] (6.5,0) -- (6.5,4) -- (11,4) -- (11,0) -- (6.5,0);
\shade[top color=white!30!yellow!50!orange,bottom color=white!30!yellow!40!orange!25, opacity=0.25] (6.5,0) -- (11,0) -- (11,-0.3) -- (6.5,-0.3) -- (6.5,0);
\shade[top color=white!30!yellow!40!orange!25,bottom color=white!30!yellow!50!orange, opacity=0.25] (6.5,4) -- (8.25,5) -- (12.75,5) -- (11,4) -- (6.5,4);
\shade[left color=white!30!yellow!50!orange,right color=white!30!yellow!40!orange!25, opacity=0.25] (11,-0.3) -- (11,4) -- (12.75,5) -- (12.75,0.65) -- (11,-0.3);
\path [fill=white!30!yellow!50!orange, opacity=0.9] (-1,0) -- (-1,4) -- (3.5,4) -- (3.5,0) -- (-1,0);
\shade[top color=white!30!yellow!50!orange,bottom color=white!30!yellow!40!orange!25, opacity=0.9] (-1,0) -- (3.5,0) -- (3.5,-0.3) -- (-1,-0.3) -- (-1,0);
\shade[top color=white!30!yellow!40!orange!25,bottom color=white!30!yellow!50!orange, opacity=0.9] (-1,4) -- (0.75,5) -- (5.25,5) -- (3.5,4) -- (-1,4);
\draw [<->] (-1,1) -- (-1,0) -- (11,0);
\draw [->] (-1,0) -- (-0.5,0.3);
\draw [color=darkgray, ultra thick, dashed] (3.5,0) -- (3.5,4);
\draw [color=darkgray, ultra thick, dashed, path fading=east] (3.5,4) -- (5,4.9);
\draw [color=darkgray, ultra thick, dashed, path fading=east] (3.5,3) -- (4.5,3.6);
\draw [color=darkgray, ultra thick, dashed, path fading=east] (3.5,2) -- (4.5,2.6);
\draw [color=darkgray, ultra thick, dashed, path fading=east] (3.5,1) -- (4.5,1.6);
\draw [color=darkgray, ultra thick, dashed, path fading=east] (3.5,0) -- (5,0.9);
\draw [ultra thick, blue, path fading=north] (4.5,0) -- (4.5, 1.7);
\draw [ultra thick, blue, path fading=south] (4.5,-0.3) -- (4.5,0);
\draw [ultra thick, blue, path fading=south] (4.5,2.2) -- (4.5,3.9);
\draw [ultra thick, blue, -stealth] (4.5,3.9) -- (4.5,4);
\draw [ultra thick, white!30!blue, path fading=north] (5,0.2) -- (5, 1.7);
\draw [ultra thick, white!30!blue, path fading=south] (5,-0.1) -- (5,0.3);
\draw [ultra thick, white!30!blue, path fading=south] (5,2.2) -- (5,4.2);
\draw [ultra thick, white!30!blue, -stealth] (5,4.1) -- (5,4.3);
\draw [ultra thick, white!60!blue, path fading=north] (5.5,0.4) -- (5.5, 1.7);
\draw [ultra thick, white!60!blue, path fading=south] (5.5,0.1) -- (5.5,0.6);
\draw [ultra thick, white!60!blue, path fading=south] (5.5,2.2) -- (5.5,4.5);
\draw [ultra thick, white!60!blue, -stealth] (5.5,4.3) -- (5.5,4.6);
\draw [ultra thick, white!75!blue, path fading=north] (6,0.6) -- (6, 1.7);
\draw [ultra thick, white!75!blue, path fading=south] (6,0.3) -- (6,0.9);
\draw [ultra thick, white!75!blue, path fading=south] (6,2.2) -- (6,4.8);
\draw [ultra thick, white!75!blue, -stealth] (6,4.5) -- (6,4.9);
\draw [ultra thick, blue, path fading=north] (5.75,0) -- (5.75, 1.7);
\draw [ultra thick, blue, path fading=south] (5.75,-0.3) -- (5.75, 0);
\draw [ultra thick, blue, path fading=south] (5.75,2.2) -- (5.75,3.9);
\draw [ultra thick, blue, -stealth] (5.75,3.9) -- (5.75,4);
\draw [ultra thick, white!30!blue, path fading=north] (6.25,0.2) -- (6.25, 1.7);
\draw [ultra thick, white!30!blue, path fading=south] (6.25,-0.1) -- (6.25, 0.3);
\draw [ultra thick, white!30!blue, path fading=south] (6.25,2.2) -- (6.25,4.2);
\draw [ultra thick, white!30!blue, -stealth] (6.25,4.1) -- (6.25,4.3);
\draw [ultra thick, white!60!blue] (6.75,0.4) -- (6.75, 2.3);
\draw [ultra thick, white!60!blue, path fading=south] (6.75,0.1) -- (6.75, 0.6);
\draw [ultra thick, white!60!blue] (6.75,2.3) -- (6.75,4.5);
\draw [ultra thick, white!60!blue, -stealth] (6.75,4.3) -- (6.75,4.6);
\draw [ultra thick, white!75!blue] (7.25,0.6) -- (7.25, 2.6);
\draw [ultra thick, white!75!blue, path fading=south] (7.25,0.3) -- (7.25, 0.9);
\draw [ultra thick, white!75!blue] (7.25,2.6) -- (7.25,4.8);
\draw [ultra thick, white!75!blue, -stealth] (7.25,4.5) -- (7.25,4.9);
\draw [ultra thick, blue] (3.,0) -- (3., 2.2);
\draw [ultra thick, blue] (3.,2.2) -- (3.,3.9);
\draw [ultra thick, blue, path fading=south] (3.,-0.3) -- (3.,0);
\draw [ultra thick, blue, -stealth] (3.,3.9) -- (3.,4);
\draw [ultra thick, white!30!blue] (3.5,0.2) -- (3.5, 2.5);
\draw [ultra thick, white!30!blue] (3.5,2.5) -- (3.5,4.2);
\draw [ultra thick, white!30!blue, path fading=south] (3.5,-0.1) -- (3.5,0.3);
\draw [ultra thick, white!30!blue, -stealth] (3.5,4.1) -- (3.5,4.3);
\draw [ultra thick, white!60!blue, path fading=north] (4.,0.4) -- (4., 1.7);
\draw [ultra thick, white!60!blue, path fading=south] (4.,2.2) -- (4.,4.5);
\draw [ultra thick, white!60!blue, path fading=south] (4.,0.1) -- (4.,0.6);
\draw [ultra thick, white!60!blue, -stealth] (4.,4.3) -- (4.,4.6);
\draw [ultra thick, white!75!blue, path fading=north] (4.5,0.6) -- (4.5, 1.7);
\draw [ultra thick, white!75!blue, path fading=south] (4.5,2.2) -- (4.5,4.8);
\draw [ultra thick, white!75!blue, path fading=south] (4.5,0.3) -- (4.5,0.9);
\draw [ultra thick, white!75!blue, -stealth] (4.5,4.5) -- (4.5,4.9);
\draw [ultra thick, blue, path fading=north] (1.2,0) -- (1.2, 2.2);
\draw [ultra thick, blue, path fading=south] (1.2,2.8) -- (1.2,3.9);
\draw [ultra thick, blue, path fading=south] (1.2,-0.3) -- (1.2,0);
\draw [ultra thick, blue, -stealth] (1.2,3.9) -- (1.2,4);
\draw [ultra thick, white!30!blue, path fading=north] (1.7,0.2) -- (1.7, 2.2);
\draw [ultra thick, white!30!blue, path fading=south] (1.7,2.8) -- (1.7,4.2);
\draw [ultra thick, white!30!blue, path fading=south] (1.7,-0.1) -- (1.7,0.3);
\draw [ultra thick, white!30!blue, -stealth] (1.7,4.1) -- (1.7,4.3);
\draw [ultra thick, white!60!blue, path fading=north] (2.2,0.4) -- (2.2, 2.2);
\draw [ultra thick, white!60!blue, path fading=south] (2.2,2.8) -- (2.2,4.5);
\draw [ultra thick, white!60!blue, path fading=south] (2.2,0.1) -- (2.2,0.6);
\draw [ultra thick, white!60!blue, -stealth] (2.2,4.3) -- (2.2,4.6);
\draw [ultra thick, white!75!blue, path fading=north] (2.7,0.6) -- (2.7, 2.2);
\draw [ultra thick, white!75!blue, path fading=south] (2.7,2.8) -- (2.7,4.8);
\draw [ultra thick, white!75!blue, path fading=south] (2.7,0.3) -- (2.7,0.9);
\draw [ultra thick, white!75!blue, -stealth] (2.7,4.5) -- (2.7,4.9);
\draw [ultra thick, blue] (-0.6,0) -- (-0.6, 2.2);
\draw [ultra thick, blue] (-0.6,2.2) -- (-0.6,3.9);
\draw [ultra thick, blue, path fading=south] (-0.6,-0.3) -- (-0.6,0);
\draw [ultra thick, blue, -stealth] (-0.6,3.9) -- (-0.6,4);
\draw [ultra thick, white!30!blue] (-0.1,0.2) -- (-0.1, 2.5);
\draw [ultra thick, white!30!blue] (-0.1,2.5) -- (-0.1,4.2);
\draw [ultra thick, white!30!blue, path fading=south] (-0.1,-0.1) -- (-0.1,0.3);
\draw [ultra thick, white!30!blue, -stealth] (-0.1,4.1) -- (-0.1,4.3);
\draw [ultra thick, white!60!blue, path fading=north] (0.4,0.4) -- (0.4, 2.2);
\draw [ultra thick, white!60!blue, path fading=south] (0.4,2.8) -- (0.4,4.5);
\draw [ultra thick, white!60!blue, path fading=south] (0.4,0.1) -- (0.4,0.6);
\draw [ultra thick, white!60!blue, -stealth] (0.4,4.3) -- (0.4,4.6);
\draw [ultra thick, white!75!blue, path fading=north] (0.9,0.6) -- (0.9, 2.2);
\draw [ultra thick, white!75!blue, path fading=south] (0.9,2.8) -- (0.9,4.8);
\draw [ultra thick, white!75!blue, path fading=south] (0.9,0.3) -- (0.9,0.9);
\draw [ultra thick, white!75!blue, -stealth] (0.9,4.5) -- (0.9,4.9);
\draw [ultra thick, blue, path fading=south] (7.25,-0.3) -- (7.25,0.2);
\draw [ultra thick, blue] (7.25,0.2) -- (7.25,3.9);
\draw [ultra thick, blue, -stealth] (7.25,3.9) -- (7.25,4);
\draw [white!30!blue, ultra thick, path fading=north] (7.75,0.2) -- (7.75, 2.2);
\draw [white!30!blue, ultra thick, path fading=south] (7.75,2.8) -- (7.75,4.2);
\draw [white!30!blue, ultra thick, path fading=south] (7.75,-0.1) -- (7.75,0.3);
\draw [white!30!blue, ultra thick, -stealth] (7.75,4.1) -- (7.75,4.3);
\draw [white!60!blue, ultra thick, path fading=north] (8.25,0.4) -- (8.25, 2.2);
\draw [white!60!blue, ultra thick, path fading=south] (8.25,2.8) -- (8.25,4.5);
\draw [white!60!blue, ultra thick, path fading=south] (8.25,0.1) -- (8.25,0.6);
\draw [white!60!blue, ultra thick, -stealth] (8.25,4.3) -- (8.25,4.6);
\draw [white!75!blue, ultra thick, path fading=north] (8.75,0.6) -- (8.75, 2.2);
\draw [white!75!blue, ultra thick, path fading=south] (8.75,2.8) -- (8.75,4.8);
\draw [white!75!blue, ultra thick, path fading=south] (8.75,0.3) -- (8.75,0.9);
\draw [white!75!blue, ultra thick, -stealth] (8.75,4.5) -- (8.75,4.9);
\draw [ultra thick, blue, path fading=north] (8.5,0) -- (8.5, 2.2);
\draw [ultra thick, blue, path fading=south] (8.5,2.8) -- (8.5,3.9);
\draw [ultra thick, blue, path fading=south] (8.5,-0.3) -- (8.5,0);
\draw [ultra thick, blue, -stealth] (8.5,3.9) -- (8.5,4);
\draw [white!30!blue, ultra thick, path fading=north] (9,0.2) -- (9, 2.2);
\draw [white!30!blue, ultra thick, path fading=south] (9,2.8) -- (9,4.2);
\draw [white!30!blue, ultra thick, path fading=south] (9,-0.1) -- (9,0.3);
\draw [white!30!blue, ultra thick, -stealth] (9,4.1) -- (9,4.3);
\draw [white!60!blue, ultra thick, path fading=north] (9.5,0.4) -- (9.5, 2.2);
\draw [white!60!blue, ultra thick, path fading=south] (9.5,2.8) -- (9.5,4.5);
\draw [white!60!blue, ultra thick, path fading=south] (9.5,0.1) -- (9.5,0.6);
\draw [white!60!blue, ultra thick, -stealth] (9.5,4.3) -- (9.5,4.6);
\draw [white!75!blue, ultra thick, path fading=north] (10,0.6) -- (10, 2.2);
\draw [white!75!blue, ultra thick, path fading=south] (10,2.8) -- (10,4.8);
\draw [white!75!blue, ultra thick, path fading=south] (10,0.3) -- (10,0.9);
\draw [white!75!blue, ultra thick, -stealth] (10,4.5) -- (10,4.9);
\draw [ultra thick, blue, path fading=north] (9.75,0) -- (9.75, 2.2);
\draw [ultra thick, blue, path fading=south] (9.75,2.8) -- (9.75,3.9);
\draw [ultra thick, blue, path fading=south] (9.75,-0.3) -- (9.75,0);
\draw [ultra thick, blue, -stealth] (9.75,3.9) -- (9.75,4);
\draw [white!30!blue, ultra thick, path fading=north] (10.25,0.2) -- (10.25, 2.2);
\draw [white!30!blue, ultra thick, path fading=south] (10.25,2.8) -- (10.25,4.2);
\draw [white!30!blue, ultra thick, path fading=south] (10.25,-0.1) -- (10.25,0.3);
\draw [white!30!blue, ultra thick, -stealth] (10.25,4.1) -- (10.25,4.3);
\draw [white!60!blue, ultra thick, path fading=south] (10.75,0.1) -- (10.75,0.6);
\draw [white!60!blue, ultra thick] (10.75,0.6) -- (10.75,4.5);
\draw [white!60!blue, ultra thick, -stealth] (10.75,4.3) -- (10.75,4.6);
\draw [white!75!blue, ultra thick, path fading=south] (11.25,0.3) -- (11.25,0.8);
\draw [white!75!blue, ultra thick] (11.25,0.8) -- (11.25,4.8);
\draw [white!75!blue, ultra thick, -stealth] (11.25,4.5) -- (11.25,4.9);
\draw [ultra thick, blue] (11.,0.2) -- (11.,3.9);
\draw [ultra thick, blue, path fading=south] (11.,-0.3) -- (11.,0.2);
\draw [ultra thick, blue, -stealth] (11.,3.9) -- (11.,4);
\draw [white!40!blue, ultra thick] (11.5,0.4) -- (11.5,4.2);
\draw [white!40!blue, ultra thick, path fading=south] (11.5,-0.1) -- (11.5,0.4);
\draw [white!40!blue, ultra thick, -stealth] (11.5,4.1) -- (11.5,4.3);
\draw [white!70!blue, ultra thick] (12.,0.4) -- (12.,4.5);
\draw [white!70!blue, ultra thick, path fading=south] (12.,0.1) -- (12.,0.6);
\draw [white!70!blue, ultra thick, -stealth] (12.,4.3) -- (12.,4.6);
\draw [white!80!blue, ultra thick] (12.5,0.9) -- (12.5,4.8);
\draw [white!80!blue, ultra thick, path fading=south] (12.5,0.3) -- (12.5,0.9);
\draw [white!80!blue, ultra thick, -stealth] (12.5,4.5) -- (12.5,4.9);
\draw [color=darkgray, ultra thick, dashed] (6.5,0) --(6.5,4);
\draw [color=darkgray, ultra thick, dashed, path fading=east] (6.5,4) -- (8,4.9);
\draw [color=darkgray, ultra thick, dashed, path fading=east] (6.5,2) -- (7.5,2.6);
\draw [color=darkgray, ultra thick, dashed, path fading=east] (6.5,1) -- (7.5,1.6);
\draw [color=darkgray, ultra thick, dashed, path fading=east] (6.5,3) -- (7.5,3.6);
\draw [color=darkgray, ultra thick, dashed, path fading=east] (6.5,0) -- (8,0.9);
\small
\node [below] at (3.5,0) {$-x_0$};
\node [below] at (6.5,0) {$x_0$};
\node [below] at (10.75,0) {$x$};
\node [left] at (-0.55,1) {$z$};
\node [right] at (-0.5,0.3) {$y$};
\large
\node [right] at (0.15,2.5) {$\rho_1$, $p_1$, $T_1$, $B_1$};
\node [right] at (3.75,2) {$\rho_0$, $p_0$, $T_0$, $B_0$};
\node [right] at (7.6,2.5) {$\rho_2$, $p_2$, $T_2$, $B_2$};
\end{tikzpicture}
}
\caption{The equilibrium: a magnetic slab, $|x|\leq{}x_0$ (medium orange colour), sandwiched between two, semi-infinite uniform magnetised plasmas, $x<-x_0$ and $x>x_0$ (light and dark orange). The blue arrows illustrate the magnetic fields, $B_0\mathbf{\hat{z}}$, $B_1\mathbf{\hat{z}}$ and $B_2\mathbf{\hat{z}}$; and the dashed black lines outline the boundaries of the slab.}
\label{fig:eq}
\end{figure}
Here, $\omega$ is the angular frequency of the waves, and $k$ is the $z$-component of the wavenumber vector. The characteristic speeds in the plasma are: the Alfv{\'e}n speed, $v_{Ai}=B_i/\sqrt{\rho_i \mu}$, where $\mu$ is the permeability of free space; and the sound speed, $c_i=\sqrt{\gamma p_i/\rho_i}$, where $\gamma$ is the adiabatic index. Both $\mu$ and $\gamma$ are uniform across all the domains, as the plasma composition is assumed to be the same in the entire configuration. The third characteristic speed,
\begin{equation}
c_{Ti}^2 = \frac{v_{Ai}^2 c_i^2}{v_{Ai}^2 + c_i^2},
\end{equation}
is the so-called cusp or tube speed of a given domain, which is a sub-sonic and sub-Alfv{\'e}nic speed.
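As a quick numerical illustration (the speed values below are hypothetical, chosen only for demonstration), the cusp speed can be evaluated directly from its definition and checked against this ordering:

```python
import math

def cusp_speed(vA, c):
    """Cusp (tube) speed c_T = vA*c / sqrt(vA^2 + c^2)."""
    return vA * c / math.sqrt(vA**2 + c**2)

# Hypothetical internal Alfven and sound speeds (arbitrary units)
vA0, c0 = 12.0, 7.0
cT0 = cusp_speed(vA0, c0)

# c_T is both sub-sonic and sub-Alfvenic
assert cT0 < min(vA0, c0)
```

The same helper applies to each domain $i=0,1,2$, since the definition of $c_{Ti}$ is identical in every region.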
\par For physically real solutions that are evanescent outside the slab, following \citeauthor{asymag} (\citeyear{asymag}), we found the dispersion relation to be
\begin{align}
& 2 \frac{\rho_0}{\rho_1} m_1 \frac{\rho_0}{\rho_2} m_2 \left( k^2 v_{A0}^2 - \omega^2\right)^2 + 2 m_0^2 \left( k^2 v_{A1}^2 - \omega^2\right) \left( k^2 v_{A2}^2 - \omega^2\right) \nonumber\\
& \quad + \rho_0 m_0 \left( k^2 v_{A0}^2 - \omega^2\right) \left[ \frac{m_2}{\rho_2} \left( k^2 v_{A1}^2 - \omega^2\right) + \frac{m_1}{\rho_1} \left( k^2 v_{A2}^2 - \omega^2\right) \right] \left[ \tau_0 + \frac{1}{\tau_0} \right] =0, \label{fullsurface}
\end{align}
where $\tau_0 = \tanh{(m_0 x_0)}$. It is apparent that the full dispersion relation does not decouple into separate solutions for sausage or kink modes, as it would in the symmetric case. Accordingly, the eigenmodes show mixed properties, which is why we refer to them as quasi-sausage and quasi-kink modes (see also \citeauthor{asymm} \citeyear{asymm}). If the asymmetry is weak, i.e. the pressures, densities and magnetic field strengths do not differ too strongly on the two sides of the slab, the dispersion relation decouples into two equations:
\begin{align}
(k^2 v_{A0}^2-\omega^2) \left[ \frac{ \rho_0}{\rho_1} \frac{m_1}{ (k^2 v_{A1}^2-\omega^2)} + \frac{ \rho_0}{\rho_2} \frac{m_2}{ (k^2 v_{A2}^2-\omega^2)} \right] + 2 m_0 \binom{\tanh}{\coth} \{m_0 x_0\} = 0, \label{surface}
\end{align}
where the substitution of $\tanh{(m_0 x_0)}$ describes quasi-sausage modes, and $\coth{(m_0 x_0)}$ gives quasi-kink mode solutions. In the following sections, these dispersion relations will be further examined in limits that are often used in solar or plasma-astrophysics.
\section{Thin-slab approximation} \label{sec:thin}
In the thin-slab approximation, the wavelength, $\lambda$, of the waves is much greater than the width of the slab: $x_0/\lambda \approx kx_0 \ll 1$. This limit may have both photospheric and coronal applications, if we describe them in Cartesian rather than cylindrical geometry. Such a description may be applicable to various solar phenomena, such as prominences (see \citeauthor{arr-oli-bal-12} \citeyear{arr-oli-bal-12}), sunspot light bridges and light walls \citep{yua-nak-14, yan-zha-16, yan-zha-17}, magnetic bright points \citep{utz-09, liu-18}, or any thin and magnetised plasma-astrophysical object that is sandwiched between uniform, homogeneous but asymmetric magnetised semi-infinite plasma environments as a first approximation.
\subsection{Surface modes} \label{sec:ThinSurface}
\par We have only considered perturbations that are evanescent outside the slab, but it should be noted that surface modes are evanescent inside the slab as well, mostly perturbing regions close to the slab boundaries.
\subsubsection{Quasi-sausage surface modes} \label{sec:QS-ThinSurface}
First, let us examine quasi-sausage surface modes, which are described by the component of Equation \eqref{surface} containing the odd $\tanh{(m_0 x_0)}$ function. Supposing that, in this limit, $m_0 x_0 \ll 1$, it follows that $\tanh{m_0 x_0} \approx m_0 x_0$. Substituting this into equation \eqref{surface}, the dispersion relation for quasi-sausage surface modes becomes
\begin{align}
(k^2 v_{A0}^2-\omega^2) \left[ \frac{ \rho_0}{\rho_1} \frac{m_1} { (k^2 v_{A1}^2-\omega^2) } + \frac{ \rho_0}{\rho_2} \frac{m_2}{ (k^2 v_{A2}^2-\omega^2)} \right] + 2 m_0^2 x_0 = 0. \label{surfacethin}
\end{align}
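The small-argument expansion $\tanh{(m_0 x_0)} \approx m_0 x_0$ used to obtain this relation is accurate to third order; a minimal sketch (plain Python, no solar parameters assumed) confirms the size of the neglected term:

```python
import math

# tanh(x) = x - x^3/3 + O(x^5), so the error of the thin-slab
# replacement tanh(x) ~ x is bounded by x^3/3 for small x > 0
for m0x0 in (0.1, 0.01, 0.001):
    err = abs(math.tanh(m0x0) - m0x0)
    assert err < m0x0**3 / 3
```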
The frequency $\omega^2 = k^2 v_{A0}^2$ would be a trivial solution not considered here (for reasons see \citeauthor{asymag} \citeyear{asymag}). One group of solutions might occur when the phase speed of the waves approaches the cusp speed: $\omega^2 \rightarrow k^2 c_{T0}^2$. Substitution of this approximation into (\ref{surfacethin}), after some algebra, yields
\begin{align}
\omega^2 = k^2 c_{T0}^2 \left[ 1 + \frac{2 (c_0^2 - c_{T0}^2) (v_{A1}^2-c_{T0}^2)^{1/2} (v_{A2}^2-c_{T0}^2)^{1/2}k x_0}{\rho_0 v_{A0}^2 c_0^2 R_v} \right],
\label{eq:ss1}
\end{align}
where
\begin{align}
R_v& = \frac{1}{\rho_2} \frac{(v_{A1}^2-c_{T0}^2)^{1/2} (c_{2}^2-c_{T0}^2)^{1/2}}{(v_{A2}^2+c_{2}^2)^{1/2} (c_{T2}^2-c_{T0}^2)^{1/2}} + \frac{1}{\rho_1} \frac{(v_{A2}^2-c_{T0}^2)^{1/2} (c_{1}^2-c_{T0}^2)^{1/2}}{(v_{A1}^2+c_{1}^2)^{1/2} (c_{T1}^2-c_{T0}^2)^{1/2}} .
\end{align}
\par This wave solution is a slow quasi-sausage surface mode, which approaches $\omega^2 \rightarrow k^2 c_{T0}^2$ from above as $k x_0 \rightarrow 0$ (i.e. as the slab becomes thinner).
Without any further information on the values of the characteristic speeds on either side of the slab, the condition for its existence is the following:
\begin{align}
&\sqrt{ c_{T1}^2- c_{T0}^2} > 0 \Rightarrow c_{T0}^2 < c_{T1}^2 \qquad \text{and } \quad \sqrt{ c_{T2}^2- c_{T0}^2} > 0 \Rightarrow c_{T0}^2 < c_{T2}^2. \label{ctconditions}
\end{align}
\par The effect on further possible characteristic speed orderings on this group of solutions is examined in Section \ref{sec:appendix-tube} of the Appendix.
\par A different type of quasi-sausage mode solutions approaches one of the external sound speeds in the thin-slab limit. For example, if we take the approximation $\omega^2 \rightarrow k^2 c_2^2$, the solutions are given by
\begin{align}
\omega^2 &= k^2 c_2^2 - \left[ \frac{\rho_2}{\rho_0}\frac{ 2 (c_{T2}^2 - c_2^2)^{1/2} (v_{A2}^4 - c_2^4)^{1/2} (c_0^2 - c_2^2) k^2 x_0}{ (c_{T0}^2 - c_2^2) (c_0^2 + v_{A0}^2)} + \frac{\rho_2}{\rho_1}\frac{ (c_{T2}^2 - c_2^2)^{1/2} (v_{A2}^4 - c_2^4)^{1/2} (c_{1}^2 - c_2^2)^{1/2} }{ (c_{T1}^2 - c_2^2)^{1/2} (v_{A1}^2 - c_2^2)^{1/2} (v_{A1}^2 + c_1^2)^{1/2} } \right]^2.
\label{eq:ss2}
\end{align}
This surface wave solution exists when $c_2 < c_{T1}$ or $\min{(c_1, v_{A1})} < c_2 < \max{(c_1, v_{A1})}$, since outside these bounds, the waves would become leaky. Naturally, the same type of solution can be found if the indices $j=1,2$ are swapped.
\par Let us now consider the case of an isothermal external environment, i.e. when the external sound speeds are the same, $c_1^2=c_2^2=c_e^2$. The solutions are then derived by substituting $\omega^2 \approx k^2 c_e^2$ into Equation \eqref{surfacethin}, yielding
\begin{align}
\omega^2 &= k^2 \left[ c_e^2 + \frac{4 (c_0^2 - c_e^2)^2 (k x_0) ^2 }{\rho_0^2 (v_{A0}^2 + c_0^2)^2 (c_{T0}^2 - c_e^2)^2 R_v^2} \right], \nonumber \\
R_v^2 &= \left[ \frac{1}{\rho_2} \frac{1}{(v_{A1}^2-c_e^2)^{1/2} (c_e^2+ v_{A2}^2)^{1/2} (c_{T2}^2-c_e^2)^{1/2}} + \frac{1}{\rho_1} \frac{1}{(v_{A2}^2-c_e^2)^{1/2} (c_e^2+ v_{A1}^2)^{1/2} (c_{T1}^2-c_e^2)^{1/2}} \right]^2, \label{eq:ss3}
\end{align}
for $v_{A1}, v_{A2} < c_{e}$ and $c_{e} < c_{T1}, c_{T2}$. Supposing that $v_{A1}^2 = v_{A2}^2 = v_{Ae}^2$, then $\rho_1 = \rho_2 = \rho_e$ has to be true as well, which leads back to Equation (16a) of \citeauthor{roberts3} (\citeyear{roberts3}). If the external plasma environment is non-magnetic, this case further reduces to Equation (32) of \citeauthor{asymm} (\citeyear{asymm}).
\subsubsection{Quasi-kink surface modes} \label{sec:QK-ThinSurface}
\par Let us now consider quasi-kink mode solutions, which are governed by the $\coth{(m_0 x_0)}$ part of the decoupled dispersion relation (Equation \ref{surface}). In the limit of $m_0 x_0 \ll 1$, $\coth{m_0 x_0} \approx (m_0 x_0)^{-1}$. Substituting this into (\ref{surface}), the dispersion relation for quasi-kink modes becomes
\begin{equation}
\rho_0 x_0 (k^2 v_{A0}^2-\omega^2) \left[ \frac{ m_1}{\rho_1 (k^2 v_{A1}^2-\omega^2)} + \frac{ m_2}{\rho_2 (k^2 v_{A2}^2-\omega^2) }\right] + 2 = 0. \label{kinkthin}
\end{equation}
One kind of these modes might approach one of the external Alfv{\'e}n speeds in the thin-slab approximation. We can obtain this solution by substituting the limit $\omega^2 \rightarrow k^2 v_{A1}^2$ into Equation \eqref{kinkthin}:
\begin{align}
\omega^2 = k^2 \left[ v_{A1}^2 - \frac{\rho_0^2 \rho_2^2}{\rho_1^2} \frac{(c_1^2 - v_{A1}^2) (v_{A0}^2 - v_{A1}^2)^2 (v_{A2}^2 - v_{A1}^2) (c_{T2}^2 - v_{A1}^2) (k^2 x_0)^2 }{(c_{T1}^2 - v_{A1}^2) R_v^2} \right], \label{eq:sk1}
\end{align}
where, now,
\begin{align}
R_v &= 2 \rho_2 k (v_{A2}^2 - v_{A1}^2)^{1/2} (c_{T2}^2 - v_{A1}^2 )^{1/2} (v_{A1}^2 + c_1^2)^{1/2} + \rho_0 (v_{A0}^2 - v_{A1}^2) (c_2^2 - v_{A1}^2)^{1/2} k^2 x_0 . \nonumber
\end{align}
This mode exists as a trapped perturbation when $v_{A1}^2 < c_{T2}^2$ or $ \min{(v_{A2}^2, c_2^2)} < v_{A1}^2 < \max{(v_{A2}^2, c_2^2)}$. When $v_{A1}^2 = v_{A2}^2 = v_{Ae}^2$, the solution further simplifies to
\begin{align}
\omega^2 = k^2 v_{Ae}^2 \left[ 1- \left( 1- \frac{v_{A0}^2}{v_{Ae}^2}\right)^2 \left(\frac{\rho_0 (k x_0)}{2} \right)^2 \left( \frac{1}{\rho_2} \sqrt{1- \frac{c_2^2}{v_{Ae}^2}} + \frac{1}{\rho_1} \sqrt{1- \frac{c_1^2}{v_{Ae}^2}}\right)^2 \right].
\label{eq:sk2}
\end{align}
In the case of an isothermal external environment, i.e. $c_1^2=c_2^2=c_e^2$, and so $\rho_1=\rho_2=\rho_e$, the obtained solution leads back to the one for the symmetric slab (Equation (18a) of \citeauthor{roberts3} \citeyear{roberts3}).
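A brief numerical sketch of Equation \eqref{eq:sk2} (all equilibrium values below are hypothetical, chosen only so that $c_1, c_2 < v_{Ae}$ and $k x_0 \ll 1$ hold) shows the quasi-kink surface mode lying just below the external Alfv{\'e}n speed:

```python
import math

# Hypothetical equilibrium parameters in dimensionless units
rho0, rho1, rho2 = 1.0, 0.8, 0.6
vA0, vAe = 1.0, 2.0        # internal / common external Alfven speeds
c1, c2 = 1.2, 1.5          # external sound speeds, both < vAe
k, x0 = 1.0, 0.05          # thin slab: k*x0 << 1

# Correction term: omega^2 = k^2 vAe^2 [1 - corr]
corr = (1 - vA0**2 / vAe**2)**2 * (rho0 * k * x0 / 2)**2 \
       * ((1/rho2) * math.sqrt(1 - c2**2 / vAe**2)
          + (1/rho1) * math.sqrt(1 - c1**2 / vAe**2))**2
omega2 = k**2 * vAe**2 * (1 - corr)

# The mode approaches k*vAe from below as the slab becomes thinner
assert 0 < omega2 < k**2 * vAe**2
```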
\par An asymmetric equivalent for a different type of kink-mode solutions can be found as well, namely, for those that approach one of the external cusp speeds. With the substitution $\omega^2 \rightarrow k^2 c_{T1}^2$, Equation \eqref{kinkthin} becomes
\begin{align}
\omega^2 &= k^2 \left[ c_{T1}^2 - \frac{\rho_0^2 \rho_2^2}{\rho_1^2} \frac{ R_{v1} (k^2 x_0)^2}{(v_{A1}^2 - c_{T1}^2) (c_1^2 + v_{A1}^2) R_{v2}^2} \right], \label{eq:sk3}
\end{align}
with
\begin{align}
R_{v1}&= (c_1^2 - c_{T1}^2)(v_{A0}^2 - c_{T1}^2)^2 (v_{A2}^2 - c_{T1}^2) (c_{T2}^2 - c_{T1}^2) (v_{A2}^2 + c_2^2), \nonumber \\
R_{v2}&= 2 \rho_2 k (v_{A2}^2 - c_{T1}^2 )^{1/2} (c_{T2}^2 - c_{T1}^2)^{1/2} (v_{A2}^2 + c_{2}^2)^{1/2} + \rho_0 k^2 x_0 (v_{A0}^2 - c_{T1}^2) (c_2^2 - c_{T1}^2)^{1/2}. \nonumber
\end{align}
This solution is a trapped oscillation when $c_{T1}^2 < c_{T2}^2$ or $ \min{(v_{A2}^2, c_2^2)} < c_{T1}^2 < \max{(v_{A2}^2, c_2^2)}$. When the two external cusp speeds are the same, this case reduces to Equation (18b) of \citeauthor{roberts3} (\citeyear{roberts3}).
An asymmetric generalisation of Equation (19) of \citeauthor{roberts3} (\citeyear{roberts3}), i.e. the approximation for the case when $v_{Ae}/v_{A0}$ is of the order of $kx_0$, can also be obtained:
\begin{align}
\omega^2 = k^2 v_{A1}^2 \left[1 + \frac{\rho_0 \rho_2}{\rho_1} \frac{v_{A0}^2}{v_{A1}^2} \frac{v_{A2}^2 (kx_0)}{2 \rho_2 v_{A2}^2 + \rho_0 v_{A0}^2 x_0} \right]
\end{align}
if $v_{A1} \ll v_{A2}$ is also satisfied. If, conversely, $v_{A2} \ll v_{A1}$, the solution becomes
\begin{align}
\omega^2 = k^2 v_{A1}^2 \left[1 + \frac{\rho_0 \rho_2}{\rho_1} \frac{v_{A0}^2}{v_{A1}^2} \frac{v_{A1}^2 (kx_0) }{\rho_0 v_{A0}^2 x_0 - 2 \rho_2 v_{A1}^2} \right].
\end{align}
When $v_{A1}^2 = v_{A2}^2 = v_{Ae}^2$ holds, this approximation may be given as
\begin{align}
\omega^2 = k^2 v_{Ae}^2 \left[ 1 + \frac{1}{R} \frac{v_{A0}^2}{v_{Ae}^2 } (kx_0) \right],
\end{align}
where
\begin{align}
R= \left[ \frac{\rho_0}{2} \left( \frac{1}{\rho_1} + \frac{1}{\rho_2} \right) \right]^{-1}
\end{align}
is the measure of the density asymmetry used in \citeauthor{asymag} (\citeyear{asymag}).
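As a short sketch (with arbitrary density values) confirms, this asymmetry measure reduces to the familiar density ratio $\rho_e/\rho_0$ when the two external regions are identical:

```python
def R(rho0, rho1, rho2):
    """Density-asymmetry measure R = [ (rho0/2) (1/rho1 + 1/rho2) ]^(-1)."""
    return 1.0 / ((rho0 / 2.0) * (1.0 / rho1 + 1.0 / rho2))

# Symmetric limit rho1 = rho2 = rho_e recovers R = rho_e / rho0
rho0, rho_e = 1.0, 0.4
assert abs(R(rho0, rho_e, rho_e) - rho_e / rho0) < 1e-12

# Asymmetric example: a harmonic-mean-type combination of the two ratios
assert abs(R(1.0, 0.5, 2.0) - 0.8) < 1e-12
```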
\par Equations \eqref{eq:ss1}-\eqref{eq:ss3} and \eqref{eq:sk1}-\eqref{eq:sk3} show us that the overall structure of the solutions in the thin-slab limit of an asymmetric magnetic slab remains similar to the symmetric case. This confirms how powerful the initial model of a symmetric slab is, which may be seen as a practical tool when interpreting MHD wave observations. While analytical approximations of the solutions can still be given, wave dispersion in the asymmetric configuration becomes more complex. The differences in environmental equilibrium parameters can introduce cut-off frequencies, beyond which the oscillations become leaky. In general, Equations \eqref{eq:ss1}-\eqref{eq:sk3} also reveal that surface waves in the magnetic slab are quite sensitive to the relative magnitudes of the external densities compared to the internal one, which is why they can be shown to possess avoided crossings (see Section \ref{sec:avcross}).
\subsection{Body modes} \label{sec:ThinBody}
\par Still in the thin-slab approximation, let us now examine the existence and characteristics of body waves. First of all, the dispersion relation itself can be rewritten without the use of hyperbolic functions. As opposed to surface waves, where $m_0^2$ was positive, in the case of body waves, $m_0^2 < 0$. Defining $n_0^2 := - m_0^2 > 0$, the dispersion relation (Equation \ref{fullsurface}) becomes now:
\begin{align}
& 2 \frac{\rho_0}{\rho_1} m_1 \frac{\rho_0}{\rho_2} m_2 \left( k^2 v_{A0}^2 - \omega^2\right)^2 - 2 n_0^2 \left( k^2 v_{A1}^2 - \omega^2\right) \left( k^2 v_{A2}^2 - \omega^2\right) + \nonumber \\
& \rho_0 n_0 \left( k^2 v_{A0}^2 - \omega^2\right) \left[ \frac{m_1}{\rho_1} \left( k^2 v_{A2}^2 - \omega^2\right) + \frac{m_2}{\rho_2 }\left( k^2 v_{A1}^2 - \omega^2\right) \right] \left[-\tan{n_0 x_0} + \cot{n_0 x_0} \right] =0 \label{fullbody} .
\end{align}
\par Here, not only the full, but also the decoupled counterpart of the dispersion relation (Equation \ref{surface}) may be expressed with the tangent and cotangent functions as
\begin{equation}
(k^2 v_{A0}^2-\omega^2) \left[ \frac{ \rho_0}{\rho_1} \frac{m_1}{(k^2 v_{A1}^2-\omega^2)}+ \frac{ \rho_0}{\rho_2} \frac{m_2}{(k^2 v_{A2}^2-\omega^2)}\right] + 2 n_0 \binom{-\tan}{\cot} \{n_0 x_0\} = 0. \label{body}
\end{equation}
\par Finding body mode solutions generally requires different considerations than those used above for surface modes, since assuming that $m_0 x_0 \rightarrow 0 $ as the slab becomes thinner ($k x_0 \rightarrow 0$) will not describe every possible wave mode \citep{roberts2}. Let us therefore prescribe that $m_0 x_0$ should remain \textit{bounded} as $k x_0$ tends towards zero. Considering the dispersion relation for quasi-sausage body waves, the expression $n_0 \tan{(n_0 x_0)}$ needs to remain finite. This necessitates that $n_0 x_0$ converge to the roots of $\tan{(n_0 x_0)} = 0$, that is, $n_0 x_0 = j \pi $ (for $j=1,2,3$ ...). Substituting $\omega^2 \approx k^2 c_{T0}^2 (1 + \nu (k x_0)^2)$ into the definition of $n_0$ and multiplying by $x_0$, we can find the values of $\nu$ as follows:
\begin{align}
n_0^2 x_0^2 &= - m_0^2 x_0^2 = \frac{(c_0^2-c_{T0}^2) (v_{A0}^2 - c_{T0}^2) }{(c_0^2 + v_{A0}^2) c_{T0}^2 \nu }. \label{sbthinnu}
\end{align}
Due to the condition on the values of $n_0^2 x_0^2$, this also equals $j^2 \pi^2$. Substituting this expression and rearranging the equation yields $\nu$ for every (integer) $j$:
\begin{align}
\nu_j= \frac{(c_0^2-c_{T0}^2) (v_{A0}^2 - c_{T0}^2) }{(c_0^2 + v_{A0}^2) c_{T0}^2 j^2 \pi^2}. \label{nutan}
\end{align}
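Because $c_{T0} < \min{(c_0, v_{A0})}$, every coefficient $\nu_j$ in Equation \eqref{nutan} is positive and falls off as $1/j^2$; a minimal check (internal speeds chosen arbitrarily) confirms both properties:

```python
import math

def nu_j(c0, vA0, j):
    """Coefficient nu_j of the j-th slow quasi-sausage body harmonic."""
    cT0_sq = (c0**2 * vA0**2) / (c0**2 + vA0**2)    # internal cusp speed squared
    return ((c0**2 - cT0_sq) * (vA0**2 - cT0_sq)
            / ((c0**2 + vA0**2) * cT0_sq * j**2 * math.pi**2))

c0, vA0 = 7.0, 12.0                   # arbitrary internal speeds
nus = [nu_j(c0, vA0, j) for j in (1, 2, 3)]

assert all(n > 0 for n in nus)        # omega^2 > k^2 cT0^2: approach from above
assert abs(nus[0] / nus[1] - 4.0) < 1e-9   # 1/j^2 scaling between harmonics
```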
\par We have thus found that there are countably many quasi-sausage body mode solutions, with a different number of nodes inside the slab, which we will call harmonics in the direction of structuring, or, in short, harmonics. The situation so far is algebraically analogous to that of the asymmetric slab in a field-free environment \citep{asymm}. This description, however, does not yet account for the influence of the differing external equilibrium parameters on the slab system. There are two possibilities for providing an approximation that takes the effects of external magnetic asymmetry into account. For example, if either of the external sound or Alfv{\'e}n speeds is higher than $c_{T0}$, a cut-off frequency may be introduced, which prevents the phase speed from converging to the cusp speed in the limit of a thin slab.
\par In the dispersion relation for body modes (\ref{body}), the coefficients $n_0^2, m_1^2, m_2^2$ must all \textit{simultaneously} have positive values. In accordance with these requirements, there are three possibilities for slow body mode waves to exist:
\begin{subequations}
\begin{alignat}{1}
&\max{[c_{T0}, \min{(c_1, v_{A1})}, \min{(c_2, v_{A2})}]} < v_{ph} < \min{[ \min{(c_0, v_{A0})}, \max{(c_1, v_{A1})}, \max{(c_2, v_{A2})} ]}, \label{sba}\\
&\max{[c_{T0}, \min{(c_1, v_{A1})}]} < v_{ph} < \min{[ \min{(c_0, v_{A0})}, \max{(c_1, v_{A1})},c_{T2} ]}, \label{sbb}\\
&c_{T0} < v_{ph} < \min{[ \min{(c_0, v_{A0})}, c_{T1}, c_{T2} ]}. \label{sbc}
\end{alignat}
\end{subequations}
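For a concrete (hypothetical) set of characteristic speeds, checking whether, e.g., band \eqref{sba} is non-empty is a one-line computation; a small helper may be sketched as:

```python
def slow_body_band_a(c0, vA0, c1, vA1, c2, vA2):
    """Band (a) of slow body modes; returns (lo, hi) or None if the band is empty."""
    cT0 = (c0 * vA0) / (c0**2 + vA0**2) ** 0.5      # internal cusp speed
    lo = max(cT0, min(c1, vA1), min(c2, vA2))
    hi = min(min(c0, vA0), max(c1, vA1), max(c2, vA2))
    return (lo, hi) if lo < hi else None

# A hypothetical speed ordering for which band (a) exists
band = slow_body_band_a(c0=7.0, vA0=12.0, c1=5.0, vA1=9.0, c2=4.0, vA2=10.0)
assert band is not None and band[0] < band[1]
```

Analogous helpers follow immediately for the remaining bands by replacing the bounds with those of conditions (b) and (c).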
An additional fourth category could be defined by swapping the $i=1,2$ indices in condition \eqref{sbb}. We will, however, not deal with this case in further detail, since it does not describe a qualitatively different type of body mode, and one need only swap the same indices in the description of the solution curves that belong to condition \eqref{sbb}, in order to obtain the solutions for such a mirrored situation. The same will be true for the phase speed bands allowing the existence of fast body modes in the thin-slab approximation, as well as the bands of both slow and fast body waves in the wide-slab limit.
\par Proceeding from here, one possibility is to use Equation \eqref{sbthinnu}, and only accept the solutions while they are in either one of the phase speed bands delineated in Equations \eqref{sba} - \eqref{sbc}. Another approach, which we will follow now, is to use an approximation which bounds the solutions to remain in the above-mentioned bands. One must, however, remember that in the extremes of the thin-slab limit, solutions can become leaky, in which case, the approximation described can only serve as a guideline as to the general shape of the solution curves.
\par In this vein, it is possible to provide an approximate expression in all three cases, which highlights the fact that the phase speed of the wave perturbations in the long wavelength approximation converges either to the internal cusp speed, or in a different ordering of speeds, to a value with a slight offset from this speed:
\begin{align}
\omega^2 &\approx k^2 \left[c_{T0} + f\right]^2 \left[1 + \nu (k x_0)^2\right], \qquad \text{ where } \nu>0. \label{sbgen}
\end{align}
The exact offset speed value given by $f$ depends on which band of body waves one examines, i.e.:
\begin{subequations}
\begin{alignat}{3}
f &= \max{[c_{T0}, \min{(c_1, v_{A1})}, \min{(c_2, v_{A2})} ]} - c_{T0} \quad &&\text{ for case (\ref{sba}),} \label{sbva}\\
f &= \max{[c_{T0}, \min{(c_1, v_{A1})} ]} - c_{T0} \quad &&\text{ for case (\ref{sbb}),} \label{sbvb}\\
f &= 0 \quad &&\text{ for case (\ref{sbc}).} \label{sbvc}
\end{alignat}
\end{subequations}
Substituting the appropriate form of $\omega^2$ into equation (\ref{sbthinnu}) gives us the applicable expression for $\nu$ for every (integer) $j$:
\begin{align}
\nu_j= \frac{[(c_{T0}+f)^2-c_0^2] [v_{A0}^2 - (c_{T0} + f)^2] }{(c_0^2 + v_{A0}^2) (c_{T0} + f)^2 \pi^2 j^2 } .
\end{align}
This may then be substituted into Equation \eqref{sbgen} to obtain the approximate phase speed solutions. The corresponding quasi-kink mode may be found by applying similar considerations, with the notable difference being that, here, $n_0 \cot{(n_0 x_0)}$ has to remain finite, and so $n_0 x_0 \rightarrow (j-\frac{1}{2}) \pi $ is required (for $j=1,2,3$ ...). The values of $\nu_j$ are, in this case,
\begin{align}
\nu_j= \frac{[(c_{T0}+f)^2-c_0^2] [v_{A0}^2 - (c_{T0} + f)^2]}{(c_0^2 + v_{A0}^2) (c_{T0} + f)^2 \pi^2 (j-\frac{1}{2})^2 } .
\end{align}
Substituting this expression back into Equation \eqref{sbgen}, it is now possible to obtain an approximation for the phase speed (and dispersion) of the slow quasi-kink body modes. Just like the quasi-sausage modes, these waves also approach the limiting speed $c_{T0}+f$, which bounds them from below, as the slab becomes thinner.
\par The fast body modes, when they exist, behave similarly to the slow body modes in the thin-slab approximation. Three bands of phase speed potentially containing body mode solutions can be distinguished:\begin{subequations}
\begin{alignat}{1}
&\max{[ \max{(c_0, v_{A0})}, \min{(c_1, v_{A1})}, \min{(c_2, v_{A2})} ]} < v_{ph} < \min{[ \max{(c_1, v_{A1})}, \max{(c_2, v_{A2})} ]} \label{fba}\\
&\max{[ \max{(c_0, v_{A0})}, \min{(c_1, v_{A1})} ]} < v_{ph} < \min{[ \max{(c_1, v_{A1})}, c_{T2} ]} \label{fbb}\\
&\max{(c_0, v_{A0})} < v_{ph} < \min{( c_{T1}, c_{T2} )} \label{fbc}.
\end{alignat}
\end{subequations}
Whether the plasma-$\beta$ ($\beta=(2/\gamma)( c_0^2/ v_{A0}^2)$) is low ($c_0 <v_{A0}$) or high ($v_{A0} < c_0$) determines where the fast-mode phase speeds converge in a thin slab. Let us denote $\max{(c_0^2, v_{A0}^2)}$ by $v_{\mathrm{max}}^2$ and $\min{(c_0^2, v_{A0}^2)}$ by $v_{\mathrm{min}}^2$. Then, the two main cases are described by the same formula:
\begin{align}
\omega^2 &\approx k^2 \left[v_{\mathrm{max}} + f + u\right]^2 \left[1 + \frac{1}{\nu (k x_0)^2}\right], \qquad \text{ where } \nu>0. \label{fbgen}
\end{align}
The exact values of the lower and upper speed boundary, $f$ and $u$, depend on which band of allowed solutions one examines. In the case of conditions (\ref{fba})--(\ref{fbc}), we have:
\begin{subequations}
\begin{alignat}{3}
\qquad f &= \max{[ v_{\mathrm{max}}, \min{(c_1, v_{A1})}, \min{(c_2, v_{A2})} ]} - v_{\mathrm{max}}, \label{fbva}\\
\qquad u &= \min {[ \max{(c_1, v_{A1})}, \max{(c_2, v_{A2})} ]} - f - v_{\mathrm{max}}, \nonumber \\
\qquad f &= \max{[ v_{\mathrm{max}}, \min{(c_1, v_{A1})} ]} - v_{\mathrm{max}}, \label{fbvb} \\
\qquad u &= \min {[ \max{(c_1, v_{A1})}, c_{T2} ]} - f - v_{\mathrm{max}} , \nonumber \\
\qquad f &= 0, \label{fbvc} \\
\qquad u &= \min {( c_{T1}, c_{T2} )} - v_{\mathrm{max}}, \nonumber
\end{alignat}
\end{subequations}
respectively. For the quasi-sausage modes, as before, $n_0 \tan{(n_0 x_0)}$ needs to remain finite, so $n_0 x_0$ must converge to the roots of $\tan{(n_0 x_0)} = 0$. Substituting the prescribed form of $\omega^2$ into the condition that $n_0 x_0 = j \pi $ (for $j=1,2,3$ ...) allows us to determine the possible values of $\nu$ for each of the harmonics in the direction of stratification:
\begin{align}
\nu_j= \left\{\frac{\pi^2 j^2}{k^2 x_0^2} \frac{[v_{\mathrm{min}}^2 + v_{\mathrm{max}}^2] [2 f v_{\mathrm{max}} + 2 u v_{\mathrm{max}} + (f + u)^2] + v_{\mathrm{max}}^4}{[(v_{\mathrm{max}} + f + u)^2 - v_{\mathrm{min}}^2] [v_{\mathrm{max}}+ f + u]^2 }- \frac{2 f v_{\mathrm{max}} + 2 u v_{\mathrm{max}} + [f + u]^2}{[v_{\mathrm{max}} + f + u]^2} \right\}^{-1} \frac{1}{k^2 x_0^2}.
\end{align}
The quasi-kink mode under the same ordering of characteristic speeds may be shown to have coefficients of the form
\begin{align}
&\nu_j= \left\{\frac{\pi^2 [j-\frac{1}{2}]^2}{k^2 x_0^2} \frac{[v_{\mathrm{min}}^2 + v_{\mathrm{max}}^2] [2 f v_{\mathrm{max}} + 2 u v_{\mathrm{max}} + (f + u)^2] + v_{\mathrm{max}}^4}{[(v_{\mathrm{max}} + f + u)^2 - v_{\mathrm{min}}^2] [v_{\mathrm{max}}+ f + u]^2 } - \frac{2 f v_{\mathrm{max}} + 2 u v_{\mathrm{max}} + [f + u]^2}{[v_{\mathrm{max}} + f + u]^2} \right\}^{-1} \frac{1}{k^2 x_0^2}.
\end{align}
Substituting these coefficients into the dispersion relation given by Equation \eqref{fbgen} provides the approximations for the solutions of permitted wave propagation. This holds when the external sound speeds are greater than the external Alfv{\'e}n speeds. If the opposite is true, $\tan{(n_0x_0)} \rightarrow \infty$ needs to be true for quasi-sausage modes, $\cot{(n_0x_0)} \rightarrow \infty$ must hold for quasi-kink modes, and the coefficients $j$ and $j-1/2$ in the above expressions have to be modified accordingly. \par Generally speaking, both types of fast body waves have countably many harmonics in the direction of structuring, in the phase speed band where they may exist. It may be noted that although the effect of density ratios $\rho_0/\rho_1$ and $\rho_0/\rho_2$ cannot be seen explicitly in the calculations of this subsection, they have an indirect influence on the propagation of body waves, since they determine the values and relations of the characteristic speeds in- and outside the slab.
\par The investigation of the thin-slab approximation has thus revealed that the introduction of magnetic asymmetry results, on the one hand, in important contributions to the dispersion of both surface- and body-mode waves and, on the other hand, in the appearance of cut-off frequencies. Beyond these frequencies, the solutions would become leaky, and therefore, when searching for trapped oscillations in the asymmetric waveguide, certain bands of phase speed must be discarded. Unlike the symmetric case, when there is one band of slow body modes, complemented by one or two bands of fast body modes, the cut-off frequencies in the asymmetric case might even result in the existence of two bands of slow mode solutions, and three bands of fast mode solutions for body waves. It can be said that, in general, the solutions are qualitatively analogous to the kink or sausage mode solutions of the symmetric case, while their exact quantitative description is more complex in the asymmetric case. Approximations can still be given for both surface and body waves; however, thin-slab solutions for the latter will not always exist as trapped waves.
\section{Wide-slab approximation} \label{sec:Wide}
\par Let us now examine the waves propagating in a wide slab placed in an asymmetric magnetic environment. In solar physics, such a system could serve as an approximation of the global stratification of the atmosphere, e.g. the triad of the photosphere, the interface region, and the corona. The wide-slab approximation can also be used to model high-frequency waves present in light bridges of sunspots or elongated magnetic bright points (MBPs).
\par In the wide-slab limit, the width of the slab is much greater than the wavelength of the waves examined, in short: $kx_0 \gg 1$. For example, only about one third of MBPs have non-circular shapes \citep{boveletmbp}, and under appropriate circumstances, they can be regarded as magnetic slabs (for details see \citeauthor{asymag} \citeyear{asymag}). These bright concentrations of magnetic flux in the photosphere are only a few hundred kilometres across \citep{solankimbp}; therefore, for perturbations with wavelength $\lambda \ll 300$~km, an MBP with a width of $2 x_0 \approx 100$~km can be regarded as a wide slab. For larger wavelengths, the thin-slab approximation is more appropriate.
\par Light bridges between sunspot umbrae may have various widths from around 1'' up to 4'', with their extent in one direction often far greater than their width \citep{tor-15, sch-16}. A light bridge of intermediate size, with a width of $2 x_0 \approx 1500$~km (or 2''), can then be regarded as a wide slab for waves with $\lambda \ll 5$~Mm, and as a thin slab for longer wavelengths.
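For orientation only, the classification into thin- and wide-slab regimes can be checked numerically. The short sketch below evaluates the dimensionless slab width $kx_0$ for the MBP example; the slab width and the two wavelengths are the illustrative values quoted above, not measurements.

```python
import math

def dimensionless_width(slab_width_km, wavelength_km):
    """Return k * x0 for a slab of full width 2*x0 and a wave of given wavelength."""
    x0 = slab_width_km / 2.0           # slab half-width
    k = 2.0 * math.pi / wavelength_km  # wavenumber
    return k * x0

# Magnetic bright point: full width 2*x0 ~ 100 km (value quoted in the text)
mbp_short = dimensionless_width(100.0, 10.0)    # lambda = 10 km
mbp_long = dimensionless_width(100.0, 1000.0)   # lambda = 1000 km

print(f"lambda = 10 km:   k*x0 = {mbp_short:.2f}  (wide slab, k*x0 >> 1)")
print(f"lambda = 1000 km: k*x0 = {mbp_long:.2f}  (thin slab, k*x0 << 1)")
```

Applied to the light-bridge example, the same function shows the crossover between the two regimes occurring at wavelengths of a few megametres.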
\par In the wide-slab approximation, since we have $kx_0 \gg 1$, $m_0x_0 \gg 1$ also applies (see \citeauthor{roberts2} \citeyear{roberts2}), and so the full dispersion relation (\ref{fullsurface}) reduces to
\begin{align}
& \frac{\rho_0}{\rho_1} m_1 \frac{\rho_0}{\rho_2} m_2 \left( k^2 v_{A0}^2 - \omega^2\right)^2 + m_0^2 \left( k^2 v_{A1}^2 - \omega^2\right) \left( k^2 v_{A2}^2 - \omega^2\right) \\ \nonumber
& \quad + \rho_0 m_0 \left( k^2 v_{A0}^2 - \omega^2\right) \left[ \frac{m_2}{\rho_2} \left( k^2 v_{A1}^2 - \omega^2\right) + \frac{m_1}{\rho_1} \left( k^2 v_{A2}^2 - \omega^2\right) \right] =0.
\end{align}
Since $\tanh{(m_0 x_0)} \rightarrow 1$ and $\coth{(m_0 x_0)} \rightarrow 1$ as well, the decoupled dispersion relation (\ref{surface}) now takes the same form for both quasi-sausage and quasi-kink modes:
\begin{align}
(k^2 v_{A0}^2-\omega^2) \left[ \frac{ \rho_0}{\rho_1} \frac{m_1}{ (k^2 v_{A1}^2-\omega^2) }+ \frac{ \rho_0}{\rho_2} \frac{ m_2}{ (k^2 v_{A2}^2-\omega^2)}\right] + 2 m_0= 0. \label{wideslab}
\end{align}
\par As the width of the slab keeps increasing, the waves at one boundary will be less and less affected by the conditions at the other boundary, essentially reducing the problem to a single-interface system. This may be shown by going back to the system of equations provided by the boundary conditions, namely, the continuity of the velocity and total pressure perturbations. These can be summarised in a matrix equation formally analogous to that in Equation (18) of \citeauthor{asymm} (\citeyear{asymm}). Rearranging the equations and substituting $\tanh{(m_0 x_0)} = \coth{(m_0 x_0)} = 1$ into them leads to
\begin{align}
\Lambda_i +\Lambda_0 = 0,
\end{align}
for $i = 1, 2$, which is the dispersion relation of a single interface (see \citeauthor{roberts1} \citeyear{roberts1}), expressed with the $\Lambda_i$ quantities defined as:
\begin{equation}
\Lambda_i = - \frac{i \rho_i}{\omega} \frac{(k^2 v_{Ai}^2 - \omega^2)}{m_i}.
\end{equation}
\par As for wide-slab body modes, the situation is similar to the thin-slab approximation, in that the results obtained for a symmetric (\citeauthor{roberts2} \citeyear{roberts2}) or asymmetric (\citeauthor{asymm} \citeyear{asymm}) slab in a non-magnetic environment can be generalised, so that the constraints set by the external densities and magnetic fields will now also be taken into account. The phase speed of slow body modes, which would converge to $v_{\mathrm{min}}$ in a field-free environment, might only do so with some offset, which may be described as
\begin{align}
\omega^2 = k^2 [v_{\mathrm{min}}- u]^2 \left[ 1 + \frac{\nu}{(k x_0)^2} \right] , \label{stypeawide}
\end{align}
where the exact value of $u$ depends on which band of solutions we examine, i.e. in the cases of (\ref{sba})--(\ref{sbc}),
\begin{subequations}
\begin{alignat}{3}
u&= v_{\mathrm{min}} - \min{[ \min{(c_0, v_{A0})}, \max{(c_1, v_{A1})} , \max{(c_2, v_{A2})} ]},\\
u&= v_{\mathrm{min}} - \min{[ \min{(c_0, v_{A0})}, \max{(c_1, v_{A1})} , c_{T2} ]}, \\
u&= v_{\mathrm{min}} - \min{[ \min{(c_0, v_{A0})}, c_{T1}, c_{T2} ]},
\end{alignat}
\end{subequations}
respectively. For the quasi-sausage mode solutions, as $kx_0 \rightarrow \infty$, the condition can be set that $\tan{(n_0 x_0)} \rightarrow \pm \infty$, which means for the argument that $n_0 x_0 \rightarrow (j-\frac{1}{2}) \pi$. This gives us the $\nu_j$ coefficients as
\begin{align}
\nu_j= \pi^2 \left[j-\frac{1}{2}\right]^2 \frac{[v_{\mathrm{min}}^4 - (v_{\mathrm{min}}^2 + v_{\mathrm{max}}^2) (2 u v_{\mathrm{min}} - u^2)]}{[v_{\mathrm{max}}^2 - (v_{\mathrm{min}} - u)^2] [v_{\mathrm{min}}- u]^2 } . \label{stypeanusaus}
\end{align}
\par The slow body quasi-kink modes can be found in a similar fashion, by setting $n_0 x_0 \rightarrow j \pi$ so that $\cot{(n_0 x_0)} \rightarrow \pm \infty$. This leads to
\begin{align}
&\nu_j= \pi^2 j^2 \frac{[v_{\mathrm{min}}^4 - (v_{\mathrm{min}}^2 + v_{\mathrm{max}}^2) (2 u v_{\mathrm{min}} - u^2)]}{[v_{\mathrm{max}}^2 - (v_{\mathrm{min}} - u)^2] [v_{\mathrm{min}}- u]^2 }. \label{stypeanukink}
\end{align}
Substituting these into Equation \eqref{stypeawide} gives us the approximations of the body modes in a wide slab. This holds when $v_{A0} > c_0$. In a high-beta slab, however, the condition for the quasi-sausage modes becomes $\tan{(n_0x_0)} \rightarrow 0$, while for quasi-kink modes, $\cot{(n_0x_0)} \rightarrow 0$, and the expressions containing the coefficients $j$ and $j-1/2$ have to be adjusted accordingly.
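As a numerical sanity check of Equations \eqref{stypeawide}--\eqref{stypeanukink}, the following sketch confirms that the slow body mode phase speed converges to $v_{\mathrm{min}} - u$ from above as $kx_0$ grows, and that the first quasi-kink coefficient is four times the first quasi-sausage one, as the $j^2$ versus $(j-1/2)^2$ ordering implies. The speed values are arbitrary illustrative choices, not tied to any particular solar structure.

```python
import math

# Illustrative characteristic speeds in arbitrary units (hypothetical values):
v_min, v_max, u = 1.0, 2.0, 0.1

def nu_slow(j, kink=False):
    """Coefficient nu_j for slow body modes in the wide-slab limit
    (quasi-sausage: order j - 1/2; quasi-kink: order j)."""
    order = j if kink else (j - 0.5)
    num = v_min**4 - (v_min**2 + v_max**2) * (2.0 * u * v_min - u**2)
    den = (v_max**2 - (v_min - u)**2) * (v_min - u)**2
    return math.pi**2 * order**2 * num / den

def v_phase(j, kx0, kink=False):
    """Phase speed omega/k from omega^2 = k^2 (v_min - u)^2 [1 + nu/(k x0)^2]."""
    return (v_min - u) * math.sqrt(1.0 + nu_slow(j, kink) / kx0**2)

for kx0 in (5.0, 20.0, 100.0):
    print(f"k*x0 = {kx0:6.1f}: quasi-sausage v_ph = {v_phase(1, kx0):.6f}")
```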
\par An analogous derivation leads to the approximate solutions for fast mode body waves in the wide slab. These modes can be assumed to tend towards the higher internal characteristic speed of the field-free configuration in the short-wavelength limit. In the magnetically asymmetric configuration, their dispersion is expected to follow
\begin{align}
&\omega^2 = k^2 [v_{\mathrm{max}} + f]^2 \left[1 + \frac{1}{(k x_0)^2 \nu } \right], \label{fastwide}
\end{align}
where the exact value of $f$ depends on which band of solutions one takes. In case (\ref{fba}), (\ref{fbb}) and (\ref{fbc}), the factors $f$ are defined by Equations \eqref{fbva}, \eqref{fbvb} and \eqref{fbvc}, respectively.
\par For quasi-sausage modes, the condition $\tan{(n_0 x_0)} \rightarrow \pm \infty$ can be set, which means that $n_0 x_0 \rightarrow (j-\frac{1}{2}) \pi$. This leads to
\begin{align}
&\nu_j= \left\{ \pi^2 \left[j-\frac{1}{2}\right]^2 \frac{ [(v_{\mathrm{min}}^2 + v_{\mathrm{max}}^2) (2 f v_{\mathrm{max}} + f^2) + v_{\mathrm{max}}^4] }{[(v_{\mathrm{max}} + f)^2 - v_{\mathrm{min}}^2] [v_{\mathrm{max}} + f]^2 } - \frac{[2 f v_{\mathrm{max}} + f^2] [k x_0]^2 }{[v_{\mathrm{max}} + f]^2} \right\}^{-1}. \label{fwnusaus}
\end{align}
Similarly, for the quasi-kink modes, $n_0 x_0 \rightarrow j \pi$, so the coefficients and the frequencies are only marginally different:
\begin{align}
&\nu_j= \left\{ \pi^2 j^2 \frac{[(v_{\mathrm{min}}^2 + v_{\mathrm{max}}^2) (2 f v_{\mathrm{max}} + f^2) + v_{\mathrm{max}}^4] }{[(v_{\mathrm{max}} + f)^2 - v_{\mathrm{min}}^2] [v_{\mathrm{max}} + f]^2 } - \frac{[2 f v_{\mathrm{max}} + f^2] [k x_0]^2}{[v_{\mathrm{max}} + f]^2} \right\}^{-1}. \label{fwnukink}
\end{align}
Substituting the appropriate coefficient $\nu$ from Equations \eqref{fwnusaus} and \eqref{fwnukink}, respectively, into Equation \eqref{fastwide} gives us the quasi-sausage and quasi-kink mode solutions for the MHD wave propagation in the wide-slab approximation. This is true when $c_0 > v_{A0}$. In a low-beta slab, however, the condition for quasi-sausage modes is $\tan{(n_0 x_0)} \rightarrow 0$, while for quasi-kink modes, it is $\cot{(n_0 x_0)} \rightarrow 0$. Further, the coefficients $j$ and $j-1/2$ in the above expressions have to be swapped to fulfil these conditions.
\par Much like in the thin-slab approximation, the effect of the differences in the equilibrium parameters of the external environment on body modes is not immediately obvious. To second order, there are no terms containing the density ratios, unlike for the surface waves.
Overall, we may conclude that a magnetically asymmetric environment has a greater effect on MHD surface waves than on body modes. Applications to solar and astrophysical plasmas may be pursued, e.g. by means of solar magneto-seismology. Such analysis may be performed with greater success for MHD waves observed in magnetic structures that can be modelled by the thin-slab approximation, since in wide slabs, the effects of asymmetry are felt to a lesser degree at either of the interfaces, which are distant from each other.
\section{Low-$\beta$ approximation} \label{sec:Lowbeta}
\par In the low-$\beta$ approximation, the magnetic pressure dominates the gas pressure in a given region of plasma ($\beta_i = p_{i}/p_{i,m} \ll 1$, for $i=0,1$ or $2$). Therefore, in the low-$\beta$ limit, $c_i/v_{Ai} \ll 1$. This particular approximation has practical as well as analytical use: it reduces the dispersion relation to a simpler form, and it also has a very significant range of applicability, since from about the mid-chromosphere upwards into the corona, the solar atmosphere is considered to be a low-$\beta$ environment. This is exactly the case that we are first going to investigate in the following section, using a model in which the plasma-$\beta$ is low in all three domains. Afterwards, we will describe the limiting case, whereby all three domains of the asymmetric slab system are filled with cold plasma (that is, $\beta_i =0$, for $i=0,1,2$). This considerably simplifies the analytical expressions describing wave dispersion, while it still approximates well the low values of plasma-$\beta$ found in upper solar atmospheric conditions, e.g. in the corona.
\subsection{Low plasma-$\beta$ in all three domains} \label{sec:lll}
\par In the case when the plasma-$\beta$ is low, but non-zero, it is possible to express the coefficients $m_0$, $m_1$, $m_2$ in terms of $\beta_0, \beta_1, \beta_2$, and apply some simplifications to the dispersion relation. This way, the modified wavenumber coefficients become
\begin{align}
m_{i}^{2}&=\frac{\left(k^{2} \beta_i \gamma v_{Ai}^{2}- 2 \omega^{2}\right)\left(k^{2}v_{Ai}^{2}-\omega^{2}\right)}{k^{2} \beta_i \gamma v_{Ai}^{4}- \beta_i \gamma v_{Ai}^{2}\omega^{2}- 2 v_{Ai}^{2}\omega^{2}}, \qquad \text{for } i=0,1,2, \label{lowcoeff2}\\
n_{0}^{2}&=\frac{\left(k^{2} \beta_0 \gamma v_{A0}^{2}- 2 \omega^{2}\right)\left(\omega^{2} - k^{2}v_{A0}^{2}\right)}{k^{2} \beta_0 \gamma v_{A0}^{4}- \beta_0 \gamma v_{A0}^{2}\omega^{2}- 2 v_{A0}^{2}\omega^{2}}. \label{lowcoeff1}
\end{align}
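A quick numerical check (with hypothetical wave parameters chosen only for illustration) confirms that the modified coefficient of Equation \eqref{lowcoeff2} reduces to $(k^2 v_{Ai}^2 - \omega^2)/v_{Ai}^2$ as $\beta_i \rightarrow 0$:

```python
# Hypothetical parameters with the phase speed below the Alfven speed:
k, omega, vA, gamma = 1.0, 0.8, 1.0, 5.0 / 3.0

def m2_low_beta(beta):
    """m_i^2 of Eq. (lowcoeff2), evaluated for one domain at small plasma-beta."""
    num = (k**2 * beta * gamma * vA**2 - 2.0 * omega**2) * (k**2 * vA**2 - omega**2)
    den = (k**2 * beta * gamma * vA**4
           - beta * gamma * vA**2 * omega**2
           - 2.0 * vA**2 * omega**2)
    return num / den

m2_zero = (k**2 * vA**2 - omega**2) / vA**2  # zero-beta limit of the coefficient

for beta in (0.1, 0.01, 0.001):
    print(f"beta = {beta:6.3f}: m^2 = {m2_low_beta(beta):.6f}  (limit {m2_zero:.6f})")
```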
Assuming the plasma-$\beta$ is small in all three domains, an expansion of the dispersion relation about $(\beta_0, \beta_1, \beta_2) \approx (0, 0, 0)$ can be performed. Taking only zeroth- and first-order terms into consideration, the dispersion relation for surface modes takes the following form:
\begin{align}
L_1 &+ L_2 + L_{0s} - \frac{\gamma}{4} \left\{ L_1 \beta_1 + L_2 \beta_2 + L_{0s} \beta_0 \pm \frac{2 x_0 \beta_0 }{v_{A0}^2} \left[ 1 - \binom{\mathrm{tanh}^2}{\mathrm{coth}^2}\{m_{0z} x_0\} \right] \right\}=0,
\end{align}
where
\begin{align}
L_j &= \frac{\rho_0}{\rho_j} \frac{m_{jz}}{(k^2 v_{Aj}^2 - \omega^2)}, \qquad \text{for } j=1,2, \\
L_{0s} &= \frac{2 m_{0z}}{(k^2 v_{A0}^2 - \omega^2)} \binom{\tanh{}}{\coth{}} \{m_{0z} x_0\}, \\
m_{iz} &= \left( \frac{k^2 v_{Ai}^2 - \omega^2}{v_{Ai}^2} \right)^{1/2}, \qquad \text{for } i=0,1,2. \label{lm0z}
\end{align}
Here, the index 'z' denotes the form of the wavenumber coefficients when $\beta=0$ in the given domain, and the index 's' refers to the fact that the term $L_{0s}$ is necessary for the description of \textit{surface} waves. In this term, the parts containing the $\tanh{}$, $\coth{}$ functions describe quasi-sausage and quasi-kink \textit{surface} modes, respectively. With the same notation, the expansion of the dispersion relation for \textit{body} waves becomes
\begin{align}
L_1 &+ L_2 + L_{0b} - \frac{\gamma}{4} \left\{ L_1 \beta_1 + L_2 \beta_2 + \beta_0 \left[ L_{0b} \mp \frac{1}{2} L_{0b}^2 x_0 (k^2 v_{A0}^2 - \omega^2) \mp \frac{2 n_{0z}^2 x_0}{(k^2 v_{A0}^2 - \omega^2)}\right] \right\}=0,
\end{align}
where further
\begin{align}
L_{0b} &= \frac{2 n_{0z}}{(k^2 v_{A0}^2 - \omega^2)} \binom{-\tan{}}{\cot{}} \{n_{0z} x_0\}, \\
n_{0z} &= \left( \frac{ \omega^2 - k^2 v_{A0}^2}{v_{A0}^2} \right)^{1/2}. \label{ln0z}
\end{align}
Here, the index 'b' expresses that the term $L_{0b}$ is required for the description of body modes, and, again, the upper part (with the $\tan{}$ function and the minus signs) describes quasi-sausage \textit{body} modes, while the lower part governs the dispersion of the quasi-kink \textit{body} modes.
\begin{figure}
\centerline{\hspace*{0.001\textwidth}
\includegraphics[width=0.425\textwidth,height=0.4\textwidth,keepaspectratio]{{lll_c0_1.0_c1_0.597614304667_c2_0.697216688778396_vA0_1.5_vA1_4.0_vA2_3.0_R1_0.21_R2_0.36}.eps}
\hspace*{0.02\textwidth}
\includegraphics[width=0.425\textwidth,height=0.4\textwidth,keepaspectratio]{{lll2_c0_1.0_c1_1.58113883008_c2_1.6531313127682572_vA0_1.2_vA1_3.0_vA2_3.5_R1_0.22_R2_0.17}.eps}
}
\vspace{-0.3\textwidth}
\centerline{\Large \bf
\hspace{0.04 \textwidth} \color{black}{(a)}
\hspace{0.4\textwidth} \color{black}{(b)}
\hfill}
\vspace{0.28\textwidth}
\centerline{\hspace*{0.001\textwidth}
\includegraphics[width=0.425\textwidth,height=0.4\textwidth,keepaspectratio]{{lll3_c0_0.5_c1_0.250362056671_c2_0.2472066162365222_vA0_1.0_vA1_0.7_vA2_0.6_R1_2.3_R2_3.0}.eps}
\hspace*{0.02\textwidth}
\includegraphics[width=0.425\textwidth,height=0.4\textwidth,keepaspectratio]{{lll4_c0_0.2_c1_0.100000000002_c2_0.8000000000004003_vA0_0.3_vA1_0.6_vA2_1.0_R1_0.370967741935_R2_0.0780542986425}.eps}
}
\vspace{-0.3\textwidth}
\centerline{\Large \bf
\hspace{0.04 \textwidth} \color{black}{(c)}
\hspace{0.4\textwidth} \color{black}{(d)}
\hfill}
\vspace{0.28\textwidth}
\caption{The phase speed ($\omega/k$) of magnetoacoustic waves that occur in various low-$\beta$ situations characterised by typical choices of $c_i, v_{Ai}, \rho_i$. Blue (red) curves show quasi-sausage (quasi-kink) modes. Hatching represents regions in which no propagating modes are permitted. \textbf{(a)} Slow and fast mode body waves are visualised when $v_{A0}=1.5 c_0$, $v_{A1}=4 c_0$, $v_{A2}=3 c_0$, $c_{1}=0.5976 c_0$, $c_{2}=0.6972 c_0$, $\rho_1/\rho_0=0.21$, $\rho_2/\rho_0=0.36$. \textbf{(b)} One band of slow-, and two bands of fast body modes appear when $v_{A0}=1.2 c_0$, $v_{A1}=3 c_0$, $v_{A2}=3.5 c_0$, $c_{1}=1.5811 c_0$, $c_{2}=1.6531 c_0$, $\rho_1/\rho_0=0.22$, $\rho_2/\rho_0=0.17$. \textbf{(c)} Only slow body modes can be found when $v_{A1}=0.7 v_{A0}$, $v_{A2}=0.6 v_{A0}$, $c_0= 0.5 v_{A0}$, $c_{1}=0.2504 v_{A0}$, $c_{2}=0.2472 v_{A0}$, $\rho_1/\rho_0=2.3$, $\rho_2/\rho_0=3.0$. \textbf{(d)} Even with more prominent asymmetry, one band of slow-, and one band of fast body modes exist when, e.g., $v_{A0}=0.3 v_{A2}$, $v_{A1}=0.6 v_{A2}$, $c_0= 0.2 v_{A2}$, $c_{1}=0.1 v_{A2}$, $c_{2}=0.8 v_{A2}$, $\rho_1/\rho_0=0.3710$, $\rho_2/\rho_0=0.0871$. In each panel, only a couple of examples in each band of body modes are displayed.}
\label{fig:lowbeta}
\end{figure}
\par \citeauthor{roberts3} (\citeyear{roberts3}) explored the low-$\beta$ case in a magnetic, but symmetric environment of the slab, and their basic qualitative findings still hold in an asymmetric slab. Let us now solve the dispersion relation for a few different and representative asymmetric slab systems filled in all three domains with low-$\beta$ plasma, and visualise the wave spectrum. Panel (a) of Figure \ref{fig:lowbeta} shows that, when the ordering of the characteristic propagation speeds is $c_i < c_0 < v_{A0} < v_{Ai}$ (where $i=1,2$), no surface modes, but only body waves can be found. The slow body waves have phase speed $ c_{T0} < v_{ph} < c_0$, and the fast body waves propagate with $v_{A0} < v_{ph} < v_{A2}$, which corresponds to the conditions outlined in case (\ref{sba}) for the slow waves, and case (\ref{fba}) for the fast waves. Both the quasi-sausage and the quasi-kink modes are present.
\par If the slab is cooler than its environment (the sound speeds are interchanged, so the ordering is $c_0 < c_i < v_{A0} < v_{Ai}$), the result is similar: both fast and slow waves may be present, as panel (b) of Figure \ref{fig:lowbeta} illustrates. A further interesting observation can be made in this equilibrium configuration. While the slow modes represent case (\ref{sbc}), there are two bands of fast body waves, corresponding to the conditions in (\ref{fba}) for the faster band, and (\ref{fbc}) for the slower band. A similar result was obtained by \citeauthor{roberts3} (\citeyear{roberts3}) for the symmetric case, illustrated in their Figure 7.
\par The situation is vastly different, however, if the Alfv{\'e}n speeds are interchanged (compared to the original ordering shown in panel (a)). In this case, presented in panel (c) of Figure \ref{fig:lowbeta}, the internal Alfv{\'e}n speed is higher than both external Alfv{\'e}n speeds, and, just as in the symmetric slab, only the slow body waves remain possible (corresponding to the conditions in (\ref{sba})).
\par Panel (d) of Figure \ref{fig:lowbeta} demonstrates that even if the asymmetry in the system is great enough so that the internal sound speed falls between the external ones, two bands of body modes remain possible. The slow band is defined by the criteria of (\ref{sbb}), with phase speeds falling between $c_{T0} < v_{ph} <c_0$. The band of fast body waves possesses phase speeds in the range $v_{A0} < v_{ph} < v_{A1}$, corresponding to case (\ref{fbb}).
\subsection{Zero-$\beta$ limit} \label{sec:Zerobeta}
\par An extreme but often practical case of the low-$\beta$ approximation is the zero-$\beta$ limit, in which the sound speeds are negligible as compared to the Alfv{\'e}n speeds: $c_1 \approx c_2 \approx c_0 \approx 0$, which can be said to describe coronal plasma conditions using the MHD framework. This assumption also leads to a vastly simplified equation for the description of wave dispersion. The zero-$\beta$ approximation eliminates slow body waves, and only the fast body waves remain possible, just like in the symmetric case \citep{roberts3}.
\par In the zero-$\beta$ limit the modified wavenumber coefficients are given by Equations \eqref{lm0z} and \eqref{ln0z}, and the first-order terms of the expanded dispersion relation vanish, leaving
\begin{align}
\binom{\tan{}}{-\cot{}} \{n_{0z} x_0\} = \frac{1}{2} \frac{\rho_0}{\rho_1} \frac{v_{A0} (k^2 v_{A0}^2 - \omega^2)^{1/2}} {v_{A1} (k^2 v_{A1}^2 - \omega^2)^{1/2}} + \frac{1}{2} \frac{\rho_0}{\rho_2} \frac{v_{A0} (k^2 v_{A0}^2 - \omega^2)^{1/2}}{v_{A2} (k^2 v_{A2}^2 - \omega^2)^{1/2}} .
\end{align}
Total pressure balance must be upheld at both interfaces of the asymmetric slab. In terms of the characteristic speeds, this condition can be expressed as
\begin{align}
\frac{\rho_i}{\rho_j}= \frac{c_j^2 + \frac{1}{2} \gamma v_{Aj}^2}{c_i^2 + \frac{1}{2} \gamma v_{Ai}^2}, \quad \text{ where } i=0,1,2; \quad j=0,1,2; \quad i \neq j. \label{eq:ratio}
\end{align}
Since the sound speeds are zero in this limit, Equation \eqref{eq:ratio} can be used to further simplify the dispersion relation:
\begin{align}
\binom{\tan}{-\cot} \{n_{0z} x_0\} &= -\frac{1}{2} \left( \frac{n_{0z}}{m_{1z}} + \frac{n_{0z}}{m_{2z}} \right). \label{zerobeta}
\end{align}
In the fully symmetric case, this expression reduces to Equations (22) and (23) of \citeauthor{roberts3} (\citeyear{roberts3}).
\par In Equation \eqref{zerobeta}, the conditions $n_{0z} > 0$, $m_{1z} > 0$ and $m_{2z} > 0$ must hold, which is only true when $k^2 v_{A0}^2 < \omega^2 < \min{(k^2 v_{A1}^2, k^2 v_{A2}^2)}$. The role of the asymmetry manifests itself in the selection of the lower external Alfv{\'e}n speed. An alternative description of body waves in this band, e.g. in the wide-slab limit, can be constructed by the substitution of $\omega^2 = k^2 v_{A, \mathrm{min}}^2 [\rho_{ \mathrm{min}}/\rho_0] \left[ 1 + \nu/(kx_0)^2 \right]$, where the index $\mathrm{min}$ denotes the external equilibrium parameters of the side with the lower external Alfv{\'e}n speed. Applying the same considerations that we used while deriving the wide-slab approximation in the general case allows us to determine the coefficients $\nu_j$. This process yields the expression
\begin{align}
\omega^2 = k^2 v_{A, \mathrm{min}}^2 \frac{\rho_{\mathrm{min}}}{\rho_0} \left[ 1 + \frac{\pi^2 \left( j - \frac{1}{2}\right)^2}{k^2 x_0^2} \right] \label{zerobetas}
\end{align}
for quasi-sausage modes, and
\begin{align}
\omega^2 = k^2 v_{A, \mathrm{min}}^2 \frac{\rho_{\mathrm{min}}}{\rho_0} \left[ 1 + \frac{\pi^2 j^2}{k^2 x_0^2} \right] \label{zerobetak}
\end{align}
for quasi-kink modes of the fast body wave. These approximations may also serve a basic diagnostic purpose. Namely, for given values of $j$, $\omega$ and $k$, Equations \eqref{zerobetas} and \eqref{zerobetak} determine a simple connection between the lower external Alfv{\'e}n speed and the external-to-internal density ratio on the same side; therefore, knowing one of them can provide an estimate of the other.
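The diagnostic inversion suggested above can be sketched directly: given a harmonic order, wavenumber, frequency and density ratio, Equation \eqref{zerobetas} can be solved for the lower external Alfv{\'e}n speed. All numerical values below are made up for illustration; the round trip simply verifies the algebra.

```python
import math

def omega_sausage(j, k, x0, vA_min, dens_ratio):
    """Quasi-sausage fast body mode frequency, Eq. (zerobetas);
    dens_ratio = rho_min / rho_0."""
    corr = 1.0 + math.pi**2 * (j - 0.5)**2 / (k * x0)**2
    return k * vA_min * math.sqrt(dens_ratio * corr)

def infer_vA_min(j, k, x0, omega, dens_ratio):
    """Invert Eq. (zerobetas) for the lower external Alfven speed."""
    corr = 1.0 + math.pi**2 * (j - 0.5)**2 / (k * x0)**2
    return omega / (k * math.sqrt(dens_ratio * corr))

# Hypothetical 'observation': j = 1, k = 0.5, x0 = 10, rho_min/rho_0 = 0.4
k, x0, j, dens_ratio, vA_true = 0.5, 10.0, 1, 0.4, 2.0
omega_obs = omega_sausage(j, k, x0, vA_true, dens_ratio)
print(f"inferred v_A,min = {infer_vA_min(j, k, x0, omega_obs, dens_ratio):.6f}")
```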
The description of eigenmodes in the low- and zero-$\beta$ asymmetric slab is formally analogous to that in the symmetric case. However, the difference in external equilibrium parameters, even in this simplified scenario, adds some analytical complexity. Perhaps the most important difference resulting from the asymmetry is that, although the fast body mode solution curves are still located between the external and internal Alfv{\'e}n speeds, they experience a cut-off in the thin-slab limit: with phase speed above the lower external Alfv{\'e}n speed, the waves become leaky.
\section{High-$\beta$ approximation} \label{sec:Highbeta}
In the approximation of high plasma-$\beta$, the magnetic pressure is dominated by the plasma kinetic pressure. Since this is more generally true for lower solar atmospheric conditions, it is worthwhile to explore the behaviour of wave perturbations in this limit of plasma and magnetic parameters. First, we are going to derive the dispersion relation for the case of high plasma-$\beta$ in all three domains, and provide examples of its numerical solution. Further on, we will demonstrate the analytical ease that the extreme infinite-$\beta$ approximation brings to the problem.
\subsection{High plasma-$\beta$ in all three domains} \label{sec:hhh}
\par If the plasma-$\beta$ is high, the Alfv{\'e}n speeds are negligible compared to the sound speeds of each domain: $c_{i}/v_{Ai} \gg 1$ for $i=0,1,2$. In this limit, the modified wavenumber coefficients take the following form:
\begin{align}
m_{i}^2&=\frac {(k^{2}c_{i}^{2}-\omega^{2}) (2 k^{2} c_{i}^{2}- \gamma \beta_{i} \omega^{2})}{c_{i}^2 (2 k^{2} c_{i}^{2}- 2 \omega^{2}- \gamma \beta_{i} \omega^{2})} \quad \text{ for } i=0,1,2, \label{highcoeff2} \\
n_{0}^2&=\frac {(\omega^{2}- k^{2}c_{0}^{2}) (2 k^{2} c_{0}^{2}- \gamma \beta_{0} \omega^{2})}{c_{0}^2 (2 k^{2} c_{0}^{2}- 2 \omega^{2}- \gamma \beta_{0} \omega^{2})}. \label{highcoeff1}
\end{align}
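Analogously to the low-$\beta$ case, one can verify numerically (again with purely hypothetical parameters) that the coefficient of Equation \eqref{highcoeff2} tends to $(k^2 c_i^2 - \omega^2)/c_i^2$ as $\beta_i \rightarrow \infty$:

```python
# Hypothetical parameters with the phase speed below the sound speed:
k, omega, c, gamma = 1.0, 0.8, 1.0, 5.0 / 3.0

def m2_high_beta(beta):
    """m_i^2 of Eq. (highcoeff2), evaluated for one domain at large plasma-beta."""
    num = (k**2 * c**2 - omega**2) * (2.0 * k**2 * c**2 - gamma * beta * omega**2)
    den = c**2 * (2.0 * k**2 * c**2 - 2.0 * omega**2 - gamma * beta * omega**2)
    return num / den

m2_inf = (k**2 * c**2 - omega**2) / c**2  # infinite-beta limit of the coefficient

for beta in (10.0, 100.0, 1000.0):
    print(f"beta = {beta:7.1f}: m^2 = {m2_high_beta(beta):.6f}  (limit {m2_inf:.6f})")
```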
\par Several modes, including both surface and body waves, are possible in this case, as illustrated in Figure \ref{fig:hhh}. For an analytical description of the wave modes, the dispersion relation can be expanded about $(1/\beta_0, 1/\beta_1, 1/\beta_2) \approx (0,0,0)$. Keeping only zeroth- and first-order terms then yields
\begin{align}
H_1 &+ H_2 + H_{0s} + \frac{1}{\gamma \omega^2} \left\{ \left[ 2 k^2 c_1^2 - \omega^2 \right] \frac{H_1}{\beta_1} + \left[ 2 k^2 c_2^2 - \omega^2 \right] \frac{H_2}{\beta_2} \right. \nonumber \\
& \quad \left. + \left[ 2 k^2 c_0^2 - \omega^2 \right] \frac{H_{0s}}{\beta_0} + \frac{2 x_0 m_{0z}^2}{\beta_0} \left[ 1 - \binom{\mathrm{tanh}^2}{\mathrm{coth}^2}\{ m_{0z} x_0\} \right] \right\} =0
\end{align}
for surface waves, where
\begin{align}
H_j &= - \frac{\rho_0}{\rho_j} \frac{m_{jz}}{\omega^2}, \qquad \text{for } j=1,2, \\
H_{0s} &= - \frac{2 m_{0z}}{ \omega^2 } \binom{\tanh{}}{\coth{}} \{m_{0z} x_0\}, \\
m_{iz} &= \left( \frac{k^2 c_{i}^2 - \omega^2}{c_{i}^2} \right)^{1/2}, \qquad \text{for } i=0,1,2. \label{hm0z}
\end{align}
With the same notation, the expansion of the dispersion relation for body waves becomes
\begin{align}
H_1 &+ H_2 - H_{0b} + \frac{1}{\gamma \omega^2} \left\{ \left[ 2 k^2 c_1^2 - \omega^2 \right] \frac{H_1}{\beta_1} + \left[ 2 k^2 c_2^2 - \omega^2 \right] \frac{H_2}{\beta_2} \right. \nonumber \\
& \quad \left. - \frac{H_{0b}}{\beta_0} \left[ 2 k^2 c_0^2 - \omega^2 \right] - \frac{2 x_0 n_{0z}^2}{\beta_0} \left[1 + \binom{\mathrm{tan}^2}{\mathrm{cot}^2}\{ n_{0z} x_0\} \right] \right\} =0,
\end{align}
where further
\begin{align}
H_{0b} &= \frac{2 n_{0z}}{ \omega^2} \binom{-\tan{}}{\cot{}} \{n_{0z} x_0\}, \\
n_{0z} &= \left( \frac{ \omega^2 - k^2 c_{0}^2}{c_{0}^2} \right)^{1/2}. \label{hn0z}
\end{align}
\par Let us now solve the dispersion relation for a few interesting cases of high-$\beta$ slabs, enclosed in high-$\beta$ environments, and visualise the solutions. Panel (a) of Figure \ref{fig:hhh} illustrates the results of the numerical examination in a typical high-$\beta$ equilibrium configuration. There is a band of fast body modes (corresponding to case (\ref{fba})) confined between the sound speeds, and a band of slow body modes between the internal Alfv{\'e}n and cusp speeds (which represents the conditions outlined in (\ref{sba})). Here, slow surface waves are present as well, as opposed to the low-$\beta$ limit.
\par Next, panel (b) of Figure \ref{fig:hhh} shows that the dispersion curves do not change qualitatively when the Alfv{\'e}n speeds are interchanged. Besides the slow surface mode, there is still a band of fast body modes fulfilling the conditions of (\ref{fba}), and a band of slow body modes representative of (\ref{sbc}). However, when the sound speeds are interchanged, as it may be seen in panel (c) of Figure \ref{fig:hhh}, only the slow surface waves and the band of slow body waves appear, while there are no fast waves present at all.
\par The splitting of body mode bands remains allowed in the high-$\beta$ limit, as panel (d) of Figure \ref{fig:hhh} demonstrates. Slow body modes adhering to the conditions in (\ref{sbc}), as well as slow surface modes, are present. One band of fast body modes is confined between the internal sound speed and the lowest of the external cusp speeds, as outlined in (\ref{fbc}). A second band of fast body modes realises case (\ref{fbb}), comprising waves with $v_{A1} < v_{ph} < c_{T2}$, while a third band of fast body modes corresponds to case (\ref{fba}) and contains waves with phase speeds $v_{A2} < v_{ph} < c_1$.
\begin{figure}
\centerline{\hspace*{0.001\textwidth}
\includegraphics[width=0.43\textwidth,height=0.4\textwidth,keepaspectratio]{{hhh_c0_1.0_c1_1.66833250083_c2_1.8741664813991312_vA0_0.7_vA1_0.2_vA2_0.1_R1_0.5_R2_0.4}.eps}
\hspace*{0.02\textwidth}
\includegraphics[width=0.425\textwidth,height=0.4\textwidth,keepaspectratio]{{hhh2_c0_1.0_c1_1.5_c2_1.4000000000001738_vA0_0.6_vA1_0.95_vA2_0.9_R1_0.433032616239_R2_0.493358633776}.eps}
}
\vspace{-0.3\textwidth}
\centerline{\Large \bf
\hspace{0.04 \textwidth} \color{black}{(a)}
\hspace{0.4\textwidth} \color{black}{(b)}
\hfill}
\vspace{0.28\textwidth}
\centerline{\hspace*{0.001\textwidth}
\includegraphics[width=0.43\textwidth,height=0.4\textwidth,keepaspectratio]{{hhh3_c0_1.4_c1_1.15_c2_1.100000000000906_vA0_1.0_vA1_0.4_vA2_0.3_R1_1.91871780195_R2_2.1738002594}.eps}
\hspace*{0.02\textwidth}
\includegraphics[width=0.425\textwidth,height=0.4\textwidth,keepaspectratio]{{hhh4_c0_0.5_c1_1.1_c2_1.7999999999999203_vA0_0.2_vA1_0.7_vA2_1.0_R1_0.175077239959_R2_0.069558101473}.eps}
}
\vspace{-0.3\textwidth}
\centerline{\Large \bf
\hspace{0.04 \textwidth} \color{black}{(c)}
\hspace{0.4\textwidth} \color{black}{(d)}
\hfill}
\vspace{0.28\textwidth}
\caption{Solutions to the dispersion relation, similar to Figure \ref{fig:lowbeta}, but for high-$\beta$ cases. \textbf{(a)} Slow and fast mode body waves, as well as slow surface waves are present when $v_{A0}=0.7 c_0$, $v_{A1}=0.2 c_0$, $v_{A2}=0.1 c_0$, $c_{1}=1.6683 c_0$, $c_{2}=1.8742 c_0$, $\rho_1/\rho_0=0.5$, $\rho_2/\rho_0=0.4$. \textbf{(b)} The same modes appear when $v_{A0}=0.6 c_0$, $v_{A1}=0.95 c_0$, $v_{A2}=0.9 c_0$, $c_{1}=1.5 c_0$, $c_{2}=1.4 c_0$, $\rho_1/\rho_0=0.433$, $\rho_2/\rho_0=0.4934$. \textbf{(c)} Only slow surface and body modes can be observed when $v_{A1}=0.4 v_{A0}$, $v_{A2}=0.3 v_{A0}$, $c_0= 1.4 v_{A0}$, $c_{1}=1.15 v_{A0}$, $c_{2}=1.1 v_{A0}$, $\rho_1/\rho_0=1.9188$, $\rho_2/\rho_0=2.1738$. \textbf{(d)} Three bands of fast body modes, one band of slow body modes, and a pair of slow surface modes exist when $v_{A0}=0.2 v_{A2}$, $v_{A1}=0.7 v_{A2}$, $c_0= 0.5 v_{A2}$, $c_{1}=1.1 v_{A2}$, $c_{2}=1.8 v_{A2}$, $\rho_1/\rho_0=0.1751$, $\rho_2/\rho_0=0.071$. In each panel, only a couple of examples in each band of body modes are displayed.}
\label{fig:hhh}
\end{figure}
\subsection{Infinite-$\beta$ limit} \label{sec:Infinitebeta}
In this limit, magnetic forces can be considered negligible as compared to kinetic ones, and so the approximation $ v_{Ai} \approx 0$ for $i= 0, 1, 2$ can be taken, and only ``fast'' (i.e. essentially purely acoustic) body waves occur. The modified wavenumber coefficients simplify to the expressions of Equations \eqref{hm0z} and \eqref{hn0z}, and the first-order terms vanish from the dispersion relation. Using the pressure balance condition \eqref{eq:ratio}, the dispersion relation for body modes reduces to
\begin{align}
\binom{\tan}{-\cot} \{n_{0z} x_0\} = \frac{1}{2} \left( \frac{m_{1z}}{n_{0z}} \frac{ c_1^2}{c_0^2} + \frac{m_{2z}}{n_{0z}} \frac{ c_2^2}{c_0^2} \right). \label{infinitebeta_body}
\end{align}
In the symmetric case, Equation \eqref{infinitebeta_body} further simplifies to Equations (24) and (25) of \citeauthor{roberts3} (\citeyear{roberts3}). The condition that $n_{0z}$, $m_{1z}$ and $m_{2z}$ are all positive is only fulfilled when $k^2 c_{0}^2 < \omega^2 < \min{(k^2 c_{1}^2, k^2 c_{2}^2)}$. The band of fast body waves therefore exists between the internal sound speed and the lower of the two external ones. Similarly to the zero-$\beta$ case, countably many harmonics exist in the direction of structuring, due to the periodicity of the tangent and cotangent functions. Introducing the notation $ c_m=\min{(c_{1}, c_{2})}$, the waves are expected to behave as $\omega^2 = k^2 c_{m}^2 [\rho_m/\rho_0] \left[ 1 + \nu / (kx_0)^2 \right]$. By using the alternative method described during the derivation of the general wide-slab approximations, the coefficients $\nu_j$ can be determined. Eventually, the quasi-sausage mode solutions are given as
\begin{align}
\omega^2 = k^2 c_{m}^2 \frac{\rho_m}{\rho_0} \left[ 1 + \frac{\pi^2 \left( j - \frac{1}{2}\right)^2}{k^2 x_0^2} \right], \label{infinitebetas2}
\end{align}
while the approximation for quasi-kink modes becomes
\begin{align}
\omega^2 = k^2 c_{m}^2 \frac{\rho_m}{\rho_0} \left[ 1 + \frac{\pi^2 j^2}{k^2 x_0^2} \right]. \label{infinitebetak2}
\end{align}
In this case, Equations \eqref{infinitebetas2} and \eqref{infinitebetak2} provide, for a body mode of given order, a simple connection between the angular frequency, the wavenumber, the lower external sound speed, and the ratio of the corresponding external density to the internal one.
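As a quick numerical illustration of these approximations (with made-up parameter values, and the infinite-$\beta$ pressure balance taken in the form $\rho_i c_i^2 = \rho_0 c_0^2$, assuming equal adiabatic indices), the wide-slab body-mode frequencies can be evaluated directly:

```python
import numpy as np

# Illustrative (made-up) parameters: sound speeds in units of c0 and a
# wide slab, k*x0 >> 1. Infinite-beta pressure balance rho_i c_i^2 =
# rho_0 c_0^2 fixes the external-to-internal density ratios.
c0, c1, c2 = 1.0, 1.5, 1.4
rho1, rho2 = (c0 / c1) ** 2, (c0 / c2) ** 2   # rho_1/rho_0, rho_2/rho_0
k, x0 = 1.0, 10.0

# the lower external sound speed and the density ratio on the same side
c_m, rho_m = (c1, rho1) if c1 <= c2 else (c2, rho2)

def omega2_sausage(j):
    """Quasi-sausage body modes, wide-slab approximation."""
    return k**2 * c_m**2 * rho_m * (1 + np.pi**2 * (j - 0.5)**2 / (k * x0)**2)

def omega2_kink(j):
    """Quasi-kink body modes, wide-slab approximation."""
    return k**2 * c_m**2 * rho_m * (1 + np.pi**2 * j**2 / (k * x0)**2)
```

For a sufficiently wide slab, the low-order harmonics fall inside the trapped band $k^2 c_0^2 < \omega^2 < k^2 c_m^2$, with the quasi-sausage and quasi-kink solutions interleaving in the order $j$.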
Similarly to the low-$\beta$ case, in the limits of high and infinite plasma-$\beta$, the asymmetry introduces a more complex dependence of the eigenmode frequencies on the set of external parameters characteristic of the system. The difference in external equilibrium parameters affects the frequencies of surface as well as body waves, and introduces cut-off frequencies for the trapped propagation of both. Notably, due to these cut-offs, there can be more than one band of either fast or slow body modes. Furthermore, in the wide-slab limit, the phase speeds of surface modes diverge. The latter can lead to a phenomenon known as avoided crossing, which is detailed in the next section.
\section{The effect of varying magnetic field and density ratios} \label{sec:avcross}
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth, height=0.35\textwidth, keepaspectratio]{{hhh_finer__3d_compare_surface_2_c0_1.0_c1_1.61522230341_c2_1.8741664813991312_vA0_0.7_vA1_0.2_vA2_0.1}.eps}
\caption{The slow quasi-sausage and quasi-kink surface mode solutions of the dispersion relation are plotted for a fixed value of the dimensionless slab width ($kx_0$) and a changing density ratio on one side of the slab. The density ratio on the other side of the slab is held fixed at $\rho_2/\rho_0 = 0.4$. The characteristic speed orderings are identical to those of Figure \ref{fig:hhh}, but $c_1$ varies to satisfy equilibrium pressure balance. The black bold line indicates the values of the density ratio and the dimensionless slab width for which the phase speeds of the quasi-sausage and quasi-kink modes perform a close approach and avoided crossing.}
\label{fig:avcross}
\end{figure}
\begin{sloppypar} Avoided crossings of eigenmodes are known to happen in various physical processes, from quantum mechanics through coupled spring oscillations, to photochemistry \citep{ naqv, deva, heiss, nov}. In MHD, they were first found on dispersion diagrams of magneto-acoustic gravity waves of a plane stratified atmosphere by \citeauthor{abdel} (\citeyear{abdel}), and further examined by e.g. \citeauthor{mather} (\citeyear{mather}). Avoided crossings occur when constraints in a physical system supporting wave perturbations preclude the phase speeds of two modes from being equal, which is accompanied by a transferral of properties between the modes. \citeauthor{asymm} (\citeyear{asymm}) showed that avoided crossing happens between quasi-sausage and quasi-kink modes of a slab in a non-magnetic asymmetric environment, when the density ratio of the two external domains is varied. \end{sloppypar}
\par In the current study, we find that the quasi-sausage and quasi-kink eigenmodes of an asymmetric slab in a magnetic environment undergo avoided crossings as well. Figure \ref{fig:avcross} demonstrates this phenomenon for the slow surface modes under the equilibrium conditions used in Figure \ref{fig:hhh}b. This behaviour is not specific to slow mode solutions, but since the fast surface mode does not exist in a high-$\beta$ configuration, our examination proceeds with the slow surface modes.
\par A substantial difference from the non-magnetic case is that the closest approach between the phase speeds of the slow quasi-sausage and quasi-kink surface modes does not occur at equal external densities this time, due to the presence of the magnetic asymmetry. Keeping the external Alfv{\'e}n speed $v_{A1}$ fixed while varying the external density ratio $\rho_1/\rho_0$ implies that the strength of the external equilibrium magnetic field $B_1$ is continuously changing throughout this numerical examination, too. Thus, the case of equal external densities ($\rho_1=\rho_2$) on its own does not correspond to a symmetric configuration, and the phase speeds of the quasi-modes will show the greatest similarity at a different value of the changing density ratio.
\par As may be seen in Figures \ref{fig:avcross}-\ref{fig:avcross3}, avoided crossings happen when either the density ratio on one side, or the ratio of one of the external equilibrium magnetic field strengths to the internal one is changed. In Figure \ref{fig:avcross3}a, the left-side external Alfv{\'e}n speed, $v_{A1}$, grows from the lower right to the upper left corner. The displacement perturbations of quasi-sausage and quasi-kink modes, as a result, show the effect of avoided crossing as one follows the diagonal from the first, through the fifth, to the ninth panel. Figure \ref{fig:avcross3}b illustrates how the changing magnetic field ratio shifts the point of closest approach for different $kx_0$ values.
\par We conclude that, although both thermodynamic and magnetic asymmetry can cause avoided crossings to occur, the behaviour of the slow quasi-sausage and quasi-kink modes during such approaches is qualitatively similar to the case of an asymmetric slab with a field-free environment. The consecutive panels in the rows of Figure \ref{fig:avcross3}a show that as the symmetric configuration is approached, the amplitude of the quasi-sausage mode on the two interfaces begins to change, and the plane with the highest amplitude eventually shifts from the left side to the right, following the interface with the lower density ratio. In the meantime, the highest amplitude of the quasi-kink mode does the exact opposite, by jumping from the right to the left boundary of the slab, thus following the interface with the higher density ratio. The same exchange of properties can also be observed in the columns of Figure \ref{fig:avcross3}a, this time governed by the relative magnitudes of the external magnetic fields.
\begin{figure}
\centerline{\hspace*{0.001\textwidth}
\includegraphics[width=0.9\textwidth,height=0.65\textwidth,clip=]{{hhh_FULLAC_B1_R1_vary_c0_1.0_c1_1.64301548991_c2_1.8741664814_vA0_0.7_last_vA1_0.139833071739_vA2_0.1}.eps}
}
\vspace{-0.6\textwidth}
\centerline{\Large \bf
\hspace{0.05 \textwidth} \color{black}{(a)}
\hfill}
\vspace{0.58\textwidth}
\centerline{\hspace*{0.001\textwidth}
\includegraphics[width=0.8\textwidth,height=0.55\textwidth,clip=]{{hhh_FULLAC_rho_omega_grid}.eps}
}
\vspace{-0.52\textwidth}
\centerline{\Large \bf
\hspace{0.05 \textwidth} \color{black}{(b)}
\hfill}
\vspace{0.48\textwidth}
\caption{(a) The spatial variation of the transverse displacement perturbation ($\hat{\xi}_x$) is plotted. The upper (lower) parts of the panel represent the quasi-sausage (quasi-kink) mode solutions. In each column, the left-side density ratio remains constant, while in each row, the ratio of the left-side external magnetic field to the internal one ($B_1^{*} = B_1/B_0$) is kept at the same value. The right-side density ratio is held fixed at $\rho_2/\rho_0 = 0.4$. The characteristic speeds are: $v_{A0}=0.7 c_0$, $v_{A1}=0.2 c_0$, $v_{A2}=0.1 c_0$, $c_{2}=1.8742 c_0$, but $c_1$ varies to satisfy equilibrium pressure balance. Panel (b) displays solution curves corresponding to different values of $B_1^{*}$, for specific values of the non-dimensional slab width ($kx_0$).}
\label{fig:avcross3}
\end{figure}
\section{Conclusion} \label{sec:conc}
\par Wave dispersion in a magnetic slab embedded in plasma atmospheres of various structures (magnetic or free of field, uniform or asymmetric) is a complex problem that has been studied for decades, and yet still offers new solutions and discoveries. The associated dispersion relation for wave propagation, in general, is a transcendental equation. The dispersion relation often describes a rich spectrum of normal modes. Investigating a magnetic slab surrounded by an asymmetric field-free environment, \citeauthor{asymm} (\citeyear{asymm}) found that the difference in external conditions leads to important changes in wave dispersion. All of the solutions are described by one and the same dispersion relation, and the eigenmodes show mixed characteristics.
\par The situation is qualitatively similar in the case of added magnetic asymmetry in the environment. After deriving the equation that governs wave dispersion in this configuration, and examining the incompressible limit \citep{asymag}, we have now continued to explore various approximations in important and limiting cases. With the aim of providing the theoretical background for future applications, analytically solvable equations describing the wave behaviour were derived for slabs much thinner or wider than the characteristic length-scale set by the wavelength of perturbations. The presence of a magnetically asymmetric environment modifies the frequencies of eigenmodes, and introduces a number of cut-off frequencies, as well as new possibilities for the ordering of characteristic speeds, and therefore, different phase speed bands in which trapped solutions remain possible. All these various new and interesting cases deserve their own description, since the analytical expressions obtained in the thin- and wide-slab, as well as low- and high-$\beta$ limits simplify the calculations to be performed. Furthermore, they provide clear connections between the physical parameters describing the system, and the properties (wavenumbers, angular frequencies) of eigenmodes, which express the influence of environmental asymmetry.
\par With these approximations, thus, a set of mathematical tools is provided, that we can use to describe a plethora of asymmetric solar astrophysical waveguides, such as e.g. the global stratification of the solar atmosphere, prominences or plumes in the corona, and magnetic bright points, light bridges or light walls in the photosphere. While these are all promising candidates to apply our asymmetric slab model to, we emphasize that there are natural limitations to the applicability. The validity of considering a solar structure as a slab sandwiched between asymmetric external layers is case-dependent and determined first and foremost by the extent of local gradients in plasma/magnetic parameters. Using an asymmetric slab model to describe a solar structure is a sensible approach if the difference between the three regions constructed is large compared to the variation of background parameters within the three regions (which are essentially averaged out in this description). Therefore, the spatial scale of local gradients in the direction of structuring (i.e. the $x$-direction) should be comparable to the size of the slab. This assumption may or may not be true in general; it should be evaluated on a case-by-case basis for the specific waveguides one intends to study.
\par For a thin slab, most of the solutions are analogous to the supported modes of a slab placed in a symmetric magnetic environment. There are, however, a few more possibilities to arrange characteristic speeds, not all of which can be attributed to a direct parallel with a simple symmetrisation of external parameters. For a first approximation for body modes, the asymmetry mainly shows as quantitative modifications and cut-offs in their frequency, beyond which the modes would become leaky. The ratio of the internal density to the external ones directly appears in the description of surface waves, while it does not appear in the approximation for body modes. This indicates that the latter are less sensitive to changes in the density ratio.
\par We have also examined how the ratio of plasma kinetic and magnetic pressures affects supported modes. These approximations can serve as the basis of direct applications to solar physics, which is to be the subject of a follow-up article. Here, it was detailed how, in a more general high-$\beta$ environment, representative of photospheric circumstances, all but the fast surface mode solutions might appear. However, under upper-chromospheric/coronal conditions, when the plasma-$\beta$ is low in all three domains, only body waves are present.
\par The model becomes even more adaptable by combining the equations of geometrical and plasma-$\beta$ approximations, and provides analytical solutions for various structures in the solar atmosphere that can be modelled as a slab. For example, the region of coronal hole boundaries might be thought of as an asymmetric magnetic slab, and plumes have already been reported to show MHD perturbations.
\acknowledgments
All numerical results are derived using Python, an open-source and community-developed programming language. The authors thank M. Allcock and M. Barbulescu for the basis of the root-finding algorithm used during the numerical investigation. The authors also acknowledge the support received from the Erasmus Programme of the EU for enabling the start of this research. N. Zs{\'a}mberger is also grateful to the University of Debrecen, E{\"o}tv{\"o}s Lor{\'a}nd University and the University of Sheffield. R. Erd{\'e}lyi is grateful to the Science and Technology Facilities Council (STFC, grant number ST/M000826/1) for the support received. R. Erd{\'e}lyi also acknowledges the support received by the CAS President's International Fellowship Initiative Grant No. 2019VMA052 and the warm hospitality received at USTC of CAS, Hefei, where part of his contribution was made.
\section{Introduction}
Superconducting circuits with a single degree of freedom, such as the transmon and fluxonium qubits, constitute the backbone of present solid-state quantum computers~\cite{martinis,kjaergaard,krantz,preskill,arute,havlicek}. The development of qubits equipped with a higher dimensional configuration space has recently opened a way to host protected states and represents a new platform to explore a wide range of fundamental quantum phenomena~\cite{bell,kou,kalashnikov,gyenis}. An alternative path to increase the effective dimensionality of a qubit state without introducing complex multinode circuits is to encode quantum information into time-dependent states. When an artificial atom is irradiated by an intense field, the emerging system provides controllable states with desirable coupling to the environment~\cite{huang}. Such strongly driven superconducting circuits have been extensively studied in the context of Landau-Zener interference~\cite{ashhab,shevchenko,oliver,sillanpa,silveri}, sideband transitions~\cite{beaudoin,strand,li,naik}, Floquet-engineering~\cite{deng1,deng2,zhang}, on-demand dynamical control of operating points~\cite{didier,hong,fried,didier2}, tunable coupling schemes~\cite{bertet,niskanen,mckay,wu,reagor,mundada}, and many-body interaction in quantum simulators~\cite{roushan,wang,cai,kyriienko,sameti}.
Here, we experimentally characterize a fluxonium qubit under strong external flux modulation and use the Floquet states to store quantum information. One advantage of this approach is that we can create a dynamical flux-insensitive working point by harnessing the avoided crossings between quasienergy levels. This allows us to systematically realize favorable working bias points with \textit{in situ} tunable transition energies to help avoid frequency-crowding challenges in multiqubit processors. Floquet theory provides a powerful tool to intuitively describe the behavior of this driven system~\cite{floquet,grifoni,chu,son,silveri2}.
The key idea of this formalism is that, when the evolution of a system is governed by a time-periodic Hamiltonian $H(t) = H(t + 2\pi/\Omega)$ with modulation frequency $\Omega$, there exists a special set of solutions $|\psi_\alpha(t)\rangle$ of the time-dependent Schr\"odinger equation such that $|\psi_\alpha(t)\rangle$ can be expressed as the product of a dynamical phase factor and a time-periodic state $|\Phi_{\alpha}(t)\rangle$ in the form of $|\psi_\alpha(t)\rangle = e^{-i\epsilon_\alpha t/\hbar}|\Phi_{\alpha}(t)\rangle$, where $|\Phi_{\alpha}(t)\rangle = |\Phi_{\alpha}(t + 2\pi/\Omega)\rangle$. These states are the quasi-stationary Floquet states with $\alpha=0,1,\ldots$ labeling the solutions, while the characteristic energy $\epsilon_\alpha$ appearing in the dynamical phase factor is the quasienergy of the state. Importantly, $|\Phi_{\alpha}(t)\rangle$ has the same time-periodicity as the drive, and thus can be expressed as a discrete Fourier series in terms of the harmonics of the drive frequency $|\Phi_{\alpha}(t)\rangle=\sum_n e^{in\Omega t} |\phi_{\alpha}^{(n)}\rangle$. The unnormalized Fourier coefficients $|\phi_{\alpha}^{(n)} \rangle$ represent the atomic wavefunctions dressed by the periodic drive~\cite{wilson1,wilson2} and are called quasi-wavefunctions. In this work, we use the Floquet states $|\Phi_{\alpha}(t)\rangle$ as the computational basis states for our qubit.
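To make this construction concrete, a minimal numerical sketch (a generic driven two-level system, not the fluxonium itself, with $\hbar=1$ and illustrative parameters) extracts quasienergies from the one-period propagator $U(T)$, whose eigenvalues are $e^{-i\epsilon_\alpha T}$:

```python
import numpy as np
from scipy.linalg import expm

# Toy driven two-level system (hbar = 1), standing in for a driven qubit:
# H(t) = (Delta/2) sigma_z + A cos(Omega t) sigma_x.
# The one-period (monodromy) propagator U(T) has eigenvalues
# exp(-i eps_alpha T), so diagonalizing U(T) yields the quasienergies.
Delta, A, Omega = 1.0, 0.5, 2.5
T = 2 * np.pi / Omega
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

N = 2000                         # Trotter steps over one drive period
dt = T / N
U = np.eye(2, dtype=complex)
for m in range(N):
    t = (m + 0.5) * dt           # midpoint rule
    H = 0.5 * Delta * sz + A * np.cos(Omega * t) * sx
    U = expm(-1j * H * dt) @ U

# Quasienergies, defined modulo Omega (folded into the first "Brillouin zone")
eps = np.sort(-np.angle(np.linalg.eigvals(U)) / T)
```

Because the instantaneous Hamiltonian here is traceless, the two folded quasienergies come in a symmetric pair, mirroring the redundancy of the quasienergy spectrum discussed below.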
The outline of the paper is as follows. In Sec.~\ref{The Floquet-fluxonium qubit} we describe the driven fluxonium qubit and the corresponding emerging Floquet states based on numerical diagonalization of the underlying Floquet Hamiltonian. In Sec.~\ref{Mapping the quasi-energy structure} we present the experimental signatures of Floquet states: Floquet polaritons and excitations of sidebands. In Sec.~\ref{Coherence of the Floquet states} we present time-domain measurements that demonstrate enhanced qubit coherence away from the high symmetry point resulting from a dynamical sweet spot. We provide additional data and theoretical details in the Appendices.
\section{The Floquet-fluxonium qubit}\label{The Floquet-fluxonium qubit}
In this work, we present a strongly-driven fluxonium qubit and show that coherence can be enhanced compared to static operation by operating at the dynamical sweet spots. Our qubit is operated in the light fluxonium regime~\cite{nguyen}, where a small Josephson junction is shunted by a relatively small capacitance and a large inductance. The qubit can be biased with an external flux that can be periodically modulated. The driven fluxonium Hamiltonian is:
\begin{equation}\label{eq:hamiltonian}
H(t) = 4E_Cn^2 - E_J\cos\varphi +\frac{1}{2}E_L\left(\varphi-2\pi\frac{\Phi_\mathrm{ext}(t)}{\Phi_0}\right)^2,
\end{equation}
where $\varphi$ and $n$ are the phase and charge operators, $E_J / h =$ 2.65 GHz is the Josephson energy, $E_L/ h$ = 0.54 GHz is the inductive energy and $E_C / h = $ 1.17 GHz is the capacitive energy. Lastly, $\Phi_\mathrm{ext}(t)$ denotes the time-dependent magnetic flux threaded through the loop formed by the junction and inductor, and $\Phi_0$ is the magnetic flux quantum. The qubit is capacitively coupled to a tantalum-based~\cite{place} coplanar resonator with a resonance frequency of 7.30 GHz, allowing us to perform dispersive readout~\cite{blais}.
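As a cross-check of these parameter values, the static ($\xi=0$) fluxonium Hamiltonian can be diagonalized numerically in a truncated harmonic-oscillator basis; this is a standard sketch, with the truncation size and flux bias chosen purely for illustration:

```python
import numpy as np
from scipy.linalg import cosm

# Device parameters quoted in the text, in GHz (h = 1)
EJ, EL, EC = 2.65, 0.54, 1.17
f_ext = 0.5                      # Phi_ext / Phi_0 (illustrative bias point)

N = 60                           # oscillator-basis truncation
phi_zpf = (2 * EC / EL) ** 0.25  # phase zero-point fluctuation
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # annihilation operator
phi = phi_zpf * (a + a.T)
n = 1j / (2 * phi_zpf) * (a.T - a)         # so that [phi, n] = i

shift = phi - 2 * np.pi * f_ext * np.eye(N)
H = 4 * EC * (n @ n) - EJ * cosm(phi) + 0.5 * EL * (shift @ shift)
E = np.linalg.eigvalsh(H)
f01 = E[1] - E[0]                # qubit transition frequency in GHz
```

The oscillator basis converges quickly for these light-fluxonium parameters, so a modest truncation already gives stable low-lying levels.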
DC flux bias control is commonly used to tune the transition energies of the qubit, but also adversely affects coherence by exposing the qubit to ubiquitous $1/f$ flux noise. To protect against this noise, it is necessary to set the qubit energy to an extremum value (called a first-order-insensitive \textit{static sweet spot}), which diminishes the advantage of the tunability of the device. Fortunately, by leveraging modulation techniques, we can recover -- under certain conditions -- the flexibility of choosing the optimal working point of the qubit and create an \textit{in situ} tunable \textit{dynamical sweet spot}.
We consider the case when the magnetic flux is modulated with a single frequency $\Omega$ and amplitude $\xi$ around the static flux bias point $\Phi_\mathrm{ext}^0$, i.e., $\Phi_\mathrm{ext}(t) = \Phi_\mathrm{ext}^0 +\xi\cos(\Omega t)$. Floquet's theorem offers a natural description of such systems, where the Fourier components $|\phi_{\alpha}^{(n)} \rangle$ and the corresponding quasienergies $\epsilon_\alpha$ describe the dynamics. Importantly, as we illustrate below, these emerging Floquet states are suitable for quantum information processing in a manner similar to stationary qubits. To get an intuitive picture for the structure of the fluxonium Floquet states~\cite{rudner}, it is advantageous to introduce the time-averaged spectral function $A_\alpha(\omega)=\sum_n\langle\phi_{\alpha}^{(n)}|\phi_{\alpha}^{(n)}\rangle\delta(\epsilon_\alpha+n\Omega-\omega)$, which captures the energy distribution of the spectral weight of a given Floquet state over multiple sidebands.
\begin{figure*}
\centering
\includegraphics[width = \textwidth]{figure1_v3.pdf}
\caption{Numerically calculated Floquet quasienergy spectrum and quasi-wavefunctions. (a-c) Gray lines show the quasienergy $\epsilon_\alpha$ as function of external flux and flux drive amplitude $\xi$ at a fixed flux drive frequency of $\Omega/2\pi$ = 0.4 GHz. The weight of the time-averaged spectral functions $A_\alpha(\omega)$ are visualized by colored lines (red, blue, purple). For illustration purposes, we replaced the Dirac delta functions with Kronecker deltas in the definition of the spectral functions: $A_\alpha(\omega)=\sum_n\langle\phi_{\alpha}^{(n)}|\phi_{\alpha}^{(n)}\rangle\delta_{\epsilon_\alpha+n\Omega,\omega}$. The distribution of the spectral weight among different sidebands depends on the flux dispersion of the states and strength of the flux modulation. (d-f) The Fourier components of the Floquet states $\langle\varphi|\phi_{\alpha}^{(n)}\rangle$ visualized on a two-dimensional space spanned by the canonical phase variable and the sideband index. The spread of the wavefunctions into sidebands increases with the modulation strength.}
\label{fig:2D states}
\end{figure*}
In \autoref{fig:2D states}, we present the quasienergies, quasi-wavefunctions and time-averaged spectral weights of the Floquet states as a function of DC flux bias and AC modulation amplitude as obtained numerically by diagonalizing the time-independent Floquet Hamiltonian. First, the quasienergies of the driven fluxonium (gray lines in \autoref{fig:2D states}a-c), can be characterized by an infinite set of multiphoton resonances, with period corresponding to the drive frequency. This redundancy of the quasienergies is the result of the discrete time-translation invariance of the driven-qubit Hamiltonian.
To illustrate how the drive strength affects the Floquet states, we consider the time-averaged spectral function (colored lines in \autoref{fig:2D states}a-c), as well as the Fourier components of the ground and first excited Floquet states at different driving amplitudes (\autoref{fig:2D states}d-f). In the weak modulation-strength limit, the Floquet states resemble the static fluxonium states with spectral weight primarily located in a single harmonic and energy levels equivalent to the bare fluxonium qubit (\autoref{fig:2D states}a). Consequently, the wavefunctions are localized in a single mode ($n=0$), and they have the same shape in configuration space as the original fluxonium wavefunctions (\autoref{fig:2D states}d). We observe that as the drive power is increased, the spectral weight of the Floquet states spreads into sidebands (\autoref{fig:2D states}b,e). Intriguingly, at even higher powers, the Floquet states only have a weak resemblance to the original fluxonium states, and the quasienergies form a complex pattern with spectral weight widely spread over numerous sidebands (\autoref{fig:2D states}c,f).
Finally, the spectrum exhibits avoided crossings at various flux-bias values due to hybridization between Floquet sidebands. At these special parameter values~\cite{huang}, the system has dynamical sweet spots with transition energies first-order insensitive to the DC flux bias. In this work, we focus on these regions, where the Floquet-fluxonium qubit can be operated while maintaining high coherence.
\section{Measuring the quasi-energy spectrum}\label{Mapping the quasi-energy structure}
\begin{figure}
\centering
\includegraphics[width = \columnwidth]{Fig_2_polariton_v2.pdf}
\caption{Floquet polariton states as a function of flux drive amplitude. (a-c) Vacuum Rabi splitting of the resonator with the Floquet sidebands measured in the transmission spectrum when $\Omega/2\pi$ = 0.2 GHz. (d) Experimentally extracted normalized coupling rates $g_m$ (solid dots) for the various sidebands $m$ with calculations based on a rotating-wave-approximation model (solid lines) and Floquet theory (dotted-dashed lines). Generally, as the modulation amplitude is increased, the spectral weight shifts towards higher-order sidebands.}
\label{fig:polariton}
\end{figure}
In order to coherently control this strongly-driven Floquet qubit, we must first experimentally characterize and verify the quasienergy spectrum. We first focus on parametrically induced vacuum Rabi oscillations between a single mode of the readout cavity and the Floquet states. Similar to standard transmission measurements in the strong-coupling regime of superconducting qubits~\cite{wallraff}, we measure the response of the cavity as a function of DC flux bias. When one of the quasienergy differences is in resonance with the cavity frequency, the coherent exchange of a photon between the qubit and the cavity is indicated by avoided crossings in the transmission signal, and the coupling rate $g$ is captured by the size of the crossing. We systematically characterize these Floquet polariton states~\cite{clark} as a function of modulation amplitude, while keeping the drive frequency constant. As \autoref{fig:polariton}a shows, the transmission data features a single avoided crossing in the absence of flux modulation corresponding to the transition from the ground state to the third excited state of the fluxonium qubit. When the amplitude of the drive is increased (\autoref{fig:polariton}b,c), the spectral weight splits into higher-order sidebands, enhancing the dipole coupling between the bands detuned by multiples of the flux modulation. This is indicated by the emergence of additional avoided crossings of the cavity with the higher-order sidebands as a function of the modulation amplitude. As the magnitude of the dipole matrix element is proportional to the amplitude of the wavefunctions, measuring the strength of the avoided crossing enables us to directly characterize the redistribution of the wavefunction in the different sidebands. The measured coupling rates (\autoref{fig:polariton}d) reveal that the spectral weight continuously transfers to the higher order sidebands as the drive strength is increased.
\begin{figure*}
\centering
\includegraphics[width = \textwidth]{figure3_v2_final.pdf}
\caption{Spectroscopy measurements of the Floquet states. (a-e) Calculated dispersion of the quasienergies (red lines), and the simulated qubit excitation during spectroscopy show dynamical sweet spots away from half flux bias. The steady-state simulation utilizes the Floquet master equation and is used to mimic the spectroscopy signals shown in the lower panels. Both green and brown arrows indicate dynamical sweet spots. The spots with first-order insensitivity to the flux bias are shown by green arrows, while the brown arrow indicates insensitivity to modulation amplitude. A double sweet spot must be simultaneously insensitive to both the dephasing channels. (f-j) Two-tone spectroscopy data on the driven fluxonium in the vicinity of half flux quantum. The measured transition energies match the calculated quasienergy differences (red dashed lines) and are well reproduced in the steady state simulation. With increasing drive amplitude, more transitions are observed in the data due to the splitting of spectral weight, activating sideband transitions. The blue arrow marks the multi-photon transition between the cavity and higher qubit levels.}
\label{fig:spectroscopy}
\end{figure*}
We now proceed to spectroscopic measurements to map out the dynamical sweet spots in the driven fluxonium qubit, which are first-order insensitive against fluctuations of the DC flux bias and AC flux modulation amplitude. Here, we focus on the flux region close to half flux quantum, and perform two-tone spectroscopy in the low-energy region by monitoring the cavity transmission while an additional weak tone is applied to the flux-modulated system. Due to the ac-Stark effect, occupation of the qubit's excited state shifts the cavity's transition frequency. This leads to a reduction in our transmission signal when the qubit is excited. For comparison, we use the Floquet master equation to compute the steady-state qubit population during the spectroscopy experiment (\autoref{fig:spectroscopy}a-e), which agrees well with the spectroscopy data observed in \autoref{fig:spectroscopy}f-j. In the undriven case (\autoref{fig:spectroscopy}a,f), the spectral weight in the sidebands is absent, and thus, the spectroscopic data shows a single transition with a static flux sweet spot at a half-flux quantum. The additional transition observed in the experiment (blue arrow in \autoref{fig:spectroscopy}f) can be accounted for as a multi-photon transition between the cavity and higher qubit level. Similar to the previously discussed transmission measurements, as the amplitude of the flux modulation is increased, the spectral weight propagates into the harmonics of the drive frequency. This enables transitions between the sidebands of the Floquet states due to the weak probe field. The obtained low-energy spectra (\autoref{fig:spectroscopy}g-i) demonstrate the growing number of allowed transitions between these harmonics as predicted by the Floquet theory. This behavior is even more apparent in \autoref{fig:spectroscopy}j, which shows the excitation spectrum as a function of drive amplitude at a fixed DC flux bias.
Importantly, the flux modulation not only redistributes the spectral weight of the qubit state into sidebands but also changes the flux dispersion of the quasienergies. This can be understood as an interaction between the sidebands, which exhibit avoided crossings with an energy splitting proportional to the flux drive amplitude. Such avoided crossings create dynamical sweet spots, where the derivative of the quasienergy differences vanishes, offering first-order insensitivity against DC flux noise at tunable flux bias values (green arrows in \autoref{fig:spectroscopy}).
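A minimal numerical sketch of this idea, using a toy driven two-level system rather than the full fluxonium: the quasienergy splitting is computed from the one-period propagator, and a dynamical sweet spot appears where its derivative with respect to the DC detuning crosses zero. All parameter values below are illustrative.

```python
import numpy as np
from scipy.linalg import expm

# Toy model of a dynamical sweet spot (two-level system, hbar = 1):
# H(t) = (1/2)[delta0 + xi cos(Omega t)] sigma_z + (g/2) sigma_x,
# where delta0 mimics the DC flux detuning and xi the modulation amplitude.
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def splitting(delta0, xi=1.0, g=0.2, Omega=1.0, N=400):
    """Quasienergy splitting from the one-period propagator."""
    T = 2 * np.pi / Omega
    dt = T / N
    U = np.eye(2, dtype=complex)
    for m in range(N):
        t = (m + 0.5) * dt
        H = 0.5 * (delta0 + xi * np.cos(Omega * t)) * sz + 0.5 * g * sx
        U = expm(-1j * H * dt) @ U
    eps = -np.angle(np.linalg.eigvals(U)) / T
    return abs(eps[0] - eps[1])

# Finite-difference slope of the splitting with respect to the DC bias;
# a dynamical sweet spot sits where this derivative crosses zero.
d = 1e-3
deltas = np.linspace(-0.4, 0.4, 9)
slope = [(splitting(x + d) - splitting(x - d)) / (2 * d) for x in deltas]
```

In this symmetric toy model the sweet spot sits at zero detuning; in the driven fluxonium the analogous stationary points move with drive amplitude and frequency, which is what makes them tunable.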
We emphasize that by coupling the qubit to the flux drive, we also introduce additional coupling to fluctuating parameters, for instance, drive frequency, amplitude and phase, which can lead to dephasing. Given the frequency stability of commercial microwave generators, we focus on the dephasing caused by fluctuations in the drive amplitude. The DC-flux value corresponding to the dynamical sweet spot is strongly dependent on the amplitude of the drive $\xi$ (green arrows in \autoref{fig:spectroscopy}), so noise in the drive amplitude can degrade the flux insensitivity of the dynamical sweet spots -- unless those are also insensitive to drive amplitude fluctuations. In other words, first-order insensitivity against noise in the DC flux bias is necessary but not sufficient to preserve the phase of the qubit. An example of a sweet spot for the amplitude of the drive is shown in \autoref{fig:spectroscopy}e and j (brown arrow). A double sweet spot, which has vanishing derivatives with respect to both DC flux bias and AC drive amplitude, provides simultaneous insensitivity to the DC bias fluctuations and the AC flux noise~\cite{didier,didier2,huang}. Fortunately, as shown below, such double sweet spots under flux modulation can be found in the space of drive parameters.
\section{Coherence of the Floquet states}\label{Coherence of the Floquet states}
\begin{figure*}
\centering
\includegraphics[width = \textwidth]{Fig_4_FloquetT2R_v3.pdf}
\caption{Floquet states in the time-domain. (a) The measured homodyne voltage signal at the end of the Ramsey-type protocol (pulse scheme is depicted in panel e inset). The homodyne signal is proportional to the ground state population of the static qubit, and the oscillations are the result of the dynamical phase accumulation due to the time evolution of the Floquet states. (d) The extracted rate of phase accumulation matches the numerical results obtained from Floquet theory. (b) Calculation~\cite{huang} of the pure dephasing rate for the Floquet states shows a dynamical sweet spot in the $(\xi,\Omega)$ flux-drive-parameter space. (c) Experimental measurements of Ramsey dephasing times $T_{\mathrm{2R}}$ for different modulation strengths and frequencies demonstrate similar behavior. At the simultaneous sweet spot, a forty-fold enhancement of the coherence time is observed compared to the undriven case. (e) Measured $T_{\mathrm{2R}}$ as a function of the drive amplitude at various drive frequencies (corresponding to the linecuts in panel c). All time-resolved experimental data and calculations are performed at $\Phi_\mathrm{ext}^0 = 0.451$~$\Phi_0$.}
\label{fig:coherence}
\end{figure*}
With the location of sweet spots established, we can now present the central result of this paper: time-domain measurements of the coherence properties of the Floquet states. We measure the acquired dynamical phase of the Floquet states in a Ramsey-type protocol. As the \autoref{fig:coherence}e inset shows, we initially prepare an equal superposition of the ground and first excited state of the undriven qubit by applying an $X_{90}$ gate to the ground state. We then adiabatically turn on the flux modulation such that the system follows the instantaneous Floquet states \cite{deng1,deng2}, which creates an equal superposition of the ground and first excited Floquet states. Following this, the system evolves under a modulation with constant amplitude $\xi$ and frequency $\Omega$ for time $\Delta t$. At the end, the modulation is again adiabatically turned off, and the excited state population is measured after another $X_{90}$ gate. The measured time evolution of the qubit population for different driving amplitudes and the rates of phase accumulation are plotted in \autoref{fig:coherence}a,d. The frequency components of the oscillation are expected to follow the quasienergy difference $\Delta\epsilon+n\Omega$ displaced by the bare qubit frequency $\omega_0$ (due to the $X_{90}$ gates). The extremum of the quasienergy difference, i.e.\ the dynamical sweet spot, can be found by comparing the measured frequency components to numerical simulation and the spectroscopic data.
The presented Ramsey-type protocol can be used to determine the coherence of the Floquet states. By extending the length of the modulation pulse and measuring the amplitude of the decaying oscillation, we can probe the driven qubit coherence. To find the double sweet spot, we measure the coherence both as a function of drive amplitude and frequency. The time-domain measurements (\autoref{fig:coherence}c) reveal the presence of such a double sweet spot, as predicted by the Floquet theory (in \autoref{fig:coherence}b). The double sweet spot provides a 40-fold enhancement of $T_{2R}$ at the cost of a 3.5-fold reduction in $T_1$ (see \autoref{table1}). This result clearly demonstrates the potential of Floquet engineering for achieving ideal trade-offs between depolarization and dephasing in quantum processors.
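For illustration, the extraction of $T_{\mathrm{2R}}$ from such a decaying oscillation can be sketched with a least-squares fit to a decaying-cosine model; the model, the parameter values, and the noiseless synthetic data below are illustrative assumptions, not the actual analysis pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def ramsey(t, amp, t2, freq, phase, offset):
    """Decaying-cosine Ramsey model: amp*exp(-t/t2)*cos(2*pi*freq*t + phase) + offset."""
    return amp * np.exp(-t / t2) * np.cos(2 * np.pi * freq * t + phase) + offset

t = np.linspace(0.0, 100.0, 501)                # delay times in microseconds
data = ramsey(t, 0.5, 23.0, 0.05, 0.0, 0.5)     # synthetic fringe with T2R = 23 us
p0 = [0.4, 10.0, 0.05, 0.0, 0.45]               # rough initial guesses
popt, pcov = curve_fit(ramsey, t, data, p0=p0)
t2_fit = popt[1]                                # recovered T2R
```

In practice the fitted frequency, not only the decay constant, carries information: it tracks the quasienergy difference discussed above.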
\begin{table}[h]
\caption{Measured coherence times of the system. At the double sweet spot, Floquet drive decreases the $T_1$ of the fluxonium away from half flux bias by 3.5 times while increasing the $T_{2R}$ by 40 times.}
\begin{center}
\begin{ruledtabular}
\begin{tabular}{lll}
$\Phi_{\mathrm{ext}}^0$ $(\Phi_0)$ & $T_1$ ($\mu$s) & $T_{2R}$ ($\mu$s) \\\hline
$0.500$ (undriven) & $162 \pm 15$ & $76 \pm 5$\\
$0.451$ (undriven) & $91 \pm 10$ & $0.63\pm 0.07$ \\
$0.451$ (driven) & $26 \pm 5$ & $23 \pm 5$ \\
\end{tabular}
\end{ruledtabular}
\end{center}
\label{table1}
\end{table}
\section{Conclusions}
In this work, we have presented the steady-state response and time-resolved behavior of a fluxonium qubit under strong flux modulation. The measured spectroscopic features are in excellent agreement with numerical calculations based on Floquet theory, and clearly demonstrate the emergence of tunable dynamical sweet spots that can be used to preserve coherence away from the static sweet spot. In particular, we engineer a dynamical sweet spot which is simultaneously first-order insensitive to fluctuations in DC flux bias and AC modulation amplitude. At this bias point, the coherence time approaches the measured value at the static sweet spot and is forty times greater than the coherence observed in static operation at the same bias point, away from the static sweet spot. Our findings open new possibilities to realize and control versatile superconducting circuits that combine the coherence benefits of operation at a sweet spot with a degree of tunability.
\section*{Acknowledgements}
This work is supported by Army Research Office Grant No.\ W911NF-19-1-0016. Devices were fabricated in the Princeton University Quantum Device Nanofabrication Laboratory and in the Princeton Institute for the Science and Technology of Materials (PRISM) cleanroom. The authors acknowledge the use of Princeton's Imaging and Analysis Center, which is partially supported by the Princeton Center for Complex Materials, a National Science Foundation (NSF)-MRSEC program (DMR-1420541).
\section{Introduction}
The phase space formulation of quantum mechanics, initiated in the pioneering
papers by Weyl\cite{Weyl-27}, Wigner\cite{Wigner-32} and
Moyal\cite{Moyal-49}, who were motivated by the desire
to realize quantum mechanics as an extended
version of Hamiltonian mechanics rather than as a somewhat
sharp step to the theory of operators acting on Hilbert space,
has now become of special interest for at least two major reasons.
The first reason is that quantum mechanics in phase space
represents an example of a theory with non-commutative geometry.
Moyal\cite{Moyal-49} and Bayen {\it et al.}\cite{Bayen-78}
developed a non-commutative algebra of functions on phase space
which is aimed at representing the non-commutative property of
the operators. In turn, the operators are sent to functions
on phase space - symbols - by the symbol
map\cite{Weyl-27,Wigner-32}, which is a well-defined
one-to-one map. So, the symbol calculus\cite{Hormander-79,Berezin-80}
provides a reformulation of the whole machinery of quantum
mechanics in terms of non-commutative functions on phase space.
Also, Bopp\cite{Bopp-61} and Kubo\cite{Kubo-64} extended the phase
space and introduced non-commutative variables in terms
of which they expressed the Wigner operator\cite{Wigner-32,JP-75}
and the Wigner density operator\cite{Kubo-64,Feynman-72,BJP-76}.
The Bopp-Kubo formulation deals with functions on the extended
phase space which are also non-commuting due to the non-commutative
character of the variables.
Recent studies of the non-commutative phase
space\cite{Wess-90}-\cite{Dimakis-92}
are much in the spirit of modern non-commutative
geometry\cite{Connes-85}-\cite{Coquereaux-92}.
Exterior differential calculus in the quantum mechanics in phase
space has been proposed recently by Gozzi and Reuter\cite{GR-93,GR-93a}.
They studied in detail algebraic properties of the symbol
calculus, and have found\cite{GR-93b}, particularly,
quantum analogue of the classical canonical transformations.
Gozzi and Reuter have argued that the quantum mechanics in phase
space can be thought of as a smooth deformation of the
classical one.
Jannussis, Patargias and Brodimas\cite{JPB-78} have constructed
creation and annihilation operators in phase space, and studied
harmonic oscillator in phase space\cite{JP-77}.
Various problems related to the Wigner operator, Wigner
distribution function, and the density matrix in phase space
have been investigated in a series of papers
by Jannussis {\it et al.}\cite{GJP-77}-\cite{JLPFFV-82}.
The second reason for the importance of quantum mechanics
in phase space is that the resulting formalism is very similar
to the Hamiltonian formulation of classical
mechanics (certainly no surprise).
An obvious advantage of the phase space formulation of
quantum mechanics is that it raises a tempting possibility
to exploit this formal similarity, provided by the smooth deformation,
to extend to quantum mechanics some of the useful notions and tools,
such as action-angle variables, ergodicity, mixing,
Kolmogorov-Sinai entropy, and chaos, which have been elaborated
in Hamiltonian mechanics.
The only thing that one should keep in mind here is that
phase space quantum mechanics deals with non-commutative
symplectic geometry rather than the usual symplectic geometry.
So, one should take care of this, primarily because the usual
notion of phase space points is lost in the non-commutative case, so
that one is forced to work mostly in algebraic terms rather than
to invoke geometrical intuition. For example, it is not obvious
what the analogue of the Lyapunov exponents is when there are no
classical trajectories.
However, as a probe in this direction, we attempt to
formulate, in this paper, an
extension of the classical ergodicity condition.
We should emphasize here that, clearly, it is highly suitable
to have at disposal the phase space formulation before going
into details of quantum mechanical analogues of the
classical chaos and related phenomena.
As to chaos in dynamical systems, it should be noted that
the evolution equations, both in classical and quantum
mechanics in phase space, are Hamiltonian flows, which are
deterministic in the sense that there are no source terms
of stochasticity. In view of this, chaos can still be
thought of as an extreme sensitivity of the long-time
behavior of the probability density and, therefore, of the other
observables of interest, to the initial state.
Another fundamental aspect of this consideration is the
process of measurements. However, we shall not
discuss this problem here.
As a specific example of quantum mechanical system in
phase space, we consider, in this paper, one-dimensional
harmonic oscillator.
We study also the {\it $q$-deformed} oscillator in phase
space which is now of special interest in view of the
developments of quantum algebras\cite{Drinfeld-86}-\cite{Bernard-90}.
We should note here that the quantum algebras are particular
cases of the Lie-admissible algebras\cite{J-91}-\cite{JBB-92}.
The algebra underlying the properties of the $q$-oscillator
in phase space appears to be
the algebra $su_{q}(2)$\cite{Biedenharn-89}-\cite{Fiore-93}.
The paper is organized as follows.
In Sec 2.1, we briefly recall the Bopp-Kubo
formulation of quantum mechanics in phase space.
Sec 2.2 is devoted to the Weyl-Wigner-Moyal symbol-calculus
approach to quantum mechanics, the main results of which
are sketched.
In Sec 2.3, we discuss, following Gozzi and Reuter\cite{GR-93b},
modular conjugation and unitary transformations.
We show that the Bopp-Kubo formulation and the symbol calculus
are explicitly related to each other.
We give an interpretation of the quantum mechanics in phase
space in terms of non-commutative geometry.
A quantum mechanical extension of the classical ergodicity
condition is proposed.
In Sec 2.4, we study translation operators in phase space.
Commutation relations of the Bopp-Kubo translation operators,
both in Hamiltonian and Birkhoffian cases, are presented.
The results presented in Sec 2 are used in Sec 3 to study
the harmonic ($q-$)oscillator in phase space.
In Sec 3.1, we present the main properties of the
one-dimensional oscillator in terms of annihilation and
creation operators in phase space. We identify the fundamental
$2D$-lattice structure of the phase space resulting from the
commutation relations of the Bopp-Kubo translation operators.
The Fock space for the oscillator is found to be related
to the double Hilbert space of the Gelfand-Naimark-Segal
construction.
In Sec 3.2, we study $q$-deformed harmonic oscillator in
phase space. The Wigner operator is found to be proportional
to the 3-axis spherical angular momentum operator of the
algebra $su_{q}(2)$.
Also, the Wigner density operator appears to be related to
the 3-axis hyperbolical angular momentum operator of the
algebra $su_{q}(1,1) \approx sp_{q}(2,R)$.
\section{Phase space formulation of quantum mechanics}
\subsection{Bopp-Kubo formulation. Non-commutative coordinates}
In studying Wigner representation\cite{Wigner-32}
of quantum mechanics, Bopp\cite{Bopp-61} and Kubo\cite{Kubo-64}
started from classical Hamiltonian $H(p,q)$ and used the
variables (see also \cite{JP-78,JPLFFSV-82})
\begin{eqnarray}
\label{PQ}
P= p - \frac{i\hbar}{2}\frac{\partial}{\partial q},\quad
Q= q + \frac{i\hbar}{2}\frac{\partial}{\partial p} \\ \label{PQ*}
P^*= p + \frac{i\hbar}{2}\frac{\partial}{\partial q},\quad
Q^*= q - \frac{i\hbar}{2}\frac{\partial}{\partial p}
\end{eqnarray}
instead of the usual $(p,q)$
and obtained the Wigner operator, $W_-$, and the Wigner density operator,
$W_+$, in the following form:
\begin{equation}
W_{\pm} = H(P,Q) \pm H(P^*,Q^*) \label{W}
\end{equation}
These operators enter respectively the Wigner equation
\begin{equation}
i\hbar\partial_t \rho = W_- \rho \label{Wigner}
\end{equation}
and the Bloch-Wigner equation\cite{Kubo-64}
\begin{equation}
\partial_\beta F + \frac{1}{2} W_+ F = 0 \qquad\qquad
\beta = \frac{1}{kT} \label{BlochWigner}
\end{equation}
Here, $\rho=\rho (p,q)$ is the Wigner distribution
function\cite{Wigner-32,JLPFFV-82}
and $F=F(p,q,p',q';\beta )$ is the Wigner density matrix\cite{BJP-76}.
As is well known, the Wigner equation (\ref{Wigner}) is
a phase space counterpart of the usual von Neumann equation of quantum
mechanics while the Bloch-Wigner equation (\ref{BlochWigner})
describes quantum statistics in phase space\cite{Feynman-72,GJP-77}.
In view of the definitions (\ref{PQ})-(\ref{PQ*}) of the
variables, it is quite natural to
treat the above formulation in terms of non-commutative
geometry\cite{Connes-90}.
With the usual notation,
$\phi^i = (p_{1} , \dots , p_n ,q^1 , \dots , q^n )$,
$\phi^i \in M_{2n}$,
the first step is to extend the phase space $M_{2n}$
to the (co-)tangent phase space $TM_{2n}$ and define the complex
coordinates,
\begin{equation}
\Phi^{i}_{\pm} = \phi^i \pm
\frac{i\hbar}{2}\omega^{ij}\frac{\partial}{\partial \phi^{j}}
\label{Phi}
\end{equation}
where $\omega^{ij}$ is a fundamental symplectic tensor,
$\omega^{ij}=-\omega^{ji}; \ \omega_{ij}\omega^{jk}=\delta^{k}_{i}$.
We observe immediately that these coordinates are non-commutative,
\begin{equation}
\bigl[\Phi^{i}_{\pm},\Phi^{j}_{\pm}\bigr] = \pm i\hbar\omega^{ij}
\qquad \bigl[\Phi^{i}_{\pm},\Phi^{j}_{\mp}\bigr] = 0 \label{comff}
\end{equation}
and do not mix under time evolution.
The natural projection $TM_{2n} \rightarrow M_{2n}$ comes
with the classical limit $\hbar \rightarrow 0$.
Commutation relations (\ref{comff}) imply that the
"holomorphic" functions, $f(\Phi_-)$, and "anti-holomorphic"
functions, $f(\Phi_+)$, form two mutually commuting closed
algebras on space of functions $C(TM_{2n})$.
Thus, the holomorphic, $H(\Phi_{-})$, and anti-holomorphic,
$H(\Phi_{+})$, Hamiltonians define two separate dynamics,
which are not mixed.
Wigner operators (\ref{W}) are simply sum and difference
between these two Hamiltonians, respectively,
\begin{equation}
W_{\pm} = H(\Phi_- ) \pm H(\Phi_+ ) \label{WPhi}
\end{equation}
So, the physical dynamics comes with combinations of these
two Hamiltonians. In the classical limit, the Wigner operators
reduce to the Liouvillian $L$ and the Hamiltonian,
\begin{equation}
W_{-} = -i\hbar L + O(\hbar^2 ) \qquad \label{Wclass}
W_{+} = 2H(p,q) + O(\hbar^2 )
\end{equation}
where $L \equiv \ell_{h}=-h^{i}\partial_{i}$ is a Lie derivative along
the Hamiltonian vector
field $h^{i} = \omega^{ij}\partial_{j}H$\cite{Arnold-78,Abraham-78}.
According to complex character of the variables (\ref{Phi}),
one can define the involution ${\cal J}$ acting simply
as complex conjugation
\begin{equation}
{\cal J}:\ \Phi^{i}_{\pm} \rightarrow \Phi^{i}_{\mp} \label{JPhi}
\end{equation}
This involution may be thought of as a conjugation
interchanging the two pieces of the physical dynamics.
\subsection{Weyl-Wigner-Moyal formulation. Symbol calculus}
In order to achieve a phase space formulation of quantum mechanics,
Weyl\cite{Weyl-27} and Wigner\cite{Wigner-32} introduced the
symbol map associating with each operator $\hat A$, acting on
Hilbert space, a symbol $A(\phi )$ - a function on phase space -
$A(\phi ) = symb(\hat{A})$, due to
\begin{equation}
A(\phi ) =
\int \frac{d^{2n}\phi_{0}}{(2\pi\hbar)^n}
exp\Bigl[\frac{i}{\hbar}\phi^{i}_{0}\omega_{ij}\phi^{j}\Bigr]
Tr\bigl( \hat T (\phi_{0})\hat A \bigr) \label{symbol}
\end{equation}
with
\begin{equation}
\hat T (\phi_{0}) =
exp\Bigl[\frac{i}{\hbar}\phi^{i}_{0}\omega_{ij}\hat\phi^{j}\Bigr]
\label{TWeyl}
\end{equation}
The symbol map is a well-defined invertible one-to-one map from the space
of operators, ${\cal O}$, to the space of functions depending on phase space
coordinates\cite{Hormander-79,GR-93a},
${\cal O} \rightarrow C(M_{2n})$.
Particularly, Hermitean operators are mapped to real functions,
and vice versa.
The key property of the symbol calculus is that the ordinary
pointwise product of the functions is appropriately generalized
to reproduce the non-commutative product of the operators.
The product on $C(M_{2n})$, making the symbol map an
algebraic homomorphism, is the Moyal product\cite{Moyal-49,Berezin-80},
\begin{eqnarray}
\label{mp}
(A*B)(\phi ) &=&
symb(\hat A \hat B) \nonumber \\
&=&
A(\phi )
exp\bigl[\frac{i\hbar}{2}\bar\partial_{i}\omega^{ij}\vec\partial_{j}\bigr]
B(\phi ) \\
&=&
A(\phi )B(\phi ) + O(\hbar ) \nonumber
\end{eqnarray}
The Moyal product is associative but apparently non-commutative,
and represents, in $C(M_{2n})$, non-commutative property of the algebra of
operators,
and non-local character of quantum mechanics.
The Moyal bracket\cite{Moyal-49}
\begin{eqnarray}
\{ A, B \}_{mb} &=& \label{mb}
symb\bigl(\frac{1}{i\hbar}[\hat A, \hat B]\bigr) \nonumber \\
&=&
\frac{1}{i\hbar}( A*B - B*A) \\
&=&
\{ A, B \}_{pb} + O(\hbar^2) \nonumber
\end{eqnarray}
is the symbol of the commutator of two operators, and reduces to
the usual Poisson bracket $\{ .,. \}_{pb}$ in the
classical limit.
Thus, the algebra $(C(M_{2n}), \{ ,\}_{mb})$ is an algebra
of quantum observables, and it can be continuously
reduced to the algebra $(C(M_{2n}), \{ ,\}_{pb})$ of classical
observables.
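The expansions (\ref{mp}) and (\ref{mb}) can be made explicit for polynomial symbols in one degree of freedom, where the star-product series terminates. The sketch below uses the convention $\{A,B\}_{pb} = \partial_q A\,\partial_p B - \partial_p A\,\partial_q B$ and is an illustration only; the truncation order is chosen large enough to be exact for the polynomials used:

```python
import sympy as sp

p, q = sp.symbols('p q', real=True)
hbar = sp.symbols('hbar', positive=True)

def d(expr, var, n):
    """n-fold derivative (n may be zero)."""
    for _ in range(n):
        expr = sp.diff(expr, var)
    return expr

def star(A, B, order=6):
    """Moyal product A*B in one degree of freedom, truncated at O(hbar^order)."""
    total = sp.S.Zero
    for n in range(order + 1):
        term = sp.S.Zero
        for k in range(n + 1):
            term += ((-1)**k * sp.binomial(n, k)
                     * d(d(A, q, n - k), p, k)
                     * d(d(B, p, n - k), q, k))
        total += (sp.I * hbar / 2)**n / sp.factorial(n) * term
    return sp.expand(total)

def moyal_bracket(A, B, order=6):
    return sp.simplify((star(A, B, order) - star(B, A, order)) / (sp.I * hbar))

# star(q, p) - star(p, q) = i*hbar reproduces [q, p] = i*hbar at the symbol level;
# for quadratic symbols the Moyal and Poisson brackets coincide, while higher
# polynomials pick up O(hbar^2) corrections, e.g.
# moyal_bracket(q**3, p**3) = 9*q**2*p**2 - (3/2)*hbar**2.
```

The quadratic case illustrates why classical-looking dynamics survives for harmonic Hamiltonians, while the cubic case exhibits the genuine $O(\hbar^2)$ deformation.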
The symbol map of the von Neumann equation is written as\cite{GR-93a}
\begin{eqnarray}
\label{me}
\partial_t \rho (\phi , t) &=&
-\{ \rho, H \}_{mb} \nonumber \\
&=&
-\ell_h \rho + O(\hbar^2 )
\end{eqnarray}
In the classical limit, this equation reduces to
the Liouville equation of classical mechanics\cite{Koopman-31,Neumann-32}.
To summarize, the symbol calculus can be treated as a smooth deformation
of classical mechanics linking the non-associative Poisson-bracket algebra of
classical observables, $A(\phi), \dots ,$ and the
associative operator algebra of quantum observables, $\hat A , \dots $.
Full details of the symbol calculus may be found in \cite{GR-93a,GR-93b}
and references therein.
\subsection{Chiral symmetry and unitary transformations}
Gozzi and Reuter\cite{GR-93b} have investigated recently the
algebraic properties of the quantum counterpart of the classical
canonical transformations using the symbol-calculus approach
to quantum mechanics.
They found, particularly, that the operators $L_f$ and $R_f$
acting as the left and right multiplication with symbol $f$,
respectively,
\begin{equation}
L_{f}g = f*g \qquad R_{f}g = g*f \label{LR}
\end{equation}
form two mutually commuting closed algebras, ${\cal A}_{L}$
and ${\cal A}_{R}$, (cf. \cite{JSPSV-77})
\begin{equation}
\label{LL}
\bigl[ L_{f_1} ,L_{f_2} \bigr] = i\hbar L_{\{f_1 ,f_2\}_{mb}} \qquad
\bigl[ R_{f_1} ,R_{f_2} \bigr] =-i\hbar R_{\{f_1 ,f_2\}_{mb}} \qquad
\bigl[ L_{f_1} ,R_{f_2} \bigr] = 0
\end{equation}
which are explicitly isomorphic to the original Moyal-bracket
algebra on $C(M_{2n})$.
Also, $L_f$ and $R_f$ can be presented by virtue of the
Moyal product (\ref{mp}) as\cite{GR-93b}
\begin{equation}
L_{f} = : f(\Phi^{i}_{+}) : \qquad
R_{f} = : f(\Phi^{i}_{-}) : \label{LRPhi}
\end{equation}
where $\Phi^{i}_{\pm}$ are defined by (\ref{Phi}), and $:\dots :$
means normal ordering symbol (all derivatives $\partial_{i}$ should be
placed to the right of all $\Phi$'s).
It has been shown\cite{GR-93b} that the linear combinations of the
above operators,
\begin{equation}
V^{\pm}_{f} = \frac{1}{i\hbar}(L_f \pm R_f ) \label{V}
\end{equation}
for real $f$, generate non-unitary,
$\hat g \rightarrow \hat{U}\hat g \hat{U}$,
and unitary,
$\hat g \rightarrow \hat{U}\hat g \hat{U^{-1}}$,
transformations, respectively ($\hat{U}$ is a unitary operator).
We see that the Wigner operators, $W_{\pm}$, given in the
Bopp-Kubo formulation by (\ref{WPhi}), are just
\begin{equation}
W_{\pm} = L_H \pm R_H = i\hbar V^{\pm}_{H} \label{WLR}
\end{equation}
so that the Wigner equation (\ref{Wigner}) can be written as
\begin{equation}
\partial_t \rho = V^{-}_{H} \rho \label{Wigner2}
\end{equation}
where $H$ is the Hamiltonian. So, we arrive at the conclusion that the
representations (\ref{LRPhi}) provide the relation between the Bopp-Kubo
and Weyl-Wigner-Moyal formulations.
Various algebraic properties of the generators $V^{-}_{f}$
have been found by Gozzi and
Reuter\cite{GR-93b}. Particularly, they found that
in two dimensional phase space the generators $V^{-}_{f}$, in the basis
$V_{\vec m} = -exp(i\vec m \vec \phi),\
\vec m = (m_1 , m_2) \in Z^2 $, on torus $M_2 = S^1 \times S^1$,
satisfy a kind of the $W_{\infty}$-algebra commutation relations,
\begin{equation}
\bigl[ V^{-}_{\vec m}, V^{-}_{\vec n} \bigr] =
\frac{2}{\hbar} \label{Winfty}
sin(\frac{\hbar}{2}m_{i}\omega^{ij}n_{j})V^{-}_{\vec m + \vec n}
\end{equation}
which are a deformed version of the $w_{\infty}$-algebra
of the classical $sdiff(T^2 )$, the area preserving diffeomorphisms
of the torus.
Also, an important result shown in \cite{GR-93b} is that $V^{-}_{f}$ is
invariant under the modular conjugation operator defined on symbols by
\begin{equation}
{\cal J} f = f^* \qquad
{\cal J}(f*g) = {\cal J}(g)*{\cal J}(f) \label{J}
\end{equation}
Namely,
\begin{equation}
{\cal J} L_f {\cal J} = R_f \qquad \label{JLJ}
{\cal J} R_f {\cal J} = L_f \qquad
{\cal J} V^{-}_f{\cal J} = V^{-}_f \qquad
{\cal J} V^{+}_f{\cal J} = -V^{+}_f
\end{equation}
This symmetry resembles the {\it chiral} symmetry and seems
to be broken in the classical mechanics.
This argument is supported by the fact that the Moyal
product (\ref{mp}) becomes commutative in the classical limit.
Indeed, in the classical case, the difference between the left
and right multiplications
on $C(M_{2n})$ disappears so that there is no room for the
modular conjugation operator ${\cal J}$, and the original algebra
${\cal A}_{L}\otimes{\cal A}_{R}$ is contracted to its diagonal
subalgebra\cite{GR-93b}.
The operator $V^{+}_f$ seems to have no analogue in the geometry
of phase space of classical mechanics
since $V^{+}_f$ blows up at $\hbar \rightarrow 0$ due to the
factor $1/i\hbar$ in the definition (\ref{V}).
However, $i\hbar V^{+}_{f}$ is ${\cal J}$-invariant and has
the classical limit
$i\hbar V^{+}_{f} = 2f + O(\hbar^{2})$
so that
$i\hbar V^{+}_{H} = W_{+}= 2H + O(\hbar^{2})$ is simply
two times Hamiltonian.
In the Bloch-Wigner equation (\ref{BlochWigner}),
$i\hbar V^{+}_f$ plays the role of Hamiltonian
defining the density matrix in quantum statistics\cite{Feynman-72}.
The operator $V^{-}_f$ has an explicit interpretation\cite{GR-93b} as a
{\it quantum deformed Lie derivative along the hamiltonian vector field}
in accordance with (\ref{Wigner2}).
Furthermore, in quantum mechanics the ${\cal J}$-invariance of
$V^{-}_f$ provides {\it unitary} time evolution due to the Wigner
equation (\ref{Wigner2}).
The structure of the Weyl-Wigner-Moyal calculus, which deals with
non-commutative algebra, may be seen in a more refined way from the
non-commutative geometry\cite{Connes-90,Coquereaux-92}
point of view as follows.
First, recall that the usual definition of a topological space $M$ is equivalent
to the definition of a commutative algebra ${\cal A}$ due to the identification
${\cal A} = C(M)$, with the algebra $C(M)$ of continuous complex valued
functions on $M$ (Gelfand correspondence).
Conversely, $M$ can be understood as the
spectrum of the algebra ${\cal A}$, {\it i.e.} points $x \in M$ are irreducible
representations owing to the relation $x[f] = f(x)$ for $f \in {\cal A}$.
The next step is that one is free to assume that the algebra ${\cal A}$ is
non-commutative in general, and then think about a non-commutative
version of the space $M$. In particular, the classical notion of a point
$x \in M$ is modified, in non-commutative geometry, due to the
basic relation mentioned above.
A specific example we will consider for our aims is a non-commutative
vector bundle.
In classical geometry, sections of a vector bundle $E$ over
a manifold $M$ play, in a physical context, the role of matter fields.
Here, an important point to be noted is that the space ${\cal E}$ of the
sections is a bimodule over the algebra ${\cal A} = C(M)$ of the functions
on $M$.
In the non-commutative case, there are {\it left} and {\it right}
modules over the non-commutative algebra ${\cal A}$ instead of the bimodule.
That is, for $\sigma \in {\cal E}$ and $f \in {\cal A}$, $f\sigma$ and
$\sigma f$ do not both make sense as elements of ${\cal E}$.
One may choose, for convenience, the right module, and then
characterize the non-commutative vector bundle as a quotient
of the free module ${\cal A}^{m}$, {\it i.e.} as a (right)
projective module over the algebra ${\cal A}$,
$\, {\cal E} = P{\cal A}^{m}$, for some projector $P, \, P^{2} = P$,
and some $m$.
In the symbol calculus, we have, obviously, ${\cal E} = {\cal A}$ itself,
where ${\cal A} = (C(M_{2n}), *)$ is the non-commutative algebra endowed
with the Moyal product. The sections are functions on $M_{2n}$ acting
by the left and right multiplications and forming, respectively,
left and right ${\cal A}$-modules. The modular conjugation acts due to
$$ {\cal J}: {\cal A}\otimes {\cal A} \rightarrow {\cal A}\otimes {\cal A}
$$
$$ (L,R) \mapsto (R,L)
$$
and ${\cal E}$ is the quotient,
${\cal E}= {\cal A}\otimes {\cal A} /{\cal J}$.
For the ${\cal A}$-modules to be {\it unital} one has to put
$I_{L}*f = f$
and
$f*I_{R} = f$, $\forall f \in C(M_{2n})$,
with $I_{L,R}$ being the left and right "identity" elements of ${\cal E}$.
Because
${\cal E}={\cal A}$,
we have actually $I_{L}=I_{R} = I \in C(M_{2n})$ so that the above
conditions imply
$$
f*I - I*f = 0 \qquad
f*I + I*f = 2f \qquad
$$
According to definitions (\ref{LR}) and (\ref{WLR}), these
equations can be rewritten as
\begin{equation}
V^{-}_{f}I = 0 \qquad \label{unital}
i\hbar V^{+}_{f}I = 2f \quad\qquad \forall f \in C(M_{2n})
\end{equation}
The question arises as to the existence of a unique function $I$
such that both equations (\ref{unital})
are satisfied for any function $f$.
We observe that in the classical case the last two equations have correct
limits at $\hbar \rightarrow 0$, and are satisfied identically for any $f$
only if $I(\phi ) = 1$, as expected ($1f = f1 = f$).
The Bopp-Kubo representation provides a realization of the representation
space of the algebra ${\cal A}$, with the variables $\Phi^{i}_{\pm}$,
which extends the usual $M_{2n}$ to the non-commutative case.
In the remainder of this section, we will consider the
extension of the classical {\it ergodicity} condition\cite{Arnold-68}.
The quantum mechanical analogue of the classical condition
of ergodicity can be written as
\begin{equation}
V^{-}_{H}\rho = 0 \label{ergoGozzi}
\end{equation}
due to comparison of (\ref{me}) and (\ref{Wigner2}),
with the solution $\rho$ being non-degenerate, at least at the classical
level.
In the classical limit, this equation reduces to the usual equation,
$L\rho = 0$, where $L$ denotes the Liouvillian, whose non-degenerate
eigenfunctions with zero eigenvalues describe ergodic Hamiltonian
systems\cite{Arnold-68}, which are characterized by the only
constant of motion, the energy $H$.
As to solutions, recent
studies\cite{GRT-91a}-\cite{Aringazin-93b}
of the classical ergodicity condition
within the path integral approach to classical mechanics show that
the solution is given specifically by the Gibbs state form.
The condition (\ref{ergoGozzi}) can be rewritten in the Bopp-Kubo
representation as
\begin{equation}
H(\Phi_{+})\rho(\phi ) = H(\Phi_{-})\rho(\phi ) \label{ergoBopp}
\end{equation}
where we have used the relations (\ref{WPhi}) and (\ref{WLR});
this means that the holomorphic and anti-holomorphic Hamiltonians
have the same spectrum.
Also, it is remarkable to note that the equation (\ref{ergoGozzi})
is similar to the first equation of (\ref{unital}), with
$f(\phi ) = H(\phi )$ and $I(\phi ) = \rho(\phi )$.
We pause here with the further discussion, stating that more
analysis is needed to verify the proposed extension of the
ergodicity condition (\ref{ergoGozzi}), which may be done
elsewhere.
\subsection{Translation operators}
The operator $T(\phi_0)$ defined by (\ref{TWeyl})
and used to represent the Weyl symbol map (\ref{symbol})
has the meaning of a translation operator in phase space.
Bopp\cite{Bopp-61} has introduced such an operator in the
$\Phi^{i}_{\pm}$-variables representation, and Jannussis
{\it et al.}\cite{JPB-78} have studied its properties.
Let us define the translation operators, in the Bopp-Kubo
formulation,
\begin{equation}
T_{\pm}(\phi_{0}) =
exp\bigl[ \pm\frac{i}{\hbar}\phi^{i}_{0}\omega_{ij}\Phi^{j}_{\pm}\bigr]
\label{T}
\end{equation}
where $\Phi^{i}_{\pm}$ are defined by (\ref{Phi}).
It is easy to verify that due to the fundamental commutation
relations (\ref{comff}) they build up two mutually commuting
algebras,
\begin{eqnarray}
\label{TT}
\bigl[ T_{\pm}(\phi_{1}),T_{\pm}(\phi_{2})\bigr] =
\pm 2i\ sin(\frac{1}{2\hbar}\phi^{i}_{1}\omega_{ij}\phi^{j}_{2})
T_{\pm}(\phi_{1} + \phi_{2}) \\
\bigl[ T_{\pm}(\phi_{1}),T_{\mp}(\phi_{2})\bigr] = 0
\end{eqnarray}
In the case of Birkhoffian generalization of Hamiltonian
mechanics\cite{Aringazin-93a}-\cite{GRT-91b}
one supposes that the symplectic 2-form $\omega$ depends on
phase space coordinates, $\omega = \omega (\phi)$, but it is
still non-degenerate and closed, $d\omega=0$. Consistency of the
Birkhoffian mechanics is provided by the Lie-isotopic
construction\cite{Santilli-88}-\cite{Aringazin-mono-91}.
In this case, the fundamental commutation relations (\ref{comff})
are essentially modified,
\begin{eqnarray}
\label{comffBirk}
\bigl[ \Phi^{i}_{\pm} , \Phi^{j}_{\pm} \bigr] &=&
\pm i\hbar\omega^{ij}
+ (\frac{i\hbar}{2})^2\omega^{mn}\omega^{ij}_{\ \ ,m}\partial_{n} \\
\bigl[ \Phi^{i}_{\pm} , \Phi^{j}_{\mp} \bigr] &=&
\mp (\frac{i\hbar}{2})^2\omega^{mn}\omega^{ij}_{\ \ ,m}\partial_{n} \nonumber
\end{eqnarray}
Consequently, the commutation relations (\ref{TT}) for the
translation operators are also changed.
Tedious calculations show that
\begin{eqnarray}
\label{TTBirk}
\bigl[ T_{\pm}(\phi_{1}), T_{\pm}(\phi_{2})\bigr] =
\pm 2i\ sin \Bigl( \frac{1}{\hbar}\phi^{i}_{1}\phi^{j}_{2}
(\omega_{ij} + \frac{1}{2}\omega_{ij,m}\phi^{m}) \Bigr)
T_{\pm}(\phi_{1}+\phi_{2}) \\
\bigl[ T_{\pm}(\phi_{1}),T_{\mp}(\phi_{2})\bigr] = \nonumber \\
\pm 2i\ sin \Bigl( \frac{1}{\hbar}\phi^{i}_{1}\phi^{j}_{2}
(\omega_{im,j} - \frac{1}{2}\omega_{ij,m})\phi^{m} \Bigr)
exp\bigl[\pm \frac{i}{\hbar}
(\phi^{i}_{1}\omega_{ij}\Phi^{j}_{\pm}
-\phi^{i}_{2}\omega_{ij}\Phi^{j}_{\mp})\bigr]
\end{eqnarray}
Here, we have used the identity
$\omega^{im}\omega^{jk}_{\ \ ,m} +
\omega^{jm}\omega^{ki}_{\ \ ,m} +
\omega^{km}\omega^{ij}_{\ \ ,m} = 0$,
and denote
$\omega_{ij,m} = \partial_{m}\omega_{ij}$.
We see that in the Birkhoffian case the holomorphic and
anti-holomorphic functions do not form two mutually
commuting algebras, in contrast to the Hamiltonian case
characterized by $\omega_{ij,m} = 0$.
Evidently, the Birkhoffian generalization is important
for the case when the symplectic manifold cannot be covered
by a {\it global} chart with constant symplectic tensor
$\omega_{ij}$. This is, for example, the case of $M_{2n}$ with
a non-trivial topology. However, it should be noted that
the symplectic manifold can always be covered by local
charts with constant $\omega_{ij}$ due to the Darboux theorem.
\section{$q$-deformed harmonic oscillator in phase space}
\subsection{Harmonic oscillator in the Bopp-Kubo phase space representation}
Instead of studying the harmonic oscillator in phase space via
the Wigner equation (\ref{Wigner}) it is more convenient
to exploit corresponding creation and annihilation operators
in the phase space.
Jannussis, Patargias and Brodimas\cite{JPB-78} have defined the
following two pairs of creation and annihilation operators
within the Bopp-Kubo formulation:
\begin{eqnarray}
\label{a-}
a_{-} = \frac{1}{\sqrt 2}
\bigl(\sqrt{\frac{m\omega}{\hbar}}Q+i\sqrt{\frac{1}{m\omega\hbar}}P\bigr) \\
a^{+}_{-} = \frac{1}{\sqrt 2} \label{a+-}
\bigl(\sqrt{\frac{m\omega}{\hbar}}Q-i\sqrt{\frac{1}{m\omega\hbar}}P\bigr) \\
a_{+} = \frac{1}{\sqrt 2} \label{a+}
\bigl(\sqrt{\frac{m\omega}{\hbar}}Q^*
+i\sqrt{\frac{1}{m\omega\hbar}}P^*\bigr)\\
a^{+}_{+} = \frac{1}{\sqrt 2} \label{a++}
\bigl(\sqrt{\frac{m\omega}{\hbar}}Q^* -
i\sqrt{\frac{1}{m\omega\hbar}}P^*\bigr)
\end{eqnarray}
These operators obey the following usual commutation relations:
\begin{equation}
\bigl[a_{\pm}, a^{+}_{\pm}\bigr] = 1 \quad \label{aa}
\bigl[a_{\pm}, a^{+}_{\mp}\bigr] =
\bigl[a_{\pm}, a_{\mp}\bigr] =
\bigl[a^{+}_{\pm}, a^{+}_{\mp}\bigr] = 0
\end{equation}
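These relations can be checked symbolically. The sketch below (Python/SymPy) assumes the common Bopp-shift realization $Q = q + \frac{i\hbar}{2}\partial_p$, $P = p - \frac{i\hbar}{2}\partial_q$ (sign conventions vary in the literature) and verifies $[a_{-}, a^{+}_{-}] = 1$ by acting on an arbitrary function $f(q,p)$:

```python
import sympy as sp

q, p = sp.symbols('q p', real=True)
hbar, m, w = sp.symbols('hbar m omega', positive=True)

# Bopp shifts (one common sign convention -- an assumption here):
Q = lambda f: q*f + sp.I*hbar/2 * sp.diff(f, p)
P = lambda f: p*f - sp.I*hbar/2 * sp.diff(f, q)

# a_- and a_-^+ built from Q and P as in the text:
a    = lambda f: (sp.sqrt(m*w/hbar)*Q(f) + sp.I*sp.sqrt(1/(m*w*hbar))*P(f))/sp.sqrt(2)
adag = lambda f: (sp.sqrt(m*w/hbar)*Q(f) - sp.I*sp.sqrt(1/(m*w*hbar))*P(f))/sp.sqrt(2)

f = sp.Function('f')(q, p)
comm = sp.simplify(a(adag(f)) - adag(a(f)))
print(comm)  # simplifies to f(q, p), i.e. [a_-, a_-^+] = 1
```

The mixed-derivative terms cancel identically, so only the c-number $[Q,P]=i\hbar$ contribution survives.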
The Bopp-Kubo holomorphic and anti-holomorphic Hamiltonians for
the harmonic oscillator then read
\begin{eqnarray}
\label{H}
H(P,Q) =
\frac{P^2}{2m} + \frac{m}{2}\omega^2 Q^2
=
\hbar\omega (a^{+}_{-}a_{-} + \frac{1}{2}) \\
H(P^* ,Q^* ) =
\frac{P^{*2}}{2m} + \frac{m}{2}\omega^2 Q^{*2}
=
\hbar\omega (a^{+}_{+}a_{+} + \frac{1}{2})
\end{eqnarray}
and the Wigner operator due to (\ref{W}) takes the form
\begin{eqnarray}
\label{Waa}
W_{-} &=& \hbar\omega (a^{+}_{+}a_{+} - a^{+}_{-}a_{-}) \\
\label{Wnn}
&\equiv& \hbar\omega (\hat{n_1} - \hat{n_2})
\end{eqnarray}
In the two-particle Fock space ${\cal F}_{1} \otimes{\cal F}_{2}$
with the basis $|n_1 \, n_2\rangle$, the pairs of operators
(\ref{a-})-(\ref{a++}) act due to
\begin{eqnarray}
\label{aa+-}
a^{+}_{-}|n_1 \, n_2\rangle = \sqrt{n_1 +1}|n_1 +1 \, n_2\rangle \\
\label{aa-}
a^{ }_{-}|n_1 \, n_2\rangle = \sqrt{n_1} |n_1 -1 \, n_2\rangle \\
\label{aa++}
a^{+}_{+}|n_1 \, n_2\rangle = \sqrt{n_2 +1}|n_1 \, n_2 +1\rangle \\
\label{aa+}
a^{ }_{+}|n_1 \, n_2\rangle = \sqrt{n_2} |n_1 \, n_2 -1\rangle
\end{eqnarray}
Then, the Wigner operator (\ref{Wnn}) has the following eigenvalues
\begin{equation}
W_{-}|n_1 \, n_2\rangle = \hbar\omega (n_1 - n_2)|n_1 \, n_2 \rangle \label{W-eigen}
\end{equation}
The eigenfunctions of the Wigner operator (\ref{Wnn}) have
the following form\cite{JPB-78,JP-77}:
\begin{equation}
\varphi_{n_1 n_2}(p,q) = \label{varphi}
\int dp_0 dq_0\ T_{+}(p_0 , q_0 )\varphi_{0n_2}(p,q)\varphi_{n_1 0}(p,q)
\end{equation}
where
\begin{equation}
\varphi_{n_1 0} =
\frac{1}{\pi\sqrt{\hbar}}\frac{1}{\sqrt{n_1 !}}
\Bigl(\frac{2m\omega}{\hbar}\Bigr)^{n_1 /2}
\Bigl( q - i\frac{p}{m\omega}\Bigr)^{n_1}
\exp\Bigl(- \frac{2H(p,q)}{\hbar\omega}\Bigr) \label{varphi1}
\end{equation}
and the same for $\varphi_{0 n_2}$ with the replacement
$n_1 \rightarrow n_2$ in the r.h.s. of (\ref{varphi1}).
The vacuum is characterized by the Gibbs state form
\begin{equation}
\varphi_{00} = \frac{1}{\pi\sqrt{\hbar}}
\exp\Bigl(- \frac{2H(p,q)}{\hbar\omega}\Bigr) \label{vacuum}
\end{equation}
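Consistently with its interpretation as the vacuum, one can check that (\ref{vacuum}) is annihilated by $a_{-}$. The SymPy sketch below again assumes the Bopp shifts $Q = q + \frac{i\hbar}{2}\partial_p$, $P = p - \frac{i\hbar}{2}\partial_q$ (an assumed sign convention):

```python
import sympy as sp

q, p = sp.symbols('q p', real=True)
hbar, m, w = sp.symbols('hbar m omega', positive=True)

Q = lambda f: q*f + sp.I*hbar/2 * sp.diff(f, p)
P = lambda f: p*f - sp.I*hbar/2 * sp.diff(f, q)
a_minus = lambda f: (sp.sqrt(m*w/hbar)*Q(f)
                     + sp.I*sp.sqrt(1/(m*w*hbar))*P(f))/sp.sqrt(2)

# Gibbs-type vacuum, H = p^2/2m + m w^2 q^2 / 2:
H = p**2/(2*m) + m*w**2*q**2/2
vac = sp.exp(-2*H/(hbar*w)) / (sp.pi*sp.sqrt(hbar))

res = sp.simplify(a_minus(vac))
print(res)  # -> 0, i.e. a_- annihilates the vacuum
```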
The action of the Bopp translation operators on the functions
(\ref{varphi1}) can be easily determined, and the result is
\begin{equation}
T_{\pm}(p_0 , q_0 )\varphi_{n_1 0}(p,q)=
\exp\Bigl(\pm \frac{i}{\hbar}(p_0 q - q_0 p)\Bigr)
\varphi_{n_1 0}(p+p_0 , q+q_0 ) \label{Tvarphi}
\end{equation}
The commutators (\ref{TT}) take the form
\begin{equation}
\label{TT2dim}
\bigl[ T_{\pm}(p_{1},q_{1}), T_{\pm}(p_{2},q_{2})\bigr] =
\pm 2i\sin\frac{1}{\hbar}(p_{1}q_{2}-q_{1} p_{2})
T_{\pm}(p_{1}+p_{2},q_{1}+q_{2})
\end{equation}
The translation operators in (\ref{TT2dim}) commute when
\begin{equation}
\frac{1}{\hbar}(p_{1}q_{2} - q_{1}p_{2}) = \pi l
\qquad l \in Z \label{flux}
\end{equation}
This condition is similar to the one of quantization of magnetic
flux for $2D$-electron gas in uniform magnetic field\cite{JPB-78}.
This means that the phase space acquires a $2D$-lattice structure with
the basic unit-cell vectors
$\vec \phi_{1} = (p_{1},q_{1})$
and
$\vec \phi_{2} = (p_{2},q_{2})$
obeying (\ref{flux}), i.e.
\begin{equation}
\vec n \cdot\vec \phi_{1}\times\vec \phi_{2} = l\Psi_{0}
\qquad \Psi_{0} = \pi\hbar \label{aflux}
\end{equation}
The degeneracy of the energy levels of the harmonic oscillator
in the phase space is then related to the lattice structure.
Namely, the representation (\ref{varphi}) of $\varphi_{n_{1}n_{2}}$
means that one ``smears'' the product
$\varphi_{n_{1}0}\varphi_{0n_{2}}$
(a ``composite state'' of two identical systems) over the whole phase space.
So, $\varphi_{n_{1}n_{2}}$ remain to be eigenfunctions with the same
eigenvalues under the translations of the form
$\vec R = N_{1}\vec \phi_{1} + N_{2}\vec \phi_{2},\ N_{1,2}\in Z,$
leaving the lattice invariant. This is a kind of the magnetic group
periodicity\cite{Zak-64}-\cite{Cristofano-91}.
To implement the lattice structure of the phase space explicitly
one may start with the vacuum state (\ref{vacuum}), which is
characterized by zero angular momentum, to define four sets of
functions
\begin{equation}
\varphi^{(\alpha )}_{\vec k}(\vec \phi ) =
\sum_{\vec R^{\alpha}} \exp(i\vec k \cdot\vec R^{\alpha}) \label{varphiR}
T_{+}(\vec R^{\alpha})\varphi_{00}(\vec \phi )
\end{equation}
where
\begin{eqnarray}
\label{R}
\vec R^{\alpha} = \vec R_{0} + I^{\alpha}_{i} \qquad
\vec R_{0} = N_{1}\vec \phi_{1} + N_{2}\vec \phi_{2} \qquad
\alpha = 0,1,2,3 \\
I^{0}_{i}= (0,0)\qquad
I^{1}_{i}= (1,0)\qquad
I^{2}_{i}= (0,1)\qquad
I^{3}_{i}= (1,1) \nonumber
\end{eqnarray}
and the sum is over all four-sets of the $2D$-lattice points.
The unit cell in the definition of each $\vec R^{\alpha}$
has $4l$ flux quanta $\Psi_{0}$ passing through it.
Gozzi and Reuter\cite{GR-93b} have argued that there is a close
relation between the symbol-calculus formalism and the
Gelfand-Naimark-Segal construction\cite{Thirring-79}.
In general, the GNS construction is specifically aimed to
define non-commutative measure and topology\cite{Coquereaux-92}.
The GNS construction provides bra-ket-type averaging, instead of
the usual trace averaging, in the thermo field theory\cite{Umezava-82}
when one deals with {\it mixed} states. This construction assumes
a double Hilbert space representation of states,
$||\hat A \rangle\rangle =
\sum A_{\alpha\beta}|\alpha\rangle \otimes|\beta\rangle
\in {\cal H}\otimes{\cal H}$.
So, particularly, the average of $\hat A$ is given by
$\langle \hat A \rangle =
\langle\langle \hat \rho^{1/2}||
\hat{A} \otimes \hat{I}
||\hat \rho^{1/2}\rangle\rangle $,
with $\hat{I}$ being identity operator.
The modular conjugation operator ${\cal J}$ acts on the double
Hilbert space by interchanging the two Hilbert spaces\cite{GR-93b}.
Time evolution of the GNS density is given by
$i\hbar\partial_{t}||\hat \rho^{1/2}\rangle\rangle
= H^{-}||\hat \rho^{1/2}\rangle\rangle$.
Here, the GNS Hamiltonian
$H^{-} = \hat{H}\otimes I - {\cal J}(\hat{H}\otimes I){\cal J}$
can be evidently associated with the Wigner operator
$W_{-} = i\hbar V^{-}_{H}$.
In view of the analysis of the oscillator in phase space,
the eigenfunctions $\varphi_{n_{1}0}$ and $\varphi_{0n_{2}}$
can be ascribed to the two pieces of the GNS double Hilbert space.
Also, the GNS double Hilbert space is associated to the
double Fock space ${\cal F}_{1}\otimes{\cal F}_{2}$, with the
modular conjugation operator ${\cal J}$ acting on
${\cal F}_{1}\otimes{\cal F}_{2}$ by interchanging the two Fock
spaces.
\subsection{$q$-deformed harmonic oscillator in phase space}
The $q$-deformation of the commutation relations (\ref{aa})
for the Bopp-Kubo creation and annihilation operators
(\ref{a-})-(\ref{a++}) reads
\begin{equation}
b_{-}b^{+}_{-} - \frac{1}{q}b^{+}_{-}b_{-} = q^{\hat n_{1}} \qquad\label{bqb}
b_{+}b^{+}_{+} - \frac{1}{q}b^{+}_{+}b_{+} = q^{\hat n_{2}}
\end{equation}
The bozonization procedure of the above operators according to
Jannussis {\it et al.}\cite{JPB-78} yields the following expressions
for the $q$-deformed operators ($q$-bosons):
\begin{eqnarray}
\label{boson}
b_{-}=\sqrt{\frac{\bigl[\hat n_{1} +1\bigr]}{\hat n_{1} +1}}a_{-} \qquad
b^{+}_{-}=a^{+}_{-}\sqrt{\frac{\bigl[\hat n_{1} +1\bigr]}{\hat n_{1} +1}}\\
b_{+}=\sqrt{\frac{\bigl[\hat n_{2} +1\bigr]}{\hat n_{2} +1}}a_{+} \qquad
b^{+}_{+}=a^{+}_{+}\sqrt{\frac{\bigl[\hat n_{2} +1\bigr]}{\hat n_{2} +1}}
\end{eqnarray}
where
$\bigl[ x\bigr] = (q^{x}-q^{-x})/(q-q^{-1})$ and $a_{\pm}$ and
$a^{+}_{\pm}$ are given by (\ref{a-})-(\ref{a++}).
Due to these definitions, we can directly find that
\begin{eqnarray}
\label{bn}
b_{-}b^{+}_{-} = \bigl[ \hat n_{1} +1\bigr] \qquad
b^{+}_{-}b_{-} = \bigl[ \hat n_{1} \bigr] \\
b_{+}b^{+}_{+} = \bigl[ \hat n_{2} +1\bigr] \qquad
b^{+}_{+}b_{+} = \bigl[ \hat n_{2} \bigr]
\end{eqnarray}
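The deformed relations (\ref{bqb}) then reduce to the $q$-number identity $[n+1] - q^{-1}[n] = q^{n}$, which can be checked numerically (sample values of $q$, $n$ and the tolerance below are arbitrary):

```python
def qnum(x, q):
    """q-number [x] = (q^x - q^-x) / (q - q^-1)."""
    return (q**x - q**(-x)) / (q - q**(-1))

# Check b b^+ - (1/q) b^+ b = q^n, i.e. [n+1] - [n]/q = q^n:
q = 1.3
for n in range(6):
    lhs = qnum(n + 1, q) - qnum(n, q) / q
    assert abs(lhs - q**n) < 1e-12
print("ok")
```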
Clearly, $b^{+}_{\pm}=(b_{\pm})^{\dagger}$ if $q \in R$ or $q \in S^{1}$.
The actions of the $q$-boson operators on the Fock space
${\cal F}_{1}\otimes{\cal F}_{2}$ with the basis
\begin{equation}
| n_{1}\, n_{2}\rangle =
\frac{(b^{+}_{-})^{n_{1}}(b^{+}_{+})^{n_{2}}}
{\sqrt{n_{1}!}\sqrt{n_{2}!}}| 0\, 0\rangle \label{basis}
\end{equation}
have the form
\begin{eqnarray}
\label{b-action}
b_{-} | n_{1}\, n_{2}\rangle =
\sqrt{\bigl[n_{1}\bigr]} | n_{1}-1\, n_{2}\rangle \qquad
b^{+}_{-}| n_{1}\, n_{2}\rangle =
\sqrt{\bigl[n_{1}+1\bigr]}| n_{1}+1\, n_{2}\rangle \\
b_{+} | n_{1}\, n_{2}\rangle =
\sqrt{\bigl[n_{2}\bigr]} | n_{1}\, n_{2}-1\rangle \qquad
b^{+}_{+}| n_{1}\, n_{2}\rangle =
\sqrt{\bigl[n_{2}+1\bigr]}| n_{1}\, n_{2}+1\rangle
\end{eqnarray}
In the following we consider the algebra implied by the generators
\begin{equation}
J_+ = b^{+}_{-}b_{+} \qquad
J_- = b^{+}_{+}b_{-} \label{J+-}
\end{equation}
It is a matter of straightforward calculations to find that
\begin{equation}
\bigl[J_{+},J_{-}\bigr] = \bigl[2J_{3}\bigr] \qquad
\bigl[J_{3},J_{\pm}\bigr] = \pm J_{\pm} \label{J+J-}
\end{equation}
where
\begin{equation}
2J_{3} = \hat n_{1} - \hat n_{2} \label{J3}
\end{equation}
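The first relation in (\ref{J+J-}) rests on the $q$-number identity $[a][b+1] - [b][a+1] = [a-b]$: on a state $|n_1\, n_2\rangle$ the commutator $[J_+, J_-]$ acts diagonally and its eigenvalue reduces to $[n_1 - n_2] = [2J_3]$. A short numerical check of the identity (arbitrary sample $q$ and tolerance):

```python
def qnum(x, q):
    """q-number [x] = (q^x - q^-x) / (q - q^-1)."""
    return (q**x - q**(-x)) / (q - q**(-1))

# Verify [n1][n2+1] - [n2][n1+1] = [n1 - n2], which underlies [J+, J-] = [2 J3]:
q = 0.7
for n1 in range(5):
    for n2 in range(5):
        lhs = qnum(n1, q)*qnum(n2 + 1, q) - qnum(n2, q)*qnum(n1 + 1, q)
        assert abs(lhs - qnum(n1 - n2, q)) < 1e-12
print("ok")
```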
One can recognize that the above relations are standard quantum
algebra $su_{q}(2)$ commutation relations,
in the Kulish-Reshetikhin-Drinfeld-Jimbo
realization\cite{Biedenharn-89}-\cite{SK-92} according to which
$su_{q}(2)$ can be realized by two commuting sets of $q$-bosons
($q-$deformed version of the Jordan-Schwinger approach to angular
momentum). Hereafter, we write $su_{q}(2)$ to denote the
quantum algebra which is in fact $U_{q}(su(2))$.
Comparing (\ref{J3}) with (\ref{Wnn}) we see that the Wigner
operator for harmonic oscillator is just proportional to the
3-axis projection of the ($q$-deformed) spherical angular momentum
operator,
\begin{equation}
W_{-} = 2\hbar\omega J_{3} \label{WJ}
\end{equation}
Indeed, in the $su(2)$ notations\cite{Macfarlane-89}
for basis vector $|n_{1}n_{2}\rangle$,
\begin{equation}
|j\, m\rangle = |n_{1}\, n_{2}\rangle \qquad \label{jm-state}
j = \frac{1}{2}(n_{1}+n_{2}) \qquad
m = \frac{1}{2}(n_{1}-n_{2})
\end{equation}
the operators $J_{\pm}$ and $J_{3}$ act on
${\cal F}_{1}\otimes{\cal F}_{2}$ according to
\begin{eqnarray}
\label{J-action}
J_{-}|j\, m\rangle =
\sqrt{\bigl[j+m\bigr]\bigl[j-m+1\bigr]}|j\, m-1\rangle \nonumber\\
J_{+}|j\, m\rangle =
\sqrt{\bigl[j-m\bigr]\bigl[j+m+1\bigr]}|j\, m+1\rangle \\
J_{3}|j\, m\rangle =
m|j\, m\rangle \nonumber
\end{eqnarray}
For a fixed value $2j \in Z$, the vectors $|j\ m\rangle$ span the irrep $(j)$
of the quantum algebra $su_{q}(2)$.
We assume that $q$ is not a root of unity.
Accordingly, the charge operator $J = \frac{1}{2}(\hat{n_{1}}+\hat{n_{2}})$
commutes with $J_{\pm ,3}$, and $J|j\, m\rangle = j|j\, m\rangle$.
The role of the Wigner density operator $W_{+}$ can be seen from
the following.
The basic fact\cite{SK-92}
is that the vector $|n_{1}n_{2}\rangle \equiv |j\ m\rangle$
can be represented also as a basis vector $|k\ l\rangle$ for the irrep
belonging to the positive discrete series of
$su_{q}(1,1) \approx sp_{q}(2,R)$ with the hyperbolic angular momentum,
$k = \frac{1}{2}(n_{1}-n_{2}-1)= m-\frac{1}{2}$,
and 3-axis projection,
$l = \frac{1}{2}(n_{1}+n_{2}+1)= j+\frac{1}{2}$.
The generators of $su_{q}(1,1)$ are
\begin{equation}
K_{+} = b^{+}_{+}b^{+}_{-} \qquad
K_{-} = b_{+}b_{-} \qquad
K_{3} = J + \frac{1}{2}
\end{equation}
Particularly, the 3-axis hyperbolic angular momentum operator $K_{3}$
acts due to
\begin{equation}
K_{3}|k\ l\rangle = l|k\ l\rangle
\end{equation}
Thus, the Wigner density operator
$W_+ = H(\Phi_{-}) + H(\Phi_{+}) = \hbar\omega (\hat n_{1}+\hat n_{2}+1)$
can be immediately identified with $K_{3}$,
\begin{equation}
W_{+} = 2\hbar\omega K_{3} \label{WK}
\end{equation}
To summarize, we note that the harmonic ($q$-)oscillator in phase space
naturally leads to the Jordan-Schwinger approach
to ($q$-deformed) angular momentum, with the Wigner operator
$W_{-}$ ($W_{+}$) being
identified with the 3-axis spherical (hyperbolical) angular
momentum operator.
As a final remark, we notice that there are ways to give geometrical
interpretation of the quantum algebras and its representations.
Namely, one may follow the line of reasoning
by Fiore\cite{Fiore-93} and construct a realization of the
quantum algebra within $Diff(M_{q})$, where $M_{q}$ is
a $q$-deformed version of the ordinary manifold.
For example, in the context of the $q$-oscillator in phase space
it is highly interesting to find such a realization
for the algebra $su_{q}(1,1) \approx sp_{q}(2,R)$, which
concerns the $q$-deformed phase space.
Also, there is a possibility\cite{Franco-93} to give
a geometric interpretation of the representations of $su_{q}(2)$
following the lines of the standard Borel-Weyl-Bott
theory\cite{Wallach-73,Fulton-91}.
\section{Conclusions}
We studied the relation between the Bopp-Kubo formulation and
the Weyl-Wigner-Moyal calculus of quantum mechanics in phase space
which is found to arise from the fact that the Moyal product of
functions on phase space,
$f(\phi )*g(\phi )$,
can be rewritten equivalently as the product of functions
defined on the extended phase space,
$f(\Phi )g(\phi )$.
From the non-commutative geometry point of view, the phase-space
formulation of quantum mechanics is an example of the theory
with non-commutative geometry. The non-commutative algebra ${\cal A}$
is the algebra of functions on phase space endowed with the
Moyal product. The right and left ${\cal A}$-modules are interchanged
by the modular conjugation ${\cal J}$ so that the space of sections
${\cal E} ={\cal A} \otimes {\cal A}/{\cal J}$, and there is
a kind of chiral symmetry due to the non-commutativity.
Due to a similarity between the phase-space formulation of quantum
mechanics and Hamiltonian formulation of classical mechanics,
there is an attractive possibility to extend useful classical
notions and tools to quantum mechanics. An attempt is made
to formulate the quantum extension of the classical ergodicity
condition.
We studied one-dimensional harmonic ($q$-)oscillator in phase space.
The phase space has a $2D$-lattice structure, similar to the one
of the magnetic group periodicity for the $2D$-electron gas in
magnetic field.
The Fock space for the oscillator is related to the double
Hilbert space of the Gelfand-Naimark-Segal construction in
accordance with the relation between the symbol calculus and
the GNS construction.
For the $q$-oscillator, the Wigner operator $W_{-}$($W_{+}$)
is found to be proportional to the
3-axis spherical (hyperbolical) angular momentum operator of the
$q$-deformed algebra $su_{q}(2)$ ($su_{q}(1,1)\approx sp_{q}(2,R)$).
The analysis of this paper is valid at a fixed value of time. Its
reformulation in a form invariant under time evolution will be studied in a
future paper.
\newpage
\section{Introduction}
Graph partitioning (GP) is a key prerequisite for efficient large-scale parallel graph algorithms.
A prominent example is the PageRank algorithm \cite{BrinP98}, which is used by search engines such as Google to order web pages displayed to the user by their importance.
As huge networks become abundant, there is a need for their parallel analysis.
In many cases, a graph needs to be partitioned or clustered such that there are few edges between the blocks (pieces).
In particular, when you process a graph in parallel on $k$ PEs (processing elements), you often want to partition the graph into $k$ blocks of about equal
size. In this paper we focus on a version of the problem that constrains the
maximum block size to $(1+\epsilon)$ times the average block size and tries to
minimize the total cut size, i.e., the number of edges that run between blocks.
It is well-known that there are more realistic (and more complicated) objective
functions involving also the block that is worst and the number of its
neighboring nodes \cite{HendricksonK00}, but minimizing the cut size has been adopted as
a kind of standard since it is usually highly correlated with the other
formulations.
The graph partitioning problem is NP-complete \cite{Hyafil73,Garey1974} and there is no approximation algorithm with a constant ratio factor for general graphs \cite{BuiJ92}.
Hence, heuristic algorithms are used in practice.
A successful heuristic for partitioning large graphs is the \emph{multilevel graph partitioning} (MGP) approach depicted in Figure~\ref{fig:mgp},
where the graph is recursively \emph{contracted} to achieve smaller graphs which should reflect the same basic structure as the input graph. After applying an \emph{initial partitioning} algorithm to the smallest graph, the contraction is undone and, at each level, a
\emph{local search} method is used to improve the partitioning induced by the coarser level.
The main contributions of this paper are a scalable parallelization of the size-constrained label propagation algorithm and an integration into a multilevel framework that enables us to partition large complex networks.
The parallel size-constrained label propagation algorithm is used to compute a graph clustering which is contracted.
This is repeated until the graph is small enough.
The coarsest graph is then partitioned by the coarse-grained distributed evolutionary algorithm KaFFPaE~\cite{kaffpaE}.
During uncoarsening the size-constraint label propagation algorithm is used as a simple, yet effective, parallel local search algorithm.
The presented scheme speeds up computations and improves solution quality on graphs that have a very irregular structure such as social networks or web graphs.
For example, a variant of our algorithm is able to compute a partition of a web graph with billions of edges in only a few seconds while producing high quality solutions.
We organize the paper as follows.
We begin in Section~\ref{s:preliminaries} by introducing basic concepts and outlining related work.
Section~\ref{s:sequentialcontraction} reviews the recently proposed cluster contraction algorithm \cite{pcomplexnetworksviacluster} to partition complex networks, which is parallelized in this work.
The main part of the paper is Section~\ref{s:parallelization}, which covers the parallelization of the size-constrained label propagation algorithm, the parallel contraction and uncontraction algorithm, as well as the overall parallel system.
A summary of extensive experiments to evaluate the algorithm's performance is presented in Section~\ref{s:experiments}.
Finally, we conclude in Section~\ref{s:conclusion}.
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{pics/MGP.pdf}
\caption{Multilevel graph partitioning. The graph is recursively contracted to achieve smaller graphs. After the coarsest graph is initially partitioned, a local search method is used on each level to improve the partitioning induced by the coarser level.}
\label{fig:mgp}
\end{figure}
\vfill
\pagebreak
\section{Preliminaries}\label{s:preliminaries}
\subsection{Basic concepts}
Let $G=(V=\{0,\ldots, n-1\},E,c,\omega)$ be an undirected graph
with edge weights $\omega: E \to \ensuremath{\mathbb{R}}_{>0}$, node weights
$c: V \to \ensuremath{\mathbb{R}}_{\geq 0}$, $n = |V|$, and $m = |E|$.
We extend $c$ and $\omega$ to sets, i.e.,
$c(V')\Is \sum_{v\in V'}c(v)$ and $\omega(E')\Is \sum_{e\in E'}\omega(e)$.
$\Gamma(v)\Is \setGilt{u}{\set{v,u}\in E}$ denotes the neighbors of $v$.
A node $v \in V_i$ that has a neighbor $w \in V_j, i\neq j$, is a \emph{boundary node}.
We are looking for \emph{blocks} of nodes $V_1$,\ldots,$V_k$
that partition $V$, i.e., $V_1\cup\cdots\cup V_k=V$ and $V_i\cap V_j=\emptyset$
for $i\neq j$. The \emph{balancing constraint} demands that
$\forall i\in \{1..k\}: c(V_i) \leq L_{\max} := (1+\epsilon)\lceil\frac{c(V)}{k}\rceil$
for some imbalance parameter $\epsilon$.
The objective is to minimize the total \emph{cut} $\sum_{i<j}w(E_{ij})$ where
$E_{ij}\Is\setGilt{\set{u,v}\in E}{u\in V_i,v\in V_j}$.
We say that a block $V_i$ is \emph{underloaded} if $c(V_i) < L_{\max}$ and \emph{overloaded} if $c(V_i) > L_{\max}$.
A clustering is also a partition of the nodes, however, $k$ is usually not given in advance and the balance constraint is removed.
A size-constrained clustering constrains the size of the blocks of a clustering by a given upper bound $U$ such that $c(V_i) \leq U$.
Note that by adjusting the upper bound one can control the number of blocks of a feasible clustering to some extent.
For example, when using $U=1$, the only feasible size-constrained clustering in an unweighted graph is the clustering where each node forms a block on its own.
An abstract view of the partitioned graph is the so-called \emph{quotient graph}, in which nodes represent blocks and edges are induced by connectivity between blocks.
The \emph{weighted} version of the quotient graph has node weights which are set to the weight of the corresponding block and edge weights which are equal to the weight of the edges that run between the respective blocks.
By default, our initial inputs will have unit edge and node weights.
However, even those will be translated into weighted problems in the course of the multilevel algorithm.
In order to avoid tedious notation, $G$ will denote the current state of the graph before and after a (un)contraction in the multilevel scheme throughout this paper.
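For unit node and edge weights, the cut objective and the balancing constraint above can be evaluated directly. The following Python sketch (with hypothetical helper names, purely to illustrate the definitions) computes the total cut and checks $c(V_i) \leq L_{\max}$ for all blocks:

```python
from collections import defaultdict
import math

def cut_and_balance(edges, part, k, eps=0.03):
    """Total cut and balance check for a partition `part` (node -> block id)
    of an unweighted graph given as an edge list."""
    cut = sum(1 for u, v in edges if part[u] != part[v])
    sizes = defaultdict(int)
    for v in part:
        sizes[part[v]] += 1
    # L_max = (1 + eps) * ceil(c(V) / k), with unit node weights:
    lmax = (1 + eps) * math.ceil(len(part) / k)
    balanced = all(sizes[b] <= lmax for b in range(k))
    return cut, balanced

# Tiny example: a path 0-1-2-3 split into blocks {0,1} and {2,3}.
edges = [(0, 1), (1, 2), (2, 3)]
part = {0: 0, 1: 0, 2: 1, 3: 1}
print(cut_and_balance(edges, part, k=2))  # -> (1, True)
```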
\subsection{Related Work}
\label{s:related}
There has been a \emph{huge} amount of research on graph partitioning so that we refer the reader to \cite{schloegel2000gph,GPOverviewBook,SPPGPOverviewPaper} for most of the material.
Here, we focus on issues closely related to our main contributions.
All general-purpose methods that are able to obtain good partitions for large real-world graphs are based on the multilevel principle.
The basic idea can be traced back to multigrid
solvers for solving systems of linear equations \cite{Sou35} but
more recent practical methods are based on mostly graph theoretic aspects, in
particular edge contraction and local search.
There are many ways to create graph hierarchies such as matching-based schemes \cite{Chaco,Walshaw07,karypis1998fast,Monien2000,Scotch} or variations thereof \cite{Karypis06} and techniques similar to algebraic multigrid \cite{meyerhenke2006accelerating,ChevalierS09,SafroSS12}. We refer the interested reader to the respective papers for more details.
Well-known software packages based on this approach include Chaco \cite{Chaco}, Jostle~\cite{Walshaw07}, Metis \cite{karypis1998fast}, Party~\cite{Monien2000} and Scotch \cite{Scotch}.
While Chaco and Party are no longer developed and have no parallel version,
the others have been parallelized, too.
Most probably the fastest available parallel code is the parallel version of Metis, ParMetis \cite{karypis1996parallel}.
The parallel version of Jostle \cite{Walshaw07} applies local search to pairs of neighboring partitions and is restricted to the case where the number of blocks equals the number of processors.
This parallelization has problems maintaining the balance of the partitions since at
any particular time, it is difficult to say how many nodes are assigned to a
particular block.
PT-Scotch \cite{ptscotch}, the parallel version of Scotch, is based on
recursive bipartitioning. This is more difficult to parallelize than direct
$k$-partitioning since in the initial bipartition, there is less parallelism
available. The unused processor power is used by performing several independent
attempts in parallel. The involved communication effort is reduced by considering only nodes
close to the boundary of the current partitioning (band-refinement).
KaPPa \cite{kappa} is a parallel matching-based MGP algorithm which is also restricted to the case where the number of blocks equals the number of processors used.
PDiBaP \cite{Meyerhenke12shape} is a multilevel diffusion-based algorithm that is targeted at small to medium scale parallelism with dozens of processors.
As reported by~\cite{tian2013think}, most large-scale graph processing toolkits based on cloud computing use ParMetis or rather
straightforward partitioning
strategies such as hash-based partitioning. While hashing often leads to acceptable balance, the edge cut obtained for complex
networks is very high. To address this problem, Tian et~al.\ \cite{tian2013think} have recently proposed a partitioning algorithm
for their toolkit Giraph++. The algorithm uses matching-based coarsening and ParMetis on the coarsest graph. This strategy
leads to better cut values than hashing-based
schemes. However, significant imbalance is introduced by their method, so that their results are incomparable to ours.
The label propagation clustering algorithm was initially proposed by Raghavan et~al.\ \cite{labelpropagationclustering}.
Moreover, the label propagation algorithm has been used to partition networks by Ugander and Backstrom \cite{UganderB13}. The authors do not use a multilevel scheme and rely on a given or random partition, which is improved by combining the unconstrained label propagation approach with linear programming. Hence, the approach does not yield high quality partitionings.
Another distributed algorithm for balanced graph partitioning has been proposed by Rahimian et~al.\ \cite{jabeja}.
The authors use random initializations as starting point for local search which is basically node swapping.
However, if the initialization is not balanced, the final partition computed by the algorithm will also be imbalanced and the largest graph under consideration has less than 70K nodes.
Recent work by Kirmani and Raghavan \cite{KirmaniR13} solves a relaxed version of the graph partitioning problem where no strict balance constraint is enforced. The
blocks only have to have approximately the same size. Thus the problem is easier than the version in which a strict balance constraint has to be fulfilled. Their approach attempts to obtain information on the graph structure by computing an embedding into the coordinate space using a multilevel graph drawing algorithm. Afterwards partitions are computed using a geometric scheme.
\vfill
\pagebreak
\subsection{KaHIP}
\label{s:kaHIP}
Within this work, we use the open source multilevel graph partitioning framework KaHIP~\cite{kaHIPHomePage,kabapeE} (Karlsruhe High Quality Partitioning).
More precisely, we employ the distributed evolutionary algorithm KaFFPaE contained therein to create high quality partitions of complex networks at the coarsest level of the hierarchy.
Hence, we shortly outline the main components of KaHIP.
KaHIP implements many different algorithms, for example flow-based methods and more-localized local searches within a multilevel framework called KaFFPa, as well as several coarse-grained parallel and sequential meta-heuristics.
The algorithms in KaHIP have been able to improve the best known partitioning results in the Walshaw Benchmark~\cite{soper2004combined} for many inputs using a short amount of time to create the partitions.
Recently, also specialized methods to partition social networks and web graphs have been included into the framework \cite{pcomplexnetworksviacluster}. In this work, we parallelize the main techniques presented therein which are reviewed in Section~\ref{s:sequentialcontraction}.
\subsubsection*{KaFFPaE}
We now outline details of the evolutionary algorithm, KaFFPaE \cite{kaffpaE}, since we use this algorithm to obtain a partition of the coarsest graph of the hierarchy.
KaFFPaE is a coarse-grained evolutionary algorithm, i.e.\ each processing element has its own population (set of partitions) and a copy of the graph.
After initially creating the local population, each processor performs combine and mutation operations on the local population/partitions.
The algorithm contains a general combine operator framework provided by modifications of the multilevel framework KaFFPa.
All combine operators can assure that the offspring has a solution quality at least as good as the better of both parents.
The basic combine operation works as follows:
Let $\mathcal{P}_1$ and $\mathcal{P}_2$ be two partitions of the graph $G$.
The partitions are both used as input to the multilevel graph partitioner KaFFPa in the following sense.
First of all, all edges that are cut edges in any of the two input partitions, i.e.\ edges that run between two blocks, are not eligible for contraction during the coarsening phase.
This means that they are not contracted during the coarsening phase.
As soon as the coarsening phase is stopped, the better partition is applied to the coarsest graph and used as
initial partitioning.
This is possible since we did not contract any cut edge of $\mathcal{P}_1$ or $\mathcal{P}_2$.
Since local search algorithms guarantee no worsening of the input partition and random tie breaking is used, it is assured that partition quality is not decreased during uncoarsening.
Note that the local search algorithms can effectively exchange good parts of the solution on the coarse levels by moving only a few nodes.
To exchange individuals between the processors over time, the algorithm is equipped with a scalable communication protocol similar to randomized rumor spreading.
That means that from time to time, the best local partition is sent to a random selection of other processors.
For more details, we refer the reader to \cite{kaffpaE}.
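The rule that cut edges of either parent partition are ineligible for contraction can be sketched as follows (a toy illustration with unit weights; `eligible_edges` is a hypothetical helper, not KaFFPa's actual API):

```python
def eligible_edges(edges, part1, part2):
    """Edges that are cut in neither input partition; only these may be
    contracted during coarsening in the combine step."""
    return [(u, v) for u, v in edges
            if part1[u] == part1[v] and part2[u] == part2[v]]

edges = [(0, 1), (1, 2), (2, 3)]
p1 = {0: 0, 1: 0, 2: 1, 3: 1}   # cuts edge (1, 2)
p2 = {0: 0, 1: 0, 2: 0, 3: 1}   # cuts edge (2, 3)
print(eligible_edges(edges, p1, p2))  # -> [(0, 1)]
```

Since cut edges of $\mathcal{P}_1$ and $\mathcal{P}_2$ survive to the coarsest level, either parent can be projected onto the coarsest graph without changing its cut.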
\section{Cluster Contraction}
\label{s:sequentialcontraction}
We now review the basic idea \cite{pcomplexnetworksviacluster} which we chose to parallelize.
The approach for creating graph hierarchies is targeted at complex network such as social networks and web graphs.
We start by explaining the size-constrained label propagation algorithm, which is used to compute clusterings of the graph.
To compute a graph hierarchy, the clustering is contracted by replacing each cluster by a single node, and the process is repeated recursively until the graph is small.
Due to the way the contraction is defined, it is ensured that a partition of a coarse graph corresponds to a partition of the input network with the same objective and balance.
Note that cluster contraction is an aggressive coarsening strategy. In contrast to most previous approaches, it can drastically shrink the size of irregular networks.
The intuition behind this technique is that a clustering of the graph (one hopes) contains many edges running inside the clusters and only a few edges running between clusters, which is favorable for the edge cut objective.
Regarding complexity, experiments in \cite{pcomplexnetworksviacluster} indicate that already one contraction step can shrink the graph size by orders of magnitude and that the average degree of the contracted graph is smaller than the average degree of the input network.
Moreover, the clustering algorithm is fast and essentially runs in linear time.
In addition, the clustering algorithm is parallelizable and, by using a different size-constraint, the label propagation algorithm can also be used as a simple strategy to improve a solution on the current level.
\subsection{Label Propagation with Size Constraints}
\begin{figure}[b]
\begin{center}
\includegraphics[width=.45\textwidth]{pics/scanLP.pdf}
\end{center}
\begin{center}
\begin{tabular}{ccc}
\includegraphics[width=2.5cm]{pics/LPgain.pdf} & \begin{minipage}{.025\textwidth}\vspace*{-2cm}\textbf{$\rightarrow$}\end{minipage} &
\includegraphics[width=2.5cm]{pics/LPgain2.pdf}
\end{tabular}
\end{center}
\caption{An example round of the label propagation graph clustering algorithm. Initially each node is in its own block. The algorithm scans all vertices in a random order and moves a node to the block with the strongest connection in its neighborhood.}
\end{figure}
Originally, the \emph{label propagation clustering} algorithm was proposed by Raghavan et~al.\ \cite{labelpropagationclustering} for graph clustering.
It is a very fast, near linear-time algorithm that locally optimizes the number of edges cut. We outline the algorithm briefly.
Initially, each node is in its own cluster/block, i.e.\ the initial block ID of a node is set to its node ID.
The algorithm then works in rounds.
In each round, the nodes of the graph are traversed in a random order.
When a node $v$ is visited, it is \emph{moved} to the block that has the strongest connection to $v$, i.e.\ it is moved to the cluster $V_i$ that maximizes $\omega(\{(v, u) \mid u \in N(v) \cap V_i \})$.
Ties are broken randomly.
This is repeated until the process has converged.
Here, at most $\ell$ iterations of the algorithm are performed, where $\ell$ is a tuning parameter.
One round of the algorithm can be implemented to run in $\Oh{n+m}$ time.
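One round of the algorithm can be illustrated by the following minimal Python sketch. The adjacency-list representation, the toy graph (two triangles joined by a single edge) and all names are illustrative assumptions, not the actual implementation:

```python
import random
from collections import defaultdict

def label_propagation_round(adj, block, rng):
    """One round: scan the nodes in random order and move each node to the
    neighboring block with the strongest connection (ties broken randomly)."""
    order = list(adj)
    rng.shuffle(order)
    for v in order:
        conn = defaultdict(int)              # block ID -> summed edge weight
        for u, w in adj[v]:
            conn[block[u]] += w
        if conn:
            best = max(conn.values())
            block[v] = rng.choice(sorted(b for b, c in conn.items() if c == best))

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by the edge (2,3).
adj = {0: [(1, 1), (2, 1)], 1: [(0, 1), (2, 1)], 2: [(0, 1), (1, 1), (3, 1)],
       3: [(2, 1), (4, 1), (5, 1)], 4: [(3, 1), (5, 1)], 5: [(3, 1), (4, 1)]}
block = {v: v for v in adj}                  # initially each node is its own block
rng = random.Random(42)
for _ in range(3):                           # at most ell rounds
    label_propagation_round(adj, block, rng)
```

Each round touches every edge a constant number of times, which reflects the $\Oh{n+m}$ bound stated above.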
The computed clustering is contracted to obtain a coarser graph.
\emph{Contracting a clustering} works as follows:
each block of the clustering is contracted into a single node.
The weight of the node is set to the sum of the weight of all nodes in the original block.
There is an edge between two nodes $u$ and $v$ in the contracted graph if the
two corresponding blocks in the clustering are adjacent to each other in $G$,
i.e.\ block $u$ and block $v$ are connected by at least one edge.
The weight of an edge $(A,B)$ is set to the sum of the weight of edges that run between block $A$ and block $B$ of the clustering.
Due to the way contraction is defined, a partition of the coarse graph corresponds to a partition of the finer graph with the same cut and balance.
An example contraction is shown in Figure~\ref{fig:clustercontraction}.
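The contraction rule can be sketched as follows; this is a hedged, sequential Python illustration of the scheme (function and variable names are assumptions), not the parallel implementation described later:

```python
from collections import defaultdict

def contract(adj, node_weight, cluster):
    """Contract a clustering: one coarse node per cluster; coarse node and
    edge weights are the summed fine weights, so a partition of the coarse
    graph induces a partition of the fine graph with the same cut and balance."""
    coarse_node_w = defaultdict(int)
    for v, w in node_weight.items():
        coarse_node_w[cluster[v]] += w
    coarse_edge_w = defaultdict(int)         # (A, B) with A < B -> summed weight
    for v, nbrs in adj.items():
        for u, w in nbrs:
            a, b = cluster[v], cluster[u]
            if a < b:                        # count each undirected edge once;
                coarse_edge_w[(a, b)] += w   # intra-cluster edges disappear
    return dict(coarse_node_w), dict(coarse_edge_w)

# Two triangles joined by one edge, clustered into {0,1,2} and {3,4,5}.
adj = {0: [(1, 1), (2, 1)], 1: [(0, 1), (2, 1)], 2: [(0, 1), (1, 1), (3, 1)],
       3: [(2, 1), (4, 1), (5, 1)], 4: [(3, 1), (5, 1)], 5: [(3, 1), (4, 1)]}
cluster = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
node_w, edge_w = contract(adj, {v: 1 for v in adj}, cluster)
```

In the example, the coarse graph has two nodes of weight three connected by one edge of weight one, the single cut edge between the clusters.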
In contrast to the original label propagation algorithm~\cite{labelpropagationclustering}, Meyerhenke et~al.\ \cite{pcomplexnetworksviacluster} ensure that each block of the cluster fulfills a size constraint. There are two reasons for this.
First, consider a clustering of the graph in which the weight of a block would exceed $(1+\epsilon) \lceil \frac{|V|}{k} \rceil$.
After contracting this clustering, it would be impossible to find a partition of the contracted graph that fulfills the balance constraint.
Secondly, it has been shown that using more balanced graph hierarchies is beneficial when computing high quality graph partitions~\cite{kappa}.
To ensure that blocks of the clustering do not become too large, an upper bound $U := \max( \max_v c(v), W)$ on the size of the blocks is introduced, where $W$ is a parameter that will be chosen later. When the algorithm starts to compute a
graph clustering on the input graph, the constraint is fulfilled since each of the blocks contains exactly one node.
A neighboring block $V_\ell$ of a node $v$ is called \emph{eligible} if $V_\ell$ will not be overloaded once $v$ is moved to $V_\ell$.
Now when a node $v$ is visited, it is moved to the \emph{eligible block} that has the strongest connection to $v$.
Hence, after moving a node, the size of each block is still smaller than or equal to $U$.
Moreover, after contracting the clustering, the weight of each node is smaller than or equal to $U$.
One round of the modified version of the algorithm can still run in linear time by using an array of size $|V|$ to store the block sizes. Note that when parallelizing the algorithm this is something that needs to be adjusted since storing an array of size $|V|$ on a single processor would cost too much memory.
The parameter $W$ is set to $\frac{L_{\text{max}}}{f}$, where $f$ is a tuning parameter.
Note that the constraint is rather soft during coarsening, i.e.\ in practice it does no harm if a cluster contains slightly more nodes than the upper bound. We go into more detail in the next section.
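The modified move rule can be sketched as a single-node move in Python; this is an illustrative sequential sketch (names and the toy instance are assumptions):

```python
from collections import defaultdict

def constrained_move(v, adj, block, block_weight, node_weight, U):
    """Move v to the eligible neighboring block with the strongest connection.
    A block is eligible if it will not be overloaded once v is moved to it."""
    conn = defaultdict(int)                  # block ID -> summed edge weight
    for u, w in adj[v]:
        conn[block[u]] += w
    old = block[v]
    best, best_conn = old, -1
    for b, c in sorted(conn.items()):
        eligible = (b == old) or (block_weight[b] + node_weight[v] <= U)
        if eligible and c > best_conn:
            best, best_conn = b, c
    if best != old:                          # update local block weights
        block_weight[old] -= node_weight[v]
        block_weight[best] += node_weight[v]
        block[v] = best

# Block 1 is full (weight 3 with U = 3), so node 0 moves to block 2 instead,
# even though its connection to block 1 is stronger.
adj = {0: [(1, 5), (2, 1)]}
block = {0: 0, 1: 1, 2: 2}
block_weight = {0: 1, 1: 3, 2: 1}
node_weight = {0: 1, 1: 1, 2: 1}
constrained_move(0, adj, block, block_weight, node_weight, U=3)
```

The same routine also illustrates the local search variant described below: with $W := L_{\text{max}}$, only the handling of overloaded blocks has to be added.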
The process of computing a size-constrained clustering and contracting it is repeated recursively.
As soon as the graph is small enough, it is initially partitioned.
That means each node of the coarsest graph is assigned to a block.
Afterwards, the solution is transferred to the next finer level.
To do this, a node of the finer graph is assigned to the block of its coarse representative.
Local improvement methods of KaHIP then try to improve the solution on the current level, i.e.\ reducing the number of edges cut.
Recall that the label propagation algorithm traverses the nodes in a random order and moves a node to a cluster with the strongest connection in its neighborhood to compute a clustering.
Meyerhenke et~al.\ \cite{pcomplexnetworksviacluster} have shown that using the ordering induced by the node degree (increasing order) improves the overall solution quality \emph{and} running time.
Using this node ordering means that in the first round of the label propagation algorithm, nodes with small node degree can change their cluster before nodes with a large node degree.
Intuitively, this ensures that there is already a meaningful cluster structure when the label propagation algorithm chooses the cluster of a high degree node.
Hence, the algorithm is likely to compute better clusterings of the graph by using node orderings based on node degree.
By using a different size-constraint -- the constraint $W := L_{\text{max}}$ of the original partitioning problem -- the label propagation is also used as a simple and fast local search algorithm to improve a solution on the current level \cite{pcomplexnetworksviacluster}.
However, small modifications to handle overloaded blocks have to be made.
The block selection rule is modified when the algorithm is used as a local search algorithm in case that
the current node $v$ under consideration is from an overloaded block $V_\ell$.
In this case it is \emph{moved} to the eligible block that has the strongest connection to $v$ without considering the block $V_\ell$ that it is contained in.
This way it is ensured that the move improves the balance of the partition (at the cost of the number of edges cut).
\begin{figure}[t]
\centering
\includegraphics[width=.35\textwidth]{pics/cluster_contraction.pdf}
\caption{Contraction of clusterings. Each cluster of the graph on the left hand side corresponds to a node in the graph on the right hand side. Weights of the nodes and the edges are chosen such that a partition of the coarse graph induces a partition of the fine graph having the same cut and balance.}
\label{fig:clustercontraction}
\end{figure}
\section{Parallelization}
\label{s:parallelization}
We now present the main contributions of the paper.
We begin with the distributed memory parallelization of the size-constrained label propagation algorithm and continue with the parallel contraction and uncoarsening algorithm.
At the end of this section, we describe the overall parallel system.
\subsection{Parallel Label Propagation}
We shortly outline our parallel graph data structure and the implementation of the methods that handle communication.
First of all, each processing element (PE) gets a subgraph, i.e.\ a contiguous range of nodes $a..b$, of the whole graph as its input, such that the subgraphs combined correspond to the input graph.
Each subgraph consists of the nodes with IDs from the interval $I:=a..b$ and the edges incident to the nodes of those blocks, as well as the end points of edges which are not in the interval $I$ (so-called ghost or halo nodes).
This implies that each PE may have edges that connect it to another PE and the number of edges assigned to the PEs might vary significantly.
The subgraphs are stored using a standard adjacency array representation, i.e.\ we have one array to store edges and one array for nodes storing head pointers to the edge array.
However, the node array is divided into two parts.
The first part stores local nodes and the second part stores ghost nodes. The method used to keep local node IDs and ghost node IDs consistent is explained in the next paragraph.
Additionally, we store information about the nodes, i.e.\ its current block and its weight.
Instead of using the node IDs provided by the input graph (called global IDs), each PE maps those IDs to the range $0\, .. \, n_p-1$, where $n_p$ is the number of distinct nodes of the subgraph. Note that this number includes the number of ghost nodes the PE has.
Each global ID $i \in a \, .. \, b$ is mapped to a local node ID $i-a$. The IDs of the ghost nodes are mapped to the remaining $n_p - (b-a)$ local IDs in the order in which they appeared during the construction of the graph structure.
Transforming a local node ID to a global ID, or vice versa, can be done by adding or subtracting $a$.
We store the global ID of the ghost nodes in an extra array and use a hash table to transform global IDs of ghost nodes to their corresponding local IDs.
Additionally, we store for each ghost node the ID of the corresponding PE, using an array for $\mathcal{O}(1)$ lookups.
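The ID mapping can be sketched as follows. This is an illustrative Python sketch; the class name and the half-open range convention $[a, b)$ for the owned nodes are assumptions made for the example:

```python
class LocalIDMap:
    """Global<->local ID mapping for a PE owning the half-open range [a, b).
    Owned global ID g maps to local ID g - a; ghost nodes receive the
    remaining local IDs in order of first appearance."""
    def __init__(self, a, b):
        self.a, self.b = a, b
        self.ghost_global = []               # extra array: local -> global ID
        self.ghost_local = {}                # hash table: global -> local ID

    def to_local(self, g):
        if self.a <= g < self.b:             # owned node
            return g - self.a
        if g not in self.ghost_local:        # register a new ghost node
            self.ghost_local[g] = (self.b - self.a) + len(self.ghost_global)
            self.ghost_global.append(g)
        return self.ghost_local[g]

    def to_global(self, l):
        n_own = self.b - self.a
        return l + self.a if l < n_own else self.ghost_global[l - n_own]

# A PE owning global IDs 10..14; nodes 99 and 7 appear as ghosts.
m = LocalIDMap(10, 15)
m.to_local(99)
m.to_local(7)
```

Owned IDs translate by pure offset arithmetic, while ghost lookups go through the hash table, matching the data structure described above.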
To parallelize the label propagation algorithm, each PE performs the algorithm on its part of the graph.
Recall, when we visit a node $v$, it is moved to the block that has the strongest connection.
Note that the cluster IDs of a node can be arbitrarily distributed in the range $0\, .. \, n-1$ so that we use a hash map to identify the cluster with the strongest connection.
Since we know that the number of distinct neighboring cluster IDs is bounded by the maximum degree in the graph, we use hashing with linear probing.
At this particular point of the algorithm, it turns out that hashing with linear probing is much faster than using the hash map of the STL.
During the course of the algorithm, local nodes can change their block and hence the blocks in which ghost nodes are contained can change as well.
Since communication is expensive, we do not want to perform communication each time a node changes its block.
We use the following scheme to \emph{overlap} communication and computation.
The scheme is organized in phases.
We call a node \emph{interface node} if it is adjacent to at least one ghost node. The PE associated with the ghost node is called adjacent PE.
Each PE stores a separate send buffer for each adjacent PE.
During each phase, we store the block ID of interface nodes that have changed into the send buffer of each adjacent PE of this node.
Communication is then implemented asynchronously.
In phase $\kappa$, we send the current updates to our adjacent PEs and receive the updates of the adjacent PEs from phase $\kappa-1$, for $\kappa>1$.
Note that in case the label propagation algorithm has converged, i.e.\ no node changes its block any more, the communication volume is very small.
The degree-based node ordering approach of the label propagation algorithm that is used during coarsening is parallelized by considering only the local nodes for this ordering.
In other words, the ordering in which the nodes are traversed on a PE is determined by the node degrees of the local nodes of this PE. During uncoarsening random node ordering is used.
\subsection{Balance/Size Constraint}
\label{ss:balanceconstraint}
Recall that we use the size-constrained label propagation algorithm during coarsening using $\frac{L_{\max}}{f}$ as a size constraint and during uncoarsening using $L_{\max}$ as a size constraint.
Maintaining the balance of blocks is somewhat more difficult in the parallel case than in the sequential case.
We use two different approaches to maintain balance, one of which is used during coarsening and the other one is used during uncoarsening.
The reason for this is that during coarsening there is a large number of blocks and the constraint is rather soft, whereas during uncoarsening the number of blocks is small and the constraint is tight.
We maintain the balance of different blocks \emph{during coarsening} as follows.
Roughly speaking, a PE maintains and updates only the local amount of node weight of the blocks of its local and ghost nodes.
Due to the way the label propagation algorithm is initialized, each PE knows the exact weights of the blocks of local nodes and ghost nodes in the beginning. The label propagation then uses the local information to bound the block weights. Once a node changes its block, the local block weight is updated.
Note that this does not involve additional amounts of communication.
We decided to use this localized approach since the balance constraint is not tight during coarsening.
More precisely, the bound on the cluster sizes during coarsening is a tuning parameter and the overall performance of the system does not depend on the exact choice of the parameter.
\emph{During uncoarsening} we use a different approach since the number of blocks is much smaller and it is unlikely that the previous approach yields a feasible partition in the end.
This approach is similar to the approach that is used within ParMetis~\cite{karypis1996parallel}.
Initially, the exact block weights of all $k$ blocks are computed locally.
The local block weights are then aggregated and broadcast to all PEs. Both can be done using one allreduce operation.
Now each PE knows the global block weights of all $k$ blocks.
The label propagation algorithm then uses this information and locally updates the weights.
For each block, a PE maintains and updates the total amount of node weight that local nodes contribute to the block weights.
Using this information, one can restore the exact block weights with one allreduce operation which is done at the end of each computation phase.
Note that this approach would not be feasible during coarsening since there are $n$ blocks in the beginning of the algorithm and each PE holds the block weights of all blocks.
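The allreduce-based restoration of exact block weights can be simulated sequentially; the helper below is a hedged sketch standing in for an MPI allreduce (sum) operation, with illustrative data:

```python
def allreduce_sum(local_vectors):
    """Simulated allreduce: every PE obtains the elementwise sum of all
    local vectors (in MPI, a single collective operation)."""
    k = len(local_vectors[0])
    total = [sum(vec[i] for vec in local_vectors) for i in range(k)]
    return [list(total) for _ in local_vectors]

# k = 3 blocks on 2 PEs; each PE knows only the node weight its own local
# nodes contribute to each block.
local_contrib = [[4, 0, 2],    # PE 0
                 [1, 3, 0]]    # PE 1
global_weights = allreduce_sum(local_contrib)
```

After the operation, every PE holds the exact global block weights $[5, 3, 2]$, which is what the label propagation algorithm uses during the next computation phase.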
\subsection{Parallel Contraction and Uncoarsening}
The \emph{parallel contraction} algorithm works as follows.
After the parallel size-constrained label propagation algorithm has been performed, each node is assigned to a cluster.
Recall the definition of our general contraction scheme.
Each of the clusters of the graph corresponds to a coarse node in the coarse graph and the weight of this node is set to the total weight of the nodes that are in that cluster. Moreover, there is an edge between two coarse nodes iff there is an edge between the respective clusters and the weight of this edge is set to the total weight of the edges that run between these clusters in the original graph.
In the parallel scheme, the IDs of the clusters on a PE can be arbitrarily distributed in the interval $0\, ..\, n-1$, where $n$ is the total number of nodes of the input graph of the current level.
Consequently, we start the parallel contraction algorithm by finding the number of distinct cluster IDs which is also the number of coarse nodes.
To do so, a PE $p$ is assigned to count the number of distinct cluster IDs in the interval $I_p:= p\lceil \frac{n}{P} \rceil+1 \, .. \, (p+1)\lceil \frac{n}{P} \rceil$, where $P$ is the total number of PEs used.
That means each PE $p$ iterates over its local nodes, collects cluster IDs $a$ that are not local, i.e.\ $a \not \in I_p$, and then sends the non-local cluster IDs to the responsible PEs.
Afterwards, a PE counts the number of distinct local cluster IDs so that the number of global distinct cluster IDs can be derived easily by using a reduce operation.
Let $n'$ be the global number of distinct cluster IDs.
Recall that this is also the number of coarse nodes after the contraction has been performed.
The next step in the parallel contraction algorithm is to compute a mapping $q: 0 \, ..
\, n-1 \to 0 \, .. \, n'-1$ which maps the current cluster IDs to a contiguous interval over all PEs. This mapping can be easily computed in parallel by computing a prefix sum over the number of distinct local cluster IDs a PE has.
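The prefix-sum computation of the offsets underlying $q$ can be sketched as follows; the helper is an illustrative sequential stand-in for the parallel prefix sum:

```python
from itertools import accumulate

def contiguous_cluster_ids(local_distinct):
    """Given the number of distinct cluster IDs each PE is responsible for,
    an exclusive prefix sum yields each PE's offset into the contiguous
    range 0 .. n'-1; PE p then numbers its IDs offsets[p], offsets[p]+1, ..."""
    offsets = [0] + list(accumulate(local_distinct))[:-1]
    return offsets, sum(local_distinct)

# Three PEs responsible for 3, 2 and 4 distinct cluster IDs, respectively.
offsets, n_coarse = contiguous_cluster_ids([3, 2, 4])
```

Here PE 0 assigns coarse IDs $0..2$, PE 1 assigns $3..4$ and PE 2 assigns $5..8$, so $q$ maps all $n' = 9$ cluster IDs to a contiguous interval.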
Once this is done, we compute the mapping $C: 0 \, .. \, n-1 \to 0 \, .. \, n'-1$ which maps a node ID of $G$ to its coarse representative.
Note that, if a node $v$ is in cluster $V_\ell$ after the label propagation algorithm has converged, then $C(v) = q(\ell)$.
After computing this information locally, we also propagate the necessary parts of the mapping to neighboring PEs so that we also know the coarse representative of each ghost node.
When the contraction algorithm is fully completed, PE $p$ will be \emph{responsible} for the subgraph $p\lceil \frac{n'}{P} \rceil+1\, .. \, (p+1)\lceil \frac{n'}{P} \rceil$ of the coarse graph.
To construct the final coarse graph, we first construct the weighted quotient graph of the local subgraph of $G$ using hashing.
Afterwards, each PE sends an edge $(u,v)$ of the local quotient graph, including its weight and the weight of its source node, to the responsible PE.
After all edges are received, a PE can construct its coarse subgraph locally.
The implementation of the \emph{parallel uncoarsening} algorithm is simple. Each PE knows the coarse node for all its nodes in its subgraph (through the mapping $C$). Hence, a PE requests the block ID of a coarse representative of a fine node from the PE that holds the respective coarse node.
\subsection{Miscellanea}
\subsubsection*{Iterated Multilevel Schemes}
A common approach to obtain high quality partitions is to use a multilevel algorithm multiple times using different random seeds
and use the best partition that has been found.
However, one can do better by transferring the solution of the previous multilevel iteration down the hierarchy.
In the graph partitioning context, the notion of V-cycles was introduced by Walshaw \cite{walshaw2004multilevel}. More recent
work augmented them to more complex cycles~\cite{kaffpa}.
These previous works use matching-based coarsening with cut edges not being matched (and hence cut edges are not contracted).
Thus, a given partition on the finest level can be used as initial partition of the coarsest graph (having the same balance and cut as the partition of the finest graph).
Iterated V-cycles are also used within clustering-based coarsening by Meyerhenke et~al.\ \cite{pcomplexnetworksviacluster}.
To adopt the iterated multilevel technique for this coarsening scheme, it has to be ensured that cut edges are not contracted after the first multilevel iteration.
This is done by modifying the label propagation algorithm such that each cluster of the computed clustering is a subset of a block of the input partition.
In other words, each cluster only contains nodes of one unique block of the input partition.
Hence, when contracting the clustering, every cut edge of the input partition will remain.
Recall that the label propagation algorithm initially puts each node in its own block so that in the beginning of the algorithm each cluster is a subset of one unique block of the input partition.
This property is kept during the course of the label propagation algorithm by restricting the movements of the label propagation algorithm, i.e.\ we move a node to an eligible cluster with the strongest connection in its neighborhood that is in the same block of the input partition as the node itself.
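The restricted move rule can be sketched as follows; the Python snippet is an illustrative sequential sketch (names are assumptions) of keeping each cluster inside one block of the input partition:

```python
from collections import defaultdict

def restricted_move(v, adj, cluster, input_block):
    """V-cycle variant: v may only join a neighboring cluster lying in the
    same block of the input partition, so cut edges are never contracted.
    Since every cluster is a subset of one block, it suffices to filter
    neighbors by their input block."""
    conn = defaultdict(int)
    for u, w in adj[v]:
        if input_block[u] == input_block[v]:   # clusters never cross blocks
            conn[cluster[u]] += w
    if conn:
        cluster[v] = max(conn, key=conn.get)

# Node 0's strongest neighbor (node 2, weight 5) is in the other input
# block, so node 0 joins the cluster of node 1 instead.
adj = {0: [(1, 3), (2, 5)]}
input_block = {0: 0, 1: 0, 2: 1}
cluster = {0: 0, 1: 1, 2: 2}
restricted_move(0, adj, cluster, input_block)
```

Contracting the resulting clustering preserves the cut edge $(0, 2)$, as required for reusing the input partition on the coarsest level.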
We do the same in our parallel approach to realize V-cycles.
\begin{figure}[t]
\includegraphics[width=.475\textwidth]{pics/overallsystem.pdf}
\caption{The overall parallel system. It uses the parallel cluster coarsening algorithm, the coarse-grained distributed evolutionary algorithm KaFFPaE to partition the coarsest graph and parallel uncoarsening/local search. After the first iteration of the multilevel scheme the input partition is used as a partition of the coarsest graph and used as a starting point by the evolutionary algorithm.}
\label{fig:overallsystem}
\end{figure}
\subsection{The Overall Parallel System}
The overall parallel system works as follows.
We use $\ell$ iterations of the parallel size-constrained label propagation algorithm to compute graph clusterings and contract them in parallel.
We do this recursively until the remaining graph has $10\,000 \cdot k$ nodes left, where $k$ is the number of blocks that the input network should be partitioned into. The distributed coarse graph is then collected on each PE, i.e.\ each PE has a copy of the complete coarsest graph.
We use this graph as input to the coarse-grained distributed evolutionary algorithm KaFFPaE, to obtain a high quality partition of it.
KaFFPaE uses modified combine operations that also use the clustering-based coarsening scheme from above.
The best solution of the evolutionary algorithm is then broadcast to all PEs which transfer the solution to their local part of the distributed coarse graph.
Afterwards, we use the parallel uncoarsening algorithm to transfer the solution of the current level to the next finer level and apply $r$ iterations of the parallel label propagation algorithm with the size constraints of the original partitioning problem (setting $W=(1+\epsilon) \lceil\frac{|V|}{k}\rceil$) to improve the solution on the current level.
We do this recursively on each level and obtain a $k$-partition of the input network in the end.
If we use iterated V-cycles, we use the given partition of the coarse graph as input to the evolutionary algorithm.
More precisely, one individual of the population is the input partition on each PE.
This way it is ensured that the evolutionary algorithm computes a partition that is at least as good as the given partition.
\section{Experiments}
\label{s:experiments}
In this section we carefully evaluate the performance of the proposed algorithm. We start by presenting our methodology, the systems used for the evaluation and the benchmark set that we used. We then look into solution quality as well as weak and strong scalability comparing our algorithm to ParMetis, which is probably the most widely used parallel partitioning algorithm.
\subsection{Methodology}
We have implemented the algorithm described above using C++ and MPI.
Overall, our parallel program consists of about 7000 lines of code (not including the source of KaHIP 0.61).
We compiled it using g++ 4.8.2 and OpenMPI 1.6.5.
For the following comparisons we used ParMetis 4.0.3. All programs have been compiled using 64 bit index data types.
We also ran PT-Scotch 6.0.0, but the results have been consistently worse in terms of solution quality and running time compared to the results computed by ParMetis, so that we do not present detailed data for PT-Scotch.
Our default value for the allowed imbalance is 3\% since this is one of the values used in \cite{walshaw2000mpm} and the default value in Metis.
By default we perform ten repetitions for each configuration of the algorithm using different random seeds for initialization and report the arithmetic average of computed cut size, running time and the best cut found.
When further averaging over multiple instances, we use the geometric mean in order to give every instance a comparable influence on the final score.
Unless otherwise stated, we use the following factor $f$ of the size-constraint (see Section~\ref{ss:balanceconstraint} for the definition): during the first V-cycle the factor $f$ is set to $14$ on social networks as well as web graphs and to $20\,000$ on mesh type networks. In later V-cycles we use a random value $f \in_{\text{rnd}}[10,25]$ to increase the diversification of the algorithm.
Our experiments mainly focus on the case $k=2$ to save running time and to keep the experimental evaluation simple.
Moreover, we used $k=16$ for the number of blocks when performing the weak scalability experiments in Section~\ref{s:expweakscalability}.
\subsubsection*{Algorithm Configurations}
Any multilevel algorithm has a considerable number of choices between
algorithmic components and tuning parameters.
We define two ``good'' choices: the \Id{fast} setting aims at a low execution
time that still gives good partitioning quality and the \Id{eco}
setting targets even better partitioning quality without investing an outrageous amount
of time. When not otherwise mentioned, we
use the \Id{fast} parameter setting.
The \Id{fast} configuration of our algorithm uses three label propagation iterations during coarsening and six during refinement.
We also tried larger amounts of label propagation iterations during coarsening, but did not observe a significant impact on solution quality.
This configuration gives the evolutionary algorithm only enough time to compute the initial population and performs two V-cycles.
The \Id{eco} configuration of our algorithm uses three label propagation iterations during coarsening and six label propagation iterations during refinement as well.
Time spent during initial partitioning is dependent on the number of processors used.
To be more precise, when we use one PE, the evolutionary algorithm has $t_1=2^{11}s$ to compute a partition of the coarsest graph during the first V-cycle.
When we use $p$ PEs, then it gets time $t_p=t_1/p$ to compute a partition of an instance. This configuration performs five V-cycles.
There is also a \Id{minimal} variant of the algorithm, which is similar to the \Id{fast} configuration but only performs one V-cycle.
We use this variant of the algorithm only once -- to create a partition of the largest web graph uk-2007 on machine~B (described
below).
\subsubsection*{Systems}
We use two different systems for our experimental evaluation.
\emph{System A} is mainly used for the evaluation of the solution quality of the different algorithms in Table~\ref{tab:bipartitioningresults}. It is equipped with four Intel Xeon E5-4640 Octa-Core processors (Sandy Bridge) running at a clock speed of 2.4 GHz. The machine has 512 GB main memory, 20 MB L3-Cache and 8x256 KB L2-Cache.
\emph{System B} is a cluster where each node is equipped with two Intel Xeon E5-2670 Octa-Core processors (Sandy Bridge) which run at a clock speed of 2.6 GHz.
Each node has 64 GB local memory, 20 MB L3-Cache and 8x256 KB L2-Cache.
All nodes have local disks and are connected by an InfiniBand 4X QDR interconnect, which is characterized by its very low latency of about 1 microsecond and a point to point bandwidth between two nodes of more than 3700 MB/s. We use machine $B$ for the scalability experiments in Section~\ref{s:expweakscalability}.
\subsubsection*{Instances}
We evaluate our algorithms on graphs collected from \cite{benchmarksfornetworksanalysis,UFsparsematrixcollection,BoVWFI,snap}.
Table~\ref{tab:scalefreegraphstable} summarizes the main properties of the benchmark set.
Our benchmark set includes a number of graphs from numeric simulations as well as social networks and web graphs.
Moreover, we use the two graph families \Id{rgg} and \Id{del} for comparisons.
\Id{rgg$X$} is a \emph{random geometric graph} with
$2^{X}$ nodes where nodes represent random points in the unit square and edges
connect nodes whose Euclidean distance is below $0.55 \sqrt{ \ln n / n }$.
This threshold was chosen in order to ensure that the graph is almost certainly connected.
The largest graph of this class is \Id{rgg31}, which has about 21.9 billion edges.
\Id{del$X$} is a Delaunay triangulation of $2^{X}$
random points in the unit square.
The largest graph of this class is \Id{del31}, which has about 6.4 billion edges.
Most of these graphs are available at the 10th DIMACS Implementation Challenge~\cite{dimacschallengegraphpartandcluster} website.
The largest graphs (with $2^{26}$ to $2^{31}$ nodes) of these families have been generated using modified code taken from \cite{kappa}.
We will make these graphs available on request.
\begin{table}[t]
\centering
\caption{Basic properties of the benchmark set with a rough type classification. S stands for social or web graphs, M is used for mesh type networks.}
\label{tab:scalefreegraphstable}
\begin{tabular}{|l|r|r||r||r|}
\hline
graph & $n$ & $m$ & Type & Ref. \\
\hline
\hline
\multicolumn{4}{|c|}{Large Graphs} \\
\hline
amazon & $\approx$407K & $\approx$2.3M &S& \cite{snap}\\
eu-2005 & $\approx$862K & $\approx$16.1M &S& \cite{benchmarksfornetworksanalysis}\\
youtube & $\approx$1.1M & $\approx$2.9M &S& \cite{snap}\\
in-2004 & $\approx$1.3M & $\approx$13.6M &S& \cite{benchmarksfornetworksanalysis}\\
packing & $\approx$2.1M & $\approx$17.4M &M& \cite{benchmarksfornetworksanalysis}\\
enwiki & $\approx$4.2M & $\approx$91.9M &S& \cite{webgraphWS} \\
channel & $\approx$4.8M & $\approx$42.6M &M& \cite{benchmarksfornetworksanalysis}\\
hugebubble-10 & $\approx$18.3M & $\approx$27.5M &M& \cite{benchmarksfornetworksanalysis}\\
nlpkkt240 & $\approx$27.9M & $\approx$373M &M& \cite{UFsparsematrixcollection}\\
uk-2002 & $\approx$18.5M & $\approx$262M &S& \cite{webgraphWS} \\
del26 & $\approx$67.1M & $\approx$201M &M& \cite{kappa} \\
rgg26 & $\approx$67.1M & $\approx$575M &M& \cite{kappa} \\
\hline
\multicolumn{4}{|c|}{Larger Web Graphs} \\
\hline
arabic-2005 & $\approx$22.7M & $\approx$553M &S& \cite{webgraphWS} \\
sk-2005 & $\approx$50.6M & $\approx$1.8G &S& \cite{webgraphWS} \\
uk-2007 & $\approx$105.8M & $\approx$3.3G &S& \cite{webgraphWS} \\
\hline
\multicolumn{4}{|c|}{Graph Families} \\
\hline
delX & [$2^{19}, \ldots, 2^{31}$] & $\approx$1.5M--6.4G &M& \cite{kappa}\\
\hline
rggX & [$2^{19}, \ldots, 2^{31}]$ & $\approx$3.3M--21.9G &M& \cite{kappa}\\
\hline
\end{tabular}
\end{table}
\subsection{Main Results and Comparison to ParMetis}
\begin{table*}[htb]
\small
\centering
\caption{Average performance (cut and running time) and best result achieved by different partitioning algorithms. Results are for the bipartitioning case $k=2$. All tools used 32 PEs of machine A. Results indicated by a * mean that the amount of memory needed by the partitioner exceeded the amount of memory available on that machine when 32 PEs are used (512GB RAM). The ParMetis result on arabic has been obtained using 15 PEs (the largest number of PEs so that ParMetis could solve the instance).}
\label{tab:bipartitioningresults}
\begin{tabular}{|l||r|r|r||r|r|r||r|r|r|}
\hline
algorithm & \multicolumn{3}{c||}{ParMetis} & \multicolumn{3}{c||}{Fast} & \multicolumn{3}{c|}{Eco} \\
\hline
graph & avg. cut & best cut & $t$[s]& avg. cut & best cut & $t$[s] & avg. cut & best cut & $t$[s]\\
\hline
\hline
amazon & \numprint{48104} & \numprint{47010} & \numprint{0.49} & \numprint{46641} & \numprint{45872} & \numprint{1.85} & \numprint{44703} & \textbf{\numprint{44279}} & \numprint{71.04}\\
eu-2005 & \numprint{33789} & \numprint{24336} & \numprint{30.60} & \numprint{20898} & \numprint{18404} & \numprint{1.63} & \numprint{18565} & \textbf{ \numprint{18347}} & \numprint{70.04} \\
youtube & \numprint{181885} & \numprint{171857} & \numprint{6.10} & \numprint{174911} & \numprint{171549} & \numprint{8.74} & \numprint{167874} & \textbf{\numprint{164095}} & \numprint{105.87} \\
in-2004 & \numprint{7016} & \numprint{5276} & \numprint{3.43} & \numprint{3172} & \numprint{3110} & \numprint{1.38} & \numprint{3027} & \textbf{\numprint{2968}} & \numprint{69.19} \\
packing & \numprint{11991} & \numprint{11476} & \numprint{0.24} & \numprint{10185} & \numprint{9925} & \numprint{1.84} & \numprint{9634} & \textbf{\numprint{9351}} & \numprint{68.69} \\
enwiki & \numprint{9578551} & \numprint{9553051} & \numprint{326.92} & \numprint{9622745} & \numprint{9565648} & \numprint{157.32} & \numprint{9559782} & \textbf{\numprint{9536520}} & \numprint{264.64} \\
channel & \numprint{48798} & \textbf{\numprint{47776}} & \numprint{0.55} & \numprint{56982} & \numprint{55959} & \numprint{2.71} & \numprint{52101} & \numprint{50210} & \numprint{71.95} \\
hugebubbles & \numprint{1922} & \numprint{1854} & \numprint{4.66} & \numprint{1918} & \numprint{1857} & \numprint{38.00} & \numprint{1678} & \textbf{\numprint{1620}} & \numprint{216.91} \\
nlpkkt240 & \numprint{1178988} & \textbf{\numprint{1152935}} & \numprint{15.97} & \numprint{1241950} & \numprint{1228086} & \numprint{35.06} & \numprint{1193016} & \numprint{1181214} & \numprint{192.78} \\
uk-2002 & \numprint{787391} & \numprint{697767} & \numprint{128.71} & \numprint{434227} & \numprint{390182} & \numprint{19.62} & \numprint{415120} & \textbf{\numprint{381464}} & \numprint{146.77} \\
del26 & \numprint{18086} & \numprint{17609} & \numprint{23.74} & \numprint{17002} & \numprint{16703} & \numprint{165.02} & \numprint{15826} & \textbf{\numprint{15690}} & \numprint{697.43} \\
rgg26 & \numprint{44747} & \numprint{42739} & \numprint{8.37} & \numprint{38371} & \numprint{37676} & \numprint{55.91} & \numprint{34530} & \textbf{\numprint{34022}} & \numprint{263.81} \\
arabic-2005 & *\numprint{1078415}& *\numprint{968871}& *\numprint{1245.57} & \numprint{551778} & \textbf{\numprint{471141}} & \numprint{33.45} & \numprint{511316} & \numprint{475140} & \numprint{184.01} \\
sk-2005 & * & * & * & \numprint{3775369} & \numprint{3204125} & \numprint{471.16} & \numprint{3265412} & \textbf{\numprint{2904521}} & \numprint{1688.63} \\
uk-2007 & * & * & * & \numprint{1053973} & \numprint{1032000} & \numprint{169.96} & \numprint{1010908} & \textbf{\numprint{981654}} & \numprint{723.42} \\
\hline
\end{tabular}
\end{table*}
\label{s:expsolutionquality}
In this section we compare variants of our algorithm against ParMetis in terms of solution quality, running time as well as weak and strong scalability.
We start with the comparison of solution quality (average cut, best cut) and average running time on most of the graphs from Table~\ref{tab:scalefreegraphstable} when 32 PEs of machine A are used. Table~\ref{tab:bipartitioningresults} gives detailed results per instance.
First of all, ParMetis could not solve the largest instances in our benchmark set, arabic, sk-2005 and uk-2007 when 32 PEs of machine A are used. This is due to the fact that ParMetis cannot coarsen the graphs effectively so that
the coarsening phase is stopped too early.
Since the coarsest graph is replicated on each of the PEs, the amount of memory needed by ParMetis exceeds the amount of memory provided by the machine (512GB RAM).
For example, when the coarsening phase of ParMetis stops on the instance uk-2007, the coarsest graph still has more than 60M vertices. This is less than a factor of two reduction in graph size compared to the input network (the same holds for the number of edges in the coarse graph).
The same behaviour is observed on machine B, where even less memory per PE is available.
Contrarily, our algorithm is able to shrink the graph size significantly.
For instance, after the first contraction step, the graph is already two orders of magnitude smaller and contains a factor of 300 less edges than the input graph uk-2007.
We also tried to use a smaller amount of PEs for ParMetis.
It turns out that ParMetis can partition arabic when using 15 PEs, cutting nearly twice as many edges and consuming thirty-seven times more running time than our \Id{fast} variant. Moreover, ParMetis could not solve the instances sk-2005 and uk-2007 for any number of PEs.
When only considering the networks that ParMetis could solve in Table~\ref{tab:bipartitioningresults}, our \Id{fast} and \Id{eco} configuration compute cuts that are 19.2\% and 27.4\% smaller on average than the cuts computed by ParMetis, respectively.
On average, \Id{fast} and \Id{eco} need more time to compute a partition.
However, there is a well-defined \emph{gap} between mesh type networks, which usually do not have a community structure to be found and contracted by our algorithm, and social networks as well as web graphs, which our algorithm targets.
Considering only social networks and web graphs, our \Id{fast} algorithm is more than a factor two faster on average and improves the cuts produced by ParMetis by 38\% (the \Id{eco} configuration computes cuts that are 45\% smaller than the cuts computed by ParMetis).
The largest speedup over ParMetis in Table~\ref{tab:bipartitioningresults} was obtained on eu-2005 where our algorithm is more than eighteen times as fast as ParMetis and cuts 61.6\% less edges on average.
In contrast, on mesh type networks our algorithm does not have the same advantage as on social networks.
For example, our \Id{fast} configuration improves on ParMetis only by 2.9\% while needing more than five times as much running time.
This is due to the fact that this type of network usually has no community structure so that the graph sizes do not shrink as fast.
Still the \Id{eco} configuration computes 11.8\% smaller cuts than ParMetis.
To obtain a fair comparison on this type of networks, we also compare the best cut found by ParMetis against the average cuts found by our algorithms.
While the best cuts on mesh type networks of ParMetis are comparable to the average results of our \Id{fast} configuration, the \Id{eco} configuration still yields 8.2\% smaller cuts.
When partitioning the instances into 32 blocks, improvements are distributed similarly, i.e.\ they are much larger on social networks than on mesh type networks. Overall, our \Id{fast} and \Id{eco} configuration compute 6.8\% and 16.1\% smaller cuts than ParMetis, respectively. Yet, in this comparison
ParMetis simplifies the problem by relaxing it: on some instances it does not respect the balance
constraint and computes partitions with up to 6\% imbalance.
\begin{figure}[h]
\vspace*{-.5cm}
\begin{center}
\hspace*{-.03\textwidth}
\includegraphics[width=.5\textwidth]{pics/weakscaling_all.pdf}
\end{center}
\vspace*{-.75cm}
\caption{Weak scaling experiments for the random geometric graph class \Id{rggX} and the Delaunay triangulation graph class \Id{delX}. When using $p$ PEs, the instance with $2^{19}p$ nodes from the corresponding graph class was used, i.e.\ when using 2048 cores all algorithms partition the graphs \Id{del30} and \Id{rgg30}. The figure shows the time spent per edge. Sixteen blocks have been used for the partitioning task.}
\label{fig:weakscalingall}
\end{figure}
A table reporting detailed results can be found in the Appendix.
We now turn to the evaluation of \emph{weak scalability}. These experiments have been performed on the high-performance cluster (machine B).
To evaluate weak scalability, we use the two graph families \Id{rgg$X$} and \Id{del$X$}, and use $k=16$ for the number of blocks for the partitioning task.
Moreover, we focus on the \Id{fast} configuration of our algorithm and
ParMetis
to save running time. We expect that the scalability of the
\Id{eco} configuration of our algorithm is similar.
When using $p$ PEs, the instance with $2^{19}p$ nodes from the corresponding graph class is used, i.e.\ when using 2048 cores, all algorithms partition the graphs \Id{del30} and \Id{rgg30}.
Figure~\ref{fig:weakscalingall} reports the running time per edge of the algorithms under consideration.
Our algorithm shows weak scalability \emph{all the way up} to the largest number of cores used, while its running time per edge has a somewhat stronger descent than that of ParMetis.
ParMetis has trouble partitioning the largest Delaunay graphs.
The largest Delaunay graph that ParMetis could partition was \Id{del28} using 512 cores.
Considering the instances that ParMetis could solve, our \Id{fast} configuration improves solution quality by 19.5\% on
random geometric graphs and by 11.5\% on Delaunay triangulations on average.
Since the \Id{fast} configuration is slower on both graph families, we again compare the best cut results of ParMetis achieved in ten repetitions against our average
results to obtain a fair comparison (in this case ParMetis has a slight advantage in terms of running time).
Doing so, our algorithm still yields an improvement of 16.8\% on the random geometric graphs and an improvement of 9.5\% on the Delaunay triangulations.
For the largest instances, however, ParMetis is slower than the \Id{fast} version of our partitioner. On the largest random geometric graph used during this test, we are about a factor of two faster than ParMetis, while improving the results of ParMetis by 9.5\%. In this case our partitioner needs roughly 65 seconds to compute a 16-partition of the graph. In addition, our algorithm is a factor five faster on the largest Delaunay graph that ParMetis could solve and produces a cut that is 9.5\% smaller than the cut produced by ParMetis.
\begin{figure}
\centering
\vspace*{-.5cm}
\hspace*{-.03\textwidth}
\vspace*{-.5cm}
\includegraphics[width=.5\textwidth]{pics/delstrongscaling.pdf}
\vspace*{-.5cm}
\hspace*{-.03\textwidth}
\includegraphics[width=.5\textwidth]{pics/rggstrongscaling.pdf}
\vspace*{-.5cm}
\hspace*{-.03\textwidth}
\includegraphics[width=.5\textwidth]{pics/socialstrongscaling.pdf}
\caption{Top: Strong scaling experiments on Delaunay networks. The largest graph that ParMetis could partition from this graph family was \Id{del27}. Middle: Strong scaling experiments on random geometric networks. Bottom: Strong scaling experiments on the largest social networks from our benchmark set. Due to ineffective coarsening, ParMetis was not able to partition any of these graphs on machine B. On the largest graph, uk-2007, we also used the \Id{minimal} variant of our algorithm. \emph{Note:} although our system is not built for mesh-type networks such as Delaunay and random geometric graphs, we can partition larger instances and compute better solutions than ParMetis.
}
\label{fig:strongscalingrgg}
\end{figure}
\label{s:expweakscalability}
We now look at \emph{strong scalability}. Here we use a subset of the random geometric and Delaunay graphs as well as the four large graphs arabic, uk-2002, sk-2005, and uk-2007. In all cases we use up to 2048 cores of machine B (except for del25 and rgg25, for which we only scaled up to 1024 cores).
Again, we focus on the \Id{fast} configuration of our algorithm and ParMetis to save running time. Figure~\ref{fig:strongscalingrgg} summarizes the results of the experiments.
First of all, we observe strong scalability to thousands of processors if the graphs are large enough.
On del29 and rgg31, our algorithm scales all the way up to 2048 cores.
Using all 2048 cores, we need roughly 6.5 minutes to partition del31 and 73 seconds to partition rgg31.
Note that the rgg31 graph has three times more edges than del31 but the running time needed to partition del31 is higher.
This is due to the fact that the Delaunay graphs have very bad locality, i.e.\ when partitioning del31 more than 40\% of the edges are ghost edges, whereas we observe less than 0.5\% ghost edges when partitioning the largest random geometric graph.
Although the scaling behaviour of ParMetis is somewhat better on the random geometric graphs rgg25-29, our algorithm is eventually more than three times faster on the largest random geometric graph under consideration when all 2048 cores are used.
As on machine A, ParMetis could not partition the instances uk-2002, arabic, sk-2005 and uk-2007 -- this is again due to the amount of memory needed arising from ineffective coarsening.
On the smaller graphs, uk-2002 and arabic, our algorithm scales up to 128 cores obtaining a 35-fold and 32-fold speed-up compared to the case where our algorithm uses only one PE.
On the larger graphs sk-2005 and uk-2007 we need more memory. The smallest number of PEs needed to partition sk-2005 and uk-2007 on machine B was 256 PEs and 512 PEs, respectively.
We observe scalability up to 1K cores on the graph sk-2005 (although, to be fair, the running time does not decrease much in that range).
On uk-2007 we do not observe further scaling when going from 512 to 2048 cores, so it is unclear where the sweet spot is for this graph.
We also applied the \Id{minimal} configuration on machine B to the largest web graph uk-2007 in our benchmark set. The \Id{minimal} configuration needs 15.2 seconds to partition the graph when 512 cores are used.
The cut is 18.2\% higher compared to the cut of the \Id{fast} configuration, which needs roughly 47 seconds to perform the partitioning task and cuts approximately 1.03M edges on average. This is fifty-seven times faster than partitioning this graph using one core of machine A (which has faster cores).
\label{s:expstrongscalability}
\label{s:comparisonetc}
\section{Conclusion and Future Work}
\label{s:conclusion}
Current state-of-the-art graph partitioners have difficulties when
partitioning massive complex networks, at least partially due to ineffective coarsening.
We have demonstrated that high quality partitions of such networks can be obtained in
parallel in a scalable way.
This was achieved by using a new multilevel scheme based on the contraction of size-constrained clusterings, which can reduce the size of the graph very fast.
The clusterings have been computed by a parallelization of the size-constrained label propagation algorithm \cite{pcomplexnetworksviacluster}.
As soon as the graph is small enough, we use a coarse-grained distributed memory parallel evolutionary algorithm to compute a high quality partitioning of the graph.
By exploiting the size constraint of the graph partitioning problem, the parallel label propagation algorithm also serves as a very simple, yet effective, local search algorithm. Moreover, by integrating techniques like V-cycles and the evolutionary algorithm on the coarsest level, our system gives the user a gradual choice to trade solution quality for running time.
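For concreteness, the coarsening building block can be sketched as follows. This is a simplified, sequential Python illustration of size-constrained label propagation in the spirit of \cite{pcomplexnetworksviacluster}; the function name, data layout and tie-breaking are illustrative, not the actual distributed implementation:

```python
import random
from collections import defaultdict

def size_constrained_label_propagation(adj, max_cluster_size, rounds=3):
    """Cluster vertices by label propagation, but never let a cluster
    grow beyond max_cluster_size (sequential, illustrative sketch)."""
    n = len(adj)
    label = list(range(n))   # every vertex starts as its own cluster
    size = [1] * n           # current size of each cluster
    for _ in range(rounds):
        order = list(range(n))
        random.shuffle(order)            # random vertex order each round
        for v in order:
            # edge weight from v towards each neighbouring cluster
            score = defaultdict(int)
            for u in adj[v]:
                score[label[u]] += 1
            # move v to the heaviest adjacent cluster that still has room
            best = max(
                (l for l in score
                 if l == label[v] or size[l] + 1 <= max_cluster_size),
                key=lambda l: score[l],
                default=label[v],
            )
            if best != label[v]:
                size[label[v]] -= 1
                size[best] += 1
                label[v] = best
    return label

# A path graph 0-1-2-3 with a tight size constraint of two vertices per cluster.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
labels = size_constrained_label_propagation(adj, max_cluster_size=2)
assert max(labels.count(l) for l in set(labels)) <= 2
```

A coarsening step then contracts each cluster to a single vertex and recurses; the actual implementation runs this in parallel over distributed ghost-vertex data structures, which the sketch omits.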
The strength of our new algorithm unfolds in particular on social networks and web graphs, where both average solution quality \emph{and} running time are much better than those of ParMetis. Due to its ability to shrink highly complex networks drastically, our algorithm is able to compute high quality partitions of web scale networks in a matter of seconds, whereas ParMetis fails to compute any partition.
Moreover, our algorithm scales well up to thousands of processors.
Considering the good results of our algorithm, we want to further improve and release it.
An important emerging application scenario are large-scale graph processing toolkits based on cloud computing.
Significant savings for several algorithmic kernels within the toolkit GPS have been reported by using
graph partitioning~\cite{Salihoglu:2013:GGP:2484838.2484843} -- ParMetis in their case. Due to the superiority of our new algorithm compared to ParMetis on large
complex networks, further running time savings can be anticipated, also for related tools like
Google's Pregel~\cite{Malewicz:2010:PSL:1807167.1807184}, Apache Giraph (\url{https://giraph.apache.org/}), Giraph++~\cite{tian2013think}, and GraphLab~\cite{Low:2012:DGF:2212351.2212354}.
In future work we want to develop a very fast prepartitioner for such systems and we want to take advantage of the already computed partition in later multilevel iterations to further minimize the communication needed by the label propagation algorithm.
Our algorithm may also be very helpful if a prepartition of the graph is already available, e.g.\ from geographic initializations as in \cite{UganderB13}.
This prepartition could be directly fed into the first V-cycle and consecutively be improved.
In practical applications it may be advantageous to incorporate the information given by ground-truth communities if such information is available.
It will be very interesting to generalize our algorithm for graph clustering w.r.t. modularity.
For example, it should be straightforward to integrate the algorithm of Ovelgönne and Geyer-Schulz \cite{ogs12} to compute a high quality modularity graph clustering on the coarsest level of the hierarchy.
This would enable researchers to compute graph clusterings of huge unstructured graphs in a short amount of time.
Another issue that we want to look at are other objective functions.
For example, it might be interesting to integrate other objective functions such as maximum/total communication volume or maximum quotient graph degree into the evolutionary algorithm which is called on the coarsest graph of the hierarchy as well as into the label propagation algorithm.
\section*{Acknowledgements}
We would like to thank the Steinbuch Centre of Computing for giving us access to the Instituts Cluster II.
Moreover, we would like to thank Horst Gernert and his team for the support that we received while we performed the scalability experiments on the cluster.
\bibliographystyle{IEEEtran}
\section{Introduction}
{\it Introduction} -- Topological insulators (TI) have attracted great attention in condensed matter physics \cite{KM, BAB}.
The main feature of 2D topological insulators is the existence of conducting edge states protected by time-reversal symmetry (TRS). Each edge state is a helical Kramers doublet (KD) with opposite spins propagating in opposite directions. TRS forbids spin-flip backscattering within the same KD, but allows it between two different KDs. In a non-interacting system, backscattering between different doublets generated by disorder localises all edge states for an even number of KDs and leaves an odd number (at least one) of delocalised edge modes if the number of KDs is odd \cite{Bardarson}. The former case then corresponds to a trivial insulator whereas the latter must be referred to as topological. The TRS argument \cite{Bardarson} then states that the main distinction between topological and trivial insulators is the parity of the number of Kramers doublets. This is the conclusion reached on the basis of symmetries of the scattering matrix, which is valid for non-interacting systems only. The effect of interactions on the behaviour of the edge states under perturbations is of great importance. It was studied intensively for a system with a single KD \cite{chu1,chu2,chu3,chu4}, and for systems with one or more KDs \cite{Moore,San_G,San_G2,neu,ster}. One of the main conclusions of these studies was that an even number of KDs can be stabilised by interactions and remain conducting. On the other hand, to the best of our knowledge the existing experiments provide so far only evidence of the existence of 2D topological insulators with a single KD \cite{Wurz}.
In this Letter, we consider an arbitrary number $N$ of KDs existing at the edge of a 2D material. Assuming the realistic situation that all KDs exist within a layer narrower than the screening radius, we apply a model of featureless (Coulomb-blockade, or 'orthodox') interaction between them. We will show that for generic interaction parameters the CDW instability of repulsive fermions in a clean (translation invariant) system leads to the formation of a rigid structure (similar to the Wigner crystal in higher dimensions) stemming from the freezing of $(N-1)$ gapped modes. The remaining single gapless mode describes sliding of the total charge, and it gets pinned by a backscattering term generated by a random inhomogeneity, leading to a full localisation of the edge modes when the number of Kramers doublets exceeds two, $N>2$. The conductance is not fully suppressed by disorder in two situations only. In the case of a single Kramers doublet, $N=1$, no gaps due to interaction can be generated and the dimensionless edge conductance may be equal to one for a wide range of parameters. A pair of doublets, $N=2$, may also survive pinning by disorder (maintaining a dimensionless edge conductance equal to two), but the stability region is small and, therefore, difficult to reach and observe experimentally.
Note that we are interested in the weak interaction problem, hence not all symmetry allowed scattering process create spectral gaps. Rather, only processes that are relevant in the renormalisation group (RG) sense become potential candidates for opening gaps in the excitation spectrum. This is in contrast to a strong interaction problem (see, for example \cite{neu}, and references therein), in which case all symmetry allowed interactions have to be taken into account on equal footing, and the Haldane criterion \cite{hal} must be applied to singling out the maximal number of consistent conditions for spectral gaps.
We will show below that for $N$ repulsive KDs one can always find $(N-1)$ interaction processes that glue together the density profiles of different KDs, creating a single conducting mode (CDW regime) that may slide in a TRS system. There is another region of parameters where the CDW gets pinned and TRS is spontaneously broken. For this set of RG-relevant interactions, the Haldane criterion \cite{hal} is automatically satisfied and, therefore, our analysis is insensitive to the parity of the number of KDs.
The proper way to describe one-dimensional physics with interactions is the Luttinger liquid (LL) theory \cite{Gim}. To consider multiple edge states, one has to study a multi-channel system in the framework of the sliding Luttinger liquid (sLL) \cite{Sondhi,sLL,Kane2002,smectic,XY}. It is convenient to define a Luttinger matrix $\hat K$ \cite{Yur1, Yur2, KLY, Yur, ACh, JLY}, which is a generalisation of the Luttinger parameter $K$ of a single channel. The scaling dimensions of all symmetry-allowed perturbations can be expressed using this single matrix ${\hat K}$. This matrix provides information on the relevance of perturbations and, therefore, on the stability region of a topological insulator.
This manuscript is organised as follows: we start with the formulation of the model and introduction of the perturbations present in a clean (translation invariant) system. The renormalisation group (RG) analysis of this model will allow us to single out gapless modes and formulate low-energy effective model. We will then treat the effect of a disorder on the survived low-energy mode and build the phase diagrams with the focus on robustness of topological insulators against random disorder.
{\it The model} --
The Lagrangian describing a multichannel Luttinger liquid is built on two vector fields, ${\bm\phi}=(\phi_1\,,...\,,\phi_N)$ and ${\bm\theta}=(\theta_1\,,...\,,\theta_N)$, parametrising excitation densities, $\rho_i=\partial_x\phi_i/2\pi$, and currents, $j_i=\partial_x\theta_i/2\pi$, in each channel $i$ ($1\leq i \leq N$)\cite{Sondhi,sLL,Kane2002,smectic,XY}. The Lagrangian, ${\cal L}_0$, written in terms of the composite field
${\bm\Psi}^{\rm T}=({\bm\phi}^{\rm T}\,,{\bm\theta}^{\rm T})$,
\begin{equation}\label{L0}
{\cal L}_0=\frac{1}{8\pi}{\bm \Psi}^{\rm T}\,\left[{\hat\tau}_1\,\partial_t+{\hat V}\,\partial_x\right]\,\partial_x\,{\bm \Psi}\,,
\end{equation}
includes the block-diagonal matrix ${\hat V}={\rm diag}[{\hat V}_+\,,{\hat V}_-]$ with the blocks describing density-density, ${\hat V}_+$, and current-current, ${\hat V}_-$, interactions; ${\hat\tau}_1$ is a Pauli matrix.
The interaction matrices for KDs in topological insulators should be distinguished from those of a standard multi-channel (array of wires) model, where inter-wire interactions decay with the distance between wires, or where only nearest-neighbour (adjacent wires) interaction is assumed. Since all KDs are localised near an edge, their spatial separation can be much shorter than the screening length of the interaction. This is the model we analyse below. Taking all velocities equal to each other, we can set them to unity. Inter-KD interactions are assumed to be equivalent for all pairs of KDs:
\begin{equation}
V^{ij}_{\pm}=\left(1+g_{\pm}\right)\,\delta_{ij}+g'_{\pm}\,\left(1-\delta_{ij}\right)\,,
\end{equation}
All parameters are defined following standard nomenclature: $g_{\pm}=g_4\pm g_2$ with coupling $g_4$ being an interaction strength between electrons moving in the same direction (right- with right-movers, and left- with left-movers), and $g_2$ is the interaction strength between electrons moving in the opposite directions within the same KD. The couplings with prime have similar meaning for inter-channel interactions.
It is convenient to represent the matrices as sums of two terms acting in orthogonal subspaces,
\begin{equation}\label{V}
\hat{V}_{\pm}=v_{\parallel}\,K_{\parallel}^{\mp 1}\,{\hat\Pi}+v_{\perp}\,K_{\perp}^{\mp 1}\left(\hat{\mathbb{1}}-{\hat\Pi}\right)\,,
\end{equation}
with two projectors in channel space defined by
\begin{equation}
\hat{\Pi}=N^{-1}\,{\bf e}\otimes{\bf e}\,,\quad {\bf e}=\left(1,1, ..., 1\right)\,,\quad
{\hat \Pi}_{\perp}={\hat{\mathbb{1}}}-{\hat\Pi}.
\end{equation}
The 'effective' Luttinger parameters
\begin{equation}\label{Keff}
K_{\perp}=K\,\sqrt{\frac{1-\alpha_-}{1-\alpha_+}}\,,\quad K_{\parallel}=K\,\sqrt{\frac{1+(N-1)\alpha_-}{1+(N-1)\alpha_+}}\,,
\end{equation}
are related to the standard Luttinger parameter $K$, defined in the absence of inter-channel interactions, and to the inter-channel couplings $\alpha_{\pm}=g'_{\pm}/(1+g_{\pm})$ (we omit definitions of the velocities because their values are irrelevant for the analysis below).
{\it Interactions} - The model of interacting KDs contains terms describing multi-particle interactions beyond the quadratic (forward scattering) Lagrangian. The most general interaction is written as
\begin{equation}\label{int}
{\cal L}_{\rm int}=\sum\limits_{Q=0}\,h({\bf j},{\bf q})\,e^{i({\bf j}{\bm\phi}+{\bf q}{\bm\theta})}\,,
\end{equation}
where the summation is restricted by the neutrality requirement $Q=0$, since the charge of the vertex (the number of created minus the number of annihilated particles) labelled by the pair $({\bf j},{\bf q})$ is equal to $Q=2{\bf q}{\bf e}$. The components of the vectors ${\bf j}$ and ${\bf q}$ take integer or half-integer values, and the corresponding components $j_i$ and $q_i$ must be both integer or both half-integer.
The possible amplitudes of the couplings are related to each other by hermiticity ${\bar h}({\bf j},{\bf q})=h(-{\bf j},-{\bf q})$ and time-reversal symmetry (TRS):
\begin{equation}\label{TRS}
h({\bf j},{\bf q})=h({\bf j},-{\bf q})\,(-1)^J\,,\quad J={\bf j}{\bf e}\,.
\end{equation}
Note that the neutrality requirement $Q=0$ implies that $J$ is an integer.
{\it Relevance of perturbations} --
Since we are dealing with a weak interaction case, not all perturbations present in Eq.~(\ref{int}) dictate the system state. Only those terms that are relevant in the renormalisation group sense should be taken into account. Discarding irrelevant terms we will be left with the effective low-energy action. The scaling dimension of an arbitrary vertex from interactions Eq.~(\ref{int}) can be written as
\begin{equation}\label{d}
\Delta({\bf j},{\bf q})={\bf j}\,{\hat K}\,{\bf j}+{\bf q}\,{\hat K}^{-1}\,{\bf q}\,,
\end{equation}
where the matrix ${\hat K}$ is the generalisation of the single-channel Luttinger parameter to the multi-channel case (please note that it is not a statistics matrix, sometimes called the ${\cal K}$-matrix, used in the description of fractional liquids). The matrix ${\hat K}$ employed in Eq.~(\ref{d}) is the solution of the algebraic matrix equation \cite{Yur,KLY}:
\begin{equation}\label{K}
{\hat K}\,{\hat V}_+\,{\hat K}={\hat V}_-\,.
\end{equation}
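This matrix equation admits a quick numerical cross-check. The sketch below (arbitrary illustrative coupling values; NumPy assumed; not part of the original derivation) exploits the fact that ${\hat V}_+$ and ${\hat V}_-$ commute, so that ${\hat K}=({\hat V}_-{\hat V}_+^{-1})^{1/2}$, and verifies that the solution has the projector structure with the effective parameters $K_{\parallel}$ and $K_{\perp}$ quoted above:

```python
import numpy as np

def interaction_matrix(N, g, gp):
    # V_pm: (1+g_pm) on the diagonal, g'_pm off the diagonal
    return (1.0 + g) * np.eye(N) + gp * (np.ones((N, N)) - np.eye(N))

def luttinger_matrix(Vp, Vm):
    # Solve K V_+ K = V_- for commuting symmetric positive matrices:
    # K = (V_- V_+^{-1})^{1/2}, evaluated via an eigendecomposition.
    M = Vm @ np.linalg.inv(Vp)
    w, U = np.linalg.eigh((M + M.T) / 2)
    return U @ np.diag(np.sqrt(w)) @ U.T

N = 4
g4, g2, g4p, g2p = 0.3, 0.2, 0.25, 0.15   # arbitrary repulsive couplings
gP, gM = g4 + g2, g4 - g2                  # g_pm = g4 +- g2
gPp, gMp = g4p + g2p, g4p - g2p            # g'_pm
Vp = interaction_matrix(N, gP, gPp)
Vm = interaction_matrix(N, gM, gMp)
K = luttinger_matrix(Vp, Vm)

# Predicted projector form with the effective Luttinger parameters.
K0 = np.sqrt((1 + gM) / (1 + gP))          # single-channel K (standard g-ology form)
aP, aM = gPp / (1 + gP), gMp / (1 + gM)    # alpha_pm = g'_pm / (1 + g_pm)
Kperp = K0 * np.sqrt((1 - aM) / (1 - aP))
Kpar = K0 * np.sqrt((1 + (N - 1) * aM) / (1 + (N - 1) * aP))
Pi = np.ones((N, N)) / N
K_pred = Kpar * Pi + Kperp * (np.eye(N) - Pi)

assert np.allclose(K @ Vp @ K, Vm)   # K solves the matrix equation
assert np.allclose(K, K_pred)        # and has the projector structure
```

The same check passes for any $N$ and any couplings for which ${\hat V}_{\pm}$ remain positive definite.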
Solving this equation for the interaction matrices ${\hat V}_{\pm}$ defined in the Eq.~(\ref{V}),
\begin{equation}
{\hat K}=K_{\parallel}\,{\hat\Pi}+K_{\perp}\,{\hat\Pi}_{\perp}\,,
\end{equation}
one easily finds the scaling dimensions of the vertices in the interaction term:
\begin{equation}\label{delta}
\Delta({\bf j},{\bf q})=K_{\perp}\,{\bf j}^2+K^{-1}_{\perp}\,{\bf q}^2+(K_{\parallel}-K_{\perp})\,\frac{J^2}{N}\,.
\end{equation}
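To make the step from Eq.~(\ref{d}) to Eq.~(\ref{delta}) explicit: using ${\bf j}\,{\hat\Pi}\,{\bf j}=J^2/N$ and the neutrality condition $Q=2{\bf q}{\bf e}=0$, which eliminates the longitudinal part of ${\bf q}$, one finds

```latex
\begin{eqnarray}\nonumber
{\bf j}\,{\hat K}\,{\bf j}&=&K_{\parallel}\,\frac{J^2}{N}
+K_{\perp}\left({\bf j}^2-\frac{J^2}{N}\right)
=K_{\perp}\,{\bf j}^2+\left(K_{\parallel}-K_{\perp}\right)\frac{J^2}{N}\,,\\ \nonumber
{\bf q}\,{\hat K}^{-1}\,{\bf q}&=&K_{\parallel}^{-1}\,\frac{({\bf q}{\bf e})^2}{N}
+K_{\perp}^{-1}\left({\bf q}^2-\frac{({\bf q}{\bf e})^2}{N}\right)
=K_{\perp}^{-1}\,{\bf q}^2\,,
\end{eqnarray}
```

and the sum of the two lines reproduces Eq.~(\ref{delta}).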
The perturbations in Eq.~(\ref{int}) may have random amplitudes $h$ with mean zero value (stemming from disorder) and non-random amplitudes allowed in a translation invariant system. They should be treated differently. Let us first analyse the latter.
{\it Clean system} --
Perturbations allowed in a translation invariant system are further restricted by momentum conservation, $J=0$. The scaling dimensions of these zero-current interactions are
\begin{equation}\label{delta-inv}
\Delta({\bf j},{\bf q})=K_{\perp}\,{\bf j}^2+K^{-1}_{\perp}\,{\bf q}^2\,.
\end{equation}
The most RG-dangerous terms are known (see e.g. \cite{stab1,stab2, stab3, stab4, stab5}). They correspond to the minimal values of the scaling dimensions. There are three different terms, but one of them, corresponding to the choice ${\bf j}=\pm{\bf q}={\bf t}_{ij}/2$ (where the vector ${\bf t}_{ij}=(0,...,1_i, ..., -1_j, ..., 0)$ with arbitrary $i\neq j$), is a single-particle scattering and will be ignored in our analysis because these processes have already been accounted for in constructing the non-interacting KDs model. The other two terms are known to be responsible for the charge density wave,
\begin{equation}
{\cal L}^{\rm cdw}\sim\sum\,h^{\rm cdw}_{ij}\,e^{i(\phi_i-\phi_j)}\,,
\end{equation}
with scaling dimension $\Delta^{\rm cdw}=\Delta({\bf t}_{ij}\,,0)$, and superconductivity,
\begin{equation}
{\cal L}^{\rm sc}\sim\sum\,h^{\rm sc}_{ij}\,e^{i(\theta_i-\theta_j)}\,,
\end{equation}
with scaling dimension $\Delta^{\rm sc}=\Delta(0\,,{\bf t}_{ij})$. The explicit expressions for their scaling dimensions follow from Eq.~\eqref{delta-inv}:
\begin{eqnarray}
\Delta^{\rm cdw}=2\,K_{\perp}\,,\quad \Delta^{\rm sc}=2\,K^{-1}_{\perp}\,.
\end{eqnarray}
Translation invariant perturbations are RG-relevant when their scaling dimensions are below the physical dimension, $d=2$ in this case. Since $\Delta^{\rm cdw}\,\Delta^{\rm sc}=4$, one of the two two-particle perturbations is {\it always} relevant (its scaling dimension is smaller than $2$), and it therefore freezes the $N-1$ differences between the corresponding bosonic fields and opens $N-1$ gaps. Before we turn to the effect of disorder on the remaining single gapless mode, we have to separate gapped and gapless modes to write the effective low-energy field theory of the translation invariant system. This task can be achieved by an orthogonal transformation of both the ${\bm\phi}$- and ${\bm\theta}$-vector fields that diagonalises the Hamiltonian and preserves the commutation relations. The orthogonal matrix of the form ${\hat O}=({\bf e}_1\,, ..., {\bf e}_{N-1}\,,{\bf e}/\sqrt{N})$ (with all vectors mutually orthogonal) achieves the goal. The same procedure can be described by the following separation of the vector fields into orthogonal subspaces using the projectors introduced above,
\begin{equation}
{\bm\phi}=\frac{\Phi\,{\bf e}}{\sqrt{N}}+{\bm\phi}_{\perp}\,,\quad
{\bm\phi}_{\perp}={\hat\Pi}_{\perp}{\bm\phi}\,,
\end{equation}
and similar expression for the conjugate ${\bm\theta}$-fields. This transformation may be thought of as an introduction of the 'centre-of-mass' coordinates ($\Phi$ and $\Theta$) and the relative to it $(N-1)$ 'positions' ${\bm\phi}_{\perp}$ and ${\bm\theta}_{\perp}$. The Lagrangian in the new fields decomposes into two terms ${\cal L}_0={\cal L}_{\perp}+{\cal L}_{\parallel}$. The fields ${\bm\Psi}_{\perp}=({\bm\phi}_{\perp},{\bm\theta}_{\perp})$ are gapped by $(N-1)$ RG-relevant terms
\begin{eqnarray}\nonumber
{\cal L}_{\perp}&=&\frac{1}{8\pi}\,{{\bm\Psi}}_{\perp}\left[\tau_1\partial_t+v_{\perp}
\,{\hat \kappa}_{\perp}\,\partial_x\right]\partial_x{{\bm\Psi}}_{\perp}\\
&+&\sum_{Q=J=0}\,h({\bf j}, {\bf q})\,e^{i({\bf j}{\bm\phi}_{\perp}+{\bf q}{\bm\theta}_{\perp})}
\end{eqnarray}
where ${\hat \kappa}_{\perp}={\rm diag}\left[K^{-1}_{\perp}\,{\hat{\mathbb{1}}}\,,K_{\perp}\,{\hat{\mathbb{1}}}\right]$.
The 'internal' degrees of freedom are necessarily gapped by either CDW or SC coupling.
For the repulsive interaction, the case we are analysing in this paper, $K_{\perp}<1$, and the most dangerous terms with scaling dimension $\Delta<2$ are the $(N-1)$ terms with ${\bf q}=0$ and $J=0$.
The 'centre-of-mass' coordinates drop out of all terms describing inter-channel coupling in a translation invariant system due to the $J=0$ restriction. The corresponding mode cannot be gapped, and the Lagrangian of the gapless $\Phi$ and $\Theta$ fields,
\begin{equation}
{\cal L}_{\parallel}=\frac{1}{4\pi}\,\partial_t\Theta\,\partial_x\Phi-
\frac{v_{\parallel}}{8\pi}\left[\frac{1}{K_{\parallel}}\left(\partial_x\Phi\right)^2
+K_{\parallel}\left(\partial_x\Theta\right)^2\right]\,,
\end{equation}
describes low-energy behaviour.
{\it Disorder} --
Inhomogeneity breaks translation invariance, allowing $J\neq 0$ terms to appear in the Hamiltonian. Allowed terms should not contain gapped modes: the ${\bm\phi}_{\perp}$-field is frozen, and the conjugate ${\bm\theta}_{\perp}$-field would make the corresponding terms irrelevant (in particular, single-particle inter-channel backscattering). The field $\Theta$ cannot appear in the interactions due to the $Q=0$ neutrality restriction. This consideration leads to the following Lagrangian describing a low-energy disordered system of $N$ KDs:
\begin{equation}\label{dis}
{\cal L}_{\rm dis}={\cal L}_{\parallel}+\sum_{n=1}^{\infty}\,h_{2n}\,e^{i\frac{2n}{\sqrt{N}}{\Phi}}\,,\quad
h_J=\sum_{\left\{{\bf j}:\,{\bf j}{\bf e}=J\right\}}\,h({\bf j})\,.
\end{equation}
The restriction $J=2n$ in this summation is the result of TRS requirement $(-1)^J=1$ (see Eq.~(\ref{TRS}) with ${\bf q}=0$). The scaling dimension of each interaction term in Eq.~(\ref{dis}) follows from Eq.~(\ref{delta}):
\begin{equation}
\Delta_J=J^2K_{\parallel}/N\,.
\end{equation}
The most dangerous term satisfying TRS corresponds to $J=2$. One example would be
the simultaneous backscattering of two particles in two different channels,
\begin{equation}
L^{J=2}_{\rm dis}\sim \int\,dx\,\xi_{ij}(x)\,{\bar R}_i\,{\bar R}_j\,L_i\,L_j+\mathrm{c.c.}\,,
\end{equation}
with a random anti-symmetric matrix $\xi_{ij}$. Since the disorder average of $\xi_{ij}$ is zero, the scaling dimension of this term should be compared with $3/2$.
If the $J=2$ backscattering terms (a pinning potential for a structure similar to the Wigner crystal) were irrelevant, this single mode would be conducting, with dimensionless conductance equal to the total number of Kramers doublets. The conductance cannot be changed by irrelevant perturbations acting on the collective 'centre-of-mass' coordinate; this fact is reflected in the relationship between the total density and current and the centre-of-mass variables:
\begin{equation}
\rho=\frac{\sqrt{N}}{2\pi}\,\partial_x\,\Phi\,,\quad j=\frac{\sqrt{N}}{2\pi}\,\partial_x\,\Theta\,.
\end{equation}
The 'Wigner crystal' slides if the scaling dimension, $\Delta$, of the $J=2$ processes is above $3/2$. Otherwise, when the scaling dimension satisfies
\begin{equation}
\Delta=\frac{4\,K_{\parallel}}{N}\leq\frac{3}{2}\,,
\end{equation}
the multi-particle backscattering processes pin the CDW \cite{Gim}.
{\it Spontaneous TRS breaking} --
The pinning of the Wigner crystal structure is always accompanied by TRS breaking. The expectation value of terms like $\cos({\bf j}{\bm\phi})$ must vanish in a TRS system if the corresponding vector ${\bf j}$ belongs to the sector of odd integer $J={\bf j}{\bf e}$. Freezing all $N$ fields ${\bm\phi}$ leads to all such terms acquiring a finite value, which means spontaneous TRS breaking.
Note that we have not referred to the Haldane criterion \cite{hal}, since we are dealing with a weak interaction problem and, therefore, do not assume that the amplitudes of all allowed processes are infinitely strong and open gaps. Our choice of interactions was motivated by the RG analysis, and only those terms that are RG-relevant became potential candidates for opening gaps. It turned out that there are exactly $(N-1)$ such terms and they do not break TRS. All these terms contain only the density fields ${\bm\phi}$ and, therefore, commute with each other, making a check of the Haldane compatibility condition \cite{hal} unnecessary. An additional term that could potentially gap the remaining gapless mode should also contain only the density field, since all current fields are irrelevant. When this additional $J\neq 0$ term becomes relevant, it necessarily breaks TRS, and this fact is not related to the parity of the number of KDs (similar to the fractional topological insulator \cite{ster}).
Let us comment here on the correspondence between our weak interaction problem and the strong interaction problem analysed in \cite{neu}. As was shown above, the repulsion implies RG relevance of the terms containing density ${\bm\phi}$-fields only and, therefore, one immediately constructs $(N-1)$ vertex operators with $J={\bf j}{\bf e}=0$, since the subspace of the vectors ${\bf j}$ orthogonal to the vector ${\bf e}$ is $(N-1)$-dimensional. Disorder allows vertices with $J\neq 0$ which may gap the last mode, exhausting the $N$-dimensional space of vectors ${\bf j}$. Our construction is dictated by the RG analysis and leaves us no choice. Had we dealt with a strong interaction problem and used $(N-1)$ current ${\bm\theta}$-fields instead (as was done in \cite{neu}), we would immediately present $(N-1)$ interaction vertices with $Q={\bf q}{\bf e}=0$. But the extra term that could potentially gap the remaining mode could not be picked from the same current fields due to the neutrality condition $Q=0$. This vertex neutrality condition breaks the duality between density and current vertices. The extra term could come from invoking the conjugate density field, and that is where the Haldane criterion \cite{hal} becomes crucial in justifying that the additional term is consistent with the already built $(N-1)$ vertices. One might think that this situation may appear in the weak interaction problem when superconducting vertices become relevant perturbations at $K_{\perp}>1$, but it is not obvious, because to describe superconductivity one has to include anomalous terms that break the $Q=0$ neutrality condition.
{\it Phase diagram} -- In general, the phase diagram should be drawn in three-dimensional space of parameters characterising intra-channel interaction (standard Luttinger parameter $K$) and two (density-density and current-current) inter-wire interactions $\alpha_{\pm}$. Below we will analyse in detail the commonly accepted model that includes only density-density interaction assuming current-current interactions matrix $\hat{V}_-=\hat{\mathbb{1}}$ in Eq.~\eqref{V}. The two parameters $K$ and $\alpha_+$ characterise intra- and inter-mode interactions, respectively, and define the effective Luttinger parameters,
\begin{eqnarray}
K_{\perp}&=&K\,(1-\alpha_+)^{-1/2}\,,\\
K_{\parallel}&=&K\,\left[1+(N-1)\alpha_+\right]^{-1/2}\,.
\end{eqnarray}
The region of existence of a delocalised (conducting) mode of repulsive electrons, $K < 1$, in the CDW regime, $K_{\perp} < 1$, with pinning by disorder irrelevant, $\Delta > 3/2$, is defined by the inequality:
\begin{equation}\label{phase}
\frac{3N}{8}\left[1+(N-1)\alpha_+\right]^{1/2} < K < (1-\alpha_+)^{1/2}\,.
\end{equation}
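For the reader's convenience, the two bounds in Eq.~(\ref{phase}) follow in one line each from the irrelevance condition $\Delta=4K_{\parallel}/N>3/2$ and the CDW condition $K_{\perp}<1$:
\begin{eqnarray*}
\frac{4K_{\parallel}}{N}>\frac{3}{2}
&\Longrightarrow&
K\left[1+(N-1)\alpha_+\right]^{-1/2}>\frac{3N}{8}
\;\Longrightarrow\;
K>\frac{3N}{8}\left[1+(N-1)\alpha_+\right]^{1/2}\,,\\
K_{\perp}=K\,(1-\alpha_+)^{-1/2}<1
&\Longrightarrow&
K<(1-\alpha_+)^{1/2}\,.
\end{eqnarray*}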
It is clear from these inequalities that {\it more than two ($N> 2$) interacting Kramers doublets are always pinned by disorder}. In Fig.~\ref{CDW_two} we show regions of stability for systems with $N=1$ and $N=2$ KDs. Both regions are defined by the inequalities Eq.~(\ref{phase}). In the single KD situation there is no inter-channel interaction and one should put $\alpha_+=0$ in the inequalities Eq.~\eqref{phase} for $N=1$.
Even for a system with a pair of KDs ($N=2$), the single conducting state survives pinning only in a small region of interaction parameters. It exists for weak interactions and immediately disappears if either the inter- or intra-mode interaction becomes strong. There is no solution to the inequality \eqref{phase} for $N>2$, meaning that a higher number of KDs is unobservable: the system becomes insulating for any inter- and intra-interaction strength due to pinning by disorder.
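The absence of a solution for $N>2$ is easy to confirm numerically. The sketch below (an illustration, not part of the paper) scans the allowed range of $\alpha_+$ for the window defined by Eq.~(\ref{phase}).

```python
import numpy as np

def conducting_window_exists(N, n_grid=1000):
    """Check whether (3N/8)*sqrt(1+(N-1)a) < K < sqrt(1-a) admits any
    solution for 0 <= a < 1; a nonempty window means the single edge
    mode survives pinning by disorder."""
    a = np.linspace(0.0, 0.999, n_grid)
    lower = (3.0 * N / 8.0) * np.sqrt(1.0 + (N - 1) * a)
    upper = np.sqrt(1.0 - a)
    return bool(np.any(lower < upper))
```

For $N=1,2$ a finite window opens near $\alpha_+=0$, while for any $N\geq 3$ the lower bound already exceeds the upper one at $\alpha_+=0$ (since $3N/8>1$) and the window is empty.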
\begin{figure}
\includegraphics[width=0.9 \linewidth]{phase_diagram.pdf}
\caption{The phase diagram for a set of $N$ Kramers doublets under repulsive density-density interaction. The only two stable states, $N=1$ and $N=2$, are shown by the light and dark blue regions, respectively.}
\label{CDW_two}
\end{figure}
{\it Conclusions} --
We have studied a topological insulator with $N$ Kramers doublets at the edge in a model of 'Coulomb blockade' type (long range, featureless) interaction. This type of interaction is relevant when the screening radius is much larger than the Fermi wavelength (i.e.\ the width of the region occupied by the edge states). We have shown that in a clean system the perturbations allowed by TRS always open $N-1$ gaps. In a non-superconducting regime, when the relevant perturbation in a clean system is of CDW type, the opening of $N-1$ gaps reduces the number of conducting edge channels to one. The disorder can either reduce the number of conducting channels to zero or leave the {\it only} conducting channel unaffected. We have found that only a single Kramers doublet, or a pair of them, may survive pinning by disorder. The phase diagram contains a small pocket where both $N=1$ and $N=2$ are conducting. The relatively small size of this region might be responsible for the elusiveness of experimental observation of states with two Kramers doublets. Any higher number $N>2$ of Kramers doublets, irrespective of parity, gets fully localised by disorder when density-density repulsion is taken into account. This conclusion follows from the fact that a featureless long range interaction between Kramers doublets in a topological insulator leads to the formation of a {\it single} gapless edge mode that gets easily pinned by disorder-induced two-particle backscattering.
{\it Acknowledgments} --
This work was supported by the Leverhulme Trust Grant No.\ RPG-2016-044 (IVY). The authors are grateful for hospitality extended to them at the Center for Theoretical Physics of Complex Systems, Daejeon, South Korea.
\section{Introduction}
Quasielastic scattering of light, x-rays, and neutrons has proven to be a powerful experimental tool for the study of complex fluids. For dilute macromolecule solutions, quasielastic scattering has extensive analytic applications for particle sizing\cite{phillies1990z}. With non-dilute solutions and more complex systems, quasielastic scattering reveals consequences of intermacromolecular and other interactions.
In a substantial enhancement of the classical method, the fluid of interest is doped with trace concentrations of monodisperse probe particles. Probes have ranged from small molecules to micron-size colloidal particles. The diffusion of probe particles through the complex fluid is then observed. Early successful studies of probe diffusion in complex fluids using quasielastic light scattering were by Hallett and co-workers\cite{gray1974a,turner1976a}, who in 1974 and 1976 observed the diffusion of polystyrene spheres through hyaluronic acid and dextran solutions, and compared probe diffusion with the rheological properties of their solutions. The subsequent four decades have seen an enormous extension of this approach\cite{phillies2011a}, including studies of probes in highly viscous simple liquids\cite{phillies1981z}, polymer melts\cite{lin1986a}, chemically cross-linked gels\cite{schmidt1989b}, surfactant solutions\cite{phillies1993b}, protein solutions\cite{ullmann1985a}, and the interior of living cells\cite{lubyphelps1987a}. More recently, quasielastic x-ray scattering has been used to extend the range of distance scales over which diffusion can be observed\cite{dierker1995a}.
Probe diffusion has also been studied by a series of other physical techniques, each technique being sensitive to a distinctive range of time and distance scales or other features of probe motion. For example, fluorescence correlation spectroscopy\cite{magde1972a}, which by varying the probe concentration can measure both the self diffusion coefficient and the mutual diffusion coefficient of the labeled species\cite{phillies1975a,scalettar1989a}, has in recent years been extensively used to measure tracer diffusion. Recent work using probe diffusion is sometimes termed \emph{microrheology}, the term microrheology referring to a particular model\cite{mason1996a} for interpreting quasielastic scattering spectra. In some studies, probe diffusion has been viewed as being of interest because it is a path to measuring the viscoelastic properties of the solution. In other studies, probe diffusion has been viewed as being of interest because it measures solution properties that are not the same as the viscoelastic properties of the solution.
A valuable complement to quasielastic scattering studies of probe diffusion is provided by measurements on probes subject to external driving forces. The overwhelming bulk of the literature on driven probe motion is provided by studies of capillary electrophoresis. The electrophoretic literature primarily emphasizes improving separations of different charged species. However, electrophoretic separations often use solutions of neutral polymers as the support medium. While performing separations, these experiments are therefore also giving information on the dynamics of the neutral polymers\cite{phillies2011a,phillies2012e}. A substantial literature exists on buoyancy-driven probe motion in the ultracentrifuge\cite{laurent1963a,ye1998a}. In a few experiments, magnetic\cite{hough1999a,schmidt2000a} or optical\cite{amblard1996a} tweezers were used to examine oscillatory driven movements of probes. Tweezer experiments are particularly interesting because the experimenter can separately control two of the three: drive force, drive frequency, and particle displacement. An alternative complement to probe diffusion is provided by tracking probes in complex fluids in which the fluid itself is performing driven motion, e.g., shear\cite{tapadia2006a}.
Quasielastic scattering of light and other photons is most commonly studied via correlation methods, in which the quantity measured directly is the intensity-intensity time correlation function
\begin{equation}
S(q,t) = \langle I(q,\tau) I(q, \tau+t) \rangle.
\label{eq:Sqtdef}
\end{equation}
Here $q$ is the magnitude of the scattering vector, $I(q,\tau)$ and $I(q, \tau+t)$ are the scattering intensities over short time intervals near $\tau$ and $\tau+t$, and the brackets $\langle \cdots \rangle$ represent an average. Scattering is said to be due to scatterers within the medium, scatterers generally being represented mathematically by points whose locations are known. Scattering from extended bodies, such as high-molecular-weight polymer chains, is often treated by representing the scattering body as a series of scattering points whose relative positions are partly fixed. If the volume being observed is much larger than the volumes over which particle positions and displacements are correlated, quasielastic scattering corresponds to the intermediate structure factor (or field correlation function) $g^{(1)}(q,t)$ via\cite{crosignani1975a}
\begin{equation}
S(q,t) = A |g^{(1)}(q,t)|^{2} + B.
\label{eq:Sqg1def}
\end{equation}
In this equation $A$ and $B$ are constants determined by details of the experimental apparatus; these constants have no effect on the time dependence. Homodyne rather than heterodyne detection of the scattered light is assumed. The factorization of $S(q,t)$ into $g^{(1)}(q,t)$ is sometimes termed the ``Gaussian approximation''. This Gaussian approximation is not related to the Gaussian approximation for the particle displacements as discussed below.
The intermediate structure factor is in turn determined by the time-dependent positions of the scattering particles via
\begin{equation}
g^{(1)}(q,t) = \left\langle \sum_{i=1}^{N} \sum_{j=1}^{N} \exp(\imath {\bf q} \cdot ({\bf r}_{i}(t+\tau) - {\bf r}_{j}(\tau) )) \right\rangle
\label{eq:g1qgeneral}
\end{equation}
In this equation, sums on $i$ and $j$ proceed separately over all $N$ particles in the system, while ${\bf r}_{i}(t+\tau)$ and ${\bf r}_{j}(\tau)$ are the locations of scatterers $i$ and $j$ at times $t+\tau$ and $\tau$, respectively.
In applying eq \ref{eq:g1qgeneral}, two particularly interesting experimental circumstances are described as mutual diffusion measurements and as probe diffusion measurements. In a measurement on a binary solvent: scatterer system, the scattering particles may be concentrated or dilute. Quasielastic scattering on such a system measures the mutual diffusion coefficient, which describes the diffusion of the scatterers down a concentration gradient\cite{phillies1974a,phillies1974b}. Tracer diffusion experiments examine ternary solvent: matrix : probe systems. In these systems the matrix component may be dilute or concentrated, is substantially responsible for the system's rheological and other interesting properties, but is nearly optically inert. Conversely, the probe (tracer) component is dilute, has virtually no effect on rheological and other properties of the solvent: matrix system, but dominates scattering by the ternary mixture. If matrix scattering is not entirely negligible, there are established, reliable ways to isolate the probe scattering, based on spectral subtraction at the level of the field correlation function.
Because probe particles very nearly do not interact with each other, the field correlation function for probe diffusion reduces (up to normalization constants) to the incoherent scattering function
\begin{equation}
g^{(1s)}(q,t) = \langle \exp(\imath q \Delta x(t))\rangle.
\label{eq:g1sandr}
\end{equation}
with $\Delta x(t)$ being the component parallel to ${\bf q}$ of $\mathbf{\Delta r}(t) = {\bf r}_{i}(t+\tau) - {\bf r}_{i}(\tau)$. Probe motions perpendicular to $\mathbf{q}$ do not contribute to $g^{(1s)}(q,t)$. In moving from eq \ref{eq:g1qgeneral} to eq \ref{eq:g1sandr}, terms of eq \ref{eq:g1qgeneral} in which $i \neq j$ were taken to average to zero, because the relative positions of dilute probes are very nearly uncorrelated. An expression formally identical to eq \ref{eq:g1sandr} describes diffusion measurements using pulsed-field-gradient nuclear magnetic resonance, though with this method $q$ has an entirely different meaning, namely in the simplest case $q = \gamma \delta g$, where $\gamma$ is the gyromagnetic ratio, $\delta$ is a pulse width, and $g = dB/dz$ is the field gradient.
The averages in eqs \ref{eq:g1qgeneral} and \ref{eq:g1sandr} may formally be phrased as averages over displacement distribution functions such as $P(\Delta x, t)$, which gives the time-dependent probability that a scattering particle will displace through $\Delta x$ during time $t$. Two previous papers\cite{phillies2005a,phillies2012a} examined how $g^{(1s)}(q,t)$ and $g^{(1)}(q,t)$ are actually related to the displacement distribution functions. The two prior papers were primarily concerned with establishing formal relationships between dynamic structure factors and probabilities for scatterer displacement. The significance of these relationships for the interpretation of experimental measurements was at most a secondary consideration. This paper focuses on interpreting experimental measurements.
Section II of this paper presents the correct general relationship between $g^{(1s)}(q,t)$ and $P(\Delta x, t)$. Section III discusses the special case of probe particles in a purely Newtonian fluid. Section IV notes experimental findings bearing on the relative significance of Sections II and III. Section V considers paths for interpreting probe diffusion spectra. Section VI treats the determination of $P(\Delta x, t)$, relationships between $g^{(1s)}(q,t)$ and trapping/hopping behavior, and, closing on a positive note, cases in which quasielastic scattering from diffusing probes, correctly interpreted, has given valuable information about complex fluids and the objects diffusing in them.
\section{General Case\label{sectiongeneralcase}}
This section summarizes what $g^{(1s)}(q,t)$ and $g^{(1)}(q,t)$ actually reveal about particle displacements. Extended derivations have appeared previously in two earlier papers, refs \onlinecite{phillies2005a} and \onlinecite{phillies2012a}, and are not repeated here. For probe diffusion, the intermediate structure factor is always determined by the displacement distribution function, namely the average in eq \ref{eq:g1sandr} can be written as
\begin{equation}
g^{(1s)}(q,t) = \int_{-\infty}^{\infty} d(\Delta x) \exp(i q \Delta x) P(\Delta x, t).
\label{eq:g1sPDelta}
\end{equation}
$P(\Delta x, t)$ is taken to be properly normalized.
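As a minimal numerical illustration of eq \ref{eq:g1sPDelta} (a sketch with an arbitrarily chosen variance, not from the original analysis), one can verify on a grid that a Gaussian $P(\Delta x, t)$ of variance $\sigma^{2}$ transforms into $\exp(-q^{2}\sigma^{2}/2)$:

```python
import numpy as np

def g1s_from_P(q, x, P):
    """Evaluate g^{(1s)}(q,t) = integral dx exp(iqx) P(x,t) on a uniform grid."""
    dx = x[1] - x[0]
    return np.sum(np.exp(1j * q * x) * P) * dx

# Gaussian displacement distribution with (illustrative) variance sigma2 = <(Delta x)^2>
sigma2 = 0.5
x = np.linspace(-20.0, 20.0, 40001)
P = np.exp(-x**2 / (2.0 * sigma2)) / np.sqrt(2.0 * np.pi * sigma2)
```

Within numerical accuracy the transform reproduces the Gaussian result $\exp(-q^{2}\sigma^{2}/2)$; substituting any non-Gaussian $P$ makes the transform deviate from that form.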
On taking a Taylor series expansion of the exponential in powers of $q$, reflection symmetry, namely $P(\Delta x, t) = P(-\Delta x, t)$, eliminates all terms odd in $q$. As a result, $g^{(1s)}(q,t)$ and its logarithm are necessarily power series in $q^{2}$. The coefficients of the $q^{2n}$ are generated by the even moments $\langle(\Delta x)^{2n}\rangle$ of $P(\Delta x, t)$. As shown previously\cite{phillies2005a}, the lead terms of an expansion for $g^{(1s)}(q,t)$ are
\begin{equation}
g^{(1s)}(q,t) = N \exp\left( - \frac{1}{2} q^2 \langle \Delta x(t)^{2} \rangle + \frac{1}{24} q^{4}( \langle \Delta x(t)^{4} \rangle - 3\langle \Delta x(t)^{2} \rangle^{2}) - {\cal O}(q^{6})\right).
\label{eq:g1sanddisplacements}
\end{equation}
All even moments $\langle(\Delta x)^{2n}\rangle$ are required for the complete expansion.
It was early shown that quasielastic scattering from a binary solvent: macromolecule system determines the mutual diffusion coefficient, not the self diffusion coefficient\cite{phillies1974a,phillies1974b}. Theoretical approaches to computing $g^{(1)}(q,t)$ and the mutual diffusion coefficient of non-dilute colloid solutions have historically followed routes very different from the routes that are based on the displacement distribution function $P(\Delta x, t)$. Only very recently\cite{phillies2012a} was a solution for $g^{(1)}(q,t)$ in terms of displacement distribution functions obtained. In this solution, the expansion of eq \ref{eq:g1qgeneral} was shown to require averages over two different displacement distribution functions, namely $P(\Delta x, t)$ and a new distribution function $P_{2}(\Delta x, t, \mathbf{R}_{12})$. $P_{2}$ is a two-particle conditional displacement distribution function, in which $\Delta x$ is the displacement of particle $1$ during $(0, t)$ given that the vector $\mathbf{R}_{12}$ from particle $1$ to some particle $2$ at time $0$ has a given value.
\section{Special Case: Probes in Simple Newtonian Liquids \label{sectionsimple} }
The earliest quasielastic scattering experiments were performed on dilute suspensions of monodisperse scattering particles in simple Newtonian solvents. Cummins, et al.'s results on polystyrene spheres in water\cite{cummins1964a} are the archetype. The resulting spectra were interpreted by invoking a mechanical model for the motions of diffusing particles. The mechanical model was provided by the Langevin equation, which in one dimension is
\begin{equation}
m\frac{d^{2} x(t) }{dt^{2}} = - f \frac{dx(t)}{dt} +{\cal F}_{x}(t).
\label{eq:Langevin}
\end{equation}
Here $x(t)$ is a coordinate of the diffusing particle, $m$ is the particle mass, $f$ is the particle's drag coefficient, and ${\cal F}_{x}(t)$ is the random force, called \emph{random} because in the Langevin model the values of ${\cal F}_{x}(t)$ at different instants in time are uncorrelated. Within the model, ${\cal F}_{x}$ cannot be predicted beyond stating that ${\cal F}_{x}$ has certain statistical properties.
The canonical literature treatment of the Langevin model as applied to quasielastic light scattering is the volume by Berne and Pecora\cite{berne1976a}, notably their Section 5.9. Berne and Pecora show that the Langevin model is appropriate for polystyrene spheres in water, on the time and distances scales observed by quasielastic light scattering. From the Langevin model and the requirement that the system remains in thermal equilibrium, a series of conclusions about the statistical properties of the particle motion follow. In particular:
\begin{description}
\item[(i)] The mean-square average value of ${\cal F}_{x}(t)$ must be consistent -- the fluctuation-dissipation theorem -- with the drag coefficient $f$ and the thermal energy $k_{B}T$.
\item[(ii)] The distribution $P(\Delta x)$ of particle displacements $\Delta x$ during a time interval $\Delta t$ is the same for all time intervals $(t, t+\Delta t)$.
\item[(iii)] Velocity correlations are evanescent. For time steps appreciably longer than $m/f$, which for Brownian particles is actually a quite short time, particle displacements in a series of time steps are very nearly independent from each other.
\end{description}
Conclusion (ii) corresponds to the statement that $x(t)$ is the sum of a series of identically-distributed random variables. Conclusion (iii) corresponds to the independent statement that the time evolution of $x(t)$ is described by a Markoff process. In this very special case, the distribution of particle displacements is described by Doob's Theorem\cite{doob1942a}. Doob's theorem is closely related to the central limit theorem. Doob's theorem treats random processes such as $\Delta x(t)$, while the central limit theorem treats random variables. For the Langevin model, Doob's Theorem shows that the distribution of particle displacements is a Gaussian
\begin{equation}
P(\Delta x) = \left(2\pi \langle \Delta x^{2} \rangle \right)^{-1/2} \exp(- (\Delta x(t))^{2}/ 2 \langle \Delta x^{2} \rangle ).
\label{eq:gaussianform}
\end{equation}
For this special case, the incoherent scattering function reduces to
\begin{equation}
g^{(1s)}(q,t) = \exp(- q^{2} \langle (\Delta x(t))^{2} \rangle/2).
\label{eq:g1swrong}
\end{equation}
Equation \ref{eq:g1swrong} is quite accurate for the systems considered in Ref.\ \cite{berne1976a}, namely highly dilute solutions of monodisperse objects in simple Newtonian solvents.
However, Berne and Pecora\cite{berne1976a}, especially their Appendix 5.A and Section 5.9 leading to their eq 5.9.6, also prove the other important consequence of the Langevin model and Doob's theorem, namely that the Langevin model determines the exact value of $\langle (\Delta x(t))^{2} \rangle$. On the time and distance scales accessible to quasielastic scattering, the Langevin model requires
\begin{equation}
\langle (\Delta x(t))^{2} \rangle = 2 D t.
\label{eq:meansquare}
\end{equation}
Here $k_{B}$ is Boltzmann's constant and $T$ is the absolute temperature. $D = k_{B}T/f$ is the diffusion constant, a quantity \emph{that does not depend on time}. Time independence of $D$ is required by the calculation, because $D$ results from a time integral over ($0 \leq t \leq \infty$).
Equations \ref{eq:g1swrong} and \ref{eq:meansquare} come as a package; they are equally consequences of the Langevin model. Correspondingly, Berne and Pecora show for diffusing monodisperse Brownian particles that the Langevin model requires that the field correlation function is a simple exponential
\begin{equation}
g^{(1s)}(q,t) = \exp(- q^{2} D t).
\label{eq:g1ssimple}
\end{equation}
For unclear reasons -- the literature error noted in the introduction -- Berne and Pecora's entirely correct Chapter 5 is being misread as proving that eq \ref{eq:g1swrong} is always correct, even when the time relaxation of $g^{(1s)}(q,t)$ is not a simple exponential. Berne and Pecora in fact prove exactly the opposite: the contrapositive of their result is that if the relaxation is not a single exponential, then the Langevin model must not be applicable to the system, and therefore invocation of the Langevin model prediction eq \ref{eq:g1swrong} is incorrect.
Berne and Pecora's discussion refers purely and exclusively to the special case of particle motion that is described by the Langevin equation, in which case eqs \ref{eq:gaussianform}-\ref{eq:g1ssimple} are all correct. This special case corresponds to most of the experiments that were of interest at the time that ref \onlinecite{berne1976a} was written, namely applications of quasielastic scattering for particle sizing\cite{dahneke1983}. For diffusing dilute colloids, the $t$ and $q$ dependences of $g^{(1s)}(q,t)$ are precisely as predicted by the Langevin model, in particular $g^{(1s)}(q,t) \sim \exp(- \Gamma t)$ with $\Gamma \propto q^{2}$. The quasielastic scattering spectrum only leads to the time-dependent mean-square particle displacement if the spectrum is a pure exponential in $q^{2} t$. In this special case, the mean-square displacement increases linearly with $t$, so that the short-time and long-time behaviors of $g^{(1s)}(q,t)$ are one and the same.
If the decay of the field correlation function is not a simple exponential in $t q^{2}$, then eq \ref{eq:gaussianform} and the Langevin model cannot possibly describe how the scattering particles move. If eq \ref{eq:gaussianform} described the particle motions, then the spectrum would be a simple exponential. In systems in which the spectrum is more complex than a simple exponential, eq \ref{eq:g1swrong} is invalid. That is, if $g^{(1s)}(q,t)$ is not a simple exponential in $t$, $\log(g^{(1s)}(q,t))$ does not reveal the mean-square displacement of the particles.
Why does eq \ref{eq:g1sanddisplacements} ever reduce to eq \ref{eq:g1swrong}? If and only if $P(\Delta x, t)$ is a Gaussian in $\Delta x$, $P(\Delta x, t)$ is entirely characterized by its second moment $\langle \Delta x^{2} \rangle$. For a Gaussian displacement distribution function, the higher moments of $P(\Delta x, t)$ have values such that the coefficients of the higher-order terms ($q^{2n}$ for $n \geq 2$) of eq \ref{eq:g1sanddisplacements} all vanish. For a Gaussian $P(\Delta x, t)$, the only non-zero part of eq \ref{eq:g1sanddisplacements} is eq \ref{eq:g1swrong}. This disappearance of the higher-order terms is unique to a Gaussian $P(\Delta x, t)$. For any other $P(\Delta x, t)$, the higher-order terms of eq \ref{eq:g1sanddisplacements} do not vanish.
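The vanishing of the $q^{4}$ coefficient for a Gaussian, and its survival otherwise, can be checked directly from moments. The sketch below (the weights and variances are arbitrary illustrative values) evaluates $\langle \Delta x^{4}\rangle - 3\langle \Delta x^{2}\rangle^{2}$ for a zero-mean Gaussian mixture:

```python
import numpy as np

def q4_coefficient(weights, variances):
    """<dx^4> - 3<dx^2>^2 for a zero-mean mixture of Gaussians.
    Each Gaussian component of variance s contributes <dx^2> = s
    and <dx^4> = 3 s^2 to the mixture moments."""
    w = np.asarray(weights, dtype=float)
    s = np.asarray(variances, dtype=float)
    w = w / w.sum()
    m2 = np.sum(w * s)
    m4 = 3.0 * np.sum(w * s**2)
    return m4 - 3.0 * m2**2
```

For a single Gaussian the coefficient vanishes identically; for any mixture of unequal variances it is strictly positive, so the $q^{4}$ term of eq \ref{eq:g1sanddisplacements} contributes.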
\section{Experimental Findings \label{sectionexperiment}}
What do experiments say about $P(\Delta x, t)$ and $g^{(1s)}(q,t)$? There are systems in which the Langevin model is adequate, namely dilute monodisperse particles suspended in simple Newtonian fluids. The Langevin model provides the solid foundation for particle sizing via quasielastic scattering\cite{dahneke1983}. For probe diffusion in complex fluids, experiment provides a far more complex picture. Consider a few representative experiments that have determined $P(\Delta x, t)$ or $g^{(1s)}(q,t)$ for probe particles in complex fluids.
On relatively long time scales, $P(\Delta x,t)$ is accessible via particle tracking methods. As examples, note experimental studies by Apgar, et al.\cite{apgar2000a}, Tseng and Wirtz\cite{tseng2001a}, and Xu, et al.\cite{xu2002a} on probe diffusion in glycerol, actin solutions and gels, and gliadin solutions. These authors used video recording and computer image analysis to track simultaneously the motions of large numbers of particles. They report $P(\Delta x, t)$ and $\langle \Delta x(t)^{2} \rangle$ in their systems. Probes in glycerol follow eqs \ref{eq:gaussianform} and \ref{eq:meansquare}. For probes in various complex fluids, $P(\Delta x, t)$ has decidedly non-Gaussian forms. In these systems, the mean-square displacement does not increase linearly in time. By direct measurement, eqs \ref{eq:gaussianform} and \ref{eq:meansquare} are not uniformly correct for probes in polymer solutions.
Quasielastic scattering spectra of probes in polymer solutions are often markedly non-exponential. For polystyrene latex sphere probes in hydroxypropylcellulose: water, this author and Lacroix\cite{lacroix1997a} found stretched exponentials in time
\begin{equation}
g^{(1s)}(q,t) = a \exp(- \theta t^{\beta}) \equiv a \exp(- (t/\tau)^{\beta}).
\label{eq:gisqeext}
\end{equation}
Here $\beta$ is a scaling exponent while $\theta$ and $\tau$ are prefactors. A series of papers by Streletzky and collaborators on the same chemical system (most recently, ref.\ \onlinecite{phillies2003a}) established by viewing a much wider range of delay times that $g^{(1s)}(q,t)$ is in fact a sum of two stretched exponentials in time.
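When a spectrum does follow eq \ref{eq:gisqeext}, its parameters can be extracted by the standard linearization $\ln(-\ln g^{(1s)}) = \beta \ln t - \beta \ln \tau$ (taking $a=1$). The sketch below, applied to synthetic data with arbitrarily chosen $\beta$ and $\tau$, is illustrative only:

```python
import numpy as np

def fit_stretched_exponential(t, g):
    """Recover beta and tau from g = exp(-(t/tau)**beta) by a
    least-squares straight-line fit of ln(-ln g) against ln t."""
    X = np.log(t)
    Y = np.log(-np.log(g))
    Xc = X - X.mean()
    beta = np.sum(Xc * (Y - Y.mean())) / np.sum(Xc**2)   # slope = beta
    tau = np.exp(X.mean() - Y.mean() / beta)             # from the intercept
    return beta, tau

# synthetic spectrum with beta = 0.6, tau = 3.0 (illustrative values)
t = np.logspace(-2, 2, 200)
g = np.exp(-(t / 3.0) ** 0.6)
```

In practice the linearization is useful mainly as an initial estimate; noisy data near $g \to 1$ or $g \to 0$ distort the logarithms, and a direct nonlinear fit is then preferable.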
Finally, I note a very simple model system in which eqs \ref{eq:gaussianform} and \ref{eq:g1swrong} fail. The system is a dilute aqueous dispersion of polystyrene spheres, in which the spheres are of two different sizes. There are no sphere-sphere interactions. Each sphere individually performs Brownian motion as described by the Langevin equation. Therefore, for each sphere in the mixture, $P(\Delta x, t)$ is a Gaussian in $\Delta x$ and $\langle (\Delta x)^{2} \rangle$ increases linearly in time. For the mixture as a whole, the mean-square displacement averaged over all the particles must therefore also increase linearly with time, at a rate determined by a weighted average of the diffusion coefficients of the two sphere species.
However, the mixture's field correlation function is a sum of two exponentials
\begin{equation}
g^{(1s)}(q,t) = A_{1} \exp(-D_{1} q^{2} t) + A_{2} \exp(-D_{2} q^{2} t)
\label{eq:doubleexponential}
\end{equation}
where $A_{i}$ is the scattering amplitude of species $i$ and $D_{i}$ is the diffusion coefficient of species $i$. Correspondingly, the displacement distribution function for all the particles in solution is a weighted sum of two Gaussians, one Gaussian for each sphere size.
Suppose one used eq \ref{eq:g1swrong} to determine $\langle (\Delta x)^{2} \rangle$ of the sphere mixture. According to eq \ref{eq:g1swrong}
\begin{equation}
\langle (\Delta x)^{2} \rangle = - \ln( g^{(1s)}(q,t))/q^{2}.
\label{eq:meansquare2}
\end{equation}
At short times, $g^{(1s)}(q,t)$ contains contributions from both species, and the $\langle (\Delta x)^{2} \rangle$ computed from eq \ref{eq:meansquare2} is the weighted average of the mean-square displacements of the two species. At long times, in eq \ref{eq:doubleexponential} the exponential corresponding to the more rapidly-moving species has decayed to zero, so the nominal mean-square displacement from eq \ref{eq:meansquare2} is determined by scattering from the larger, more slowly-moving spheres. For this simple bidisperse sphere suspension, eq \ref{eq:meansquare2} asserts that the slope of $\langle (\Delta x)^{2}\rangle$ depends on time, $\langle (\Delta x)^{2}\rangle$ increasing more rapidly at short times and more slowly at long times. The assertion is incorrect. In this system $\langle (\Delta x)^{2}\rangle$ increases linearly with $t$. At large times the smaller spheres continue to move, but they stop contributing to $g^{(1s)}(q,t)$ or to the nominal $\langle (\Delta x)^{2}\rangle$ from eq \ref{eq:meansquare2}.
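The apparent slowdown is easy to verify numerically. The sketch below (amplitudes, diffusion coefficients, and scattering vector are illustrative assumptions, not data from any experiment) evaluates the nominal mean-square displacement of eq \ref{eq:meansquare2} for the bidisperse mixture of eq \ref{eq:doubleexponential} and compares its slope at short and long times:

```python
import numpy as np

# Illustrative (assumed) parameters for a bidisperse sphere mixture:
A1, A2 = 0.5, 0.5    # scattering amplitudes of the two species
D1, D2 = 5.0, 0.5    # diffusion coefficients (arbitrary units)
q = 1.0              # scattering vector

def g1s(t):
    # Field correlation function of the mixture (eq doubleexponential)
    return A1 * np.exp(-D1 * q**2 * t) + A2 * np.exp(-D2 * q**2 * t)

def nominal_msd(t):
    # Nominal mean-square displacement from eq meansquare2
    return -np.log(g1s(t)) / q**2

def slope(t, dt=1e-6):
    # Numerical slope of the nominal mean-square displacement
    return (nominal_msd(t + dt) - nominal_msd(t)) / dt

# Short times: the slope is the amplitude-weighted average of D1 and D2.
# Long times: the slope has fallen to D2, the slower species.
short, long_ = slope(1e-3), slope(10.0)
```

The computed short-time slope exceeds the long-time slope even though each species individually diffuses with a strictly linear mean-square displacement, which is exactly the artifact described above.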
Experiment thus demonstrates that neither eq \ref{eq:gaussianform} nor eq \ref{eq:g1ssimple} is generally valid for probe diffusion in complex fluids. Even in a Newtonian fluid, a model system in which $g^{(1s)}(q,t)$ does not decay exponentially in time does not follow eqs \ref{eq:g1swrong} and \ref{eq:meansquare2}, a result that is exactly as required by Doob's theorem. Interpretations of quasielastic scattering spectra for probes in complex fluids, based on the Gaussian approximation of eq \ref{eq:g1swrong}, are therefore incorrect. Interpretations of spectra of probes in complex fluids in terms of particle displacements are properly based on eq \ref{eq:g1sanddisplacements}, which correctly reflects the non-Gaussian displacement distribution function of real probes.
\section{Interpretations of Quasielastic Scattering Spectra \label{sectioninterpretations}}
First, every single physical $g^{(1s)}$ viewed only as a function of $t$ corresponds to a system in which the mean-square particle displacement increases linearly with time. However, the correspondence is not unique. The same $g^{(1s)}(q,t)$ may also correspond to systems in which particle thermal motions are more complex. In consequence, from a $g^{(1s)}(q,t)$ measured over a full range of times and a single $q$ one cannot infer how the particle displacement depends on time.
This result has a purely mathematical basis, namely that the field correlation function can always be represented via a Laplace transform as
\begin{equation}
g^{(1s)}(q,t) = \int_{0}^{\infty} d\Gamma \, A(\Gamma) \exp(- \Gamma t).
\label{eq:laplace}
\end{equation}
Here $\Gamma$ is a relaxation rate and $A(\Gamma)$ is the contribution of relaxations having decay rate $\Gamma$ to $g^{(1s)}(q,t)$. So long as the system does not have relaxational modes that have negative amplitudes, $A(\Gamma)$ is everywhere positive or zero. In this case, there is always a system having the same $g^{(1s)}(q,t)$ as the system of interest, and in which $ \langle (\Delta x(t) )^{2} \rangle$ increases linearly in time. The system can be physically constructed as a solution of polystyrene spheres of all different sizes. The composition of the mixture is determined by $A(\Gamma)$: One adds to the mixture just enough polystyrene spheres having diffusion coefficient $\Gamma/q^{2}$ so that their contribution to the scattering spectrum is $A(\Gamma)$. For each sphere, $\langle (\Delta x(t) )^{2} \rangle$ increases linearly in time, so therefore $\langle (\Delta x(t) )^{2} \rangle$ of the mixture also increases linearly in time. Thus, an arbitrary physically-acceptable (i.e., $A(\Gamma) > 0 \ \forall \ \Gamma$) form for $A(\Gamma)$ corresponds as one non-unique possibility to a system in which $\langle (\Delta x(t) )^{2} \rangle$ increases linearly in time.
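The construction can be mimicked numerically. In the sketch below (the gamma-distributed rate density and all parameter values are assumptions chosen only for illustration), a large collection of independently diffusing "species" with positive weights produces a strongly nonexponential $g^{(1s)}(q,t)$, while the mean-square displacement of the mixture remains exactly linear in time:

```python
import numpy as np

rng = np.random.default_rng(0)
q = 1.0

# Assumed positive rate density A(Gamma): sample decay rates from a gamma
# distribution (any nonnegative density would serve equally well).
gammas = rng.gamma(shape=2.0, scale=1.0, size=20000)
D = gammas / q**2          # each species diffuses normally, D = Gamma/q^2

t = np.array([0.1, 1.0, 4.0])

# Discretized version of eq laplace: equal-weight sum of exponentials
g1s = np.exp(-np.outer(t, gammas)).mean(axis=1)

# The mixture spectrum is nonexponential: -ln(g1s)/t is far from constant...
decay_rate = -np.log(g1s) / t

# ...yet the true mean-square displacement of the mixture is strictly linear,
# being the average of the linear mean-square displacements of the species.
msd = 2.0 * D.mean() * t
```

For this choice of $A(\Gamma)$ the spectrum decays approximately as $(1+t)^{-2}$, so the apparent decay rate $-\ln g^{(1s)}/t$ falls by more than a factor of two over the plotted times, while $\langle(\Delta x(t))^{2}\rangle/t$ is constant by construction.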
It has repeatedly been found that $g^{(1s)}(q,t)$ decays in time as the stretched exponential of eq \ref{eq:gisqeext}. If one interpreted this time dependence by applying eq \ref{eq:g1swrong}, one would conclude
\begin{equation}
\langle (\Delta x(t) )^{2} \rangle = \theta t^{\beta}.
\label{eq:subdiffusive}
\end{equation}
In the common case $\beta < 1$, from eq \ref{eq:subdiffusive} one would infer that the mean-square particle displacement increases less rapidly at large times than would be the case for simple diffusion, a behavior that has been termed 'subdiffusive' motion. The inference is incorrect. A more reasonable interpretation for $\beta <1$ is that diffusion in the complex fluid is created by modes having a range of relaxation times, some longer than others, the contribution of the slower modes to the spectrum becoming more important at longer times.
It is not suggested that there cannot exist subdiffusive motion. Such motion has unambiguously been observed experimentally. Amblard, et al.,\cite{amblard1996a} studied probe motion in f-actin solutions, using magnetic tweezers and video microscopy. Small-bead motion was diffusive; larger-bead diffusion was subdiffusive with $\beta \approx 3/4$. The non-classical drag forces for diffusive motion and for driven motion are the same, in the sense that the displacement under an applied force and the mean-square displacement for diffusion have the same linear or sublinear time dependences, depending on the probe size.
There are sometimes suggestions that probe behavior should approach Stokes' Law and the Stokes-Einstein equation as the size of the probes is increased. Amblard, et al.'s experiments show precisely the opposite behavior. Deviations from simple diffusion are larger for the larger particles. Whenever particle motion is subdiffusive, the light scattering spectrum will not be a simple exponential. The scattering spectrum will follow eq \ref{eq:g1sanddisplacements}, showing that the relationship between $g^{(1s)}(q,t)$ and $\langle (\Delta x(t) )^{2} \rangle$ is a neither-way street. Just as one cannot in general calculate $\langle (\Delta x(t) )^{2} \rangle$ from $g^{(1s)}(q,t)$, so also one cannot in general calculate $g^{(1s)}(q,t)$ from $\langle (\Delta x(t) )^{2} \rangle$, because all higher moments of $P(\Delta x)$ are needed in order to calculate $g^{(1s)}(q,t)$.
\section{Discussion\label{sectiondiscussion}}
The primary objective of this paper was to correct a widespread literature error, namely the false belief that $g^{(1s)}(q,t)$ can in general be used to determine the mean-square displacement of probes in complex fluids. The belief appears to have arisen from a misreading of Berne and Pecora's excellent monograph, Chapter 5, in which Berne and Pecora discuss the motions of monodisperse probe particles in simple Newtonian fluids by using the Langevin model. The spectra of monodisperse Langevin-model particles are without exception single exponentials. The calculation is correct, but does not refer to non-Newtonian fluids or polydisperse probe systems. Having corrected this error, I turn to several subsidiary points of misinterpretation.
The functional form of $P(\Delta x, t)$ can be inferred, at least approximately, from the angular dependence of $g^{(1s)}(q,t)$, namely as seen in eq \ref{eq:g1sPDelta} the correlation function $g^{(1s)}(q,t)$ at fixed $t$ is the spatial Fourier transform of $P(\Delta x, t)$. If $g^{(1s)}(q,t)$ is determined sufficiently accurately over an adequate range of $q,$ an inverse spatial Fourier transform can take the experimenter back from $g^{(1s)}(q,t)$ to $P(\Delta x, t)$. To the author's knowledge, this inversion has only been done for the ubiquitous polystyrene spheres in distilled water, for which $g^{(1s)}$ is a simple exponential $\exp(- \Gamma t)$, while $\Gamma$ is found to be accurately linear in $q^{2}$, showing that $P(\Delta x, t)$ in this system has the Gaussian form of eq \ref{eq:gaussianform}. Measurements of the $q$-dependence of $g^{(1s)}$ for probes in complex fluids are less common, though note, e.~g., Streletzky and Phillies\cite{streletzky1998q}. These authors found spectra having multiple relaxational modes, some of which had relaxation rates that did not scale linearly in $q^{2}$, proving that the modes did not correspond to Gaussian displacement distribution functions.
Particle tracking is sometimes used to generate the simplified $\langle \Delta x(t)^{2} \rangle$, rather than the full $P(\Delta x, t)$. The simplification is potentially hazardous, because if one has not determined $P(\Delta x, t)$ one does not know if the particle motion process corresponds to simple diffusion. Any physically reasonable $P(\Delta x, t)$ has some second moment $\langle \Delta x(t)^{2} \rangle$, but if the form of $P(\Delta x, t)$ is unknown, one cannot tell if it is meaningful to characterize $P(\Delta x, t)$ by its second moment or by the corresponding nominal diffusion coefficient
\begin{equation}
D(t) = \langle \Delta x(t)^{2} \rangle/ 2 t
\label{eq:timedependentD}
\end{equation}
The notion that a diffusive process can be characterized by $\langle \Delta x(t)^{2} \rangle$ corresponds to particle diffusion in simple Newtonian liquids, in which $D(t)$ as defined here is a constant independent of time. In a complex fluid, characterization of probe motion via measurement of the second moment $\langle \Delta x(t)^{2} \rangle$ must be expected to be inadequate.
The assertion that the central limit theorem guarantees that $P(\Delta x, t)$ is a Gaussian in $\Delta x$ is sometimes described as the "Gaussian Approximation". Experiments such as those summarized above prove that this assertion is incorrect. The central limit theorem (for random variables) and Doob's theorem (for random processes) are well-known. Where do their invocations go wrong? The central limit theorem and Doob's theorem are statements about the sum of a large number of identically distributed, independent random processes. As applied to the diffusion problem, the displacement $\Delta x$ during experimentally accessible times can be expressed as the sum of a large number of far smaller steps $\delta x$, each taken during a far smaller time interval $\delta t$. If a random variable $\Delta x$ is the sum of a large number of identically distributed \emph{independent} variables, i.~e., if $\delta x(t)$ is a Markoff process, it must in general be the case that $P(\Delta x, t)$ is a Gaussian in $\Delta x$. This rationale fails because it refers to a sum of \emph{independent} random processes. The process that generates $\delta x$ for probes in a viscoelastic fluid is not a Markoff process, because the system has memory. The central limit theorem and Doob's theorem are therefore not applicable. For probes in complex fluids, the processes generating the steps $\delta x$ are highly correlated, because the "random" forces that determine the $\delta x(t)$ are controlled by the shear modulus $G(t)$, which in a complex fluid has a long correlation time. Correspondingly, the time correlation function of the random force $\langle {\cal F}_{x}(0) {\cal F}_{x}(t) \rangle$ is long-lived, not $\sim \delta(t)$, and in the Langevin equation the friction force $f \dot{x}(t)$ is replaced with a memory function $\sim \int ds \langle {\cal F}_{x}(0) {\cal F}_{x}(s) \rangle \dot{x}(t-s)$.
An alternative to the central limit theorem is the \emph{small-$q$ approximation}. The nominal idea in the small-$q$ approximation is that the rhs of eq \ref{eq:g1sanddisplacements} is a power series in $q$. If one went to sufficiently small $q$, one might hope that the $q^{2}$ term in the exponential would become dominant, so that eq \ref{eq:g1swrong} would approach being valid. This hope is not met. For the simplest case of a mixture of diffusing particles, $g^{(1s)}(q,t)$ is in fact a power series in $q^{2} t$. If one goes to smaller $q$, in order to keep fitting spectra equally accurately one needs to observe the same fractional degree of decay of $g^{(1s)}(q,t)$; one must therefore go out to longer times. At those longer times, the ${\cal O}(q^{4})$ terms are as significant as they were at larger $q$, smaller $t$, and the same $q^{2} t$. Said differently, the coefficients of the correct Taylor series (in $q$) expansion of $g^{(1s)}(q,t)$ are time-dependent. In order for the lead term of the expansion to be dominant, the expansion must be limited not only to small $q$ but also to small $t$. If $t$ is large, no matter how small $q$ has been made, the higher-order in $q$ terms are as important as the lower-order terms. Only at small $q$ and small $t$ is a single-term small-$q$ expansion valid. The valid small-$q$ expansion is $1-q^{2} \langle \Delta x(t)^{2} \rangle$, which only describes the leading slope of $g^{(1s)}(q,t)$ at small times.
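The failure of the small-$q$ limit can be made concrete with the bidisperse mixture (parameter values are assumed for illustration; the Gaussian form is written in the paper's convention, in which eq \ref{eq:meansquare2} gives $\langle(\Delta x)^{2}\rangle = -\ln g^{(1s)}/q^{2}$). Because the mixture spectrum depends on $q$ and $t$ only through $q^{2}t$, the error of the Gaussian approximation at a fixed fractional decay is literally independent of $q$:

```python
import numpy as np

# Assumed bidisperse mixture (illustrative parameters only)
A1, A2, D1, D2 = 0.5, 0.5, 5.0, 0.5

def g1s(q, t):
    return A1 * np.exp(-D1 * q**2 * t) + A2 * np.exp(-D2 * q**2 * t)

def msd(t):
    # True mean-square displacement in the paper's convention, in which a
    # single species with diffusion coefficient D has <dx^2> = D t
    return (A1 * D1 + A2 * D2) * t

x = 1.0  # fixed value of q^2 t, i.e., fixed fractional decay of the spectrum
errors = []
for q in (1.0, 0.1, 0.01):
    t = x / q**2                     # smaller q needs longer t for the same decay
    gauss = np.exp(-q**2 * msd(t))   # Gaussian-approximation spectrum
    errors.append(gauss - g1s(q, t))

# The approximation error is identical at every q: shrinking q buys nothing.
```

The three errors agree to machine precision and are far from zero, so reducing $q$ at fixed fractional decay does not improve the Gaussian approximation.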
Consider spectra described by eq \ref{eq:gisqeext}. The exponential in eq \ref{eq:g1sanddisplacements} scales as $q^{2}$ or is a power series in $q^{2}$, so therefore $\theta$ should also be a power series in $q^{2}$, perhaps simply by being linear in $q^{2}$. Indeed, for probes in some but not other water: hydroxypropylcellulose solutions, Streletzky\cite{streletzky1998q} confirmed experimentally $\theta \sim q^{2}$ over a wide range of $q$. If $\theta$ were replaced with $\tau^{-\beta}$, one would have
\begin{equation}
\tau \sim q^{-2/\beta}.
\label{eq:tauq}
\end{equation}
$\beta$ is often in the range 0.5-1.0, so $\tau$ often depends on $q$ as $q^{-3\pm 1}$. If one interpreted $\tau$ to be a relaxation time, the $q$-dependence from eq \ref{eq:tauq} would be strange indeed: The relaxation would occur more rapidly over large distances (small $q$) than over short distances (large $q$). This strange $q$-dependence is simply an artifact of the choice of parameterization of $g^{(1s)}(q,t)$, and the identification of $\tau$ as a relaxation time. In terms of eq \ref{eq:gisqeext}, $(\theta, \beta)$ provides a natural parameterization while $(\tau,\beta)$ is less transparent. If mean relaxation times are inferred from the spectral time moments
\begin{equation}
\langle T_{n} \rangle = \int_{0}^{\infty} t^{n} g^{(1s)}(q,t) dt
\label{eq:Tmoments}
\end{equation}
of $g^{(1s)}(q,t)$, the choice of parameterizations in eq \ref{eq:gisqeext} has no consequences. The two parameterizations of $g^{(1s)}(q,t)$ must lead to the same $\langle T_{n} \rangle$.
Spectra of diffusing probes showing two relaxations on very different time scales are sometimes interpreted in terms of caging and hopping relaxations. The notion is that the medium supplies regions of low potential energy within which probes are free to move ("caging"). The regions are separated by barriers of high potential energy, across which probes only pass on rare occasion ("hopping"). The short time-scale relaxation is said to correspond to caging, while the long time-scale relaxation is said to correspond to hopping.
Computer simulation studies by Luo and Phillies and Luo, et al., test the caging-hopping interpretation\cite{luo1995a,luo1996a}. These simulations represented Brownian particles moving through a square lattice or a random glass of Lennard-Jones force centers. The force centers were immobile. Probe motions were generated via the Metropolis algorithm. These studies differed from some earlier work in that they determined not only time dependent mean-square displacements and effective diffusion coefficients but also obtained $P(\Delta r,t)$ and $g^{(1s)}(q,t)$. By varying the nominal temperature, trapping, hopping, and hindered diffusion behaviors were obtained. At low temperatures, probe particles explored the volume of their traps; after a certain relaxation time $\langle r^{2}(t) \rangle$ ceased to increase. At high temperatures, $P(\Delta r, t)$ was nearly Gaussian, with $\langle r^{2}(t) \rangle$ increasing linearly in time even at short times.
Luo, et al., evaluated $g^{(1s)}(q,t)$ for $q^{-1}$ extending from a small fraction of the size of a single potential energy minimum out to distances substantially larger than a typical distance between force centers. At low and high temperatures, $g^{(1s)}(q,t)$ showed nearly exponential relaxations, though at small $T$ and small $q$ the relaxation fell to a non-zero baseline. The baseline was non-zero because the particles were permanently trapped in small isolated volumes of the system. At intermediate temperatures, relaxations were single-exponential at large $q$ but double-exponential at small $q$. At the same intermediate temperatures, $P(\Delta r, t)$ was radically non-Gaussian, with local maxima and minima created by local potential energy minima, potential energy saddle points, and times required to traverse local energy maxima.
Other, physically different, systems also give bimodal spectra. In contrast to Luo, et al.'s probes moving through a fixed matrix, in which relaxations are only bidisperse for some values of $q$, relaxations of dilute bidisperse suspensions are double-exponential at all $q$. An alternative model system in which monodisperse particles show several very different classes of relaxation behavior is shown by Glotzer, et al.'s\cite{glotzer2000a} computer simulations of three-dimensional glasses, in which one finds distinct long-lived populations of slow and fast-moving particles, with the immobile particles in clumps and the rapidly moving particles lying in thin ribbons.
Thus, in order to distinguish between systems containing species with two different dynamic behaviors, and systems in which there is local trapping with escapes from the traps at longer times, it is experimentally necessary to study $g^{(1s)}(q,t)$ over a wide range of $q$. Observations at fixed $q$ of double-exponential relaxations do not reveal whether one is seeing trapping with hopping, or whether the system is in some sense dynamically bidisperse. Furthermore, in the cases in which $g^{(1s)}(q,t)$ was observed by Luo, et al., to be very nearly the sum of two exponentials, $P(\Delta r,t)$ on interesting distance scales had an elaborate dependence on $r$ with multiple maxima and deep minima. The interpretation that a biexponential $g^{(1s)}(q,t)$ must correspond to a $P(\Delta r,t)$ that is a sum of two Gaussians, each with a mean-square width increasing linearly in time, is categorically disproved by Luo, et al.'s measured forms for $P(\Delta r,t)$.
Finally, the observation that quasielastic scattering does not determine the mean-square probe displacement certainly does not mean that probe diffusion is ineffective as an experimental technique. Probe diffusion measurements can certainly be used to obtain novel information about complex fluids. The richness of the revealed information corresponds to the depth with which models for probe motion are constructed. As a positive conclusion, two successful applications of probe diffusion are noted:
(i) A long-time question in the study of surfactant solutions is the determination of the aggregation number $n$ of surfactant molecules in micelles. One of many approaches to this question has been to use quasielastic scattering to determine an effective hydrodynamic radius of the micelles. Perhaps after some hydrodynamic modeling to account for micelle shape, spherical micelles being the simplest case, the measured diffusion coefficient can be transformed to an apparent hydrodynamic radius $r_{H}$, to a hydrodynamic volume $V_{h}$, and (taking into account the surfactant density and molecular weight) finally to a nominal aggregation number. This procedure was criticized by Kratohvil\cite{kratohvil1980a}, who noted that the hydrodynamic volume of the micelle might well include solvent molecules rather than being composed of pure surfactant. Probe diffusion experiments prove that Kratohvil was correct. The diffusion of probe particles through micellar solutions is retarded by hydrodynamic and direct interactions between the micelles and the probe particles. The degree of retardation is determined by the volume fraction of micelles in the solution. By combining quasielastic scattering measurements on surfactant solutions and on surfactant-probe mixtures, quasielastic scattering has been used to determine the size, volume fraction, and thus number density of micelles in solution, leading to determinations of the micellar aggregation number and, independently, the (substantial) degree of hydration of micelles, as seen in studies by Phillies and collaborators\cite{phillies1993a}.
(ii) Diffusion of mesoscopic probe particles in polymer solutions is not Stokes-Einsteinian. $D$ is not determined by the macroscopic viscosity $\eta$. Therefore, one cannot use the Stokes-Einstein equation for sphere diffusion
\begin{equation}
D = \frac{k_{B}T}{6 \pi \eta R}
\label{eq:SEeq}
\end{equation}
(where $k_{B}$ is Boltzmann's constant, $T$ is the absolute temperature, $\eta$ is the solution viscosity, and $R$ is the sphere radius) to determine the size of probe particles in polymer solutions. However, by using probes of known size, Ullmann and Phillies\cite{ullmann1983a} were able to quantitate the degree of failure of the Stokes-Einstein equation for their polymer solutions, allowing them to measure the size of unknown probe particles in the same solutions. This approach permitted a quantitative study of the extent of polymer adsorption onto particles chosen for their ability to bind polymers in solution.
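As a numerical aside, eq \ref{eq:SEeq} is straightforward to evaluate. The sketch below computes the Stokes-Einstein diffusion coefficient for an assumed 50 nm radius sphere in water at 25 C (all values illustrative):

```python
import math

kB = 1.380649e-23   # Boltzmann's constant, J/K
T = 298.15          # absolute temperature, K (25 C)
eta = 0.89e-3       # viscosity of water at 25 C, Pa s (approximate)
R = 50e-9           # assumed sphere radius, m

# Stokes-Einstein diffusion coefficient (eq SEeq)
D = kB * T / (6 * math.pi * eta * R)
# D is of order 5e-12 m^2/s for these inputs
```

In a polymer solution the measured probe $D$ deviates from this estimate computed with the macroscopic $\eta$; it is that deviation, calibrated with probes of known size, that Ullmann and Phillies exploited.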
\section{Introduction}
There has been a rapid expansion of the derivatives market in the last two decades, specifically, the exchange-traded markets in India [as summarised by \citet{vashishtha2010development}], which has led to an increased need for research in the market risk management of derivative instruments. The development of appropriate risk management methods has been of keen interest to minimize the cost of regulatory capital requirements primarily for investment banks and algorithmic trading firms. Moreover, with enhanced internet facilities and online banking platforms, trading in the options market has become easily accessible to common people and there is an urgency to strengthen the understanding of risk management, consequently creating a necessity to develop efficient, cost-effective, and easily comprehensible risk mitigation techniques feasible with the real-time trading constraints. \\
Hedging is a widely practiced risk management tool, specifically dynamic delta hedging for European options. The delta hedging requires a daily rebalancing of the hedge portfolio by taking positions on the underlying or the futures. In an ideal setup, hedging is instantaneous, however, due to impractical transaction costs, discrete rebalancing is adopted which may give rise to hedge error. Further, it is a known deficiency of dynamic hedging that large movements in the underlying and highly volatile conditions may cause significant losses. We have been experiencing such extreme market events recently, for example, the 2007 world crisis to recent covid stressed conditions. \citet{bakshi2003delta} demonstrated that the delta hedge strategy underperforms at times of higher volatility. There are many works carried out to address the drawbacks of dynamic hedging - largely to create more robust dynamic hedging or to develop alternative hedging strategies like static hedging.\\
\citet{coleman2003dynamic} showed that dynamic hedging with a local volatility function (computed as proposed by \citet{coleman2001reconstructing}) produced smaller hedge errors than dynamic hedging with implied volatility. \citet{kennedy2009dynamic} set up a dynamic hedging strategy using a hedge portfolio consisting of the underlying asset and liquidly traded options to minimize the hedging errors in the presence of transaction cost under jump-diffusion frameworks. \citet{breeden1978prices} pioneered an alternative approach to match payoffs instead of risk sensitivities, which is robust by design as contracts with matching payoffs will behave similarly regardless of underlying risk dynamics even with the presence of random jumps. The fundamental idea is that a path-independent claim can be replicated by a portfolio of standard options with the same maturity as the claim. However, the class of claims that can be hedged by this approach is limited and cannot hedge options with different maturities. The idea was further elaborated by \citet{green1987spanning}, \citet{nachman1988spanning} and \citet{carr2001optimal}. \citet{takahashi2009new} proposed a new scheme for static hedging of European path-independent derivatives under stochastic volatility models by applying the static replication method for one-dimensional price processes developed by \citet{takahashi2007efficient}. \citet{wu2016simple} provides a new hedging strategy of the target option using three options at different maturities and strikes based on approximate matching of target option function expansion along with maturity and strike rather than risk factors. \citet{carr2014static} (referred to as Carr-Wu static hedge in this paper) derived a new static spanning relation between a target option and a continuum of shorter-term options written on the same asset under a single-factor Markovian setting. 
Further, they developed an approximation for the static hedging strategy using only a finite number of short-term options with the strike levels and the associated portfolio weights obtained using a Gauss–Hermite quadrature method. However, in the real trading scenario, the strikes obtained by the Gauss-Hermite method need not necessarily be available or liquid. \citet{bossu2021functional} showed that exact payoff replication of European Options may be achieved with a discrete portfolio of special options using the spectral decomposition techniques. Further, \citet{bossu2022static} extended the study on static replication to European standard dispersion options written on the Euclidean norm of a vector of multi-asset performances with the help of integral equation techniques. \citet{lokeshwar2022explainable} presented an explainable neural network for pricing and static hedging of OTC options by learning the weights and strikes of the hedging portfolio. We provide a data-driven framework for semi-static hedging of Exchange-traded options accommodating transaction costs and empirically study the performance in the Indian market with index options traded on the National Stock Exchange (NSE). Therefore, strikes and portfolio weights of the static hedge portfolio are overhauled by real-time trading constraints such as the availability and liquidity of shorter-term options, and transaction costs. Further, we benchmark our hedging results with dynamic hedging and Carr-Wu static hedge results using the Superior Predictive Ability (SPA) test developed by \citet{hansen2005test}. \\
\section{Static Hedging Framework}
\label{Static Hedging Framework}
This section illustrates the static hedging framework and introduces the notations used in this paper. We are interested in hedging the longer-term European option\footnote{In this paper, the term options is used generally for exchange-traded options. The longer-term option to be hedged is referred to as the target option.} maturing at $T_2$ by a self-replicating portfolio of liquid shorter-term options (referred to as hedging portfolio in this paper) with all constituent options maturing at $T_1$ ($T_2 \ge T_1$) and a cash position $C(t)$ at any time $t$, such that, $T_0=0 \le t \le T_1$. Let the portfolio weights be $W = (w_1, w_2, .... w_n)^\intercal \in \mathbb{R}^{n}$, where, $n$ is the number of liquid options available in the market, which constitutes the hedging portfolio. We propose a Monte-Carlo-based machine learning algorithm to learn $W$ and the amount of cash to be held. \\
We assume a complete probability space $(\Omega, \mathcal{F}, \mathbb{P})$ and filtration $\mathcal{F}_t: \ t \in [T_0=0,T_1]$ with an adapted underlying asset (equity or index) process $S_t$. We assume $\mathbb{P}$ is the risk-neutral (martingale) measure on the $\sigma$-algebra $\mathcal{F}$ and hence no arbitrage. The stochastic dynamics of the underlying at any time $t$ follows Geometric Brownian Motion (GBM) and the solution (from \citet{shreve2004stochastic}) is,
\begin{equation}\label{GBM}
S_{t} = S_{T_0} \ exp \left( \Big( \ r^{'} - \frac{\sigma^2}{2} \ \Big) \ (t - T_0) + \sigma \ W_{t} \right) ,
\end{equation}
where $S_{T_0}$ is the initial value of the underlying, $r^{'}$ is the equity/index return implied from the futures market (in other words, $r^{'} = r - q$, $r$ is the risk-free interest rate and $q$ is the implied dividend yield for the underlying asset). $\sigma$ is the time-zero ATM Volatility corresponding to the tenor $t-T_0$ in the implied volatility surface and $W_{t}$ is a Brownian Motion. \\
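As a minimal sketch (the parameter values below are illustrative assumptions, not NSE market data), simulating $S_{T_1}$ from eq \ref{GBM} is a single vectorized step:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative (assumed) inputs, not calibrated to NSE data:
S0 = 18000.0        # initial index level S_{T_0}
r_prime = 0.05      # futures-implied return r' = r - q
sigma = 0.18        # time-zero ATM volatility for the tenor T1
T1 = 30.0 / 365.0   # shorter-term maturity, in years
N = 100_000         # number of simulated paths

# Exact one-step simulation of eq GBM from T_0 = 0 to T_1
Z = rng.standard_normal(N)
S_T1 = S0 * np.exp((r_prime - 0.5 * sigma**2) * T1 + sigma * np.sqrt(T1) * Z)

# Sanity check: under the risk-neutral measure, E[S_T1] = S0 * exp(r' * T1)
```

Because the GBM solution is exact, no time discretization is needed to reach $T_1$; a single normal draw per path suffices.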
Let $K^{*} \in \mathbb{R}$ be the strike of the option to be hedged maturing at $T_2$ and $V(T_0; S_{T_1})$ be the target option price at future time $T_1$ (valued as of $T_0$) obtained by the Black-Scholes pricing model proposed by \citet{black1973pricing}. The strikes of liquid options in the hedging portfolio (inclusive of calls and puts) maturing at $T_1$ be $K = (\kappa_{i})_{i=1, .., n}^{\intercal} \in \mathbb{R}^{n}$. The option payoff of each shorter-term option considered is $\phi_{\kappa_i}(S_{T_1}) = max\big( i_{cp} \cdot (S_{T_1} - \kappa_i), 0 \big)$, where, $i_{cp} = +1$ for call options and $-1$ for put options. The payoff vector of hedging portfolio options is represented as $\phi(S_{T_1}) = \big(\phi_{\kappa_i}(S_{T_1}) \big)_{i=1, .., n}^{\intercal} \in \mathbb{R}^{n}$.\\
We are interested in learning the hedging portfolio weights and cash position $C(T_1)$ such that it replicates the target option price at time $T_1$, thereby replicating the target option backwards to time zero under the no-arbitrage assumption. In other words, we are interested in building a model to optimize the portfolio weights $W$ and cash position $C(T_1)$, such that $W^\intercal \cdot \phi(S_{T_1}) + C(T_1)$ is close to $V(T_0; S_{T_1})$. The number of samples required to build such a model is equal to the number of simulation paths $N$ used to simulate the index levels $S_{T_1}$ at time $T_1$. In each simulated path $\omega_i \in \Omega$ ($i = 1, 2, ....., N$) of the index level $S_{T_{1}}(\omega_i)$, the option payoffs vector of the $n$ options in the hedging portfolio is $\phi \big(S_{T_1}(\omega_i) \big) \in \mathbb{R}^{n}$ and the target option price is $V \big(T_0; S_{T_1}(\omega_i) \big) \in \mathbb{R}$. \\
\subsection{Optimisation Problem}
\label{Optimisation Problem}
We propose lasso regression illustrated by \citet{tibshirani1996regression} to learn the portfolio weights of the hedging portfolio and the cash position. The other modeling choices considered were multiple linear regression, artificial neural networks, and ridge regression. However, the regularisation nature of the optimization problem and constraints in lasso regression (refer to Equation \ref{eq:lasso_opt2}) tend to shrink some model coefficients and limit other insignificant coefficients to exactly zero unlike other regression and neural network models. This has a financial significance because the shorter-term options corresponding to which portfolio weights are zero can be ignored for hedging, resulting in a relatively smaller and more efficient hedging portfolio. Further, the single-layer ReLU-based neural network model for static hedging of OTC derivatives proposed by \citet{lokeshwar2022explainable} can be replaced by Lasso regression by considering each node in the hidden layer as an independent variable in the regression, thereby, achieving an efficient runtime as an additional advantage.
To learn the portfolio weights of the hedging portfolio and the cash position at time zero, lasso regression is performed at the simulation time $T_1$, where the regressor variable vector is $\phi(S_{T_1})$ and the response variable is $V(T_0; S_{T_1})$. Based on $N$ Monte Carlo simulations of future index levels at $T_1$, we have the data $ \Big(X_{i} = \phi \big(S_{T_1}(\omega_i) \big), \ y_{i} = V \big(T_0; S_{T_1}(\omega_i) \big) \Big)$, $i = 1, 2, \ldots, N$. Further, we define $\textbf{X} = (X_i)_{i=1, .., N} \in \mathbb{R}^{n \times N}$ and $\textbf{Y} = (y_i)^{\intercal}_{i=1, 2, ..., N} \in \mathbb{R}^{N}$. Let $\tilde{\textbf{X}} \in \mathbb{R}^{(n+1) \times N}$ be the augmented matrix whose first row is the unit vector $(1)_{i = 1,...., N}^{\intercal}$ and whose remaining rows correspond to $\textbf{X}$; similarly, let $\tilde{W} \in \mathbb{R}^{n + 1}$ be the augmented weight vector with first element $w_0 = C(T_1)$ and the remaining elements equal to $W$. The constant term $w_0$ in the lasso regression corresponds to the cash position at the future time $T_1$. The lasso optimisation problem can be written as,
\begin{mini}[2]
{\tilde{W} \in \mathbb{R}^{n+1}}
{\frac{1}{N} \ \Vert \tilde{\textbf{X}}^{\intercal} \tilde{W} \ - \ \textbf{Y} \Vert^{2}_{2} }
{\label{eq:lasso_opt}}
{}
\addConstraint{\Vert \tilde{W} \Vert_{1}}{\leq \zeta}
\end{mini}
\noindent or equivalently, the Lagrangian form is,
\begin{mini}[2]
{\tilde{W} \in \mathbb{R}^{n+1}}
{\frac{1}{N} \ \Vert \tilde{\textbf{X}}^{\intercal} \tilde{W} \ - \ \textbf{Y} \Vert^{2}_{2} + \lambda \Vert \tilde{W} \Vert_{1}}
{\label{eq:lasso_opt2}}
{}
\end{mini}
where $\Vert \cdot \Vert_{1}$ and $\Vert \cdot \Vert_{2}$ are the L1 and L2 norms, respectively, $\zeta \ (\ge 0)$ is a tuning parameter, and $\lambda \ge 0$ is the Lagrange multiplier. $\lambda$ is a user-defined input to the optimization problem and acts as a shrinkage parameter that prevents multicollinearity and reduces model complexity. If $\lambda = 0$, the lasso reduces to multiple linear regression, and as $\lambda$ increases to $\infty$ the coefficients shrink to zero. In other words, $\lambda$ controls the trade-off between model complexity and model accuracy. The optimal $\lambda$ is selected by observing where the learning curve (the plot of $\lambda$ on the x-axis against model error on the y-axis) flattens. \\
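To make the optimisation concrete, the Lagrangian form of Equation \ref{eq:lasso_opt2} can be solved by cyclic coordinate descent with soft-thresholding. The sketch below is a minimal NumPy illustration (function and variable names are ours; a production implementation would use a standard library routine). Note that, as in the formulation above, the cash-position column of ones is penalised together with the option weights.

```python
import numpy as np

def soft_threshold(rho, t):
    """Soft-thresholding operator used in the lasso coordinate-descent update."""
    if rho > t:
        return rho - t
    if rho < -t:
        return rho + t
    return 0.0

def lasso_hedge_weights(X, y, lam, n_iter=300):
    """Minimise (1/N)||X w - y||_2^2 + lam * ||w||_1 by cyclic coordinate descent.

    X : (N, n+1) augmented payoff matrix; column 0 is all ones, so w[0]
        plays the role of the cash position C(T1) (penalised as in the text).
    y : (N,) target option prices V(T0; S_T1) on the simulated paths.
    """
    N, n = X.shape
    w = np.zeros(n)
    z = (X ** 2).sum(axis=0)                 # column norms x_j^T x_j
    for _ in range(n_iter):
        for j in range(n):
            r = y - X @ w + X[:, j] * w[j]   # residual with feature j removed
            w[j] = soft_threshold(X[:, j] @ r, lam * N / 2.0) / z[j]
    return w
```

With a small $\lambda$ the fit approaches ordinary least squares, while a large $\lambda$ drives all weights to zero, which is exactly the behaviour exploited to prune the hedging portfolio.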
A generalized algorithm for performing the lasso static hedge is provided as Algorithm \ref{algo}. The lasso regression fit for the NIFTY and BANKNIFTY options (ATM call and put) in the first week of the Covid-stressed period is shown in Figure \ref{lasso_regression_fit.png}. The plot is intended only to demonstrate the performance of the lasso regression model fit; the methodology, market data, and analysis of the lasso static hedge regression are discussed in Section \ref{Empirical Analysis of Lasso Static Hedge}. Further, the model fit shown is based on index simulations independent of those used to train the model. The mean absolute error as a ratio of the underlying index level is of the order of $10^{-5}$ in all the cases considered.
\begin{algorithm}[!ht]
\DontPrintSemicolon
Generate the implied volatility surface as of time $T_0$ based on liquid options with maturities corresponding to the target longer-term option and the shorter-term options of the hedge portfolio.\\
Create the strikes vector $K$ by filtering all liquid shorter-term call and put options for the hedging portfolio. \\
Generate $S_{T_1} (\omega_i)$ for $i=1, \ldots, 5000$ using GBM with index returns implied from the futures market and the ATM volatility corresponding to the tenor $T_1 - T_0$.\\
Calculate the target option price $V(T_0, S_{T_1})$ with the Black-Scholes formula (using the volatility obtained in step 1) and the hedging portfolio payoff vector $\phi(S_{T_1})$ for each simulated index/equity level. \\
Perform a lasso regression fit between $V(T_0, S_{T_1})$ and $\phi(S_{T_1})$ to generate the shorter-term hedge portfolio weights and the cash position. \\
Repeat the hedging at the frequency of the shorter-term maturity.
\caption{Lasso Static Hedging Portfolio Generation}
\label{algo}
\end{algorithm}
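As an illustration of steps 3 and 4 of the algorithm, the one-step GBM draw of index levels and the payoff matrix of the candidate 1-week options can be sketched as below. This is a simplified sketch: parameter names are ours, and the drift and volatility inputs would come from the futures market and the calibrated surface as described in the text.

```python
import numpy as np

def simulate_index_levels(s0, mu, sigma, tau, n_paths, seed=0):
    """One-step GBM draw of index levels at the weekly expiry T1.

    mu is the return implied from the futures market and sigma the ATM
    volatility for the tenor tau = T1 - T0 (names are illustrative).
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    return s0 * np.exp((mu - 0.5 * sigma ** 2) * tau + sigma * np.sqrt(tau) * z)

def hedge_payoffs(s_t1, call_strikes, put_strikes):
    """Payoff matrix of the 1-week calls and puts in the hedging portfolio.

    Column 0 is a vector of ones representing the cash position.
    """
    calls = np.maximum(s_t1[:, None] - np.asarray(call_strikes)[None, :], 0.0)
    puts = np.maximum(np.asarray(put_strikes)[None, :] - s_t1[:, None], 0.0)
    return np.hstack([np.ones((len(s_t1), 1)), calls, puts])
```

The resulting payoff matrix and the vector of target option prices are exactly the regression inputs $\tilde{\textbf{X}}$ and $\textbf{Y}$ of the lasso fit in step 5.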
\begin{figure}[t]
\begin{center}
\includegraphics[width=4.5in, height=3.5in]{lasso_regression_fit.png}
\caption{Lasso Regression Fit} \label{lasso_regression_fit.png}
\end{center}
\end{figure}
\section{Empirical Analysis of Lasso Static Hedge}
\label{Empirical Analysis of Lasso Static Hedge}
This section empirically studies the proposed lasso regression-based static hedging performance in the National Stock Exchange (NSE) on the two most liquid European index options, namely, NIFTY and BANKNIFTY options. As 1-month and 1-week options are the most traded index options, we consider static hedging of 1-month options by the algorithm-driven portfolio of 1-week options for the analysis\footnote{The python codes used for the empirical analysis are available at \\ \textit{http://github.com/Vikranth1508/FinML/tree/static\_hedge\_codes}}. \\
\subsection{Modelling Framework}
\label{Modelling Framework}
The NSE market issues one-month options\footnote{In this paper, 1-month options are options issued on the last Thursday of a month and expiring on the last Thursday of the following month. Similarly, 1-week options refer to options issued on any Thursday and expiring on the subsequent Thursday.} on the last Thursday of every month and one-week options on all Thursdays (or the previous working day if Thursday is not a business day). For a given 1-month target option to be hedged, we set up the hedging portfolio at the beginning of every Thursday in the month and monitor the daily changes in the hedging portfolio value against the change in the target option value over the entire week (until the hedging portfolio expires). In other words, we compare the daily profit and loss (termed PnL in this paper) of the hedging portfolio and the target option for one week. We perform this comparison in every week except the last week of the month, when the target option and the hedging portfolio have the same maturity and taking an offsetting position would be a perfect hedge. \\
To perform the proposed static hedging exercise for any target 1-month option (with strike $K^{*}$) on an underlying index, firstly, at the beginning of each week, we select all available strikes ($K$) of liquid 1-week call and put options on the index. We achieve the selection by considering three parameters: trading volume, open interest, and option moneyness. We select only options with trading volume and open interest greater than the respective $50^{th}$ percentiles calculated over all the available 1-week options on the index of interest. Further, we filter only ATM and OTM 1-week options (calls and puts) for constructing the hedging portfolio. Secondly, at the start of each week ($T_0$), we create 5000 Monte Carlo scenarios of index levels on the weekly expiry date ($T_1$) using Equation \ref{GBM}. Finally, at future time $T_1$, we perform lasso regression with the hedging portfolio payoff vector $\phi(S_{T_1})$ as the regressor variables and the target option value $V(T_0; S_{T_1})$ as the response variable, to learn the portfolio weights and cash position used to set up the hedging portfolio at the beginning of the week. We also highlight that transaction costs are taken into account for taking long/short positions in the hedging portfolio.\\
The required market data for the analysis is discussed in Section \ref{Market Data}. For simulation of index levels each week, we need the index returns and ATM volatility of the index corresponding to one week tenor at the beginning of every week. The methodology adopted for the implied volatility surface construction is discussed in Section \ref{Implied Volatility Surface Construction}. Similarly, the model used for pricing the target option one week in the future is discussed in Section \ref{Pricing Model}.
\subsubsection{Market Data Information}
\label{Market Data}
The historical data of one year from last Thursday of July 2019 till last Thursday of
July 2020 is considered. The historical index spot values, options, futures data, and the information on transaction costs are sourced from historical contract-wise data archives from the NSE platform\footnote{The market data used are available at \\ \textit{http://github.com/Vikranth1508/FinML/tree/static\_hedge\_mktdata}}. The repo rates published by the Reserve Bank of India are considered risk-free interest rates. The end-of-day settlement price of the index, futures, and options is considered the respective instruments' daily price. The futures and options data are mainly used to extract settlement prices, strikes of available options, trading volume, and open interest.
\subsubsection{Implied Volatility Surface Construction}
\label{Implied Volatility Surface Construction}
In this section, we illustrate the methodology used to construct the implied volatility surface across moneyness $M$ (in this paper, $ M = \frac{Spot}{Strike} $) and time to maturity. On every Thursday $T_0$, we generate the volatility surface across the moneyness of liquid options and the two tenors $T_2 - T_0$ and $T_1 - T_0$, where $T_2$ and $T_1$ are the monthly and weekly option expiries, respectively, as defined in Section \ref{Static Hedging Framework}. On the other days of the week, we generate an implied volatility smile corresponding only to monthly options, which is required as an input exclusively in the dynamic hedging procedure discussed in Section \ref{Benchmark against Dynamic Delta Hedging}. \\
Based on the same liquidity filters used in Section \ref{Modelling Framework}, we select liquid ATM and OTM options (calls and puts). We apply the root-finding method introduced by \citet{brent1971algorithm} to find the implied volatility that matches the Black-Scholes price (see \citet{black1973pricing}) with the market price. The OTM wing of the calibrated implied volatility surface is constructed from liquid OTM call options, whereas the ITM wing is constructed from liquid OTM puts. In the ATM region, call and put implied volatilities are averaged to obtain a smooth smile. \\
For each time to maturity on the surface, the constructed volatility smile has a few anchor points corresponding to the moneyness of the liquid options selected for calibration. Therefore, to generate volatility on the continuum of moneyness levels, four implied volatility surface parameterization/interpolation methods are considered: linear interpolation, cubic spline interpolation, quadratic function fit, and cubic polynomial fit. In the case of parameterization by fitting functions, we highlight that the quadratic and cubic functions of moneyness are fitted to the implied volatility smile of each tenor in the volatility surface separately. Flat extrapolation is used in all the methods considered. To obtain the implied volatility across the continuum of time to maturity, linear interpolation in variance-time space and flat extrapolation are used. Figure \ref{A3_IV_CubicSpline_MonthlyOptions_Week0.png} compares the different fits/interpolations of the implied volatility smile for the one-month tenor as of the last Thursday of every month considered. A few interesting observations: in normal market conditions, liquid options are available predominantly in the moneyness range $0.8$--$1.2$, whereas in the Covid stress period liquidity expands to moneyness levels of approximately $0.6$--$1.4$ due to highly volatile index movements. We also visually observe a better cubic fit than quadratic fit during the stressed period. The cubic spline and linear interpolation are very close to each other except in the non-linear regions of the smiles.
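The interpolation and fitting alternatives for a single tenor's smile can be sketched as below. This is a hedged illustration with names of our choosing: the cubic spline variant would additionally use an interpolation routine such as \texttt{scipy.interpolate.CubicSpline} (omitted here), and flat extrapolation is implemented by clipping the query moneyness to the anchor range.

```python
import numpy as np

def smile_interp(m_query, m_anchor, iv_anchor, method="linear"):
    """Implied-vol smile for one tenor with flat extrapolation.

    m_anchor / iv_anchor are the calibrated anchor points; the methods
    mirror the alternatives in the text (a cubic polynomial fit needs
    at least four anchor points to be well determined).
    """
    m = np.clip(m_query, m_anchor.min(), m_anchor.max())  # flat extrapolation
    if method == "linear":
        return np.interp(m, m_anchor, iv_anchor)
    if method == "quadratic":
        return np.polyval(np.polyfit(m_anchor, iv_anchor, 2), m)
    if method == "cubic_poly":
        return np.polyval(np.polyfit(m_anchor, iv_anchor, 3), m)
    raise ValueError(f"unknown method: {method}")
```

Interpolation across maturities would then be done on total variance, linearly in time, as described above.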
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=\textwidth, height=6in]{A3_IV_CubicSpline_MonthlyOptions_Week0.png}
\caption{Implied Volatility Smile - Curve Fitting/Interpolation Comparison} \label{A3_IV_CubicSpline_MonthlyOptions_Week0.png}
\end{center}
\end{figure}
\subsubsection{Pricing Model}
\label{Pricing Model}
The Black-Scholes formula is used to price the long-term option with strike $K^{*}$ (maturing at $T_2$) at each simulated index level at $T_1$, with market data as of $T_0$ (in other words, conditioned on $\mathcal{F}_{T_0}$). The Black-Scholes price is given by,
\begin{align}\label{BS}
V(T_0; S_{T_1}) &= i_{cp} \cdot \ N(i_{cp} \cdot d_1) \, S_{T_1} e^{-q (T_2 - T_1)} \\
& \ - \ i_{cp} \cdot N(i_{cp} \cdot d_2) \, K^{*} e^{-r (T_2 - T_1)}, \nonumber \\
d_1 &= \frac{\ln\Big(\frac{S_{T_1}}{K^{*}}\Big) \ + \ \big(r - q + \frac{\sigma^{2}}{2}\big) (T_2 - T_1)}{\sigma \sqrt{T_2-T_1}} , \nonumber \\
d_2 &= d_1 - \sigma \sqrt{T_2-T_1}, \nonumber
\end{align}
\noindent where $N(\cdot)$ is the standard normal cumulative distribution function, $q$ is the index dividend yield inferred from index futures, and $i_{cp}$ equals $+1$ for a call and $-1$ for a put.
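A minimal implementation of the Black-Scholes price above, using only the Python standard library (the normal CDF is expressed through the error function), might look as follows; the function names are ours.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function (no SciPy needed)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_price(s, k, r, q, sigma, tau, icp=1):
    """Black-Scholes price; icp = +1 for a call, -1 for a put.

    s is the (simulated) index level, k the strike, tau = T2 - T1 the
    remaining time to maturity, q the dividend yield implied from futures.
    """
    d1 = (math.log(s / k) + (r - q + 0.5 * sigma ** 2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return icp * (s * math.exp(-q * tau) * norm_cdf(icp * d1)
                  - k * math.exp(-r * tau) * norm_cdf(icp * d2))
```

A quick sanity check is put-call parity: the call price minus the put price must equal $S e^{-q\tau} - K e^{-r\tau}$ for any volatility input.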
There are four alternative volatility inputs ($\sigma$ in Equation \ref{BS}) considered to price the target option on each simulated index path at time $T_1$: Constant Volatility, Constant Volatility Smile, Constant Volatility Surface, and Forward Smile. \\
Let $\sigma \big(T_0; M,[T_1-T_0, T_2 - T_0] \big)$ be the implied volatility surface across moneyness $M$ and the two times to maturity ($T_2-T_0$ and $T_1-T_0$) as of time $T_0$, calibrated by the methodology detailed in Section \ref{Implied Volatility Surface Construction}. In the constant volatility approach, $\sigma(T_0; M(S_{T_0}), T_2-T_0)$, where $M(S_{T_0}) = \frac{S_{T_0}}{K^{*}}$ is the moneyness of the target option at $T_0$, is extracted from the calibrated surface and used as the volatility input $\sigma$ in the pricing model of Equation \ref{BS}. In the constant smile approach, $\sigma \Big(T_0; M \big(S_{T_1}(\omega_i) \big), T_2-T_0 \Big)$, where $M \big(S_{T_1}(\omega_i) \big) = \frac{S_{T_1}(\omega_i)}{K^{*}}$ is the moneyness of the target option at future time $T_1$ with respect to each simulated path $\omega_i$ ($i = 1, 2, 3, \ldots, N$), is used in the option pricing formula. In the constant surface approach, $\sigma \Big(T_0; M \big(S_{T_1}(\omega_i) \big), T_2-T_1 \Big)$ is the volatility input for pricing the target option on each simulated path $\omega_i$; the only difference from the constant smile approach is that the tenor corresponds to the time to maturity of the target option as of $T_1$. In the forward volatility model, the volatility input is the forward smile implied volatility (or simply forward vol) $\sigma_{fwd}$, where the forward variance is $\sigma^{2}_{fwd} = \sigma^{2} \Big(T_0; M \big(S_{T_1}(\omega_i) \big), T_2-T_0 \Big) (T_2-T_0) - \sigma^{2} \Big(T_0; M \big(S_{T_1}(\omega_i) \big), T_1-T_0 \Big) (T_1 - T_0) $. Please note that if the moneyness or time to maturity of the option on a simulated path is not one of the anchor points of the volatility surface, one of the interpolation or fitting methods discussed in Section \ref{Implied Volatility Surface Construction} is used to generate the volatility. The model variants arising from the different volatility inputs are referred to as the pricing model alternatives/types.
For example, the Constant Volatility model means that the Black-Scholes pricing model used to price the target option at the simulated index levels uses the constant volatility $\sigma(T_0; M(S_{T_0}), T_2-T_0)$. A summary of the pricing model types and their corresponding volatility inputs is presented in Table \ref{tab:pricing_model_options}.
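As a small numerical sketch of the forward smile input, the total forward variance in the text can be converted to a volatility as below. Note that we annualise by dividing by $T_2 - T_1$ before taking the square root, which is the standard convention and an assumption on our part; the function name is ours.

```python
import math

def forward_vol(sigma_t2, sigma_t1, t2, t1):
    """Forward implied vol over [T1, T2] from vols for tenors t2 = T2-T0, t1 = T1-T0.

    The total forward variance (as in the text) is sigma_t2^2*t2 - sigma_t1^2*t1;
    we annualise it over t2 - t1 before taking the square root (our assumption),
    flooring at zero in case the calibrated surface admits negative forward variance.
    """
    total_fwd_var = sigma_t2 ** 2 * t2 - sigma_t1 ** 2 * t1
    return math.sqrt(max(total_fwd_var, 0.0) / (t2 - t1))
```

For a flat term structure the forward vol reduces to the spot vol, which serves as a basic consistency check.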
\begin{table}[!htb]
\centering
\resizebox{0.85\textwidth}{!}{%
\begin{tabular}{|c|c|l|}
\toprule
\textbf{S.No.} & \textbf{Pricing Model Type} & \textbf{Volatility Input $\sigma$} \\
\midrule
1 & Constant Volatility & $\sigma(T_0; M(S_{T_0}), T_2-T_0)$, \\
& & where, $M(S_{T_0}) = \frac{S_{T_0}}{K^{*}}$ is the moneyness \\
& & of the target option at $T_0$. \\
\midrule
2 & Constant Volatility Smile & $\sigma \Big(T_0; M \big(S_{T_1}(\omega_i) \big), T_2-T_0 \Big)$, \\
& & where, $M \big(S_{T_1}(\omega_i) \big) = \frac{S_{T_1}(\omega_i)}{K^{*}}$ is the moneyness \\
& & of the target option at future time $T_1$ \\
& & with respect to each simulated path $\omega_i$ \\
& & ($i = 1, 2, 3, ...., N$). \\
\midrule
3 & Constant Volatility Surface & $\sigma \Big(T_0; M \big(S_{T_1}(\omega_i) \big), T_2-T_1 \Big)$. \\
\midrule
4 & Forward Smile Volatility & $\sigma_{fwd}$, where, the forward variance is \\
& & $\sigma^{2}_{fwd} = \sigma^{2} \Big(T_0; M \big(S_{T_1}(\omega_i) \big), T_2-T_0 \Big) (T_2-T_0)$ \\
& & $ \ \ \ \ \ \ \ \ \ - \sigma^{2} \Big(T_0; M \big(S_{T_1}(\omega_i) \big), T_1-T_0 \Big) (T_1 - T_0) $. \\
\bottomrule
\end{tabular}%
}
\caption{Target Options Pricing Model Types}
\label{tab:pricing_model_options}%
\end{table}%
\subsubsection{Model Alternatives and Test Scenarios}
\label{Model Alternatives and Test Scenarios}
In this section, we summarise all the lasso regression static hedge model options, different underlying, trade characteristics, and market scenarios considered for the analysis of the static hedging performance.
\begin{table}[!htb]
\centering
\resizebox{0.85\textwidth}{!}{%
\begin{tabular}{|c|c|c|}
\toprule
\textbf{S.No.} & \textbf{Pricing Model Type} & \textbf{Implied Volatility Surface Type} \\
\midrule
1 & Constant Volatility & Linear Interpolation \\
\midrule
2 & Constant Volatility Smile & Cubic Smile \\
\midrule
3 & Constant Volatility Surface & Quadratic Function \\
\midrule
4 & Forward Smile Volatility & Cubic Polynomial \\
\bottomrule
\end{tabular}%
}
\caption{Static Hedge Model Options}
\label{tab:model_options}%
\end{table}%
The different static hedge models are based on the different implied volatility surface parameterization/interpolation techniques discussed in Section \ref{Implied Volatility Surface Construction} and the Black-Scholes pricing model variants based on the four input volatility types discussed in Section \ref{Pricing Model}. Table \ref{tab:model_options} describes the alternatives; each combination of volatility surface type and pricing model forms a static hedge model type, yielding 16 different static hedging model options. For example, the constant volatility pricing model and the cubic polynomial volatility surface together form the Constant Volatility Cubic Polynomial lasso static hedge model. \\
For the dynamic delta hedging and Carr-Wu static hedging models (which are used to benchmark the lasso static hedge), there are four alternatives each, based on the type of interpolation or fitting used to generate the implied volatility input for calculating the delta (the first derivative of the option price with respect to the underlying) and the Carr-Wu portfolio weights, respectively. \\
We analyze two trade types: call and put options on the BANKNIFTY and NIFTY indices. Additionally, we consider different moneyness levels of the monthly option trade for the study. Based on the liquidity constraint over the full historical period, we consider the available option with moneyness $M(S_{T_0})$ closest to 1 as at-the-money (ATM), to 0.9 as out-of-the-money (OTM), and to 1.1 as in-the-money (ITM) for calls, and the inverse for puts. For example, in the case of ATM options, we consider only the ATM option on the start date of each monthly option (i.e., the last Thursday of each month in the historical data) and study the static hedge performance. We perform the analysis under two market conditions: one with the full historical data and the other with the Covid historical window. \\
Given the challenge of comparing a large number of models and selecting the best one, we adopt the Superior Predictive Ability test (discussed in the next section) as a sophisticated model comparison methodology. \\
\subsection{Superior Predictive Analysis}
\label{Superior Predictive Analysis}
The Superior Predictive Ability (SPA) test developed by \citet{hansen2005test} is a framework for comparing multiple forecasting models, known for its high power and low sensitivity to poor alternative models. In the chosen model comparison approach, we apply SPA first to select the superior model within the lasso-based static hedging, dynamic hedging, and Carr-Wu static hedging model alternatives individually, and then benchmark the superior lasso model against the best dynamic hedging and Carr-Wu static hedging models. We perform this test under different trade characteristics (underlying index, option type, and moneyness) over both the full historical period and the Covid-stressed period. \\
For the lasso static hedge, we select a 1-month option at the beginning of each month in the historical period considered and statically hedge it every week using the 16 alternative models discussed in Section \ref{Model Alternatives and Test Scenarios}. In other words, we forecast the daily PnL of the monthly option using the daily PnL of the static hedge portfolio. Let the PnL forecasts of each model $m$ be $\hat{\delta}_{m, t}$ ($m = 0, 1, 2, \ldots, M$ indexes the available models) and let the realised target option PnL be $\delta_{t}$, where $t = 1, 2, 3, \ldots, d$ indexes the historical daily data points available for the analysis. Therefore, each model yields a loss sequence $L_{m,t} = L(\hat{\delta}_{m, t}, \delta_{t})$. We consider two loss functions $L(.)$: squared error loss and absolute error loss. The squared error loss function yields the squared difference sequence, whereas the absolute error loss function yields the absolute difference sequence between the predicted and realised PnLs. For a benchmark model $m=0$, we define the relative performance sequence of each alternative model $m$ as $R_{m,t} = (L_{0,t} - L_{m,t})$, where $m = 1, ..., M$. We define the performance sequence of each model as $R_m = (R_{m, 1}, R_{m, 2}, \ldots, R_{m, d})^{\intercal} \in \mathbb{R}^{d} $ and $ \boldsymbol{\mu} = (\mathbb{E}[R_{m}])_{m=1, 2, ..., M}^{\intercal} \in \mathbb{R}^{M} $. We assume $R_m$ is strictly stationary with expectation $\mathbb{E}[|R_m|^{\gamma+\xi}] < \infty$, variance $Var[R_m] > 0$, and $\alpha$-mixing of size $-(2 + \xi)(\gamma + \xi)/(\gamma-2)$, for some $\gamma > 2$, $\xi > 0$ and $m = 1, 2, \ldots, M$. We also empirically tested stationarity using the Augmented Dickey-Fuller (ADF) test and the Kwiatkowski–Phillips–Schmidt–Shin (KPSS) test at the 95\% confidence level.
Across all the model comparisons in Sections \ref{Selection of Superior Lasso Static hedge Model}, \ref{Benchmark against Dynamic Delta Hedging} and \ref{Benchmark against Carr-Wu Static hedge}, we observe that the performance time series is stationary for the squared error loss function, with very few exceptions for the absolute error loss function: four exceptions within the 1440 lasso static hedging comparisons and four exceptions for NIFTY ITM put options within the dynamic hedging comparisons.
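The construction of the loss and relative performance sequences above can be sketched as follows (a minimal illustration; the array layout and function name are ours):

```python
import numpy as np

def relative_performance(pnl_hat, pnl_real, loss="squared"):
    """Relative-performance sequences R_{m,t} = L_{0,t} - L_{m,t}.

    pnl_hat : (M+1, d) forecast PnLs; row 0 is the benchmark model.
    pnl_real: (d,) realised target-option PnL.
    Returns a (d, M) matrix; positive entries favour alternative m.
    """
    err = np.asarray(pnl_hat) - np.asarray(pnl_real)[None, :]
    L = err ** 2 if loss == "squared" else np.abs(err)
    return (L[0] - L[1:]).T
```

Each column of the returned matrix is one alternative model's performance sequence $R_m$, the input to the SPA test below.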
The null hypothesis of the test is $H_0 : \boldsymbol{\mu} \le 0 $; in other words, the benchmark model is not inferior to any of the alternative models. The test uses the studentized test statistic defined below:
\begin{equation}
T_{d}^{SPA} = \max \Big[ \max_{m = 1,\ldots,M} \ \sqrt{d} \, \frac{\bar{R}_m}{ \hat{\omega}_m}, \ 0 \Big],
\end{equation}
\noindent where $\hat{\omega}_{m}^{2}$ is a consistent estimator of $Var(\sqrt{d}\bar{R}_m)$ and $\bar{R}_m = d^{-1} \sum_{t = 1}^{d} R_{m,t}$. The sampling distribution under the null hypothesis is the normal $N(\hat{\mu}^c, \hat{\Sigma})$, where $\hat{\Sigma}$ is the variance-covariance estimate of $(\bar{R}_m)^{\intercal}_{m = 1, ..., M}$, $ \hat{\mu}^c = (\hat{\mu}_{m}^{c})^{\intercal}_{m = 1, ..., M}$, and, for each $m$, $\hat{\mu}_{m}^{c} = \bar{R}_m \mathbbm{1}_{[ \ \sqrt{d} \bar{R}_m / \hat{\omega}_m \ \le \ -2\sqrt{2\log\log d}\ ]}$, with $\mathbbm{1}_{[.]}$ an indicator function. We also test the hypothesis with respect to two other expectations of the null distribution, $\hat{\mu}^l = (\hat{\mu}^l_m)_{m=1, ..., M}^{\intercal}$ and $\hat{\mu}^u = (\hat{\mu}^u_m)_{m=1, ..., M}^{\intercal}$, where $\hat{\mu}^l_m = \min(\bar{R}_m, 0) $ and $\hat{\mu}^u_m = 0$ for all $m$. \\
We implement the stationary bootstrap of \citet{politis1994stationary} to estimate the sampling distribution of the test statistic $T_d^{SPA}$ under the null hypothesis. This avoids estimating $\hat{\Sigma}$, which is difficult to estimate accurately when the number of models is large. In the analysis, we construct pseudo time series $R^{*}_{m,b} = (R^{*}_{m, b, t})^{\intercal}_{t = 1, ...., d}$ by resampling from the original performance sequence $R_m$ for each model $m$ and $b= 1, 2, ...., B=1000$ ($B$ is the total number of bootstrap resamples). Each pseudo sample $b$ is constructed by randomly sampling blocks of the optimal block length defined by \citet{politis2004automatic}. From each bootstrap resample, we calculate the sample average $\bar{R}_{m, b}^{*} = d^{-1} \sum_{t=1}^{d} R^{*}_{m, b, t}$, and it follows from \citet{gonccalves2003consistency} that the empirical distribution of $\sqrt{d} \bar{R}_{m, b}^{*} $ converges to the true asymptotic distribution of $\sqrt{d}\bar{R}_m $. Further, the resamples yield a consistent variance estimator $\hat{\omega}_{m}^{2} = \frac{d}{B} \sum_{b=1}^{B} \big( \bar{R}_{m, b}^{*} - \bar{R}_{m} \big)^{2} $. This enables us to approximate the distribution of $T_{d}^{SPA}$ by the empirical distribution of $T_{b, d}^{SPA*, i} = \underset{m = 1, ..., M}{\max} \frac{\sqrt{d} \ (\bar{R}_{m, b}^{*} - \bar{R}_m + \hat{\mu}_{m}^{i})}{\hat{\omega}_{m}}$, $b = 1, .., B$ and $i = l, c, u$, where each bootstrap mean is recentred at the null mean $\hat{\mu}_{m}^{i}$. We calculate the p-value of the SPA test as $\hat{P}_{SPA}^{i} = B^{-1} \sum_{b=1}^{B} \mathbbm{1}_{[T_{b, d}^{SPA*, i} > T_{d}^{SPA}]} $, for $i = l, c, u$. The three choices of $\hat{\mu}^{i}_m$ ($i = l, c, u$) give rise to three p-values. The p-value based on $\hat{\mu}^{c}_m$ is consistent for the true p-value, whereas the p-values corresponding to $\hat{\mu}^{l}_m$ and $\hat{\mu}^{u}_m$ provide the lower and upper bounds for the true p-value, respectively.
In this paper, we refer to the three tests with p-values $\hat{P}_{SPA}^{l}$, $\hat{P}_{SPA}^{c}$ and $\hat{P}_{SPA}^{u}$ as SPA Lower, SPA Consistent, and SPA Upper. The hypothesis tests are performed at the 95\% confidence level; in other words, the null hypothesis is rejected if the p-value is less than 5\%.
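A compact sketch of the bootstrap test (consistent version) is given below. It is an illustration rather than the exact implementation: we use a fixed average block length instead of the optimal one of \citet{politis2004automatic}, recentre each bootstrap mean at the null mean $\hat{\mu}^c_m$, and count ties at zero as non-rejections so that a benchmark with $T^{SPA}_d = 0$ is never spuriously rejected.

```python
import numpy as np

def stationary_bootstrap(x, avg_block, rng):
    """One Politis-Romano stationary-bootstrap resample of the rows of x."""
    d = len(x)
    idx = np.empty(d, dtype=int)
    t = int(rng.integers(d))
    for i in range(d):
        idx[i] = t
        # with probability 1/avg_block start a new block, else continue circularly
        t = int(rng.integers(d)) if rng.random() < 1.0 / avg_block else (t + 1) % d
    return x[idx]

def spa_pvalue(R, avg_block=5, B=500, seed=7):
    """SPA 'consistent' p-value for the relative-performance matrix R (d x M).

    Positive columns of R favour the alternative models over the benchmark.
    """
    rng = np.random.default_rng(seed)
    d, M = R.shape
    Rbar = R.mean(axis=0)
    boot = np.array([stationary_bootstrap(R, avg_block, rng).mean(axis=0)
                     for _ in range(B)])                        # (B, M)
    omega = np.sqrt(d / B * ((boot - Rbar) ** 2).sum(axis=0))   # hat{omega}_m
    stats = np.sqrt(d) * Rbar / omega
    t_spa = max(stats.max(), 0.0)
    # mu^c: keep the mean of significantly poor models, zero otherwise
    mu_c = np.where(stats <= -2.0 * np.sqrt(2.0 * np.log(np.log(d))), Rbar, 0.0)
    t_boot = (np.sqrt(d) * (boot - Rbar + mu_c) / omega).max(axis=1)
    t_boot = np.maximum(t_boot, 0.0)
    return float((t_boot >= t_spa).mean())
```

When every alternative underperforms the benchmark the p-value stays large (no rejection), and when one alternative clearly dominates it collapses towards zero, which is the qualitative behaviour the tables below rely on.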
\subsubsection{Selection of Superior Lasso Static hedge Model}
\label{Selection of Superior Lasso Static hedge Model}
We performed the SPA test with each of the 16 lasso-based static hedge models as the benchmark, compared against the remaining models. We consider a model universally superior among a set of models if two conditions are satisfied:
\begin{itemize}
\item If we cannot reject the null hypothesis of the SPA (Consistent) test with the corresponding model as the benchmark model (and others as alternatives) under all the testing scenarios (based on different underlying, option types, moneyness, and error loss functions).
\item If all other alternative models have at least one rejection case of the null hypothesis when used as a SPA (Consistent) test benchmark.
\end{itemize}
We observed that the Constant Volatility Linear Model\footnote{Constant Volatility Linear Model, in other words, the lasso static hedge with constant volatility as the volatility input for pricing the target option at the future simulated time, the volatility being extracted from the calibrated volatility surface by linear interpolation.} is the universally superior model among the 16 lasso-based static hedge models. The SPA p-values (Lower, Consistent, Upper) of the benchmark Lasso Constant Volatility Linear Model with the absolute error loss and squared error loss functions are presented in Table \ref{tab:static_winner} across NIFTY/BANKNIFTY call and put target options with ATM, ITM, and OTM scenarios. We also show the regression fit between the realized PnL and the static hedge PnL obtained by the four different pricing models (with the linearly interpolated calibrated volatility surface) for the NIFTY ATM call option in Figure \ref{P2_NIFTY_Linear_ATM_CE_static_regression_fit.png}. We observe that the realized PnL and the constant volatility model PnL align closely. We also see good alignment for the other alternative models, but the R-square is slightly lower and the beta coefficient slightly farther from $1$ than for the constant volatility hedge.
\begin{table}[!htb]
\centering
\resizebox{0.85\textwidth}{!}{%
\begin{tabular}{|c|p{3.3em}|p{3em}|c|c|c|c|c|c|}
\toprule
\multicolumn{9}{|c|}{\textbf{Static Hedging Comparisons P-Values}} \\
\midrule
\multicolumn{9}{|c|}{\textbf{Benchmark: Constant Volatility Linear Model}} \\
\midrule
\multicolumn{1}{|c|}{\multirow{3}[4]{*}{\textbf{Index}}} & {\multirow{3}[4]{*}{\parbox{1cm}{\textbf{Option \\ Type }}}} & \multirow{3}[4]{*}{\parbox{1cm}{\textbf{Money \\ ness}}} & \multicolumn{3}{c|}{\textbf{Absolute Error Loss Function}} & \multicolumn{3}{c|}{\textbf{Squared Error Loss Function}} \\
\cmidrule{4-9} & & & \textbf{SPA } & \textbf{SPA } & \textbf{SPA } & \textbf{SPA } & \textbf{SPA } & \textbf{SPA } \\
\cmidrule{4-9} & & & \textbf{Lower} & \textbf{Consistent} & \textbf{ Upper} & \textbf{Lower} & \textbf{Consistent} & \textbf{Upper} \\
\midrule
\multicolumn{1}{|c|}{\multirow{6}[12]{*}{NIFTY}} & \multicolumn{1}{c|}{\multirow{3}[6]{*}{Call}} & ATM & 0.63 & 0.93 & 0.95 & 0.64 & 0.94 & 0.97 \\
\cmidrule{3-9} & & ITM & 0.38 & 0.71 & 0.71 & 0.25 & 0.69 & 0.69 \\
\cmidrule{3-9} & & OTM & 0.70 & 0.97 & 0.97 & 0.53 & 0.93 & 0.93 \\
\cmidrule{2-9} & \multicolumn{1}{c|}{\multirow{3}[6]{*}{Put}} & ATM & 0.63 & 0.77 & 0.95 & 0.66 & 0.95 & 0.96 \\
\cmidrule{3-9} & & ITM & 0.74 & 0.93 & 0.93 & 0.61 & 0.94 & 0.94 \\
\cmidrule{3-9} & & OTM & 0.19 & 0.43 & 0.59 & 0.24 & 0.46 & 0.46 \\
\midrule
\multicolumn{1}{|c|}{\multirow{6}[12]{*}{BANKNIFTY}} & \multicolumn{1}{c|}{\multirow{3}[6]{*}{Call}} & ATM & 0.71 & 0.84 & 1.00 & 0.67 & 0.99 & 0.99 \\
\cmidrule{3-9} & & ITM & 0.49 & 0.76 & 0.90 & 0.38 & 0.72 & 0.72 \\
\cmidrule{3-9} & & OTM & 0.76 & 0.88 & 0.96 & 0.56 & 0.91 & 0.91 \\
\cmidrule{2-9} & \multicolumn{1}{c|}{\multirow{3}[6]{*}{Put}} & ATM & 0.66 & 0.98 & 1.00 & 0.56 & 0.99 & 0.99 \\
\cmidrule{3-9} & & ITM & 0.73 & 0.99 & 1.00 & 0.58 & 0.83 & 0.95 \\
\cmidrule{3-9} & & OTM & 0.10 & 0.41 & 0.65 & 0.13 & 0.63 & 0.63 \\
\bottomrule
\end{tabular}%
}
\caption{SPA Comparison among Static Hedge Models}
\label{tab:static_winner}%
\end{table}%
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=\textwidth, height=5in]{P2_NIFTY_Linear_ATM_CE_static_regression_fit.png}
\caption{NIFTY ATM CALL Option: Static Hedging PnL Regression Fits} \label{P2_NIFTY_Linear_ATM_CE_static_regression_fit.png}
\end{center}
\end{figure}
\subsubsection{Benchmark against Dynamic Delta Hedging}
\label{Benchmark against Dynamic Delta Hedging}
To perform dynamic hedging, we assume that the underlying index is traded, and we set up the dynamic hedge portfolio daily by matching the target option value through an option-delta position in the underlying, with the remainder in a deposit account (with the risk-free interest rate for borrowing and lending). In this section, we first select the superior dynamic hedge model among all the dynamic hedge alternatives, which turns out to be the Linear dynamic hedge model\footnote{In the Linear dynamic hedge model, the volatility used to calculate the delta is obtained by linear interpolation of the calibrated implied volatility surface.} (please refer to Table \ref{tab:dynamic_winner} for the SPA p-values). Then, we perform an SPA comparison between the Constant Volatility Linear static hedge model and the selected dynamic hedging model. We observed that the Constant Volatility Linear static hedge is universally superior under both the full-market and Covid market conditions. We repeated the experiment 100 times with independent bootstrap sampling seeds and observed no rejections of the null hypothesis of the SPA (Consistent) test for the Constant Volatility Linear static hedge model (the SPA results with the static hedge as benchmark are available in Tables \ref{tab:stat_dyn_winner} and \ref{tab:stat_dyn_winner_covid} for the full market data and the Covid-stressed period, respectively). Figures \ref{P2_NIFTY_Linear_ATM_CE_dynamic_regression_fit.png}, \ref{P2_NIFTY_Linear_ATM_PE_dynamic_regression_fit.png}, \ref{P2_BANKNIFTY_Linear_ATM_CE_dynamic_regression_fit.png}, and \ref{P2_BANKNIFTY_Linear_ATM_PE_dynamic_regression_fit.png} make it evident that the static hedge PnL fits the realised PnL better than the dynamic hedge PnL.
Further, Figure \ref{P2_NIFTY_Linear_ATM_CE_static_vs_dynamic_pnL_error.png} shows that the constant volatility model performs better than the dynamic hedge (NIFTY ATM case); in general, we observed that the static hedge error is lower across all static model choices when compared with the dynamic hedge error (the error is calculated as the difference between the realized PnL and the hedge PnL). In the NIFTY ATM case shown, a few spikes are observed in the dynamic hedge errors, primarily in the covid-stressed period, whereas the constant volatility static hedge model performs consistently well.
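The daily rebalancing scheme described above can be sketched in a few lines. This is a minimal illustration assuming Black-Scholes deltas with a single constant volatility and zero dividends; the helper names (`bs_call`, `dynamic_hedge_values`) and parameter choices are illustrative, not the paper's implementation:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes call price (zero dividends)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def bs_call_delta(S, K, T, r, sigma):
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    return norm_cdf(d1)

def dynamic_hedge_values(spots, K, T0, r, sigma, dt=1.0 / 252.0):
    """Daily-rebalanced delta hedge: hold delta units of the index and keep the
    rest of the portfolio value in a deposit account accruing at rate r.
    Returns the self-financing hedge portfolio value after each trading day."""
    T = T0
    delta = bs_call_delta(spots[0], K, T, r, sigma)
    cash = bs_call(spots[0], K, T, r, sigma) - delta * spots[0]
    values = []
    for S_new in spots[1:]:
        cash *= math.exp(r * dt)                  # deposit account accrual
        T -= dt
        values.append(delta * S_new + cash)       # mark portfolio to market
        new_delta = bs_call_delta(S_new, K, max(T, 1e-8), r, sigma)
        cash -= (new_delta - delta) * S_new       # self-financing rebalance
        delta = new_delta
    return values
```

Feeding in an observed index path and an interpolated implied volatility yields the dynamic hedge value path; its difference from the realized option PnL gives the dynamic hedge error studied in this section.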
\begin{table}[!htb]
\centering
\resizebox{0.85\textwidth}{!}{%
\begin{tabular}{|c|p{3.3em}|p{3em}|c|c|c|c|c|c|}
\toprule
\multicolumn{9}{|c|}{\textbf{Dynamic Hedging Comparisons P-Values}} \\
\midrule
\multicolumn{9}{|c|}{\textbf{Benchmark: Dynamic Linear Model}} \\
\midrule
\multicolumn{1}{|c|}{\multirow{3}[4]{*}{\textbf{Index}}} & {\multirow{3}[4]{*}{\parbox{1cm}{\textbf{Option \\ Type }}}} & \multirow{3}[4]{*}{\parbox{1cm}{\textbf{Money \\ ness}}} & \multicolumn{3}{c|}{\textbf{Absolute Error Loss Function}} & \multicolumn{3}{c|}{\textbf{Squared Error Loss Function}} \\
\cmidrule{4-9} & & & \textbf{SPA } & \textbf{SPA } & \textbf{SPA } & \textbf{SPA } & \textbf{SPA } & \textbf{SPA } \\
\cmidrule{4-9} & & & \textbf{Lower} & \textbf{Consistent} & \textbf{ Upper} & \textbf{Lower} & \textbf{Consistent} & \textbf{Upper} \\
\midrule
\multicolumn{1}{|c|}{\multirow{6}[12]{*}{NIFTY}} & \multicolumn{1}{c|}{\multirow{3}[6]{*}{Call}} & ATM & 0.16 & 0.62 & 0.71 & 0.17 & 0.65 & 0.72 \\
\cmidrule{3-9} & & ITM & 0.56 & 0.76 & 0.76 & 0.35 & 0.75 & 0.75 \\
\cmidrule{3-9} & & OTM & 0.54 & 0.66 & 0.66 & 0.31 & 0.68 & 0.68 \\
\cmidrule{2-9} & \multicolumn{1}{c|}{\multirow{3}[6]{*}{Put}} & ATM & 0.71 & 0.91 & 0.91 & 0.34 & 0.80 & 0.80 \\
\cmidrule{3-9} & & ITM & 0.19 & 0.62 & 0.62 & 0.23 & 0.59 & 0.59 \\
\cmidrule{3-9} & & OTM & 0.14 & 0.14 & 0.14 & 0.41 & 0.65 & 0.65 \\
\midrule
\multicolumn{1}{|c|}{\multirow{6}[12]{*}{BANKNIFTY}} & \multicolumn{1}{c|}{\multirow{3}[6]{*}{Call}} & ATM & 0.51 & 0.71 & 0.71 & 0.39 & 0.68 & 0.68 \\
\cmidrule{3-9} & & ITM & 0.59 & 0.92 & 0.98 & 0.56 & 0.98 & 0.98 \\
\cmidrule{3-9} & & OTM & 0.45 & 0.45 & 0.45 & 0.65 & 0.97 & 0.97 \\
\cmidrule{2-9} & \multicolumn{1}{c|}{\multirow{3}[6]{*}{Put}} & ATM & 0.56 & 0.81 & 0.81 & 0.26 & 0.70 & 0.70 \\
\cmidrule{3-9} & & ITM & 0.42 & 0.56 & 0.83 & 0.35 & 0.52 & 0.83 \\
\cmidrule{3-9} & & OTM & 0.27 & 0.45 & 0.45 & 0.58 & 0.89 & 0.89 \\
\bottomrule
\end{tabular}%
}
\caption{SPA Comparison among Dynamic Hedge Models}
\label{tab:dynamic_winner}%
\end{table}%
\begin{table}[!htb]
\centering
\resizebox{0.85\textwidth}{!}{%
\begin{tabular}{|c|p{3.3em}|p{3em}|c|c|c|c|c|c|}
\toprule
\multicolumn{9}{|c|}{\textbf{SPA (P-Values) : Lasso Static Hedge vs Dynamic Hedge}} \\
\midrule
\multicolumn{9}{|c|}{\textbf{Benchmark: Constant Volatility Linear Static Hedge Model}} \\
\midrule
\multicolumn{1}{|c|}{\multirow{3}[4]{*}{\textbf{Index}}} & {\multirow{3}[4]{*}{\parbox{1cm}{\textbf{Option \\ Type }}}} & \multirow{3}[4]{*}{\parbox{1cm}{\textbf{Money \\ ness}}} & \multicolumn{3}{c|}{\textbf{Absolute Error Loss Function}} & \multicolumn{3}{c|}{\textbf{Squared Error Loss Function}} \\
\cmidrule{4-9} & & & \textbf{SPA } & \textbf{SPA } & \textbf{SPA } & \textbf{SPA } & \textbf{SPA } & \textbf{SPA } \\
\cmidrule{4-9} & & & \textbf{Lower} & \textbf{Consistent} & \textbf{ Upper} & \textbf{Lower} & \textbf{Consistent} & \textbf{Upper} \\
\midrule
\multicolumn{1}{|c|}{\multirow{6}[12]{*}{NIFTY}} & \multicolumn{1}{c|}{\multirow{3}[6]{*}{Call}} & ATM & 0.53 & 0.53 & 1.00 & 0.51 & 0.51 & 0.99 \\
\cmidrule{3-9} & & ITM & 0.52 & 0.52 & 1.00 & 0.53 & 0.53 & 0.99 \\
\cmidrule{3-9} & & OTM & 0.52 & 0.52 & 0.99 & 0.53 & 0.87 & 0.87 \\
\cmidrule{2-9} & \multicolumn{1}{c|}{\multirow{3}[6]{*}{Put}} & ATM & 0.53 & 0.53 & 1.00 & 0.52 & 0.52 & 0.98 \\
\cmidrule{3-9} & & ITM & 0.49 & 0.49 & 1.00 & 0.54 & 0.54 & 1.00 \\
\cmidrule{3-9} & & OTM & 0.53 & 0.53 & 0.98 & 0.56 & 0.56 & 0.96 \\
\midrule
\multicolumn{1}{|c|}{\multirow{6}[12]{*}{BANKNIFTY}} & \multicolumn{1}{c|}{\multirow{3}[6]{*}{Call}} & ATM & 0.50 & 0.50 & 1.00 & 0.56 & 0.56 & 0.99 \\
\cmidrule{3-9} & & ITM & 0.52 & 0.52 & 1.00 & 0.56 & 0.56 & 0.98 \\
\cmidrule{3-9} & & OTM & 0.51 & 0.51 & 1.00 & 0.55 & 0.55 & 0.97 \\
\cmidrule{2-9} & \multicolumn{1}{c|}{\multirow{3}[6]{*}{Put}} & ATM & 0.50 & 0.50 & 1.00 & 0.48 & 0.69 & 0.69 \\
\cmidrule{3-9} & & ITM & 0.51 & 0.92 & 0.92 & 0.32 & 0.32 & 0.32 \\
\cmidrule{3-9} & & OTM & 0.53 & 0.53 & 0.99 & 0.55 & 0.92 & 0.92 \\
\bottomrule
\end{tabular}%
}
\caption{SPA Comparison (P-Values): Lasso Static Hedge vs Dynamic Hedge}
\label{tab:stat_dyn_winner}%
\end{table}%
\begin{table}[!htb]
\centering
\resizebox{0.85\textwidth}{!}{%
\begin{tabular}{|c|p{3.3em}|p{3em}|c|c|c|c|c|c|}
\toprule
\multicolumn{9}{|c|}{\textbf{SPA (P-Values) : Lasso Static Hedge vs Dynamic Hedge (Covid Market conditions)}} \\
\midrule
\multicolumn{9}{|c|}{\textbf{Benchmark: Constant Volatility Linear Static Hedge Model}} \\
\midrule
\multicolumn{1}{|c|}{\multirow{3}[4]{*}{\textbf{Index}}} & {\multirow{3}[4]{*}{\parbox{1cm}{\textbf{Option \\ Type }}}} & \multirow{3}[4]{*}{\parbox{1cm}{\textbf{Money \\ ness}}} & \multicolumn{3}{c|}{\textbf{Absolute Error Loss Function}} & \multicolumn{3}{c|}{\textbf{Squared Error Loss Function}} \\
\cmidrule{4-9} & & & \textbf{SPA } & \textbf{SPA } & \textbf{SPA } & \textbf{SPA } & \textbf{SPA } & \textbf{SPA } \\
\cmidrule{4-9} & & & \textbf{Lower} & \textbf{Consistent} & \textbf{ Upper} & \textbf{Lower} & \textbf{Consistent} & \textbf{Upper} \\
\midrule
\multicolumn{1}{|c|}{\multirow{6}[12]{*}{NIFTY}} & \multicolumn{1}{c|}{\multirow{3}[6]{*}{Call}} & ATM & 0.51 & 0.51 & 1.00 & 0.51 & 0.51 & 1.00 \\
\cmidrule{3-9} & & ITM & 0.52 & 0.52 & 1.00 & 0.52 & 0.52 & 1.00 \\
\cmidrule{3-9} & & OTM & 0.52 & 0.52 & 0.99 & 0.50 & 0.85 & 0.85 \\
\cmidrule{2-9} & \multicolumn{1}{c|}{\multirow{3}[6]{*}{Put}} & ATM & 0.52 & 0.52 & 1.00 & 0.52 & 0.52 & 0.98 \\
\cmidrule{3-9} & & ITM & 0.50 & 0.50 & 1.00 & 0.50 & 0.50 & 1.00 \\
\cmidrule{3-9} & & OTM & 0.56 & 0.56 & 0.99 & 0.54 & 0.54 & 0.96 \\
\midrule
\multicolumn{1}{|c|}{\multirow{6}[12]{*}{BANKNIFTY}} & \multicolumn{1}{c|}{\multirow{3}[6]{*}{Call}} & ATM & 0.55 & 0.55 & 1.00 & 0.54 & 0.54 & 0.97 \\
\cmidrule{3-9} & & ITM & 0.52 & 0.52 & 1.00 & 0.54 & 0.54 & 0.98 \\
\cmidrule{3-9} & & OTM & 0.51 & 0.51 & 1.00 & 0.54 & 0.54 & 0.96 \\
\cmidrule{2-9} & \multicolumn{1}{c|}{\multirow{3}[6]{*}{Put}} & ATM & 0.52 & 0.52 & 0.98 & 0.48 & 0.55 & 0.55 \\
\cmidrule{3-9} & & ITM & 0.51 & 0.90 & 0.90 & 0.35 & 0.35 & 0.35 \\
\cmidrule{3-9} & & OTM & 0.52 & 0.52 & 1.00 & 0.56 & 0.56 & 0.96 \\
\bottomrule
\end{tabular}%
}
\caption{SPA Comparison (P-Values): Lasso Static Hedge vs Dynamic Hedge (Covid Market Conditions)}
\label{tab:stat_dyn_winner_covid}%
\end{table}%
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=\textwidth, height=3in]{P2_NIFTY_Linear_ATM_CE_dynamic_regression_fit.png}
\caption{NIFTY ATM CALL Option: Constant Volatility Linear Static Hedging vs Dynamic Hedging PnL Regression Fits} \label{P2_NIFTY_Linear_ATM_CE_dynamic_regression_fit.png}
\end{center}
\end{figure}
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=\textwidth, height=3in]{P2_NIFTY_Linear_ATM_PE_dynamic_regression_fit.png}
\caption{NIFTY ATM PUT Option: Constant Volatility Linear Static Hedging vs Dynamic Hedging PnL Regression Fits} \label{P2_NIFTY_Linear_ATM_PE_dynamic_regression_fit.png}
\end{center}
\end{figure}
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=5.5in, height=3in]{P2_BANKNIFTY_Linear_ATM_CE_dynamic_regression_fit.png}
\caption{BANKNIFTY ATM CALL Option: Constant Volatility Linear Static Hedging vs Dynamic Hedging PnL Regression Fits} \label{P2_BANKNIFTY_Linear_ATM_CE_dynamic_regression_fit.png}
\end{center}
\end{figure}
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=\textwidth, height=3in]{P2_BANKNIFTY_Linear_ATM_PE_dynamic_regression_fit.png}
\caption{BANKNIFTY ATM PUT Option: Constant Volatility Linear Static Hedging vs Dynamic Hedging PnL Regression Fits} \label{P2_BANKNIFTY_Linear_ATM_PE_dynamic_regression_fit.png}
\end{center}
\end{figure}
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=\textwidth, height=5in]{P2_NIFTY_Linear_ATM_CE_static_vs_dynamic_pnL_error.png}
\caption{NIFTY ATM CALL Option: Static Hedging vs Dynamic Hedging PnL Errors} \label{P2_NIFTY_Linear_ATM_CE_static_vs_dynamic_pnL_error.png}
\end{center}
\end{figure}
\subsubsection{Benchmark against Carr-Wu Static hedge}
\label{Benchmark against Carr-Wu Static hedge}
We construct the Carr-Wu static hedge portfolio (with 10 constituent shorter-term call options) for the target longer-term call options by generating the portfolio weights and the theoretical strikes of shorter-term options by using the Gaussian-Hermite quadrature as proposed in \citet{carr2014static}. Once the theoretical strikes of the options are obtained, we match them to the closest available strikes of liquid options in the market and use the corresponding portfolio weights. \\
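The quadrature step can be sketched as follows. For brevity this assumes zero rates and dividends, so the Black-Scholes mapping from the 10 Gauss-Hermite nodes $x_j$ to strikes is $\mathcal{K}_j = K\exp(\sigma\sqrt{2\tau}\,x_j - \sigma^2\tau/2)$ with portfolio weights $w_j/\sqrt{\pi}$; this mapping is an illustrative assumption, while the snapping of theoretical strikes to the nearest liquid strikes follows the procedure in the text:

```python
import numpy as np

def carr_wu_portfolio(K, T, u, sigma, available_strikes, n_points=10):
    """Gauss-Hermite strikes and weights for statically hedging a T-maturity
    call with calls of maturity u < T (zero rates and dividends assumed)."""
    tau = T - u
    x, w = np.polynomial.hermite.hermgauss(n_points)
    # lognormal mapping from quadrature nodes to theoretical strikes
    strikes = K * np.exp(sigma * np.sqrt(2.0 * tau) * x - 0.5 * sigma ** 2 * tau)
    weights = w / np.sqrt(np.pi)                  # portfolio weights, sum to 1
    # snap each theoretical strike to the closest liquid strike in the market
    grid = np.asarray(list(available_strikes), dtype=float)
    snapped = grid[np.abs(grid[:, None] - strikes[None, :]).argmin(axis=0)]
    return snapped, weights
```

Because the hedge quality hinges on how closely the liquid grid covers the theoretical strikes, the snapping step is where the BANKNIFTY performance gap discussed below originates.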
For the SPA comparison, we first select the Linear Carr-Wu static hedge model (where the volatility required to generate the strikes from the quadrature points is obtained by linearly interpolating the calibrated volatility surface) as the representative among the Carr-Wu static hedge alternatives, since no single model was universally superior and linear interpolation is the simplest of the methods. We then perform the SPA test among the Lasso constant volatility static hedge, the dynamic hedge, and the Carr-Wu static hedge (all models using the linear interpolation technique on the calibrated implied volatility surface) and observe that the Lasso static hedge is universally superior. The P-values of the SPA test with the Lasso static hedge as the benchmark are shown in Table \ref{tab:stat_dyn_carr_winner}. We also present the comparison of the Lasso and Carr-Wu static hedge PnL regression fits against the realized PnL for NIFTY and BANKNIFTY ATM Call options in Figures \ref{P2_NIFTY_Linear_ATM_CE_static_regression_fit_carrWu.png} and \ref{P2_BANKNIFTY_Linear_ATM_CE_static_regression_fit_carrWu.png}. We observed that the Lasso static hedge performs better than the Carr-Wu static hedge in all cases. Further, the figures show that the Carr-Wu static hedge PnL fits the realized PnL very well on NIFTY, but its performance on BANKNIFTY was not satisfactory. The main reason for this difference is that the performance of the Carr-Wu static hedge depends on the availability of liquid option strikes close to the theoretical strikes, which is not always guaranteed.
\begin{table}[!htb]
\centering
\resizebox{0.85\textwidth}{!}{%
\begin{tabular}{|c|p{3em}|c|c|c|c|c|c|}
\toprule
\multicolumn{8}{|c|}{\textbf{SPA (P-Values) : Lasso Static Hedge , Dynamic Hedge, Carr Static Hedge (Call Options)}} \\
\midrule
\multicolumn{8}{|c|}{\textbf{Benchmark: Constant Volatility Linear Static Hedge Model}} \\
\midrule
\multicolumn{1}{|c|}{\multirow{3}[4]{*}{\textbf{Index}}} & \multirow{3}[4]{*}{\parbox{1cm}{\textbf{Money \\ ness}}} & \multicolumn{3}{c|}{\textbf{Absolute Error Loss Function}} & \multicolumn{3}{c|}{\textbf{Squared Error Loss Function}} \\
\cmidrule{3-8} & & \textbf{SPA } & \textbf{SPA } & \textbf{SPA } & \textbf{SPA } & \textbf{SPA } & \textbf{SPA } \\
\cmidrule{3-8} & & \textbf{Lower} & \textbf{Consistent} & \textbf{ Upper} & \textbf{Lower} & \textbf{Consistent} & \textbf{Upper} \\
\midrule
\multicolumn{1}{|c|}{\multirow{3}[6]{*}{NIFTY}} & ATM & 0.51 & 0.51 & 1.00 & 0.51 & 0.51 & 1.00 \\
\cmidrule{2-8} & ITM & 0.52 & 0.52 & 1.00 & 0.56 & 0.56 & 1.00 \\
\cmidrule{2-8} & OTM & 0.55 & 0.89 & 0.93 & 0.65 & 0.93 & 0.93 \\
\midrule
\multicolumn{1}{|c|}{\multirow{3}[6]{*}{BANKNIFTY}} & ATM & 0.63 & 0.63 & 1.00 & 0.75 & 0.75 & 1.00 \\
\cmidrule{2-8} & ITM & 0.52 & 0.52 & 1.00 & 0.54 & 0.54 & 1.00 \\
\cmidrule{2-8} & OTM & 0.53 & 0.94 & 0.99 & 0.49 & 0.78 & 0.92 \\
\bottomrule
\end{tabular}%
}
\caption{SPA Comparison (P-Values): Lasso Static Hedge, Dynamic Hedge, Carr Static Hedge (Call Options)}
\label{tab:stat_dyn_carr_winner}%
\end{table}%
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=\textwidth, height=3in]{P2_NIFTY_Linear_ATM_CE_static_regression_fit_carrWu.png}
\caption{NIFTY ATM CALL Option: Lasso vs Carr Wu Static Hedging PnL Regression Fits} \label{P2_NIFTY_Linear_ATM_CE_static_regression_fit_carrWu.png}
\end{center}
\end{figure}
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=\textwidth, height=3in]{P2_BANKNIFTY_Linear_ATM_CE_static_regression_fit_carrWu.png}
\caption{BANKNIFTY ATM CALL Option: Lasso vs Carr Wu Static Hedging PnL Regression Fits} \label{P2_BANKNIFTY_Linear_ATM_CE_static_regression_fit_carrWu.png}
\end{center}
\end{figure}
\clearpage
\section{Analysis on Static Hedge Performance}
We identified a few dates for the NIFTY ATM call target option (based on Figure \ref{P2_NIFTY_Linear_ATM_CE_static_vs_dynamic_pnL_error.png}) and developed two cases: one where dynamic delta hedging has its worst performance, and another where static hedging has its worst performance. In this section, we analyze in detail the PnL attribution and the reasons for these performances. We extended the analysis to ITM and OTM options for the same dates (note that the moneyness of the target option is defined at the beginning of the month, not on the analysis date). Further, we generated the full PnL, delta, gamma, and vega PnL for the target option and the static hedge portfolio across the entire historical period considered and performed a detailed analysis. For all the analyses, we assume a constant risk-free interest rate and zero dividends.
In the first case, we attempted to identify the possible factors that could explain the superior performance of the Constant Volatility Linear Lasso static hedge model over the dynamic hedge. In the PnL error plot (Figure \ref{P2_NIFTY_Linear_ATM_CE_static_vs_dynamic_pnL_error.png}), we observed two large spikes in the dynamic hedge error during the covid stress period, on 7 April 2020 and 4 May 2020, while the constant volatility static hedge error remained low. The realized target option PnL, delta hedge PnL, and static hedge PnL corresponding to these dates are available in Table \ref{tab:realstatdynPnL}. There was an $8.76\%$ rise and a $5.74\%$ fall in the NIFTY index on 7 April and 4 May, respectively.
Firstly, we tried to explain the PnL of the target option, the static hedge portfolio, and the dynamic hedge portfolio due to changes in the index levels, holding other risk factors such as volatility constant. On both dates, the delta PnL ($Option \ Delta \ * \ Change \ in \ index \ levels$) alone could not explain the change in target option value due to index changes; however, together with the gamma PnL ($0.5 \ * \ Option \ Gamma \ * \ Square\ of \ Change \ in \ index \ levels$), most of the change is explained. The gamma PnL as a percentage of the marginal PnL due to the index change alone was significant, around 26\% and 44\%, for the NIFTY ATM target option. Interestingly, the target option delta PnL and gamma PnL were close to the delta and gamma PnL of the static hedge portfolio in most cases, indicating that the static hedge portfolio has the potential to capture the gamma risk. In the case of the dynamic hedge, only the delta risk is captured, and the failure to capture the gamma risk is one of the key reasons for the poor performance of the dynamic hedge on the selected dates. The marginal PnL due to the index change and the PnL contributions of the delta PnL and gamma PnL are available in Table \ref{tab:deltagamma} for the analysis dates.
Secondly, we tried to explain the remaining portion of the PnL by the other risk factors: implied volatility (again not captured by a dynamic hedge) and the passage of time (theta risk). We could explain most of the marginal PnL due to the change in implied volatility by the Vega PnL ($Option \ Vega \ * \ Change \ in$ $Implied$ $volatility$) and Volga PnL ($0.5 \ * \ Option \ Volga \ * \ Square\ of \ Change \ in$ $Implied$ $Volatility$), and most of the marginal PnL due to the change in time by the Theta PnL ($Option \ Theta \ * \ Passage \ of$ $time$), for the target option and the static hedge portfolio individually. If we accommodate the change in implied volatility and the theta factor, the remaining portion of the PnL (apart from the delta and gamma exposure) is relatively close between the static hedge portfolio and the target option, which brings the full static hedge PnL closer to the target option PnL than that of the dynamic hedge. However, the Vega PnL and Theta PnL only partially explain the PnL match between the static hedge and the target option, and there are cases where the Vega PnL is significantly higher for the static hedge than for the target option even though the overall PnL is close. This could be due to significant higher-order risks and interaction terms between the risk factors driving the static hedge PnL. Further, the option value as a function of volatility is relatively simple for the target option, whereas for a static hedge portfolio with a large number of constituent options it becomes a high-dimensional surface. The PnL due to the passage of time is relatively close between the target option and the static hedge portfolio (except in some cases, such as NIFTY ITM on 4 May), and is also close to the Theta PnL. For a dynamic hedge, the marginal PnL due to the passage of time (contributed by the deposit account) is very low, as expected, and therefore does not accommodate the theta risk of the target option.
The marginal PnLs due to the volatility change (along with the Vega PnL and Volga PnL) are available in Table \ref{tab:addlabel}, and the marginal PnL due to theta risk is available in Table \ref{PnL Analysis - Change in Time}.
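The Taylor-expansion attribution terms used above can be sketched with standard Black-Scholes greeks. This is an illustrative, self-contained computation (zero dividends; the function names are hypothetical), whose delta-plus-gamma term reproduces most of the marginal PnL due to an index move, as observed in Table \ref{tab:deltagamma}:

```python
import math

def _phi(x):   # standard normal pdf
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def _N(x):     # standard normal cdf
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call_price(S, K, T, r, sigma):
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * _N(d1) - K * math.exp(-r * T) * _N(d2)

def pnl_attribution(S, K, T, r, sigma, dS, dsigma, dt):
    """Second-order Taylor PnL terms as defined in the text:
    delta, gamma, vega, volga and theta contributions for a call."""
    sqT = math.sqrt(T)
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqT)
    d2 = d1 - sigma * sqT
    delta = _N(d1)
    gamma = _phi(d1) / (S * sigma * sqT)
    vega = S * _phi(d1) * sqT
    volga = vega * d1 * d2 / sigma            # d(vega)/d(sigma)
    theta = (-S * _phi(d1) * sigma / (2.0 * sqT)
             - r * K * math.exp(-r * T) * _N(d2))
    return {
        "delta": delta * dS,
        "gamma": 0.5 * gamma * dS ** 2,
        "vega": vega * dsigma,
        "volga": 0.5 * volga * dsigma ** 2,
        "theta": theta * dt,
    }
```

Applying the same attribution to every constituent of the static hedge portfolio and summing gives the portfolio-level delta, gamma, vega, volga, and theta PnLs compared against the target option in the tables.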
In the second case, we considered the worst-performing date of the static hedge (30 March 2020) for the NIFTY ATM call option (the worst-case static hedge error is not as large as the worst-case dynamic hedge error). On this date, for the target option, the delta PnL was close to the full PnL, whereas the vega PnL and gamma PnL were low and together nearly canceled the theta PnL. For the static hedge, the delta PnL and gamma PnL were closer to those of the target option, but the lower vega PnL and the larger (negative) theta PnL caused the difference in the full PnL.
In Figures \ref{ATM_NIFTY_Analysis}, \ref{ITM_NIFTY_Analysis}, and \ref{OTM_NIFTY_Analysis}, we present the observations across the entire historical period for ATM, ITM, and OTM NIFTY call options, respectively. In each figure, subplot (a) shows the PnL error for the dynamic hedge and the linear constant volatility static hedge model (the PnL error is calculated as the difference between the realized PnL and the hedge PnL). Subplot (b) shows the marginal PnL absolute error for the delta PnL and the delta-gamma PnL ($delta \ PnL \ + \ gamma \ PnL$) of the target option; the marginal PnL absolute error is the absolute error between the marginal PnL (PnL due only to the index change, keeping other risk factors constant) and the delta or delta-gamma PnL. Subplot (c) shows the regression fit of the delta-gamma PnL of the target option against the delta-gamma PnL of the static hedge portfolio. Subplot (d) shows the regression fit of the target option vega PnL against the static hedge portfolio vega PnL.
From Figures \ref{ATM_NIFTY_Analysis}, \ref{ITM_NIFTY_Analysis}, and \ref{OTM_NIFTY_Analysis}, we observe that for ATM, ITM, and OTM options, the delta hedge error spikes in subplot (a) correspond to the marginal delta PnL absolute error spikes in subplot (b). This clearly shows that the failure to capture gamma risk is the key reason for the poor performance of the delta hedge. Further, in subplot (c), we observe that the delta-gamma PnL of the target option and of the static hedge portfolio closely align, which explains the superior performance of the static hedge. However, when comparing the vega PnL of the target option and the static hedge portfolio (subplot (d)), we observe that a shorter-term static hedge portfolio can only partially explain the vega PnL of the target option. Intuitively, the vega of the target option corresponds to a longer maturity, whereas the vega of the shorter-term portfolio corresponds to a short horizon, and shorter-term volatility can be more dynamic. Further, the static hedge PnL due to changes in implied volatility might require higher-order terms and interaction terms with theta risk, especially since the portfolio value is a high-dimensional surface in the constituent options. A detailed study of this is left for future work.
\begin{figure}[!htb]%
\centering
\subfloat[]{\includegraphics[width=0.5\textwidth, height=2in]{P2_analysis_section_NIFTY_Linear_ATM_CE_static_vs_dynamic_pnL_error.png}\label{P2_analysis_section_NIFTY_Linear_ATM_CE_static_vs_dynamic_pnL_error.png}}%
\subfloat[]{\includegraphics[width=0.5\textwidth, height=2in]{P5_analysis_section_NIFTY_ATM_CE_delta_vs_deltagamma.png}\label{P5_analysis_section_NIFTY_ATM_CE_delta_vs_deltagamma.png}}\\
\subfloat[]{\includegraphics[width=0.5\textwidth, height=2in]{P5_analysis_section_NIFTY_ATM_CE_deltagamma_regression.png}\label{P5_analysis_section_NIFTY_ATM_CE_deltagamma_regression.png}}%
\subfloat[]{\includegraphics[width=0.5\textwidth, height=2in]{P5_analysis_section_NIFTY_ATM_CE_vega_regression.png}\label{P5_analysis_section_NIFTY_ATM_CE_vega_regression.png}}%
\caption{PnL Attribution Analysis - NIFTY ATM CALL options}%
\label{ATM_NIFTY_Analysis}%
\end{figure}
\begin{figure}[!htb]%
\centering
\subfloat[]{\includegraphics[width=0.5\textwidth, height=2in]{P2_analysis_section_NIFTY_Linear_ITM_CE_static_vs_dynamic_pnL_error.png}\label{P2_analysis_section_NIFTY_Linear_ITM_CE_static_vs_dynamic_pnL_error.png}}%
\subfloat[]{\includegraphics[width=0.5\textwidth, height=2in]{P5_analysis_section_NIFTY_ITM_CE_delta_vs_deltagamma.png}\label{P5_analysis_section_NIFTY_ITM_CE_delta_vs_deltagamma.png}}\\
\subfloat[]{\includegraphics[width=0.5\textwidth, height=2in]{P5_analysis_section_NIFTY_ITM_CE_deltagamma_regression.png}\label{P5_analysis_section_NIFTY_ITM_CE_deltagamma_regression.png}}%
\subfloat[]{\includegraphics[width=0.5\textwidth, height=2in]{P5_analysis_section_NIFTY_ITM_CE_vega_regression.png}\label{P5_analysis_section_NIFTY_ITM_CE_vega_regression.png}}%
\caption{PnL Attribution Analysis - NIFTY ITM CALL options}%
\label{ITM_NIFTY_Analysis}%
\end{figure}
\begin{figure}[!htb]%
\centering
\subfloat[]{\includegraphics[width=0.5\textwidth, height=2in]{P2_analysis_section_NIFTY_Linear_OTM_CE_static_vs_dynamic_pnL_error.png}\label{P2_analysis_section_NIFTY_Linear_OTM_CE_static_vs_dynamic_pnL_error.png}}%
\subfloat[]{\includegraphics[width=0.5\textwidth, height=2in]{P5_analysis_section_NIFTY_OTM_CE_delta_vs_deltagamma.png}\label{P5_analysis_section_NIFTY_OTM_CE_delta_vs_deltagamma.png}}\\
\subfloat[]{\includegraphics[width=0.5\textwidth, height=2in]{P5_analysis_section_NIFTY_OTM_CE_deltagamma_regression.png}\label{P5_analysis_section_NIFTY_OTM_CE_deltagamma_regression.png}}%
\subfloat[]{\includegraphics[width=0.5\textwidth, height=2in]{P5_analysis_section_NIFTY_OTM_CE_vega_regression.png}\label{P5_analysis_section_NIFTY_OTM_CE_vega_regression.png}}%
\caption{PnL Attribution Analysis - NIFTY OTM CALL options}%
\label{OTM_NIFTY_Analysis}%
\end{figure}
\begin{table}[!htb]
\centering
\resizebox{0.85\textwidth}{!}{%
\begin{tabular}{|c|p{3em}|c|c|c|c|c|}
\toprule
\multicolumn{1}{|p{5.135em}|}{\textbf{Date}} & \parbox{3em}{\textbf{Money \\ ness}} & \multicolumn{1}{p{5.455em}|}{\textbf{Target Option PnL}} & \multicolumn{1}{p{5.91em}|}{\textbf{Static Hedge PnL}} & \multicolumn{1}{p{6.775em}|}{\textbf{Delta Hedge PnL}} & \multicolumn{1}{p{7.275em}|}{\textbf{Static Hedge Error}} & \multicolumn{1}{p{6.865em}|}{\textbf{Delta Hedge Error}} \\
\midrule
\multirow{3}[6]{*}{\textbf{7 April 2020}} & ATM & 360.00 & 363.55 & 246.15 & -3.55 & 113.85 \\
\cmidrule{2-7} & ITM & 519.40 & 513.80 & 398.71 & 5.60 & 120.69 \\
\cmidrule{2-7} & OTM & 106.15 & 144.57 & 71.83 & -38.42 & 34.32 \\
\midrule
\multirow{3}[6]{*}{\textbf{4 May 2020}} & ATM & -163.85 & -158.96 & -290.39 & -4.89 & 126.54 \\
\cmidrule{2-7} & ITM & -362.70 & -362.15 & -516.73 & -0.55 & 154.03 \\
\cmidrule{2-7} & OTM & -7.60 & -12.64 & -34.40 & 5.04 & 26.80 \\
\midrule
\multirow{3}[6]{*}{\textbf{30 March 2020}} & ATM & -200.00 & -231.44 & -214.33 & 31.44 & 14.33 \\
\cmidrule{2-7} & ITM & -277.85 & -267.21 & -262.55 & -10.64 & -15.30 \\
\cmidrule{2-7} & OTM & -79.60 & -145.00 & -123.68 & 65.40 & 44.08 \\
\bottomrule
\end{tabular}%
}
\caption{Realised PnL against Static and Dynamic Hedge PnL}
\label{tab:realstatdynPnL}
\end{table}
\begin{table}[!htb]
\centering
\resizebox{0.85\textwidth}{!}{%
\begin{tabular}{|c|p{3em}|c|c|c|c|c|c|}
\toprule
\multicolumn{1}{|c|}{\multirow{2}[4]{*}{\textbf{Date}}} & \parbox{3em}{\textbf{Money \\ ness}} & \multicolumn{3}{p{12.545em}|}{\textbf{Target Option PnL}} & \multicolumn{3}{p{12.455em}|}{\textbf{Static Hedge PnL}} \\
\cmidrule{3-8} & \multicolumn{1}{c|}{} & \multicolumn{1}{p{5.32em}|}{\textbf{Marginal PnL due to only index change}} & \multicolumn{1}{p{3.68em}|}{\textbf{Delta PnL}} & \multicolumn{1}{p{3.545em}|}{\textbf{Gamma PnL}} & \multicolumn{1}{p{5.18em}|}{\textbf{Marginal PnL due to only index change}} & \multicolumn{1}{p{3.365em}|}{\textbf{Delta PnL}} & \multicolumn{1}{p{3.91em}|}{\textbf{Gamma PnL}} \\
\midrule
\multirow{3}[6]{*}{\textbf{7 April 2020}} & ATM & 337.39 & 247.28 & 88.25 & 337.47 & 240.12 & 93.66 \\
\cmidrule{2-8} & ITM & 476.74 & 400.49 & 83.11 & 466.84 & 373.05 & 99.27 \\
\cmidrule{2-8} & OTM & 131.36 & 72.17 & 46.36 & 144.33 & 79.01 & 51.18 \\
\midrule
\multirow{3}[6]{*}{\textbf{4 May 2020}} & ATM & -201.32 & -288.31 & 89.42 & -168.81 & -272.10 & 123.81 \\
\cmidrule{2-8} & ITM & -461.48 & -513.22 & 37.00 & -376.93 & -477.19 & 114.04 \\
\cmidrule{2-8} & OTM & -15.25 & -34.15 & 29.01 & -14.30 & -35.07 & 32.83 \\
\midrule
\multirow{3}[6]{*}{\textbf{30 March 2020}} & ATM & -195.30 & -212.52 & 16.84 & -181.98 & -200.48 & 18.64 \\
\cmidrule{2-8} & ITM & -246.48 & -260.43 & 13.35 & -229.25 & -245.97 & 16.57 \\
\cmidrule{2-8} & OTM & -105.00 & -122.57 & 18.14 & -103.06 & -120.21 & 18.01 \\
\bottomrule
\end{tabular}%
}
\caption{PnL Analysis - Change in Index Levels}
\label{tab:deltagamma}
\end{table}
\begin{table}[!htb]
\centering
\resizebox{0.85\textwidth}{!}{%
\begin{tabular}{|c|p{3em}|c|c|c|c|c|c|}
\toprule
\multicolumn{1}{|c|}{\multirow{3}[6]{*}{\textbf{Date}}} & \parbox{3em}{\textbf{Money \\ ness}} & \multicolumn{6}{p{30.13em}|}{\textbf{Marginal PnL due to change in Implied Volatility}} \\
\cmidrule{3-8} & \multicolumn{1}{c|}{} & \multicolumn{3}{p{14.68em}|}{\textbf{Target Option}} & \multicolumn{3}{p{15.45em}|}{\textbf{Static Hedge Portfolio}} \\
\cmidrule{3-8} & \multicolumn{1}{c|}{} & \multicolumn{1}{p{5.455em}|}{\textbf{Marginal PnL}} & \multicolumn{1}{p{4.68em}|}{\textbf{Vega PnL}} & \multicolumn{1}{p{4.545em}|}{\textbf{Volga PnL}} & \multicolumn{1}{p{5.635em}|}{\textbf{Marginal PnL}} & \multicolumn{1}{p{5.135em}|}{\textbf{Vega PnL}} & \multicolumn{1}{p{4.68em}|}{\textbf{Volga PnL}} \\
\midrule
\multirow{3}[6]{*}{\textbf{7 April 2020}} & ATM & 56.74 & 56.03 & 1.62 & 153.16 & 135.74 & 26.95 \\
\cmidrule{2-8} & ITM & 95.89 & 95.88 & 0.06 & 200.13 & 174.44 & 37.56 \\
\cmidrule{2-8} & OTM & 1.49 & 1.48 & 0.02 & 64.13 & 55.66 & 12.42 \\
\midrule
\multirow{3}[6]{*}{\textbf{4 May 2020}} & ATM & 80.00 & 80.02 & -0.01 & 93.95 & 76.21 & 27.02 \\
\cmidrule{2-8} & ITM & 84.84 & 63.54 & 27.16 & 60.96 & 51.17 & 15.78 \\
\cmidrule{2-8} & OTM & 44.82 & 31.00 & 15.28 & 40.49 & 25.43 & 20.08 \\
\midrule
\multirow{3}[6]{*}{\textbf{30 March 2020}} & ATM & 27.47 & 27.47 & 0.00 & 22.54 & 14.97 & 6.58 \\
\cmidrule{2-8} & ITM & 1.58 & 1.58 & 0.00 & 16.59 & 11.46 & 4.53 \\
\cmidrule{2-8} & OTM & 57.85 & 57.02 & 0.91 & 35.20 & 21.74 & 11.34 \\
\bottomrule
\end{tabular}%
}
\caption{PnL Analysis - Change in Implied Volatility}
\label{tab:addlabel}%
\end{table}%
\begin{table}[!htb]
\centering
\resizebox{0.85\textwidth}{!}{%
\begin{tabular}{|c|p{3em}|c|c|c|c|c|}
\toprule
\multicolumn{1}{|c|}{\multirow{3}[6]{*}{\textbf{Date}}} & \parbox{3em}{\textbf{Money \\ ness}} & \multicolumn{5}{p{25.41em}|}{\textbf{Marginal PnL due to change in time (Time to Maturity)}} \\
\cmidrule{3-7} & \multicolumn{1}{c|}{} & \multicolumn{2}{p{10.135em}|}{\textbf{Target Option}} & \multicolumn{2}{p{10em}|}{\textbf{Static Hedge Portfolio}} & \multicolumn{1}{p{5.275em}|}{\textbf{Dynamic Hedge}} \\
\cmidrule{3-7} & \multicolumn{1}{c|}{} & \multicolumn{1}{p{5.09em}|}{\textbf{Marginal PnL}} & \multicolumn{1}{p{5.045em}|}{\textbf{Theta PnL}} & \multicolumn{1}{p{5.275em}|}{\textbf{Marginal PnL}} & \multicolumn{1}{p{4.725em}|}{\textbf{Theta PnL}} & \multicolumn{1}{l|}{\textbf{Marginal PnL}} \\
\midrule
\multirow{3}[6]{*}{\textbf{7 April 2020}} & ATM & -30.83 & -29.97 & -24.46 & -21.17 & -1.14 \\
\cmidrule{2-7} & ITM & -37.82 & -36.42 & -22.53 & -15.43 & -1.77 \\
\cmidrule{2-7} & OTM & -12.55 & -13.03 & -13.34 & -13.62 & -0.34 \\
\midrule
\multirow{3}[6]{*}{\textbf{4 May 2020}} & ATM & -23.24 & -22.46 & -14.05 & -13.11 & -2.08 \\
\cmidrule{2-7} & ITM & -12.05 & -12.21 & 1.56 & 7.81 & -3.51 \\
\cmidrule{2-7} & OTM & -5.51 & -5.90 & -5.45 & -6.23 & -0.25 \\
\midrule
\multirow{3}[6]{*}{\textbf{30 March 2020}} & ATM & -31.33 & -30.64 & -55.88 & -49.94 & -1.82 \\
\cmidrule{2-7} & ITM & -32.03 & -31.41 & -44.78 & -36.83 & -2.11 \\
\cmidrule{2-7} & OTM & -23.71 & -23.37 & -53.94 & -50.33 & -1.11 \\
\bottomrule
\end{tabular}%
}
\caption{PnL Analysis - Change in Time}
\label{PnL Analysis - Change in Time}%
\end{table}%
\clearpage
\section{Conclusions}
Based on the empirical analysis and SPA tests, we conclude that Constant Volatility Linear Lasso regression-based static hedging tends to perform better than dynamic hedging across all trade characteristics and market scenarios considered. We also observe that the superiority of the static hedging model over the dynamic hedging model is relatively more evident during highly volatile market conditions (leading to high vega exposure) or when there are jumps in the underlying (leading to high gamma exposure), as in the covid stress period. This is mainly because a static hedge has the potential to capture delta risk, gamma risk, and partially theta and vega risk, whereas the dynamic hedge captures only delta risk. Further, Lasso regression-based static hedging has the clear benefits of better model interpretability, a smaller hedge portfolio (insignificant coefficients are pushed to zero), and efficient runtime. The average runtime on an Intel Xeon Gold 6130 processor is $14.2$ seconds for constructing the static hedge portfolio for the entire historical period of one year.
Overall, we observe a better performance of the Lasso-based static hedge over the dynamic hedge across different trade characteristics and market scenarios with additional useful characteristics like easy implementation, automation, and efficient runtime for setting up the hedge.
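The Lasso selection step that yields the sparse hedge portfolio can be illustrated with a plain coordinate-descent solver. This is a self-contained sketch, not the exact estimator used in the paper (which may involve an intercept, standardization, and a cross-validated penalty); insignificant hedge weights are driven exactly to zero by the soft-thresholding update:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=500):
    """Coordinate-descent Lasso: argmin_w 0.5*||y - Xw||^2 + lam*||w||_1.
    Columns of X play the role of candidate hedge instruments; zeroed
    coefficients drop the corresponding option from the hedge portfolio."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]        # partial residual w.r.t. j
            rho = X[:, j] @ r
            # soft-thresholding update for coordinate j
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w
```

The larger the penalty, the smaller the resulting hedge portfolio, trading off fit against the number of constituent options to hold.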
\bibliographystyle{unsrtnat}
Two parallel plates in a random acoustic field will experience a force analogous to the quantum Casimir force [1]. Owing to the finite bandwidth of the noise, this force can be either attractive or repulsive depending on the plate separation. Larraza et al. [2] proposed and experimentally demonstrated this acoustic Casimir effect, while also providing the theory for perfectly reflecting plates.
If the plates are not perfectly reflective, the wavevector component perpendicular to the plates is no longer restricted to integer multiples of $\pi/L$, and the total density of modes must be calculated. Esquivel-Sirvent et al. [3] derived an expression for the resulting pressure using a Green’s function approach. In this work, we apply their expression to uncover a subtle behavior of the acoustic Casimir force at small plate separations, which does not occur for perfect reflectors.
When perfectly reflecting plates are separated by less than the smallest half-wavelength contained in the background noise, the force is attractive and independent of the separation (no modes exist between the plates) [2]. This is no longer the case for imperfect reflectors, and when the noise intensity is constant over the considered bandwidth, there is actually a small value of the plate separation below which the force rapidly tends to repulsion. This can be viewed as a critical phenomenon, and we derive an associated quantity $\tilde{q}_{c} \sim (1-\eta)^{1/2}$ as $\eta\to1$; the quantity $\tilde{q}_{c}$ is a dimensionless critical plate separation or infrared wavenumber cutoff depending on the context, and $\eta$ is the product of the reflectivities of the plates.
We then turn to power-law noise spectra, paying close attention to infinite-bandwidth limits, and finally to narrow unimodal spectra, inspired by the formalism of Lee et al. [4]. The case of imperfect reflectors in a narrowly peaked noise background exhibits singular behavior analogous to that described above.
\section*{Flat spectra and critical phenomena}
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.49\linewidth}
\includegraphics[width=\linewidth]{Fig1a.png}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.465\linewidth}
\includegraphics[width=\linewidth]{Fig1b.png}
\caption{}
\end{subfigure}
\caption{Band-limited acoustic Casimir force between imperfect reflectors. An attractive/repulsive force is given by a positive/negative value respectively. Here $f_{1}=4.8\text{ kHz}$, $f_{2}=16\text{ kHz}$, $I_{\omega}=2.84\cdot10^{-4}\text{ J}/\text{m}^{2}$, $c=343\text{ m/s}$ and the area of each plate is $1.77\cdot10^{-2}\text{ m}^{2}$.}
\label{fig:coffee1}
\end{figure}
Consider two parallel rigid plates with reflectivities $r_{A}, r_{B}$ separated by a distance $L$, and define $\eta \equiv r_{A}r_{B}$. If the plates are immersed in an isotropic noise field of constant spectral intensity $I_{\omega}$ within the frequency range $[f_{1},f_{2}]$, the resulting net pressure is $\text{Re}[P]$, where [3]
\begin{equation}
P(\eta,L,q_{1},q_{2})=\frac{I_{\omega}}{2\pi}\iiint\limits_{q_{1}<|\textbf{k}|<q_{2}}dk_{x}dk_{y}dk_{z}\frac{k_{z}^{2}}{|\textbf{k}|^{4}}\frac{1}{\eta^{-1}e^{-2ik_{z}L}-1}
\end{equation}
and $q_{1,2}=2\pi f_{1,2}/c$. If $\eta$ is constant over the considered bandwidth, this integral can be expressed in terms of polylogarithms [5]: for $|\eta|\leq1$,
\begin{align}
P &= I_{\omega}\int_{0}^{\pi}d\phi\text{ }\sin{\phi}\int_{q_{1}}^{q_{2}}dq\frac{\cos^{2}\phi}{\eta^{-1}e^{-2iLq\cos{\phi}}-1} \nonumber \\
&=I_{\omega}\int_{-1}^{1}du\int_{q_{1}}^{q_{2}}dq\frac{u^{2}}{\eta^{-1}e^{-2iuLq}-1} \nonumber \\
&=I_{\omega}\int_{-1}^{1}du\text{ }\frac{iu}{2L}\left[\text{ln}(1-\eta e^{2iuLq_{2}})-\text{ln}(1-\eta e^{2iuLq_{1}})\right] \nonumber \\
&=\frac{I_{\omega}}{L}\left[F(Lq_{1})-F(Lq_{2})\right]
\end{align}
where
\begin{equation}
F(\tilde{q}) \equiv F(\tilde{q},\eta) = \frac{\text{Li}_{2}(\eta e^{-2i\tilde{q}})}{4\tilde{q}}+\frac{\text{Li}_{2}(\eta e^{2i\tilde{q}})}{4\tilde{q}}+\frac{\text{Li}_{3}(\eta e^{-2i\tilde{q}})}{8i\tilde{q}^{2}}-\frac{\text{Li}_{3}(\eta e^{2i\tilde{q}})}{8i\tilde{q}^{2}},
\end{equation}
and
\begin{equation*}
\text{Li}_{m}(z)\equiv\sum_{n=1}^{\infty}\frac{z^{n}}{n^{m}}.
\end{equation*}
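As a quick numerical sanity check of Eqs. (2)-(3) (illustrative only; the parameter values below are ours and not tied to Fig. 1), the polylogarithms can be summed from their defining series, which converges for $|\eta|<1$, and the result compared against brute-force quadrature of the double integral in Eq. (2):

```python
import cmath

def Li(m, z, terms=2500):
    # Polylogarithm by its defining series; adequate for |z| < 1.
    return sum(z**n / n**m for n in range(1, terms + 1))

def F(qt, eta):
    # F(q~, eta) of Eq. (3).
    a = eta * cmath.exp(-2j * qt)
    b = eta * cmath.exp(2j * qt)
    return (Li(2, a) / (4 * qt) + Li(2, b) / (4 * qt)
            + Li(3, a) / (8j * qt**2) - Li(3, b) / (8j * qt**2))

def pressure(eta, L, q1, q2, I_w=1.0):
    # Eq. (2): P = (I_w / L) [F(L q1) - F(L q2)]; the physical pressure is Re[P].
    return (I_w / L) * (F(L * q1, eta) - F(L * q2, eta))

def pressure_quad(eta, L, q1, q2, I_w=1.0, n=400):
    # Midpoint-rule quadrature of the (u, q) double integral in Eq. (2).
    total = 0j
    for i in range(n):
        u = -1 + 2 * (i + 0.5) / n
        for k in range(n):
            q = q1 + (q2 - q1) * (k + 0.5) / n
            total += u**2 / (cmath.exp(-2j * u * L * q) / eta - 1)
    return I_w * total * (2 / n) * ((q2 - q1) / n)
```

For example, with $\eta=0.9$ and the band of Fig. 1 ($q_{1}\approx88\text{ m}^{-1}$, $q_{2}\approx293\text{ m}^{-1}$), the two evaluations agree, and the quadrature at very small $L$ reproduces the finite limit of Eq. (4).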
In Fig. 1, we plot $-P(L)$ with fixed noise parameters for different values of $\eta$. As $\eta\to1$, the result converges to that of Larraza et al. [2]. For sufficiently large plate separation, the main action of reduced reflectivity, besides reducing the magnitude of the force, is the smoothing of the kinks at which $q_{1,2}L/\pi$ is an integer. In this example, the ranges of $L$ in which the force is attractive/repulsive are barely affected by $\eta$ for $L>\pi/q_{2}$. For $L<\pi/q_{2}$, the behavior differs remarkably from the $\eta=1$ case; the force is no longer constant in $L$, and actually becomes repulsive for small enough $L$: we have
\begin{equation*}
\lim_{L\to0}P(1,L,q_{1},q_{2}) = I_{\omega}\cdot\frac{q_{1}-q_{2}}{3},
\end{equation*}
but if $\eta\neq1$,
\begin{equation}
\lim_{L\to0}P(\eta,L,q_{1},q_{2}) = I_{\omega}\cdot\frac{2\eta}{3}\left(\frac{q_{1}-q_{2}}{\eta-1}\right),
\end{equation}
which diverges as $\eta\to1$.
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.49\linewidth}
\includegraphics[width=\linewidth]{Fig2a.png}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.50\linewidth}
\includegraphics[width=\linewidth]{Fig2b.png}
\caption{}
\end{subfigure}
\caption{The singular behavior of $F(\tilde{q},\eta)$ as $\eta$ approaches $1$.}
\label{fig:coffee2}
\end{figure}
This can be resolved in part by investigating the small-$\tilde{q}$ behavior of $F(\tilde{q},\eta)$ as $\eta \to 1$. First note that $F\to0$ as $\tilde{q}\to\infty$. For $\eta = 1$, we see that $F$ tends to $-\pi/4$ as $\tilde{q}\to0$, as expected since in the infinite-bandwidth limit for perfect reflectors [2], $P=-\pi I_{\omega}/4L$. However, if $\eta\neq1$ then the $\tilde{q}\to0$ limit is actually zero. For $\eta$ slightly less than 1, $F$ has a minimum at some small $\tilde{q}_{c}$, with $F(\tilde{q}_{c})$ slightly greater than $-\pi/4$. As $\tilde{q}$ is varied from $\tilde{q}_{c}$ to 0, $F$ rapidly rises to zero (cf. Fig. 2).
We consider $\tilde{q}_{c}(\eta)$ as a nondimensional cutoff, and study its behavior as $\eta\to1$. To this end, we define
\begin{equation*}
\psi(\tau,\tilde{q}) \equiv \left.\frac{\partial F}{\partial\tilde{q}}\right|_{\eta\to 1-\tau} \ \ \ \ \
\psi_{0}(\tilde{q}) \equiv \psi(0,\tilde{q}) \ \ \ \ \ \ \
\psi_{1}(\tilde{q}) \equiv -\left.\frac{\partial\psi}{\partial\tau}\right|_{(0,\tilde{q})}
\end{equation*}
where $\tau\equiv 1-\eta>0$ plays a role analogous to a reduced temperature, and now
\begin{equation*}
\tilde{q}_{c} = \min\{\tilde{q}\text{ }|\text{ }\psi(\tau,\tilde{q})=0\text{ \& }\tilde{q}>0\}.
\end{equation*}
As $\tau\to 0$, we have to lowest order in $\tilde{q}_{c}$,
\begin{equation}
\psi(\tau,\tilde{q}_{c}) = \psi_{0}(\tilde{q_{c}})-\tau\psi_{1}(\tilde{q_{c}}) = 0 \implies \tau = \frac{\psi_{0}(\tilde{q_{c}})}{\psi_{1}(\tilde{q_{c}})} = \frac{2\tilde{q_{c}}^{2}}{3} \Leftrightarrow \boxed{\tilde{q_{c}}(\eta) = \sqrt{\tfrac{3}{2}}(1-\eta)^{1/2}}
\end{equation}
and
\begin{equation}
F\left(\text{ }\tilde{q}_{c}\text{ },\text{ }1-\tfrac{2}{3}\tilde{q}_{c}^{2}\text{ }\right)=\frac{2\tilde{q}_{c}}{3}-\frac{\pi}{4} \Leftrightarrow \boxed{F(\tilde{q}_{c}(\eta),\eta) = \sqrt{\tfrac{2}{3}}(1-\eta)^{1/2}-\frac{\pi}{4}}
\end{equation}
This analysis is confirmed by numerically solving for the first zero of $\psi(\tau,\tilde{q})$ without taking any expansion. We find that over the range $0.9<\eta<1$, the relations
\begin{equation}
\tau = \frac{2\tilde{q}_{c}^{2}}{3}+0.349\tilde{q}_{c}^{3}
\end{equation}
and
\begin{equation}
F\left(\text{ }\tilde{q}_{c}\text{ },\text{ }1-\tfrac{2}{3}\tilde{q}_{c}^{2}-0.349\tilde{q}_{c}^{3}\text{ }\right)=\frac{2\tilde{q}_{c}}{3}+0.0872\tilde{q}_{c}^{2}-\frac{\pi}{4}
\end{equation}
are accurate to within $1\%$ relative error.
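The scaling in Eqs. (5)-(8) can be reproduced numerically. The sketch below (ours, not the solver used for the fits above) computes $\psi=\partial F/\partial\tilde{q}$ in closed form, using $\tfrac{d}{d\tilde{q}}\,\text{Li}_{m}(\eta e^{\pm2i\tilde{q}})=\pm2i\,\text{Li}_{m-1}(\eta e^{\pm2i\tilde{q}})$ applied to Eq. (3), and locates the first positive zero by a scan-and-bisect search:

```python
import cmath, math

def Li(m, z, terms=3000):
    # Polylogarithm by its defining series; adequate for |z| < 1.
    return sum(z**n / n**m for n in range(1, terms + 1))

def psi(qt, eta):
    # psi = dF/dq~ in closed form, from term-by-term differentiation of Eq. (3)
    # via d/dq~ Li_m(eta e^{±2i q~}) = ±2i Li_{m-1}(eta e^{±2i q~}).
    a = eta * cmath.exp(-2j * qt)
    b = eta * cmath.exp(2j * qt)
    val = (1j * (Li(1, b) - Li(1, a)) / (2 * qt)
           - (Li(2, a) + Li(2, b)) / (2 * qt**2)
           - (Li(3, a) - Li(3, b)) / (4j * qt**3))
    return val.real

def qt_critical(eta, lo=1e-3, hi=1.5, steps=60):
    # Smallest positive zero of psi: coarse scan for a sign change, then bisect.
    prev_q, prev_s = lo, psi(lo, eta)
    for i in range(1, steps + 1):
        q = lo + (hi - lo) * i / steps
        s = psi(q, eta)
        if prev_s * s <= 0:
            for _ in range(60):
                mid = 0.5 * (prev_q + q)
                if psi(mid, eta) * prev_s > 0:
                    prev_q = mid
                else:
                    q = mid
            return 0.5 * (prev_q + q)
        prev_q, prev_s = q, s
    raise ValueError("no sign change of psi found in the scan window")
```

For $\eta=0.99$ this recovers $\tilde{q}_{c}\approx0.12$, consistent with the leading-order result of Eq. (5) and, more closely, with the cubic relation of Eq. (7).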
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.49\linewidth}
\includegraphics[width=\linewidth]{Fig3a1.png}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.50\linewidth}
\includegraphics[width=\linewidth]{Fig3b1.png}
\caption{}
\end{subfigure}
\caption{(a) The cutoff, and (b) corresponding difference between $F$ and $-\pi/4$ as $\eta\to1$. The solid lines are given by Eqs. (7) and (8) and are visually indistinguishable from numerical solutions. The dashed lines are given by Eqs. (5) and (6). }
\label{fig:coffee2}
\end{figure}
If we are interested in well-posed large-bandwidth limits, we can impose an infrared cutoff $q_{c} = \tilde{q}_{c}(\eta)/L_{0}$ on the noise field and define $L_{0}$ as the minimum length under consideration; then
\begin{equation*}
\lim_{q_{2}\to\infty}P\left(\eta,L,\tfrac{\tilde{q}_{c}(\eta)}{L_{0}},q_{2}\right) \equiv P^{(q_{c},\infty)}(\eta,L,L_{0}) =\frac{I_{\omega}}{L}F\left(\tilde{q}_{c}(\eta)\tfrac{L}{L_{0}},\text{ }\eta\right), \ \ \ \ \ L\geq L_{0}
\end{equation*}
with $F$ given by Eq. (3). If $\eta$ is near 1, then $P^{(q_{c},\infty)}(\eta,L_{0},L_{0})$ is slightly greater than $-\pi I_{\omega}/4L_{0}$, cf. Fig. 3b. Furthermore, since $\tilde{q}_{c}$ is small, $P^{(q_{c},\infty)} \approx -\pi I_{\omega}/4L$ over a considerable range of $L/L_{0}$, up to the point where $F\left(\tilde{q}_{c}(\eta)\tfrac{L}{L_{0}}\right)$ becomes too small; perhaps around $\tilde{q}_{c}L/L_{0} = \pi/4$, cf. Fig. 2a. At around $\tilde{q}_{c}L/L_{0} = 3\pi/4$, the force becomes repulsive, and slowly oscillates between attractive and repulsive as $L/L_{0}$ increases further.
If the wavenumber band $[q_{1},q_{2}]$ is fixed, we can consider $L_{c}=\tilde{q}_{c}(\eta)/q_{1}$ as a critical plate separation, at which the force is attractive and below which the force rapidly tends to repulsion. In the example of Fig. 1, for $\eta=0.98$, we have $L_{c} \approx 2\text{ mm}$ and a corresponding force of about 25 mg.
\section*{Power-law and narrow unimodal spectra}
Next, we consider energy spectra well described by $G(q) \propto q^{\alpha}$ over a particular wavenumber range. The corresponding Casimir pressure is proportional to $\text{Re}[P_{\alpha}]$, where
\begin{equation}
P_{\alpha}(\eta,L,q_{1},q_{2})=\int_{q_{1}}^{q_{2}}dq\text{ }q^{\alpha}\int_{-1}^{1}du\frac{u^{2}}{\eta^{-1}e^{-2iuLq}-1} .
\end{equation}
Carrying out the $u$-integration first,
\begin{equation*}
\int_{-1}^{1}du\frac{u^{2}}{\eta^{-1}e^{-2iuLq}-1} = f(Lq)+f(-Lq),
\end{equation*}
where
\begin{equation}
f(\tilde{q}) = \frac{\text{ln}(1-\eta e^{-2i\tilde{q}})}{2i\tilde{q}}+\frac{\text{Li}_{2}(\eta e^{-2i\tilde{q}})}{2\tilde{q}^{2}}+\frac{\text{Li}_{3}(\eta e^{-2i\tilde{q}})}{4i\tilde{q}^{3}}.
\end{equation}
Since $|\eta|\leq1$, we can express this in series form and carry out the $q$-integration term-by-term. The result is
\begin{align}
P_{\alpha}(\eta,L,q_{1},q_{2})&= \sum_{n=1}^{\infty}\eta^{n}\int_{q_{1}}^{q_{2}}dq\left[e^{-2inLq}\left(\frac{q^{\alpha-3}}{4in^{3}L^{3}} + \frac{q^{\alpha-2}}{2n^{2}L^{2}} - \frac{q^{\alpha-1}}{2inL}\right)+\text{c.c.}\right] \nonumber \\
&=\frac{1}{L^{1+\alpha}}\sum_{n=1}^{\infty}\eta^{n}\left[f_{n,\alpha}(Lq_{1})-f_{n,\alpha}(Lq_{2})\right],
\end{align}
where
\begin{equation}
f_{n,\alpha}(\tilde{q}) = \frac{\tilde{q}^{\alpha-2}}{4in^{3}}E_{3-\alpha}(2in\tilde{q})+\frac{\tilde{q}^{\alpha-1}}{2n^{2}}E_{2-\alpha}(2in\tilde{q})-\frac{\tilde{q}^{\alpha}}{2in}E_{1-\alpha}(2in\tilde{q}) + \text{c.c.}
\end{equation}
and
\begin{equation*}
E_{m}(z) \equiv \int_{1}^{\infty}dt\text{ }\frac{e^{-zt}}{t^{m}}.
\end{equation*}
This result, like the flat-spectrum case, is a simple function of $L$ times a more complicated function of $\tilde{q}_{1,2} \equiv Lq_{1,2}$. In Fig. 4a we display
\begin{equation*}
F_{\alpha}(\tilde{q},\eta) \equiv \sum_{n=1}^{\infty}\eta^{n}f_{n,\alpha}(\tilde{q})
\end{equation*}
for various $\alpha$ with $\eta=0.9$.
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.49\linewidth}
\includegraphics[width=\linewidth]{Fig4b1.png}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.48\linewidth}
\includegraphics[width=\linewidth]{Fig4a1.png}
\caption{}
\end{subfigure}
\caption{Analysis of power-law spectra, $G(q)\propto q^{\alpha}$. (a) $F_{\alpha}$ vs. $\tilde{q}$ for $\eta=0.9$. The variation at small $\tilde{q}$ is sharper for lower $\alpha$. As $\tilde{q}\to\infty$, $F_{1/3}$ decays faster than does $F_{2/3}$, and the oscillations of $F_{1}$ neither decay nor grow. (b) Infinite-bandwidth limits as a function of $\eta$ for various $\alpha$. The case $\alpha=1/3$ displays relatively sharp variation as $\eta$ approaches 1.}
\label{fig:coffee2}
\end{figure}
The functions $E_{m}(z)$ each have a branch cut running along the negative real axis, and the small-$q$ limit of $F_{\alpha}(Lq)$ is determined by the behavior as the branch points at zero are approached along both sides of the imaginary axis. We have
\begin{equation*}
\frac{f_{n,\alpha}(Lq)}{L^{1+\alpha}} \to \frac{1}{n^{1+\alpha}}\frac{2^{-\alpha}}{L^{1+\alpha}}\frac{\Gamma(1+\alpha)}{\alpha-2}\sin{\left(\frac{\alpha\pi}{2}\right)} - \frac{2q^{\alpha+1}}{3(\alpha+1)} \ \ \ \ \ \text{as} \ \ \ \ \ q \to 0^{+}
\end{equation*}
where $\Gamma$ is the gamma function. This converges for $\alpha>-1$, while the large-$q$ limit,
\begin{equation*}
\frac{f_{n,\alpha}(Lq)}{L^{1+\alpha}}\to \frac{q^{\alpha-1}}{n^{2}L^{2}}\cos{(2nLq)} \ \ \ \ \ \text{as} \ \ \ \ \ q \to \infty
\end{equation*}
is 0 for $\alpha<1$. Thus an infinite-bandwidth limit exists for $-1<\alpha<1$ and is given by
\begin{equation}
\lim_{q\to0^{+}}\frac{F_{\alpha}(Lq)}{L^{1+\alpha}} \equiv P_{\alpha}^{(0,\infty)}(\eta,L) = -\frac{2^{-\alpha}}{L^{1+\alpha}}\frac{\Gamma(1+\alpha)}{2-\alpha}\sin{\left(\frac{\alpha\pi}{2}\right)}\text{Li}_{1+\alpha}(\eta),
\end{equation}
which itself diverges as $\alpha\to-1$, and which for any $\alpha<0$ implies a repulsive force when $\eta>0$ and diverges in the limit of perfect reflectors $\eta\to1$. This formula illustrates the sharp variation in $\eta$, as $\eta$ approaches 1, of the $q\to0$ limit when $\alpha$ is greater than, but close to, zero. In particular, we have as $\alpha\to0^{+}$,
\begin{equation*}
P_{\alpha}^{(0,\infty)}(1,L) \to -\frac{1}{2L}\lim_{\alpha\to0^{+}}\left[\sin{\left(\frac{\alpha\pi}{2}\right)}\text{Li}_{1+\alpha}(1)\right]=-\frac{\pi}{4L},
\end{equation*}
whereas for $\eta<1$, $P_{0}^{(0,\infty)}(\eta,L)$ is strictly zero as expected. The $L$-independent part of $P_{\alpha}^{(0,\infty)}$ is displayed in Fig. 4b. We can also get the marginal $\alpha\to1$ case,
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.47\linewidth}
\includegraphics[width=\linewidth]{Fig5a.png}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.51\linewidth}
\includegraphics[width=\linewidth]{Fig5b.png}
\caption{}
\end{subfigure}
\caption{Casimir pressure resulting from narrowly peaked spectra. (a) Delta-function spectrum for various $\eta$. (b) Parabolic spectrum for various peak widths with $\eta=0.9$. The prediction of rapid variation near $\tilde{L}=\tilde{q}_{c}(\eta)$ is confirmed. When $D=0.01$, the corresponding pressure is indistinguishable from $P_{\delta}(0.9,\tilde{L})$ over this range, but this would not be the case for larger $\tilde{L}$.}
\label{fig:coffee2}
\end{figure}
\begin{equation*}
P_{1}^{(0,\infty)}(\eta,L) = -\frac{\text{Li}_{2}(\eta)}{2L^{2}}
\end{equation*}
which is $-\pi^{2}/12L^{2}$ for perfect reflectors.
Finally, we investigate narrow unimodal spectra where $G(q)$ has a maximum at some $q_{0}$. First consider that if $G(q) \propto \delta(q-q_{0})$, then
\begin{equation}
P_{\delta}(\eta,\tilde{L}) = f(\tilde{L})+f(-\tilde{L}) = -\left.\frac{\partial F}{\partial \tilde{q}}\right|_{\tilde{q}=\tilde{L}}
\end{equation}
where $\tilde{L}\equiv q_{0}L$, with $f$ defined by Eq. (10) and $F$ defined by Eq. (3). Again, there is non-trivial behavior as $\eta\to1$ for small $\tilde{L}$: we have
\begin{equation*}
\lim_{\tilde{L}\to0}P_{\delta}(1,\tilde{L}) = -\frac{1}{3};
\end{equation*}
\begin{equation}
\lim_{\tilde{L}\to0}P_{\delta}(\eta,\tilde{L}) = \frac{2\eta}{3(1-\eta)}, \ \ \ \ \ \eta\neq1.
\end{equation}
$P_{\delta}(\tilde{L})$ is plotted in Fig. 5a for different values of $\eta$. As $\tilde{L}$ is increased past an integer multiple of $\pi$, the force rapidly switches from attractive to repulsive, becoming discontinuous in the limit of perfect reflectors. The cutoff $\tilde{q}_{c}(\eta)$ from Eqs. (5) and (7) gives the lowest value of $\tilde{L}$ at which $P_{\delta}(\tilde{L})$ is zero; when $\eta$ is near 1, $P_{\delta}$ quickly falls to a value slightly greater than $-1/3$ as $\tilde{L}$ is increased from $\tilde{q}_{c}$. Below $\tilde{q}_{c}$, the force is repulsive, rapidly increasing in magnitude as $\tilde{L}$ is decreased further. If one considers $q_{0}$ fixed, then $\tilde{q}_{c}$ is a dimensionless critical separation, i.e. $L_{c} = \tilde{q}_{c}/q_{0}$.
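The delta-spectrum pressure of Eq. (14) is cheap to evaluate directly from Eq. (10); for real $\eta$, $f(-\tilde{L})$ is the complex conjugate of $f(\tilde{L})$, so $P_{\delta}=2\,\text{Re}[f(\tilde{L})]$. A short check (ours, illustrative) of the limit (15) and of the sign structure around $\tilde{q}_{c}(0.9)\approx0.36$:

```python
import cmath

def Li(m, z, terms=2500):
    # Polylogarithm by its defining series; adequate for |z| < 1.
    return sum(z**n / n**m for n in range(1, terms + 1))

def f(qt, eta):
    # Eq. (10).
    z = eta * cmath.exp(-2j * qt)
    return (cmath.log(1 - z) / (2j * qt)
            + Li(2, z) / (2 * qt**2)
            + Li(3, z) / (4j * qt**3))

def P_delta(eta, Lt):
    # Eq. (14); for real eta this equals 2 Re[f(L~)].
    return (f(Lt, eta) + f(-Lt, eta)).real
```

For $\eta=0.9$ the small-$\tilde{L}$ value approaches $2\eta/(3(1-\eta))=6$, the force is repulsive below the cutoff, and attractive between the cutoff and $\pi$.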
We would like to see if this phenomenon holds for general narrowly peaked spectra. Taylor expanding an appropriate $G(q)$ about its maximum at $q_{0}$ gives a parabolic approximation [4], $G(q) \propto G_{\text{par}}(q)$, where
\begin{equation*}
G_{\text{par}}(q) = \max\left\{\frac{3}{4q_{0}D}\left[1-\left(\frac{q-q_{0}}{q_{0}D}\right)^{2}\right],0\right\}.
\end{equation*}
The dimensionless parameter $D<1$ relates to the width of the peak. This is equivalent to a weighted sum of three spectra, proportional to $1$, $q$, and $q^{2}$ respectively, over the finite band $[q_{0}-q_{0}D,q_{0}+q_{0}D]$. From Eq. (9), the resulting pressure is
\begin{equation}
P_{\text{par}}(D,\eta,\tilde{L})=\frac{3}{D^{3}}\left[\frac{D^{2}-1}{4q_{0}}P_{0}^{\{q_{0},D\}}(\eta,L)+\frac{1}{2q_{0}^{2}}P_{1}^{\{q_{0},D\}}(\eta,L)-\frac{1}{4q_{0}^{3}}P_{2}^{\{q_{0},D\}}(\eta,L)\right],
\end{equation}
where
\begin{equation*}
P_{\alpha}^{\{q_{0},D\}}(\eta,L)\equiv P_{\alpha}\left(\eta,L,q_{0}-q_{0}D,q_{0}+q_{0}D\right).
\end{equation*}
Note that $\int_{0}^{\infty}dq\text{ }G_{\text{par}}(q) = 1$, and that the corresponding pressure depends on $q_{0}L$ and not separately on $q_{0}$ or $L$, cf. Eq. (11). $P_{0}$ is just $P/I_{\omega}$ from Eqs. (2)-(3); a similar form of $P_{1}$, computationally more expedient than the result of (11)-(12), can be derived along the same lines as (2). For $P_{2}$ this is not the case, but the form given by (11)-(12) is advantageous over numerical integration [6].
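The decomposition behind Eq. (16) can be verified mechanically: expanding the square in $G_{\text{par}}$ yields a weighted sum of $1$, $q$, and $q^{2}$ with exactly the bracketed coefficients. The check below is illustrative (sample values of $q_{0}$ and $D$ are ours):

```python
def G_par(q, q0, D):
    # Truncated parabolic spectrum; normalized to unit integral over the band.
    val = (3.0 / (4 * q0 * D)) * (1 - ((q - q0) / (q0 * D))**2)
    return max(val, 0.0)

def G_par_decomposed(q, q0, D):
    # The same spectrum written as c0 + c1 q + c2 q^2 inside the band,
    # with the weights that appear in Eq. (16).
    c0 = 3 * (D**2 - 1) / (4 * q0 * D**3)
    c1 = 3 / (2 * q0**2 * D**3)
    c2 = -3 / (4 * q0**3 * D**3)
    return c0 + c1 * q + c2 * q**2
```

Inside the band $[q_{0}(1-D),\,q_{0}(1+D)]$ the two forms coincide, and a midpoint-rule integral over the band confirms the unit normalization.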
The result for $\eta=0.9$ is plotted in Fig. 5b for various $D$. The phenomenon near $\tilde{q}_{c}$ persists, and is in fact insensitive to the peak width! The effect of larger widths comes into play at larger $\tilde{L}$, reflecting the fact that $n\pi(1-D)$ and $n\pi(1+D)$, $n$ an integer, become more widely spaced as $n$ increases; the alternations between attraction and repulsion become less rapid, and this effect is more pronounced for larger $D$.
\section*{Conclusions}
We have derived expressions for the pressure between two imperfectly reflecting plates immersed in random acoustic fields. The asymptotic properties of the associated special functions reveal singular behavior as the product of the plate reflectivities approaches unity, leaving a crucial imprint on the sign and magnitude of the force between closely-spaced plates in flat or nonmonotonic spectral backgrounds. The closer the plates are to perfect reflectors, the stronger the repulsion in the limit of no separation between the plates; while at the same time, increasingly small plate separations are required for attraction to cease. It would be interesting to extend the analysis to include plate deformations due to the noise field [3] and prescribed spatiotemporal modulations of the plates [5,7].
For completeness, we remark on the pressure between two plates with negative or complex $\eta$. If plate A is a perfect reflector, $r_{A}=1$, and plate B is a pressure release surface, $r_{B}=-1$, then $\eta=-1$ and the wavevector component perpendicular to the plates can only take on values $(n\pi - \tfrac{\pi}{2})/L$, where $n$ is an integer. The kinks, when $G(q)\propto q^{\alpha}$ over a finite band $[q_{1},q_{2}]$, and discontinuities, when $G(q) \propto \delta(q-q_{0})$, then occur at $Lq_{0,1,2} = n\pi - \tfrac{\pi}{2}$ rather than $n\pi$.
If $G(q) \propto 1$, then the infinite-bandwidth limit is zero, in contrast to when $\eta=1$, but the $L\to0$ limit under finite bandwidth is the same as that of $\eta=1$. Furthermore, if $G(q) \propto \delta(q-q_{0})$, then the $\tilde{L}\to0$ limit is $-1/3$, the same as when $\eta$ is strictly 1. No singular behavior occurs as $\eta$ is increased from $-1$ (other than the smoothing of the kinks/discontinuities), and when $Lq_{0,2}$ is less than about $\tfrac{\pi}{2}$ the force is always attractive.
The correspondence between $\delta$-spectra and parabolic spectra for positive $\eta$ also holds for negative as well as complex $\eta$. If $\eta=e^{i\theta}$, there are kinks/discontinuities at $Lq_{0,1,2} = n\pi \pm \tfrac{\theta}{2}$, which smooth out as $\eta$ is varied along the radius of the unit circle. Singular behavior can only occur when $\eta$ is varied to 1, cf. Eqs. (4), (13), and (15). One can verify all this by evaluating Eqs. (2)-(3), (11)-(12), (14), (16) for any $\eta$ in the unit disk, taking only the real part of $P$ to get the desired acoustic Casimir force when $\text{Im}[\eta]\neq0$.
\section*{References}
\noindent [1] H. B. G. Casimir, ``\textit{On the attraction between two perfectly conducting plates},'' Proc. K. Ned. Akad. Wet. 51, 793
(1948).
\noindent [2] A. Larraza, C. D. Holmes, R. T. Susbilla, and B. Denardo, ``\textit{The force between two parallel rigid plates due to the}
\textit{radiation pressure of broadband noise: An acoustic Casimir effect},'' J. Acoust. Soc. Am. 103, 2267 (1998).
\noindent [3] J. B\'arcenas, L. Reyes, and R. Esquivel-Sirvent, ``\textit{Acoustic Casimir pressure for arbitrary media},'' J. Acoust. Soc.
Am. 116, 717 (2004).
\noindent [4] A. A. Lee, D. Vella, and J. S. Wettlaufer, ``\textit{Fluctuation spectra and force generation in nonequilibrium systems},''
Proc. Nat. Acad. Sci. 114, 9255 (2017).
\noindent [5] T. Emig, A. Hanke, R. Golestanian, and M. Kardar, ``\textit{Normal and lateral Casimir forces between deformed plates},''
Phys. Rev. A 67, 022114 (2003).
\noindent [6] $E_{m}(z)$ is given by the Wolfram Language function \texttt{ExpIntegralE}\textsf{[}\textsl{\textsf{m}}, \textsl{\textsf{z}}\textsf{].} E. W. Weisstein, ``\textit{E\_n-Function},''
MathWorld-{}-A Wolfram Web Resource. $<$https://mathworld.wolfram.com/En-Function.html$>$
\noindent [7] A. Hanke, ``\textit{Non-Equilibrium Casimir Force between Vibrating Plates},'' PLoS ONE 8, e53228 (2013).
\end{document}
\section{Conclusion}
We presented Visor\xspace, a system that enables privacy-preserving video analytics services. Visor\xspace uses a hybrid TEE architecture that spans both the CPU and the GPU, as well as novel data-oblivious vision algorithms. Visor\xspace provides strong confidentiality and integrity guarantees, for video streams and models, in the presence of privileged attackers and malicious co-tenants. Our implementation of Visor\xspace shows limited performance overhead for the provided level of security.
\section*{Acknowledgments}
We are grateful to Chia-Che Tsai for helping us instrument the Graphene LibOS.
We thank our shepherd, Kaveh Razavi, and the anonymous reviewers for their insightful comments.
We also thank Stefan Saroiu, Yuanchao Shu, and members of the RISELab at UC Berkeley for helpful feedback on the paper.
This work was supported in part by the NSF CISE Expeditions Award CCF-1730628, and gifts from the Sloan Foundation, Bakar Program, Alibaba, Amazon Web Services, Ant Financial, Capital One, Ericsson, Facebook, Futurewei, Google, Intel, Microsoft, Nvidia, Scotiabank, Splunk, and VMware.
{\footnotesize
\begin{flushleft}
\setlength{\parskip}{0pt}
\setlength{\itemsep}{0pt}
\bibliographystyle{abbrv}
\section{A Privacy-Preserving MLaaS Framework}
\label{s:system}
In this section, we present a privacy-preserving framework for machine-learning-as-a-service (MLaaS), that supports CNN-based ML applications spanning both CPU and GPU resources.
Though Visor\xspace focuses on protecting video analytics pipelines, our framework can more broadly be used for a range of MLaaS applications such as medical imaging, recommendation systems, and financial forecasting.
Our framework comprises three key features that collectively enable data-oblivious execution of ML services. First, it protects the computation in ML pipelines using a \emph{hybrid} TEE that spans both the CPU and GPU.
Second, it provides a secure CPU-GPU communication channel that additionally prevents the leakage of information via traffic patterns in the channel.
Third, it prevents access-pattern-based leakage on the CPU and GPU by facilitating the development of data-oblivious modules using a suite of optimized primitives.
\begin{figure} [t!]
\centering
\includegraphics[width=\linewidth]{figs/architecture.pdf}
\caption{Visor\xspace's hybrid TEE architecture. Locks indicate encrypted data channels, and keys indicate decryption points.}
\label{fig:architecture}
\end{figure}
\subsection{Hybrid TEE Architecture} \label{s:system:architecture}
\cref{fig:architecture} shows Visor\xspace's architecture. Visor\xspace receives encrypted video streams from the client's camera, which are then fed to the video processing pipeline. We refer to the architecture as a {\em hybrid} TEE as it spans both the CPU and GPU TEEs, with different modules of the video pipeline (\cref{s:model}) being placed across these TEEs.
We follow the example of prior work that has shown that running the non-CNN modules of the pipeline on the CPU, and the CNNs on the GPU \cite{Focus:OSDI18, Scanner:Poms:2018, MS:Rocket:web}, results in efficient use of the expensive GPU resources while still keeping up with the incoming frame rate of videos. %
Regardless of the placement of modules across the CPU and GPU, we note that attacks based on data access patterns can be mounted on {\em both} CPU and GPU TEEs, as explained in \cref{s:threatmodel:attacks}.
As such, our data-oblivious algorithms and techniques are broadly applicable irrespective of the placement,
though our description is based on non-CNN modules running on the CPU and the CNNs on the GPU.
\drop{
Nonetheless, the placement of modules across the CPU and GPU is orthogonal to Visor\xspace. As explained in \cref{s:threatmodel:attacks}, attacks based on data access patterns can be mounted on {\em both} CPU and GPU enclaves, and hence our data-oblivious modules and techniques are applicable regardless of the placement. That said, our description is based on non-CNN modules running on the CPU and the CNNs on the GPU.
}
\paragraph{CPU and GPU TEEs}
We implement the CPU TEE using Intel SGX enclaves, and the GPU TEE using Graviton secure contexts \cite{Graviton:OSDI18}.
The CPU TEE also runs Graviton's trusted GPU runtime, which enables Visor\xspace to securely bootstrap the GPU TEE and establish a single trust domain across the TEEs.
The GPU runtime talks to the untrusted GPU driver (running on the host outside the CPU TEE) to manage resources on the GPU via \code{ioctl} calls. In Graviton, each \code{ioctl} call is translated to a sequence of commands submitted to the command processor. Graviton ensures {\em secure} command submission (and subsequently \code{ioctl} delivery) as follows:
\begin{enumerate*}[($i$)]
\item for task submission, the runtime uses authenticated encryption to protect commands from being dropped, replayed, or reordered, and
\item for resource management, the runtime validates signed summaries returned by the GPU upon completion.
\end{enumerate*}
The GPU runtime {\em encrypts all inter-TEE communication}.
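The ordering guarantees above can be illustrated with a generic authenticated command stream. The sketch below is a stand-in of ours (an HMAC over a per-session sequence number and payload), not Graviton's actual wire format, which uses authenticated encryption:

```python
import hmac, hashlib

class CommandChannel:
    """Illustrative authenticated command stream (not Graviton's format):
    a MAC over (sequence number, payload) lets the receiver detect tampered,
    dropped, replayed, or reordered commands."""

    def __init__(self, key: bytes):
        self.key = key
        self.send_seq = 0
        self.recv_seq = 0

    def seal(self, payload: bytes) -> bytes:
        # Bind the payload to its position in the stream.
        header = self.send_seq.to_bytes(8, "big")
        tag = hmac.new(self.key, header + payload, hashlib.sha256).digest()
        self.send_seq += 1
        return header + tag + payload

    def open(self, msg: bytes) -> bytes:
        header, tag, payload = msg[:8], msg[8:40], msg[40:]
        expected = hmac.new(self.key, header + payload, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("tampered command")
        if int.from_bytes(header, "big") != self.recv_seq:
            raise ValueError("dropped, replayed, or reordered command")
        self.recv_seq += 1
        return payload
```

A replayed or out-of-order message fails the sequence check even though its MAC verifies, which is the property the trusted runtime relies on for secure command submission.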
We port the non-CNN video modules (\cref{fig:pipeline}) to SGX enclaves using the Graphene LibOS~\cite{Graphene:SGX:Tsai:2017}.
In doing so, we instrument Graphene to support the \code{ioctl} calls that are used by the runtime to communicate with the GPU driver. %
\paragraph{Pipeline execution} %
The hybrid architecture requires us to protect against attacks on the CPU TEE, GPU TEE, and the CPU-GPU channel.
As \cref{fig:architecture} illustrates, Visor\xspace decrypts the video stream inside the CPU TEE, and obliviously decodes out each frame (in \cref{s:decoding}).
Visor\xspace then processes the decoded frames using oblivious vision algorithms to extract objects from each frame (in \cref{s:algorithms}).
Visor\xspace extracts the \emph{same} number of objects of \emph{identical dimensions} from each frame (some of which are dummies, up to an upper-bound) and feeds them into a circular buffer.
This avoids leaking the \emph{actual} number of objects in each frame and their \emph{sizes}; the attacker can observe accesses to the buffer, even though objects are encrypted.
Objects are dequeued from the buffer and sent to the GPU (\cref{s:system:communication}) where they are decrypted and processed obliviously by the CNN in the GPU TEE (\cref{s:algorithms:cnn}).
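The fixed-count, fixed-size padding described above can be sketched as follows; the bound on objects per frame and the crop dimensions are hypothetical deployment parameters, not values from Visor\xspace:

```python
N_MAX = 8               # upper bound on objects per frame (hypothetical)
OBJ_W, OBJ_H = 64, 64   # fixed crop dimensions fed to the CNN (hypothetical)

def resize(img, w, h):
    # Nearest-neighbour resize of a 2-D pixel list to the fixed crop size.
    H, W = len(img), len(img[0])
    return [[img[(y * H) // h][(x * W) // w] for x in range(w)] for y in range(h)]

def pad_objects(objects):
    """Return exactly N_MAX crops of identical dimensions, padding with dummy
    crops so that buffer accesses are independent of the true object count."""
    dummy = [[0] * OBJ_W for _ in range(OBJ_H)]
    out = [resize(o, OBJ_W, OBJ_H) for o in objects[:N_MAX]]
    out += [dummy] * (N_MAX - len(out))
    return out
```

Because every frame contributes the same number of equal-sized (encrypted) crops, an observer of the circular buffer learns nothing about how many real objects each frame contained.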
\input{sections/cnn.tex}
\subsection{Oblivious Modules on the CPU}\label{s:system:cpu}
After providing a data-oblivious CPU-GPU channel and CNN execution on the GPU, we address the video modules (in \cref{fig:pipeline}) that execute on the CPU.
We carefully craft oblivious versions of the video modules using novel efficient algorithms (which we describe in the subsequent sections).
To implement our algorithms, we use a set of oblivious primitives which we summarize below.
\input{sections/obl_primitives.tex}
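As a flavor of the primitives summarized above, a branch-free conditional select avoids secret-dependent control flow. The Python sketch below is ours and only illustrates the arithmetic idea; production primitives rely on constant-time machine instructions (e.g., CMOV), a guarantee Python itself does not provide:

```python
def oselect(cond, a, b, width=32):
    """Branch-free select over width-bit integers: returns a if cond == 1,
    else b, using only data-independent arithmetic (no secret branches)."""
    full = (1 << width) - 1
    mask = (-cond) & full          # all-ones if cond == 1, all-zeros if cond == 0
    return (a & mask) | (b & ~mask & full)
```

The same mask trick underlies oblivious assignment and oblivious array accesses, where every candidate location is touched and the mask picks out the intended value.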
\section{Background and Motivation}\label{s:background}
\subsection{Video Analytics as a Service} \label{s:model}
\cref{fig:pipeline} depicts the canonical pipelines for video analytics~\cite{Focus:OSDI18, NoScope, VideoStorm, AWStream, MS:Rocket:web}.
The client (\eg a source camera) feeds the video stream to the service hosted in the cloud, which (a)~decodes the video into frames, (b)~extracts objects from the frames using vision algorithms, and (c)~classifies the objects using a pre-trained convolutional neural network (CNN). Cameras typically offer the ability to control the resolution and frame rate at which the video streams are encoded.
Recent work demonstrates that scaling video analytics pipelines requires judicious use of both CPUs and GPUs~\cite{Scanner:Poms:2018,Focus:OSDI18}.
In Visor\xspace, we follow the example of Microsoft's Rocket platform for video analytics~\cite{MS:Rocket:web,MS:Rocket:git}---we split the pipelines by running video decoding and vision modules on the CPU, while offloading the CNN to the GPU (as shown in \cref{fig:pipeline}).
The vision modules process each frame to detect the moving ``foreground'' objects in the video using background subtraction~\cite{MOG:Survey:2008}, compute each object's bounding box \cite{Suzuki:1985}, and crop them from the frame for the CNN classifier. These vision modules can sustain the typical frame rates of videos even on CPUs, thereby serving as vital ``filters'' to reduce the expensive CNN operations on the GPU \cite{Focus:OSDI18, NoScope}, and are thus widely used in practical deployments. For example, CNN classification in \cref{fig:pipeline:resnet} is invoked only if moving objects are detected in a region of interest in the frame.
Optionally, the moving objects are also tracked to infer directions (say, cars turning left).
The CNNs can either be object classifiers (e.g., ResNet \cite{He:Resnet:2016}) as in \cref{fig:pipeline:resnet}; or object detectors (\eg Yolo \cite{Redmon:YOLO}) as in \cref{fig:pipeline:yolo}, which take whole frames as input.
The choice of pipeline modules is application dependent \cite{Chameleon:Sigcomm, Focus:OSDI18} and Visor\xspace targets confidentiality for all pipeline modules, their different combinations, and vision CNNs. %
While our description focuses on a multi-tenant cloud service, our ideas equally apply to multi-tenant {\em edge compute} systems, say, at cellular base stations~\cite{MEC:ETSI}. Techniques for lightweight programmability on the cameras to reduce network traffic (\eg using smart encoders \cite{SmartCodec:Vivotek} or dynamically adapting frame rates \cite{Edgecomputing:video}) are orthogonal to {Visor\xspace}'s techniques.
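A minimal caricature (ours) of the foreground-extraction stage is frame differencing against a background frame followed by a bounding box over the changed pixels; production pipelines instead use mixture-of-Gaussians background models \cite{MOG:Survey:2008} and contour-based box extraction \cite{Suzuki:1985}:

```python
def foreground_bbox(frame, background, thresh=25):
    """Return the bounding box (x0, y0, x1, y1) of pixels differing from the
    background by more than thresh, or None if nothing moved."""
    xs, ys = [], []
    for y, (row, brow) in enumerate(zip(frame, background)):
        for x, (p, bp) in enumerate(zip(row, brow)):
            if abs(p - bp) > thresh:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (min(xs), min(ys), max(xs), max(ys))
```

Note that even this toy version leaks through its memory accesses (the append pattern depends on pixel values), which is exactly the kind of data-dependence the oblivious versions of these modules must eliminate.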
\subsection{Trusted Execution Environments}
\label{s:background:enclaves}
Trusted execution environments, or enclaves, protect an application's code and data from all other software in a system.
Code and data loaded in an enclave---CPU and GPU TEEs---can be verified by clients using the \emph{remote attestation} feature.
\noindent\textbf{Intel SGX}
\cite{SGX:HASP13} enables TEEs on CPUs and enforces isolation by storing enclave code and data in a protected memory region called the Enclave Page Cache (EPC). %
The hardware ensures that no software outside the enclave can access EPC contents.
\noindent\textbf{Graviton}
\cite{Graviton:OSDI18} enables TEEs on GPUs in tandem with trusted applications hosted in CPU TEEs.
Graviton prevents an adversary from observing or tampering with traffic (data and commands) transferred to/from the GPU.
A trusted GPU runtime (\eg CUDA runtime) hosted in a CPU TEE attests that all code/data have been securely loaded onto the GPU.
\subsection{Attacks based on Access Pattern Leakage}
\label{s:background:attacks}
TEEs are vulnerable to leakage from side-channel attacks %
that exploit micro-architectural side-channels
\cite{sgxattacks-foreshadow, sgxcache-gotzfried, sgxcache-brasser, sgxcache-schwarz, sgxcache-moghimi, sgxattacks-hahnel:cache, sgxattacks-lee:branches, SGX:attack:ZombieLoad, sgxcache-cachequote},
software-based channels~\cite{sgxattacks-xu:pagefaults, bulck-sgxattack:pagefaults}, or application-specific leakage, such as network and memory accesses. %
\begin{figure} [t!]
\centering
\includegraphics[width=0.7\linewidth]{figs/leakage.pdf}
\caption{Attacker obtains all the frame's objects (right) using access pattern leakage in the bounding box detection module.}
\label{fig:leakage}
\end{figure}
A large subset of these attacks exploit {\em data-dependent memory access patterns} %
(\eg %
branch-prediction, cache-timing, or controlled page fault attacks). %
Xu \etal~\cite{sgxattacks-xu:pagefaults} show that by simply observing the page access patterns of image decoders, an attacker can reconstruct entire images.
We ourselves analyzed the impact of access pattern leakage at cache-line granularity~\cite{sgxcache-gotzfried,sgxcache-brasser,sgxcache-schwarz,sgxcache-moghimi} on the bounding box detection algorithm \cite{Suzuki:1985} (see \cref{fig:pipeline:resnet}; \S\ref{s:model}).
We simulated existing attacks by capturing the memory access trace during an execution of the algorithm, and then examined the trace to reverse-engineer the contents of the input frame.
Since images are laid out predictably in memory, we found that the attacker is able to infer the locations of all the pixels touched during execution, and thus, the {\em shapes and positions of all objects} (as shown in \cref{fig:leakage}).
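To make the leakage concrete, here is a toy sketch (not Visor\xspace code) of a routine whose memory accesses depend on pixel values; the address trace alone suffices to reconstruct the object mask.

```python
# Toy illustration: a data-dependent scan whose memory trace reveals the
# shapes of objects in a binary frame.

def leaky_scan(frame):
    """Touches per-object state only for foreground pixels; the trace of
    accessed (row, col) locations therefore encodes the object mask."""
    trace = []                          # what a side-channel attacker observes
    for y, row in enumerate(frame):
        for x, px in enumerate(row):
            if px:                      # data-dependent branch
                trace.append((y, x))    # access depends on the pixel value
    return trace

frame = [[0, 1, 1],
         [0, 1, 0],
         [0, 0, 0]]
# Reconstructing the object mask purely from the observed trace:
touched = set(leaky_scan(frame))
mask = [[1 if (y, x) in touched else 0 for x in range(3)] for y in range(3)]
assert mask == frame   # attacker recovers the exact object shape
```

Real attacks observe addresses at page or cache-line granularity rather than per pixel, but the principle is the same.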
Shapes and positions of objects are the core content of any video; they allow the attacker to infer sensitive information, such as when patients visit private medical centers or when residents are inside a house, and even whether individuals are babies or wheelchair users based on their sizes and shapes. In fact, conversations with customers of one of the largest public cloud providers
confirm that {\em privacy of the videos is among their top-two concerns} in signing up for the video analytics cloud service.%
\section{Discussion}\label{s:discussion}
\paragraph{Attacks on upper bounds}
For efficiency, Visor\xspace extracts a fixed number of objects per frame based on a user-specified upper bound.
However, this leaves Visor\xspace open to adversarial inputs: an attacker who knows this upper bound can attempt to confuse the analytics pipeline by operating many objects in the frame at the same time.
To mitigate such attacks, we suggest two potential strategies:
\begin{enumerate*}[(i)]
\item For frames containing $\geq N$ objects (as detected in \cref{s:algorithms:objdet}), process those frames off the critical path using worst-case bounds (\eg total number of pixels). While this approach leaks which specific frames contain $\geq N$ objects, the leakage may be acceptable considering these frames are suspicious.
\item Filter objects based on their properties like object size or object location: \eg for a traffic feed, only select objects at the center of the traffic intersection. This limits the number of valid objects possible per frame, raising the bar for mounting such attacks. One can also apply richer filters on the pipeline results and reprocess frames with suspicious content.
\end{enumerate*}
\paragraph{Oblivious-by-design encoding}
Instead of designing oblivious versions of existing codecs, it may be possible to construct an oblivious-by-design coding scheme that is \begin{enumerate*}[(i)]\item potentially simpler, and \item performs better than Visor\xspace's oblivious decoding. \end{enumerate*}
This alternate design point is an interesting direction for future work.
We note, however, that any such codec would need to produce a perfectly constant bitrate (CBR) per frame to prevent bitrate leakage over the network.
While CBR codecs have been explored in the video literature, they are inferior to variable bitrate schemes (VBR) such as VP8 because they are lossier.
In other words, an oblivious CBR scheme would consume greater bandwidth than VP8 to match its video quality (and therefore, VP8 with padding), though it may indeed be simpler.
In Visor\xspace, we optimize for quality.
\section{Evaluation}\label{s:evaluation}
\ifpadding
\begin{figure*}[t!]
\minipage[t]{0.32\textwidth}
\centering
\includegraphics[width=\linewidth]{plots/frames.pdf}
\caption{Decoding latency vs.\ B/W.}
\label{fig:frames:lat}
\endminipage\hfill
\minipage[t]{0.32\textwidth}
\centering
\includegraphics[width=\linewidth]{plots/decode.pdf}
\caption{Latency of oblivious decoding.
}
\label{fig:decode:lat}
\endminipage\hfill
\minipage[t]{0.32\textwidth}%
\centering
\includegraphics[width=\linewidth]{plots/bgs.pdf}
\caption{Background subtraction.}
\label{fig:bgs:latency}
\endminipage
\end{figure*}
\else
\begin{figure*}[t!]
\minipage[t]{0.32\textwidth}
\centering
\includegraphics[width=\linewidth]{plots/frames_no_padding.pdf}
\caption{Decoding latency vs.\ B/W.}
\label{fig:frames:lat}
\endminipage\hfill
\minipage[t]{0.32\textwidth}
\centering
\includegraphics[width=\linewidth]{plots/decode_no_padding.pdf}
\caption{Latency of oblivious decoding.
}
\label{fig:decode:lat}
\endminipage\hfill
\minipage[t]{0.32\textwidth}%
\centering
\includegraphics[width=\linewidth]{plots/bgs.pdf}
\caption{Background subtraction.}
\label{fig:bgs:latency}
\endminipage
\end{figure*}
\fi
\paragraph{Implementation}
We implement our oblivious video decoder atop FFmpeg's VP8 decoder~\cite{ffmpeg} and oblivious vision algorithms atop OpenCV 3.2.0~\cite{opencv}. We use Caffe~\cite{Jia:Caffe:2014} for running CNNs. We encrypt data channels using AES-GCM.
We implement the oblivious primitives of \cref{s:background:primitives} using inline assembly code (as in~\cite{Ohrimenko:ObliviousML,Raccoon,Zerotrace:Sasy}), and manually verified the binary to ensure that compiler optimizations do not undo our intent; one can also use tools such as Vale~\cite{Vale:Bond:2017} to do the same.
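For illustration, the data flow of such CMOV-style primitives can be mimicked in a branchless high-level sketch (a simplified model only; Python integers are not fixed-width, and the real timing guarantees come from the verified assembly, not from source-level structure):

```python
# Hedged sketch of data-oblivious select/swap via mask arithmetic.
# Assumes non-negative operands that fit in 64 bits.

MASK = (1 << 64) - 1

def oselect(cond, a, b):
    """Return a if cond else b, with no data-dependent branch."""
    m = (-int(cond)) & MASK            # all-ones mask if cond, else zero
    return ((a & m) | (b & ~m)) & MASK

def oswap(cond, a, b):
    """Swap a and b iff cond, branchlessly (XOR-swap gated by a mask)."""
    m = (-int(cond)) & MASK
    d = (a ^ b) & m
    return a ^ d, b ^ d

assert oselect(True, 7, 9) == 7 and oselect(False, 7, 9) == 9
assert oswap(True, 1, 2) == (2, 1) and oswap(False, 1, 2) == (1, 2)
```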
\paragraph{Testbed} We evaluate Visor\xspace on Intel i7-8700K with 6 cores running at \SI{3.7}{\giga\hertz}, and an NVIDIA GTX 780 GPU with 2304 CUDA cores running at \SI{863}{\mega\hertz}.
We disable hyperthreading for experiments with Visor\xspace (per \cref{s:threatmodel}), but retain hyperthreading in the insecure baseline.
Unlike the baseline, which benefits from hyperthreading, Visor\xspace's performance does not suffer from disabling it, owing to its heavy use of vector units; see \cref{s:hyperthreading} for details.
The server runs Linux v4.11; supports AVX2 and SGX-v1 instruction sets; %
and has \SI{32}{\giga\byte} of memory, with \SI{93.5}{\mega\byte} of enclave memory. The GPU has \SI{3}{\giga\byte} of memory.
\paragraph{Datasets}
We use four real-world video streams (obtained with permission) in our experiments: streams 1 and 4 are from traffic cameras in the city of Bellevue (resolution $1280 \times 720$) while streams 2 and 3 are sourced from cameras surveilling commercial datacenters (resolution $1024 \times 768$).
All these videos are privacy-sensitive as they involve government regulations or business sensitivity. For experiments that evaluate the cost of obliviousness across different resolutions and bitrates, we re-encode the videos accordingly.
A recent body of work~\cite{NoScope,Chameleon:Sigcomm,VideoStorm} has found that the accuracy of object detection in video streams is not affected if the resolution is decreased (while consuming significantly fewer resources), and 720p videos suffice. We therefore chose to use streams closer to 720p in resolution because we believe they would be a more accurate representation of real performance.
\paragraph{Evaluation highlights} We summarize the key takeaways of our evaluation.
\begin{compactenumerate}
\item Visor\xspace's optimized oblivious algorithms (\cref{s:decoding}, \cref{s:algorithms}) are up to $1000\times$ faster than na\"ive competing solutions. (\cref{s:eval:modules})
\item End-to-end overhead of obliviousness for real-world video pipelines with state-of-the-art CNNs are limited to {$2\times$--$6\times$}\xspace over a {\em non-oblivious} baseline. (\cref{s:eval:overall})
\item Visor\xspace is generic and can accommodate multiple pipelines (\cref{s:model}; \cref{fig:pipeline}) that combine the different vision processing algorithms and CNNs. (\cref{s:eval:overall})
\item Visor\xspace's performance is over 6 to 7 orders of magnitude better than a state-of-the-art general-purpose system for oblivious program execution. (\cref{s:evaluation:related})
\end{compactenumerate}
Overall, Visor\xspace's use of properties of the video streams has {\em no impact on the accuracy} of the analytics outputs.
\subsection{Performance of Oblivious Components}\label{s:eval:modules}
We begin by studying the performance of Visor\xspace's oblivious modules:
we quantify the raw overhead of our algorithms (without enclaves) over non-oblivious baselines;
we also measure the improvements over na\"ive oblivious solutions.
\subsubsection{Oblivious video decoding}\label{s:eval:decoding}
\ifpadding
Decoding of the compressed bitstream dominates decoding latency, consuming up to \approx$90\%$ of the total latency. %
Further, this stage is dominated by the oblivious assignment subroutine which sorts coefficients into the correct pixel positions using \code{osort}, consuming up to \approx$83\%$ of the decoding latency. %
Since the complexity of oblivious sort is super-linear in the number of elements being sorted,
our technique for decoding at the granularity of \emph{rows of blocks} rather than frames significantly improves the latency of oblivious decoding.
\paragraph{Overheads} \cref{fig:frames:lat} shows
the bandwidth usage
and decoding latency for different oblivious decoding strategies (\ie decoding at the level of frames, or at the level of \emph{row of blocks}) for a video stream of resolution $1280 \times 720$. We also include two reference points: non-encoded frames and VP8 encoding. The baseline latency of decoding VP8 encoded frames is $4$--\SI{5}{\milli\second}.
Non-encoded raw frames incur no decoding latency but result in frames that are three orders of magnitude larger than the VP8 average frame size (tens of kilobytes) at a bitrate of \SI{4}{\mega\bps}.
Frame-level oblivious decoding introduces high latency (\approx\SI{850}{\milli\second}), which is two orders of magnitude higher than non-oblivious counterparts.
Furthermore, padding each frame to prevent leakage of the frame's bitrate increases the average frame size to \approx\SI{95}{\kilo\byte}.
In contrast, oblivious decoding at the level of rows of blocks takes \approx\SI{140}{\milli\second}, which is \approx$6\times$ lower than frame-level decoding.
However, this comes with a modest increase in network bandwidth as the encoder needs to pad each row of blocks individually, rather than a frame. In particular, the frame size increases from \approx\SI{95}{\kilo\byte} to \approx\SI{140}{\kilo\byte}.
Apart from the granularity of decoding, the latency of the oblivious sort is also governed by: \begin{enumerate*}[($i$)] \item the frame's resolution, and \item the bitrate.\end{enumerate*}
The higher the frame's resolution / bitrate, the more coefficients there are to be sorted.
\cref{fig:decode:lat} plots oblivious decoding latency at the granularity of rows of blocks across video streams with different resolutions and bitrates. The figure shows that lower resolution/bitrates introduce lower decoding overheads. %
In many cases, lower image qualities are adequate for video analytics as it does not impact the accuracy of the object classification~\cite{Chameleon:Sigcomm}.
\else
Decoding of the compressed bitstream dominates decoding latency, consuming up to \approx$90\%$ of the total latency. %
Further, this stage is dominated by the oblivious assignment subroutine which sorts coefficients into the correct pixel positions using \code{osort}, consuming up to \approx$83\%$ of the decoding latency. %
Since the complexity of oblivious sort is super-linear in the number of elements being sorted,
an optimization that we make in our implementation is to decode and assign coefficients to pixels at the granularity of {\em rows of blocks} rather than frames.
As we show, this significantly improves the latency of oblivious decoding, though the attacker now learns the total number of bits per row of blocks, instead of per frame.
\paragraph{Overheads} \cref{fig:frames:lat} shows
the bandwidth usage
and decoding latency for different oblivious decoding strategies (\ie decoding at the level of frames, or at the level of \emph{row of blocks}) for a video stream of resolution $1280 \times 720$. We also include two reference points: non-encoded frames and VP8 encoding. The baseline latency of decoding VP8 encoded frames is $4$--\SI{5}{\milli\second}.
Non-encoded raw frames incur no decoding latency but result in frames that are three orders of magnitude larger than the VP8 average frame size (tens of kilobytes) at a bitrate of \SI{4}{\mega\bps}.
Frame-level oblivious decoding introduces high latency (\approx\SI{850}{\milli\second} for keyframes, and \approx\SI{310}{\milli\second} overall), which is two orders of magnitude higher than non-oblivious counterparts.
In contrast, oblivious decoding at the level of \emph{rows of blocks} takes \approx\SI{140}{\milli\second} for keyframes and \approx\SI{50}{\milli\second} overall, which is \approx$6\times$ lower than frame-level decoding.
Apart from the granularity of decoding, the latency of the oblivious sort is also governed by: \begin{enumerate*}[($i$)] \item the frame's resolution, and \item the bitrate.\end{enumerate*}
The higher the frame's resolution / bitrate, the more coefficients there are to be sorted.
\cref{fig:decode:lat} plots oblivious decoding latency at the granularity of \emph{rows of blocks} across video streams with different resolutions and bitrates. The figure shows that lower resolutions/bitrates introduce lower decoding overheads. %
In many cases, lower image qualities are adequate for video analytics as it does not impact the accuracy of the object classification~\cite{Chameleon:Sigcomm}.
\fi
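To sketch why the sort dominates, consider a bitonic sorting network, a standard choice for \code{osort}-style primitives: its compare-exchange schedule depends only on the (public) padded input length, never on the data. The Python swap below stands in for a CMOV-based oblivious compare-exchange, and the \code{(position, value)} pair layout is an illustrative assumption, not the decoder's exact representation.

```python
# Hedged sketch (not Visor's implementation) of an oblivious sort via a
# bitonic sorting network.  The sequence of compared index pairs depends
# only on the padded length n; the data affects only swap outcomes, which
# in the real system are performed with oblivious compare-exchanges.

def osort(arr, key=lambda v: v, pad=float('inf')):
    n = 1
    while n < len(arr):
        n *= 2
    a = list(arr) + [pad] * (n - len(arr))   # pad to a power of two
    k = 2
    while k <= n:                            # bitonic merge stages
        j = k // 2
        while j >= 1:
            for i in range(n):
                l = i ^ j                    # partner index: data-independent
                if l > i:
                    ascending = (i & k) == 0
                    if (key(a[i]) > key(a[l])) == ascending:
                        a[i], a[l] = a[l], a[i]   # oblivious swap in practice
            j //= 2
        k *= 2
    return a[:len(arr)]

# e.g. sorting decoded coefficients by their destination pixel position:
coeffs = [(3, 7), (0, 2), (2, -1)]           # (position, value) pairs
assert osort(coeffs, key=lambda c: c[0], pad=(float('inf'), 0)) == \
    [(0, 2), (2, -1), (3, 7)]
```

The network performs $O(n \log^2 n)$ compare-exchanges, which is the super-linear cost that makes sorting per row of blocks cheaper than sorting per frame.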
\begin{figure}
\minipage[t]{0.48\linewidth}
\centering
\includegraphics[width=\linewidth]{plots/max_classes.pdf}
\caption{Number of labels for bounding box detection.
}
\label{fig:objdet:labels}
\endminipage\hfill
\minipage[t]{0.48\linewidth}
\includegraphics[width=\linewidth]{plots/obj_det.pdf}
\caption{Latency of oblivious bounding box detection.
}
\label{fig:objdet:latency}
\endminipage
\end{figure}
\begin{figure*}
\minipage[t]{0.32\textwidth}%
\centering
\includegraphics[width=\linewidth]{plots/extract.png}
\caption{Oblivious object cropping.}
\label{fig:objcrop:latency}
\endminipage\hfill
\minipage[t]{0.32\textwidth}
\centering
\includegraphics[width=\linewidth]{plots/resize.pdf}
\caption{Oblivious object resizing.}
\label{fig:resize:lat}
\endminipage\hfill
\minipage[t]{0.32\textwidth}
\centering
\includegraphics[width=\linewidth]{plots/sift.pdf}
\caption{Oblivious object tracking.}
\label{fig:sift:lat}
\endminipage\hfill
\end{figure*}
\subsubsection{Background subtraction}
We set the maximum number of Gaussian components per pixel $M_\code{max} = 4$, following prior work~\cite{MOG2:Zivkovic:2004, MOG2:Zivkovic:2006}.
Our changes for obliviousness enable us to make use of SIMD instructions for updating the Gaussian components in parallel.
This is because we now maintain the same number of components per pixel, and update operations for each component are identical.
\cref{fig:bgs:latency} plots the overhead of obliviousness on background subtraction across different resolutions. The SIMD implementation increases the latency of the routine only by $1.8\times$ over the baseline non-oblivious routine. As the routine processes each pixel in the frame independent of the rest, its latency increases linearly with the total number of pixels.
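The fixed component count is what makes the update vectorizable; the following hedged sketch (with assumed constants, not Visor\xspace's exact MOG parameters) shows that every component executes identical arithmetic driven by a 0/1 match mask rather than a branch.

```python
# Sketch of a branch-free Gaussian-mixture update with a fixed number of
# components per pixel.  ALPHA and VAR_THRESH are illustrative assumptions.

ALPHA = 0.05                 # assumed learning rate
VAR_THRESH = 2.5 ** 2        # assumed match threshold (squared std-devs)

def update_pixel(components, pixel):
    """components: fixed-length list of (weight, mean, variance) tuples.
    Each component runs the same arithmetic; `match` acts as a 0/1 mask
    (computed branchlessly on real hardware), so lanes can be vectorized."""
    out = []
    for w, mu, var in components:
        d2 = (pixel - mu) ** 2
        match = 1.0 if d2 < VAR_THRESH * var else 0.0
        w = (1 - ALPHA) * w + ALPHA * match
        mu = mu + match * ALPHA * (pixel - mu)
        var = var + match * ALPHA * (d2 - var)
        out.append((w, mu, var))
    return out               # weight renormalization omitted for brevity
```

Because the per-component operations are identical regardless of which component matched, the loop maps directly onto SIMD lanes.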
\subsubsection{Bounding box detection}
For non-oblivious bounding box detection, we use
the border-following algorithm of Suzuki and Abe~\cite{Suzuki:1985} (per \cref{s:algorithms:objdet}); this algorithm is efficient, running in sub-millisecond latencies.
The performance of our oblivious bounding box detection algorithm is governed by two parameters: \begin{enumerate*}[($i$)] \item the number of stripes used in the divide-and-conquer approach, which controls the degree of parallelism, and \item an upper bound $L$ on the maximum number of labels possible per stripe, which determines the size of the algorithm's data structures.\end{enumerate*}
\cref{fig:objdet:labels} plots $L$ for streams of different frame resolutions while varying the number of stripes into which each frame is divided. As expected, as the number of stripes increases, the value of $L$ required per stripe decreases. Similarly, lower resolution frames require smaller values of $L$.
\cref{fig:objdet:latency} plots the latency of detecting all bounding boxes in a frame based on the value of the parameter $L$, ranging from a few milliseconds to hundreds of milliseconds. For a given resolution, the latency decreases as the number of stripes increase, due to two reasons: \begin{enumerate*}[($i$)] \item increased parallelism, and \item smaller sizes of $L$ required per stripe.\end{enumerate*} Overall, the divide-and-conquer approach reduces latency by an order of magnitude down to a handful of milliseconds.
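To illustrate where the bound $L$ enters the data structures, a hedged sketch of oblivious box accumulation touches all $L$ label slots for every pixel and folds updates in with a mask, so the memory trace is independent of which label a pixel carries (the labeling convention below, with 0 as background, is an assumption of the sketch):

```python
# Hedged sketch: oblivious bounding-box accumulation over a labeled grid.
# Every pixel updates ALL L slots with a 0/1 mask (a CMOV mask in real
# code), so accesses never reveal the pixel's label.  Cost: O(pixels * L),
# which is why smaller L (more stripes) lowers latency.

def oblivious_bboxes(labels, L):
    """labels: 2-D grid of per-pixel labels (0 = background, assumed).
    Returns [min_y, min_x, max_y, max_x] per label slot 1..L."""
    INF = 10 ** 9
    box = [[INF, INF, -INF, -INF] for _ in range(L + 1)]
    for y, row in enumerate(labels):
        for x, lab in enumerate(row):
            for slot in range(1, L + 1):        # fixed trip count: all L slots
                m = 1 if lab == slot else 0     # mask, branchless in practice
                b = box[slot]
                b[0] = m * min(b[0], y) + (1 - m) * b[0]
                b[1] = m * min(b[1], x) + (1 - m) * b[1]
                b[2] = m * max(b[2], y) + (1 - m) * b[2]
                b[3] = m * max(b[3], x) + (1 - m) * b[3]
    return box
```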
\subsubsection{Object cropping}
\label{s:eval:cropping}
We first evaluate oblivious object cropping while leaking object sizes. We include three variants: the na\"ive approach; the two-phase approach; and
a further optimization that advances the sliding window forward multiple rows/columns at a time.
\cref{fig:objcrop:latency} plots the cost of cropping variable-sized objects from a $1280 \times 720$ frame, showing that the proposed refinements reduce latency by three orders of magnitude.
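For reference, the na\"ive approach can be sketched as follows (a simplified model that hides the object's position but leaks its size; parameter names are illustrative): a $p \times q$ window is copied from every possible offset, and a mask retains the copy only at the secret location.

```python
# Hedged sketch of position-hiding (size-leaking) crop: the access pattern
# covers every possible window offset, so the trace is independent of the
# secret object position.  Cost is O(H * W * p * q).

def oblivious_crop(frame, top, left, p, q):
    H, W = len(frame), len(frame[0])
    out = [[0] * q for _ in range(p)]
    for y in range(H - p + 1):                 # fixed trip counts
        for x in range(W - q + 1):
            m = 1 if (y == top and x == left) else 0   # CMOV mask in practice
            for i in range(p):
                for j in range(q):
                    out[i][j] = m * frame[y + i][x + j] + (1 - m) * out[i][j]
    return out
```

The blow-up across all offsets is what the two-phase refinement improves on, roughly by processing rows and columns separately.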
\cref{fig:resize:lat} plots the latency of obliviously resizing the target ROI within a cropped image to hide the object's size.
While the latency of na\"ive bilinear interpolation is high (10s of milliseconds) for large objects, the optimized two-pass approach (that exploits cache locality by transposing the image before the second pass; \cref{s:algorithms:cropping:size}) reduces latency by two orders of magnitude down to one millisecond for large objects.
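The two-pass structure can be sketched as follows (a simplified scalar model of separable bilinear resampling; the real implementation is vectorized): each pass resamples rows only, and the transpose in between keeps the second pass's memory accesses sequential, which is the cache-locality benefit noted above.

```python
# Sketch of two-pass separable resize: pass 1 resamples horizontally; a
# transpose turns columns into rows so pass 2 is also a row-wise (and thus
# cache-friendly) resample; a final transpose restores orientation.

def resize_rows(img, new_w):
    out = []
    for row in img:
        scale = (len(row) - 1) / max(new_w - 1, 1)
        new_row = []
        for j in range(new_w):
            pos = j * scale
            i0 = int(pos)
            i1 = min(i0 + 1, len(row) - 1)
            frac = pos - i0
            new_row.append(row[i0] * (1 - frac) + row[i1] * frac)
        out.append(new_row)
    return out

def transpose(img):
    return [list(col) for col in zip(*img)]

def bilinear_resize(img, new_w, new_h):
    tmp = resize_rows(img, new_w)        # pass 1: horizontal
    tmp = transpose(tmp)                 # rows <-> columns
    tmp = resize_rows(tmp, new_h)        # pass 2: also row-wise
    return transpose(tmp)
```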
\subsubsection{Object tracking}
\cref{fig:sift:lat} plots the latency of object tracking with and without obliviousness.
We examine our sample streams at various resolutions to determine upper bounds on the maximum number of features in frames.
As the resolution increases, the overhead of obliviousness increases as well because our algorithm involves an oblivious sort of the intermediate set of detected features, the cost of which is superlinear in the size of the set.
Overall, the overhead is $< 2\times$.
\subsubsection{CNN classification on GPU}
\paragraph{Buffer} \cref{fig:queue:lat} benchmarks the sorting cost as a function of the object size and the buffer size. For buffer sizes smaller than 50, the sorting cost remains under \SI{5}{\milli\second}. %
\paragraph{Inference} We measure the performance of CNN object classification on the GPU. As discussed in \cref{s:algorithms:cnn}, oblivious inference comes free of cost. %
\cref{table:data:cnn} lists the throughput of different CNN models using the proprietary NVIDIA driver, with CUDA version 9.2. Each model takes as input a batch of 10 objects of size $224\times224$. %
Further, since GPU memory is limited to \SI{3}{\giga\byte}, we also list the maximum number of concurrent models that can run on our testbed.
As we show in \cref{s:eval:overall}, the latter has a direct bearing on the number of video analytics pipelines that can be concurrently served.%
\begin{figure*}[t]
\minipage[b]{0.36\textwidth}
\centering
\small
\begin{tabular}[t]{r|c|c}
\thickhline
CNN & Batches/s & Max no. of models \T\B \\
\hline
AlexNet & 40.3 & ~7 \T \\
ResNet-18 & 18.4 & 4 \\
ResNet-50 & 8.2 & 1 \\
VGG-16 & 5.4 & 1 \\
VGG-19 & 4.4 & 1 \\
Yolo & 3.9 & ~1 \B \\
\thickhline
\end{tabular}
\caption{CNN throughput (batch size 10).}
\label{table:data:cnn}
\endminipage\hfill
\minipage[b]{0.30\textwidth}%
\centering
\includegraphics[width=0.95\linewidth]{plots/qsort.pdf}
\caption{Oblivious queue sort.}
\label{fig:queue:lat}
\endminipage\hfill
\minipage[b]{0.33\textwidth}
\centering
\ifpadding
\includegraphics[width=\linewidth]{plots/tput_cpu_5objs.pdf}
\else
\includegraphics[width=\linewidth]{plots/tput_cpu_no_padding_5objs.pdf}
\fi
\caption{CPU throughput (pipeline 1).}
\label{fig:tput:cpu}
\endminipage\hfill
\end{figure*}
\subsection{System Performance}\label{s:eval:overall}
We now evaluate the end-to-end performance of the video analytics pipeline using four real video streams. We present the overheads of running Visor\xspace's data-oblivious techniques and hosting the pipeline in a hybrid enclave.
We evaluate the two example pipelines in \cref{fig:pipeline}: pipeline 1 uses an object classifier CNN; pipeline 2 uses an object detector CNN (Yolo), and performs object tracking on the CPU.
\emph{Pipeline 1 configuration.} We run inference on objects larger than 1\% of the frame size, as smaller detected objects are too small to be meaningful. Across our videos, the number of such objects per frame is small: no frame has more than 5 objects, and 97--99\% of frames have fewer than 2--3 objects.
Therefore, we configure: \begin{enumerate*}[($i$)] \item Visor\xspace's object detection stage to conservatively output 5 objects per frame (including dummies) into the buffer, \item the consumption rate of Visor\xspace's CNN module to 2 or 3 objects per frame (depending on the stream), and \item the buffer size to 50, which suffices to prevent non-dummy objects from being overwritten.
\end{enumerate*}
\emph{Pipeline 2 configuration.} The Yolo object detection CNN ingests entire frames, instead of individual objects. In the baseline, we filter out frames that do not contain any objects using background subtraction; we forego this filtering in the oblivious version since, in our sample streams, most frames contain foreground objects.
Additionally, Yolo expects frames of resolution $448\times 448$, so we resize the input video streams to that resolution.
\begin{figure*}
\minipage[t]{0.33\textwidth}%
\centering
\ifpadding
\includegraphics[width=\linewidth]{plots/tput_cpu_tracker_padded.pdf}
\else
\includegraphics[width=\linewidth]{plots/tput_cpu_tracker.pdf}
\fi
\caption{CPU throughput (pipeline 2).}
\label{fig:tput:cpu:tracker}
\endminipage\hfill
\minipage[t]{0.42\textwidth}
\centering
\ifpadding
\includegraphics[width=\linewidth]{plots/e2e_tput.pdf}
\else
\includegraphics[width=\linewidth]{plots/e2e_tput_no_padding.pdf}
\fi
\caption{Overall pipeline throughput.}
\label{fig:tput:e2e}
\endminipage\hfill
\minipage[t]{0.23\textwidth}
\centering
\ifpadding
\includegraphics[width=\linewidth]{plots/sgx_obliv_lat_all.pdf}
\else
\includegraphics[width=\linewidth]{plots/sgx_obliv_lat_no_padding.pdf}
\fi
\caption{Cost of enclaves.}
\label{fig:lat:sgx}
\endminipage\hfill
\end{figure*}
\paragraph{Cost of obliviousness} \cref{fig:tput:cpu,fig:tput:cpu:tracker} plot the overhead of Visor\xspace on the CPU-side components of pipelines 1 and 2, while varying the number of concurrent pipelines.
\ifpadding
Visor\xspace reduces \emph{peak} CPU throughput by \approx$2.6\times$--$6\times$ across the two pipelines, compared to the non-oblivious baseline.
However, the throughput of the system ultimately depends on the number of models that can fit in GPU memory.
\cref{fig:tput:e2e} plots Visor\xspace's end-to-end performance for both pipelines, across all four sample video streams.
In the presence of CNN inference, Visor\xspace's overheads depend on the model complexity.
Pipelines that utilize light models, such as AlexNet and ResNet-18, are bottlenecked by the CPU.
In such cases, the overhead is determined by the cost of obliviousness incurred by the CPU components.
With heavier models such as ResNet-50 and VGG, the performance bottleneck shifts to the GPU. In this case, the overhead of Visor\xspace is governed by the number of dummy objects processed by the GPU (as described in \cref{s:system:communication}).
Overall, the cost of obliviousness remains in the range of $2.2\times$--$5.9\times$ across video streams for the first pipeline.
In the second pipeline, the overhead is \approx$2\times$.
The GPU can fit only a single Yolo model.
The overall performance, however, is bottlenecked at the CPU because the object tracking routine is relatively expensive.
\paragraph{Cost of enclaves} We measure the cost of running the pipelines in CPU/GPU enclaves by replacing the NVIDIA stack with Graviton's stack, which comprises open-source CUDA runtime (Gdev~\cite{Kato:gdev}) and GPU driver (Nouveau~\cite{nouveau}).
\cref{fig:lat:sgx} compares Visor\xspace against a non-oblivious baseline when both systems are hosted in CPU/GPU enclaves. As SGX's EPC size is limited to \SI{93.5}{\mega\byte}, workloads with large memory footprints incur high overhead.
For pipeline 1, and for large frame resolutions, the latency of background subtraction increases from \approx\SI{6}{\milli\second} to \SI{225}{\milli\second} due to its working set size being \SI{132}{\mega\byte}.
With Visor\xspace, the pipeline's net latency increases by $2.4\times$ (as SGX overheads mask some of Visor\xspace's overheads), while the memory footprint grows to \SI{190}{\mega\byte}.
When the pipeline operates on lower frame resolutions, such that its memory footprint fits within current EPC, the latency of the non-oblivious baseline tracks the latency of the insecure baseline (a few milliseconds); the additional overhead of obliviousness is $2.3\times$. %
For pipeline 2, the limited EPC increases the latency of object tracking from \approx\SI{90}{\milli\second} to \approx\SI{240}{\milli\second}.
With Visor\xspace's obliviousness, the net latency increases by $1.7\times$.
\else
Visor\xspace reduces peak CPU throughput by \approx$2.1\times$--$2.8\times$ across the two pipelines, compared to the non-oblivious baseline.
However, the throughput of the system ultimately depends on the number of models that can fit in GPU memory.
\cref{fig:tput:e2e} plots Visor\xspace's end-to-end performance for both pipelines, across all four sample video streams.
In the presence of CNN inference, Visor\xspace's overheads depend on the model complexity.
Pipelines that utilize light models, such as AlexNet and ResNet-18, are bottlenecked by the CPU.
In such cases, the overhead is determined by the cost of obliviousness incurred by the CPU components.
With heavier models such as ResNet-50 and VGG, the performance bottleneck shifts to the GPU. In this case, the overhead of Visor\xspace is governed by the number of dummy objects processed by the GPU (as described in \cref{s:system:communication}).
Overall, the cost of obliviousness remains in the range of $1.8\times$--$2.9\times$ across video streams for the first pipeline.
In the second pipeline, the overhead is in the range of $1.6\times$--$1.8\times$.
The GPU can fit only a single Yolo model.
The overall performance, however, is bottlenecked at the CPU because the object tracking routine is relatively expensive.
\paragraph{Cost of enclaves} We measure the cost of running the pipelines in CPU/GPU enclaves by replacing the NVIDIA stack with Graviton's stack, which comprises open-source CUDA runtime (Gdev~\cite{Kato:gdev}) and GPU driver (Nouveau~\cite{nouveau}).
\cref{fig:lat:sgx} compares Visor\xspace against a non-oblivious baseline with both systems hosted in CPU/GPU enclaves. As SGX's EPC size is limited to \SI{93.5}{\mega\byte}, workloads with larger memory footprints incur high overhead.
With pipeline 1, for large resolutions, the latency of background subtraction increases from \approx\SI{6}{\milli\second} to \SI{225}{\milli\second} due to its working set size being \SI{132}{\mega\byte}.
Due to this limitation, the CPU is saturated by a single pipeline.
With Visor\xspace, the pipeline's net latency increases by $1.2\times$ (as SGX overheads mask some of Visor\xspace's overheads), while the memory footprint grows to \SI{190}{\mega\byte}.
We also quantify overheads when the pipeline operates on lower frame resolutions such that its memory footprint fits within current EPC.
In this case, the latency of the non-oblivious baseline tracks the latency of the insecure baseline (a few milliseconds); the additional overhead of obliviousness is $2.3\times$. %
Similarly, with pipeline 2, the limited EPC increases the latency of object tracking from \approx\SI{90}{\milli\second} to \approx\SI{240}{\milli\second}.
With Visor\xspace's obliviousness, the net latency increases by $1.5\times$.
\fi
\subsection{Comparison against Prior Work}\label{s:evaluation:related}
We conclude our evaluation by comparing Visor\xspace against Obfuscuro~\cite{Obfuscuro}, a state-of-the-art general-purpose system for oblivious program execution.
The current implementation of Obfuscuro supports a limited set of instructions, and hence cannot run the entire video analytics pipeline.
Instead, we ported the OpenCV object cropping module to Obfuscuro, as it requires only simple assignment operations.
Cropping objects of size $128\times128$ and $16\times16$ (from a $1280\times720$ image) takes $8.5$ hours and $8$ minutes respectively in Obfuscuro, versus \SI{800}{\micro\second} and \SI{200}{\micro\second} in Visor\xspace, making Visor\xspace faster by over $6$ to $7$ orders of magnitude.
We note, however, that Obfuscuro targets stronger guarantees than Visor\xspace as it also aims to obfuscate the programs; hence, it is not a strictly apples-to-apples comparison. Nonetheless, the large gap in performance is hard to bridge, and our experiments demonstrate the benefit of Visor\xspace's customized solutions.
Other tools for automatically synthesizing or executing oblivious programs are either closed-source~\cite{Raccoon,Wu:ISSTA:2018}, require special hardware~\cite{HOP:Nayak, ghostrider, Phantom}, or require custom language support~\cite{Fact:Cauligi}.
However, we note that the authors of Raccoon~\cite{Raccoon} (which provides similar levels of security as Visor\xspace) report up to $1000\times$ overhead on toy programs; the overhead would arguably be higher for complex programs like video analytics.
\section{Security Proofs and Pseudocode}\label{s:proofs}
We now provide detailed pseudocode along with proofs of security for our algorithms.
We start by providing a formal definition of data-obliviousness.
\newcommand{\mathcal{A}}{\mathcal{A}}
\newcommand{\mathcal{S}}{\mathcal{S}}
\newcommand{\mathcal{L}}{\mathcal{L}}
Let $\code{trace}_\mathcal{A}(x)$ be the trace of observations that the adversary can make during an execution of an algorithm $\mathcal{A}$, when given some input $x$, \ie the sequence of accessed memory addresses, along with the (encrypted) data that is read or written to the addresses.
To prove that the algorithm is data-oblivious, we show that there exists a simulator program that can produce a trace $T$ indistinguishable from $\code{trace}_\mathcal{A} (x)$, when given \emph{only} the public parameters for the algorithm, and regardless of the value of $x$.
Since $T$ does not depend on any private data, the indistinguishability of $T$ and $\code{trace}_\mathcal{A}$ implies that the latter leaks no information about the private data to the adversary, and only depends on the public parameters.
The following definition captures this notion formally.
\begin{definition}[Data-obliviousness]\label{def:oblivious}
We say that an algorithm $\mathcal{A}$ is data-oblivious if there exists a polynomial-time simulator $\mathcal{S}$ such that for any input $x$
$$
\code{trace}_\mathcal{A}(x) \equiv \mathcal{S}(\mathcal{L}(\mathcal{A}))
$$
where $\mathcal{L}(\mathcal{A})$ is the leakage function and represents the public parameters of $\mathcal{A}$.
\end{definition}
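The proofs below repeatedly rely on the data-oblivious primitives \code{oassign}, \code{oaccess}, and \code{osort} from the main text. As a hedged illustration of why such primitives satisfy \cref{def:oblivious} (a minimal sketch, not Visor\xspace's actual enclave implementation), a branchless select and a linear-scan array access can be written as:

```python
TRACE = []  # log of touched addresses, for illustration only

def oassign(cond, a, b):
    # Branchless select: returns a if cond else b, with an identical
    # instruction and memory trace regardless of cond.
    mask = -int(bool(cond))  # all ones if cond, all zeros otherwise
    return (a & mask) | (b & ~mask)

def oaccess(arr, idx):
    # Oblivious array read: touch every slot, keep only arr[idx].
    out = 0
    for i in range(len(arr)):
        TRACE.append(("read", i))  # same addresses for every idx
        out = oassign(i == idx, arr[i], out)
    return out
```

Because \code{oaccess} reads every index in order, its trace depends only on the array length; a simulator that knows the length can reproduce it exactly, which is the structure of every proof in this section.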
We now prove the security of each of our algorithms with respect to \cref{def:oblivious} in the following subsections.
\cref{table:leakage} summarizes the public parameters across Visor\xspace oblivious vision modules that are leaked to the attacker.
\begin{figure}
\centering
\small
\begin{tabular}[t]{p{2.5cm}|p{5cm}}
\thickhline
Component & Public parameters \T\B \\
\thickhline
Video decoding &
\begin{enumerate*}[($i$)]
\item Metadata of video stream (format, frame rate, resolution);
\item Number of bits used to encode each (padded) row of blocks;
\item Maximum number of bits encoded per 2-byte chunk.
\end{enumerate*}\\\hline
Background sub. & -- \\\hline
Bounding box det. &
\begin{enumerate*}[($i$)]
\item Maximum number of objects per image; \item Maximum number of different labels that can be assigned to pixels (an object consists of all labels that are adjacent to each other).
\end{enumerate*}\\\hline
Object cropping &
Upper bounds on object dimensions.
\\\hline
Object tracking &
\begin{enumerate*}[($i$)]
\item An upper bound on the intermediate number of features;
\item An upper bound on the total number of features.
\end{enumerate*}\\\hline
CNN Inference & CNN architecture. \\\hline
Overall & Modules and algorithms used in the pipeline.\\
\thickhline
\end{tabular}
\caption{Summary of public parameters in Visor\xspace's oblivious vision modules observable by the attacker.
These consist of the input parameters provided to Visor\xspace, along with information leaked by Visor\xspace (such as frame rate and resolution).
}
\label{table:leakage}
\vspace{0.5cm}
\end{figure}
\subsection{Oblivious video decoding}\label{s:proofs:decoder}
\cref{alg:decoding} provides detailed pseudocode for obliviously decoding the bitstream into pixel coefficients during video decoding, as described in \cref{s:decoding:bitstream}. We first explain the pseudocode in more detail, following the high-level description of \cref{s:decoding:bitstream}.
In our implementation, we model the prefix tree as a finite state machine (FSM).
While traversing the tree, we decode a single bit at each state (\ie each node in the tree) using the \textsc{EntropyDecode} subroutine.
This subroutine takes in a pointer \code{ptr} to the bitstream, and decodes a single bit from the bitstream via arithmetic operations. If no more bits can be decoded at the current position, it outputs \code{null}; otherwise, it outputs the decoded bit 0 or 1.
To enable decoding and traversal, each state $S$ (\ie each node in the tree) is a structure containing four fields:
$(\code{prob}, \allowbreak \code{next}_0, \allowbreak \code{next}_1, \code{type})$.
Here, $\code{prob}$ is the probability that the bit to be decoded at $S$ is 0 (as defined in the VP8 specifications~\cite{RFC:VP8Decoding}); and $\code{next}_0$ and $\code{next}_1$ are the indices of the next state based on whether the decoded bit is a 0 or 1.
Some states in the FSM are end states, \ie states that complete reconstructing a data value; for these states, \code{type} is set to \code{`end'}.
States that are not end states (\ie decode intermediate bits) have \code{type} set to \code{`mid'}.
The FSM also contains a dummy state $S_\code{dummy}$ that performs dummy bit decodes by invoking the entropy decoder with \code{isDummy} set to true; for the dummy state, \code{type} is set to \code{`dummy'}.
Next, we represent the FSM as an array---\code{Nodes} in \cref{alg:decoding}.
We set $\code{Nodes}[0]$ to be $S_\code{dummy}$, and $\code{Nodes}[1]$ to be the starting state.
This enables us to implement transitions to any state $S_j$ by simply fetching the state at index $j$ from the array using the \code{oaccess} primitive.
As a result, the current state of the FSM remains hidden across transitions.
Each transition passes four items of information to the next state: (i)~the state that made the transition $S_\code{parent}$; (ii)~an integer \code{pos} that denotes the position in the bitstream of the current bit being decoded; (iii)~the (partially) constructed data value \code{data}, and (iv)~a counter \code{ctr} that counts the number of bits decoded at each position.
Note that the structure of the prefix tree (and hence the array \code{Nodes}) is public information since it is constructed per the VP8 specifications~\cite{RFC:VP8Decoding}.
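To make the array-based traversal concrete, the sketch below uses an invented three-node tree (far simpler than the actual VP8 tree) and fetches the next state by scanning the whole node array, so an observer only sees that some node was fetched. The data-dependent branches here are stand-ins for \code{oassign}; the real code is branchless.

```python
# Hypothetical 3-node tree. Index 0 is the dummy state, index 1 the
# root, index 2 an end state. Node fields: (next0, next1, is_end).
NODES = [(0, 0, False),  # dummy state
         (2, 2, False),  # root: either bit moves to the end state
         (1, 1, True)]   # end state: restart at the root

def fetch(nodes, idx):
    # Stand-in for oaccess: touch every node so idx stays hidden.
    out = nodes[0]
    for i, n in enumerate(nodes):
        out = n if i == idx else out  # real code uses oassign
    return out

def run_fsm(bits):
    # Traverse the tree over the bit sequence, recording the bit
    # positions at which an end state is reached.
    state, ends = 1, []
    for pos, b in enumerate(bits):
        next0, next1, is_end = fetch(NODES, state)
        if is_end:          # data-dependent branch: stand-in only
            ends.append(pos)
        state = next1 if b else next0
    return ends
```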
\begin{algorithm}[t]
\small
\caption{Bitstream decoding}\label{alg:decoding}
\begin{algorithmic}[1]
\State \textbf{Constants:} Upper bound on number of encoded bits per 2-byte chunk in bitstream $N_\code{chunk}$; total number of bits in bitstream $N_\code{bits}$; array representation of the prefix tree for decoding \code{Nodes}
\State \textbf{Globals:} The data value being decoded \code{data}; counter \code{ctr} that counts the number of bits decoded per chunk in the bitstream
\State \textbf{Input:} Bitstream $B$
\Procedure{DecodeBitstream}{$B$}
\State $\code{ptr} = B.\code{start}$
\State $S = \code{Nodes}[1]$
\State $S_\code{parent} = \code{null}$, $\code{data} = 0$, $\code{ctr} = 0$, $\code{pos} = 0$
\State $O = [~]$
\While {$\code{ptr} < B.\code{start} + N_\code{bits}$}
\State $\code{isDummy = (S.\code{type} == `dummy')}$
\State $b = \textsc{EntropyDecode}(\code{isDummy}, \code{ptr}, S.\code{prob})$
\State $\code{isDummy = (b == \code{null})}$
\State $\code{data} = \textsc{UpdateData}(\code{isDummy}, \code{data}, b)$
\State $\code{pos} += 1$
\State $\code{ctr} = \code{oassign}(\code{ctr} == N_\code{chunk}, 0, \code{ctr}+1)$
\State $\code{isEnd = (S.\code{type} == `\code{end}')}$
\State $o_1 = \code{oassign}(\code{isEnd}, \code{pos}, 0)$
\State $o_2 = \code{oassign}(\code{isEnd}, \code{data}, \code{null})$
\Statex
\State $\code{parent} = \code{oassign}(\lnot\code{isDummy}, S.\code{index}, S_\code{parent}.\code{index})$
\State $\code{next} = \code{oassign}(b == 0, S.\code{next}_0, S.\code{index})$
\State $\code{next} = \code{oassign}(b == 1, S.\code{next}_1, \code{next})$
\State $\code{next} = \code{oassign}(\code{isDummy}, 0, \code{next})$
\State $\code{next} = \code{oassign}(\code{isDummy} \wedge$
\Statex \hskip\algorithmicindent \hskip\algorithmicindent \hskip\algorithmicindent \hskip\algorithmicindent
$\code{ctr} == N_\code{chunk}, \code{parent}, \code{next})$
\Statex
\State $O.\textsc{Append}((o_1, o_2))$
\State $S_\code{parent} = \code{oaccess}(\code{Nodes},\code{parent})$
\State $S = \code{oaccess}(\code{Nodes}, \code{next})$
\State $\code{ptr} = \code{oassign}(\code{ctr} == N_\code{chunk}, \code{ptr}+2, \code{ptr})$
\EndWhile
\State $\code{osort}(O)$
\State \Return $O$
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{theorem}
The bitstream decoding algorithm in \cref{alg:decoding} is data-oblivious per \cref{def:oblivious}, with public parameters $N_\code{bits}$, $N_\code{chunk}$, and the size of the prefix tree array \code{Nodes} (which is a known constant).
\end{theorem}
\begin{proof}
The simulator starts by generating a random bitstream $B$ of length $N_\code{bits}$, and then simply executes \cref{alg:decoding}.
It outputs the trace produced by the algorithm.
Lines 5--8 have fixed access patterns.
The loop in line 9 runs a fixed number of times: \code{ctr} increments by 1 in each run of the loop on line 15 until it becomes equal to $N_\code{chunk}$, at which point the loop variable \code{ptr} is incremented by 2 in line 27.
Thus, the loop makes exactly $N_\code{bits}\times N_\code{chunk} / 2$ iterations.
Within the loop, line 10 has fixed access patterns.
In line 11, the function \textsc{EntropyDecode} dereferences \code{ptr} and decodes a single bit from the dereferenced value using simple arithmetic operations; if \code{isDummy} is true it performs dummy operations instead, using \code{oassign}.
Its access patterns thus only depend on the location pointed to by \code{ptr} within the bitstream $B$, and not the contents of the bitstream.
Further, the value of the loop variable \code{ptr} is incremented per a fixed schedule (as described above), and thus only depends on the value of $N_\code{bits}$.
Line 12 has fixed access patterns.
In line 13, the function \textsc{UpdateData} updates the value of \code{data} with $b$ using arithmetic operations implemented using \code{oassign}. Its access patterns are thus independent of the \code{data} or $b$.
Lines 14--24 have fixed access patterns.
The access patterns of lines 25--26 only depend on the length of \code{Nodes}, which is fixed and public.
Line 27 also has fixed access patterns.
Finally, line 29 uses the \code{osort} primitive to sort the array $O$; its access patterns thus depend on the length of $O$. Since a single tuple is appended to $O$ per iteration of the loop, the length of $O$ is equal to the number of iterations, which only depends on $N_\code{bits}$ and $N_\code{chunk}$ as described above.
Thus, the trace produced by the algorithm can be simulated only using the values of $N_\code{bits}$ and $N_\code{chunk}$.
\end{proof}
\subsection{Oblivious image processing}\label{s:proofs:algorithms}
In this section, we provide pseudocode along with proofs of security for the image processing algorithms described in \cref{s:algorithms}.
For each algorithm, we first briefly describe its pseudocode, and then prove its security with respect to \cref{def:oblivious}.
\subsubsection{Background subtraction}
As described in \cref{s:algorithms:bgs}, the background subtraction algorithm (\cref{alg:bgs}) maintains a mixture of $M$ Gaussian components per pixel.
Let $\vec{x}^{(t)}$ denote the value of a pixel in RGB at time $t$.
The algorithm uses the value of the pixel to update each Gaussian component via a set of arithmetic operations (lines 5--8 in the pseudocode) along with their weights $\pi_m$ such that,
over time, components that represent background values for the pixel come to have larger weights, while foreground values are represented by components having smaller weights.
Then, it computes the distance of the sample from each of the $M$ components. If no component is sufficiently close, it adds a new component, increments $M$, and, if the new $M > M_\code{max}$, discards the component with the smallest weight $\pi_m$ (lines 9--21).
Finally, it uses the $B$ largest components by weight to determine whether the pixel is part of the background (lines 22--30).
Note that $M_\code{max}$ and $B$ are algorithmic constants, independent of the input video streams.
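As an illustration of the per-component weight update (lines 5--8), the standard mixture-of-Gaussians rule nudges each weight toward 1 for the matched component and toward 0 for the rest. This is a hedged sketch: the learning rate below is an arbitrary illustrative value, and Visor\xspace's exact update follows the underlying background model.

```python
ALPHA = 0.01  # illustrative learning rate, not Visor's actual value

def update_weights(weights, matched_idx):
    # The matched component's weight grows toward 1; all others decay
    # toward 0. Every weight is touched, so the access pattern does
    # not reveal which component matched.
    return [w + ALPHA * ((1.0 if i == matched_idx else 0.0) - w)
            for i, w in enumerate(weights)]
```

A useful invariant is that the weights keep summing to 1, so the final background decision (lines 22--30) can compare cumulative weights against $c_f$ and $c_\code{thr}$ directly.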
\begin{algorithm}[t]
\small
\caption{Background subtraction}\label{alg:bgs}
\begin{algorithmic}[1]
\State \textbf{Constants:} Maximum number of Gaussian components $M_\code{max}$, number of components to count towards background decision $B$; threshold measures $\delta_\code{thr}$, $c_f$, and $c_\code{thr}$
\State \textbf{Globals:} Actual number of Gaussian components $M$, array of Gaussian components \code{GMM} of size $M_\code{max}$
\State \textbf{Input:} Pixel $x$
\Procedure{BackgroundSubtraction}{$x$}
\For{$m=1$ \textbf{to} $M_\code{max}$}
\State $\code{isDummy} = (m > M)$
\State $\textsc{UpdateGaussian}(\code{isDummy}, \code{GMM}[m], x)$
\EndFor
\Statex
\State $\textsc{SortByWeight}(\code{GMM})$
\State $\code{isClose} = \code{false}$
\For{$m=1$ \textbf{to} $M_\code{max}$}
\State $\delta = \textsc{GetDistance}(\code{GMM}[m], x)$
\State $\code{isClose} = \code{isClose} \vee (\delta > \delta_\code{thr})$
\EndFor
\State $M = \code{oassign}(\lnot\code{isClose} \wedge (M < M_\code{max}), M+1, M)$
\State $G = \textsc{GenerateGaussian}()$
\State $\code{GMM}[M_\code{max}-1] = \code{oassign}(\code{isClose}, \code{GMM}[M_\code{max}-1], G)$
\For{$m=M_\code{max}-1$ \textbf{to} $1$}
\State $\code{toSwap} = (\code{GMM}[m].\pi < \code{GMM}[m+1].\pi)$
\State $\code{GMM}[m] = \code{oassign}(\code{toSwap}, \code{GMM}[m+1], \code{GMM}[m])$
\EndFor
\Statex
\State $c = 0$
\State $p = 0$
\State $\code{toInclude} = \code{true}$
\For{$m=1$ \textbf{to} $B$}
\State $c = c + \code{GMM}[m].\pi$
\State $p = \code{oassign}(\code{toInclude}, p+c, p)$
\State $\code{toInclude} = \code{oassign}(c > c_f, \code{false}, \code{toInclude})$
\EndFor
\State \Return $p > c_\code{thr}$
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{theorem}
The background subtraction algorithm in \cref{alg:bgs} is data-oblivious per \cref{def:oblivious}, with public parameters $M_\code{max}$ and $B$ (which are known constants).
\end{theorem}
\begin{proof}
The simulator chooses a random pixel value $x$ and simply runs the algorithm.
It outputs the trace produced by the algorithm.
Lines 6--7 are executed exactly $M_\code{max}$ times. Here, the loop variable $m$ is public information.
Line 6 has fixed access patterns. The function \textsc{UpdateGaussian} performs a set of arithmetic operations independent of the value of $x$, via \code{oassign} operations using the condition value \code{isDummy}.
The function updates the component's parameters and weight using these operations.
The access patterns of line 7 therefore only depend on $m$.
\textsc{SortByWeight} in line 9 sorts the \code{GMM} array using the oblivious sorting primitive \code{osort}. Hence, the access patterns of this step only depend on the length of \code{GMM}, which is $M_\code{max}$.
Lines 12--13 are executed exactly $M_\code{max}$ times. %
The function \textsc{GetDistance} computes the distance of $x$ from $\code{GMM}[m]$ via arithmetic operations, independent of the value of $x$.
Thus, the access patterns of line 12 depend only on the loop variable $m$.
Line 13 has fixed access patterns.
Lines 15--16 have fixed access patterns, and the access patterns of line 17 depend only on $M_\code{max}$.
Lines 19--20 are executed exactly $M_\code{max}-1$ times. %
The access patterns of both lines depend only on the loop variable $m$.
Lines 22--24 have fixed access patterns.
Lines 26--28 are executed exactly $M_\code{max}$ times. %
The access patterns of line 26 depend only on the loop variable $m$; lines 27 and 28 have fixed access patterns.
Thus, the trace produced by the simulator is indistinguishable from the trace produced by a real run.
\end{proof}
\subsubsection{Object detection}
\cref{alg:bbox} describes our algorithm for detecting bounding boxes of objects in an input image.
The algorithm maintains a list $L$ of tuples of the form $(\code{parent}, \code{bbox})$, where each tuple corresponds to a distinct ``label'' that will eventually be mapped to each blob. Initially, the list $L$ is empty. The \code{parent} field identifies other labels that are connected to the tuple's label (explained shortly), and the \code{bbox} field maintains the coordinates of the bounding box of the label (or blob).%
The algorithm first scans the image row-wise (lines 6--22).
Whenever a white pixel is detected, the algorithm checks if any of its neighbors scanned thus far were also white (\ie pixel to the left and the three pixels above).
In case at least one neighbor is white, the pixel is assigned the label of the neighbor with the smallest numerical value, $l_\code{min}$.
The algorithm records that all white neighbors are connected, by setting the \code{parent} fields for each neighboring label to $l_\code{min}$ and updating the \code{bbox} field for $l_\code{min}$.
In case no neighbor is white, the pixel is assigned a new label, and a new entry is added to the list $L$, with its $\code{parent}$ set to the label itself and \code{bbox} as the coordinates of the current pixel.
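For reference, this first scan corresponds to the classic two-pass connected-component labeling sketched below in plain, non-oblivious form; the version in \cref{alg:bbox} replaces every data-dependent branch and lookup with \code{oassign}/\code{oaccess} scans.

```python
def first_pass(img):
    # First scan of two-pass connected-component labeling over a
    # binary image (1 = white). Returns per-pixel labels and the
    # parent map recording which labels are connected.
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    parent = {}
    next_label = 1
    for i in range(h):
        for j in range(w):
            if img[i][j] == 0:
                continue
            # Labels of the left and three upper neighbors, if white.
            nbrs = [labels[y][x]
                    for y, x in ((i, j - 1), (i - 1, j - 1),
                                 (i - 1, j), (i - 1, j + 1))
                    if 0 <= y and 0 <= x < w and labels[y][x] > 0]
            if not nbrs:
                labels[i][j] = next_label
                parent[next_label] = next_label
                next_label += 1
            else:
                m = min(nbrs)
                labels[i][j] = m
                for l in nbrs:
                    parent[l] = m  # record connectivity
    return labels, parent
```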
Next, the algorithm \emph{merges} the bounding boxes of all connected labels into a single bounding box (lines 23--35).
Specifically, for every label $l$ in $L$, the algorithm first obtains the \code{parent} label of $l$ (say $l_\code{par}$), and then updates the \code{bbox} of $l_\code{par}$ to include the \code{bbox} of $l$. It repeats the process recursively with $l_\code{par}$, until it reaches a root label $l_\code{root}$ whose \code{parent} value is the label itself.
The process repeats for all labels in $L$, until only the root labels are left behind.
Each root label corresponds to a distinct object in the frame.
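The merge pass is essentially a union-find root chase followed by a rectangle union. In plain form it reads as follows; the oblivious variant in \cref{alg:bbox} bounds both steps by fixed $O(N^2)$ scans instead of data-dependent chases.

```python
def find_root(parent, label):
    # Follow parent pointers until a label that is its own parent.
    while parent[label] != label:
        label = parent[label]
    return label

def merge_bbox(a, b):
    # Union of two boxes given as (top, left, bottom, right).
    return (min(a[0], b[0]), min(a[1], b[1]),
            max(a[2], b[2]), max(a[3], b[3]))
```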
\begin{algorithm}[th!]
\small
\caption{Bounding box detection}\label{alg:bbox}
\begin{algorithmic}[1]
\State \textbf{Constants:} Maximum number of labels $N$
\State \textbf{Input:} Frame $F$
\Procedure{BoundingBoxDetection}{$F$}
\State Initialize list $L$ of $N$ tuples of type $(\code{parent}, \code{bbox})$,
\newline with $L[i].\code{parent} = i$
\State $\code{ctr} = 1$
\For{$i=1$ \textbf{to} $F.\code{height}$}
\For{$j=1$ \textbf{to} $F.\code{width}$}
\State $p = F[i][j]$
\State $\code{isWhite} = (p \ne 0)$
\State $(p_1, p_2, p_3, p_4) = \textsc{GetNeighbors}(F, i, j)$
\State $(l_1, l_2, l_3, l_4) = \textsc{GetNeighborLabels}(F, i, j)$
\State $\code{isNew} = (p_1 == p_2 == p_3 == p_4 == 0) \wedge \code{isWhite}$
\State $l_\code{min} = \textsc{GetMinLabel}(l_1, l_2, l_3, l_4)$
\State $l_\code{min} = \code{oassign}(\code{isNew}, \code{ctr}, l_\code{min})$
\State $\code{ctr} = \code{oassign}(\code{isNew}, \code{ctr} + 1, \code{ctr})$
\For{each label $l$ in $\{l_1, l_2, l_3, l_4\}$}
\State $\textsc{UpdateParent}(L, \lnot\code{isNew}, l, l_\code{min})$
\EndFor
\State $\textsc{UpdateBBox}(L, \code{isWhite}, l_\code{min}, i, j)$
\State $\textsc{SetLabel}(F, i, j, l_\code{min})$
\EndFor
\EndFor
\Statex
\For{$i=1$ \textbf{to} $N$}
\State $\code{par} = L[i].\code{parent}$
\State $\code{toMerge} = (\code{par} < i)$
\For{$j=i$ \textbf{to} $1$}
\State \begin{varwidth}[t]{\linewidth}
$L[i].\code{parent} = \code{oassign}(\code{toMerge}\wedge (\code{par}==j),$ \par
\hskip\algorithmicindent $L[j].\code{parent}, L[i].\code{parent})$
\end{varwidth}
\EndFor
\EndFor
\For{$i=1$ \textbf{to} $N$}
\For{$j=1$ \textbf{to} $N$}
\State $\code{toMerge} = (L[j].\code{parent} == i)$
\State $\textsc{MergeBBox}(\code{toMerge}, L[i].\code{bbox}, L[j].\code{bbox})$
\EndFor
\EndFor
\State \Return $L$
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{theorem}
The bounding box detection algorithm in \cref{alg:bbox} is data-oblivious, with public parameters $N$, and the height and width of the input frame.
\end{theorem}
\begin{proof}
The simulator generates a random frame $F$ of the given height and width and runs the algorithm.
It outputs the trace produced by the algorithm.
The access patterns of line 4 depend only on $N$.
Line 5 has fixed access patterns.
The loops (lines 6--22) are executed a fixed number of times, determined by the height and width of the frame.
The access patterns of line 8 depend only on the loop variables $i$ and $j$, which are public information.
Line 9 has fixed access patterns.
In line 10, the function \textsc{GetNeighbors} returns the four pixels neighboring the input coordinates ($i$ and $j$), and its access patterns thus depend only on $i$ and $j$.
Similarly in line 11, \textsc{GetNeighborLabels} looks up the labels assigned to the neighboring pixels, and thus has access patterns that only depend on $i$ and $j$.
Line 12 has fixed access patterns.
In line 13, \textsc{GetMinLabel} selects the minimum of the input values using \code{oassign} operations, and thus has fixed access patterns.
Lines 14--15 have fixed access patterns.
In line 17, \textsc{UpdateParent} uses \code{oaccess} combined with \code{oassign} to update $L[l].\code{parent}$ to $l_\code{min}$; it thus has access patterns that only depend on the length $N$ of the array $L$.
In line 18, \textsc{UpdateBbox} similarly uses \code{oaccess} combined with \code{oassign} to update $L[l_\code{min}].\code{bbox}$ with the current coordinates $i$ and $j$; its access patterns therefore only depend on $L$'s length $N$.
In line 19, \textsc{SetLabel} sets the label of the pixel at $F[i][j]$ to $l_\code{min}$; its access patterns depend only on the loop variables $i$ and $j$.
Lines 24--28 are run $N$ times.
The access patterns of line 24 depend only on the loop variable $i$.
Line 25 has fixed access patterns.
Line 27 is executed $i$ times, which is public information; also, the access patterns of this line only depend on the loop variables $i$ and $j$.
Lines 32--33 are run $N^2$ times.
The access patterns of line 32 only depend on the loop variable $j$.
In line 33, the function \textsc{MergeBbox} uses \code{oassign} operations to update $L[i].\code{bbox}$ with $L[j].\code{bbox}$; it therefore has fixed access patterns.
Thus, the trace produced by the simulator is indistinguishable from the trace produced by a real run of the algorithm.
\end{proof}
\subsubsection{Object cropping}
\cref{alg:crop,alg:resize} together describe Visor\xspace's oblivious cropping algorithm.
Visor\xspace crops out images of a fixed upper bounded size using \cref{alg:crop}, and then scales up the ROI within the cropped image using \cref{alg:resize} (as described in \cref{s:algorithms:cropping}).
The pseudocode is self-explanatory.
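For intuition, the row-copy pass of \cref{alg:crop} can be sketched as the predicated scan below. The conditional expression stands in for an \code{oassign}-based \textsc{CopyRows}; the real implementation is branchless, and every frame row is visited regardless of where the (secret) bounding box lies.

```python
def crop_rows(frame, top, out_h):
    # Scan every row of the frame; the copy into buf fires only when
    # the secret top row is reached, so the access pattern depends
    # only on the frame height and the public bound out_h.
    buf = [[0] * len(frame[0]) for _ in range(out_h)]
    for i in range(len(frame)):
        cond = (i == top)
        for k in range(out_h):
            src = frame[(i + k) % len(frame)]  # touched either way
            buf[k] = src[:] if cond else buf[k]  # stand-in for oassign
    return buf
```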
\begin{algorithm}[t]
\small
\caption{Object cropping}\label{alg:crop}
\begin{algorithmic}[1]
\State \textbf{Constants:} Upper bounds on object dimensions \code{height}, \code{width}
\State \textbf{Input:} Frame $F$, bounding box coordinates \code{bbox}
\Procedure{CropObject}{$F$, \code{bbox}}
\State Initialize an empty buffer \code{buf} with width $=F.\code{width}$ and height $=\code{height}$
\For{$i=1$ \textbf{to} $F$.\code{height}}
\State $\code{cond} = (i == \code{bbox.top})$
\State $\textsc{CopyRows}(\code{cond}, i, F, \code{buf})$
\EndFor
\Statex
\State Initialize an empty buffer \code{obj} with width $=\code{width}$ and height $=\code{height}$
\For{$i=1$ \textbf{to} $F$.\code{width}}
\State $\code{cond} = (i == \code{bbox.left})$
\State $\textsc{CopyCols}(\code{cond}, i, \code{buf}, \code{obj})$
\EndFor
\State \Return \code{obj}
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{theorem}
\label{thm:cropping}
The object cropping algorithm in \cref{alg:crop} is data-oblivious, with public parameters equal to the dimensions of the input frame, and the upper bounds on the object dimensions \code{height} and \code{width}.
\end{theorem}
\begin{proof}
The simulator generates a random frame of the given dimensions, along with a bounding box \code{bbox} with random coordinates.
It then runs the algorithm, and outputs the produced trace.
The access patterns of line 4 depend only on the frame's width, and the parameter \code{width}, both of which are known to the simulator.
Lines 6--7 run a fixed number of times, equal to the height of the frame.
Line 6 has fixed access patterns.
In line 7, \textsc{CopyRows} uses \code{oassign} to copy pixels from $F$ into \code{buf}; its access patterns thus only depend on the loop variable $i$, the width of the frame, and the parameter \code{height}.
The access patterns of line 9 depend only on the parameters \code{width} and \code{height}.
Lines 11--12 run a fixed number of times, equal to the width of the frame.
Line 11 has fixed access patterns.
In line 12, \textsc{CopyCols} uses \code{oassign} to copy pixels from \code{buf} into \code{obj}; its access patterns thus only depend on the loop variable $i$, and the parameters \code{height} and \code{width}.
Thus, the trace produced by the simulator is indistinguishable from the trace produced by a real run of the algorithm.
\end{proof}
\begin{algorithm}[t!]
\small
\caption{Object resizing}\label{alg:resize}
\begin{algorithmic}[1]
\State \textbf{Input:} Object buffer $O$, bounding box coordinates \code{bbox}
\Procedure{ResizeObject}{$O$, \code{bbox}}
\State\textsc{ResizeHorizontally}($O$, \code{bbox})
\State\textsc{Transpose}($O$)
\State\textsc{ResizeHorizontally}($O$, \code{bbox})
\State\textsc{Transpose}($O$)
\EndProcedure
\Statex
\Procedure{ResizeHorizontally}{$O$, \code{bbox}}%
\For{$i=1$ \textbf{to} $O$.\code{height}}
\For{$j=1$ \textbf{to} $O$.\code{width}}
\State $p = \textsc{PixelOfInterest}(j, \code{bbox})$%
\State $a = \code{oaccess}(O[i], p)$
\State $b = \code{oaccess}(O[i], p+1)$
\State $O[i][j] = \textsc{LinearInterpolate}(a, b)$
\EndFor
\EndFor
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{theorem}
The object resizing algorithm in \cref{alg:resize} is data-oblivious, with public parameters equal to the dimensions of the input object $O$.
\end{theorem}
\begin{proof}
The simulator generates a random object buffer with the given dimensions, along with a bounding box \code{bbox} with random coordinates.
It then runs the algorithm, and outputs the produced trace.
The function \code{Transpose} transposes the object buffer, and thus its access patterns only depend on the dimensions of $O$.
The function \code{ResizeHorizontally} works as follows.
The loops (lines 9--15) are executed a fixed number of times, determined by the dimensions of $O$.
Line 11 computes the location of the pixels to be used for linearly interpolating the current pixel, using a set of arithmetic operations; it thus has fixed access patterns.
Lines 12 and 13 have access patterns that only depend on the width of $O$.
In line 14, \textsc{LinearInterpolate} linearly interpolates the current pixel's value using a set of arithmetic operations; the access patterns of this line thus depend only on the loop variables.
Thus, the trace produced by the simulator is indistinguishable from the trace produced by a real run of the algorithm.
\end{proof}
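The horizontal pass amounts to standard linear interpolation in which every destination pixel is produced unconditionally. A hedged sketch follows (the exact interpolation weights and rounding in Visor\xspace may differ):

```python
def resize_row(row, src_w, dst_w):
    # Linearly interpolate a row of src_w pixels up to dst_w pixels.
    # Every output pixel is computed, so the access pattern depends
    # only on the two widths, not on the pixel values.
    out = []
    for j in range(dst_w):
        x = j * (src_w - 1) / (dst_w - 1) if dst_w > 1 else 0
        p = int(x)            # left source pixel
        frac = x - p          # blend factor toward the right pixel
        a = row[p]
        b = row[min(p + 1, src_w - 1)]
        out.append(a * (1 - frac) + b * frac)
    return out
```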
\subsubsection{Object tracking}
\cref{alg:featuredet} describes the feature detection phase of object tracking. We omit a description of feature matching since it is oblivious by default.
The algorithm first creates a set of increasingly blurred versions of the input image (line 5).
Then, it identifies a set of candidate \emph{keypoints} in these blurred images, \ie pixels that are maxima or minima relative to all their neighbors (lines 6--14).
This set of keypoints is further refined to identify those that are robust to changes in illumination (\ie have high intensity), or represent a ``corner'' (lines 15--18).
Mathematically, these tests involve the computation of derivatives at the candidate point, and then a comparison of the results against a threshold. Candidates that fail these tests are discarded.
Finally, for each keypoint, the algorithm constructs a \emph{feature descriptor}.
It calculates the ``orientation'' of the pixels around the keypoint (within a $16\times16$ neighborhood) based on pixel differences, and then constructs a histogram over the computed values (lines 20--24).
The histogram acts as the keypoint's descriptor.
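A simplified, SIFT-style version of the descriptor computation is sketched below; the bin count and the absence of Gaussian weighting are simplifications relative to the actual module, which operates on the $16\times16$ neighborhood with predicated arithmetic.

```python
import math

def orientation_histogram(patch, bins=8):
    # Gradient-orientation histogram over a pixel patch: each interior
    # pixel contributes its gradient magnitude to the bin of its
    # gradient direction. Every pixel is processed unconditionally.
    hist = [0.0] * bins
    h, w = len(patch), len(patch[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            dx = patch[y][x + 1] - patch[y][x - 1]
            dy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(dx, dy)
            ang = math.atan2(dy, dx) % (2 * math.pi)
            hist[int(ang / (2 * math.pi) * bins) % bins] += mag
    return hist
```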
\begin{algorithm}[t!]
\small
\caption{Feature detection}\label{alg:featuredet}
\begin{algorithmic}[1]
\State \textbf{Input:} Object buffer $O$, maximum number of candidate keypoints $N_\code{temp}$, maximum number of actual keypoints $N$
\Procedure{DetectFeatures}{$O$, $N_\code{temp}$, $N$}
\State Initialize an empty list $L$ of size $N_\code{temp}$ for candidate keypoints, and a list $H$ of size $N$ for features of final keypoints
\State $\code{ctr} = 0$
\State $\code{images} = \textsc{GetDifferenceOfGaussians}(O)$
\For {each pixel $p$ in \code{images}}
\State $\code{nbrs} = \textsc{GetNeighbors}(p)$
\State $\code{isExtrema} = \textsc{CheckExtrema}(p, \code{nbrs})$
\State $k = (p, \code{nbrs})$
\For {$i = 1$ to $N_\code{temp}$}
\State $L[i] = \code{oassign}(\code{isExtrema} \wedge i == \code{ctr}, k, L[i])$
\EndFor
\State $\code{ctr} = \code{oassign}(\code{isExtrema} \wedge \code{ctr} < N_\code{temp}, \code{ctr+1}, \code{ctr})$
\EndFor
\Statex
\For {$i = 1$ to $N_\code{temp}$}
\State $\code{isRobust} = \textsc{CheckRobustness}(L[i])$
\State $L[i] = \code{oassign}(\code{isRobust}, L[i], \code{null})$
\EndFor
\State $\code{osort}(L)$ such that non-null values move to the head of $L$
\Statex
\For {$i = 1$ to $N$}
\State $\code{bbox} = \textsc{CalcNeighborhoodBbox}(L[i])$
\State $\code{roi} = \textsc{CropObject}(\code{images}, \code{bbox})$
\State $H[i] = \textsc{CalcOrientationHist}(L[i], \code{roi})$
\EndFor
\State \Return $H$
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{theorem}
The feature detection algorithm in \cref{alg:featuredet} is data-oblivious, with public parameters equal to the dimensions of the input image $O$, and upper bounds $N_\code{temp}$ and $N$.
\end{theorem}
\begin{proof}
The simulator generates a random image buffer with the given dimensions, and then runs the algorithm.
It outputs the trace produced by the algorithm.
Line 5 performs Gaussian blurring operations on the input image $O$, which perform a convolution of the input image with a specified \emph{kernel} (\ie a small matrix). The access patterns of these matrix multiplications are fixed, and independent of the values of the matrices.
The loop (lines 6--14) runs a fixed number of times, the value of which depends on the resolution of the input image, which is public.
Line 7 fetches the neighbors of the current pixel; its access patterns therefore depend only on the coordinates of the loop variable, which are public.
Line 8 checks the value of the current pixel with the obtained \code{nbrs} using \code{oassign} operations, and thus has fixed access patterns, independent of the values.
Line 9 has fixed access patterns.
The loop in lines 10--12 executes a fixed $N_\code{temp}$ number of times.
The access patterns of line 11 depend only on the public loop variable.
Line 13 has fixed access patterns.
The loop in lines 15--18 executes a fixed $N_\code{temp}$ number of times.
Line 16 has fixed access patterns.
The access patterns of line 17 depend only on the public loop variable.
Line 19 has fixed access patterns that depend only on the size $N_\code{temp}$ of the array $L$.
The loop in lines 20--24 executes a fixed $N$ number of times.
The function \textsc{CalcNeighborhoodBbox} in Line 21 computes the bounding box of the $16\times16$ neighborhood of the current keypoint using arithmetic operations, and has fixed access patterns.
The access patterns of the function \textsc{CropObject} depend only on the dimensions of the input image $O$ (which is public) and the resolution of the bounding box, which is fixed (from \cref{thm:cropping}).
The function \textsc{CalcOrientationHist} performs arithmetic operations, and the access patterns of line 23 depend only on the public loop variable.
Thus, the trace produced by the simulator is indistinguishable from the trace produced by a real run of the algorithm.
\end{proof}
\section{Introduction}
Cameras are being deployed pervasively %
for
the many applications they enable, such as traffic planning, retail experience, and enterprise security \cite{VisionZero,camera:retail,Verkada}. Videos from the cameras are streamed to the cloud, where they are processed using video analytics pipelines \cite{VideoStorm, Chameleon:Sigcomm, NoScope} composed of computer vision techniques (e.g., OpenCV \cite{opencv}) and convolutional neural networks (e.g., object detector CNNs \cite{Redmon:YOLO}); as illustrated in \cref{fig:pipeline}. Indeed, ``video-analytics-as-a-service'' is becoming an important offering for cloud providers \cite{Azure:Video:analytics, Amazon:Video:analytics}.
Privacy of the video contents is of paramount concern in the ``video-analytics-as-a-service'' offerings. Videos often contain sensitive information, such as users' home interiors, people in workspaces, or license plates of cars. %
For example, the Kuna home monitoring service~\cite{Kuna} transmits videos from users' homes to the cloud, analyzes the videos, and notifies users when it detects movement in areas of interest.
For user privacy, video streams must remain {\em confidential} and not be revealed to the cloud provider %
or other co-tenants in the cloud.
Trusted execution environments (TEEs)~\cite{SGX:HASP13,Graviton:OSDI18} are a natural fit for privacy-preserving video analytics in the cloud. In contrast to cryptographic approaches, such as homomorphic encryption, TEEs rely on the assumption that cloud tenants also trust the hardware. %
The hardware provides %
the ability to create secure ``enclaves'' that are protected against privileged attackers. TEEs are more compelling than cryptographic techniques since they are orders of magnitude faster.
In fact, CPU TEEs (e.g., Intel SGX \cite{SGX:HASP13}) lie at the heart of confidential cloud computing \cite{IBM:SGX, Azure:SGX}. Meanwhile, recent advancements in GPU TEEs \cite{Graviton:OSDI18,HIX:Jang:2019} enable the execution of ML models (e.g., neural networks) with strong privacy guarantees as well. CPU and GPU TEEs, thus, present an opportunity for building privacy-preserving video analytics systems.
Unfortunately, TEEs (e.g., Intel SGX) are vulnerable to a host of side-channel attacks %
(e.g., \cite{wang-sgx-leaky, sgxcache-brasser, sgxattacks-foreshadow, sgxattacks-xu:pagefaults}). %
For instance, in \cref{s:background:attacks} we show that by observing just the memory access patterns of a widely used bounding box detection OpenCV module, an attacker can infer the {\em exact shapes and positions of all moving objects} in the video.
In general, an attacker can infer crucial information about the video being processed, such as the times when there is activity, objects that appear in the video frame, all of which when combined with knowledge about the physical space being covered by the camera, can lead to serious violations of confidentiality.
We present Visor\xspace, a system for privacy-preserving video analytics services.
Visor\xspace protects the confidentiality of the videos being analyzed from the service provider and other co-tenants.
When tenants host their own CNN models in the cloud, it also protects the model parameters and weights. %
Visor\xspace protects against a powerful enclave attacker who can compromise the software stack outside the enclave, as well as observe any {\em data-dependent accesses} to network, disk, or memory via side-channels (similar to prior work \cite{Ohrimenko:ObliviousML, Raccoon}). %
Visor\xspace makes two primary contributions,
combining insights from ML systems, security, computer vision, and algorithm design.
First, we present a privacy-preserving framework for machine-learning-as-a-service (MLaaS), which supports CNN-based ML applications spanning both CPU and GPU resources.
Our framework can potentially power applications beyond video analytics, such as medical imaging, recommendation systems, and financial forecasting.
Second, we develop novel \emph{data-oblivious} algorithms with provable privacy guarantees within our MLaaS framework, for commonly used vision modules.
The modules are efficient and can be composed to construct many different video analytics pipelines.
In designing our algorithms, we formulate a set of design principles that can be broadly applied to other vision modules as well.
\paragraph{1) Privacy-Preserving MLaaS Framework}
Visor\xspace leverages a \emph{hybrid} TEE that spans CPU and GPU resources available in the cloud. Recent work has shown that scaling video analytics pipelines requires judicious use of both CPUs and GPUs~\cite{Scanner:Poms:2018,Focus:OSDI18}.
Some pipeline modules can run on CPUs at the required frame rates (\eg video decoding or vision algorithms) while others (\eg CNNs) require GPUs, as shown in Figure \ref{fig:pipeline}.
Thus, our solution spans both CPU and GPU TEEs, and combines them into a unified trust domain.
Visor\xspace systematically addresses access-pattern-based leakage across the components of the hybrid TEE, from video ingestion to CPU-GPU communication to CNN processing. In particular, we take the following steps:
\begin{compactenumeratealph}
\item Visor\xspace leverages a suite of data-oblivious primitives to remove access pattern leakage from the CPU TEE. The primitives enable the development of oblivious modules with provable privacy guarantees, the access patterns of which are always independent of private data.
\item Visor\xspace relies on a novel oblivious communication protocol to remove leakage from the CPU-GPU channel. As the CPU modules serve as filters, the data flow in the CPU-GPU channel (on which objects of each frame are passed to the GPU) leaks information about the contents of each frame, enabling attackers to infer the number of moving objects in a frame. At a high level, Visor\xspace pads the channel with dummy objects, leveraging the observation that our application is not constrained by the CPU-GPU bandwidth. To reduce GPU wastage, Visor\xspace intelligently minimizes running the CNN on the dummy objects.
\item Visor\xspace makes CNNs running in a GPU TEE oblivious by leveraging \emph{branchless} CUDA instructions to implement conditional operations (e.g., ReLU and max pooling) in a data-oblivious way.
\end{compactenumeratealph}
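The branchless conditionals of item~(c) can be illustrated with plain arithmetic. The following Python sketch is ours, not Visor's API (the real system uses CMOV-style CPU instructions and branchless CUDA instructions); it shows how a conditional assignment, ReLU, and max can avoid any data-dependent branch by building a bitmask from the predicate:

```python
def oselect(pred, a, b):
    """Return a if pred == 1 else b, using only arithmetic and bitwise
    operations (no data-dependent branch). pred must be 0 or 1."""
    mask = -pred            # 1 -> all-ones mask, 0 -> all-zeros mask
    return (a & mask) | (b & ~mask)

def relu(x):
    # Branchless ReLU: max(x, 0) without branching on the sign of x.
    return oselect(int(x > 0), x, 0)

def max2(a, b):
    # Branchless max, as used for max pooling.
    return oselect(int(a > b), a, b)
```

In real constant-time code the predicate itself is also computed without a branch; `int(x > 0)` stands in for that comparison here.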
\paragraph{2) Efficient Oblivious Vision Pipelines}
Next, we design novel data-oblivious algorithms for vision modules that are foundational for video analytics, and implement them using the oblivious primitives provided by the framework described above.
Vision algorithms are used in video analytics pipelines to extract the moving foreground objects. These algorithms (\eg background subtraction, bounding box detection, object cropping, and tracking) run on CPUs and serve as cheap {\em filters} to discard frames instead of invoking expensive CNNs on the GPU for each frame's objects (more in \cref{s:model}). %
The modules can be composed to construct various vision pipelines, such as medical imaging and motion tracking.
As we demonstrate in \cref{s:evaluation}, na\"ive approaches for making these algorithms data-oblivious, such that their operations are independent of each pixel's value, can slow down video pipelines by several orders of magnitude.
Instead, we carefully craft oblivious vision algorithms for each module in the video analytics pipeline, including the popular VP8 video decoder \cite{VP8Overview}.
Our overarching goal is to transform each algorithm into a pattern that processes each pixel identically.
To apply this design pattern efficiently, we devise a set of algorithmic and systemic optimization strategies based on the properties of vision modules, as follows.
First, we employ a divide-and-conquer approach---\ie we break down each algorithm into independent subroutines based on their functionality, and tailor each subroutine individually.
Second, we cast sequential algorithms into a form that \emph{scans} input images while performing identical operations on each pixel. %
Third, identical pixel operations allow us to systemically amortize the processing cost across groups of pixels in each algorithm.
For each vision module, we derive the operations applied per pixel in conjunction with these design strategies.
Collectively, these strategies improve performance by up to $1000\times$ over na\"ive oblivious solutions.
We discuss our approach in more detail in \cref{s:obl_overview}; %
nevertheless, we note that it can potentially help inform the design of other oblivious vision modules as well, beyond the ones we consider in Visor\xspace.
In addition, as shown by prior work, bitrate variations in encrypted network traffic can also leak information about the underlying video streams~\cite{Schuster:Video:Attack}, beyond access pattern leakage at the cloud.
To prevent this leakage, we modify the video encoder to carefully pad video streams \emph{at the source} in a way that optimizes the video decoder's latency.
Visor\xspace thus provides an end-to-end solution for private video analytics.
\paragraph{Evaluation Highlights}
We have implemented Visor\xspace on Intel SGX CPU enclaves \cite{SGX:HASP13} and Graviton GPU enclaves \cite{Graviton:OSDI18}.
We evaluate Visor\xspace on commercial video streams of cities and datacenter premises containing sensitive data.
Our evaluation shows that Visor\xspace's vision components perform up to $1000\times$ better than na\"ive oblivious solutions, and over $6$ to $7$ orders of magnitude better than a state-of-the-art general-purpose system for oblivious program execution.
Against a {\em non-oblivious baseline}, Visor\xspace's overheads are limited to {$2\times$--$6\times$}\xspace, which still enables us to analyze multiple streams simultaneously in real-time on our testbed.
Visor\xspace is versatile and can accommodate different combinations of vision components used in real-world applications.
Thus, Visor\xspace provides an efficient solution for private video analytics.
\section{Oblivious Video Decoding}\label{s:decoding}
Video encoding converts a sequence of raw images, called \emph{frames}, into a compressed bitstream. Frames are of two types: \emph{keyframes} and \emph{interframes}. Keyframes are encoded %
to only exploit redundancy across pixels {\em within the same frame}. Interframes, on the other hand, use the prior frame as reference (or the most recent keyframe), and thus can exploit temporal redundancy in pixels {\em across frames}.
\paragraph{Encoding overview} We ground our discussion using the VP8 encoder \cite{VP8Overview}, but our techniques are broadly applicable.
A frame is decomposed into square arrays of pixels called {\em blocks}, and then compressed using the following steps (see \cref{fig:encoding}).
\bubble{1} An estimate of the block is first \emph{predicted} using reference pixels (in a previous frame if interframe or the current frame if keyframe). The prediction is then subtracted from the actual block to obtain a \emph{residue}.
\bubble{2}
Each block in the residue is \emph{transformed} into the frequency domain (\eg using a discrete cosine transform), and its coefficients are \emph{quantized} %
thus improving compression.
\iffull
At the end of this step, each block comprises a sequence of 16 data values, the last several of which are typically zeros as the quantization factors for the later coefficients are larger than those of the initial ones.
\fi
\bubble{3} Each (quantized) block is compressed into a variable-sized bitstream using a binary prefix tree and arithmetic encoding. %
\iffull
The last few coefficients that are zeros are not encoded, and an end-of-block symbol (EOB) is encoded instead.
\fi
Block prediction modes, cosine transformation, and arithmetic encoding are core to all video encoders (\eg H264~\cite{H264}, VP9~\cite{VP9}) and thus our oblivious techniques carry over to all popular codecs.
\begin{figure}[t!]
\centering
\includegraphics[width=0.8\linewidth]{figs/encoding.pdf}
\caption{Flowchart of the encoding process.}
\label{fig:encoding}
\end{figure}
The {\em decoder} reverses the steps of the encoder:
\begin{enumerate*}[($i$)]
\item the incoming video bitstream is entropy decoded (\cref{s:decoding:bitstream});
\item the resulting coefficients are dequantized and inverse transformed to obtain the residual block (\cref{s:decoding:transformation}); and
\item previously decoded pixels are used as reference to obtain a prediction block, which are then added to the residue (\cref{s:decoding:prediction}).
\end{enumerate*}
\ifextended
\drop{
While our explanation here is simplified,
\cref{s:proofs:decoder} provides detailed pseudocode and proofs of obliviousness for the decoder.%
}
\else
Our explanation here is simplified; we defer detailed pseudocode along with security proofs to an extended appendix~\cite{visor:extended}.%
\fi
\ifpadding
\subsection{Video Encoder Padding}\label{s:padding}
While the video stream is in transit, the bitrate variation of each frame is visible to an attacker observing the network even if the traffic is TLS-encrypted. This variability can be exploited for fingerprinting video streams~\cite{Schuster:Video:Attack} and understanding its content. %
Overcoming this leakage requires changes to the video {\em encoder} to ``pad'' each frame with dummy bits to an upper bound before sending the stream to Visor\xspace. %
We modify the video encoder to pad the encoded video streams. However, instead of applying padding at the level of frames, we pad each individual \emph{row of blocks} within the frames. Compared to frame-level padding, padding individual rows of blocks significantly improves latency of oblivious decoding, but at the cost of an increase in network bandwidth.
Padding the frames of the video stream,
however, negates the benefit of using \emph{interframes} during encoding of the raw video stream, which are typically much smaller than keyframes. %
We therefore configure the encoder to encode all raw video frames into keyframes, which eliminates the added complexity of dealing with interframes, and consequently simplifies the oblivious decoding procedure.
We note that it may not always be possible to modify legacy cameras to incorporate padding.
In such cases, potential solutions include the deployment of a lightweight edge-compute device that pads input camera feeds before streaming them to the cloud.
For completeness, we also discuss the impact of the lack of padding in \cref{s:appendix:padding}, along with the accompanying security-performance tradeoff.
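The padding step itself is straightforward; the following illustrative sketch (ours, not the encoder's actual code) pads each encoded row of blocks to a configured per-row upper bound, so the per-row bitrate seen on the network is constant. How the decoder skips the dummy bytes is elided here:

```python
def pad_rows(encoded_rows, row_bound):
    """Pad each encoded row of blocks with dummy bytes up to a fixed
    upper bound. `encoded_rows` is a list of byte strings, one per row
    of blocks; `row_bound` is the configured upper bound in bytes."""
    padded = []
    for row in encoded_rows:
        assert len(row) <= row_bound, "row exceeds configured bound"
        padded.append(row + b"\x00" * (row_bound - len(row)))
    return padded
```

Frame-level padding would use the same idea with a single (much larger) bound over the whole frame, at the cost of the decoding-latency benefits described above.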
\fi
\begin{figure*}[t!]
\centering
\subfigure[\bf CCL-based algorithm for bounding box detection]{
\includegraphics[width=0.46\textwidth]{figs/ccl2.pdf}
\label{fig:ccl}}
\hspace{.25in}
\subfigure[\bf Enhancement via parallelization]{
\includegraphics[width=0.46\textwidth]{figs/ccl4.pdf}
\label{fig:ccl:parallel}}
\caption{Oblivious bounding box detection}
\end{figure*}
\subsection{Bitstream Decoding}
\label{s:decoding:bitstream}
The bitstream decoder reconstructs blocks with the help of a \emph{prefix tree}.
At each node in the tree it decodes a single bit from the compressed bitstream via arithmetic decoding, and traverses the tree based on the value of the bit.
While decoding the bit, the decoder first checks whether any more bits can be decoded at the current bitstream position, and if not, it advances the bitstream pointer by two bytes.
Once it reaches a leaf node, it outputs a coefficient based on the position of the leaf, and assigns the coefficient to the current pixel in the block. %
\iffull
If an EOB symbol is decoded, then all the coefficients remaining in the block are assigned a value of zero.
\fi
This continues for all the coefficients in the frame.%
\paragraph{Requirements for obliviousness}
The above algorithm leaks information about the compressed bitstream. First, the traversal of the tree leaks the \emph{value of the parsed coefficient}.
For obliviousness, we need to ensure that during traversal, the identity of the current node being processed remains secret.
Second, not every position in the bitstream encodes the same number of coefficients, and the bitstream pointer advances variably during decoding. Hence, this leaks the \emph{number of coefficients} that are encoded per two-byte chunk (which may convey their values).
\iffull
Finally, the presence of EOB coefficients, coupled with the assignment of decoded coefficients to pixels, leaks the number of zero coefficients per block of the frame---prior work has demonstrated attacks that exploit similar leakage to infer the outlines of all objects in the frame~\cite{sgxattacks-xu:pagefaults}.
\fi
We design a solution that \emph{decouples} the parsing of coefficients, \ie prefix tree traversal (\cref{s:decoding:traversal}), from the assignment of the parsed coefficients to pixels (\cref{s:decoding:assignment}).
\subsubsection{Oblivious prefix tree traversal}\label{s:decoding:traversal}
A simple way to make tree traversal oblivious is to represent the prefix tree as an array.
We can then obliviously fetch any node in the tree using \code{oaccess} (\cref{s:background:primitives}).
Though this hides the identity of the fetched node, we need to also ensure that \emph{processing} of the nodes does not leak their identity.
In particular, we need to ensure that nodes are indistinguishable from each other by performing an identical set of operations at each node.
Unfortunately, this requirement is complicated by the following facts. (1)~Only leaf nodes in the tree produce outputs (\ie the parsed coefficients) and not the intermediate nodes. (2)~We do not know beforehand which nodes in the tree will cause the bitstream pointer to be advanced; at the same time, we need to ensure that the pointer is advanced predictably and independent of the bitstream.
To solve these problems, we take the following steps.
\begin{compactenumerate}
\item We modify each node to output a coefficient regardless of whether it is a leaf state or not. Leaves output the parsed coefficient, while other states output a dummy value.
\item We introduce a dummy node into the prefix tree. %
While traversing the tree, if no more bits can be decoded at the current bitstream position, we transition to the dummy node and perform a bounded number of dummy decodes.%
\end{compactenumerate}
These modifications ensure that while traversing the prefix tree, all that an attacker sees is that at \emph{some} node in the tree, a single bit was decoded and a single value was outputted. %
Note that in this phase, we do not assign coefficients to pixels, and instead collect them in a list.
If we were to assign coefficients to pixels in this phase, then the decoder would need to obliviously scan the entire frame (using \code{oaccess}) at every node in the tree, in order to hide the pixel's identity. %
Instead, by \emph{decoupling} parsing from assignment, we are able to perform the assignment obliviously using a super-linear number of accesses (instead of quadratic), as we explain next.
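The decoupled traversal can be sketched as follows. This is an illustrative Python sketch, not Visor's implementation: `oaccess` is modeled as a linear scan (the per-slot select would be a branchless CMOV in real code), the prefix tree is a toy three-symbol tree, and `DUMMY` marks the values emitted at internal nodes:

```python
DUMMY = None  # value emitted at internal (non-leaf) nodes

def oaccess(arr, idx):
    # Oblivious array access: touch every slot and keep arr[idx].
    # The per-slot select would be branchless in real code.
    out = None
    for i, v in enumerate(arr):
        out = v if i == idx else out
    return out

def oblivious_traverse(tree, bits, num_steps):
    """Walk a prefix tree stored as an array of (left, right, coeff)
    tuples; leaves hold coeff != DUMMY and their children point back to
    the root. Every step does identical work: fetch one node with
    oaccess, read one bit, emit one value (a real coefficient at a
    leaf, DUMMY otherwise). At a leaf the bit read is a dummy and the
    bitstream position is not advanced."""
    outputs = []
    node_idx, pos = 0, 0
    for _ in range(num_steps):
        left, right, coeff = oaccess(tree, node_idx)
        is_leaf = coeff is not DUMMY
        outputs.append(coeff)
        bit = bits[pos % len(bits)]        # dummy read at leaves
        pos = pos if is_leaf else pos + 1  # oselect in real code
        node_idx = 0 if is_leaf else (right if bit else left)
    return outputs, pos
```

Filtering the `DUMMY` entries out of the output list would itself be data-dependent; that is precisely why the assignment is deferred to the oblivious sort described next.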
\iffull
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/coeffsort.pdf}
\caption{Steps for obliviously sorting the coefficients into place after populating it with zero coefficients. For simplicity, this illustration assumes that there are two subblocks, with three coefficients per subblock.}
\label{fig:coeffs}
\end{figure}
\fi
\subsubsection{Oblivious coefficient assignment}\label{s:decoding:assignment}
At the end of \cref{s:decoding:traversal}, we have a list of actual and dummy coefficients. %
The key idea is that if we can obliviously sort this set of values using \code{osort} such that all the actual coefficients are contiguously ordered while all dummies are pushed to the front,
then we can simply read the coefficients off the end of the list sequentially and assign them to pixels one by one. %
\iffull
However, recall that in lieu of the trailing zeros within each block, the encoder encodes an EOB symbol instead. Therefore, we need to append the requisite zeros to the set and move them to the appropriate indices before we can carry out the assignment.
To achieve this, our algorithm makes a single forward pass over the set to add the zeros, while updating all index values per tuple in a way that ensures the zeros will be sorted to the correct positions, as illustrated in \cref{fig:coeffs}.
\fi
To enable such a sort, we modify the prefix tree traversal to additionally output a tuple $(\code{flag}, \code{index})$ per coefficient; \code{flag} is 0 for dummies and 1 otherwise; \code{index} is an increasing counter as per the pixel's index.
Then, the desired sort can be achieved by sorting the list based on the value of the tuple.
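The flag-and-index sort can be sketched as follows; this is illustrative only: Python's `sorted()` stands in for the data-oblivious `osort` (a bitonic sorting network in practice), and the tuple layout is an assumption of ours:

```python
def assign_coefficients(parsed, num_pixels):
    """parsed: list of (flag, index, coeff) tuples from the prefix tree
    traversal; flag is 0 for dummies and 1 for real coefficients, and
    index is the target pixel index. Sorting by (flag, index) pushes
    all dummies to the front and leaves the real coefficients at the
    tail, ordered by pixel index."""
    ordered = sorted(parsed, key=lambda t: (t[0], t[1]))  # osort here
    tail = ordered[len(ordered) - num_pixels:]
    # Sequentially assign the tail to pixels 0..num_pixels-1.
    return [coeff for (_, _, coeff) in tail]
```

The sequential read at the end has a fixed, data-independent access pattern, which is what makes the decoupled design oblivious.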
\ifpadding
As the complexity of oblivious sort is super-linear in the number of elements being sorted,
an important optimization is to decode and assign coefficients to pixels at the granularity of {\em rows of blocks} rather than frames.
While the number of bits per row of blocks may be observed, the algorithm's obliviousness is not affected as each row of blocks in the video stream is padded to an upper bound (\cref{s:padding}); had we applied frame-level padding, this optimization would have revealed the number of bits per row of blocks.
In \cref{s:eval:decoding}, we show that this technique improves oblivious decoding latency by \approx$6\times$.
\fi
\subsection{Dequantization and Inverse Transformation} \label{s:decoding:transformation}
The next step in the decoding process is to
\begin{enumerate*}[($i$)]
\item dequantize the coefficients decoded from the bitstream, followed by
\item inverse transformation to obtain the residual blocks.
\end{enumerate*}
Dequantization just multiplies each coefficient by a quantization factor.
The inverse transformation also %
performs a set of identical arithmetic operations irrespective of the coefficient values.
\subsection{Block Prediction}\label{s:decoding:prediction}
Prediction is the final stage in decoding. The residual block obtained after \cref{s:decoding:transformation} is added to a {\em predicted block}, obtained using a previously constructed block as reference, to obtain the raw pixel values.
In keyframes, each block is \emph{intra}-predicted---\ie it uses a block in the same frame as reference.
\ifpadding
We do not discuss interframes because as described in \cref{s:padding}, the padded input video streams in Visor\xspace only contain keyframes.
\fi
Intra-predicted blocks are computed using one of several \emph{modes}. %
A mode specifies the combination of pixels from the row above the block and the column to its left that is used as reference. %
Obliviousness requires that the prediction mode %
remains private.
Otherwise, an attacker can identify the pixels that are most similar to each other, thus revealing details about the frame.
\iffull
We investigate two different ideas for making intra-prediction oblivious.
First, we note that in each prediction mode, the value of a pixel in the predicted block can be expressed as a linear combination $\Sigma a_i p_i$ of all the pixels that lie above and to the left of the block.
Here $p_i$ represents the adjoining pixels and $a_i$ are weights.
Thus, to compute the value of the predicted pixel obliviously, we can simply evaluate the expression after using the \code{oassign} primitive to obliviously assign each $a_i$ a value based on the mode and the location of the current pixel.
A second approach is to simply evaluate all possible predictions for the pixel and store them in an array, indexing each prediction by its mode.
Then, use the \code{oaccess} primitive to obliviously select the correct prediction from the array.
We implemented both approaches, and found that the second offers better performance in practice.
This is because in the second approach, we can compute the predicted values for several pixels simultaneously at the level of individual rows, which amortizes the cost of our operations.
\fi
We make intra-prediction oblivious by evaluating all possible predictions for the pixel and storing them in an array, indexing each prediction by its mode. Then, we use \code{oaccess} to obliviously select the correct prediction from the array. %
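A minimal sketch of this mode selection follows. It is illustrative only: the three modes shown are a toy subset of VP8's intra-prediction modes, the function names are ours, and `oaccess` is modeled as a linear scan whose per-slot select would be branchless in real code:

```python
def oaccess(arr, idx):
    # Oblivious array access: touch every slot and keep arr[idx].
    out = None
    for i, v in enumerate(arr):
        out = v if i == idx else out
    return out

def predict_pixel(above, left, mode):
    """Evaluate every candidate intra-prediction for one pixel, then
    obliviously select the one matching the encoded mode."""
    candidates = [
        above,                # mode 0: vertical (copy pixel above)
        left,                 # mode 1: horizontal (copy pixel to the left)
        (above + left) // 2,  # mode 2: average of the two
    ]
    return oaccess(candidates, mode)
```

Evaluating all candidates costs a constant factor per pixel, but lets whole rows of predictions be computed together, which is what amortizes the overhead in practice.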
\ifpadding
\else
\subsubsection{Inter-prediction for interframes} \label{s:decoding:inter}
Inter-predicted blocks use {\em previously decoded frames} as reference (either the previous frame, or the most recent keyframe). %
Obliviousness of inter-prediction requires that the reference block (which frame, and block's coordinates therein) remains private during decoding. Otherwise, an attacker observing access patterns during inter-prediction can discern the motion of objects across frames. Furthermore, some blocks even in interframes can be \emph{intra}-predicted for coding efficiency, and oblivious approaches need to conceal whether an interframe block is inter- or intra-predicted.
A na\"ive, but inefficient, approach to achieve obliviousness is to access \emph{all blocks in possible reference frames} at least once---if any block is left untouched, its location its leaked to the attacker. %
We leverage empirical properties of video streams to make our oblivious solution efficient:
\begin{enumerate*}[($i$)]
\item Most blocks in interframes are inter-predicted (\approx$99\%$ blocks in our streams); and
\item Coordinates of reference blocks are close to the coordinates of inter-predicted blocks (in a previous frame), \eg $90\%$ of blocks are radially within 1 to 3 blocks.
\end{enumerate*}
These properties enable two optimizations. %
First, we assume every block in an interframe is inter-predicted.
Any error due to this assumption on intra-predicted blocks is minor in practice.
Second, %
instead of scanning all blocks in prior frames, we only access blocks within a small distance of the current block.
If the reference block is indeed within this distance, we fetch it obliviously using \code{oaccess}; else, (in the rare cases) we use the block at the same coordinates in the previous frame as reference.
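The bounded-window fetch can be sketched as follows (an illustrative Python sketch of ours: the block-grid layout is an assumption, and the data-dependent selects marked in comments would use oselect in real code; the loop bounds are public since block coordinates are public):

```python
def fetch_reference(prev_blocks, bx, by, dx, dy, R):
    """prev_blocks: 2D grid of blocks from the previous frame.
    Obliviously fetch the reference block at (bx+dx, by+dy) by touching
    every block within radius R of (bx, by); if the motion vector
    falls outside the radius, fall back to the co-located block."""
    h, w = len(prev_blocks), len(prev_blocks[0])
    in_range = abs(dx) <= R and abs(dy) <= R
    tx = bx + dx if in_range else bx   # oselect in real code
    ty = by + dy if in_range else by
    ref = prev_blocks[by][bx]
    for j in range(max(0, by - R), min(h, by + R + 1)):
        for i in range(max(0, bx - R), min(w, bx + R + 1)):
            hit = (i == tx) and (j == ty)
            ref = prev_blocks[j][i] if hit else ref  # fixed pattern
    return ref
```

Scanning only the $(2R+1)^2$ window instead of every block in the reference frame is what makes this practical, at the cost of a rare approximation when the motion vector exceeds $R$.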
\fi
\section{Oblivious Image Processing}\label{s:algorithms}
\begin{figure*}[t!]
\centering
\subfigure[\bf Localizing objects.]{
\centering
\includegraphics[width=0.28\textwidth]{figs/cropv1.pdf}
\label{fig:cropv1}}
\subfigure[\bf Bilinear interpolation.]{
\centering
\includegraphics[width=0.29\textwidth]{figs/resize1.pdf}
\label{fig:resize:v1}
}
\subfigure[\bf Improved Bilinear interpolation.]{
\centering
\includegraphics[width=0.34\textwidth]{figs/resize2.pdf}
\label{fig:resize:v2}
}
\caption{Oblivious object cropping}
\end{figure*}
After obliviously decoding frames in \cref{s:decoding}, the next step as shown in \cref{fig:pipeline} is to develop data-oblivious techniques for background subtraction (\cref{s:algorithms:bgs}), bounding box detection (\cref{s:algorithms:objdet}), object cropping (\cref{s:algorithms:cropping}), and tracking (\cref{s:algorithms:tracking}).
We present the key ideas here; detailed pseudocode and proofs of obliviousness are available in
\ifextended
\cref{s:proofs:algorithms}.
\else
an extended appendix~\cite{visor:extended}.
\fi
Note that \cref{s:algorithms:bgs} and \cref{s:algorithms:tracking} modify popular algorithms to make them oblivious, while \cref{s:algorithms:objdet} and \cref{s:algorithms:cropping} propose new oblivious algorithms.
\subsection{Background Subtraction}\label{s:algorithms:bgs}
The goal of background subtraction is to detect moving objects in a video.
Specifically, it dynamically learns stationary pixels that belong to the video's background, and then subtracts them from each frame, %
thus producing a binary image with black background pixels and white foreground pixels.
Zivkovic \etal proposed a mechanism~\cite{MOG2:Zivkovic:2004,MOG2:Zivkovic:2006}, widely used in practical deployments, that models each pixel as a mixture of Gaussians~\cite{MOG:Survey:2008}. The number of Gaussian components $M$ differs across pixels depending on their values (but is no more than $M_\code{max}$, a pre-defined constant). As more data arrives (with new frames), the algorithm updates each Gaussian component along with its weight ($\pi$), and adds new components if necessary.
To determine if a pixel $\vec{x}$ belongs to the background or not, the algorithm uses the $B$ Gaussian components with the largest weights and outputs true if $p(\vec{x})$ is larger than a threshold:
{
\setlength{\abovedisplayskip}{0pt}
\setlength{\belowdisplayskip}{0pt}
\begin{align*}
p(\vec{x}) = \sum^B_{m = 1}\pi_m\mathcal{N}(\vec{x}~|~\vec{\mu}_m,\Sigma_m)
\end{align*}
}
where
$\vec{\mu}_m$ and $\Sigma_m$ are parameters of the Gaussian components, and $\pi_m$ is the weight of the $m$-th Gaussian component.
This algorithm is not oblivious because it maintains a different number of Gaussian components per pixel, and thus performs different steps while updating the mixture model per pixel.
These differences are visible via access patterns, and these leakages reveal to an attacker how \emph{complex} a pixel is in relation to others---\ie whether a pixel's value stays stable over time or changes frequently.
This enables the attacker to identify the positions of moving objects in the video.
For obliviousness, we need to perform an identical set of operations per pixel (regardless of their value); we thus
{\em always} maintain $M_\code{max}$ Gaussian components for each pixel, of which $(M_\code{max} - M)$ are dummy components and assigned a weight $\pi=0$.
When newer frames arrive, we use \code{oassign} operations to make all the updates to the mixture model, making dummy operations for the dummy components.
Similarly, to select the $B$ largest components by weight, %
we use the \code{osort} primitive.%
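A simplified sketch of the oblivious classification step for a single (scalar) pixel follows. This is illustrative only: Zivkovic's algorithm operates on color vectors and also updates the model online, `M_MAX` and the function names are ours, and Python's `sorted()` stands in for the oblivious `osort`:

```python
import math

M_MAX = 4  # fixed number of components per pixel; dummies have weight 0

def gaussian(x, mu, var):
    # 1-D Gaussian density.
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def is_background(components, x, B, threshold):
    """components: exactly M_MAX (weight, mean, var) tuples, padded
    with zero-weight dummies so every pixel is processed identically.
    Sort by weight (osort in real code), sum the B largest components'
    densities, and compare against the threshold."""
    assert len(components) == M_MAX
    ordered = sorted(components, key=lambda c: -c[0])
    p = sum(w * gaussian(x, mu, var) for (w, mu, var) in ordered[:B])
    return p > threshold
```

Because dummy components carry weight $\pi=0$, they contribute nothing to $p(\vec{x})$ even when they land among the top $B$ entries, so padding to $M_\code{max}$ does not change the classification.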
\subsection{Bounding Box Detection}\label{s:algorithms:objdet}
The output from \S\ref{s:algorithms:bgs} is a binary image with black background pixels where the foreground objects are white blobs (\cref{fig:ccl}).
To find these objects, it suffices to find the {\em edge contours} of all blobs.
These are used to compute the \emph{bounding rectangular box} of each object. %
A standard approach for finding the contours in a binary image
is the border following algorithm of Suzuki and Abe~\cite{Suzuki:1985}. %
As the name suggests, the algorithm works by scanning the image until it locates an edge pixel, and then follows the edge around a blob. %
As \cref{fig:leakage} in \cref{s:background:attacks} illustrated, the memory access patterns of this algorithm leak the details of all the objects in the frame.%
A na\"ive way to make this algorithm oblivious is to implement each pixel access using the \code{oaccess} primitive (along with other minor modifications). However, we measure that this approach slows down the algorithm by over \approx$1200\times$.%
We devise a two-pass oblivious algorithm for computing bounding boxes by adapting the classical technique of connected component labeling (CCL)~\cite{Rosenfeld:1966}. The algorithm's main steps are illustrated in \cref{fig:ccl} (in which the binary input image contains two blobs). In the first pass, it scans the image and assigns each pixel a temporary label if it is ``connected'' to other pixels. %
In the second pass, it merges labels that are part of a single object.
Even though CCL on its own is less efficient for detecting blobs than border following, it is far more amenable to being adapted for obliviousness.
We make this algorithm oblivious as follows.
First, we perform identical operations regardless of whether the current pixel is connected to other pixels. %
Second, for efficiency, we restrict the maximum number of temporary labels (in the first pass) to a parameter $N$ provided as input to Visor\xspace (per \cref{s:system:parameters}, \cref{table:parameters}). %
Note that the value of the parameter may be much lower than the worst case upper bound (which is the total number of pixels), and thus is more efficient.
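The two passes can be sketched as follows. This is an illustrative Python sketch of plain two-pass CCL with a bounded label budget $N$ (4-connectivity, ours rather than Visor's code); the data-dependent choices marked in comments are what the oblivious version replaces with oselect/oaccess so that every pixel performs identical operations:

```python
def bounding_boxes(img, N):
    """Two-pass connected-component labeling over a binary image
    (lists of 0/1 rows), using at most N temporary labels (N must be
    large enough for the frame). Returns one (min_x, min_y, max_x,
    max_y) box per resolved label."""
    h, w = len(img), len(img[0])
    parent = list(range(N))               # label equivalence forest
    labels = [[0] * w for _ in range(h)]  # 0 = background
    next_label = 1
    for y in range(h):                    # pass 1: temporary labels
        for x in range(w):
            up = labels[y - 1][x] if y > 0 else 0
            left = labels[y][x - 1] if x > 0 else 0
            fg = img[y][x]
            neighbor = up if up else left          # oselect in real code
            fresh = fg and not neighbor
            lab = next_label if fresh else neighbor
            labels[y][x] = lab if fg else 0
            next_label += 1 if fresh else 0
            if fg and up and left:                 # record equivalence
                a, b = min(up, left), max(up, left)
                parent[b] = min(parent[b], a)
    def find(l):
        while parent[l] != l:
            l = parent[l]
        return l
    boxes = [None] * N
    for y in range(h):                    # pass 2: merge and box
        for x in range(w):
            l = labels[y][x]
            if l:
                r = find(l)
                bx = boxes[r] or (x, y, x, y)
                boxes[r] = (min(bx[0], x), min(bx[1], y),
                            max(bx[2], x), max(bx[3], y))
    return [b for b in boxes if b]
```

Bounding the label budget by $N$ is what keeps the oblivious version efficient: the equivalence structures have fixed size $N$ rather than one slot per pixel.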
\paragraph{Enhancement via parallelization}
We observe that the oblivious algorithm can be parallelized using a divide-and-conquer approach. %
We divide the frame into horizontal \emph{stripes} (\bubble{1} in \cref{fig:ccl:parallel}) and process {\em each stripe in parallel} (\bubble{2}).
For objects that span stripe boundaries, each stripe outputs only a \emph{partial} bounding box containing the pixels within the stripe. We combine the partial boxes by re-applying the oblivious CCL algorithm to the boundaries of adjacent stripes (\bubble{3}).
Given two adjacent stripes $S_{i}$ and $S_{i+1}$, one below the other, we compare each pixel in the top row of $S_{i+1}$ with its neighbors in the bottom row of $S_{i}$, and merge their labels as required.
\subsection{Object Cropping}\label{s:algorithms:cropping}
The next step after detecting bounding boxes of objects is to {\em crop} them out of the frame to be sent for CNN classification (\cref{fig:pipeline:resnet}). %
Visor\xspace needs to ensure that the cropping of objects does not leak \begin{enumerate*}[($i$)] \item their positions, or \item their dimensions.\end{enumerate*}
\subsubsection{Hiding object positions}\label{s:algorithms:cropping:positions}
A na\"ive way of obliviously cropping an object of size $p\times q$ is to slide a window (of size $p\times q$) horizontally %
in raster order, and copy the window's pixels %
if it aligns with the object's bounding box. Otherwise, perform a dummy copy. This, however, leads to a slowdown of $4000\times$, the major reason being redundant copies: sliding the window forward by one pixel yields a new position in the frame, but a majority of the pixels copied are the same as in the previous position.
\iffull
For a $m\times n$ frame, and an object of size $p\times q$, the technique results in $pq(m-p)(n-q)$ pixel copies, as compared to $pq$ pixel copies when directly cropping the object.
\fi
We get rid of this redundancy by \emph{decoupling} the algorithm into multiple passes---one pass along each dimension of the image---such that each pass performs only a subset of the work. %
As \cref{fig:cropv1} shows, the first phase extracts the horizontal strip containing the object; the second phase extracts the object from the horizontal strip.
\bubble{1} Instead of sliding a window (of size $p \times q$) across the frame (of size $m \times n$), we use a horizontal strip of $m \times q$ that has width $m$ equal to that of the frame, and height $q$ equal to that of the object.
We slide the strip vertically down the frame {\em row by row}.
If the top and bottom edges of the strip are aligned with the object, we copy all pixels covered by the strip into the buffer; otherwise, we perform dummy copies.
\iffull
This phase results in $mq(n-q)$ pixel copies.
\fi
\bubble{2} We allocate a window of size $p\times q$ equal to the object's size and then slide it {\em column by column} across the extracted strip in \bubble{1}. If the left and right edges of the window are aligned with the object's bounding box, we copy the window's pixels into the buffer; if not, we perform dummy copies.
\iffull
This phase performs $pq(m-p)$ pixel copies.
\fi
\ifextended
\cref{alg:crop} in \cref{s:proofs:algorithms} provides the detailed steps.
\fi
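The two passes can be sketched as follows. This is an illustrative Python version under simplifying assumptions: `o_select` stands in for the oblivious assignment primitive, and in a real implementation the position comparisons themselves would also be computed branchlessly rather than with Python `if`-style equality:

```python
def o_select(cond, a, b):
    # Stand-in for an oblivious conditional assignment: cond is 0 or 1,
    # and the same arithmetic runs regardless of cond's value.
    return b + cond * (a - b)

def oblivious_crop(frame, top, left, q, p):
    """Two-pass crop of a q-row by p-column object (sketch).

    frame is a list of n rows of m pixels; (top, left) is the secret
    position of the object's bounding box.  Every candidate position
    triggers the same number of copies, real or dummy.
    """
    n, m = len(frame), len(frame[0])
    # Pass 1: slide an m x q strip down the frame, row by row.
    strip = [[0] * m for _ in range(q)]
    for r in range(n - q + 1):
        hit = 1 if r == top else 0   # would be a branchless compare in practice
        for i in range(q):
            for j in range(m):
                strip[i][j] = o_select(hit, frame[r + i][j], strip[i][j])
    # Pass 2: slide a p x q window across the strip, column by column.
    out = [[0] * p for _ in range(q)]
    for c in range(m - p + 1):
        hit = 1 if c == left else 0
        for i in range(q):
            for j in range(p):
                out[i][j] = o_select(hit, strip[i][c + j], out[i][j])
    return out
```

Each inner loop touches the same pixels in the same order for every candidate position, which is what removes the position leakage.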
\subsubsection{Hiding object dimensions}\label{s:algorithms:cropping:size}
The algorithm in \cref{s:algorithms:cropping:positions} leaks the dimensions $p\times q$ of the objects.
To hide object dimensions, Visor\xspace takes as input parameters $P$ and $Q$ representing upper bounds on object dimensions (as described in \cref{s:system:parameters}, \cref{table:parameters}),
and instead of cropping out the exact $p \times q$ object, %
we obliviously crop out a larger image of size $P \times Q$ that \emph{subsumes} the object. While the object sizes vary depending on their position in the frame (e.g., near or far from the camera), the maximum values ($P$ and $Q$) can be learned from profiling just a few sample minutes of the video, and they tend to remain unchanged in our datasets.
This larger image now contains extraneous pixels surrounding the object, which might lead to errors during the CNN's object classification.
We remove the extraneous pixels surrounding the $p \times q$ object by obliviously {scaling it up} to fill the $P \times Q$ buffer. Note that all objects we send to the CNN across the CPU-GPU channel are of size $P \times Q$ (\cref{s:system:communication}), and recall from \cref{s:system:architecture} that we extract the same number of objects from each frame (by padding dummy objects, if needed).
We develop an oblivious routine for scaling up using bilinear interpolation%
~\cite{Fundamentals:Jain:1989}.
Bilinear interpolation computes the value of a pixel in the scaled up image using a linear combination of a $2\times2$ array of pixels from the original image (see \cref{fig:resize:v1}).
\iffull
The simplest way to implement this routine obliviously is to fetch the $4$ pixel values obliviously using \code{oaccess} for each pixel in the scaled up image.
This would entail $PQ$ scans of the entire image, yielding a total of $O(P^2Q^2)$ pixel accesses.
\fi
We once again use decoupling of the algorithm into two passes to improve its efficiency (\cref{fig:resize:v2}) by scaling up along a single dimension per pass.
\iffull
The two passes perform a total of $O(P^2 Q + PQ^2)$ pixel accesses, improving asymptotic performance over the $O(P^2Q^2)$ algorithm.
\fi
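The separable structure of the two passes can be sketched as follows. This shows only the interpolation math (a non-oblivious reference; the oblivious version would visit pixels in a fixed, data-independent order), and it already incorporates the transpose trick described next so that both passes are row-wise:

```python
def upscale_1d(row, new_len):
    """Linearly interpolate a 1-D sequence of samples up to new_len samples."""
    old_len = len(row)
    if new_len == 1:
        return [row[0]]
    out = []
    for i in range(new_len):
        pos = i * (old_len - 1) / (new_len - 1)   # position in the source row
        lo = int(pos)
        hi = min(lo + 1, old_len - 1)
        frac = pos - lo
        out.append(row[lo] * (1 - frac) + row[hi] * frac)
    return out

def bilinear_upscale(img, P, Q):
    """Separable bilinear scale-up: horizontal pass, then vertical pass."""
    # Pass 1: stretch each row from p to P samples.
    horiz = [upscale_1d(r, P) for r in img]
    # Pass 2: transpose so the column interpolations also run row-wise
    # (the cache-locality optimization), stretch to Q, transpose back.
    transposed = list(map(list, zip(*horiz)))
    scaled_cols = [upscale_1d(c, Q) for c in transposed]
    return list(map(list, zip(*scaled_cols)))
```

Splitting the 2-D interpolation into two 1-D passes is what reduces the work from the quadratic-in-both-dimensions variant to one pass per dimension.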
\paragraph{Cache locality}
Since the second pass of our (decoupled bilinear interpolation) algorithm performs column-wise interpolations, each pixel access during the interpolation touches a different cache line.
To exploit cache locality, we \emph{transpose} the image before the second pass, and make the second pass also perform {\em row-wise} interpolations (as in the first pass).
This results in another order of magnitude speedup (\cref{s:eval:cropping}).
\subsection{Object Tracking}\label{s:algorithms:tracking}
Object tracking consists of two main steps: feature detection in {\em each frame} and feature matching {\em across frames}.
\paragraph{Feature detection} SIFT \cite{SIFT:Lowe:1999,SIFT:Lowe:2004} is a popular algorithm for extracting features for {\em keypoints}, i.e., pixels that are the most ``valuable'' in the frame. In a nutshell, it generates candidate keypoints, where each candidate is a local maxima/minima; the candidates are then filtered to get the legitimate keypoints.%
Based on the access patterns of the SIFT algorithm, an attacker can infer the locations of all the keypoints in the image, which in turn, can reveal the location of all object ``corners'' in the image. A na\"ive way of making the algorithm oblivious is to treat each pixel as a keypoint, performing all the above operations for each.
However, the SIFT algorithm's performance depends critically on its ability to filter out a small set of good keypoints from the frame.
To be oblivious {\em and} efficient, Visor\xspace takes as input two parameters $N_\code{temp}$ and $N$ (per \cref{table:parameters}).
The parameter $N_\code{temp}$ represents an upper bound on the number of candidate keypoints, and $N$ on the number of legitimate keypoints.
These parameters, coupled with \code{oassign} and \code{osort}, allow for efficient and oblivious identification of keypoints.
Finally, computing the {\em feature descriptors} for each keypoint requires accessing the pixels around it. For this, we use oblivious extraction (\cref{s:algorithms:cropping}).
\ifextended
Algorithm \ref{alg:featuredet} in \cref{s:proofs:algorithms} provides the pseudocode.%
\fi
\paragraph{Feature matching}
The next step after detecting features is to match them across images.
Feature matching computes a distance metric between two sets of features, and identifies features that are ``nearest'' to each other in the two sets.
In Visor\xspace, we simply perform brute-force matching of the two sets, using \code{oassign} operations to select the closest features.%
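The brute-force matching loop can be sketched as follows. This is an illustrative Python version with hypothetical names; the `closer` select mimics the oblivious conditional assignment, though here the comparison itself is written plainly for readability:

```python
def match_features(set_a, set_b):
    """For each feature in set_a, scan all of set_b and keep the nearest
    neighbor, updating the running best via a branchless-style select."""
    def dist2(u, v):
        # squared Euclidean distance between two descriptors
        return sum((x - y) ** 2 for x, y in zip(u, v))

    matches = []
    for fa in set_a:
        best_j, best_d = 0, dist2(fa, set_b[0])
        for j in range(1, len(set_b)):
            d = dist2(fa, set_b[j])
            closer = 1 if d < best_d else 0   # branchless compare in practice
            best_d = best_d + closer * (d - best_d)
            best_j = best_j + closer * (j - best_j)
        matches.append(best_j)
    return matches
```

Because every descriptor in `set_b` is touched for every descriptor in `set_a`, the access pattern is independent of which features actually match.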
\subsection{CPU-GPU Communication} \label{s:system:communication}
Although the CPU-GPU channel in \cref{fig:architecture} transfers encrypted objects, Visor\xspace needs to ensure that its traffic patterns are independent of the video content.
Otherwise, an attacker observing the channel can infer the processing rate of objects, and hence the number (and size) of the detected objects in each frame.
To address this leakage, Visor\xspace ensures that
\begin{enumerate*}[($i$)]
\item the CPU TEE transfers the same number of objects to the GPU per frame, and
\item CNN inference runs at a fixed rate (or batch size) in the GPU TEE.
\end{enumerate*} %
Crucially, Visor\xspace ensures that the CNN processes as few {\em dummy objects} as possible. While our description focuses on \cref{fig:pipeline:resnet} to hide the processing rate of {\em objects of a frame} on the GPU, our techniques directly apply to the pipeline of \cref{fig:pipeline:yolo} to hide the processing rate of complete frames using {\em dummy frames}.
Since the CPU TEE already extracts a fixed number of objects per frame (say $k_\code{max}$) for obliviousness, we enforce an inference rate of $k_\code{max}$ for the CNN as well, regardless of the number of \emph{actual} objects in each frame (say $k$).
The upper bound $k_\code{max}$ is easy to learn for each video stream in practice.
However, this leads to a wastage of GPU resources, which must now also run inference on $(k_\code{max} - k)$ dummy objects per frame.
To limit this wastage, we develop an oblivious protocol that leads to processing as few dummy objects as possible.
\paragraph{Oblivious protocol} Visor\xspace runs CNN inference on $k^\prime$ ($\ll k_\code{max}$) objects per frame. %
Visor\xspace's CPU pipeline extracts $k_\code{max}$ objects from each frame (extracting dummy objects if needed) and pushes them into the head of the circular buffer (\cref{fig:architecture}). %
At a fixed rate (\eg once per frame, or every 33ms for a 30fps video), $k^\prime$ objects are dequeued from the \emph{tail} of the buffer and sent to the GPU that runs inference on all $k^\prime$ objects.
We reduce the number of dummy objects processed by the GPU as follows. %
We sort the buffer using \code{osort} in ascending order of ``priority'' values (dummy objects are assigned lower priority), thus moving dummy objects to the \emph{head} of the buffer and actual objects to the \emph{tail}. Dequeuing from the tail of the buffer ensures that actual objects are processed first, and that %
dummy objects at the head of the buffer are likely {\em overwritten} %
before being sent to the GPU.
The circular buffer's size is set large enough to avoid overwriting actual objects.%
The consumption (or inference) rate $k^\prime$ should be set relative to the actual number of objects that occur in the frames of the video stream. Too high a value of $k^\prime$ results in GPU wastage due to dummy inferences, while too low a value leads to delay in the processing of the objects in the frame (and potentially overwriting them in the circular buffer). In our experiments, we use a value of $k^\prime = 2 \times k_\code{avg}$ ($k_\code{avg}$ is the average number of objects in a frame) that leads to little delay and wastage.
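The enqueue/dequeue protocol can be sketched as follows. This is a simplified Python illustration (an ordinary list rather than a fixed-capacity circular buffer, so the overwriting of stale dummies is not shown), and the sort stands in for the oblivious `osort` of the real system:

```python
DUMMY, REAL = 0, 1   # priority values; dummies sort first

def enqueue_frame(buffer, objects, k_max):
    """Push exactly k_max objects per frame, padding with dummies."""
    padded = objects + [(DUMMY, None)] * (k_max - len(objects))
    buffer.extend(padded[:k_max])

def dequeue_for_gpu(buffer, k_prime):
    """Sort by priority so dummies sit at the head and real objects at the
    tail, then dequeue k_prime objects from the tail for CNN inference."""
    buffer.sort(key=lambda obj: obj[0])   # stand-in for the oblivious osort
    return [buffer.pop() for _ in range(min(k_prime, len(buffer)))]
```

The GPU-facing side always sees exactly `k_max` objects enqueued per frame and `k_prime` dequeued per interval, so the channel traffic is independent of the frame's actual content.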
\paragraph{Bandwidth consumption}
The increase in traffic on the CPU-GPU PCIe bus (\cref{fig:architecture}) due to additional dummy objects for obliviousness is not an issue because the bus is not bandwidth-constrained.
Even with Visor\xspace's oblivious video pipelines, we measure the data rate to be $<$\SI{70}{\mega\byte/\second}, in contrast to the several \SI{}{\giga\byte/\second} available in PCIe interconnects.
\subsection{CNN Classification on the GPU}\label{s:algorithms:cnn}
The CNN processes identically-sized objects at a fixed rate on the GPU.
The vast majority of CNN operations, such as matrix multiplications, have inherently input-independent access patterns~\cite{Ohrimenko:ObliviousML, Privado:ObliviousML}.
The operations that are \emph{not} oblivious can be categorized as conditional assignments.
For instance, the ReLU function, when given an input $x$, replaces $x$ with $\max(0, x)$; likewise, the max-pooling layer replaces each value within a square input array with its maximum value.
Oblivious implementation of the $\max$ operator may use CUDA \code{max}/\code{fmax} intrinsics for integers/floats, which get compiled to \code{IMNMX}/\code{FMNMX} instructions~\cite{Cuda:ISA} that execute the $\max$ operation branchlessly. This ensures that the code is free of data-dependent accesses, making CNN inference oblivious. %
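The effect of such a branchless max can be illustrated in pure arithmetic, without intrinsics. The following Python sketch (valid for fixed-width two's-complement inputs whose difference does not overflow) derives the selection bit from the sign of the difference, so no data-dependent branch is taken:

```python
def branchless_max(x, y, bits=32):
    """Branchless max for in-range two's-complement integers (sketch).
    Mirrors what a min/max instruction achieves: no data-dependent branch."""
    diff = (x - y) & ((1 << bits) - 1)
    sign = (diff >> (bits - 1)) & 1      # 1 if x < y (for in-range inputs)
    return y * sign + x * (1 - sign)

def relu(x):
    # ReLU expressed as a branchless max with 0
    return branchless_max(x, 0)
```

In the actual GPU code this arithmetic is unnecessary, since the compiled min/max instructions are already branch-free.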
\section{Related Work}\label{s:relatedwork}
To the best of our knowledge, Visor\xspace is the first system for the secure execution of vision pipelines. We discuss prior work related to various aspects of Visor\xspace.
\paragraph{Video processing systems} A wide range of optimizations have been proposed to improve the efficiency of video analytic pipelines~\cite{Focus:OSDI18, NoScope, Chameleon:Sigcomm, VideoStorm}. %
These systems offer different design points for enabling trade-offs between performance and accuracy.
Their techniques are complementary to Visor\xspace which can benefit from their performance efficiency.
\paragraph{Data-oblivious techniques}
Eppstein \etal~\cite{Eppstein:2010:Oblivious:Geometric} develop data-oblivious algorithms for geometric computations.
Ohrimenko \etal~\cite{Ohrimenko:ObliviousML} propose data-oblivious machine learning algorithms running inside CPU TEEs.
These works are similar in spirit to Visor\xspace, but are not applicable to our setting.
Oblivious RAM~\cite{ORAM:GR96} is a general-purpose cryptographic solution for eliminating access-pattern leakage.
While recent advancements have reduced its computational overhead~\cite{StefanovDSFRYD13}, it still remains several orders of magnitude more expensive than customized solutions.
Oblix~\cite{Oblix} and Zerotrace~\cite{Zerotrace:Sasy} enable ORAM support for applications running within hardware enclaves, but have similar limitations.
Various systems~\cite{Raccoon, ghostrider, Obfuscuro, PAO:Sinha:2017, HOP:Nayak, Phantom, Wu:ISSTA:2018, Fact:Cauligi} also offer generic solutions for hiding access patterns at different levels, with the help of ORAM, specialized hardware, or compiler-based techniques.
Generic solutions, however, are less efficient than customized solutions (such as Visor\xspace) which can exploit algorithmic patterns for greater efficiency.
\paragraph{Side-channel defenses for TEEs}
Visor\xspace provides systemic protection against attacks that exploit access pattern leakage in enclaves.
Systems for data-oblivious execution (such as Obfuscuro~\cite{Obfuscuro} and Raccoon~\cite{Raccoon}) provide similar levels of security for general-purpose workloads, while Visor\xspace is tailored to vision pipelines.
In contrast, a variety of defenses have also been proposed to detect~\cite{dejavu} or mitigate \emph{specific} classes of access-pattern leakage.
For example, Cloak~\cite{cloak}, Varys~\cite{SGX:defense:Varys}, and Hyperrace~\cite{SGX:defense:Hyperrace} target cache-based attacks; while T-SGX~\cite{TSGX} and Shinde \etal~\cite{Shinde2016} propose defenses for paging-based attacks.
DR.SGX~\cite{Brasser:DRSGX} mitigates access pattern leakage by frequently re-randomizing data locations, but can leak information if the enclave program makes predictable memory accesses.
Telekine~\cite{telekine} mitigates side-channels in GPU TEEs induced by CPU-GPU communication patterns, similar to Visor\xspace's oblivious CPU-GPU communication protocol (though the latter is specific to Visor\xspace's use case).
\drop{
\paragraph{SGX side-channel defenses}
A variety of defenses have been proposed to detect~\mbox{\cite{dejavu}} or mitigate side-channel attacks on SGX, such as cache attacks~\mbox{\cite{cloak, SGX:defense:Varys}} and paging-based attacks~\mbox{\cite{TSGX, Shinde2016}}. These solutions, however, do not provide systemic protection against leakage due to access patterns, and only focus on mitigating specific side-channels.
DR.SGX~\mbox{\cite{Brasser:DRSGX}} aims to mitigate access pattern leakage by frequently re-randomizing data locations, but still leaks information if the enclave program makes predictable memory accesses.
}
\paragraph{Secure inference}
Several recent works propose cryptographic solutions for CNN inference~\cite{Gazelle, Minionn, CryptoNets, Deepsecure, Riazi:Chameleon:ML} relying on homomorphic encryption and/or secure multi-party computation \cite{Yao:GC}. While cryptographic approaches avoid the pitfalls of TEE-based CNN inference, the latter remains faster by orders of magnitude~\cite{Slalom:Tramer, Chiron:Hunt}.
\section{Designing Oblivious Vision Modules}
\label{s:obl_overview}
Na\"ive approaches and generic tools for oblivious execution of vision modules can lead to prohibitive performance overheads. For instance, a na\"ive approach for implementing oblivious versions of CPU video analytics modules (as in \cref{fig:pipeline}) is to simply rewrite them using the oblivious primitives outlined in \cref{s:background:primitives}.
Such an approach:
\begin{enumerate*}[(i)]
\item eliminates all branches and replaces conditional statements with \code{oassign} operations to prevent control flow leakage via access patterns to code,
\item implements all array accesses via \code{oaccess} to prevent leakage via memory accesses to data, and
\item performs all iterations for a fixed number of times while executing dummy operations when needed.
\end{enumerate*}
The simplicity of this approach, however, comes at the cost of high overheads: two to three orders of magnitude.
Furthermore, as we show in \cref{s:evaluation:related}, generic tools for executing programs obliviously such as Raccoon~\cite{Raccoon} and Obfuscuro~\cite{Obfuscuro} also have massive overheads---six to seven orders of magnitude.
Instead, we demonstrate that by carefully crafting oblivious vision modules using the primitives outlined in \cref{s:background:primitives}, Visor\xspace improves performance over na\"ive approaches by several orders of magnitude.
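As a concrete illustration of the kind of primitive these modules are built from, a constant-time conditional assignment (the role played by the oblivious-assignment primitive) can be sketched in Python for fixed-width integers; this is an assumed, simplified form, whereas a real implementation would rely on conditional-move-style instructions:

```python
def oassign(cond, a, b, bits=32):
    """Constant-time select: returns a if cond == 1, else b, without
    branching on cond (sketch for unsigned fixed-width integers)."""
    full = (1 << bits) - 1
    mask = (-cond) & full                 # all-ones if cond == 1, else zero
    return (a & mask) | (b & ~mask & full)
```

Both operands are always read and combined, so the memory accesses and the instruction stream are identical whichever value is selected.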
In the remainder of this section, we present an overview of our design strategy, before diving into the detailed design of our algorithms in \cref{s:decoding} and \cref{s:algorithms}.
\subsection{Design Strategy}
Our overarching goal is to transform each algorithm into a pattern that processes each pixel identically, regardless of the pixel's value.
To apply this design pattern efficiently, we devise a set of algorithmic and systemic optimization strategies.
These strategies are informed by the properties of vision modules, as follows.
\paragraph{1) Divide-and-conquer for improving performance}
We break down each vision algorithm into independent subroutines based on their functionality and make each subroutine oblivious individually. %
Intuitively, this strategy improves performance by
\begin{enumerate*}[(i)]
\item allowing us to tailor each subroutine separately, and
\item preventing the overheads of obliviousness from getting compounded.
\end{enumerate*}
\paragraph{2) Scan-based sequential processing}
Data-oblivious processing of images demands that each pixel in the image be indistinguishable from the others.
This requirement presents an opportunity to revisit the design of sequential image processing algorithms.
Instead of simply rewriting existing algorithms using the data-oblivious primitives from \cref{s:background:primitives},
we find that recasting the algorithm into a form that scans the image, while applying the same functionality to each pixel, yields superior performance.
Intuitively, this is because any non-sequential pixel access implicitly requires a scan of the image for obliviousness (\eg using \code{oaccess}); therefore, by transforming the algorithm into a scan-based algorithm, we get rid of such non-sequential accesses.
\paragraph{3) Amortize cost across groups of pixels}
Processing each pixel in an identical manner lends itself naturally to optimization strategies that enable batched computation over pixels---\eg the use of data-parallel (SIMD) instructions.
\smallskip
\noindent In Visor\xspace, we follow the general strategy above to design oblivious versions of popular vision modules that can be composed and reused across diverse pipelines. %
However, our strategy can potentially help inform the design of other oblivious vision modules as well, beyond the ones we consider.
\subsection{Input Parameters for Oblivious Algorithms}\label{s:system:parameters}
Our oblivious algorithms rely on a set of public input parameters that need to be provided to Visor\xspace before the deployment of the video pipelines.
These parameters represent various upper bounds on the properties of the video stream, such as the maximum number of objects per frame, or the maximum size of each object.
\cref{table:parameters} summarizes the list of input parameters across all the modules of the vision pipeline.
\begin{figure}
\centering
\small
\begin{tabular}[t]{p{2.9cm}|p{4.7cm}}
\thickhline
Component & Input parameters \T\B \\
\thickhline
Video decoding (\cref{s:decoding}) &
Number of bits used to encode each (padded) row of blocks;
%
%
%
%
%
\\\hline
Background sub. (\cref{s:algorithms:bgs}) & -- \\\hline
Bounding box detection (\cref{s:algorithms:objdet}) &
\begin{enumerate*}[($i$)]
\item Maximum number of objects per image; \item Maximum number of different labels that can be assigned to pixels (an object consists of all labels that are adjacent to each other).
\end{enumerate*}\\\hline
Object cropping (\cref{s:algorithms:cropping}) &
%
Upper bounds on object dimensions.
%
%
\\\hline
Object tracking (\cref{s:algorithms:tracking}) &
\begin{enumerate*}[($i$)]
\item An upper bound on the intermediate number of features;
\item An upper bound on the total number of features.
\end{enumerate*}\\\hline
CNN Inference (\cref{s:algorithms:cnn}) & -- \\%\hline
%
\thickhline
\end{tabular}
\caption{Public input parameters in Visor\xspace's oblivious modules.
%
}
\label{table:parameters}
\end{figure}
There are multiple ways by which these parameters may be determined.
\begin{enumerate*}[($i$)]
\item The model owner may obtain these parameters simultaneously while training the model on a public dataset.
\item The client may perform offline empirical analysis of their video streams and choose a reasonable set of parameters.
\item Visor\xspace may also be augmented to compute these parameters dynamically, based on historical data (though we do not implement this).
\end{enumerate*}
We note that providing these parameters is not strictly necessary, but meaningful parameters can significantly improve the performance of our algorithms.
\section{Threat Model and Security Guarantees}\label{s:threatmodel}
We describe the attacker's capabilities and lay out the attacks that are in scope and out of scope for our work. %
\subsection{Hardware Enclaves and Side-Channels}\label{s:threatmodel:attacks}
Our trusted computing base includes:
\begin{enumerate*}[($i$)]
\item the GPU package and its enclave implementation,
\item the CPU package and its enclave implementation, and
\item the video analytics pipeline implementation and GPU runtime hosted in the CPU enclave.
\end{enumerate*}
The design of Visor\xspace is not tied to any specific hardware enclave; instead, Visor\xspace builds
on top of an {\em abstract} model of hardware enclaves where the attacker controls the server’s software stack outside the enclave (including the OS), but cannot perform any attacks to glean information from inside the processor (including processor keys).
The attacker can additionally observe the contents and access patterns of all (encrypted) pages in memory, for both data and code.
We assume that the attacker can observe the enclave's memory access patterns at cache line granularity~\cite{Ohrimenko:ObliviousML}.
Note that our attacker model includes the cloud service provider as well as other co-tenants.
We instantiate Visor\xspace with the widely-deployed Intel SGX enclave. However, recent attacks show that SGX does not quite satisfy the abstract enclave model that Visor\xspace requires. For example, attackers may be able to distinguish \emph{intra} cache line memory accesses~\cite{sgxattacks:CacheBleed,sgxattacks:Memjam}.
In Visor\xspace, we mitigate these attacks by disabling hyperthreading in the underlying system, disallowing attackers from observing intra-core side-channels; clients can verify that hyperthreading is disabled during remote attestation~\cite{SGX:IAS}.
One may also employ complementary solutions for closing hyperthreading-based attacks~\cite{SGX:defense:Varys,SGX:defense:Hyperrace}.
Other attacks that violate our abstract enclave model are out of scope:
such as attacks based on timing analysis or %
power consumption~\cite{SGX:attack:Plundervolt,Attack:Clkscrew}, DoS attacks~\cite{SGXattack:rowhammer:SGXbomb:DOS:Jang,SGX:attack:rowhammer}, or rollback attacks~\cite{Memoir:rollback} (which have complementary solutions~\cite{SGX:LCM:defense:rollback:Brandenburger:2018, SGX:ROTE:defense:rollback:Matetic:2017}).
Transient execution attacks (\eg \cite{sgxattacks-foreshadow,SGX:attack:ZombieLoad,sgxattacks-sgxpectre,vanbulck2020lvi,crosstalk,MDS:attack:RIDL,cacheOut}) are also out of scope;
these attacks violate the threat model of SGX and are typically patched promptly by the vendor via microcode updates.
In the future, one could swap out Intel SGX in our implementation for upcoming enclaves such as MI6~\cite{mi6} and Keystone~\cite{keystone} that address many of the above drawbacks of SGX.
Visor\xspace provides protection against {\em any channel of attack that exploits data-dependent access patterns} within our abstract enclave model, which represent a large class of known attacks on enclaves
(\eg cache attacks~\cite{sgxcache-gotzfried, sgxcache-brasser, sgxcache-schwarz, sgxcache-moghimi, sgxattacks-hahnel:cache}, branch prediction~\cite{sgxattacks-lee:branches}, paging-based attacks~\cite{sgxattacks-xu:pagefaults, bulck-sgxattack:pagefaults}, or memory bus snooping~\cite{membuster}).
We note that even if co-tenancy is disabled (which comes at considerable expense), privileged software such as the OS and hypervisor can still infer access patterns (\eg by monitoring page faults), thus still requiring data-oblivious solutions.
Recent work has shown side-channel leakage on GPUs~\cite{GPU:Covert:Naghibijouybari:2017, GPU:Side:Naghibijouybari:2018, GPU:Timing:Jiang:2017, GPU:Timing:Jiang:2016}, including the exploitation of data access patterns out of the GPU. We expect similar attacks to be mounted on GPU {\em enclaves} as video and ML workloads gain in popularity, and our threat model applies to GPU enclaves as well.
\ifpadding
\else
Finally, the TLS traffic from the camera leaks the variation in bitrate of the video to an attacker observing the network.
Padding the video segments {\em at the camera} addresses this leakage \cite{Schuster:Video:Attack}, and this is complementary to Visor\xspace's threat model {\em in the cloud}.
For completeness, we measure the impact of such padding on Visor\xspace's performance in \cref{s:padding}.
\fi
\subsection{Video Streams and CNN Model}
Each client owns its video streams, and it expects to protect its video from the cloud and co-tenants of the video analytics service. %
The vision algorithms are assumed to be public.
We assume that the CNN model's architecture is public, but its weights are private and may be proprietary to either the client or the cloud service.
Visor\xspace protects the weights in both scenarios within enclaves, in accordance with the threat model and guarantees from \cref{s:threatmodel:attacks};
however, when the weights are proprietary to the cloud service, the client may be able to learn some information about the weights by analyzing the results of the pipeline~\cite{ML:model:predictionAPI:Tramer:2016,ML:inversion:Fredrikson:2015,ML:inversion:pharma:Fredrikson:2014}.
Such attacks are out of scope for Visor.
\drop{
The CNN model has two deployment scenarios. In both scenarios, we assume that the model's architecture is public.
In one scenario, the client owns the model weights and analyzes the videos on the cloud due to the availability of richer compute that is unavailable and expensive to provision on-premise. However, the client wishes to conceal the weights from the cloud provider.
In the other scenario, the cloud provider owns the model weights and wishes to conceal anything about the weights \emph{from the clients}, beyond what can be inferred from the model's results. Visor\xspace protects both the deployment scenarios for CNNs.
Protecting against attacks~\mbox{\cite{Abadi:MLdefense:DP, Nasr:MLdefense:Regularization:2018, Iyengar:MLdefense:DP:2019}} that use a model's results to extract its weights~\mbox{\cite{ML:model:predictionAPI:Tramer:2016, ML:model:reverse:Oh:2018, ML:model:hyperparameters:Wang:2018}} or its training data~\mbox{\cite{ML:data:membership:Shokri:2017, ML:data:membership:Salem:2019, ML:data:remember:Song:2017, ML:inversion:Fredrikson:2015, ML:inversion:pharma:Fredrikson:2014}}, is complementary to Visor\xspace.
}
\ifpadding
Finally, recent work has shown that the camera's encrypted network traffic leaks the video's bitrate variation to an attacker observing the network~\cite{Schuster:Video:Attack}, which may consequently leak information about the video contents.
Visor\xspace eliminates this leakage by padding the video segments {\em at the camera}, in such a way that optimizes the latency of decoding the padded stream at the cloud~(\S\ref{s:padding}).
\fi
\subsection{Provable Guarantees for Data-Obliviousness} %
Visor\xspace provides {\em data-obliviousness}
within our abstract enclave model from \cref{s:threatmodel:attacks}, which guarantees that the memory access patterns of enclave code do not reveal any information about sensitive data.
We rely on the enclaves themselves to provide integrity, along with authenticated encryption.
We formulate the guarantees of data-obliviousness using the ``simulation paradigm'' \cite{Goldreich2004:Vol1}.
First, we define a {\em trace of observations} that the attacker sees in our threat model. Then, we define the {\em public information}, \ie information we do not attempt to hide and is known to the attacker. Using these, we argue that there exists a simulator, such that for all videos $V$, when given {\em only} the public information (about $V$ and the video algorithms), the simulator can produce a trace that is indistinguishable from the real trace visible to an attacker who observes the access patterns during Visor\xspace's processing of $V$. By ``indistinguishable'', we mean that no polynomial-time attacker can distinguish between the simulated trace and the real trace observed by the attacker. %
The fact that a simulator can produce the same observations as seen by the attacker {\em even without knowing the private data in the video stream} implies that the attacker does not learn sensitive data about the video.
In our attacker model, the trace of observations is the sequence of the addresses of memory references to code as well as data, along with the accessed data (which is encrypted).
The public information is all of Visor\xspace's algorithms, formatting and sizing information, but not the video data.
For efficiency, Visor\xspace also takes as input some public parameters that represent various upper bounds on the properties of the video streams, \eg, the maximum number of objects per frame, or upper bounds on object dimensions.
\ifextended
In \cref{s:proofs}, we provide a formal definition of data-obliviousness (\cref{def:oblivious}); a summary of public information for each algorithm; and proofs of security along with detailed pseudocode for each algorithm.
Since Visor\xspace's data-oblivious algorithms (\cref{s:decoding} and \cref{s:algorithms}) follow an {\em identical sequence of memory accesses} that depend only on public information and are {\em independent} of data content, our proofs are easy to verify. %
\else
We defer a formal treatment of Visor\xspace's security guarantees---including the definitions and proofs of security, along with detailed pseudocode for each algorithm---to an extended appendix~\cite{visor:extended}.
In summary, we show that Visor\xspace's data-oblivious algorithms (\cref{s:decoding} and \cref{s:algorithms}) follow an \emph{identical sequence of memory accesses} that depend only on public information and are \emph{independent} of data content. %
\fi
\section{Introduction}
Kelvin probe force microscopy (KPFM) is an advanced atomic force microscope (AFM) method that enables the study of electrical properties of a sample with high lateral resolution. For semiconductor samples, these properties include the dopant density, density of surface states, surface charge density, band bending and the work function \cite{saraf2005localAPL,saraf2005localSS,tzeng2006charge,barbet2008surface,tsui2008two,volotsenko2010secondary,arita2014surface,maragliano2014quantifying}. In combination with sample illumination techniques, properties such as the band gap, carrier diffusion length, and recombination rate can be obtained \cite{kronik1999surface,meoded1999direct,streicher2009surface}. The importance of KPFM is reflected in its application in a broad range of popular material science topics, such as new photovoltaic materials \cite{watanabe2014situ,xiao2015giant,yun2015benefit,fuchs2016high}, two-dimensional materials \cite{lee2009interlayer,bussmann2011doping,kim2013work,dumcenco2015large}, nanowires \cite{koren2010measurement,jeong2014quantitative,gupta2015nanoscale},
topological insulators \cite{hao2013fermi}, plasmonic structures \cite{sheldon2014plasmoelectric,gwon2015plasmon}, and photocatalytic systems \cite{hiehata2007local,kittel2016charge}. Reviews of KPFM and its applications can, e.g., be found in Refs. \cite{melitz2011kelvin,sadewasser2012experimental}.
Like the classic vibrating Kelvin probe, the quantity measured with KPFM is generally interpreted as the contact potential difference (CPD) \cite{nonnenmacher1991kelvin,melitz2011kelvin,sadewasser2012experimental}. In the case of semiconductors, the situation is complicated by the possibility of band bending near the surface and the possible presence of surface charges. Application of the CPD interpretation therefore requires careful consideration of these effects on the work function \cite{brattain1953surface,kronik1999surface,saraf2005localAPL,saraf2005localSS,tzeng2006charge,tsui2008two,barbet2008surface,volotsenko2010secondary}. However, Baumgart, Helm, and Schmidt \cite{baumgart2009quantitative,baumgart2011kelvin,baumgart2012quantitativethesis} proposed an alternative interpretation for KPFM on semiconductors, which we will refer to as the BHS interpretation. As we show below, the CPD and BHS interpretations are significantly different. Hence, it is important to determine what the differences between these two interpretations are, and to what extent they are valid \cite{kuriplach2014improved}. This is the main purpose of this work.
This article is organized as follows. In the theory section, we introduce the principles of Kelvin probe measurements, describe the CPD and BHS predictions for \emph{pn}-junctions, and give the relevant expressions for the semiconductor modeling. In the methods sections we describe the computational and experimental details. Finally, in the results and discussion section we explore the general differences between the BHS and CPD interpretations and test them against experimental KPFM results.
\section{Theory}
\subsection{Kelvin probe principles} \label{KPprinciples}
In KPFM, an AFM probe and sample are electrically connected through a voltage source that applies an oscillating potential $V=V_\text{DC}+V_\text{AC}\text{ cos }\omega t$. This causes an oscillation of the electrostatic force per unit area at frequency $\omega$ with amplitude $F_{\omega}$, which is called the first harmonic. KPFM methods can be divided into two main categories: amplitude modulation (AM) and frequency modulation (FM). In closed loop AM-KPFM, $V_\text{DC}$ is adjusted by a feedback loop to the value $V_K$ that nullifies a signal that is proportional to $F_\omega$, i.e.,
\begin{equation}
\left. F_{\omega} \right\vert_{V_\text{DC}=V_{K}}=0.
\label{AMCondition}
\end{equation}
In closed loop FM-KPFM, a signal is nullified that is approximately proportional to the amplitude of the first harmonic of the gradient of the electrostatic force \cite[][Eq. 2.18]{sadewasser2012experimental}, i.e.,
\begin{equation}
\left. \frac{\partial F_{\omega}}{\partial z} \right\vert_{V_\text{DC}=V_{K}}=0.
\label{FMCondition}
\end{equation}
Because the subject of this work is the interpretation and modeling of the quantity $V_K$ obtained with KPFM on semiconductor samples we now introduce the theoretical background of the Kelvin voltage and of its interpretation in terms of the CPD and BHS models.
Upon electric connection of two conducting bodies with different work functions, a potential difference $V_\text{CPD}$ is generated between their surfaces. The work function, $W$, of an object is defined as the energy to bring an electron from the bulk of the object to a position just outside its surface, in the absence of a net charge on the object and any external electric fields originating from other objects. $W$ can vary over the surface of an object with homogeneous bulk properties, because it contains contributions from potential drops at the surface, such as the band bending potential at the semiconductor surface.
To enable a simple theoretical discussion of KPFM measurements, we reduce the problem to one dimension. In this simplified configuration, the connected KPFM probe and sample are positioned opposite each other and form a parallel plate capacitor. Also, in this approximation, each has a single work function. Hence,
\begin{equation}
V_\text{CPD}\equiv (W_s-W_p)/e,
\label{CPDWS}
\end{equation}
where $e$ is the positive elementary charge and $W_s$ and $W_p$ are the sample and probe work function, respectively.
First, we consider the case of ideal conductors with surface properties that are independent of any applied potentials. In this case, the charge on each body is proportional to the total potential difference. We define the feedback voltage as positive when a positive voltage is applied to the sample with respect to the probe. The total net charge per unit area is then
\begin{equation}
\sigma_s=C\left(V-V_\text{CPD}\right).
\label{Qmetal}
\end{equation}
The proportionality constant $C$ is the capacitance per unit area. At a plate distance $z$, $C=\varepsilon/z$ and the electrostatic force per unit area is
\begin{equation}
F=\frac{\sigma_s^2}{2\varepsilon},
\label{Fideal}
\end{equation}
where $\varepsilon$ is the permittivity of the medium in the gap. The first harmonic is then equal to
\begin{equation}
F_{\omega}=\frac{\varepsilon}{z^2}\left(V_\text{DC}-V_\text{CPD}\right)V_\text{AC}.
\label{Fwideal}
\end{equation}
Clearly, in this ideal case we see from Eqs (\ref{AMCondition}) and (\ref{FMCondition}) that both AM- and FM-KPFM methods lead to
\begin{equation}
V_K=V_\text{CPD},
\label{CPDinterp}
\end{equation}
which is the CPD interpretation of $V_K$. In the three-dimensional KPFM geometry, this corresponds to interpreting the measured potential as an approximation for the difference between the work function in a small area of the sample directly underneath the tip and the work function of the tip apex of the probe.
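For the ideal case, the nulling condition can be made concrete with a short numerical sketch. The code below is illustrative only: the gap width, modulation amplitude, and $V_\text{CPD}$ values are arbitrary, and the bisection solver is simply a generic root-finder applied to Eq. (\ref{Fwideal}).

```python
# Illustrative check that nulling the first harmonic recovers V_CPD
# for the ideal parallel-plate case, Eq. (Fwideal). All values arbitrary.
EPS = 8.854e-12  # permittivity of the (vacuum) gap, F/m

def f_omega_ideal(V_DC, V_CPD, V_AC=0.5, z=20e-9, eps=EPS):
    """First harmonic of the electrostatic force per unit area."""
    return (eps / z**2) * (V_DC - V_CPD) * V_AC

def bisect_root(f, a, b, n_iter=200):
    """Generic bisection root-finder; f(a) and f(b) must differ in sign."""
    fa = f(a)
    for _ in range(n_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        if (fa < 0) == (fm < 0):
            a, fa = m, fm
        else:
            b = m
    return 0.5 * (a + b)

# Nulling F_omega returns the contact potential difference:
V_K = bisect_root(lambda V: f_omega_ideal(V, V_CPD=0.3), -2.0, 2.0)
```

Because $F_\omega$ is linear in $V_\text{DC}$ here, the root is exactly $V_\text{CPD}$; the same generic solver can be reused once $C$ becomes voltage dependent.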
In the case of a semiconducting sample (still probed with a metallic probe), the situation becomes more complicated. It is the main purpose of this paper to evaluate how KPFM results for this configuration should be interpreted. One main complication of a semiconducting sample is that electric fields penetrate the sample and influence the charge distribution inside the sample. Equivalently, the conduction and valence bands of the semiconductor are generally at a different position close to the surface than in the bulk, which is the well-known band bending at semiconducting surfaces \cite{kronik1999surface}. As discussed in more detail below, the surface band bending changes the total potential difference between the surfaces of the two bodies. Hence, $\sigma_s$ is no longer simply proportional to the difference between the applied potential and the contact potential difference, as in Eq. (\ref{Qmetal}). Instead, the charge-voltage relation can be described with a voltage-dependent capacitance per unit area, $C(V)$, as
\begin{equation}
\sigma_s=\int_{V_\text{CPD}}^{V} C(V^{\prime })dV^{\prime }.
\label{Qsemi}
\end{equation}
As a result, Eq. (\ref{CPDinterp}) might not be valid for semiconducting samples and hence the CPD interpretation might be wrong. However, we now present a theoretical argument for its validity.
The electrostatic force for a voltage-dependent capacitance can still be written as in Eq. (\ref{Fideal}) \cite{hudlet1995electrostatic}. Combining this expression with Eq. (\ref{Qsemi}), it is clear that without modulation of the potential, i.e., $V_\text{AC}=0$, $F$ will be zero when $V=V_\text{DC}=V_\text{CPD}$. However, this does not necessarily mean that with modulation $F_\omega$ will be nullified by $V_\text{DC}=V_\text{CPD}$. To resolve this issue, we use an approximation similar to that used by Hudlet et al. \cite{hudlet1995electrostatic}. We make a first-order Taylor expansion of $F(V)$ around $V_\text{DC}$ and take the term proportional to $\text{cos }\omega t$ as an approximation for $F_\omega$. This leads to
\begin{equation}
F_{\omega}\approx V_\text{AC}\left.\frac{\partial F}{\partial V}\right\vert_{V=V_\text{DC}}.
\label{Fwfirstorder}
\end{equation}
With (\ref{Fideal}) and (\ref{Qsemi}) this becomes
\begin{align}
F_{\omega}&\approx \frac{V_\text{AC}}{2\varepsilon}\left.\frac{\partial}{\partial V}\left(\int_{V_\text{CPD}}^{V} C(V^{\prime })dV^{\prime }\right)^2 \right\vert_{V=V_\text{DC}} \nonumber \\
&=\frac{V_\text{AC}}{\varepsilon}\left(I\left(V_\text{DC}\right)-I\left(V_\text{CPD}\right)\right)C\left(V_\text{DC}\right),
\label{Fwfirstorder2}
\end{align}
where $I(V)$ is the antiderivative of $C(V)$. Because $C(V)$ is always positive, $I$ is a monotonically increasing function. Hence, this approximation for $F_{\omega}$ is nullified only by $V_\text{DC}=V_\text{CPD}$. This indicates that the CPD interpretation given by Eq. (\ref{CPDinterp}) is (despite a voltage dependence of $C$) valid for KPFM on semiconductors. For FM-KPFM, Eq. (\ref{Fwfirstorder}) merely acquires an additional $\partial / \partial z$ in front, see Eq. (\ref{FMCondition}). Therefore, by the same reasoning, Eq. (\ref{CPDinterp}) would also be valid for FM-KPFM.
In principle, higher order contributions to $F_{\omega}$ can shift $V_{K}$ from $V_\text{CPD}$, but this can be avoided by keeping $V_\text{AC}$ small. A more precise analysis of this effect requires careful consideration of the frequency dependent dynamics of the surface state charge and the space charge layer, which is outside the scope of the present work.
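This monotonicity argument is straightforward to check numerically. In the sketch below, $C(V)$ is an arbitrary positive toy function (not a physical model), $I(V)$ is obtained with the trapezoid rule, and the resulting first-order $F_\omega$ of Eq. (\ref{Fwfirstorder2}) vanishes only at $V_\text{DC}=V_\text{CPD}$:

```python
import math

V_CPD = 0.3  # illustrative contact potential difference, V

def C(V, C0=1e-3):
    """Toy voltage-dependent capacitance per unit area; always positive."""
    return C0 / math.sqrt(1.0 + (V - V_CPD) ** 2)

def I(V, n=2000):
    """Antiderivative of C, fixed so that I(V_CPD) = 0 (trapezoid rule)."""
    h = (V - V_CPD) / n
    s = 0.5 * (C(V_CPD) + C(V))
    for k in range(1, n):
        s += C(V_CPD + k * h)
    return s * h

def f_omega(V_DC, V_AC=0.1, eps=8.854e-12):
    """First-order approximation of F_omega, Eq. (Fwfirstorder2)."""
    return (V_AC / eps) * (I(V_DC) - I(V_CPD)) * C(V_DC)
```

Any other strictly positive $C(V)$ gives the same qualitative result, which is precisely the content of the argument above.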
Baumgart et al. \cite{baumgart2009quantitative,baumgart2011kelvin,baumgart2012quantitativethesis} argued that the CPD interpretation is invalid for semiconductors and proposed the alternative BHS interpretation. To further investigate the merits of both interpretations, we evaluate them for a well defined situation: the potential difference $\Delta V_{K}$ between the \emph{p}- and \emph{n}-sides of \emph{pn}-homojunctions as measured by KPFM:
\begin{equation}
\Delta V_{K} \equiv V_{K,p}-V_{K,n},
\label{DeltaVK}
\end{equation}
where the subscripts $p$ and $n$ indicate that the values are evaluated on the \emph{p}- and \emph{n}-type areas, respectively. In the next two subsections we calculate $\Delta V_{K}$ for the CPD and BHS interpretations.
\subsection{\emph{pn}-junctions in the CPD interpretation}
According to the CPD interpretation:
\begin{equation}
e\Delta V_{K}^\text{CPD}=W_{s,p} - W_{s,n},
\label{pnCPD}
\end{equation}
which is clearly independent of the probe work function. Therefore, we only need to consider the semiconductor work function in the modeling.
Fig. \ref{BandDiagram} shows schematic energy level diagrams of a \emph{p}-type semiconductor with bulk at the l.h.s. and surface at the r.h.s. in the diagrams. Diagram (a) corresponds to a nonzero net charge and (b) to a zero net charge on the semiconductor. As is usual in such energy diagrams, the electron energy increases towards the top of the figure, hence electric potential increases towards the bottom. $E_{F}$ is the Fermi level in the semiconductor and $E_{v}$ and $E_{c}$ are the valence and conduction band energies in the bulk of the semiconductor, respectively. In the presence of surface charges or an external electric field, a so-called space charge region with non-zero net charge forms in the semiconductor just below the surface. This results in a potential difference, $V_{s}$, between the bulk and the surface of the semiconductor, which is called the band bending potential. In addition to the band bending, there is usually a potential step $\phi_{s}$ at the surface of the semiconductor due to a fixed dipole layer on the surface, which can be caused by surface termination or a molecular layer adhered to the surface. We will assume that $\phi_{s}$ is independent of any external electric fields and also that it is equal for the \emph{p}- and \emph{n}-side of the \emph{pn}-junction. $E_l$ is the local vacuum level, defined (following Marshak \cite{marshak1989modeling}) as the energy of an electron at a given point if it were at rest and free from the microscopic potentials of the crystal atomic lattice, but not free from the macroscopic potentials, such as those generated at surfaces or interfaces. The bulk electron affinity, $\chi$, is defined here as the energy required to bring an electron from the conduction band $E_c$ to the local vacuum level in the bulk of the material.
\begin{figure}[t]
\includegraphics[width = 220pt]{BandDiagram_v4}
\caption{\label{BandDiagram}Energy diagram of a semiconductor with (a) a nonzero net charge and (b) a zero net charge. Note that the work function $W_s$ and the related quantity $\widetilde{W}_s$ are defined in the uncharged condition.}
\end{figure}
In Fig. \ref{BandDiagram}(b) the semiconductor has zero net charge, but there is still a band bending. This means that there is charge in the space charge region, which is compensated by surface charges. We label the band bending potential in this uncharged situation with $V_s^0$. For this zero net charge case, the work function $W_s$ is the energy to bring an electron from the Fermi level, $E_F$, to the local vacuum level $E_l$ outside the semiconductor. This leads to $W_s=E_c-E_F+\chi-e\phi_s- eV_s^0$ (note that in the figure $V_s^0$ is positive, while $\phi_s$ is negative). We define
\begin{equation}
\widetilde{W}_s=E_c-E_F-eV_s^0,
\label{EFSurf}
\end{equation}
which is the energy difference between the Fermi level and the conduction band at the surface. Since the fixed surface dipole layer $\phi_s$ is assumed to be equal on both sides of the \emph{pn}-junction, the CPD interpretation for $\Delta V_{K}$ on the \emph{pn}-junctions, Eq. (\ref{pnCPD}), becomes
\begin{align}
e\Delta V_{K}^\text{CPD}=&\widetilde{W}_{s,p}-\widetilde{W}_{s,n}.
\label{pnCPD2}
\end{align}
Hence, $\Delta V_{K}$ can be obtained from the positions of the conduction band level in the bulk with respect to the Fermi level, and $V_s^0$.
\subsection{\emph{pn}-junctions in the BHS interpretation}
We quote the main part of the argument for the BHS model for KPFM on semiconductors from \cite{baumgart2009quantitative} ``\emph{In order to minimize the electrostatic force $F_{el}$ onto the probe, the asymmetric electric-dipole layer has to be removed. This is achieved by injecting majority charge carriers into the surface region in order to screen the unscreened immobile ionized dopant atoms. The charge neutrality condition is only fulfilled when surface states discharge simultaneously}.'' Supposedly, on \emph{n}-type semiconductors this is achieved by applying a potential equal to \cite[][p. 40-41]{baumgart2012quantitativethesis}
\begin{equation}
eV_{K}^\text{BHS}=E_{c}-E_{F} \qquad \text{(\emph{n}-type)}
\label{Baumn}
\end{equation}
and on \emph{p}-type
\begin{equation}
eV_{K}^\text{BHS} =E_{v}-E_{F} \qquad \text{(\emph{p}-type)}.
\label{Baump}
\end{equation}
In addition, they expect a sample-specific potential offset that is, according to Baumgart et al. \cite{baumgart2009quantitative}, independent of the work function of the probe. As a result, this interpretation predicts that $V_K$ is independent of the probe work function, which directly contradicts the CPD interpretation (Eq. (\ref{CPDinterp}) with Eq. (\ref{CPDWS})), which depends linearly on the probe work function.
Direct application of Eqs. (\ref{Baumn}) and (\ref{Baump}) would result in negative values for $\Delta V_{K}$, while in Refs. \cite{baumgart2009quantitative,baumgart2011kelvin,baumgart2012quantitativethesis} they state only positive values. This is achieved by taking absolute values as described in Ref. \cite[][p. 44]{baumgart2012quantitativethesis}. The resulting expression can be written as
\begin{equation}
e\Delta V_{K}^\text{BHS}=E_{c,n}-E_{v,p}.
\label{pnBaum}
\end{equation}
We note that, a priori, there appear to be some issues with the BHS model. In the one-dimensional description, even an asymmetric dipole layer at the semiconductor surface does not cause an electrostatic force. Hence, the argument, that the dipole layer has to be removed in order to minimize the electrostatic force, seems to be invalid, unless taking into account the real geometry somehow justifies this assumption. At the same time, they state as a second condition that charge neutrality has to be fulfilled, which is also the condition underlying the CPD interpretation. However, it is unclear how these two conditions are met simultaneously by Eqs. (\ref{Baumn}) and (\ref{Baump}). In addition, they apparently neglect the 'bulk work function difference', i.e. the work function difference minus the surface contributions, but do not mention why this is allowed. On the other hand, the BHS model seems to work well for the experiments analyzed by Baumgart et al. \cite{baumgart2009quantitative,baumgart2011kelvin,baumgart2012quantitativethesis}, hence it is important to further discuss and test its validity, which we do below.
\subsection{Semiconductor modeling} \label{models}
To predict $\Delta V_K$, we need to find the position of the band edges in the bulk with respect to the Fermi level (for CPD and BHS) and the zero net charge band bending potential $V_s^0$ (for CPD only).
For a non-degenerate \emph{n}-type semiconductor, the \emph{ position of the band edges in the bulk with respect to the Fermi level} can be approximated by solving \cite{sze2006physics}
\begin{equation}
N_{c} \text{exp} \left( - \dfrac{E_{c}-E_{F}}{kT} \right) \approx \dfrac{N_{D}}{1+g_{D}\text{exp}[(E_{F}-E_{D})/kT]},
\label{EFermi}
\end{equation}
where $N_{c}$ is the effective density of states in the conduction band, $N_{D}$ is the donor concentration, $E_{D}$ is the donor level energy and $g_{D}$ is the ground state degeneracy of the donor level. A similar expression can be used for a \emph{p}-type semiconductor.
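As a minimal computational sketch (the bisection solver and bracket choices are our own; parameter values are the fixed Si parameters listed in the methods section), Eq. (\ref{EFermi}) and its \emph{p}-type analogue can be solved as follows, assuming non-degenerate doping:

```python
import math

K_T = 8.617e-5 * 293          # kT at 293 K, eV
N_C, N_V = 2.8e19, 2.65e19    # effective densities of states, cm^-3
E_G = 1.12                    # Si band gap, eV
E_D_P = 1.075                 # P donor level above E_v, eV
E_A_B = 0.045                 # B acceptor level above E_v, eV

def bisect(f, lo, hi, n=200):
    """Simple bisection; f(lo) and f(hi) must differ in sign."""
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        if (f(lo) < 0) == (f(mid) < 0):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def ec_minus_ef(N_D, g_D=2):
    """Solve Eq. (EFermi) for E_c - E_F (eV), n-type P-doped Si."""
    def f(x):  # x = E_c - E_F; E_F - E_D = (E_G - E_D_P) - x
        lhs = N_C * math.exp(-x / K_T)
        rhs = N_D / (1.0 + g_D * math.exp((E_G - E_D_P - x) / K_T))
        return lhs - rhs
    return bisect(f, 1e-4, E_G / 2)

def ef_minus_ev(N_A, g_A=4):
    """Analogous solve for E_F - E_v (eV), p-type B-doped Si."""
    def f(y):  # y = E_F - E_v
        lhs = N_V * math.exp(-y / K_T)
        rhs = N_A / (1.0 + g_A * math.exp((E_A_B - y) / K_T))
        return lhs - rhs
    return bisect(f, 1e-4, E_G / 2)
```

For the example junction discussed in the results section ($N_A(\text{B})=1.27\times10^{18}$~cm$^{-3}$, $N_D(\text{P})=1.49\times10^{16}$~cm$^{-3}$), this gives $E_F-E_v\approx90$~meV and $E_c-E_F\approx191$~meV.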
The \emph{zero net charge band bending potential}, $V_s^0$, is the value of $V_s$ for which the total net charge on the semiconductor is zero, i.e. $\sigma_{s}=0$. $\sigma_{s}$ is the sum of the net charge in the space charge layer $\sigma_{sc}$ and the surface charges. Two types of surface charge densities can be distinguished: a surface state charge density, $\sigma_{ss}$, which depends on the energy between the Fermi level and the band edges at the surface, and a fixed surface charge density, $\sigma_{sf}$. Thus
\begin{equation}
\sigma_{s}=\sigma_{sc}+\sigma_{ss}+\sigma_{sf}.
\label{Sigs}
\end{equation}
For simplicity, we will only use models with either surface states or fixed surface charge, not both at the same time. To be able to do calculations, expressions for the three contributions to $\sigma_{s}$ are needed. In the remainder of this sub-section we discuss these contributions.
We start with the dependence of the \emph{space charge density} $\sigma_{sc}$ on the band bending potential $V_s$. The relation between $V_{s}$ and $\sigma_{sc}$ for a \emph{p}-type semiconductor can be approximated by \cite{sze2006physics}
\begin{equation}
\sigma_{sc}=-\text{sgn}\left[V_{s}\right] \sqrt{2 \varepsilon_s N_{A} k T}G(V_{s}),
\label{Sigsc}
\end{equation}
where $\varepsilon_s$ is the permittivity of the semiconductor, $N_{A}$ is the acceptor concentration, $k$ is the Boltzmann constant, $T$ is the temperature, and
\begin{equation}
G=\sqrt{\text{exp}[-\beta V_{s}]+\beta V_{s}-1+\frac{n_e}{n_h}(\text{exp}[\beta V_{s}]-\beta V_{s}-1)},
\label{G}
\end{equation}
where $\beta=e/kT$, and $n_e$ and $n_h$ are, respectively, the equilibrium electron and hole carrier densities in the bulk. In addition, for non-degenerate \emph{p}-type semiconductors one can use the approximation $n_e/n_h\approx n_{i}^{2}/N_{A}^{2}$, where $n_{i}$ is the intrinsic carrier density. Similar expressions can be used for an \emph{n}-type semiconductor.
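A direct transcription of Eqs. (\ref{Sigsc}) and (\ref{G}) in SI units reads as follows; the dopant concentration used in the test is illustrative, and the sign convention follows Eq. (\ref{Sigsc}):

```python
import math

E_CH = 1.602e-19     # elementary charge, C
K_B = 1.381e-23      # Boltzmann constant, J/K
EPS_S = 1.05e-10     # permittivity of Si, F/m
N_I = 9.65e9 * 1e6   # intrinsic carrier density, m^-3

def sigma_sc(V_s, N_A_cm3, T=293.0):
    """Space-charge density (C/m^2) vs band bending V_s for p-type Si."""
    N_A = N_A_cm3 * 1e6                 # cm^-3 -> m^-3
    beta = E_CH / (K_B * T)
    ratio = (N_I / N_A) ** 2            # n_e/n_h, non-degenerate p-type
    arg = (math.exp(-beta * V_s) + beta * V_s - 1.0
           + ratio * (math.exp(beta * V_s) - beta * V_s - 1.0))
    G = math.sqrt(max(arg, 0.0))        # guard against rounding at V_s ~ 0
    sign = math.copysign(1.0, V_s) if V_s != 0.0 else 0.0
    return -sign * math.sqrt(2.0 * EPS_S * N_A * K_B * T) * G
```

As expected, a depleted \emph{p}-type surface ($V_s>0$) carries a negative space charge from the uncovered ionized acceptors.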
Now we discuss the dependence of the variable \emph{surface state charge density} $\sigma_{ss}$ on the band bending potential $V_s$. Surface states can be donor or acceptor type. Just as the states in the conduction and valence band near the surface, they are shifted by band bending. According to Fermi-Dirac statistics, the charge in acceptor surface states can be written as
\begin{equation}
\sigma_{ss}^A=\int_{E_{v}}^{E_{c}} \dfrac{-en_{ss}^A(E)}{1+\text{exp}[(E-E_{F}-eV_{s})/kT]}dE,
\label{Sigss}
\end{equation}
where $n_{ss}^A(E)$ is the acceptor density of surface states (DOSS) (per unit area and energy) in case of zero band bending, ignoring surface state degeneracy. A similar expression can be used for donor surface states. We will label the combination of donor and acceptor DOSS with $n_{ss}(E)$ and the total number of surface states with $N_{ss}$.
On atomically clean Si, the total number of states $N_{ss}$ can be on the order of the density of surface atoms \cite{allen1962work}, i.e. $10^{15}$~cm$^{-2}$, while on hydrogen terminated Si surfaces it can be as low as $10^{10}$~cm$^{-2}$ \cite{flietner1988spectrum}. Significant variations in the functional dependence $n_{ss}(E)$ on Si have been reported \cite{hasegawa1983electrical}. Often, it is considered to have a U-shape, with acceptor states above and donor states below the minimum density \cite{hasegawa1983electrical,flietner1988spectrum}. However, Gaussian \cite{ragnarsson2000electrical,saraf2005localAPL}, Lorentzian \cite{volotsenko2010secondary}, delta \cite{kronik1999surface,monch2001semiconductor} and constant \cite{bardeen1947surface} functions have also been considered.
To capture the main phenomenology of $n_{ss}\left( E\right)$, we consider three types of DOSS: U-shaped, constant and double Gaussian densities. Fig. \ref{SurfaceStateDist}(a) shows examples of the U-shaped (solid lines) and constant (dotted lines) densities. These consist of donor states in the lower half of the band gap and of acceptor states in the upper half. The U-shaped densities were chosen similar to those presented in Ref. \cite{flietner1988spectrum} for Si/SiO$_{2}$ interfaces with various surface treatments (in particular, for these curves we used $n_{ss}(E)=\alpha \text{ exp}[(E-\beta)^2/\gamma]+\delta$). Fig. \ref{SurfaceStateDist}(b) shows examples of the double Gaussian densities, which have 0.04 eV standard deviation and are centered at $E_{g}/2\pm0.1$~eV (solid lines) and $E_{g}/2\pm0.2$ eV (dotted lines). The Gaussian densities centered below $E_{g}/2$ represent donor states, while those centered above $E_{g}/2$ represent acceptor states. In the region close to the center of the band gap the Gaussian densities are similar to the results obtained by Angermann \cite{angermann2005interface} on an HF etched Si surface. Due to the symmetry of these DOSS models, $\sigma_{ss}$ will be zero when, at the surface, the Fermi level is in the center of the gap.
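For the constant-DOSS model, the occupation integral in Eq. (\ref{Sigss}) can be evaluated with simple quadrature. The sketch below (trapezoid rule; all numerical values are illustrative) computes the charge in constant acceptor states occupying the upper half of the gap:

```python
import math

E_CH = 1.602e-19      # elementary charge, C
K_T = 8.617e-5 * 293  # kT at 293 K, eV
E_G = 1.12            # Si band gap, eV

def sigma_ss_acceptor(ef_minus_ev, V_s, n_ss_cm2eV, n=1000):
    """Charge (C/m^2) in constant acceptor states between E_g/2 and E_g.

    ef_minus_ev : bulk E_F - E_v in eV; V_s : band bending in V;
    n_ss_cm2eV  : constant DOSS in states per cm^2 per eV (illustrative).
    """
    n_ss = n_ss_cm2eV * 1e4          # cm^-2 eV^-1 -> m^-2 eV^-1
    a, b = E_G / 2.0, E_G            # acceptor states in the upper half gap
    h = (b - a) / n
    total = 0.0
    for k in range(n + 1):
        energy = a + k * h           # state energy relative to E_v, eV
        occ = 1.0 / (1.0 + math.exp((energy - ef_minus_ev - V_s) / K_T))
        total += (0.5 if k in (0, n) else 1.0) * occ
    return -E_CH * n_ss * total * h
```

With the Fermi level near $E_c$ the acceptor states are largely filled and carry a sizeable negative charge; with the Fermi level near $E_v$ they are essentially empty.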
\begin{figure}[t]
\includegraphics[width = 220pt]{SurfaceStateDistributions}
\caption{\label{SurfaceStateDist}Model densities of surface states, $n_{ss}(E)$, used in Eq. (\ref{Sigss}). (a) U-shaped (solid lines) and constant (dotted lines) densities consisting of donor states in the lower half of the band gap and of acceptor states in the upper half. (b) Double Gaussian densities, which have 0.04~eV standard deviation and are centered at $E_{g}/2\pm0.1$~eV (solid lines) and $E_{g}/2\pm0.2$~eV (dotted lines). The Gaussians centered below $E_{g}/2$ consists of donor states and the Gaussians centered above $E_{g}/2$ consists of acceptor states. The constant and Gaussian densities in blue, red, orange, purple and green (from the left to right) correspond, respectively, to $N_{ss}=10^{11}$, $10^{12}$, $10^{13}$, $10^{14}$ and $10^{15}$~cm$^{-2}$. The U-shaped densities have the same $n_{ss}$ at $E_g/2$ as the constant densities with the same color, but higher $N_{ss}$.}
\end{figure}
Finally, we will discuss the \emph{fixed surface charge density $\sigma_{sf}$}. Fixed surface charge is known to intrinsically exist at the Si/SiO$_{2}$ interface and to depend on specific sample treatments \cite{deal1967characteristics}. In addition, ions from the environment can deposit on the surface during sample preparation or during measurements \cite{brattain1953surface,okorn1999characterization}. In an experiment, the value of $\sigma_{sf}$ is therefore often unknown. Negative fixed surface charge densities are not often considered, but for completeness we will also consider this possibility. Deposited ions can penetrate the native SiO$_{2}$ present on the Si surface or remain on top of it. In our analysis below, however, we will neglect a possible distance between the Si surface and fixed surface charges.
\section{Methods}\label{methods}
\subsection{Computational methods}\label{CompMethods}
Our calculations according to the BHS interpretation, given by Eqs. (\ref{Baumn}) to (\ref{pnBaum}), only require knowledge of the position of the band edges in the bulk with respect to the Fermi level. This is calculated by numerically solving Eq. (\ref{EFermi}). Calculations according to the CPD interpretation additionally require computation of $V_s^0$. This is done by taking a model DOSS, $n_{ss}(E)$, or a fixed surface charge, $\sigma_{sf}$, and numerically solving $\sigma_s=0$ for $V_s$, using Eqs. (\ref{Sigs}) to (\ref{Sigss}).
We consider five surface models for fitting the CPD interpretation to experimental $\Delta V_K$ obtained on Si \emph{pn}-junctions: a constant $n_{ss}(E)$ with acceptor states in the upper half of the band gap and donor states in the lower half (labeled hereafter as `constant'), an $n_{ss}(E)$ with Gaussian distributed acceptor and donor states centered, respectively, at $\mu=E_{g}/2\pm0.1$~eV and standard deviation of 0.04~eV (labeled `Gauss1'), an $n_{ss}(E)$ with Gaussian distributed acceptor and donor states centered, respectively, at $\mu=E_{g}/2\pm0.2$~eV and also standard deviation of 0.04~eV (labeled `Gauss2'), a positive $\sigma_{sf}$ (labeled `$\sigma_{sf}>0$') and a negative $\sigma_{sf}$ (labeled `$\sigma_{sf}<0$'). We assume that the DOSS or fixed surface charge is the same on both sides of the \emph{pn}-junctions. For a given set of Si bulk parameters, the remaining fit parameter for the Gaussian and constant DOSS models is then $N_{ss}$ and for the fixed surface charge models it is $\sigma_{sf}$. Fitting was performed through iterative adjustment of the fit parameter, until the calculated $\Delta V_K$ was within 1~mV of the experimental value. We have not used the U-shaped DOSSs for fitting to experimental results, because, as shown below, their results are very similar to a constant DOSS, which is simpler to use.
For the fixed Si parameters, we use the values from Ref. \cite{sze2006physics}. These are: $\varepsilon_s=1.05\times 10^{-10}$~F/m, $E_g=1.12$~eV, $N_v=2.65\times 10^{19}$~cm$^{-3}$, $N_c=2.8\times 10^{19}$~cm$^{-3}$, $n_i=9.65\times 10^{9}$~cm$^{-3}$, $g_D=2$, $g_A=4$, $E_D(\text{P})=E_v+1.075$~eV, $E_D(\text{As})=E_v+1.066$~eV, and $E_A(\text{B})=E_v+0.045$~eV. In addition, we assume $T=293$~K.
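The root solve for $V_s^0$ can be sketched for the simplest of these models, the fixed-surface-charge model on \emph{p}-type Si without surface states: solve $\sigma_{sc}(V_s)+\sigma_{sf}=0$ by bisection. The space-charge expression repeats Eqs. (\ref{Sigsc}) and (\ref{G}); the brackets and the dopant concentration are our own illustrative choices:

```python
import math

E_CH = 1.602e-19     # elementary charge, C
K_B = 1.381e-23      # Boltzmann constant, J/K
EPS_S = 1.05e-10     # permittivity of Si, F/m
N_I = 9.65e9 * 1e6   # intrinsic carrier density, m^-3

def sigma_sc_p(V_s, N_A, T=293.0):
    """Space-charge density (C/m^2) for p-type Si, SI units."""
    beta = E_CH / (K_B * T)
    ratio = (N_I / N_A) ** 2
    arg = (math.exp(-beta * V_s) + beta * V_s - 1.0
           + ratio * (math.exp(beta * V_s) - beta * V_s - 1.0))
    sign = math.copysign(1.0, V_s) if V_s != 0.0 else 0.0
    return -sign * math.sqrt(2.0 * EPS_S * N_A * K_B * T * max(arg, 0.0))

def vs0_fixed_charge(sigma_sf, N_A_cm3=5e15, lo=-0.5, hi=0.5):
    """Zero-net-charge band bending V_s^0 (V): root of sigma_sc + sigma_sf."""
    N_A = N_A_cm3 * 1e6  # cm^-3 -> m^-3
    f = lambda V: sigma_sc_p(V, N_A) + sigma_sf
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if (f(lo) < 0) == (f(mid) < 0):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A positive fixed charge of $\sigma_{sf}/e=10^{11}$~cm$^{-2}$ then yields an upward band bending of roughly $0.18$~V for $N_A=5\times10^{15}$~cm$^{-3}$; adding the surface-state charge of Eq. (\ref{Sigss}) to the residual extends the same root solve to the DOSS models.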
\subsection{Experimental methods}
According to the BHS model, given by Eq. (\ref{Baumn}) and (\ref{Baump}), $V_K$ does not depend on the probe work function. To test this, we performed KPFM measurements in air with a Multimode 8 SPM with Nanoscope V controller and Signal Access Module (SAM) (Bruker) with four different probes on \emph{p}-Si, \emph{n}-Si, and Au. For each scan line the topography was first determined with standard tapping mode using amplitude feedback and then retraced with an offset (lift height) of 100~nm while performing closed loop AM-KPFM with excitation at the resonance frequency. Crosstalk was removed by external wiring of the excitation signal \cite{polak2014note}. The measurements were performed in dark, except for the laser beam used for detecting the probe deflection (1 mW, maximum at 690 nm). This beam illuminates the probe from the back side, such that the sample area close to the tip of the probe is shaded from direct illumination.
Si samples were cut from a single-side-polished \emph{p}-type $\langle$100$\rangle$ wafer with $\sim 5\times 10^{15}$~cm$^{-3}$ B dopant concentration and a single-side-polished \emph{n}-type $\langle$100$\rangle$ wafer with $\sim 1\times 10^{15}$~cm$^{-3}$ P dopant concentration. Before cutting, proper electric contact was created on the unpolished side of the wafers. This was done by first removing the native oxide layer through immersion in 1\% aqueous HF, followed by a quick rinse with demineralized (DI) water and drying under nitrogen flow, and then depositing 500~nm Al. On the \emph{n}-type wafer the contact side was additionally \emph{n}$^+$ doped prior to Al deposition. After making the contacts, the wafers were immersed in 1\% aqueous HF for 10~s, quickly rinsed with DI water, dried under nitrogen flow, and then stored in air.
The Au sample was created by magnetron sputtering 100~nm of Au on glass. As probes we used a gold-coated probe (HQ:NSC14/Cr-Au, Micromasch), a PtIr-coated probe (SCMPIT, Bruker), a TiN-coated probe (FMG01/TiN, NT-MDT) and a special KPFM probe, which consists of a silicon tip on a silicon nitride cantilever with proprietary reflective (and conductive) back-side coating (PFQNE-AL, Bruker).
After every two scans the probe was changed and the next probe was put in the same location with roughly 50~$\mu$m accuracy, using an optical microscope with top view. When each probe had been installed and used for taking two scans twice, the next sample was installed and the procedure was repeated. On the \emph{p}-Si sample measurements were performed on two different spots.
\section{Results and discussion}
\subsection{Comparing the BHS and CPD interpretation}
In this section we study predictions of the BHS and CPD interpretation according to the semiconductor models described in section \ref{models} for a wide range of dopant concentrations.
Fig. \ref{BaumFig} shows the variation of $V_K$ on Si as a function of dopant concentration according to the BHS interpretation \footnote{Fig. \ref{BaumFig} is similar to the schematic diagram shown by Baumgart \cite[][Fig. 5.7]{baumgart2012quantitativethesis}.}. The dotted line corresponds to \emph{n}-type P-doped Si and the dashed line to \emph{p}-type B-doped Si. The expected value of $\Delta V_K$ for any Si \emph{pn}-junction can be read from Fig. \ref{BaumFig}. For example, consider a Si \emph{pn}-junction with $N_A$(B)$=1.27\times 10^{18}$~cm$^{-3}$ and $N_D$(P)$=1.49\times 10^{16}$~cm$^{-3}$. The values of $V_{K}$ on the \emph{p}-type and \emph{n}-type side are $-90$~mV and 191~mV and are indicated in Fig. \ref{BaumFig} with a star and circle, respectively. Hence, the predicted $\left\vert\Delta V_K\right\vert$ is 281~mV. Interestingly, from Fig. \ref{BaumFig}, the BHS interpretation predicts a general trend of decreasing $\Delta V_K$ with increasing dopant concentrations.
\begin{figure}[t]
\includegraphics[width = 220pt]{pn_Baum}
\caption{\label{BaumFig}Variation of $V_K$ as a function of dopant concentration according to the BHS interpretation. The circle and star are described in the text as an example \emph{pn}-junction.}
\end{figure}
In the CPD interpretation $\Delta V_K$ can conveniently be expressed in terms of $\widetilde{W}_{s}$, see Eq. (\ref{pnCPD2}), hence we present the results of our calculations in terms of this quantity. Fig. \ref{SurfaceFermi}(a) and (b) show $\widetilde{W}_{s}$ as a function of dopant concentration, calculated for Si with $\sigma_{sf}=0$ and the various model DOSSs shown in Fig. \ref{SurfaceStateDist}(a) and (b), respectively, using identical line colors and types. The lower half of each sub-figure corresponds to \emph{n}-type P-doped Si and the upper half to \emph{p}-type B-doped Si. The black dashed lines correspond to zero band bending, i.e. $V_s=0$, which is the case when there are no surface states.
\begin{figure}[t]
\includegraphics[width = 220pt]{SurfaceFermi_distUConstGaus12}
\caption{\label{SurfaceFermi}$\widetilde{W}_{s}$ as a function of dopant concentration, calculated for Si with $\sigma_{sf}\equiv0$ and several different surface state distributions, $n_{ss}(E)$. The lines in the upper half of each figure correspond to B-doped \emph{p}-type Si and the lines in the lower half to P-doped \emph{n}-type Si. The black dashed lines correspond to zero $n_{ss}$ and, hence, $V_s=0$. The other results in (a) and (b) correspond respectively to the $n_{ss}$ shown in Fig. \ref{SurfaceStateDist}(a) and (b) with the same color and linestyle. The expected value of $\Delta V_K$ in the CPD interpretation for any Si \emph{pn}-junction with these $n_{ss}$ can be obtained from this data using Eq. (\ref{pnCPD2}). The black and red circles and stars in (b) are described in the text as an example \emph{pn}-junction.}
\end{figure}
For each DOSS shown in Fig. \ref{SurfaceStateDist}, the expected value of $\Delta V_K$ for any Si \emph{pn}-junction can be read from Fig. \ref{SurfaceFermi} using Eq. (\ref{pnCPD2}). For example, consider again the same Si \emph{pn}-junction as above. Assuming a DOSS equal to the solid red line in Fig. \ref{SurfaceStateDist}(b), which has $N_{ss}=10^{12}$~cm$^{-2}$, the values of $\widetilde{W}_{s}$ on the \emph{p}-type and \emph{n}-type side are 946~meV and 501~meV and are indicated in Fig. \ref{SurfaceFermi}(b) with a red star and circle, respectively. Hence, the predicted $\Delta V_K$ is 445~mV. In the absence of surface states and fixed surface charge there would be zero band bending and $\widetilde{W}_{s}$ of the \emph{p}-type and \emph{n}-type side would lie on the black dashed lines as indicated by the black star and circle, respectively. In this case, the predicted $\Delta V_K$ would be 839~mV.
In a naive approach to the CPD interpretation, the band bending could be ignored, which corresponds to using the black dashed lines in Fig. \ref{SurfaceFermi}. Our calculations show for which range of parameters band bending is significant and, hence, where this naive approach fails. It clearly fails where $\widetilde{W}_{s}$ is close to $E_g/2$ and approximately independent of the doping concentration. This regime corresponds to the type of Fermi level pinning that was first suggested by Bardeen \citep{bardeen1947surface}, where $V_s^0$ can be approximated by the value of $V_s$ at which $\sigma_{ss}=0$ (instead of $\sigma_s=0$). For our symmetric model DOSSs this leads to $\widetilde{W}_{s}=E_g/2$.
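The pinning limit can be made concrete with a small numerical sketch. The code below is our simplified stand-in for the full band-bending calculation of section \ref{methods} (the function name and material constants are ours): it solves the charge-neutrality condition $\sigma_{ss}+\sigma_{sc}=0$ for an \emph{n}-type Si surface, using a depletion approximation for the space charge and a uniform DOSS with its charge-neutrality level at midgap.

```python
# Bisection solution of surface charge neutrality for n-type Si (sketch).
# Depletion approximation; uniform DOSS with charge-neutrality level at midgap.
import math

KT  = 0.02585           # thermal energy at 300 K [eV]
Q   = 1.602e-19         # elementary charge [C]
EPS = 11.7 * 8.854e-14  # static permittivity of Si [F/cm]
NI  = 1.0e10            # intrinsic carrier density [cm^-3]

def surface_fermi(n_d, d_ss):
    """Surface Fermi level phi_s (eV above midgap) and bulk value phi_b,
    solving q*d_ss*phi_s = sqrt(2*eps*q*n_d*(phi_b - phi_s)).
    d_ss is the uniform DOSS [cm^-2 eV^-1]; band bending is phi_b - phi_s."""
    phi_b = KT * math.log(n_d / NI)     # bulk Fermi level above midgap
    lo, hi = 0.0, phi_b                 # root is bracketed in (0, phi_b)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        occupied = d_ss * mid                                        # surface-state charge / (-q)
        depleted = math.sqrt(2 * EPS * Q * n_d * (phi_b - mid)) / Q  # ionized-donor charge / q
        if occupied < depleted:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi), phi_b

phi_s_hi, phi_b = surface_fermi(1e16, 1e13)  # high DOSS: pinned near midgap
phi_s_lo, _     = surface_fermi(1e16, 1e10)  # low DOSS: almost flat bands
```

For $d_{ss}=10^{13}$~cm$^{-2}$eV$^{-1}$ the surface Fermi level sits within a few tens of meV of midgap regardless of doping, which is the Bardeen-type pinning regime described above; for $d_{ss}=10^{10}$~cm$^{-2}$eV$^{-1}$ the band bending is negligible.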
Fig. \ref{SurfaceFermi_Qsf} shows $\widetilde{W}_{s}$ as a function of dopant concentration, calculated for Si with positive fixed surface charge densities between $\sigma_{sf}/e =10^{10}$~cm$^{-2}$ and $10^{14}$~cm$^{-2}$. Subfigure (a) corresponds to \emph{p}-type B-doped Si and (b) to \emph{n}-type P-doped Si. The black dashed lines correspond again to zero band bending, which is the case when there is no fixed surface charge. Clearly, $V_s$ is positive for these surface charges. Negative fixed surface charge densities lead to similar results, but with opposite sign of $V_s$ and \emph{p}- and \emph{n}-type reversed.
\begin{figure}[t]
\includegraphics[width = 220pt]{SurfaceFermi_distQsf}
\caption{\label{SurfaceFermi_Qsf}$\widetilde{W}_{s}$ as a function of dopant concentration, calculated for Si with $n_{ss}\equiv 0$ and $\sigma_{sf}/e=10^{10}$, $10^{11}$, $10^{12}$,$10^{13}$ and $10^{14}$~cm$^{-2}$ in blue, red, orange, purple and green (from top to bottom), respectively. The lines in (a) correspond to B-doped \emph{p}-type Si and the lines in (b) to P-doped \emph{n}-type Si. The black dashed lines correspond to zero $\sigma_{sf}$ and, hence, $V_s=0$.}
\end{figure}
From Fig. \ref{SurfaceFermi_Qsf} it is clear that a fixed surface charge density can have a dramatic influence on the work function and, therefore, also on $\Delta V_K$. To illustrate this, we consider again the same \emph{pn}-junction as above. For $\sigma_{sf}/e=10^{12}$~cm$^{-2}$, indicated with an orange circle and star, we obtain $\Delta V_K=944$~mV, while for $\sigma_{sf}/e=10^{13}$~cm$^{-2}$, indicated with a purple circle and star, we obtain $\Delta V_K=12$~mV, which is dramatically smaller. However, it should be noted that these calculations are less accurate for $\widetilde{W}_{s}<0$ and $\widetilde{W}_{s}>E_g$, because then the Boltzmann statistics assumed in Eqs. (\ref{EFermi}) and (\ref{Sigsc}) is less accurate. This is the case in the example with $\sigma_{sf}/e=10^{13}$~cm$^{-2}$, where $\widetilde{W}_{s}<0$ on both the \emph{p}- and the \emph{n}-side. Nevertheless, on both sides the Fermi level can be expected to be slightly above the conduction band edge at the surface, i.e. $\widetilde{W}_{s}$ is slightly below zero, and thus $\Delta V_K$ can be expected to be very small. Hence, the conclusion that $\Delta V_K$ is much smaller for $\sigma_{sf}/e=10^{13}$~cm$^{-2}$ than for $\sigma_{sf}/e=10^{12}$~cm$^{-2}$ still holds.
From these calculations it is clear that the CPD interpretation with our model DOSSs or a fixed surface charge density generally gives results that are significantly different from the BHS interpretation. The trend of decreasing $\Delta V_K$ for increasing dopant concentration found for the BHS interpretation is reversed in the case of the CPD interpretation with a DOSS. (In the case of a fixed surface charge density, the situation is more complicated.) As a result, it is not possible that both interpretations are correct and therefore it is important to settle this issue. In the next section we compare both interpretations with experiment.
\subsection{Testing the BHS interpretation against experiment}
We stress that the BHS interpretation does not depend on surface properties. As a result, when the dopant concentrations of a Si \emph{pn}-junction sample are given, there are no free parameters and the model directly predicts $\Delta V_K$. Although this is a very powerful feature, the observation of any deviation between predictions and experimental results would directly indicate that the interpretation is not correct. Therefore, to test the BHS interpretation, we now compare its predictions to experiments.
Table \ref{tab:table1} lists ten experimental values of $\Delta V_K$ obtained on Si \emph{pn}-junctions. The first column gives the corresponding references and the second column labels each case for future reference. The third and fourth columns state the dopant concentrations and dopant types of the two sides of the \emph{pn}-junction. The fifth column lists the experimental values of $\Delta V_K$ and the sixth column the values we find using the BHS interpretation. Cases (i) to (iv) come from the work of Baumgart et al. \cite{baumgart2009quantitative} and demonstrate that the BHS interpretation can predict results that agree with experiment. However, in cases (v) to (x) we find a significant discrepancy between the predictions and the experimental results. Apparently, the BHS interpretation is not valid for these cases.
\begin{table}
\caption{\label{tab:table1}Four KPFM experiments on Si \emph{pn}-junctions from Baumgart et al. \cite{baumgart2009quantitative} and six from other references. From left to right, the columns give the reference, a case label, the reported dopant concentrations, experimental $\Delta V_{K}$ and the predictions for $\Delta V_K$ by the BHS interpretation. Note that the BHS predictions for case (v) to (x) deviate significantly from the experimental values.}
\begin{ruledtabular}
\begin{tabular}{ccccdd}
\multicolumn{1}{c}{Ref.}&\multicolumn{1}{c}{case}&\multicolumn{1}{c}{$N_{A}$[cm$^{-3}$]}&\multicolumn{1}{c}{$N_{D}$[cm$^{-3}$]}&\multicolumn{1}{c}{$\Delta V_{K}^\text{exp}[V]$}&\multicolumn{1}{c}{$\Delta V_{K}^\text{BHS}$[V]}\\
\hline
$[$\onlinecite{baumgart2009quantitative}$]$&(i)&$2 \times 10^{16}$(B)&$2 \times 10^{17}$(P)&0.30&0.309 \\
$[$\onlinecite{baumgart2009quantitative}$]$&(ii)&$2 \times 10^{16}$(B)&$2 \times 10^{20}$(As)&0.20&0.194 \\
$[$\onlinecite{baumgart2009quantitative}$]$&(iii)&$4.7 \times 10^{16}$(B)&$1.4 \times 10^{15}$(P)&0.44&0.411 \\
$[$\onlinecite{baumgart2009quantitative}$]$&(iv)&$1 \times 10^{15}$(B)&$6.5 \times 10^{15}$(P)&0.47&0.469 \\
$[$\onlinecite{saraf2005localSS}$]$&(v)&$1.8 \times 10^{15}$(B)&$2.1 \times 10^{20}$(As)&0.69&0.254 \\
$[$\onlinecite{tsui2008two}$]$&(vi)&$5 \times 10^{14}$(B)&$2 \times 10^{20}$(As)\footnote{$N_{D}$ was extrapolated from Fig. 9 and the given $\Delta V_{K}$ in Ref. \cite{tsui2008two}.}&0.23&0.287 \\
$[$\onlinecite{tsui2008two}$]$&(vii)&$5 \times 10^{14}$(B)&$2 \times 10^{20}$(As)\footnotemark[1]&0.02\footnote{$\Delta V_{K}$ was estimated from Fig. 6(c). in Ref. \cite{tsui2008two}}&0.287 \\
$[$\onlinecite{volotsenko2010secondary}$]$&(viii)&$1 \times 10^{19}$(B)&$3.5 \times 10^{15}$(P)&0.07&0.284 \\
$[$\onlinecite{volotsenko2010secondary}$]$&(ix)&$5 \times 10^{18}$(B)&$3.5 \times 10^{15}$(P)&0.05&0.294 \\
$[$\onlinecite{volotsenko2010secondary}$]$&(x)&$1 \times 10^{18}$(B)&$3.5 \times 10^{15}$(P)&0.03&0.321 \\
\end{tabular}
\end{ruledtabular}
\end{table}
In addition to the erroneous predictions for cases (v) to (x), we identify a more general incorrect behavior of the BHS interpretation. Although the BHS authors mention the importance of surface states, the prediction for a certain dopant concentration does not depend on the amount of surface states or fixed surface charge. For a given \emph{pn}-junction the BHS model therefore predicts a single $\Delta V_{K}$, independent of surface treatment. However, different $\Delta V_{K}$ values for different surface treatments have been reported \cite{sugimura2002potential,tsui2008two}. Cases (vi) and (vii) constitute an example of this. These two cases correspond to samples that have identical \emph{pn}-junctions, as far as dopant concentrations are concerned, but in case (vi) the sample was dipped in HF and not thermally oxidized, while in case (vii) the sample was thermally oxidized and not dipped in HF. The rather different value for $\Delta V_{K}$ measured on these two samples cannot be accounted for by the BHS interpretation.
Finally, we present evidence that the BHS prediction that KPFM potentials measured on semiconductors should be independent of the probe work function, see Eqs. (\ref{Baumn}) and (\ref{Baump}), is incorrect. This claim is in apparent agreement with the nearly identical results \footnote{We note that in \cite[][Fig. 5]{baumgart2009quantitative} the curves for the \emph{p}- and \emph{n}- type probe practically overlap, but in \cite[][Fig. 6.15(a)]{baumgart2012quantitativethesis} they are shifted by about 150 mV.} that Baumgart et al. obtained with two highly doped \emph{p}-type and \emph{n}-type probes, which presumably have significantly different work functions. However, we experimentally investigated the probe work function dependence for a number of different probes on differently doped Si samples and on Au, and found a clear and reproducible dependence on the probe material, see Fig. \ref{KPFM_probe_dep_on_SC}. Each point in this figure is the mean of four 0.5~$\times$~2~$\mu$m raster scans of 32~lines with 128~pixels. It is generally accepted that in KPFM with a metallic sample and probe, $V_{K}$ is equal to the difference in the work functions of the sample and the probe \cite{nonnenmacher1991kelvin,melitz2011kelvin,sadewasser2012experimental}. Hence, the very similar probe dependence obtained on Au and Si strongly suggests that also on Si KPFM measurements are dependent on the probe work function. This is in accordance with the CPD interpretation and not with the BHS interpretation. We speculate that the nearly identical results obtained with highly doped \emph{p}-type and \emph{n}-type probes by Baumgart et al. were caused by a very high density of surface states at the tip apex, which, through Fermi level pinning, can lead to nearly identical work functions \cite{bardeen1947surface}.
\begin{figure}[t]
\includegraphics[width = 220pt]{KPFM_probe_dep_on_SC}
\caption{\label{KPFM_probe_dep_on_SC} $V_K$ measured with AM-KPFM on different samples with different probes, as indicated in the legend and explained in the text.}
\end{figure}
The incorrect predictions of the BHS interpretation described in this section lead us to conclude that it is not universally valid for the interpretation of KPFM data obtained on semiconductors. In the next section, we give support for the correctness of the CPD interpretation and argue that it should be used instead.
\subsection{Testing the CPD interpretation against experiment}
To obtain $\Delta V_K$ from the CPD interpretation, one needs to know the DOSS and the fixed surface charge density. Since these are generally unknown and difficult to measure, we fit the CPD interpretation with the five surface charge models described in section \ref{methods} to all experimental results listed in Table \ref{tab:table1}. Although this precludes a stringent test of the CPD interpretation, it turns out that the fits still give constraints, because the fit parameter of these models (the total surface state density $N_{ss}$ for the DOSS models and the fixed surface charge density $\sigma_{sf}$ for the fixed surface charge models) must take a reasonable value (to be discussed below). More importantly, our purpose here is not to subject the CPD interpretation to scrutiny, but rather to demonstrate that, in contrast to the BHS interpretation, the CPD interpretation \emph{is} capable of accommodating the experimental observations discussed in the previous section.
The values of the fit parameters that reproduce the experimental $\Delta V_K$ values listed in Table \ref{tab:table1} to 1 mV precision are presented in Fig. \ref{fitresults}. We also calculated the sensitivity of the fit parameters by fitting them to the experimental values $\pm$5~mV. It was found that the resulting range falls within the symbols plotted in Fig. \ref{fitresults}. Due to the complicated behavior of $\widetilde{W}_s$ in the fixed surface charge models, there can be multiple solutions $\sigma_{sf}$ that reproduce a certain value of $\Delta V_K$. However, we checked that within the range $10^{10}$~cm$^{-2}<\left\vert\sigma_{sf}/e\right\vert<10^{14}$~cm$^{-2}$ there is only one solution for each case.
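The fitting itself is elementary: over the bracketed range each model gives a monotone map from its parameter to $\Delta V_K$, so bisection to the 1~mV tolerance suffices. The following schematic uses a toy stand-in for the model response, since the real response requires the full band-bending calculation; the function and numbers are ours, for illustration only.

```python
# Bisection fit of a surface-model parameter to a measured Delta V_K (sketch).
def fit_parameter(model, target, log_lo, log_hi, tol=1e-3):
    """Bisect on log10(parameter) until |model(p) - target| < tol [V].
    Assumes model is monotone on the bracket, as in our calculations."""
    f_lo = model(10.0 ** log_lo) - target
    mid = 0.5 * (log_lo + log_hi)
    for _ in range(200):
        mid = 0.5 * (log_lo + log_hi)
        f_mid = model(10.0 ** mid) - target
        if abs(f_mid) < tol:
            break
        if (f_mid > 0) == (f_lo > 0):
            log_lo, f_lo = mid, f_mid
        else:
            log_hi = mid
    return 10.0 ** mid

def toy_model(n_ss):
    # Toy stand-in, NOT the real CPD model: Delta V_K decreasing with N_ss.
    return 0.8 / (1.0 + n_ss / 1e12)

n_ss_fit = fit_parameter(toy_model, 0.445, 10.0, 15.0)  # around 8e11 cm^-2
```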
\begin{figure}[t]
\includegraphics[width = 220pt]{fitresults}
\caption{\label{fitresults}Fit parameter values obtained by fitting the CPD interpretation to the experimental $\Delta V_K$ listed in Table~\ref{tab:table1} to within 1~mV. The legend indicates the corresponding surface charge model. The models and their labeling are described in section~\ref{CompMethods}.}
\end{figure}
We consider values of $N_{ss}$ and $\sigma_{sf}/e$ below ${\sim}10^{13}$~cm$^{-2}$ to be reasonable, see \cite{deal1967characteristics,flietner1988spectrum}. Higher values are increasingly unlikely with increasing density. Although the actual samples were possibly rough, surface state densities above the density of surface atoms (${\sim}10^{15}$~cm$^{-2}$) are very unlikely for Si surfaces that have been exposed to air. As a result, the main conclusion from Fig. \ref{fitresults} is that all cases can be fit with a reasonable value of the fit parameter by at least one model. This demonstrates that the CPD interpretation \emph{is} capable of accommodating the experimental observations discussed in the previous section. We will now use these fit results to draw some conclusions with respect to the validity of the five surface models for the individual cases.
Case (i) can be fit with all five models with reasonable fit parameter values (i.e. below ${\sim}10^{13}$~cm$^{-2}$), while for case (ii) only the fixed positive surface charge model leads to reasonable values; the other models lead to rather high densities. The experimental $\Delta V_K$ values of cases (i) and (ii) are obtained from a single KPFM scan on a single sample with multiple \emph{pn}-junctions. Hence, these junctions have undergone similar surface treatments, suggesting that their surface state density or fixed surface charge should be similar. Therefore, since only the fixed positive surface charge model gives nearly identical fit results, it is the most likely for both cases. In addition, in case (ii) all other models lead to very high parameter values.
As discussed in the previous section, cases (vi) and (vii) correspond to samples that have identical \emph{pn}-junctions, as far as dopant concentrations are concerned, but which have undergone different surface treatments. In contrast to the BHS interpretation, the CPD interpretation can accommodate different obtained $\Delta V_K$ by a different band bending $V_s$, resulting from a different $N_{ss}$ or $\sigma_{sf}$. Since all models, except the fixed positive surface charge, lead to rather high fit parameter values for these cases, the most likely explanation for the observed difference in $\Delta V_K$ is a higher positive $\sigma_{sf}$ in case (vii) than in case (vi).
Like cases (i) and (ii), cases (viii) to (x) correspond to a single Si sample with several \emph{pn}-junctions that went through a single preparation process. For these cases it is therefore again reasonable to assume that the surface state density or fixed surface charge should be similar. Interestingly, this corresponds well with the observation that for each model the fit parameter values for these three cases are very similar. However, the surface state models lead to rather high surface state densities and are therefore less likely. The negative fixed surface charge model is the only one that leads to densities significantly below ${\sim}10^{13}$~cm$^{-2}$, but the positive fixed surface charge model also appears to be reasonable.
In the analysis of the experimental results of cases (viii) to (x), Volotsenko et al. \cite{volotsenko2010secondary} assumed zero band bending, i.e. $V_s=0$, on the highest doped \emph{p}-type region, which is the \emph{p}-side in case (viii). However, in our calculations $V_s$ is larger than 400~mV in this region in all fitted CPD models, except in the negative fixed surface charge model, where it is only $-9$~mV. This suggests that either the assumption was not justified, or there was a fixed negative surface charge. Importantly, both conclusions would significantly influence the results of their further analysis.
Our calculations demonstrate that the CPD interpretation can accommodate all the results discussed in the previous section, even those for which the BHS interpretation gives predictions that do not agree with experiment. We have also shown that, contrary to the BHS interpretation, the CPD interpretation can accommodate different $\Delta V_K$ measured on identical \emph{pn}-junctions that have gone through different surface preparation treatments. And, clearly, the erroneous BHS prediction that $V_{K}$ is independent of the probe work function is absent in the CPD interpretation (see Eq. (\ref{CPDinterp}) and (\ref{CPDWS})). Hence, in agreement with the theoretical arguments given in section \ref{KPprinciples}, we posit that the CPD interpretation is valid for KPFM on semiconductors.
\section{Conclusions}
In this work two different interpretations for KPFM measurements on semiconductors are studied: the CPD interpretation and the BHS interpretation proposed by Baumgart et al. \cite{baumgart2009quantitative,baumgart2011kelvin,baumgart2012quantitativethesis}. By performing model calculations we show that they generally lead to very different results and, thus, that it is important to decide which one should be used.
We show that the BHS interpretation predicts Kelvin potential differences that are not in agreement with experimental observations on Si \emph{pn}-junctions reported in the literature. A more general incorrect prediction is that, for a specific doping profile, it predicts a KPFM potential difference across a \emph{pn}-junction that is independent of the surface treatment, while some experimental potential differences reported in the literature are very different for different surface treatments. Finally, it predicts that the absolute value of the measured potential is independent of the probe work function, while our own KPFM measurements on Si demonstrate a clear dependence on the probe material. We find that this dependence is very similar to the dependence obtained on Au, which suggests that on semiconductors the absolute value of the measured potential depends on the probe work function in the same way as on Au, as predicted by the CPD interpretation. In addition, we show that the CPD interpretation is able to accommodate all the discussed experimental results, including those for which the BHS interpretation gives erroneous predictions.
Based on these findings we posit that the BHS interpretation is not generally suitable for the analysis of KPFM on semiconductors and that the CPD interpretation should be used instead.
\section{Acknowledgements}
We thank Rick Elbersen for help in the sample preparation. This work is part of the research program of the Foundation for Fundamental Research on Matter (FOM), which is part of the Netherlands Organization for Scientific Research (NWO) and was carried out within the research program of BioSolar Cells, co-financed by the Dutch Ministry of Economic Affairs.
\FloatBarrier
\section{Introduction}
What is the structure of irreducible representations of C*-crossed-products
$A\rtimes _{\alpha }G$ of an action $\alpha $ of a finite group $G$ on a
unital C*-algebra $A$? Actions by finite groups provide interesting
examples, such as quantum spheres \cite{Bratteli91,Bratteli92} and actions
on the free group C*-algebras \cite{CLat06}, among many examples, and have
interesting general properties, as those found for instance in \cite{Rieffel80}.
Thus, understanding the irreducible representations of their
crossed-products is a natural inquiry, which we undertake in this paper.
\bigskip After we wrote this paper, we were made aware of \cite{Takesaki67}, where Takesaki provides a detailed description of the irreducible representations
of $A\rtimes G$ when $G$ is a locally compact group acting on a type I C*-algebra $A$ and the action is assumed
{\it smooth}, as defined in \cite[Section 6]{Takesaki67}. Our paper takes a different road, though with some important intersections we were not aware of originally.
Both our paper and \cite{Takesaki67} make use of the Mackey machinery and the structure of the commutant of the image of irreducible representations of the crossed-products. However, since our C*-algebras are not assumed to be type I, and in general the restriction of an irreducible representation of $A\rtimes G$ to $A$ does not
lead to an irreducible representation of $A$, we need a different approach than \cite{Takesaki67}. The main tool we use for this purpose is the impressive result proven in \cite{Hoegh-Krohn81} that for ergodic actions of compact groups on unital C*-algebras, spectral subspaces are finite dimensional. As a consequence, we can analyze irreducible representations of finite group crossed-products on arbitrary unital C*-algebras with no condition on the action of the group on the spectrum of the C*-algebra. More formally, we restrict the assumption on the group and relax it completely on the C*-algebra and the action compared to \cite[Theorem 7.2]{Takesaki67}.
\bigskip Our research on this topic was initiated in a paper of Choi and the
second author \cite{Latremoliere06} in the case where $G=\mathbb{Z}_{2},$
i.e. for the action of an order two automorphism $\sigma $ on a C*-algebra
$A$. In this situation, all irreducible representations of $A\rtimes
_{\sigma }\mathbb{Z}_{2}$ are either \emph{minimal}, in the sense that their
restriction to $A$ is already irreducible, or are regular, i.e. induced by a
single irreducible representation $\pi $ of $A$ such that $\pi $ and $\pi
\circ \sigma $ are not equivalent. In this paper, we shall answer the
question raised at the beginning of this introduction for any finite group
$G$. Thus, we suppose given any action $\alpha $ of $G$ on a unital
C*-algebra $A$. In this general situation, we show that for any irreducible
representation $\Pi $ of $A\rtimes _{\alpha }G$ on some Hilbert space
$\mathcal{H}$, the group $G$ acts ergodically on the commutant $\Pi
(A)^{\prime }$ of $\Pi (A)$, and thus, by a theorem of Hoegh-Krohn, Landstad
and Stormer \cite{Hoegh-Krohn81}, we prove that $\Pi (A)^{\prime }$ is
finite dimensional. We can thus deduce that there is a subgroup $H$ of $G$
such that $\Pi $ is constructed from an irreducible representation $\Psi $
of $A\rtimes _{\alpha }H$, with the additional property that the restriction
of $\Psi $ to $A$ is the direct sum of finitely many representations all
equivalent to an irreducible representation $\pi $ of $A$. In addition, the
group $H$ is exactly the group of elements $h$ in $G$ such that $\pi $ and
$\pi \circ \alpha _{h}$ are equivalent. The canonical unitaries of $A\rtimes
_{\alpha }G$ are mapped by $\Pi $ to generalized permutation operators for
some decomposition of $\mathcal{H}$. This main result is the matter of the
third section of this paper.
\bigskip When $G$ is a finite cyclic group, then we show that the
representation $\Psi $ is in fact minimal and obtain a full characterization
of irreducible representations of $A\rtimes _{\alpha }G$. This result can
not be extended to more generic finite groups, as we illustrate with some
examples. In addition, the fixed point C*-subalgebra of $A$ for $\alpha $
plays a very interesting role in the description of minimal representations
when $G$ is cyclic. We investigate the finite cyclic case in the fourth
section of this paper.
\bigskip We then apply our work to the case where $G$ is the permutation
group $\mathfrak{S}_{3}$ on three elements $\left\{ 1,2,3\right\} $. It is
possible again to fully describe all irreducible representations of any
crossed-product $A\rtimes _{\alpha }\mathfrak{S}_{3}$, and we illustrate all
the cases we can encounter by examples. This matter is discussed in the last
section of this paper.
\bigskip We start our paper with a section on generalities on
crossed-products of C*-algebras by finite groups, including a result on a
characterization of irreducible regular representations. This section also
allows us to set some of our notations. We now fix some other notations
which we will use recurrently in this paper. Given a Hilbert space $\mathcal{H}$
which we decompose as a direct sum $\mathcal{H}=\mathcal{H}_{1}\oplus
\ldots \oplus \mathcal{H}_{m}$ of Hilbert subspaces, we shall write an
operator $T$ on $\mathcal{H}$ as an $m\times m$ matrix whose $(i,j)$-entry
is the operator $p_{i}Tp_{j}$ where $p_{1},\ldots ,p_{m}$ are the orthogonal
projections from $\mathcal{H}$ onto respectively $\mathcal{H}_{1},\ldots
\mathcal{H}_{m}$. If $t_{1},\ldots ,t_{m}$ are operators on, respectively,
\mathcal{H}_{1},\ldots ,\mathcal{H}_{m}$, then the diagonal operator with
entries $t_{1},\ldots ,t_{m}$ will be denoted by $t_{1}\oplus \ldots \oplus
t_{m}$, i.e. $\oplus _{j=1}^{m}t_{j}\left( \xi _{1},\ldots ,\xi _{m}\right)
=\left( t_{1}\xi _{1},\ldots ,t_{m}\xi _{m}\right) $ for all $\left( \xi
_{1},\ldots ,\xi _{m}\right) \in \mathcal{H}_{1}\oplus \ldots \oplus
\mathcal{H}_{m}$. If $\pi _{1},\ldots ,\pi _{m}$ are representations of some
C*-algebra $A$ acting respectively on $\mathcal{H}_{1},\ldots ,\mathcal{H}
_{m}$, then the representation $\pi _{1}\oplus \ldots \oplus \pi _{m}$ of $A$
on $\mathcal{H}$ is defined by $\left( \pi _{1}\oplus \ldots \oplus \pi
_{m}\right) (a)=\pi _{1}(a)\oplus \ldots \oplus \pi _{m}(a)$ for all $a\in A
$. The identity operator of $\mathcal{H}$ will be denoted by $1_{\mathcal{H}}$
or simply $1$ when no confusion may occur. More generally, when an operator
$t$ on a Hilbert space $\mathcal{H}$ is a scalar multiple $\lambda 1_{
\mathcal{H}}$ ($\lambda \in \mathbb{C}$) of the identity of $\mathcal{H}$ we
shall simply denote it by $\lambda $ and omit the symbol $1_{\mathcal{H}}$
when appropriate.
We shall denote by $f_{|E_{0}}$ the restriction of any function
$f:E\longrightarrow F$ to a subset $E_{0}$ of $E$. The set $\mathbb{T}$ is
the unitary group of $\mathbb{C}$, i.e. the set of complex numbers of
modulus 1.
\section{Crossed-Product By Finite Groups}
\bigskip In this paper, we let $A$ be a unital C*-algebra and $\alpha $ an
action on $A$ of a finite group $G$ by *-automorphisms. A covariant
representation of $\left( A,\alpha ,G\right) $ on a unital C*-algebra $B$ is
a pair $\left( \pi ,V\right) $ where $\pi $ is a *-homomorphism from $A$
into $B$ and $V$ is a group homomorphism from $G$ into the unitary group of
$B$ such that for all $g\in G$ and $a\in A$ we have $V(g)\pi (a)V\left(
g^{-1}\right) =\pi \circ \alpha _{g}(a)$. The crossed-product C*-algebra
$A\rtimes _{\alpha }G$ is the universal C*-algebra among all the C*-algebras
generated by some covariant representation of $\left( A,\alpha ,G\right) $.
In particular, $A\rtimes _{\alpha }G$ is generated by a copy of $A$ and
unitaries $U^{g}$ for $g\in G$ such that $U^{gh}=U^{g}U^{h}$,
$U^{g^{-1}}=\left( U^{g}\right) ^{\ast }$ and $U^{g}aU^{g^{-1}}=\alpha
_{g}(a) $ for all $g,h\in G$ and $a\in A$. The construction of $A\rtimes
_{\alpha }G$ can be found in \cite{Pedersen79} and is due originally to \cite{Zeller-Meier68}.
\bigskip By universality, crossed-products by finite groups have a very
simple form which we now describe.
\begin{proposition}
\label{Presentation}Let $G$ be a finite group of order $n$ and write
$G=\left\{ g_{0},\ldots ,g_{n-1}\right\} $ with $g_{0}$ the neutral element
of $G$. Let $\sigma $ be the embedding of $G$ in the permutation group of
$\left\{ 0,\ldots ,n-1\right\} $ given by $\sigma _{g}(i)=j$ if and only if
$gg_{i}=g_{j}$ for all $i,j\in \left\{ 0,\ldots ,n-1\right\} $ and $g\in G$.
We now define $V_{g}$ to be the matrix in $M_{n}(A)$ whose $(i,j)$ entry is
given by $1_{A}$ if $\sigma _{g}(i)=j$ and $0$ otherwise, i.e. the tensor
product of the permutation matrix for $\sigma _{g}$ and $1_{A}$. Let
$\psi :A\longrightarrow M_{n}(A)$ be the *-monomorphism
\begin{equation*}
\psi :a\in A\longmapsto \left[
\begin{array}{cccc}
a & & & \\
& \alpha _{g_{1}}(a) & & \\
& & \ddots & \\
& & & \alpha _{g_{n-1}}(a)
\end{array}
\right] \text{.}
\end{equation*}
Then $A\rtimes _{\alpha }G$ is *-isomorphic to $\oplus _{g\in G}\psi
(A)V_{g} $. In particular:
\begin{equation*}
\dbigoplus_{g\in G}AU^{g}=A\rtimes _{\alpha }G\text{.}
\end{equation*}
\end{proposition}
\begin{proof}
The embedding of $G$ into permutations of $G$ is of course the standard
Cayley Theorem. We simply fix our notations more precisely so as to properly
define our embedding $\psi $. A change of indexing of $G$ simply corresponds
to a permutation of the elements in the diagonal of $\psi $ and we shall
work modulo this observation in this proof. For $b\in M_{n}(A)$ we denote by
$b_{i,i^{\prime }}$ its $(i,i^{\prime })$-entry for $i,i^{\prime }\in
\left\{ 1,\ldots ,n\right\} $.
An easy computation shows that
\begin{equation*}
V_{g}\psi (a)V_{g^{-1}}=\psi \left( \alpha _{g}(a)\right)
\end{equation*}
and $V_{g}V_{h}=V_{gh}$ for all $g,h\in G$ and $a\in A$. Therefore, by
universality of $A\rtimes _{\alpha }G$, there exists a (unique)
*-epimorphism $\eta :A\rtimes _{\alpha }G\twoheadrightarrow \oplus _{g\in
G}\psi (A)V_{g}$ such that $\eta _{|A}=\psi $ and $\eta (U^{g})=V_{g}$ for
$g\in G$. Our goal is to prove that $\eta $ is a *-isomorphism.
First, we show that $\oplus _{g\in G}AU^{g}$ is closed in $A\rtimes _{\alpha
}G$.
Let $\left( a_{m}^{0},\ldots ,a_{m}^{n-1}\right) _{m\in \mathbb{N}}$ be a sequence in
$A^{n}$ such that $\left( \sum_{j=0}^{n-1}a_{m}^{j}U^{g_{j}}\right) _{m\in
\mathbb{N}}$ is a convergent sequence in $A\rtimes _{\alpha }G$. Now
\begin{equation*}
\eta \left( \sum_{j=0}^{n-1}a_{m}^{j}U^{g_{j}}\right) =\sum_{j=0}^{n-1}\psi
(a_{m}^{j})V_{g_{j}}\text{.}
\end{equation*}
By definition, we have $\sigma _{g_{j}}(0)=j$ for all $j\in \left\{ 0,\ldots
,n-1\right\} $. Let $j\in \left\{ 0,\ldots ,n-1\right\} $. Then
$V_{1,j+1}^{g_{j}}=1_{A}$ and $V_{1,j+1}^{g_{i}}=0$ for all $i\in \left\{
0,\ldots ,n-1\right\} \backslash \left\{ j\right\} $. Hence, $\left( \eta
\left( \sum_{j=0}^{n-1}a_{m}^{j}U^{g_{j}}\right) \right) _{1,j+1}=a_{m}^{j}$
for all $m\in \mathbb{N}$. Since $\eta $ is continuous, and so is the
canonical projection $b\in M_{n}(A)\longrightarrow b_{1,j+1}\in A$, we
conclude that $\left( a_{m}^{j}\right) _{m\in \mathbb{N}}$ converges in $A$.
Let $a^{j}\in A$ be its limit. Then $\left( a_{m}^{0},\ldots
,a_{m}^{n-1}\right) _{m\in \mathbb{N}}$ converges in $A^{n}$ to $\left(
a^{0},\ldots ,a^{n-1}\right) $. Thus, $\left(
\sum_{j=0}^{n-1}a_{m}^{j}U^{g_{j}}\right) _{m\in \mathbb{N}}$ converges to
$\sum_{j=0}^{n-1}a^{j}U^{g_{j}}\in \oplus _{g\in G}AU^{g}$ and thus $\oplus
_{g\in G}AU^{g}$ is closed in $A\rtimes _{\alpha }G$. Since $\oplus _{g\in
G}AU^{g}$ is dense in $A\rtimes _{\alpha }G$ by construction, we conclude
that $A\rtimes _{\alpha }G=\oplus _{g\in G}AU^{g}$.
Now, we show that $\eta $ is injective. Let $c\in A\rtimes _{\alpha }G$ such
that $\eta (c)=0$. Then there exists $a_{0},\ldots ,a_{n-1}\in A$ such that
$c=\sum_{j=0}^{n-1}a_{j}U^{g_{j}}$. Let $j\in \left\{ 0,\ldots ,n-1\right\}
$. Then $\eta (c)=0$ implies that $\eta (c)_{j+1,1}=a_{j}=0$ for all $j\in
\left\{ 0,\ldots ,n-1\right\} $ and thus $c=0$. So $\eta $ is a
*-isomorphism and our proof is concluded.
\end{proof}
\bigskip As we will focus our attention on the crossed-products by finite
cyclic groups in the fourth section of this paper and Proposition (\ref
{Presentation}) is particularly explicit in this case, we include the
following corollary:
\begin{corollary}
Let $\sigma $ be an automorphism of order $n$ of a unital C*-algebra $A$.
Then $A\rtimes _{\sigma }\mathbb{Z}_{n}$ is *-isomorphic to
\begin{equation*}
\left\{
\begin{array}{c}
\left[
\begin{array}{ccccc}
a_{1} & a_{2} & a_{3} & \cdots & a_{n} \\
\sigma (a_{n}) & \sigma (a_{1}) & \sigma (a_{2}) & \sigma (a_{3}) & \\
\sigma ^{2}(a_{n-1}) & \sigma ^{2}(a_{n}) & \sigma ^{2}(a_{1}) & \ddots &
\ddots \\
\vdots & \ddots & \ddots & \ddots & \sigma ^{n-2}(a_{2}) \\
\sigma ^{n-1}(a_{2}) & \sigma ^{n-1}(a_{3}) & \cdots & \sigma ^{n-1}(a_{n})
& \sigma ^{n-1}(a_{1})
\end{array}
\right] \in M_{n}(A) \\
a_{1},\ldots ,a_{n}\in A
\end{array}
\right\}
\end{equation*}
where $U^{1}$ is mapped to $\left[
\begin{array}{cccc}
0 & 1 & & 0 \\
\vdots & 0 & \ddots & \\
0 & \vdots & \ddots & 1 \\
1 & 0 & \cdots & 0
\end{array}
\right] $ and $A$ is embedded diagonally as $a\in A\mapsto \left[
\begin{array}{cccc}
a & & & \\
& \sigma (a) & & \\
& & \ddots & \\
& & & \sigma ^{n-1}(a)
\end{array}
\right] $. In particular, $A\rtimes _{\sigma }\mathbb{Z}_{n}=A\oplus
AU^{1}\oplus \ldots \oplus AU^{n-1}$.
\end{corollary}
\begin{proof}
Simply write $\mathbb{Z}_{n}=\left\{ 0,\ldots ,n-1\right\} $ so that:
\begin{equation*}
V_{1}= \left[
\begin{array}{cccc}
0 & 1 & & 0 \\
\vdots & 0 & \ddots & \\
0 & \vdots & \ddots & 1 \\
1 & 0 & \cdots & 0
\end{array}
\right] \text{.}
\end{equation*}
The result then follows by a direct computation of $\oplus _{k=0}^{n-1}\psi (A)\left(
V_{1}\right) ^{k}$.
\end{proof}
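\bigskip To fix ideas, we spell out the smallest nontrivial case of this
corollary (a direct specialization, included only as an illustration). If
$\sigma $ is an automorphism of order $2$ of a unital C*-algebra $A$, then
\begin{equation*}
A\rtimes _{\sigma }\mathbb{Z}_{2}\cong \left\{ \left[
\begin{array}{cc}
a_{1} & a_{2} \\
\sigma (a_{2}) & \sigma (a_{1})
\end{array}
\right] :a_{1},a_{2}\in A\right\} \subseteq M_{2}(A)
\end{equation*}
where $U^{1}$ is mapped to $\left[
\begin{array}{cc}
0 & 1 \\
1 & 0
\end{array}
\right] $ and $a\in A$ is embedded as $\left[
\begin{array}{cc}
a & 0 \\
0 & \sigma (a)
\end{array}
\right] $.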
\bigskip We now turn our attention to the irreducible representations of
$A\rtimes _{\alpha }G$. Proposition (\ref{Presentation}) suggests that we
construct some representations from one representation of $A$ and the left
regular representation of $G$. Of particular interest is to decide when such
representations are irreducible. We will make repeated use of the following lemma
\cite[2.3.4 p. 30]{Dixmier}, whose proof is included for the reader's
convenience:
\begin{lemma}[Schur]
\label{Schur}Let $\pi _{1}$ and $\pi _{2}$ be two irreducible
representations of a C*-algebra $A$ acting respectively on Hilbert spaces
\mathcal{H}_{1}$ and $\mathcal{H}_{2}$. Then $\pi _{1}$ and $\pi _{2}$ are
unitarily equivalent if and only if there exists a nonzero operator $T:
\mathcal{H}_{2}\longrightarrow \mathcal{H}_{1}$ such that for all $a\in A$
we have $T\pi _{1}(a)=\pi _{2}(a)T$. Moreover, if there exists such a
nonzero intertwining operator, then it is unique up to a nonzero scalar
multiple.
\end{lemma}
\begin{proof}
If $\pi _{1}$ and $\pi _{2}$ are unitarily equivalent then there exists a
unitary $T$ such that for all $a\in A$ we have $T\pi _{1}(a)=\pi _{2}(a)T$.
In particular, $T\not=0$. Moreover, assume that there exists $T^{\prime }$
such that $T^{\prime }\pi _{1}=\pi _{2}T^{\prime }$. Then $T^{\ast
}T^{\prime }\pi _{1}=T^{\ast }\pi _{2}T^{\prime }=\pi _{1}T^{\ast }T^{\prime
}$. Hence, since $\pi _{1}$ is irreducible, there exists $\lambda \in \mathbb{
C}$ such that $T^{\prime }=\lambda T$.
Conversely, assume that there exists a nonzero operator $T:\mathcal{H}
_{2}\longrightarrow \mathcal{H}_{1}$ such that for all $a\in A$ we have
\begin{equation}
T\pi _{1}(a)=\pi _{2}(a)T\text{.} \label{UnitaryEq}
\end{equation}
Then for all $a\in A$
\begin{equation*}
T^{\ast }T\pi _{1}(a)=T^{\ast }\pi _{2}(a)T\text{.}
\end{equation*}
In particular $T^{\ast }T\pi _{1}(a^{\ast })=T^{\ast }\pi _{2}(a^{\ast })T$
for all $a\in A$. Applying the adjoint operation to this equality leads to
$\pi _{1}(a)T^{\ast }T=T^{\ast }\pi _{2}(a)T$ and thus
\begin{equation*}
T^{\ast }T\pi _{1}(a)=\pi _{1}(a)T^{\ast }T\text{.}
\end{equation*}
Since $\pi _{1}$ is irreducible, there exists $\lambda \in \mathbb{C}$ such
that $T^{\ast }T=\lambda 1_{\mathcal{H}_{2}}$. Since $T\not=0$ we have
$\lambda \not=0$. Up to replacing $T$ by $\frac{1}{\mu }T$ where $\mu
^{2}=\left\vert \lambda \right\vert $ and $\mu \in \mathbb{R}$ we thus get
$T^{\ast }T=1_{\mathcal{H}_{2}}$. Thus $T$ is an isometry. In particular,
$TT^{\ast }$ is a nonzero projection.
Similarly, we get $\pi _{2}(a)TT^{\ast }=TT^{\ast }\pi _{2}(a)$ and thus
$TT^{\ast }$ is scalar as well. Hence $TT^{\ast }$ is the identity (as it is
the only nonzero scalar projection) and thus $T$ is a unitary operator.
Hence by (\ref{UnitaryEq}), $\pi _{1}$ and $\pi _{2}$ are unitarily
equivalent.
\end{proof}
\bigskip Given a Hilbert space $\mathcal{H}$, the C*-algebra of all bounded
linear operators on $\mathcal{H}$ is denoted by $\mathcal{B}\left( \mathcal{H}\right) $.
\begin{theorem}
\label{RegularIrred}Let $G$ be a finite group with neutral element $e$ and
$\alpha $ an action of $G$ on a unital C*-algebra $A$. Let $\pi :A\rightarrow
\mathcal{B}\left( \mathcal{H}\right) $ be a representation of $A$ and let
$\lambda $ be the left regular representation of $G$ on $\ell _{2}(G)$. Let
$\delta _{g}$ be the function in $\ell _{2}(G)$ which is $1$ at $g\in G$ and
$0$ otherwise. Define $\Pi :A\rtimes _{\alpha }G\rightarrow \mathcal{B}\left(
\ell _{2}\left( G\right) \otimes \mathcal{H}\right) $ by
\begin{eqnarray*}
\Pi \left( a\right) \left( \delta _{g}\otimes \xi \right) &=&\delta
_{g}\otimes \pi \left( \alpha _{g^{-1}}\left( a\right) \right) \xi ,\text{
and} \\
\Pi \left( U^{g}\right) &=&\lambda \left( g\right) \otimes 1_{\mathcal{H}}\text{.}
\end{eqnarray*}
Then $\Pi $ is irreducible if and only if $\pi $ is irreducible and $\pi $
is not unitarily equivalent to $\pi \circ \alpha _{g}$ for any $g\in
G\setminus \left\{ e\right\} $.
\end{theorem}
\begin{proof}
Assume now that $\pi $ is irreducible and not unitarily equivalent to $\pi
\circ \alpha _{g}$ whenever $g\in G\setminus \left\{ e\right\} $. Suppose
that $\Pi $ is reducible. Then there exists a non-scalar operator $\Omega $
in the commutant of $\Pi \left( A\rtimes _{\alpha }G\right) $. Now, we
observe that the commutant of $\left\{ \lambda \left( g\right) \otimes 1_{
\mathcal{H}}:g\in G\right\} $ is $\rho \left( G\right) \otimes \mathcal{B}
\left( \mathcal{H}\right) $, where $\rho $ is the right regular
representation of $G$. Hence, for each $g\in G$ there exists an operator
$T_{g}$ on $\mathcal{H}$ such that $\Omega =\sum_{g\in G}\rho \left( g\right)
\otimes T_{g}$. For every $\xi \in \mathcal{H}$ and $a\in A$, we have
\begin{eqnarray*}
\left( \sum_{g\in G}\rho \left( g\right) \otimes T_{g}\right) \Pi \left(
a\right) \left( \delta _{e}\otimes \xi \right) &=&\left( \sum_{g\in G}\rho
\left( g\right) \otimes T_{g}\right) \left( \delta _{e}\otimes \pi \left(
a\right) \xi \right) \\
&=&\sum_{g\in G}\delta _{g}\otimes T_{g}\pi \left( a\right) \xi
\end{eqnarray*}
and
\begin{eqnarray*}
\Pi \left( a\right) \left( \sum_{g\in G}\rho \left( g\right) \otimes
T_{g}\right) \left( \delta _{e}\otimes \xi \right) &=&\Pi \left( a\right)
\left( \sum_{g\in G}\delta _{g}\otimes T_{g}\xi \right) \\
&=&\sum_{g\in G}\delta _{g}\otimes \pi \left( \alpha _{g^{-1}}(a)\right)
T_{g}\xi \text{.}
\end{eqnarray*}
Therefore, for every $g\in G$ and for all $a\in A$
\begin{equation}
\pi \left( \alpha _{g^{-1}}(a)\right) T_{g}=T_{g}\pi \left( a\right) \text{.}
\label{RegularIrred1}
\end{equation}
Since $\Omega $ is non-scalar, there exists $g_{0}\in G\setminus \left\{
e\right\} $ such that $T_{g_{0}}\not=0$. By Lemma (\ref{Schur}), Equality
(\ref{RegularIrred1}) for $g_{0}$ implies that $\pi $ and $\pi \circ \alpha
_{g_{0}}$, which are irreducible, are also unitarily equivalent since
$T_{g_{0}}\not=0$. This is a contradiction. So $\Pi $ is irreducible.
We now show the converse. First, note that if $\pi $ is reducible then there
exists a projection $p$ on $\mathcal{H}$ which is neither $0$ nor $1$ such
that $p$ commutes with the range of $\pi $. It is then immediate that
$1\otimes p$ commutes with the range of $\Pi $ and thus $\Pi $ is reducible.
Assume now that there exists $g\in G\setminus \left\{ e\right\} $ such that
$\pi $ and $\pi \circ \alpha _{g}$ are unitarily equivalent. Then there
exists a unitary $V$ such that for every $a\in A$:
\begin{equation*}
\pi \left( a\right) =V\pi \left( \alpha _{g}\left( a\right) \right) V^{\ast }
\text{.}
\end{equation*}
Let us show that $\rho \left( g\right) \otimes V$ is in the commutant of
$\Pi \left( A\rtimes _{\alpha }G\right) $. We only need to check that it
commutes with $\Pi \left( a\right) $ for $a\in A$:
\begin{eqnarray*}
\left( \rho \left( g\right) \otimes V\right) \Pi \left( a\right) \left(
\delta _{h}\otimes \xi \right) &=&\delta _{hg}\otimes V\pi \left( \alpha
_{h^{-1}}\left( a\right) \right) \xi \text{, and} \\
\Pi \left( a\right) \left( \rho \left( g\right) \otimes V\right) \left(
\delta _{h}\otimes \xi \right) &=&\delta _{hg}\otimes \pi \left( \alpha
_{g^{-1}}\alpha _{h^{-1}}\left( a\right) \right) V\xi \text{.}
\end{eqnarray*}
Since $V\pi \left( \alpha _{h^{-1}}\left( a\right) \right) =\pi \left(
\alpha _{g^{-1}}\alpha _{h^{-1}}\left( a\right) \right) V,$ we conclude that
the two quantities are equal, and that $\Pi $ is reducible.
Hence, if $\Pi $ is irreducible, then $\pi $ is irreducible and not
equivalent to $\pi \circ \alpha _{g}$ for any $g\in G\setminus \left\{
e\right\} $.
\end{proof}
\bigskip Theorem (\ref{RegularIrred}) provides us with a possible family of
irreducible representations of the crossed-product. The representations
given in Theorem (\ref{RegularIrred}) are called \emph{regular
representations} of $A\rtimes _{\alpha }G$, whether or not they are
irreducible.
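\bigskip For a simple instance of Theorem (\ref{RegularIrred}) (a standard
example, not used in the sequel), let $A=\mathbb{C}\oplus \mathbb{C}$ and let
$\alpha $ be the action of $\mathbb{Z}_{2}$ exchanging the two summands. If
$\pi :(a,b)\in A\mapsto a$ and $g$ is the nontrivial element of
$\mathbb{Z}_{2}$, then $\pi \circ \alpha _{g}:(a,b)\mapsto b$ is not
equivalent to $\pi $, so the associated regular representation of $A\rtimes
_{\alpha }\mathbb{Z}_{2}$ on $\ell _{2}(\mathbb{Z}_{2})\otimes \mathbb{C}$ is
irreducible; in fact, one checks that $\left( \mathbb{C}\oplus \mathbb{C}
\right) \rtimes _{\alpha }\mathbb{Z}_{2}$ is *-isomorphic to
$M_{2}(\mathbb{C})$.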
\bigskip However, we shall see that there are many irreducible
representations of $A\rtimes _{\alpha }G$ which are not regular. Easy
examples are provided by actions of finite cyclic groups by inner
automorphisms on full matrix algebras, where the identity representation is
in fact the only irreducible representation of the crossed-product. More
generally, the conditions that $\pi $ is irreducible and $\pi \circ \alpha
_{g}$ are not equivalent for $g\in G\backslash \{e\}$ are not necessary.
These observations will be placed into a more general context as we now
address the question raised at the start of this paper in the next section.
\section{Actions of Finite Groups}
This section is concerned with establishing results describing the
irreducible representations of crossed-products by finite groups. The main
tool for our study is to understand such actions from the perspective of the
spectrum of the C*-algebra. In this paper, the spectrum $\widehat{A}$ of a
C*-algebra $A$ is the set of unitary equivalence classes of irreducible
representations of $A$.
We start with two simple observations. Let $\alpha $ be an action of a finite
group $G$ on some unital C*-algebra $A$. Let $\pi _{1}$ and $\pi _{2}$ be
two equivalent irreducible representations of $A$, so that there exists a
unitary $u$ such that $u\pi _{1}u^{\ast }=\pi _{2}$. Then trivially $u\left(
\pi _{1}\circ \alpha _{g^{-1}}\right) u^{\ast }=\pi _{2}\circ \alpha
_{g^{-1}}$ for all $g\in G$. Moreover, $\pi _{1}\circ \alpha _{g^{-1}}$ has
the same range as $\pi _{1}$ and thus is irreducible as well. These two
remarks show that for all $g\in G$ there exists a map $\widehat{\alpha }_{g}$
on $\widehat{A}$ defined by mapping the class of an irreducible
representation $\pi $ of $A$ to the class of $\pi \circ \alpha _{g^{-1}}$.
Since $\left( \pi \circ \alpha _{g^{-1}}\right) \circ \alpha _{h^{-1}}=\pi
\circ \alpha _{\left( hg\right) ^{-1}}$, we have $\widehat{\alpha }_{h}\circ
\widehat{\alpha }_{g}=\widehat{\alpha }_{hg}$, and trivially $\widehat{
\alpha }_{e}$ is the identity on $\widehat{A}$. Thus $\widehat{\alpha }$ is
an action of $G$ on $\widehat{A}$.
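\bigskip When $A=C(X)$ for some compact Hausdorff space $X$ (a standard
commutative illustration), the spectrum $\widehat{A}$ identifies with $X$ via
the point evaluations $\pi _{x}:f\mapsto f(x)$. If $\alpha _{g}(f)=f\circ
h_{g}^{-1}$ for an action $g\mapsto h_{g}$ of $G$ on $X$ by homeomorphisms,
then $\pi _{x}\circ \alpha _{g^{-1}}=\pi _{h_{g}(x)}$, so under this
identification $\widehat{\alpha }_{g}$ is simply the map $x\mapsto h_{g}(x)$.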
Given a representation $\Pi $ of the crossed-product $A\rtimes _{\alpha }G$,
we define the support of $\Pi $ as the subset $\Sigma $ of $\widehat{A}$ of
all classes of irreducible representations of $A$ weakly contained in $\Pi
_{|A}$. Our main interest is in the supports of irreducible representations
of $A\rtimes _{\alpha }G$, which we now prove are always finite.
\subsection{Finiteness of irreducible supports}
Let $G$ be a finite group of neutral element $e$. Let $\widehat{G}$ be the
dual of $G$ i.e. the set of unitary equivalence classes of irreducible
representations of $G$. By \cite[15.4.1, p. 291]{Dixmier}, the cardinality of
$\widehat{G}$ is given by the number of conjugacy classes of $G$, so
$\widehat{G}$ is a finite set. Let $\rho \in \widehat{G}$ and $\lambda $ be
any irreducible representation of $G$ of class $\rho $ acting on a Hilbert
space $\mathcal{H}$. Then $\overline{\lambda }$ is the (irreducible)\
representation $g\in G\mapsto \lambda (g)$ acting on the conjugate Hilbert
space $\overline{\mathcal{H}}$ \cite[13.1.5, p. 250]{Dixmier}. We define
$\overline{\rho }$ as the class of representations unitarily equivalent to
$\overline{\lambda }$.
\bigskip Let $B$ be a unital C*-algebra and $\alpha $ an action of $G$ on $B$
by *-automorphisms. We now recall from \cite{Hoegh-Krohn81} the definition
and elementary properties of the spectral subspaces of $B$ for the action
$\alpha $ of $G$. Let $\rho \in \widehat{G}$. The character of $\rho $ is
denoted by $\chi _{\rho }$. All irreducible representations of $G$ whose
class in $\widehat{G}$ is $\rho $ act on vector spaces of the same dimension
which we denote by $\dim \rho $. We recall from \cite[15.3.3, p. 287]
{Dixmier} that for any $\rho ,\rho ^{\prime }\in \widehat{G}$ we have
\begin{equation*}
\chi _{\rho }(e)=\dim \rho
\end{equation*}
and
\begin{equation*}
\chi _{\rho }\ast \chi _{\rho ^{\prime }}(g)=\sum_{h\in G}\chi _{\rho
}(h)\chi _{\rho ^{\prime }}(gh^{-1})=\left\{
\begin{array}{ccc}
0 & \text{if} & \rho \not=\rho ^{\prime }\text{,} \\
\frac{\left\vert G\right\vert }{\dim \rho }\chi _{\rho }(g) & \text{if} & \rho =\rho
^{\prime }\text{.}
\end{array}
\right.
\end{equation*}
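\bigskip As a one-line check of the case $\rho =\rho ^{\prime }$ (included
only for the reader's convenience), take $G=\mathbb{Z}_{2}=\left\{
e,g\right\} $ and let $\rho $ be the sign character, so $\chi _{\rho }(e)=1$
and $\chi _{\rho }(g)=-1$. Then
\begin{equation*}
\chi _{\rho }\ast \chi _{\rho }(e)=\chi _{\rho }(e)\chi _{\rho }(e)+\chi
_{\rho }(g)\chi _{\rho }(g^{-1})=2=\frac{\left\vert G\right\vert }{\dim \rho
}\chi _{\rho }(e)\text{.}
\end{equation*}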
The spectral subspace of $B$ for $\alpha $ associated to $\rho \in \widehat{G
}$ is the space $B_{\rho }$ defined by
\begin{equation*}
B_{\rho }=\left\{ \frac{\dim \left( \rho \right) }{\left\vert G\right\vert }
\sum_{g\in G}\chi _{\overline{\rho }}(g)\alpha _{g}(b):b\in B\right\} \text{,}
\end{equation*}
i.e. the range of the Banach space operator on $B$ defined by:
\begin{equation}
P_{\rho }:b\in B\mapsto \frac{\dim \left( \rho \right) }{\left\vert
G\right\vert }\sum_{g\in G}\chi _{\overline{\rho }}(g)\alpha _{g}(b)\text{.}
\label{SpectralProjDef}
\end{equation}
In particular, the spectral subspace associated to the trivial
representation is the fixed point C*-subalgebra $B_{1}$ of $B$ for the
action $\alpha $ of $G$. Now, we have
\begin{eqnarray}
P_{\rho }\left( P_{\rho ^{\prime }}(a)\right) &=&\frac{\dim \left( \rho
\right) }{\left\vert G\right\vert }\frac{\dim \left( \rho ^{\prime }\right)
}{\left\vert G\right\vert }\sum_{g\in G}\sum_{h\in G}\chi _{\overline{\rho
}}(g)\chi _{\overline{\rho ^{\prime }}}(h)\alpha _{gh}(a) \notag \\
&=&\frac{\dim \left( \rho \right) }{\left\vert G\right\vert }\frac{\dim
\left( \rho ^{\prime }\right) }{\left\vert G\right\vert }\sum_{g\in G}\left(
\sum_{h\in G}\chi _{\overline{\rho }}(gh^{-1})\chi _{\overline{\rho ^{\prime
}}}(h)\right) \alpha _{g}(a) \notag \\
&=&\left\{
\begin{array}{ccc}
0 & \text{if} & \rho \not=\rho ^{\prime } \\
\frac{\dim \left( \rho \right) }{\left\vert G\right\vert }\sum_{g\in G}\chi
_{\overline{\rho }}(g)\alpha _{g}(a) & \text{if} & \rho =\rho ^{\prime }
\text{.}
\end{array}
\right. \label{SpectralOrthogonal}
\end{eqnarray}
Hence $P_{\rho }^{2}=P_{\rho }$ so $P_{\rho }$ is a Banach space projection
and $P_{\rho }P_{\rho ^{\prime }}=0$ for all $\rho ^{\prime }\not=\rho $ so
these projections are pairwise orthogonal.
Moreover, for any $g,h\in G$, from \cite[15.4.2 (2) p. 292]{Dixmier}
\begin{equation}
\sum_{\rho \in \widehat{G}}\chi _{\rho }(g)\overline{\chi _{\rho }(h)}
=\left\{
\begin{array}{cc}
\frac{\left\vert G\right\vert }{C(g)} & \text{if }g\text{ is conjugate to
}h\text{,} \\
0 & \text{otherwise.}
\end{array}
\right. \label{Sumation2}
\end{equation}
where for $g\in G$ the quantity $C(g)$ is the number of elements in $G$
conjugate to $g$. In particular, note that since $g\in G\backslash \left\{
e\right\} $ is not conjugate to $e$, we have by Equality (\ref{Sumation2})\
that
\begin{equation}
\sum_{\rho \in \widehat{G}}\chi _{\rho }(g)\dim \rho =\sum_{\rho \in
\widehat{G}}\chi _{\rho }(g)\overline{\chi _{\rho }(e)}=0\text{.}
\label{NullSum}
\end{equation}
Furthermore, because each irreducible representation $\rho $ of $G$ appears
with multiplicity $\dim \rho $ in the left regular representation of $G$,
one can show \cite[15.4.1, p. 291]{Dixmier} that
\begin{equation}
\sum_{\rho \in \widehat{G}}\left( \dim \rho \right) ^{2}=\left\vert
G\right\vert \text{.} \label{RegularMul}
\end{equation}
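\bigskip For instance (a routine verification), for $G=S_{3}$ the set
$\widehat{G}$ consists of the trivial character, the signature and the
$2$-dimensional standard representation, so $\sum_{\rho \in \widehat{G}
}\left( \dim \rho \right) ^{2}=1+1+4=6=\left\vert S_{3}\right\vert $;
likewise, if $g$ is a transposition then $\sum_{\rho \in \widehat{G}}\chi
_{\rho }(g)\dim \rho =1-1+0\cdot 2=0$, in accordance with Equality
(\ref{NullSum}).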
Hence for all $b\in B$
\begin{eqnarray}
\sum_{\rho \in \widehat{G}}P_{\rho }(b) &=&\sum_{\rho \in \widehat{G}}\frac
{\dim \left( \rho \right) }{\left\vert G\right\vert }\sum_{g\in G}\chi _{\rho
}(g)\alpha _{g}(b) \notag \\
&=&\frac{1}{\left\vert G\right\vert }\sum_{g\in G}\left( \sum_{\rho \in
\widehat{G}}\dim \left( \rho \right) \chi _{\rho }(g)\right) \alpha _{g}(b)
\notag \\
&=&\frac{1}{\left\vert G\right\vert }\left( \sum_{\rho \in \widehat{G}}\dim
\left( \rho \right) \chi _{\rho }(e)\right) \alpha _{e}(b)\text{ by Equality (\ref
{NullSum})} \notag \\
&=&\left( \frac{1}{\left\vert G\right\vert }\sum_{\rho \in \widehat{G}}\dim
(\rho )^{2}\right) b=b\text{ by Equality (\ref{RegularMul}).}
\label{ProjectionSummation}
\end{eqnarray}
Hence $\sum_{\rho \in \widehat{G}}P_{\rho }=\limfunc{Id}_{B}$. Thus by (\ref
{SpectralOrthogonal})\ and (\ref{ProjectionSummation}) we have
\begin{equation}
B=\dbigoplus\limits_{\rho \in \widehat{G}}B_{\rho }\text{.}
\label{SpectralSummation}
\end{equation}
We now establish that the restriction of any irreducible representation of a
crossed-product of some unital C*-algebra $A$ by $G$ is the direct sum of
finitely many irreducible representations of $A$.
\begin{theorem}
\label{FiniteRep}Let $G$ be a finite group and $A$ a unital C*-algebra. Let
$\alpha $ be an action of $G$ by *-automorphisms on $A$. Let $\Pi $ be an
irreducible representation of $A\rtimes _{\alpha }G$ on some Hilbert space
$\mathcal{H}$. We denote by $U^{g}$ the canonical unitary in $A\rtimes
_{\alpha }G$ corresponding to $g\in G$. Then:
\begin{itemize}
\item The action $g\mapsto \limfunc{Ad}\Pi (U^{g})$ on $\mathcal{B}\left(
\mathcal{H}\right) $ leaves the commutant $\Pi \left( A\right) ^{\prime }$
of $\Pi (A)$ invariant, and thus defines an action $\beta $ of $G$ on $\Pi
\left( A\right) ^{\prime }$,
\item The action $\beta $ is ergodic on $\Pi \left( A\right) ^{\prime }$,
\item The von Neumann algebra $\Pi \left( A\right) ^{\prime }$ is finite
dimensional,
\item The representation $\Pi _{|A}$ of $A$ is equivalent to the direct sum
of finitely many irreducible representations of $A$.
\end{itemize}
\end{theorem}
\begin{proof}
Let $\mathfrak{M}=\Pi (A)^{\prime }$. Denote $U_{\Pi }^{g}=\Pi (U^{g})$ for
all $g\in G$. Let $T\in \mathfrak{M}$. Let $a\in A$ and $g\in G$. Then
\begin{eqnarray*}
U_{\Pi }^{g}TU_{\Pi }^{g\ast }\Pi (a) &=&U_{\Pi }^{g}TU_{\Pi }^{g\ast }\Pi
(a)U_{\Pi }^{g}U_{\Pi }^{g\ast } \\
&=&U_{\Pi }^{g}T\Pi \left( \alpha _{g^{-1}}(a)\right) U_{\Pi }^{g\ast } \\
&=&U_{\Pi }^{g}\Pi \left( \alpha _{g^{-1}}(a)\right) TU_{\Pi }^{g\ast } \\
&=&U_{\Pi }^{g}U_{\Pi }^{g\ast }\Pi (a)U_{\Pi }^{g}TU_{\Pi }^{g\ast } \\
&=&\Pi (a)U_{\Pi }^{g}TU_{\Pi }^{g\ast }\text{.}
\end{eqnarray*}
Hence $U_{\Pi }^{g}TU_{\Pi }^{g\ast }\in \mathfrak{M}$ for all $g\in G$ and
$T\in \mathfrak{M}$. Define $\beta _{g}(T)=U_{\Pi }^{g}TU_{\Pi }^{g\ast }$
for all $g\in G$ and $T\in \mathfrak{M}$. Then $g\in G\mapsto \beta _{g}$ is
an action of $G$ on $\mathfrak{M}$.
Let now $T\in \mathfrak{M}$ such that $\beta _{g}(T)=T$ for all $g\in G$.
Then $T$ commutes with $U_{\Pi }^{g}$ for all $g\in G$. Moreover by
definition of $\mathfrak{M}$, the operator $T$ commutes with $\Pi (A)$.
Hence $T$ commutes with the range of $\Pi $, which is irreducible, so $T$ is
scalar. Hence $\beta $ is ergodic.
Let $\rho $ be an irreducible representation of $G$ (since $G$ is finite,
$\rho $ is finite dimensional). By \cite[Proposition 2.1]{Hoegh-Krohn81}, the
spectral subspace $\mathfrak{M}_{\rho }$ of $\mathfrak{M}$ for $\beta $
associated to $\rho $ is finite dimensional. Since $\mathfrak{M}=\oplus
_{\rho \in \widehat{G}}\mathfrak{M}_{\rho }$ by Equality (\ref
{SpectralSummation}) and since $\widehat{G}$ is finite by \cite[15.4.1, p.
291]{Dixmier} we conclude that $\mathfrak{M}$ is finite dimensional.
Denote $\Pi _{|A}$ by $\pi _{A}$. Let $p_{1},\ldots ,p_{k}$ be projections
in $\mathfrak{M}$, all minimal and such that $\sum_{i=1}^{k}p_{i}=1$. Let
$i\in \left\{ 1,\ldots ,k\right\} $. Then by definition of $\mathfrak{M}$,
the projection $p_{i}$ commutes with $\pi _{A}$. Hence $p_{i}\pi _{A}p_{i}$
is a representation of $A$. Let $q$ be a projection of $p_{i}\mathcal{H}$
such that $q$ commutes with $p_{i}\pi _{A}p_{i}$. Then $q\leq p_{i}$ and
$q\in \mathfrak{M}$, so $q\in \left\{ 0,p_{i}\right\} $ since $p_{i}$ is
minimal. Hence $p_{i}\pi _{A}p_{i}$ is an irreducible representation of $A$.
Therefore
\begin{eqnarray*}
\pi _{A} &=&\left( \sum_{i=1}^{k}p_{i}\right) \pi _{A}\text{ since }
\sum_{i=1}^{k}p_{i}=1\text{,} \\
&=&\sum_{i=1}^{k}p_{i}\pi _{A}p_{i}\text{ since }p_{i}=p_{i}^{2}\in
\mathfrak{M}\text{.}
\end{eqnarray*}
Hence $\pi _{A}$ is the direct sum of finitely many irreducible
representations of $A$.
\end{proof}
\subsection{Minimality of the irreducible supports}
\bigskip The following is our key observation which will drive the proofs in
this section:
\begin{Observation}
\label{Observation}Let $\Pi $ be an irreducible representation of $A\rtimes
_{\alpha }G$ and let $\pi _{A}=\Pi _{|A}$. Then for each $g\in G$ the
representations $\pi _{A}$ and $\pi _{A}\circ \alpha _{g}$ are unitarily
equivalent. Hence, the decompositions in direct sums of irreducible
representations of $A$ for $\pi _{A}$ and $\pi _{A}\circ \alpha _{g}$ are
the same.
\end{Observation}
\bigskip This observation is the basis of the next lemma, which is
instrumental in the proof of the theorem to follow.
\begin{lemma}
\label{LemmaCycle}Let $\alpha $ be an action of a finite group $G$ on a
unital C*-algebra $A$. Let $\Pi $ be an irreducible representation of
$A\rtimes _{\alpha }G$ and let $\pi _{A}$ be the restriction of $\Pi $ to $A$
. Then there exists a finite subset $\Sigma $ of the spectrum $\widehat{A}$
of $A$ such that all irreducible subrepresentations of $\pi _{A}$ are in
$\Sigma $. Moreover, all the elements of $\Sigma $ in a given orbit for
$\widehat{\alpha }$ have the same multiplicity in $\pi _{A}$.
\end{lemma}
\begin{proof}
Let $\Sigma $ be the subset of the spectrum $\widehat{A}$ of $A$ consisting
of all classes of irreducible representations weakly contained in $\pi _{A}
$. By Theorem (\ref{FiniteRep}), since $\Pi $ is irreducible, $\pi _{A}$ is a
finite direct sum of irreducible representations of $A$ so $\Sigma $ is
nonempty and finite.
Let $g\in G$. Now, by Observation (\ref{Observation}), since $\pi _{A}\circ
\alpha _{g^{-1}}$ is unitarily equivalent to $\pi _{A}$, its decomposition
in irreducible representations is the same as the one for $\pi _{A}$. Thus,
if $\eta \in \Sigma $ then $\widehat{\alpha }_{g}\left( \eta \right) \in
\Sigma $. Since $\widehat{\alpha }_{g}$ is a bijection on $\widehat{A}$ and
thus is injective, and since $\Sigma $ is finite, $\widehat{\alpha }_{g}$ is
a permutation of $\Sigma $.
Let $\Sigma _{\alpha }$ be the orbit of $\varphi \in \Sigma $ under
$\widehat{\alpha }$ and write $\pi _{A}=\pi _{1}\oplus \ldots \oplus \pi _{k}$
using Theorem (\ref{FiniteRep}), where $\pi _{1},\ldots ,\pi _{k}$ are
irreducible representations of $A$, with the class of $\pi _{1}$ being
$\varphi $. Now, for $g\in G$, let $n_{1,g},\ldots ,n_{m(g),g}$ be the
integers between $1$ and $k$ such that $\pi _{n_{i,g}}$ is equivalent to
$\pi _{1}\circ \alpha _{g}$. In particular, $m(g)$ is the multiplicity of
$\pi _{1}\circ \alpha _{g}$ in $\pi _{A}$. Then $\left( \pi _{n_{1,e}}\oplus
\ldots \oplus \pi _{n_{m(e),e}}\right) \circ \alpha _{g}$ must be the
subrepresentation $\pi _{n_{1,g}}\oplus \ldots \oplus \pi _{n_{m(g),g}}$ of
$\pi _{A}$. So $m(g)=m(e)$ by uniqueness of the decomposition. Hence for all
$g$ the multiplicity of $\widehat{\alpha }_{g}(\varphi )$ is the same as the
multiplicity of $\varphi $.
\end{proof}
\bigskip We now establish the main theorem of this paper, describing the
structure of irreducible representations of crossed-products by finite
groups. A \emph{unitary projective representation }of $G$ is a map $\Lambda $
from $G$ into the group of unitaries on some Hilbert space such that there
exists a complex valued 2-cocycle $\sigma $ on $G$ satisfying for all
$g,h\in G$ the identity $\Lambda _{gh}=\sigma (g,h)\Lambda _{g}\Lambda _{h}$.
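\bigskip A classical example of a unitary projective representation which is
not a genuine representation (recalled here only as an illustration) is given
on $\mathbb{C}^{2}$ by the Pauli matrices: for $G=\mathbb{Z}_{2}\times
\mathbb{Z}_{2}$, set
\begin{equation*}
\Lambda _{(a,b)}=\left[
\begin{array}{cc}
0 & 1 \\
1 & 0
\end{array}
\right] ^{a}\left[
\begin{array}{cc}
1 & 0 \\
0 & -1
\end{array}
\right] ^{b}\text{.}
\end{equation*}
A direct computation gives $\Lambda _{g}\Lambda _{h}=(-1)^{a^{\prime
}b}\Lambda _{g+h}$ for $g=(a,b)$ and $h=(a^{\prime },b^{\prime })$, so
$\Lambda $ is a unitary projective representation for the $2$-cocycle
$\sigma (g,h)=(-1)^{a^{\prime }b}$, and it is irreducible since the two
matrices generate $M_{2}(\mathbb{C})$.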
\begin{theorem}
\label{FiniteGroupConclusion}Let $G$ be a finite group and $\alpha $ be an
action of $G$ on a unital C*-algebra $A$ by *-automorphisms. Let $\Pi $ be
an irreducible representation of $A\rtimes _{\alpha }G$ on some Hilbert
space $\mathcal{H}$. Then there exists a subgroup $H$ of $G$ and a
representation $\pi $ of $A$ on some Hilbert space $\mathcal{J}$ such that,
up to conjugating $\Pi $ by some fixed unitary, and denoting the index of $H$
in $G$ by $m=\left[ G:H\right] $ we have the following:
For any subset $\left\{ g_{1},\ldots ,g_{m}\right\} $ of $G$ such that
$g_{1}$ is the neutral element of $G$ and $Hg_{j}\cap Hg_{i}=\emptyset $ for
$i\not=j$ while $G=\cup _{j=1}^{m}Hg_{j}$, we have:
\begin{enumerate}
\item The representations $\pi \circ \alpha _{g_{i}}$ and $\pi \circ \alpha
_{g_{j}}$ are disjoint for $i,j\in \left\{ 1,\ldots ,m\right\} $ and
$i\not=j$ (so in particular, they are not unitarily equivalent),
\item There exists an irreducible representation $\pi _{1}$ of $A$ on a
Hilbert subspace $\mathcal{H}_{1}$ of $\mathcal{J}$ and some integer $r$
such that $\mathcal{J}=\mathbb{C}^{r}\otimes \mathcal{H}_{1}$ and $\pi =1_{
\mathbb{C}^{r}}\otimes \pi _{1}$,
\item For any $h\in H$ there exists a unitary $V^{h}$ on $\mathcal{H}_{1}$
such that $V^{h}\pi _{1}\left( V^{h}\right) ^{\ast }=\pi _{1}\circ \alpha
_{h}$, and $h\in H\mapsto V^{h}$ is a unitary projective representation of
$H$ on $\mathcal{H}_{1}$,
\item We have $\mathcal{H}=\mathcal{J}_{g_{1}}\oplus \ldots \oplus \mathcal{J
}_{g_{m}}$ where for all $i=1,\ldots ,m$ the space $\mathcal{J}_{g_{i}}$ is
an isometric copy of $\mathcal{J}$,
\item In this decomposition of $\mathcal{H}$ we have for all $a\in A$ that
\begin{equation}
\Pi (a)=\left[
\begin{array}{cccc}
\pi (a) & & & \\
& \pi \circ \alpha _{g_{2}}(a) & & \\
& & \ddots & \\
& & & \pi \circ \alpha _{g_{m}}(a)
\end{array}
\right] \label{PIA}
\end{equation}
\item In this same decomposition, for every $g\in G$ there exists a permutation
$\sigma ^{g}$ of $\left\{ 1,\ldots ,m\right\} $ and unitaries $U_{i}^{g}:
\mathcal{J}_{g_{i}}\longrightarrow \mathcal{J}_{g_{\sigma ^{g}(i)}}$ such
that
\begin{equation*}
\Pi (U^{g})=\left[ U_{j}^{g}\delta _{i}^{\sigma ^{g}(j)}\right]
_{i,j=1,\ldots ,m}
\end{equation*}
where $\delta $ is the Kronecker symbol:
\begin{equation}
\delta _{a}^{b}=\left\{
\begin{array}{cc}
1 & \text{if }a=b\text{,} \\
0 & \text{otherwise.}
\end{array}
\right. \label{Kronecker}
\end{equation}
Moreover,
\begin{equation*}
H=\left\{ g\in G:\sigma ^{g}(1)=1\right\} \text{.}
\end{equation*}
\item The representation $\Psi $ of $A\rtimes _{\alpha }H$ on $\mathcal{J}$
defined by $\Psi (a)=\pi (a)$ for all $a\in A$ and $\Psi (U^{h})=U_{1}^{h}$
for $h\in H$ is irreducible. Moreover, there exists an irreducible unitary
projective representation $\Lambda $ of $G$ on $\mathbb{C}^{r}$ such that on
$\mathcal{J}=\mathbb{C}^{r}\otimes \mathcal{H}_{1}$, while $\Psi (a)=1_
{\mathbb{C}^{r}}\otimes \pi _{1}(a)$, we also have $\Psi
(U^{h})=U_{1}^{h}=\Lambda _{h}\otimes V^{h}$.
\end{enumerate}
\end{theorem}
\begin{proof}
Let $\Pi $ be an irreducible representation of $A\rtimes _{\alpha }G$.
Denote $\Pi _{|A}$ by $\pi _{A}$. By Theorem (\ref{FiniteRep}), there exists
a nonzero natural integer $k$ and irreducible representations $\pi
_{1},\ldots ,\pi _{k}$ of $A$, acting respectively on Hilbert spaces
$\mathcal{H}_{1},\ldots ,\mathcal{H}_{k}$ such that up to a unitary
conjugation of $\Pi $, we have $\mathcal{H}=\mathcal{H}_{1}\oplus \ldots
\oplus \mathcal{H}_{k}$ and in this decomposition, for all $a\in A$
\begin{equation*}
\pi _{A}(a)=\left[
\begin{array}{cccc}
\pi _{1}(a) & & & \\
& \pi _{2}(a) & & \\
& & \ddots & \\
& & & \pi _{k}(a)
\end{array}
\right] \text{.}
\end{equation*}
At this stage, the indexing of the irreducible subrepresentations of $\pi
_{A}$ is only defined up to a permutation of $\left\{ 1,\ldots ,k\right\} $.
We start our proof by making a careful choice of such an indexing. To do so,
first choose $\pi _{1}$ arbitrarily among all irreducible subrepresentations
of $\pi _{A}$. Our next step is to set
\begin{equation*}
H=\left\{ g\in G:\pi _{1}\circ \alpha _{g}\text{ is equivalent to }\pi
_{1}\right\} \text{.}
\end{equation*}
We now show that $H$ is a subgroup of $G$. For all $h\in H$ we denote by
$V^{h}$ the (unique, up to a scalar multiple) unitary such that $V^{h}\pi
_{1}\left( V^{h}\right) ^{\ast }=\pi _{1}\circ \alpha _{h}$. Then if $g,h\in
H$ we have
\begin{eqnarray*}
\pi _{1}\circ \alpha _{gh^{-1}} &=&\left( \pi _{1}\circ \alpha _{g}\right)
\circ \alpha _{h^{-1}}=V^{g}\left( \pi _{1}\circ \alpha _{h^{-1}}\right)
\left( V^{g}\right) ^{\ast } \\
&=&V^{g}\left( V^{h}\right) ^{\ast }\pi _{1}V^{h}\left( V^{g}\right) ^{\ast }
\end{eqnarray*}
so $\pi _{1}\circ \alpha _{gh^{-1}}$ is unitarily equivalent to $\pi _{1}$
and thus $gh^{-1}\in H$ by definition. Since $H$ trivially contains the
neutral element of $G$, we conclude that $H$ is a subgroup of $G$.
Let $\left\{ g_{1},\ldots ,g_{m}\right\} $ be a family of right coset
representatives such that $g_{1}$ is the neutral element of $G$ \cite[p. 10]
{Robinson82}, i.e. such that for $i\not=j$ we have $Hg_{j}\cap
Hg_{i}=\emptyset $ while $G=\cup _{j=1}^{m}Hg_{j}$. In
particular, for $i\in \left\{ 2,\ldots ,m\right\} $ we have $g_{i}\notin H$
and by definition of $H$ this implies that $\pi _{1}\circ \alpha _{g_{i}}$
is not equivalent to $\pi _{1}$.
Then let $\pi _{2},\ldots ,\pi _{n_{1}}$ be all the representations
equivalent to $\pi _{1}$. We then choose $\pi _{n_{1}+1}$ to be a
subrepresentation of $\pi _{A}$ equivalent to $\pi _{1}\circ \alpha
_{g_{2}}$. Again, we let $\pi _{n_{1}+1},\ldots ,\pi _{n_{2}}$ be all the
representations which are equivalent to $\pi _{n_{1}+1}$. More generally, we
let $\pi _{n_{j}+1},\ldots ,\pi _{n_{j+1}}$ be all the subrepresentations of
$\pi _{A}$ equivalent to $\pi _{1}\circ \alpha _{g_{j+1}}$ for all $j\in
\{1,\ldots ,m-1\}$. All remaining irreducible subrepresentations of $\pi
_{A}$, if any, are indexed from $n_{m}+1$ to $k$ and we denote their direct
sum by $\Lambda $.
Note that $\Lambda $ contains no subrepresentation equivalent to any
representation $\pi _{1}\circ \alpha _{g}$ for any $g\in G$. Indeed, if
$g\in G$ then there exists $h\in H$ and a unique $j\in \left\{ 1,\ldots
,m\right\} $ such that $g=hg_{j}$. Thus
\begin{equation*}
\pi _{1}\circ \alpha _{g}=\pi _{1}\circ \alpha _{h}\circ \alpha
_{g_{j}}=V^{h}\left( \pi _{1}\circ \alpha _{g_{j}}\right) \left( V^{h}\right) ^{\ast }
\end{equation*}
and thus $\pi _{1}\circ \alpha _{g}$ is equivalent to one of the
representations $\pi _{1},\ldots ,\pi _{n_{m}}$ by construction. Also note
that if $\pi _{1}\circ \alpha _{g_{i}}$ is equivalent to $\pi _{1}\circ
\alpha _{g_{j}}$ then $g_{i}g_{j}^{-1}\in H$ which contradicts our choice of
$\left\{ g_{1},\ldots ,g_{m}\right\} $ unless $i=j$. Hence, for $i\not=j$
the representations $\pi _{1}\circ \alpha _{g_{j}}$ and $\pi _{1}\circ
\alpha _{g_{i}}$ are not equivalent.
Now, if $\varphi _{1},\ldots ,\varphi _{m}$ represent the
unitary-equivalence classes of the representations $\pi _{1},\pi _{1}\circ
\alpha _{g_{1}},\ldots ,\pi _{1}\circ \alpha _{g_{m}}$ then $\Sigma
_{1}=\left\{ \varphi _{1},\ldots ,\varphi _{m}\right\} $ is the orbit of
$\varphi _{1}$ for the action $\widehat{\alpha }$ of $G$ on $\widehat{A}$.
Therefore, there exists $r\geq 1$ such that $n_{j}=jr$ for all $j=1,\ldots
,m$ by Lemma (\ref{LemmaCycle}), i.e. all the representations $\pi _{1}\circ
\alpha _{g_{i}}$ ($i=1,\ldots ,m$) have multiplicity $r$ in $\pi _{A}$.
Thus, up to unitary equivalence of $\Pi $, and writing $\mathcal{H}=\oplus
_{i=1}^{k}\mathcal{H}_{i}$, in this decomposition we have
\begin{eqnarray}
\pi _{A} &=&\underset{\text{each equivalent to }\pi _{1}}{\underbrace{\pi
_{1}\oplus \ldots \oplus \pi _{r}}}\oplus \underset{\text{each equivalent to
}\pi _{1}\circ \alpha _{g_{1}}}{\underbrace{\pi _{r+1}\oplus \ldots \oplus
\pi _{2r}}}\oplus \cdots \label{DecompositionLambda} \\
&&\ldots \oplus \pi _{mr}\oplus \underset{\Lambda }{\underbrace{\pi
_{mr+1}\oplus \ldots \oplus \pi _{k}}} \notag \\
&=&\underset{\text{disjoint from }\Lambda \text{.}}{\underbrace{\pi
_{1}\oplus \ldots \oplus \pi _{n_{m}}}}\oplus \Lambda \text{.} \notag
\end{eqnarray}
Let $g\in G$. Still in the decomposition $\mathcal{H}=\mathcal{H}_{1}\oplus
\ldots \oplus \mathcal{H}_{k}$ with our choice of indexing, let us write
\begin{equation*}
\Pi \left( U^{g}\right) =U_{\Pi }^{g}=\left[
\begin{array}{cccc}
a_{11}^{g} & a_{12}^{g} & \cdots & a_{1k}^{g} \\
a_{21}^{g} & a_{22}^{g} & \cdots & a_{2k}^{g} \\
\vdots & & \ddots & \vdots \\
a_{k1}^{g} & a_{k2}^{g} & \cdots & a_{kk}^{g}
\end{array}
\right]
\end{equation*}
for some operators $a_{ij}^{g}$ from $\mathcal{H}_{j}$ to $\mathcal{H}_{i}$
with $i,j=1,\ldots ,k$.
Since $U_{\Pi }^{g}\pi _{A}(a)=\pi _{A}(\alpha _{g}(a))U_{\Pi }^{g}$, we can
write
\begin{eqnarray}
&&\left[
\begin{array}{cccc}
a_{11}^{g}\pi _{1} & a_{12}^{g}\pi _{2} & \cdots & a_{1k}^{g}\pi _{k} \\
a_{21}^{g}\pi _{1} & a_{22}^{g}\pi _{2} & \cdots & a_{2k}^{g}\pi _{k} \\
\vdots & & \ddots & \vdots \\
a_{k1}^{g}\pi _{1} & a_{k2}^{g}\pi _{2} & \cdots & a_{kk}^{g}\pi _{k}
\end{array}
\right] \label{MainEquality} \\
&=&\left[
\begin{array}{cccc}
\left( \pi _{1}\circ \alpha _{g}\right) a_{11}^{g} & \left( \pi _{1}\circ
\alpha _{g}\right) a_{12}^{g} & \cdots & \left( \pi _{1}\circ \alpha
_{g}\right) a_{1k}^{g} \\
\left( \pi _{2}\circ \alpha _{g}\right) a_{21}^{g} & \left( \pi _{2}\circ
\alpha _{g}\right) a_{22}^{g} & \cdots & \left( \pi _{2}\circ \alpha
_{g}\right) a_{2k}^{g} \\
\vdots & & \ddots & \vdots \\
\left( \pi _{k}\circ \alpha _{g}\right) a_{k1}^{g} & \left( \pi _{k}\circ
\alpha _{g}\right) a_{k2}^{g} & \cdots & \left( \pi _{k}\circ \alpha
_{g}\right) a_{kk}^{g}
\end{array}
\right] \text{.} \notag
\end{eqnarray}
As a consequence of Equality (\ref{MainEquality}), we observe that for all
$i,j\in \left\{ 1,\ldots ,k\right\} $ we have
\begin{equation}
a_{ij}^{g}\pi _{j}=\left( \pi _{i}\circ \alpha _{g}\right) a_{ij}^{g}\text{.}
\label{SchurEquality}
\end{equation}
First, let $i>mr$. Then the equivalence class of $\pi _{i}$ is not in the
orbit $\Sigma _{1}$ of $\varphi _{1}$ for $\widehat{\alpha }$ by
construction. Hence $\pi _{i}\circ \alpha _{g}$ is not unitarily equivalent
to $\pi _{1}\circ \alpha _{\gamma }$ for any $\gamma \in G$. On the other
hand, let $j\leq mr$. The representation $\pi _{j}$ is equivalent to $\pi
_{1}\circ \alpha _{g_{l}}$ for some $l\in \left\{ 1,\ldots ,m\right\} $ by
our choice of indexing. Therefore, $\pi _{i}\circ \alpha _{g}$ and $\pi _{j}$
are not unitarily equivalent, yet they both are irreducible representations
of $A$. Hence by Lemma (\ref{Schur}) applied to Equality (\ref{SchurEquality}) we conclude that $a_{ij}^{g}=0$. Similarly, $\pi _{i}$ and $\pi _{j}\circ
\alpha _{g}$ are not equivalent so $a_{ji}^{g}=0$ as well.
Hence
\begin{equation*}
U_{\Pi }^{g}=\left[
\begin{array}{cccccc}
a_{11}^{g} & \cdots & a_{1mr}^{g} & 0 & \cdots & 0 \\
\vdots & & \vdots & \vdots & & \vdots \\
a_{mr1}^{g} & \cdots & a_{mr,mr}^{g} & 0 & \cdots & 0 \\
0 & \cdots & 0 & a_{mr+1,mr+1}^{g} & \cdots & a_{mr+1,k}^{g} \\
\vdots & & \vdots & \vdots & \ddots & \vdots \\
0 & \cdots & 0 & a_{k,mr+1}^{g} & \cdots & a_{kk}^{g}
\end{array}
\right] \text{.}
\end{equation*}
If we assume that $n_{m}=mr<k$ then for all $g\in G$ the unitary $U_{\Pi
}^{g}$ commutes with the nontrivial projection $\underset{mr\text{ times}}{\underbrace{0\oplus \ldots \oplus 0}}\oplus \underset{k-mr\text{ times}}{\underbrace{1\oplus \ldots \oplus 1}}$ of $\mathcal{H}$, and so does $\pi
_{A}$. Yet $\Pi $ is irreducible, so this is not possible and thus $n_{m}=k$. Thus $\Sigma =\Sigma _{1}$ is an orbit of a single $\varphi \in \widehat{A}$ for $\widehat{\alpha }$ and there is no $\Lambda $ left in Equality (\ref{DecompositionLambda}). In particular, the cardinal of $\Sigma _{1}$ is $m$.
Since by construction $\pi _{jr+z}$ is unitarily equivalent to $\pi
_{1}\circ \alpha _{g_{z}}$ for all $j=0,\ldots ,m-1$ and $z=1,\ldots ,r$,
there exists a unitary $\omega _{jr+z}$ from $\mathcal{H}_{1}$ onto
$\mathcal{H}_{jr+z}$ such that $\omega _{jr+z}\left( \pi _{1}\circ \alpha
_{g_{z}}\right) \omega _{jr+z}^{\ast }=\pi _{jr+z}$ (note that we can choose
$\omega _{1}=1$). We define on $\mathcal{H}=\mathcal{H}_{1}\oplus \ldots
\oplus \mathcal{H}_{k}$ the diagonal unitary
\begin{equation*}
\Omega =\left[
\begin{array}{ccc}
\omega _{1}^{\ast } & & \\
& \ddots & \\
& & \omega _{k}^{\ast }
\end{array}
\right] \text{.}
\end{equation*}
Denote by $\limfunc{Ad}\Omega $ the *-automorphism on the C*-algebra of
bounded operators on $\mathcal{H}$ defined by $T\mapsto \Omega T\Omega
^{\ast }$. Then up to replacing $\Pi $ by $\limfunc{Ad}\Omega \circ \Pi $,
we can assume that $\pi _{jr+z}=\pi _{1}\circ \alpha _{g_{z}}$ for all $j\in
\left\{ 1,\ldots ,m\right\} $ and $z\in \left\{ 1,\ldots ,r\right\} $. Given
an irreducible representation $\eta $ of $A$ and any nonzero natural integer
$z$ we shall denote by $z\cdot \eta $ the representation $\underset{z\text{
times}}{\underbrace{\eta \oplus \ldots \oplus \eta }}$. Thus, if we set $\pi
=r\cdot \pi _{1}$ we see that $\pi _{A}$ can be written as in Equality (\ref{PIA}) with $\pi \circ \alpha _{g_{i}}$ disjoint from $\pi \circ \alpha
_{g_{j}}$ for $i,j\in \left\{ 1,\ldots ,m\right\} $ and $i\not=j$.
Let again $g\in G$. We now use the same type of argument to show that
U_{\Pi }^{g}$ is a \textquotedblleft unitary-permutation
shift\textquotedblright . Let $j\in \left\{ 0,\ldots ,m-1\right\} $. Let
$q\in \left\{ 1,\ldots ,m\right\} $ such that $g_{j}g\in Hg_{q}$ --- by our
choice of $g_{1},\ldots ,g_{m}$ there is a unique such $q$. Let $i\in
\left\{ 0,\ldots ,m-1\right\} \backslash \left\{ q\right\} $ and $z,h\in
\left\{ 1,\ldots ,r\right\} $. By construction, the representation $\left(
r\cdot \pi _{rj+h}\right) \circ \alpha _{g}$ is unitarily equivalent to
$r\cdot \pi _{rq+h}$ and disjoint from $r\cdot \pi _{ri+z}$. Yet by Equality
(\ref{SchurEquality}) we have again that
\begin{equation*}
a_{ri+z,rj+h}^{g}\pi _{ri+z}=\left( \pi _{rj+h}\circ \alpha _{g}\right)
a_{ri+z,rj+h}^{g}\text{.}
\end{equation*}
Thus $a_{ri+z,rj+h}^{g}=0$ by Lemma (\ref{Schur}) since $\pi _{ri+z}$ and
$\pi _{rj+h}\circ \alpha _{g}$ are not equivalent yet irreducible. Thus, if
for all $z\in \left\{ 0,\ldots ,m-1\right\} $ we define the Hilbert subspace
$\mathcal{J}_{z}=\mathcal{H}_{zr+1}\oplus \ldots \oplus \mathcal{H}_{(z+1)r}$
of $\mathcal{H}$ then we conclude that $U_{\Pi }^{g}\left( \mathcal{J}_{j}\right) \subseteq \mathcal{J}_{q}$ and $\mathcal{H}=\mathcal{J}_{0}\oplus \ldots \oplus \mathcal{J}_{m-1}$. Moreover, by uniqueness of $q$
we also obtain that
\begin{equation}
U_{\Pi }^{g^{-1}}\left( \mathcal{J}_{q}\right) \subseteq \mathcal{J}_{j}
\label{InverseSubset}
\end{equation}
and thus $U_{\Pi }^{g}\left( \mathcal{J}_{j}\right) =\mathcal{J}_{q}$.
Define $\sigma ^{g}(j)=q$. Then $\sigma ^{g}$ is a surjection of the finite
set $\left\{ 1,\ldots ,m\right\} $ by (\ref{InverseSubset}), so $\sigma ^{g}$
is a permutation. If $\delta $ is defined as in Equality (\ref{Kronecker})
then, setting $U_{j}^{g}=U_{\Pi |\mathcal{J}_{j}}^{g}$, we have
\begin{equation}
\left( r\cdot \left( \pi _{j}\circ \alpha _{g}\right) \right)
U_{i}^{g}=U_{i}^{g}\left( r\cdot \pi _{q}\right)
\end{equation}
and
\begin{equation*}
U_{\Pi }^{g}=\left[ U_{j}^{g}\delta _{i}^{\sigma ^{g}(j)}\right] _{i,j}
\end{equation*}
for all $i=1,\ldots ,m$.
Since $U_{\Pi }$ is unitary, so are the operators $U_{1},\ldots ,U_{m}$. In
particular, $\mathcal{J}_{j}$ and $\mathcal{J}_{0}$ are isometric Hilbert
spaces for all $j=0,\ldots ,m-1$. Note that $\left( r\cdot \pi _{1}\right)
\circ \alpha _{g_{i}}$ acts on $\mathcal{J}_{i-1}$ for $i=1,\ldots ,m$ by
construction. We now denote $r\cdot \pi _{1}$ by $\pi $ and $\mathcal{J}=\mathcal{J}_{0}$.
Now, by construction $\sigma ^{g}(1)=1$ if and only if there exists an
operator $W$ on $\mathcal{J}_{1}\oplus \ldots \oplus \mathcal{J}_{m-1}$ such
that $U_{\Pi }^{g}=U_{1}^{g}\oplus W$, which is equivalent to $U_{1}^{g}\pi
_{1}=\left( \pi _{1}\circ \alpha _{g}\right) U_{1}^{g}$. By construction,
this is possible if and only if $g\in H$.
Let now $h\in H$. Hence $U_{1}^{h}\pi U_{1}^{h^{-1}}=\pi \circ \alpha _{h}$.
If we set $\Psi (a)=\pi (a)$ and $\Psi (U^{h})=U_{1}^{h}$, we thus define a
representation of $A\rtimes _{\alpha }H$ on $\mathcal{J}_{0}$. Let $b\in
A\rtimes _{\alpha }G$. Then there exists a family $\left( a_{g}\right) _{g\in G}$ in $A$ such that
$b=\sum_{g\in G}a_{g}U^{g}$. Hence $\Pi (b)=\sum_{g\in G}\pi
_{A}(a_{g})U_{\Pi }^{g}$. Let $Q$ be the projection of $\mathcal{H}$ on
$\mathcal{J}_{0}$. Then
\begin{eqnarray}
Q\Pi (b)Q &=&\sum_{g\in G}\pi (a_{g})QU_{\Pi }^{g}Q \notag \\
&=&\sum_{h\in H}\pi (a_{h})U_{1}^{h}=\Psi \left( \sum_{h\in
H}a_{h}U^{h}\right) \text{.} \label{PsiRange}
\end{eqnarray}
Since $\Pi $ is irreducible, the range of $\Pi $ is $\limfunc{WOT}$ dense by
the double commutant theorem. Hence, since the multiplication on the left
and right by a fixed operator is $\limfunc{WOT}$ continuous, we conclude
that $Q\Pi Q$ is $\limfunc{WOT}$ dense in $\mathcal{B}\left( Q\mathcal{H}\right) $. Therefore, by Equality (\ref{PsiRange}), we conclude by the
double commutant Theorem again that $\Psi $ is an irreducible representation
of $A\rtimes _{\alpha }H$.
Last, note that since $\pi _{1}$ is irreducible, if $h,g\in H$ then since
\begin{equation*}
V^{h^{-1}}V^{g^{-1}}V^{gh}\pi _{1}=\pi _{1}V^{h^{-1}}V^{g^{-1}}V^{gh}
\end{equation*}
there exists $\lambda _{g,h}\in \mathbb{T}$ such that $V^{gh}=\lambda
_{g,h}V^{g}V^{h}$. Hence $g\in H\mapsto V^{g}$ is a projective
representation of $H$ on $\mathcal{H}_{1}$. Note that although the unitaries
$V^{h}$ are only defined up to a scalar, there is no apparent reason why one
could choose $\lambda $ to be the trivial cocycle unless the second
cohomology group of $H$ is trivial. We now note that $\mathcal{J}_{0}=\mathcal{J}=\mathbb{C}^{r}\otimes \mathcal{H}_{1}$ by construction. Now, for
all $h\in H$ we set $\upsilon _{h}=1_{\mathbb{C}^{r}}\otimes V^{h}$. Again,
$\upsilon _{h}$ is a projective representation of $H$. Moreover, for $h\in H$
\begin{equation*}
U_{1}^{h}\upsilon _{h}^{\ast }\pi =\pi U_{1}^{h}\upsilon _{h}^{\ast }\text{.}
\end{equation*}
Since $\pi =r\cdot \pi _{1}$, Lemma (\ref{Schur}) implies that there exists a
unitary $\Lambda _{h}\in M_{r}\left( \mathbb{C}\right) $ such that
$U_{1}^{h}\upsilon _{h}^{\ast }=\Lambda _{h}\otimes 1_{\mathcal{H}_{1}}$.
Hence $U_{1}^{h}=\Lambda _{h}\otimes V^{h}$. Now, for $h,g\in H$ we have
$U_{1}^{h}U_{1}^{g}=U_{1}^{hg}$ which implies that
\begin{equation*}
\left( \Lambda _{h}\otimes V^{h}\right) \left( \Lambda _{g}\otimes
V^{g}\right) =\Lambda _{h}\Lambda _{g}\otimes V^{h}V^{g}=\Lambda
_{hg}\otimes V^{hg}\text{.}
\end{equation*}
Hence $h\mapsto \Lambda _{h}$ is a unitary projective representation of $H$
on $\mathbb{C}^{r}$ with cocycle $\overline{\lambda }$. Moreover, if $T$
commutes with the range of $\Lambda $ then $T\otimes 1$ commutes with the
range of $\Psi $, which contradicts the irreducibility of $\Psi $. Hence
$\Lambda $ is irreducible. This completes the description of the
representation $\Psi $.
\end{proof}
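The 2-cocycle $\lambda $ appearing at the end of the proof is easy to witness numerically. The sketch below is our own illustration, not part of the text: the Pauli matrices and the group $\mathbb{Z}_{2}\times \mathbb{Z}_{2}$ are hypothetical choices. The map $(a,b)\mapsto X^{a}Z^{b}$ satisfies $W_{g}W_{h}=\lambda _{g,h}W_{gh}$ with a nontrivial $\mathbb{T}$-valued cocycle, so it is a projective, but not a genuine, representation.

```python
import numpy as np
from itertools import product

# Pauli matrices satisfy ZX = -XZ, so (a, b) -> X^a Z^b is only a
# *projective* representation of Z_2 x Z_2 (illustrative choice).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def W(g):
    a, b = g
    return np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)

def cocycle(g, h):
    """Return lambda_{g,h} in T with W(g) W(h) = lambda_{g,h} W(gh)."""
    gh = ((g[0] + h[0]) % 2, (g[1] + h[1]) % 2)
    lhs, rhs = W(g) @ W(h), W(gh)
    # lhs and rhs are unitary multiples of each other, so the ratio is a scalar
    ratio = np.trace(lhs @ rhs.conj().T) / 2
    assert np.allclose(lhs, ratio * rhs)
    return ratio

group = list(product(range(2), repeat=2))
lambdas = {(g, h): cocycle(g, h) for g in group for h in group}

# The cocycle cannot be removed by rescaling the W(g): it is
# antisymmetric on the pair (0,1), (1,0), exactly as ZX = -XZ dictates.
print(lambdas[((0, 1), (1, 0))].real)  # -1.0
print(lambdas[((1, 0), (0, 1))].real)  # 1.0
```

This reflects the fact that the second cohomology group of $\mathbb{Z}_{2}\times \mathbb{Z}_{2}$ with values in $\mathbb{T}$ is nontrivial, in line with the remark at the end of the proof.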
\bigskip For generic groups, the representation $\Psi $ of Theorem (\ref{FiniteGroupConclusion}) may not be minimal, i.e. its restriction to $A$ may
be reducible. The simplest way to see this is by considering a finite group $G$
admitting an irreducible representation $\Lambda $ on $\mathbb{C}^{n}$ for some $n\in
\mathbb{N}$. Then $\Lambda $ extends to an irreducible representation $\Pi $
of the crossed-product $\mathbb{C}\rtimes _{\alpha }G$ where $\alpha $ is
the trivial action. Thus, $\Pi _{|\mathbb{C}}$, which decomposes into a
direct sum of irreducible representations of $\mathbb{C}$, must in fact be
the direct sum of $n$ copies of the (unique) identity representation of
$\mathbb{C}$. Note that in this case $\Pi =\Psi $ using the notations of
Theorem (\ref{FiniteGroupConclusion}). Thus, for any $n\in \mathbb{N}$ one
can find an example where $\Psi $ is irreducible yet not minimal. This
situation will be illustrated with a much less trivial example in Example
(\ref{ExPermutation1}) where $G$ will be the permutation group on three elements.
However, the representation $\Psi $ must be minimal when the group $G$ is
chosen to be a finite cyclic group. We develop the theory for these groups
in the next section.
Because the representation $\Psi $ of Theorem (\ref{FiniteGroupConclusion})
is of central interest in the decomposition of $\Pi $, we establish the
following criterion for irreducibility for such representations. Note that
the next theorem also describes the situation where the commutant of $\Pi $
is a factor.
\begin{theorem}
\label{Homogeneous}Let $H$ be a discrete group. Let $\Psi $ be a
representation of $A\rtimes _{\alpha }H$ on a Hilbert space $\mathcal{H}$
and assume there exists an irreducible representation $\pi _{1}$ of $A$ on a
Hilbert space $\mathcal{H}_{1}$ such that $\mathcal{H}=\mathbb{C}^{r}\otimes
\mathcal{H}_{1}$, $\pi _{1}\circ \alpha _{h}$ is equivalent to $\pi _{1}$
for all $h\in H$ and $\Psi (a)=1_{\mathbb{C}^{r}}\otimes \pi _{1}(a)$ for all
$a\in A$. Then there exist two unitary projective representations $\Lambda $
and $V$ of $H$ on $\mathbb{C}^{r}$ and $\mathcal{H}_{1}$ respectively such
that $\Psi (U^{h})=\Lambda _{h}\otimes V^{h}$. Moreover, the following are
equivalent:
\begin{enumerate}
\item $\Psi $ is irreducible,
\item The representation $\Lambda $ is irreducible.
\end{enumerate}
\end{theorem}
\begin{proof}
By assumption, for $h\in H$ there exists a unitary $V^{h}$ such that
$V^{h}\pi _{1}\left( V^{h}\right) ^{\ast }=\pi _{1}\circ \alpha _{h}$ and
this unitary is unique up to a constant by Lemma (\ref{Schur}). From the
last section of the proof of Theorem (\ref{FiniteGroupConclusion}), we get
that $h\in H\mapsto V^{h}$ is a projective representation of $H$ for some
2-cocycle $\lambda $ and, since $\pi _{1}$ is irreducible, there exists a
projective representation $\Lambda $ of $H$ on $\mathbb{C}^{r}$ such that
$\Psi (U^{h})=\Lambda _{h}\otimes V^{h}$, and moreover if $\Psi $ is
irreducible then so is $\Lambda $.
Suppose now $\Lambda $ is irreducible. Let $T\in \left[ \Psi \left( A\rtimes
_{\alpha }H\right) \right] ^{\prime }.$ Since $T$ commutes with $\Psi \left(
A\right) =1_{\mathbb{C}^{r}}\otimes \pi _{1}\left( A\right) $, it follows
that $T=D\otimes 1_{\mathcal{H}_{1}}$ for some $D\in M_{r}\left( \mathbb{C}\right) $. Now $T$ commutes with $\Psi (U^{h})$ for all $h\in H$, so $D$
commutes with $\Lambda _{g}$ for all $g\in H$. Hence $D$ is scalar and $\Psi
$ is irreducible.
\end{proof}
We also note that the group $H$ is not a priori a normal subgroup of $G$. It
is easy to check that the following two assertions are equivalent:
\begin{enumerate}
\item $H$ is a normal subgroup of $G$,
\item For all $g\in G$, the unitary $U_{\Pi }^{g}$ is block-diagonal in the
decomposition $\mathcal{H}=\mathcal{J}_{0}\oplus \ldots \oplus \mathcal{J}_{m-1}$ if and only if $g\in H$.
\end{enumerate}
In particular, when $G$ is Abelian then for $g\in G$ we have $\sigma
^{g}(1)=1$ if and only if $\sigma ^{g}=\limfunc{Id}$.
\bigskip We conclude by observing that the representation $\Psi $ involves
projective representations of $H$. We now offer an example to illustrate
this situation and to show that this phenomenon occurs even when $G$ is
Abelian. We shall see in the next section that finite cyclic groups have the
remarkable property that such unitary projective representations do not
occur.
\begin{example}
Let $p,q$ be two relatively prime integers. Let $\lambda =\exp \left( 2i\pi
\frac{p}{q}\right) $. Denote by $\mathbb{U}_{q}$ the group of $q^{\text{th}}$
roots of unity in $\mathbb{C}$. Let $\alpha $ be the action of $\mathbb{Z}_{q}$ on $C\left( \mathbb{U}_{q}\right) $ defined by $\alpha
_{1}(f)(z)=f\left( \lambda z\right) $. Then the crossed-product $A=C(\mathbb{U}_{q})\rtimes _{\alpha }\mathbb{Z}_{q}$ is isomorphic to $M_{q}(\mathbb{C})$. The canonical unitary is identified under this isomorphism with
\begin{equation*}
U=\left[
\begin{array}{cccc}
0 & 1 & 0 & 0 \\
\vdots & & \ddots & \vdots \\
0 & & & 1 \\
1 & 0 & \cdots & 0
\end{array}
\right]
\end{equation*}
while the generator $z\in \mathbb{U}_{q}\mapsto z$ of $C\left( \mathbb{U}_{q}\right) $ is mapped to
\begin{equation*}
V=\left[
\begin{array}{cccc}
1 & & & \\
& \lambda & & \\
& & \ddots & \\
& & & \lambda ^{q-1}
\end{array}
\right] \text{.}
\end{equation*}
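The relations between these two matrices are easy to verify numerically. The following sketch is ours and not part of the text: the values $q=3$, $p=1$ are an arbitrary illustration, and the cyclic shift is written in the convention for which $VU=\lambda UV$ holds (transposing $U$ exchanges $\lambda $ and $\overline{\lambda }$).

```python
import numpy as np

q, p = 3, 1                     # hypothetical values; any coprime p, q work
lam = np.exp(2j * np.pi * p / q)

# Cyclic shift U and diagonal V = diag(1, lam, ..., lam^{q-1}); the shift
# convention is chosen so that VU = lam * UV.
U = np.roll(np.eye(q, dtype=complex), 1, axis=0)
V = np.diag(lam ** np.arange(q))

assert np.allclose(np.linalg.matrix_power(U, q), np.eye(q))
assert np.allclose(np.linalg.matrix_power(V, q), np.eye(q))
assert np.allclose(V @ U, lam * U @ V)

# The q^2 words U^a V^b are linearly independent, so U and V generate
# all of M_q(C), matching the isomorphism A = C(U_q) x Z_q ~ M_q(C).
words = [np.linalg.matrix_power(U, a) @ np.linalg.matrix_power(V, b)
         for a in range(q) for b in range(q)]
rank = np.linalg.matrix_rank(np.array([w.ravel() for w in words]))
print(rank)  # 9 = q^2
```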
The dual action $\gamma $ of the Abelian group $G=\mathbb{Z}_{q}\times
\mathbb{Z}_{q}$ on $C(\mathbb{U}_{q})\rtimes _{\alpha }\mathbb{Z}_{q}$ can
thus be described by
\begin{equation*}
\gamma _{z,z^{\prime }}\left( U\right) =\exp \left( 2i\pi \frac{pz}{q}\right) U\text{ and }\gamma _{z,z^{\prime }}(V)=\exp \left( 2i\pi \frac{pz^{\prime }}{q}\right) V
\end{equation*}
for all $\left( z,z^{\prime }\right) \in G$. Now, for $(z,z^{\prime })\in G$
we set $\Lambda (z,z^{\prime })=\lambda ^{zz^{\prime }}U^{z}V^{z^{\prime }}$. Note that $G$ is generated by $\zeta =\left( 1,0\right) $ and $\xi =\left(
0,1\right) $ and $\Lambda (\zeta )=U$ while $\Lambda (\xi )=V$. Since
$VU=\lambda UV$, the map $\Lambda $ is a unitary projective representation of
$G$ on $\mathbb{C}^{q}$ associated to the group cohomology class of $\exp
\left( i\pi \sigma \right) $ where $\sigma $ is defined by
\begin{equation*}
\sigma (\left( z,z^{\prime }\right) ,\left( y,y^{\prime }\right) )=\frac{p}{q}\left( zy^{\prime }-z^{\prime }y\right) \text{.}
\end{equation*}
Moreover, the dual action is of course an inner action, and more precisely
\begin{eqnarray*}
\gamma _{z,z^{\prime }}\left( a\right) &=&U^{z}V^{z^{\prime
}}aV^{-z^{\prime }}U^{-z} \\
&=&\Lambda (z,z^{\prime })a\Lambda \left( z,z^{\prime }\right) ^{\ast }\text{.}
\end{eqnarray*}
We let $\Lambda ^{\prime }:(z,z^{\prime })\in G\mapsto \Lambda (z^{\prime },z)$. Then an easy computation shows that $\Lambda ^{\prime }$ is a unitary
projective representation of $G$ on $\mathbb{C}^{q}$ associated to the
cocycle defined by $\exp \left( -i\pi \sigma \right) $, and $\Lambda
^{\prime }(\zeta )=V$ and $\Lambda ^{\prime }(\xi )=U$.
Let $B=A\rtimes _{\gamma }G$. Let us define the representation $\Psi $ of $B$
on $\mathbb{C}^{q}\otimes \mathbb{C}^{q}$ by
\begin{eqnarray*}
\Psi (a) &=&1\otimes a\text{,} \\
\Psi (U^{\zeta }) &=&V\otimes U\text{,} \\
\Psi (U^{\xi }) &=&U\otimes V\text{.}
\end{eqnarray*}
First, we observe that
\begin{eqnarray*}
\Psi (U^{\zeta })\Psi (a)\Psi (U^{\zeta })^{\ast } &=&1\otimes UaU^{\ast
}=\Psi (\gamma _{\zeta }(a))\text{,} \\
\Psi (U^{\xi })\Psi (a)\Psi (U^{\xi })^{\ast } &=&1\otimes VaV^{\ast }=\Psi
(\gamma _{\xi }(a))\text{.}
\end{eqnarray*}
Therefore $\Psi $ indeed defines a representation of $B$. Moreover:
\begin{equation*}
\Psi (U^{g})=\Lambda ^{\prime }(g)\otimes \Lambda (g)
\end{equation*}
for $g\in G$. Since $\Lambda ^{\prime }$ is irreducible, $\Psi $ is
irreducible as well by Theorem (\ref{Homogeneous}). Last, the commutant of
$\Psi (A)$ is $M_{q}\left( \mathbb{C}\right) $, i.e. the restriction of $\Psi
$ to $A$ is the direct sum of $q$ copies of the identity representation of $A
$.
\end{example}
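The two claims at the end of the example can also be confirmed by a direct computation. The check below is ours, with the hypothetical specialization $q=2$, $p=1$ (so $\lambda =-1$): the *-algebra generated by $\Psi (A)=1\otimes M_{2}(\mathbb{C})$, $\Psi (U^{\zeta })=V\otimes U$ and $\Psi (U^{\xi })=U\otimes V$ spans all of $M_{4}(\mathbb{C})$, hence $\Psi $ is irreducible, while $\Psi (A)$ alone spans only a $4$-dimensional subalgebra.

```python
import numpy as np
from itertools import product

# q = 2, p = 1: U is the flip and V = diag(1, -1), so VU = -UV.
U = np.array([[0, 1], [1, 0]], dtype=complex)
V = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def units():
    """The four matrix units of M_2(C)."""
    for i, j in product(range(2), range(2)):
        E = np.zeros((2, 2), dtype=complex)
        E[i, j] = 1
        yield E

psi_A = [np.kron(I2, E) for E in units()]        # Psi(A) = 1 (x) M_2(C)
gens = psi_A + [np.kron(V, U), np.kron(U, V)]    # add Psi(U^zeta), Psi(U^xi)

def generated_span_dim(gens, length=3):
    """Dimension of the span of products of at most `length` generators."""
    out = [np.eye(4, dtype=complex)]
    for _ in range(length):
        out = out + [g @ h for g in gens for h in out]
    return np.linalg.matrix_rank(np.array([m.ravel() for m in out]))

dim_full = generated_span_dim(gens)
dim_A = np.linalg.matrix_rank(np.array([m.ravel() for m in psi_A]))
print(dim_full, dim_A)  # 16 4
```

Since the generated algebra is all of $M_{4}(\mathbb{C})$, its commutant is trivial, which is exactly the irreducibility of $\Psi $; the $4$-dimensional span of $\Psi (A)$ exhibits the failure of minimality.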
\bigskip We now turn to the special case of cyclic groups where the
representation $\Psi $ of Theorem (\ref{FiniteGroupConclusion}) is always
minimal, i.e. its restriction to $A$ is always an irreducible representation
of $A$. We shall characterize such minimal representations in terms of the
fixed point C*-subalgebra $A_{1}$ of $A$.
\section{Actions of Finite Cyclic Groups}
\bigskip Let $A$ be a unital C*-algebra and $\sigma $ be a *-automorphism of
$A$ of period $n$, for $n\in \mathbb{N}$, i.e. $\sigma ^{n}=\limfunc{Id}_{A}$. We shall not assume that $n$ is the smallest such natural integer, i.e.
$\sigma $ may be of an order dividing $n$. The automorphism $\sigma $
naturally generates an action of $\mathbb{Z}_{n}$ on $A$ by letting $\alpha
_{z}(a)=\sigma ^{k}(a)$ for all $z\in \mathbb{Z}_{n}$ and $k\in \mathbb{Z}$
of class $z$ modulo $n$. The crossed-product $A\rtimes _{\alpha }\mathbb{Z}_{n}$ will be simply denoted by $A\rtimes _{\sigma }\mathbb{Z}_{n}$, and the
canonical unitary $U^{1}\in A\rtimes _{\sigma }\mathbb{Z}_{n}$ corresponding
to $1\in \mathbb{Z}_{n}$ will simply be denoted by $U$. The C*-algebra
$A\rtimes _{\sigma }\mathbb{Z}_{n}$ is universal among all C*-algebras
generated by a copy of $A$ and a unitary $u$ such that $u^{n}=1$ and
$uau^{\ast }=\sigma (a)$.
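The universal property can be checked on a small example. The sketch below is ours and not part of the text; the inner automorphism $\sigma =\limfunc{Ad}(D)$ on $A=M_{2}(\mathbb{C})$ with $n=4$ is a hypothetical choice. It builds the standard finite-dimensional analogue of the regular representation, $\tilde{\pi}(a)=\oplus _{k=0}^{n-1}\sigma ^{-k}(a)$ together with the cyclic block shift $\tilde{U}$, and verifies the defining relations $\tilde{U}^{n}=1$ and $\tilde{U}\tilde{\pi}(a)\tilde{U}^{\ast }=\tilde{\pi}(\sigma (a))$ numerically.

```python
import numpy as np

n, d = 4, 2
D = np.diag([1, 1j])                      # D^4 = 1, so Ad(D) has period 4

def sigma(a, k=1):
    """sigma^k = Ad(D^k)."""
    Dk = np.linalg.matrix_power(D, k % n)
    return Dk @ a @ Dk.conj().T

def pi_tilde(a):
    """Block-diagonal pi~(a) = diag(sigma^{-k}(a)), k = 0, ..., n-1."""
    out = np.zeros((n * d, n * d), dtype=complex)
    for k in range(n):
        out[k * d:(k + 1) * d, k * d:(k + 1) * d] = sigma(a, (-k) % n)
    return out

# U~ cyclically shifts the n blocks of C^d (+) ... (+) C^d.
U_tilde = np.kron(np.roll(np.eye(n), 1, axis=0), np.eye(d)).astype(complex)

rng = np.random.default_rng(0)
a = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

assert np.allclose(np.linalg.matrix_power(U_tilde, n), np.eye(n * d))
assert np.allclose(U_tilde @ pi_tilde(a) @ U_tilde.conj().T, pi_tilde(sigma(a)))
```

By universality, the pair $(\tilde{\pi},\tilde{U})$ therefore induces a representation of $A\rtimes _{\sigma }\mathbb{Z}_{n}$.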
\bigskip Theorem (\ref{FiniteGroupConclusion}) already provides much
information about the structure of irreducible representations of $A\rtimes
_{\sigma }\mathbb{Z}_{n}$. Yet we shall see it is possible in this case to
characterize these representations in terms of irreducible representations
of $A$ and of the fixed point C*-subalgebra $A_{1}$ of $A$ for $\sigma $. Of
central importance in this characterization are minimal representations of $A
$ for $\sigma $ and their relation to irreducible representations of $A_{1}$. We start this section with the exploration of this connection. Next, we
propose a full characterization of irreducible representations of $A\rtimes
_{\sigma }\mathbb{Z}_{n}$.
\subsection{Minimal Representations}
An extreme case of irreducible representation for crossed-products is given
by:
\begin{definition}
An irreducible representation $\Pi $ of $A\rtimes _{\alpha }G$ is
called minimal when its restriction to $A$ is irreducible. Moreover, if $\pi
$ is an irreducible representation of $A$ such that there exists some
irreducible representation $\Pi $ of $A\rtimes _{\alpha }G$ whose
restriction to $A$ is $\pi $, then we say that $\pi $ is minimal for the
action $\alpha $ of $G$.
\end{definition}
Such representations play a central role in the description of irreducible
representations of $A\rtimes _{\sigma }\mathbb{Z}_{n}$ when $\sigma $ is an
automorphism of period $n$. We propose to characterize them in terms of the
fixed point C*-subalgebra $A_{1}$ of $A$. The set $\widehat{\mathbb{Z}_{n}}$
of irreducible representations of $\mathbb{Z}_{n}$ is the Pontryagin dual of
$\mathbb{Z}_{n}$ which we naturally identify with the group $\mathbb{U}_{n}$
of $n^{\text{th}}$ roots of unity in $\mathbb{C}$. Let $\lambda \in
\mathbb{U}_{n}$. Thus $k\in \mathbb{Z}_{n}\mapsto \lambda ^{k}$ is an
irreducible representation of $\mathbb{Z}_{n}$ and the spectral subspace
$A_{\lambda }$ of $A$ for $\lambda $ is given by $\left\{ a:\sigma
(a)=\lambda a\right\} $. Indeed, $A_{\lambda }$ is by definition the range
of the projection $P_{\lambda }:a\in A\mapsto \frac{1}{n}\sum_{k=0}^{n-1}\lambda ^{-k}\sigma ^{k}(a)$ by Equality (\ref{SpectralProjDef}), and it is
easy to check that $P_{\lambda }(a)=a\iff \sigma (a)=\lambda a$ from the
definition of $P_{\lambda }$.
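The projections $P_{\lambda }$ are easy to experiment with. Here is a small numerical illustration of our own (the inner automorphism $\sigma =\limfunc{Ad}(D)$ of $M_{n}(\mathbb{C})$, with $D$ the diagonal matrix of $n^{\text{th}}$ roots of unity, is a hypothetical choice): matrix units satisfy $\sigma (E_{ij})=\lambda ^{i-j}E_{ij}$, so each $E_{ij}$ lies in a single spectral subspace, and $P_{\mu }$ fixes $A_{\mu }$ while annihilating the other spectral subspaces.

```python
import numpy as np

n = 4
lam = np.exp(2j * np.pi / n)
D = np.diag(lam ** np.arange(n))          # diagonal n-th roots of unity

def sigma(a):
    return D @ a @ D.conj().T             # sigma = Ad(D), sigma^n = Id

def P(mu, a):
    """P_mu(a) = (1/n) sum_k mu^{-k} sigma^k(a)."""
    total, b = 0, a
    for k in range(n):
        total = total + mu ** (-k) * b
        b = sigma(b)
    return total / n

E01 = np.zeros((n, n), dtype=complex)
E01[0, 1] = 1

assert np.allclose(sigma(E01), lam ** (-1) * E01)  # E01 lies in A_{lam^{-1}}
assert np.allclose(P(lam ** (-1), E01), E01)       # P_mu fixes A_mu ...
assert np.allclose(P(1, E01), 0)                   # ... and kills the others
```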
\begin{theorem}
\bigskip \label{Rep}Let $\sigma $ be a *-automorphism of a unital C*-algebra
$A$ of period $n$. Let $\Pi $ be an irreducible representation of $A\rtimes
_{\sigma }\mathbb{Z}_{n}$ on a Hilbert space $\mathcal{H}$ and let $\pi _{A}$
be its restriction to $A$. Let $\Sigma $ be the spectrum of $U_{\Pi }:=\Pi
(U)$. Now, $\Sigma $ is a subset of $\mathbb{U}_{n}$; let us write $\Sigma
=\left\{ \lambda _{1},\ldots ,\lambda _{p}\right\} $ and denote the spectral
subspace of $U_{\Pi }$ associated to $\lambda _{j}$ by $\mathcal{H}_{j}$.
With the decomposition $\mathcal{H}=\oplus _{k=1}^{p}\mathcal{H}_{k}$, we
write, for all $a\in A$
\begin{equation}
\pi _{A}(a)=\left[
\begin{array}{cccc}
\alpha _{11}(a) & \alpha _{12}(a) & \cdots & \alpha _{1p}(a) \\
\alpha _{21}(a) & \alpha _{22}(a) & & \alpha _{2p}(a) \\
\vdots & & \ddots & \vdots \\
\alpha _{p1}(a) & \alpha _{p2}(a) & \cdots & \alpha _{pp}(a)
\end{array}
\right] \text{.} \label{RepAlphaDec}
\end{equation}
Then for $k,j\in \left\{ 1,\ldots ,p\right\} $ the map $\alpha _{jk}$ is a
linear map on $A_{\lambda _{j}\overline{\lambda _{k}}}$ and null on $\oplus
_{\mu \not=\lambda _{j}\overline{\lambda _{k}}}A_{\mu }$. Moreover, the maps
$\alpha _{kk}$ are irreducible *-representations of the fixed point
C*-algebra $A_{1}$.
Furthermore, the following are equivalent:
\begin{itemize}
\item The representation $\pi _{A}$ of $A$ is irreducible, i.e. $\Pi $ is
minimal,
\item The *-representations $\alpha _{11},\ldots ,\alpha _{pp}$ are pairwise
not unitarily equivalent, i.e. for all $i\not=j\in \left\{ 1,\ldots
,p\right\} $ the representation $\alpha _{ii}$ is not equivalent to $\alpha
_{jj}$.
\end{itemize}
\end{theorem}
\begin{proof}
Since $U_{\Pi }^{n}=1$, the spectrum of the unitary $U_{\Pi }$ is a subset
$\Sigma =\left\{ \lambda _{1},\ldots ,\lambda _{p}\right\} $ of $\mathbb{U}_{n}$ for some $p\in \mathbb{N}$. We write $\mathcal{H}=\mathcal{H}_{1}\oplus \ldots \oplus \mathcal{H}_{p}$ where $\mathcal{H}_{i}$ is the
spectral subspace of $U_{\Pi }$ for the eigenvalue $\lambda _{i}$ for
$i=1,\ldots ,p$, so that $U_{\Pi }=\left[
\begin{array}{ccc}
\lambda _{1} & & \\
& \ddots & \\
& & \lambda _{p}
\end{array}
\right] $. Let $i,j\in \left\{ 1,\ldots ,p\right\} $ and let $\alpha _{ij}$
be the map defined by Identity (\ref{RepAlphaDec}). First, it is immediate
that $\alpha _{ij}$ is linear. Now, a simple computation shows that
\begin{eqnarray*}
&&U_{\Pi }\pi (a)U_{\Pi }^{\ast }= \\
&=&\left[
\begin{array}{ccc}
\lambda _{1} & & \\
& \ddots & \\
& & \lambda _{p}
\end{array}
\right] \left[
\begin{array}{ccc}
\alpha _{11}(a) & \cdots & \alpha _{1p}(a) \\
\vdots & & \vdots \\
\alpha _{p1}(a) & \cdots & \alpha _{pp}(a)
\end{array}
\right] \left[
\begin{array}{ccc}
\overline{\lambda _{1}} & & \\
& \ddots & \\
& & \overline{\lambda _{p}}
\end{array}
\right] \\
&=&\left[
\begin{array}{cccc}
\alpha _{11}(a) & \lambda _{1}\overline{\lambda _{2}}\alpha _{12}(a) & \cdots
& \lambda _{1}\overline{\lambda _{p}}\alpha _{1p}(a) \\
\lambda _{2}\overline{\lambda _{1}}\alpha _{21}(a) & \alpha _{22}(a) & &
\lambda _{2}\overline{\lambda _{p}}\alpha _{2p}(a) \\
\vdots & & \ddots & \vdots \\
\lambda _{p}\overline{\lambda _{1}}\alpha _{p1}(a) & \lambda _{p}\overline{\lambda _{2}}\alpha _{p2}(a) & \cdots & \alpha _{pp}(a)
\end{array}
\right] =\pi \left( \sigma (a)\right) \text{.}
\end{eqnarray*}
Therefore for all $i,j\in \left\{ 1,\ldots ,p\right\} $ we have that $\alpha
_{ij}(\sigma (a))=\lambda _{i}\overline{\lambda _{j}}\alpha _{ij}(a)$. Let
$a\in A_{\mu }$ for $\mu \in \mathbb{U}_{n}$, i.e. $\sigma (a)=\mu a$. Then
$\alpha _{ij}(\sigma (a))=\mu \alpha _{ij}(a)$. Therefore either $\alpha
_{ij}(a)=0$ or $\mu =\lambda _{i}\overline{\lambda _{j}}$.
In particular, $\alpha _{jj}$ is a representation of $A_{1}$ for all $j\in
\left\{ 1,\ldots ,p\right\} $. Indeed, if $a\in A_{1}$ then $\alpha
_{jk}(a)=0$ if $j\not=k$ and thus $\pi _{A}(a)$ is diagonal. Since $\pi _{A}$
is a representation of $A$, it follows from easy computations that $\alpha
_{jj}$ are representations of $A_{1}$.
Now, since $A\oplus AU\oplus \ldots \oplus AU^{n-1}=A\rtimes _{\sigma }\mathbb{Z}_{n}$, every element of the range of $\Pi $ is of the form $\sum
_{j=0}^{n-1}\pi _{A}(a_{j})U_{\Pi }^{j}$ for $a_{0},\ldots ,a_{n-1}\in A$.
Now, let $i\in \left\{ 1,\ldots ,p\right\} $. We observe that the $(i,i)$
entry of $\sum _{j=0}^{n-1}\pi _{A}(a_{j})U_{\Pi }^{j}$ in the
decomposition $\mathcal{H}=\mathcal{H}_{1}\oplus \ldots \oplus \mathcal{H}_{p}$ is given by $\sum_{j=0}^{n-1}\lambda _{i}^{j}\alpha
_{ii}(a_{j})=\alpha _{ii}\left( \sum_{j=0}^{n-1}\lambda _{i}^{j}a_{j}\right)
$. Hence, the $(i,i)$ entries of operators in the range of $\Pi $ are
exactly given by the operators in the range of $\alpha _{ii}$. Now, let $T$
be any operator acting on $\mathcal{H}$. Since $\Pi $ is irreducible, by the
Von Neumann double commutant Theorem \cite[Theorem 1.7.1]{Davidson}, $T$ is
the limit, in the weak operator topology ($\limfunc{WOT}$), of elements in
the range of $\Pi $. In particular, the $(i,i)$ entry of $T$ in the
decomposition $\mathcal{H}=\mathcal{H}_{1}\oplus \ldots \oplus \mathcal{H}_{p}$ is itself a $\limfunc{WOT}$ limit of elements in the range of $\alpha
_{ii}$ since the left and right multiplications by a fixed operator are
$\limfunc{WOT}$ continuous \cite[p. 16]{Davidson}. Therefore, the range of
$\alpha _{ii}$ is $\limfunc{WOT}$ dense in $\mathcal{B}\left( \mathcal{H}_{i}\right) $. Thus, by the
double commutant theorem again, $\alpha _{ii}$ is irreducible.
We now turn to characterizing minimal representations. We first establish a
necessary condition.
Suppose that there exist $i,j\in \left\{ 1,\ldots ,p\right\} $ with
$i\not=j $ and a unitary $u$ such that $u\alpha _{ii}u^{\ast }=\alpha _{jj}$.
In the decomposition $\mathcal{H}=\mathcal{H}_{1}\oplus \ldots \oplus
\mathcal{H}_{p}$, define the block-diagonal unitary
\begin{equation*}
D_{u}^{i}=\underset{i-1\text{ times}}{\underbrace{1\oplus \ldots \oplus 1}}\oplus u\oplus \underset{p-i\text{ times}}{\underbrace{1\oplus \ldots \oplus
1}}\text{.}
\end{equation*}
Then by conjugating $\pi _{A}$ by $D_{u}^{i}$, we see that we may as well
assume $\alpha _{ii}=\alpha _{jj}$. Yet, this implies that in the $\limfunc{WOT}$-closure of the range of $\pi _{A}$, every operator has the same $(i,i)$
and $(j,j)$ entry in the decomposition $\mathcal{H}=\mathcal{H}_{1}\oplus
\ldots \oplus \mathcal{H}_{p}$. Hence the range of $\pi _{A}$ is not
$\limfunc{WOT}$-dense and thus $\pi _{A}$ is reducible, so $\Pi $ is not
minimal.
We now prove that our necessary condition is also sufficient. Assume that
$\alpha _{11},\ldots ,\alpha _{pp}$ are pairwise not unitarily equivalent. The
claim is that $\pi _{A}$ is irreducible.
Let $T\in \left( \pi \left( A\right) \right) ^{\prime }.$ Decompose $T=\left[
\begin{array}{ccc}
T_{11} & \cdots & T_{1p} \\
\vdots & & \vdots \\
T_{p1} & \cdots & T_{pp}
\end{array}
\right] $ with respect to the decomposition $\mathcal{H}=\oplus _{i=1}^{p}\mathcal{H}_{i}$. Let $i\neq j.$ First, note that if $a\in A_{1}$ then
$\alpha _{ij}(a)=0$. Second, since $T$ commutes with $\pi _{A}(a)$ for $a\in
A_{1}$, we have
\begin{equation}
\alpha _{ii}\left( a\right) T_{ij}=T_{ij}\alpha _{jj}\left( a\right) \text{
for }a\in A_{1}\text{.} \label{Rep1}
\end{equation}
By Lemma (\ref{Schur}), since $\alpha _{ii}$ and $\alpha _{jj}$ are
irreducible and not unitarily equivalent for $i\not=j$, we conclude that
$T_{ij}=0$. Moreover, for all $i\in \left\{ 1,\ldots ,p\right\} $ and $a\in
A_{1}$ we have $\alpha _{ii}\left( a\right) T_{ii}=T_{ii}\alpha _{ii}\left(
a\right) $. Since $\alpha _{ii}$ is irreducible, we conclude that $T_{ii}$
is a scalar. Therefore, the operator $T$ commutes with the operator $U_{\Pi
} $. Since $\Pi $ is irreducible, we conclude that $T$ itself is a scalar.
Therefore, $\pi _{A}$ is an irreducible representation of $A$ and thus $\Pi $
is minimal.
\end{proof}
\bigskip Together with Theorem (\ref{FiniteGroupConclusion}), Theorem (\ref{Rep}) will allow us to now develop further the description of arbitrary
irreducible representations of crossed-products by finite cyclic groups. It
is interesting to look at a few very simple examples to get some intuition
as to what could be a more complete structure theory for irreducible
representations of crossed-products by $\mathbb{Z}_{n}$. First of all, one
should not expect in general that the spectrum of $U_{\Pi }$ is a coset of
$\mathbb{Z}_{n}$, as the simple action of $\sigma =\limfunc{Ad}\left[
\begin{array}{cc}
i & \\
& e^{i\frac{3\pi }{4}}
\end{array}
\right] $ on $M_{2}\left( \mathbb{C}\right) $ shows. In this case, the
identity is the only irreducible representation of the crossed-product
$M_{2}\left( \mathbb{C}\right) \rtimes _{\sigma }\mathbb{Z}_{4}=M_{2}\left(
\mathbb{C}\right) $ and clearly $\left\{ i,e^{i\frac{3\pi }{4}}\right\} $ is
not a coset of $\mathbb{Z}_{4}$. Of course, this is an example of a minimal
representation.
\bigskip In \cite{Latremoliere06}, we showed that all irreducible
representations of $A\rtimes _{\sigma }\mathbb{Z}_{2}$ were regular or
minimal. The following example shows that we cannot expect the same in the
general case.
\begin{example}
\label{CuteExample}Let $A=M_{2}\left( \mathbb{C}\right) \oplus M_{2}\left(
\mathbb{C}\right) $ and define $\sigma (M\oplus N)=WNW^{\ast }\oplus M$ with
$W=\left[
\begin{array}{cc}
0 & 1 \\
1 & 0
\end{array}
\right] $. Then $\sigma ^{4}=\limfunc{Id}_{A}$ and $\sigma ^{2}(M\oplus
N)=WMW^{\ast }\oplus WNW^{\ast }$. Now, let $\pi _{i}:M_{1}\oplus M_{2}\in
A\mapsto M_{i}$ with $i=1,2$. Of course, $\pi _{1},\pi _{2}$ are the only
two irreducible representations of $A$ up to equivalence, and they are not
equivalent to each other (since they have complementary kernels). Now, we
consider the following representation $\Pi $ of $A\rtimes _{\sigma }\mathbb{Z
}_{4}$. It acts on $\mathbb{C}^{4}$. We set
\begin{equation*}
\pi _{A}=\left[
\begin{array}{cc}
\pi _{1} & 0 \\
0 & \pi _{2}
\end{array}
\right]
\end{equation*}
and
\begin{equation*}
U_{\Pi }=\left[
\begin{array}{cc}
0 & 1 \\
W & 0
\end{array}
\right] \text{.}
\end{equation*}
First, observe that $\Pi $ thus defined is irreducible. Indeed, $M$ commutes
with $\pi _{A}$ if and only if $M=\left[
\begin{array}{cc}
\lambda & b \\
c & \mu
\end{array}
\right] $ with $\lambda ,\mu \in \mathbb{C}$ and $b\pi _{2}(a)=\pi _{1}(a)c$
with $a\in A$. Now, $M$ commutes with $U_{\Pi }$ if and only if $\lambda
=\mu $ and $Wb=c$. Now, let $a\in M_{2}\left( \mathbb{C}\right) $ be
arbitrary; then $b\pi _{2}\left( a\oplus Wa\right) =\pi _{1}\left( a\oplus
Wa\right) c$ i.e.
\begin{equation*}
bWa=abW\text{.}
\end{equation*}
Hence $bW$ is scalar. So $b=\lambda W$. Thus $b$ commutes with $W$. But then
for an arbitrary $a$ we have $b\pi _{2}\left( aW\oplus a\right) =\pi
_{1}\left( aW\oplus a\right) bW$ i.e. $ba=aWbW=ab$ so $b$ commutes with
$M_{2}\left( \mathbb{C}\right) $ and thus is scalar. Hence $b=0$. So
$M=\lambda 1$ for $\lambda \in \mathbb{C}$ as needed.
Moreover, the restriction of $\Pi $ to $A$ is $\pi _{A}=\pi _{1}\oplus \pi
_{2}$. Thus, $\pi _{A}$ is reducible. Now, the fixed point C*-algebra $A_{1}$
is the C*-algebra $\left\{ M\oplus M:M=\left[
\begin{array}{cc}
a & b \\
b & a
\end{array}
\right] ;a,b\in \mathbb{C}\right\} $. Thus, $A_{1}$ has two irreducible
representations which are not equivalent:
\begin{equation*}
\varphi _{1}:\left[
\begin{array}{cc}
a & b \\
b & a
\end{array}
\right] \in A_{1}\mapsto a+b
\end{equation*}
and
\begin{equation*}
\varphi _{2}:\left[
\begin{array}{cc}
a & b \\
b & a
\end{array}
\right] \in A_{1}\mapsto a-b\text{.}
\end{equation*}
We note that for $i=1,2$ we have $\pi _{i}$ restricted to $A_{1}$ is
$\varphi _{1}\oplus \varphi _{2}$.
\end{example}
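The defining relations of this example are finite-dimensional and can be checked numerically. The following sketch (our verification, not part of the argument; the helper \texttt{commutant\_dim} and the random test matrices are ours, and we implement the covariance in the form $U_{\Pi }^{\ast }\pi _{A}(a)U_{\Pi }=\pi _{A}(\sigma (a))$) confirms that $U_{\Pi }^{4}=1$, that the commutant of $\pi _{A}$ alone is two-dimensional, and that the joint commutant with $U_{\Pi }$ is reduced to the scalars:

```python
import numpy as np

W = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

def pi_A(M, N):
    """pi_A(M (+) N) = pi_1 (+) pi_2 acting block-diagonally on C^4."""
    return np.block([[M, Z2], [Z2, N]])

U = np.block([[Z2, I2], [W, Z2]])   # U_Pi in the decomposition C^2 (+) C^2

# U_Pi^4 = 1
assert np.allclose(np.linalg.matrix_power(U, 4), np.eye(4))

# covariance, in the convention U* pi_A(a) U = pi_A(sigma(a))
rng = np.random.default_rng(0)
M = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
N = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
sM, sN = W @ N @ W.conj().T, M      # sigma(M (+) N) = W N W* (+) M
assert np.allclose(U.conj().T @ pi_A(M, N) @ U, pi_A(sM, sN))

def commutant_dim(gens):
    """Dimension of {X in M_4 : XA = AX for every generator A}."""
    I4 = np.eye(4)
    K = np.vstack([np.kron(A, I4) - np.kron(I4, A.T) for A in gens])
    return 16 - np.linalg.matrix_rank(K)

E = [np.eye(2, dtype=complex)[:, [i]] @ np.eye(2, dtype=complex)[[j], :]
     for i in range(2) for j in range(2)]
gens_A = [pi_A(e, Z2) for e in E] + [pi_A(Z2, e) for e in E]

assert commutant_dim(gens_A) == 2         # pi_A alone is reducible
assert commutant_dim(gens_A + [U]) == 1   # Pi is irreducible
```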
Now, using the notations of Example (\ref{CuteExample}), $\Pi $ is not
regular, since the restriction of any irreducible regular representation to
the fixed point algebra $A_{1}$ is given by the sum of several copies of the
same irreducible representation of $A_{1}$. Trivially, $\Pi $ is not minimal
either since $\Pi _{|A}=\pi _{1}\oplus \pi _{2}$. However, both $\pi _{1}$
and $\pi _{2}$ are minimal for the action of $\sigma ^{2}$. Moreover, both
$\pi _{1}$ and $\pi _{2}$ restricted to $A_{1}$ are the same representation
$\varphi _{1}\oplus \varphi _{2}$. We shall see in the next section that this
pattern is in fact general.
\subsection{Characterization of Irreducible Representations}
We now present the main result of this paper concerning crossed products by
finite cyclic groups. In this context, one can go further than Theorem\ (\ref
{FiniteGroupConclusion}) to obtain a characterization of irreducible
representations of the crossed-products in terms of the C*-algebras $A$ and
$A_{1}$. The next lemma is the sufficient condition for this characterization.
\begin{lemma}
\label{SufficientCyclic}Let $\pi _{1}$ be an irreducible representation of
A $ acting on a Hilbert space $\mathcal{J}$. Assume that there exists a
unitary $V$ on $\mathcal{J}$ such that for some $m,k\in \left\{ 1,\ldots
,n\right\} $ with $n=mk$ we have $\pi _{1}\circ \sigma ^{m}=V\pi _{1}V^{\ast
}$ and $V^{k}=1$, and that $m$ is the smallest such nonzero natural integer,
i.e. $\pi _{1}\circ \sigma ^{j}$ is not unitarily equivalent to $\pi _{1}$
for $j\in \left\{ 2,\ldots ,m-1\right\} $. Then define the following
operators on the Hilbert space $\mathcal{H}=\underset{m\text{ times}}
\underbrace{\mathcal{J}\oplus \ldots \oplus \mathcal{J}}\text{:}}
\begin{equation*}
\Pi \left( U\right) =\left[
\begin{array}{ccccc}
0 & 1 & 0 & \cdots & 0 \\
\vdots & & 1 & & \vdots \\
\vdots & & & \ddots & \vdots \\
0 & 0 & \cdots & 0 & 1 \\
V & 0 & \cdots & 0 & 0
\end{array}
\right]
\end{equation*}
and for all $a\in A$
\begin{equation*}
\pi _{A}(a)=\left[
\begin{array}{cccc}
\pi _{1}(a) & & & \\
& \pi _{1}\circ \sigma (a) & & \\
& & \ddots & \\
& & & \pi _{1}\circ \sigma ^{m-1}(a)
\end{array}
\right] \text{.}
\end{equation*}
Then the unique extension of $\Pi $ to $A\rtimes _{\sigma }\mathbb{Z}_{n}$
is an irreducible representation of $A\rtimes _{\sigma }\mathbb{Z}_{n}$.
\end{lemma}
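Before the proof, here is a toy instance of the construction (our illustration, with data not taken from the paper): take $A=C(\mathbb{Z}_{4})$ realized as diagonal $4\times 4$ matrices, $\sigma $ the cyclic shift of coordinates, and $\pi _{1}$ the evaluation at $0$, so that $m=4$, $k=1$ and $V=1$. The operators of the lemma then give the regular representation on $\mathbb{C}^{4}$, which is indeed irreducible:

```python
import numpy as np

n = 4
S = np.roll(np.eye(n), -1, axis=0)   # Pi(U): the cyclic shift, with V = 1
assert np.allclose(np.linalg.matrix_power(S, n), np.eye(n))   # V^k = 1, mk = n

def pi_A(a):
    """pi_A(a) = diag(pi_1(a), pi_1(sigma(a)), ...) = diag(a[0], ..., a[3])."""
    return np.diag(np.asarray(a, dtype=float))

a = [1.0, 2.0, 3.0, 4.0]
sa = [2.0, 3.0, 4.0, 1.0]            # sigma(a): coordinates shifted by one
assert np.allclose(S @ pi_A(a) @ S.T, pi_A(sa))   # covariance U a U* = sigma(a)

def commutant_dim(gens):
    """Dimension of {X in M_n : XA = AX for every generator A}."""
    I = np.eye(n)
    K = np.vstack([np.kron(A, I) - np.kron(I, A.T) for A in gens])
    return n * n - np.linalg.matrix_rank(K)

diag_gens = [np.diag(np.eye(n)[j]) for j in range(n)]
assert commutant_dim(diag_gens) == n          # pi_A alone: commutant = diagonals
assert commutant_dim(diag_gens + [S]) == 1    # the extension Pi is irreducible
```

Here the four characters $\pi _{1}\circ \sigma ^{j}$ are pairwise inequivalent, as the lemma requires.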
\begin{proof}
An easy computation shows that $\Pi $ thus defined is a representation of
$A\rtimes _{\sigma }\mathbb{Z}_{n}$ on $\mathcal{H}=\underset{m\text{ times}}{
\underbrace{\mathcal{J}\oplus \ldots \oplus \mathcal{J}}}$. Write $\pi
_{i}=\pi _{1}\circ \sigma ^{i-1}$ for $i=1,\ldots ,m$. Let $T$ be an
operator which commutes with the range of $\Pi $. Then $T$ commutes with
$\pi _{A}:=\Pi _{|A}$. Writing $T$ in the decomposition $\mathcal{H}=\mathcal{
J}\oplus \ldots \oplus \mathcal{J}$ as
\begin{equation*}
T=\left[
\begin{array}{ccc}
T_{11} & \cdots & T_{1m} \\
\vdots & & \vdots \\
T_{m1} & \cdots & T_{mm}
\end{array}
\right] \text{.}
\end{equation*}
Let $i,j\in \left\{ 1,\ldots ,m\right\} $. Since $T\pi _{A}(a)=\pi _{A}(a)T$
for all $a\in A$, we conclude that $\pi _{i}(a)T_{ij}=T_{ij}\pi _{j}(a)$. By
Lemma\ (\ref{Schur}), since $\pi _{i}$ and $\pi _{j}$ are irreducible and
not unitarily equivalent, we conclude that $T_{ij}=0$. Moreover, $T_{ii}$
commutes with $\pi _{i}$ which is irreducible, so we conclude that
\begin{equation*}
T=\left[
\begin{array}{ccc}
\lambda _{1} & & \\
& \ddots & \\
& & \lambda _{m}
\end{array}
\right]
\end{equation*}
for $\lambda _{1},\ldots ,\lambda _{m}\in \mathbb{C}$. Since $T$ commutes
with $U_{\Pi }$ we conclude that $\lambda _{1}=\lambda _{i}$ for all $i\in
\left\{ 1,\ldots ,m\right\} $. Hence $\Pi $ is irreducible.
\end{proof}
\bigskip We are now ready to describe in detail the structure of irreducible
representations of crossed-products by finite cyclic groups in terms of
irreducible representations of $A$ and $A_{1}$.
\begin{theorem}
\label{CyclicConclusion}Let $\sigma $ be a *-automorphism of period $n$ of a
unital C*-algebra $A$. Then the following are equivalent:
\begin{enumerate}
\item $\Pi $ is an irreducible representation of $A\rtimes _{\sigma }\mathbb{
Z}_{n}$,
\item There exist $k,m\in \mathbb{N}$ with $km=n$, an irreducible
representation $\pi _{1}$ of $A$ on a Hilbert space $\mathcal{J}$ and a
unitary $V$ on $\mathcal{J}$ with $V^{k}=1$ and $V\pi _{1}\left( \cdot
\right) V^{\ast }=\pi _{1}\circ \sigma ^{m}\left( \cdot \right) $, such that
\begin{equation*}
\Pi (U)=\left[
\begin{array}{cccc}
0 & 1 & & \\
& \ddots & \ddots & \\
& & 0 & 1 \\
V & & & 0
\end{array}
\right]
\end{equation*}
and for all $a\in A$
\begin{equation*}
\Pi (a)=\left[
\begin{array}{cccc}
\pi _{1}(a) & & & \\
& \pi _{1}\circ \sigma (a) & & \\
& & \ddots & \\
& & & \pi _{1}\circ \sigma ^{m-1}(a)
\end{array}
\right]
\end{equation*}
where for any $i\in \left\{ 1,\ldots ,m-1\right\} $ the representations $\pi
_{1}$ and $\pi _{1}\circ \sigma ^{i}$ are not equivalent.
\end{enumerate}
Moreover, if (2) holds then the representation $\psi $ of $A\rtimes _{\sigma
^{m}}\mathbb{Z}_{k}$ on $\mathcal{J}$\ defined by $\psi (a)=\pi _{1}(a)$ for
$a\in A$ and $\psi (U)=V$ is a minimal representation of $A\rtimes _{\sigma
^{m}}\mathbb{Z}_{k}$. Let $\eta $ be the cardinality of the spectrum of $V$.
The restriction of $\pi _{1}$ to $A_{1}$ is therefore the sum of $\eta $
irreducible representations $\varphi _{1},\ldots ,\varphi _{\eta }$ of
$A_{1}$ which are pairwise not equivalent. Last, the restriction of $\pi
_{1}\circ \sigma ^{i}$ to $A_{1}$ is unitarily equivalent to $\varphi
_{1}\oplus \ldots \oplus \varphi _{\eta }=\pi _{1|A_{1}}$ for all $i\in
\left\{ 0,\ldots ,m-1\right\} $.
\end{theorem}
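As a concrete check (ours, not from the paper), the data of Example (\ref{CuteExample}) fits the normal form of condition (2) with $m=2$, $k=2$ and $V=W$: one verifies numerically that $V^{2}=1$ and $V\pi _{1}\left( \cdot \right) V^{\ast }=\pi _{1}\circ \sigma ^{2}\left( \cdot \right) $.

```python
import numpy as np

W = np.array([[0, 1], [1, 0]], dtype=complex)

def sigma(MN):                      # sigma(M (+) N) = W N W* (+) M
    M, N = MN
    return (W @ N @ W.conj().T, M)

def pi1(MN):                        # pi_1(M (+) N) = M
    return MN[0]

rng = np.random.default_rng(1)
a = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2)),
     rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2)))

assert np.allclose(W @ W, np.eye(2))   # V^k = V^2 = 1
# V pi_1(a) V* = pi_1(sigma^m(a)) with V = W and m = 2:
assert np.allclose(W @ pi1(a) @ W.conj().T, pi1(sigma(sigma(a))))
```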
\begin{proof}
By Lemma\ (\ref{SufficientCyclic}), (2) implies (1). We now turn to the
proof of (1) implies (2). Let $\Pi $ be an irreducible representation of
$A\rtimes _{\sigma }\mathbb{Z}_{n}$. By Theorem (\ref{FiniteGroupConclusion
}), there exists $m\in \mathbb{N}$ such that $m$ divides $n$, an irreducible
representation $\pi _{1}$ of $A$ on some space $\mathcal{H}_{1}$ and $r\in
\mathbb{N}$ with $r>0$ such that, if $\pi =r\cdot \pi _{1}$ then up to
conjugating $\Pi $ by some unitary:
\begin{itemize}
\item For all $i=1,\ldots ,m-1$ the representation $\pi \circ \sigma ^{i}$
is not equivalent to $\pi $,
\item The representation $\pi \circ \sigma ^{m}$ is equivalent to $\pi $,
\item We have the decomposition $\mathcal{H}=\mathcal{J}_{0}\oplus \ldots
\oplus \mathcal{J}_{m-1}$ where $\mathcal{J}_{i}$ is the space on which
$\pi \circ \sigma ^{i}$ acts for $i\in \left\{ 0,\ldots
,m-1\right\} $ and is isometrically isomorphic to $\mathcal{J}$,
\item In the decomposition $\mathcal{H}=\mathcal{J}_{0}\oplus \ldots \oplus
\mathcal{J}_{m-1}$ there exist unitaries $U_{1},\ldots ,U_{m}$ such that
\begin{equation*}
U_{\Pi }=\left[
\begin{array}{ccccc}
0 & U_{1} & 0 & \cdots & 0 \\
0 & 0 & U_{2} & 0 & \vdots \\
\vdots & & \ddots & \ddots & 0 \\
0 & & & 0 & U_{m-1} \\
U_{m} & 0 & \cdots & 0 & 0
\end{array}
\right]
\end{equation*}
with $\left( \pi _{i}\circ \sigma \right) U_{i}=U_{i}\pi _{i+1}$ and $U_{i}:
\mathcal{H}_{i+1}\longrightarrow \mathcal{H}_{i}$ for all $i\in \mathbb{Z}
_{m}$.
\end{itemize}
Indeed, if $G=\mathbb{Z}_{n}$ in Theorem (\ref{FiniteGroupConclusion}) then
$H$, as a subgroup of $G$, is of the form $\left( m\mathbb{Z}\right) /n
\mathbb{Z}$ with $m$ dividing $n$, and if we let $g_{1}=0$, $g_{2}=1$,
\ldots , $g_{m}=m-1$ then we can check that this choice satisfies the
hypothesis of Theorem (\ref{FiniteGroupConclusion}). With this choice, the
permutation $\sigma ^{1}$ is then easily seen to be given by the cycle
$\left( 1~2~\ldots ~m\right) $.
We will find it convenient to introduce some notation for the rest of the
proof. By Theorem (\ref{FiniteGroupConclusion}), for $i\in \left\{ 0,\ldots
,m-1\right\} $, after possibly conjugating $\Pi $ by some unitary, we can
decompose $\mathcal{J}_{i}$ as $\mathcal{H}_{ri+1}\oplus \ldots \oplus
\mathcal{H}_{r(i+1)}$, where $\mathcal{H}_{ri+j}$ is isometrically
isomorphic to $\mathcal{H}_{1}$ for all $j\in \left\{ 1,\ldots ,r\right\} $,
so that the restriction of $\Pi _{|A}$ to the space $\mathcal{J}_{i}$ is
written $\left( \underset{r\text{ times}}{\underbrace{\pi _{1}\oplus \ldots
\oplus \pi _{1}}}\right) \circ \sigma ^{i}$ in this decomposition.
We now show how to conjugate $\Pi $ by a unitary to simplify its expression
further.
If we define the unitary $\Upsilon $ from $\mathcal{H}=\mathcal{J}_{0}\oplus
\ldots \oplus \mathcal{J}_{m-1}$ onto $\oplus _{1}^{m}\mathcal{J}_{m-1}$ by:
\begin{equation*}
\Upsilon =\left[
\begin{array}{cccc}
U_{m}^{\ast }U_{m-1}^{\ast }\cdots U_{1}^{\ast } & & & \\
& U_{m}^{\ast }\cdots U_{2}^{\ast } & & \\
& & \ddots & \\
& & & U_{m}^{\ast }
\end{array}
\right]
\end{equation*}
then the unitary $\limfunc{Ad}\left( \Upsilon \right) \circ \Pi \left(
U\right) $ of $\oplus _{1}^{m}\mathcal{J}_{m-1}$\ is of the simpler form
\begin{equation}
\limfunc{Ad}\Upsilon \circ \Pi \left( U\right) =\left[
\begin{array}{ccccc}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \ddots & \vdots \\
\vdots & \vdots & \ddots & \ddots & 0 \\
0 & 0 & \cdots & 0 & 1 \\
V & 0 & \cdots & 0 & 0
\end{array}
\right] \label{UnitaryShift}
\end{equation}
for some unitary $V$ of $\mathcal{J}_{m-1}$. Moreover, if we write $\rho
_{1}=\limfunc{Ad}\left( U_{i}^{\ast }\ldots U_{1}^{\ast }\right) \circ \pi
_{1}$, then
\begin{equation*}
\limfunc{Ad}\Upsilon \circ \pi _{A}=\dbigoplus\limits_{j=1}^{m}\left(
\underset{r\text{ times}}{\underbrace{\rho _{1}\circ \sigma ^{j-1}\oplus
\ldots \oplus \rho _{1}\circ \sigma ^{j-1}}}\right)
\end{equation*}
and $\rho _{1}$ is by definition an irreducible representation of $A$
unitarily equivalent to $\pi _{1}$.
To simplify notations, we shall henceforth drop the notation $\limfunc{Ad}
\Upsilon $ and simply write $\Pi $ for $\limfunc{Ad}\Upsilon \circ \Pi $. In
other words, we replace $\Pi $ by $\limfunc{Ad}\Upsilon \circ \Pi $ and we
shall use the notations introduced to study $\Pi $ henceforth, with the
understanding that for all $j=0,\ldots ,m-1$ and $k=1,\ldots ,r$ we have
that $\pi _{rj+k}=\pi _{1}\circ \sigma ^{j}$, that $\mathcal{J}_{j}$ is an
isometric copy of $\mathcal{J}_{0}$ and that $\mathcal{H}=\mathcal{J}
_{0}\oplus \ldots \oplus \mathcal{J}_{m-1}$ with $\mathcal{J}_{j}=\mathcal{H}
_{rj+1}\oplus \ldots \oplus \mathcal{H}_{r(j+1)}$ where $\pi _{rj+k}$ acts
on $\mathcal{H}_{rj+k}$ which is an isometric copy of $\mathcal{H}_{1}$.
Moreover, $U_{\Pi }$ is given by Equality (\ref{UnitaryShift})\ for some
unitary $V$ of $\mathcal{J}_{0}$.
We are left to show that each irreducible subrepresentation of $\pi _{A}$ is
of multiplicity one, i.e. $r=1$. We recall that we have shown above that
$H=\left( m\mathbb{Z}\right) /n\mathbb{Z}$ with $n=mk$ and $k\in \mathbb{N}$.
Using the notations of Theorem (\ref{FiniteGroupConclusion}), the
representation $\Psi $ defined by $\Psi (a)=\pi (a)$ for all $a\in A$ and
$\Psi (U^{m})=V$ is an irreducible representation of $A\rtimes _{\alpha }H$.
Now $A\rtimes _{\alpha }H$ is *-isomorphic to $A\rtimes _{\alpha ^{m}}
\mathbb{Z}_{k}$ by universality of the C*-crossed-product, and we now
identify these two C*-algebras. The image of $U^{m}\in A\rtimes _{\alpha }H$
in the crossed-product $A\rtimes _{\alpha ^{m}}\mathbb{Z}_{k}$ is denoted by
$\upsilon $ and is the canonical unitary of $A\rtimes _{\alpha ^{m}}\mathbb{Z
}_{k}$. Thus by Theorem (\ref{FiniteGroupConclusion}) $\Psi $ is an
irreducible representation of $A\rtimes _{\alpha ^{m}}\mathbb{Z}_{k}$ which
(up to conjugacy) acts on the space $\mathbb{C}^{r}\otimes \mathcal{H}_{1}$
and is of the form $\Psi (a)=1_{\mathbb{C}^{r}}\otimes \pi _{1}(a)$ for
$a\in A$ and $\Psi (\upsilon ^{z})=\Omega (z)\otimes W(z)$ for $z\in \mathbb{Z
}_{k}$ where $\Omega $ and $W$ are some unitary projective representations
with $\Omega $ being irreducible. Since $\mathbb{Z}_{k}$ is cyclic, the
range of the projective representation $\Omega $ is contained in the
C*-algebra $C^{\ast }\left( \Omega (1)\right) $ which is Abelian since
$\Omega (1)$ is a unitary. Hence, since $\Omega $ is irreducible, $C^{\ast
}\left( \Omega (1)\right) $ is an irreducible Abelian C*-algebra of
operators acting on $\mathbb{C}^{r}$. Hence $r=1$ and $\mathcal{J}=\mathcal{H
}_{1}$. Moreover, since $U_{\Pi }^{m}=V\oplus \ldots \oplus V$ then $U_{\Pi
}^{n}=V^{k}\oplus \ldots \oplus V^{k}=1_{\mathcal{H}}$ and thus $V^{k}=1_{
\mathcal{J}}$. Therefore, (2) holds as claimed.
Last, we also observed that $V\pi _{1}V^{\ast }=\pi _{1}\circ \sigma ^{m}$ by
construction (since $U_{\Pi }^{m}=V\oplus \ldots \oplus V$). Hence by
definition, since $\pi _{1}$ is irreducible, the representation $\psi $ of
$A\rtimes _{\sigma ^{m}}\mathbb{Z}_{k}$ defined by $\psi (a)=\pi _{1}(a)$
for $a\in A$ and $\psi (U)=V$ is minimal. Hence, by Theorem (\ref{Rep}), the
restriction of $\pi _{1}$ to the fixed point C*-algebra $A_{1}$ is the
direct sum of $\eta $ irreducible representations $\varphi _{1},\ldots
,\varphi _{\eta }$ of $A_{1}$ such that $\varphi _{i}$ and $\varphi _{j}$
are not unitarily equivalent for $i\not=j\in \left\{ 1,\ldots ,\eta \right\}
$, where $\eta $ is the cardinality of the spectrum of $V$. Moreover, since
$\pi _{i}=\pi _{1}\circ \sigma ^{i}$ it is immediate that $\pi _{i}$
restricted to $A_{1}$ equals $\pi _{1}$ restricted to $A_{1}$. This
concludes our proof.
\end{proof}
\begin{corollary}
Let $\Pi $ be an irreducible representation of $A\rtimes _{\sigma }\mathbb{Z}
_{n}$. The following are equivalent:
\begin{enumerate}
\item Up to unitary equivalence, $\Pi $ is an irreducible regular
representation of $A\rtimes _{\sigma }\mathbb{Z}_{n}$, i.e. it is induced by
a unique irreducible representation $\pi $ of $A$ and
\begin{equation*}
U_{\Pi }=\left[
\begin{array}{cccc}
0 & 1 & & \\
& 0 & \ddots & \\
& & \ddots & 1 \\
1 & & & 0
\end{array}
\right]
\end{equation*}
while $\pi _{A}=\oplus _{i=0}^{n-1}\pi \circ \sigma ^{i}$ and $\pi \circ
\sigma ^{i}$ is not equivalent to $\pi \circ \sigma ^{j}$ for $i,j=1,\ldots
,n-1$ with $i\not=j$,
\item There exists an irreducible subrepresentation $\pi $ of $\Pi _{|A}$
such that $\pi \circ \sigma ^{i}$ is not equivalent to $\pi $ for
$i=1,\ldots ,n-1$,
\item There exists a unique irreducible representation $\varphi $ of $A_{1}$
such that $\Pi _{|A_{1}}$ is equivalent to $n\cdot \varphi $,
\item There is no $k\in \left\{ 1,\ldots ,n-1\right\} $ such that the
C*-algebra generated by $\Pi (A)$ and $U_{\Pi }^{k}$ is reducible.
\end{enumerate}
\end{corollary}
\begin{proof}
It is a direct application of Theorem\ (\ref{CyclicConclusion}).
\end{proof}
We thus have concluded that all irreducible representations of crossed
products by finite cyclic groups have a structure which is a composite of
the two cases found in \cite{Latremoliere06}. Indeed, such representations
cycle through a collection of minimal representations, which all share the
same restriction to the fixed point algebra. The latter is a finite sum of
irreducible mutually disjoint representations of the fixed point algebra.
\begin{remark}
Let $\sigma $ be an order $n$ automorphism of a unital C*-algebra $A$ and
let $\Pi $ be an irreducible representation of $A\rtimes _{\sigma }\mathbb{Z}
$. We recall \cite{Zeller-Meier68} that $A\rtimes _{\sigma }\mathbb{Z}$ is
generated by $A$ and a unitary $U$ such that $UaU^{\ast }=\sigma (a)$ for
all $a\in A$ and is universal for these commutation relations. We denote
$\Pi (U)$ by $U_{\Pi }$ and $\Pi (a)$ by $\pi (a)$ for all $a\in A$. Now,
note that $U_{\Pi }^{n}$ commutes with $\pi $ since $\sigma ^{n}=\limfunc{Id}
_{A}$ and of course $U_{\Pi }^{n}$ commutes with $U_{\Pi }$ so, since $\Pi $
is irreducible, there exists $\lambda \in \mathbb{T}$ such that $U_{\Pi
}^{n}=\lambda $. Now, define $V_{\Pi }=\overline{\mu }U_{\Pi }$ for any $\mu
\in \mathbb{T}$ such that $\mu ^{n}=\lambda $. Then $V_{\Pi }^{n}=1$ and
thus $\left( \pi ,V_{\Pi }\right) $ is an irreducible representation of
$A\rtimes _{\sigma }\mathbb{Z}_{n}$ which is then fully described by Theorem
(\ref{CyclicConclusion}).
\end{remark}
In the last section of this paper, we give a necessary condition on
irreducible representations of crossed-products by the group $\mathfrak{S}
_{3}$ of permutations of $\left\{ 1,2,3\right\} $. This last example
illustrates some of the behavior which distinguishes the conclusion of Theorem
(\ref{FiniteGroupConclusion}) from the one of Theorem (\ref{CyclicConclusion
}).
\section{Application: Crossed-Products by the permutation group on $\left\{
1,2,3\right\} $}
As an application, we derive the structure of the irreducible
representations of crossed-products by the group $\mathfrak{S}_{3}$ of
permutations of $\left\{ 1,2,3\right\} $. This group is isomorphic to
$\mathbb{Z}_{3}\rtimes _{\gamma }\mathbb{Z}_{2}$ where $\gamma $ is defined
as follows:\ if $\eta $ and $\tau $ are the respective images of $1\in
\mathbb{Z}$ in the groups $\mathbb{Z}_{3}$ and $\mathbb{Z}_{2}$ then the
action $\gamma $ of $\mathbb{Z}_{2}$ on $\mathbb{Z}_{3}$ is given by $\gamma
_{\tau }(\eta )=\eta ^{2}$. Thus in $\mathbb{Z}_{3}\rtimes _{\gamma }\mathbb{
Z}_{2}$ we have $\tau \eta \tau =\eta ^{2}$, $\tau ^{2}=1$ and $\eta ^{3}=1$
(using the multiplicative notation for the group law). An isomorphism
between $\mathfrak{S}_{3}$ and $\mathbb{Z}_{3}\rtimes _{\gamma }\mathbb{Z}
_{2}$ is given by sending the transposition $\left( 1~2\right) $ to $\tau $
and the $3$-cycle $\left( 1~2~3\right) $ to $\eta $. From now on we shall
identify these two groups implicitly using this isomorphism.
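The relations underlying this identification can be checked directly on permutations of $\left\{ 1,2,3\right\} $; the following sketch (ours; products compose right-to-left as functions) verifies $\tau ^{2}=\eta ^{3}=e$, $\tau \eta \tau =\eta ^{2}$ and $\tau \eta ^{2}\tau =\eta $:

```python
# Permutations of {1, 2, 3} as dictionaries; compose(p, q, ...) applies the
# rightmost permutation first, matching the product p q ... of functions.
def compose(*perms):
    def f(x):
        for p in reversed(perms):
            x = p[x]
        return x
    return {x: f(x) for x in (1, 2, 3)}

e = {1: 1, 2: 2, 3: 3}
tau = {1: 2, 2: 1, 3: 3}    # the transposition (1 2)
eta = {1: 2, 2: 3, 3: 1}    # the 3-cycle (1 2 3)

assert compose(tau, tau) == e                        # tau^2 = e
assert compose(eta, eta, eta) == e                   # eta^3 = e
assert compose(tau, eta, tau) == compose(eta, eta)   # tau eta tau = eta^2
assert compose(tau, eta, eta, tau) == eta            # tau eta^2 tau = eta
```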
\begin{theorem}
\label{Permutation3}Let $\alpha $ be an action of $\mathfrak{S}_{3}$ on $A$.
Let $\Pi $ be an irreducible representation of $A\rtimes _{\alpha }\mathfrak{
S}_{3}$. We denote by $\tau $ and $\eta $ the permutations $\left(
1~2\right) $ and $\left( 1~2~3\right) $. The set $\left\{ \tau ,\eta
\right\} $ is a generating set of $\mathfrak{S}_{3}$. We denote by $U_{\tau }$
and $U_{\eta }$ the canonical unitaries in $A\rtimes _{\alpha }\mathfrak{S}
_{3}$ corresponding respectively to $\tau $ and $\eta $. Then either (up to
a unitary conjugation of $\Pi $):
\begin{itemize}
\item $\Pi $ is minimal, i.e. $\Pi _{|A}$ is irreducible,
\item There exists an irreducible representation $\pi _{1}$ on $\mathcal{H}
_{1}$ of $A$ such that $\mathcal{H}=\mathcal{H}_{1}\oplus \mathcal{H}_{1}$
with $\pi _{A}=\pi _{1}\oplus \pi _{1}\circ \alpha _{\tau }$. Then $\Pi
(U_{\tau })=\left[
\begin{array}{cc}
0 & 1 \\
1 & 0
\end{array}
\right] $ in this decomposition. \emph{Observe that }$\pi _{1}$ \emph{may or
may not be equivalent to} $\pi _{1}\circ \alpha _{\tau }$. Moreover, $\pi _{1}$
and $\pi _{1}\circ \alpha _{\tau }$ are minimal for the action of $\eta $.
\item There exists an irreducible representation $\pi _{1}$ on $\mathcal{H}
_{1}$ of $A$ such that $\pi _{1}$ and $\pi _{1}\circ \alpha _{\eta ^{i}}$
are non equivalent for $i=1,2$ and such that $\mathcal{H}=\mathcal{H}
_{1}\oplus \mathcal{H}_{1}\oplus \mathcal{H}_{1}$ with $\pi _{A}=\pi
_{1}\oplus \pi _{1}\circ \alpha _{\eta }\oplus \pi _{1}\circ \alpha _{\eta
^{2}}$. Then $\Pi (U_{\eta })=\left[
\begin{array}{ccc}
0 & 1 & 0 \\
0 & 0 & 1 \\
1 & 0 & 0
\end{array}
\right] $ in this decomposition.
\item Last, there exists an irreducible representation $\pi _{1}$ on
$\mathcal{H}_{1}$ of $A$ such that $\pi _{1}\circ \alpha _{\sigma }$ is not
equivalent to $\pi _{1}$ for $\sigma \in \mathfrak{S}_{3}\backslash \left\{
\limfunc{Id}\right\} $ and $\mathcal{H}=\mathcal{H}_{1}^{\oplus 6}$ with
\begin{equation*}
\pi _{A}=\pi _{1}\oplus \pi _{1}\circ \alpha _{\eta }\oplus \pi _{1}\circ
\alpha _{\eta ^{2}}\oplus \pi _{1}\circ \alpha _{\tau }\oplus \pi _{1}\circ
\alpha _{\eta \tau }\oplus \pi _{1}\circ \alpha _{\eta ^{2}\tau }
\end{equation*}
and
\begin{equation*}
\Pi \left( U_{\eta }\right) =\left[
\begin{array}{cccccc}
0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0
\end{array}
\right] \text{,}
\end{equation*}
while
\begin{equation*}
\Pi \left( U_{\tau }\right) =\left[
\begin{array}{cccccc}
0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0
\end{array}
\right] \text{.}
\end{equation*}
\end{itemize}
\end{theorem}
\begin{proof}
The C*-algebra $A\rtimes _{\alpha }\mathfrak{S}_{3}$ is generated by a copy
of $A$ and two unitaries $U_{\tau }$ and $U_{\eta }$ that satisfy $U_{\tau
}^{2}=U_{\eta }^{3}=1$, $U_{\tau }U_{\eta }U_{\tau }=U_{\eta }^{2}$ and for
all $a\in A$ we have $U_{\tau }aU_{\tau }^{\ast }=\alpha _{\tau }(a)$ and
$U_{\eta }aU_{\eta }^{\ast }=\alpha _{\eta }(a)$. Notice that $\mathfrak{S}
_{3}=\mathbb{Z}_{3}\rtimes _{\gamma }\mathbb{Z}_{2}$ with $\gamma _{\tau
}\left( \eta \right) =\tau \eta \tau $. So we have $A\rtimes _{\alpha }
\mathfrak{S}_{3}=\left( A\rtimes _{\alpha _{\eta }}\mathbb{Z}_{3}\right)
\rtimes _{\beta }\mathbb{Z}_{2}$ where $\beta :a\in A\mapsto \alpha _{\tau
}(a)$ and $\beta (U_{\eta })=U_{\tau \eta \tau }=U_{\eta }^{2}$. Since
$A\rtimes _{\alpha _{\eta }}\mathbb{Z}_{3}=A+AU_{\eta }+AU_{\eta }^{2}$, the
relation between $\beta $ and $\alpha _{\eta }$ is given by
\begin{equation*}
\beta \left( x_{1}+x_{2}U_{\eta }+x_{3}U_{\eta }^{2}\right) =\alpha _{\tau
}(x_{1})+\alpha _{\tau }(x_{3})U_{\eta }+\alpha _{\tau }(x_{2})U_{\eta }^{2}
\end{equation*}
for all $x_{1}$,$x_{2}$ and $x_{3}\in A$. We now proceed with a careful
analysis of $\beta $ and $\alpha _{\eta }$ to describe all irreducible
representations of $A\rtimes _{\alpha }\mathfrak{S}_{3}$.
Let $\Pi $ be an irreducible representation of $A\rtimes _{\alpha }\mathfrak{
S}_{3}$ on some Hilbert space $\mathcal{H}$. Thus $\Pi $ is an irreducible
representation of $\left[ A\rtimes _{\alpha _{\eta }}\mathbb{Z}_{3}\right]
\rtimes _{\beta }\mathbb{Z}_{2}$. We now have two cases: either $\Pi
_{|A\rtimes _{\alpha _{\eta }}\mathbb{Z}_{3}}$ is irreducible or it is
reducible.
\begin{description}
\item[Case 1: $\Pi _{|A\rtimes _{\protect\alpha _{\protect\eta }}\mathbb{Z}
_{3}}$ is irreducible.] Hence $\Pi $ is minimal for the action $\beta $ of
$\mathbb{Z}_{2}$. This case splits in two cases.
\begin{description}
\item[Case 1a: $\protect\pi _{A}$ is irreducible] Then $\Pi $ is minimal for
the action $\alpha $ of $\mathfrak{S}_{3}$ by definition.
\item[Case 1b: $\protect\pi _{A}$ is reducible] By Theorem (\ref
{CyclicConclusion}), there exists an irreducible representation $\pi _{1}$
of $A$ on some Hilbert space $\mathcal{H}_{1}$ such that $\pi _{1}$, $\pi
_{1}\circ \alpha _{\eta }$ and $\pi _{1}\circ \alpha _{\eta }^{2}$ are not
unitarily equivalent, $\mathcal{H}=\mathcal{H}_{1}\oplus \mathcal{H}
_{1}\oplus \mathcal{H}_{1}$ and
\begin{equation*}
\Pi (U_{\eta })=\left[
\begin{array}{ccc}
0 & 1 & 0 \\
0 & 0 & 1 \\
1 & 0 & 0
\end{array}
\right] \text{ and\ }\Pi \left( a\right) =\left[
\begin{array}{ccc}
\pi _{1}(a) & & \\
& \pi _{1}\circ \alpha _{\eta }(a) & \\
& & \pi _{1}\circ \alpha _{\eta ^{2}}(a)
\end{array}
\right] \text{.}
\end{equation*}
\end{description}
\item[Case 2: $\Pi _{|A\rtimes _{\protect\alpha _{\protect\eta }}\mathbb{Z}
_{3}}$ is reducible.] From Theorem (\ref{CyclicConclusion}), or alternatively \cite
{Latremoliere06}, there exists an irreducible representation $\pi _{1}$ of
$A\rtimes _{\alpha _{\eta }}\mathbb{Z}_{3}$ such that for all $z\in A\rtimes
_{\alpha _{\eta }}\mathbb{Z}_{3}$ we have
\begin{equation}
\Pi (z)=\left[
\begin{array}{cc}
\pi _{1}(z) & 0 \\
0 & \pi _{1}\circ \beta (z)
\end{array}
\right] \text{ and }\Pi \left( U_{\tau }\right) =\left[
\begin{array}{cc}
0 & 1 \\
1 & 0
\end{array}
\right] \label{Case2-1}
\end{equation}
where $\pi _{1}$ and $\pi _{1}\circ \beta $ are not unitarily equivalent.
This case splits again in two cases:
\begin{description}
\item[Case 2a:\ $\protect\pi _{1|A}$ is irreducible] Thus $\pi _{1}$ is a
minimal representation of $A\rtimes _{\alpha _{\eta }}\mathbb{Z}_{3}$. In
particular:
\begin{equation*}
\pi _{A}(a)=\left[
\begin{array}{cc}
\pi _{1}(a) & 0 \\
0 & \pi _{1}\circ \alpha _{\tau }(a)
\end{array}
\right]
\end{equation*}
and $\Pi \left( U_{\eta }\right) $ is a block-diagonal unitary in this
decomposition. However, we cannot conclude that $\pi _{1|A}$ and $\pi
_{1|A}\circ \alpha _{\tau }$ are equivalent or non-equivalent. Examples (\ref
{ExPermutation2})\ and (\ref{ExPermutation1})\ illustrate that both
possibilities occur.
\item[Case 2b: $\protect\pi _{1|A}$ is reducible] Then $\Pi _{|A\rtimes
_{\alpha _{\eta }}\mathbb{Z}_{3}}$ is described by Theorem (\ref
{CyclicConclusion}). Since $3$ is prime, only one possibility occurs: there
exists an irreducible representation $\pi $ of $A$ such that $\pi $, $\pi \circ
\alpha _{\eta }$ and $\pi \circ \alpha _{\eta }^{2}$ are pairwise not equivalent and
\begin{equation*}
\Pi (a)=\left[
\begin{array}{ccc}
\pi (a) & 0 & 0 \\
0 & \pi (\alpha _{\eta }(a)) & 0 \\
0 & 0 & \pi \left( \alpha _{\eta ^{2}}(a)\right)
\end{array}
\right]
\end{equation*}
and $\Pi \left( U_{\eta }\right) =\left[
\begin{array}{ccc}
0 & 1 & 0 \\
0 & 0 & 1 \\
1 & 0 & 0
\end{array}
\right] $. Note that
\begin{equation*}
\Pi \left( \beta (U_{\eta })\right) =\left[
\begin{array}{ccc}
0 & 0 & 1 \\
1 & 0 & 0 \\
0 & 1 & 0
\end{array}
\right] \text{.}
\end{equation*}
Together with (\ref{Case2-1}), we get that $\mathcal{H}$ splits into the
direct sum of six copies of the Hilbert space on which $\pi $ acts and
\begin{equation*}
\Pi (a)=\left[
\begin{array}{cccccc}
\pi (a) & & & & & \\
& \pi (\alpha _{\eta }(a)) & & & & \\
& & \pi (\alpha _{\eta ^{2}}(a)) & & & \\
& & & \pi \left( \alpha _{\tau }(a)\right) & & \\
& & & & \pi \left( \alpha _{\eta \tau }(a)\right) & \\
& & & & & \pi \left( \alpha _{\eta ^{2}\tau }(a)\right)
\end{array}
\right]
\end{equation*}
and
\begin{equation*}
\Pi (U_{\eta })=\left[
\begin{array}{cccccc}
0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0
\end{array}
\right]
\end{equation*}
while
\begin{equation*}
\Pi \left( U_{\tau }\right) =\left[
\begin{array}{cccccc}
0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0
\end{array}
\right] \text{.}
\end{equation*}
Thus $\Pi $ is regular induced by $\pi $, and therefore, as $\Pi $ is
irreducible, $\pi \circ \alpha _{\sigma }$ is not equivalent to $\pi $ for
any $\sigma \in \mathfrak{S}_{3}\backslash \{\limfunc{Id}\}$ by Theorem (\ref
{RegularIrred}).
\end{description}
\end{description}
This concludes our proof.
\end{proof}
\bigskip We show that all four possibilities above do occur in a nontrivial
manner. We use the generators $\tau $ and $\eta $ as defined in Theorem (\ref
{Permutation3}). Denote by $e$ the identity of $\left\{ 1,2,3\right\} $.
Notice that $\tau ^{2}=\eta ^{3}=e$ and $\tau \eta \tau =\eta ^{2}$ and
$\tau \eta ^{2}\tau =\eta $, while
\begin{equation*}
\mathfrak{S}_{3}=\left\{ e,\eta ,\eta ^{2},\tau ,\eta \tau ,\eta ^{2}\tau
\right\} \text{.}
\end{equation*}
In particular, $\left\{ 1,\eta ,\eta ^{2}\right\} $ is a normal subgroup of
$\mathfrak{S}_{3}$. Now, consider the universal C*-algebra of the free group
on three generators $A=C^{\ast }\left( \mathbb{F}_{3}\right) $ and denote by
$U_{1},U_{2}$ and $U_{3}$ its three canonical unitary generators. Then we
define the action $\alpha $ of $\mathfrak{S}_{3}$ on $A$ by setting $\alpha
_{\sigma }\left( U_{i}\right) =U_{\sigma (i)}$ for any $\sigma \in \mathfrak{
S}_{3}$. We now show that this simple example admits in a nontrivial way all
types of representations described in Theorem (\ref{Permutation3}).
\begin{example}
There exists a nontrivial irreducible representation $\pi :C^{\ast }\left(
\mathbb{F}_{3}\right) \rightarrow M_{2}\left( \mathbb{C}\right) $ such that
$\pi $ and $\pi \circ \alpha _{\tau }$ are unitarily equivalent, but $\pi $
and $\pi \circ \alpha _{\eta }$ are not. Indeed, set
\begin{equation*}
\pi \left( U_{1}\right) =
\begin{bmatrix}
0 & 1 \\
1 & 0
\end{bmatrix}
\qquad \pi \left( U_{2}\right) =
\begin{bmatrix}
0 & -1 \\
-1 & 0
\end{bmatrix}
\qquad \pi \left( U_{3}\right) =
\begin{bmatrix}
1 & 0 \\
0 & -1
\end{bmatrix}
.
\end{equation*}
We check easily that $\pi $ is an irreducible $\ast $-representation. Since
\begin{equation*}
\begin{bmatrix}
1 & 0 \\
0 & -1
\end{bmatrix}
\left[ \pi \circ \alpha _{\tau }\right]
\begin{bmatrix}
1 & 0 \\
0 & -1
\end{bmatrix}
=\pi ,
\end{equation*}
$\pi $ and $\pi \circ \alpha _{\tau }$ are unitarily equivalent. To see that
$\pi $ and $\pi \circ \alpha _{\eta }$ are not unitarily equivalent, notice
that $\pi \left( U_{1}U_{2}-U_{2}U_{1}\right) =0$ but that
\begin{equation*}
\pi \left( U_{2}U_{3}-U_{3}U_{2}\right) =
\begin{bmatrix}
0 & 2 \\
-2 & 0
\end{bmatrix}
\text{.}
\end{equation*}
\end{example}
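The matrix identities used in this example can also be confirmed numerically. The short sketch below (the helper names are ours; this is not part of the formal argument) verifies the conjugation by $\mathrm{diag}(1,-1)$ and the two commutators.

```python
# Illustrative numerical check of the example above, with 2x2 matrices as
# nested lists. alpha_tau swaps U1 and U2 and fixes U3; D = diag(1,-1).
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

U1 = [[0, 1], [1, 0]]
U2 = [[0, -1], [-1, 0]]
U3 = [[1, 0], [0, -1]]
D  = [[1, 0], [0, -1]]   # D = D* = D^{-1}

# D (pi o alpha_tau)(U_i) D = pi(U_i), checked on the generators.
assert mul(mul(D, U2), D) == U1
assert mul(mul(D, U1), D) == U2
assert mul(mul(D, U3), D) == U3

# pi(U1 U2 - U2 U1) = 0 while pi(U2 U3 - U3 U2) != 0, so pi and
# pi o alpha_eta cannot be unitarily equivalent.
assert sub(mul(U1, U2), mul(U2, U1)) == [[0, 0], [0, 0]]
assert sub(mul(U2, U3), mul(U3, U2)) == [[0, 2], [-2, 0]]
```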
\begin{example}
\label{Minimal}There exists a nontrivial irreducible representation $\pi
:C^{\ast }\left( \mathbb{F}_{3}\right) \rightarrow M_{2}\left( \mathbb{C}\right) $ such that $\pi $ and $\pi \circ \alpha _{\tau }$ are unitarily
equivalent and $\pi $ and $\pi \circ \alpha _{\eta }$ are also unitarily
equivalent. Let $\lambda =\exp \left( \frac{2\pi i}{3}\right) $. Define
\begin{equation*}
\pi \left( U_{1}\right) =\left[
\begin{array}{cc}
0 & \lambda \\
\lambda ^{2} & 0
\end{array}
\right] ,\ \ \ \pi \left( U_{2}\right) =\left[
\begin{array}{cc}
0 & \lambda ^{2} \\
\lambda & 0
\end{array}
\right] \text{ and }\pi \left( U_{3}\right) =\left[
\begin{array}{cc}
0 & 1 \\
1 & 0
\end{array}
\right] \text{.}
\end{equation*}
Let $V=\left[
\begin{array}{cc}
1 & 0 \\
0 & \lambda ^{2}
\end{array}
\right] $. We check that $V\pi \left( U_{i}\right) V^{\ast }=\pi \left(
U_{\left( i+1\right) \func{mod}3}\right) .$ Then let $W=\pi (U_{3})$. Then
$W\pi \left( U_{1}\right) W^{\ast }=\pi \left( U_{2}\right) $, $W\pi \left(
U_{2}\right) W^{\ast }=\pi \left( U_{1}\right) $, and $W\pi \left(
U_{3}\right) W^{\ast }=\pi \left( U_{3}\right) $. Thus $\pi $ is a minimal
representation of $C^{\ast }\left( \mathbb{F}_{3}\right) $ for the action
\alpha $ of $\mathfrak{S}_{3}$.
\end{example}
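As a numerical cross-check of this example (illustrative only; $\lambda$ is evaluated in floating point, so comparisons use a small tolerance), one can verify the conjugation relations for $V$ and for $W=\pi(U_{3})$:

```python
import cmath

# Illustrative check: V pi(U_i) V* cycles the generators, and W = pi(U_3)
# swaps pi(U_1) and pi(U_2). 2x2 complex matrices as nested lists.
lam = cmath.exp(2j * cmath.pi / 3)

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(2) for j in range(2))

U1 = [[0, lam], [lam**2, 0]]
U2 = [[0, lam**2], [lam, 0]]
U3 = [[0, 1], [1, 0]]
V = [[1, 0], [0, lam**2]]
Vstar = [[1, 0], [0, lam]]    # adjoint of V, since conj(lam^2) = lam

assert close(mul(mul(V, U1), Vstar), U2)
assert close(mul(mul(V, U2), Vstar), U3)
assert close(mul(mul(V, U3), Vstar), U1)

W = U3                         # W = W* here
assert close(mul(mul(W, U1), W), U2)
assert close(mul(mul(W, U2), W), U1)
assert close(mul(mul(W, U3), W), U3)
```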
\begin{example}
\label{ExPermutation2}There exists an irreducible representation $\pi
:C^{\ast }\left( \mathbb{F}_{3}\right) \rightarrow M_{3}\left( \mathbb{C}\right) $ such that $\pi $ and $\pi \circ \alpha _{\eta }$ are unitarily
equivalent, but $\pi $ and $\pi \circ \alpha _{\tau }$ are not: Let $\lambda
=\exp \left( \frac{2\pi i}{3}\right) $ and define unitaries $T$ and $V$ by
\begin{equation*}
T=
\begin{bmatrix}
0 & -\frac{4}{5} & -\frac{3}{5} \\
\frac{4}{5} & -\frac{9}{25} & \frac{12}{25} \\
\frac{3}{5} & \frac{12}{25} & -\frac{16}{25}
\end{bmatrix}
\text{ \ \ \ and}\ \ \ V=
\begin{bmatrix}
1 & 0 & 0 \\
0 & \lambda & 0 \\
0 & 0 & \lambda ^{2}
\end{bmatrix}
\text{.}
\end{equation*}
Define
\begin{equation*}
\pi \left( U_{1}\right) =VTV^{2}\qquad \pi \left( U_{2}\right)
=V^{2}TV\qquad \pi \left( U_{3}\right) =T\text{.}
\end{equation*}
It is clear that $\pi $ and $\pi \circ \alpha _{\eta }$ are unitarily
equivalent. We will show that $\pi $ and $\pi \circ \alpha _{\tau }$ are not
unitarily equivalent. Suppose on the contrary that they are. Then there
exists a unitary $W$ such that $W=W^{\ast }=W^{-1}$ and
\begin{equation*}
WTW=T,\qquad W\left( VTV^{2}\right) W=V^{2}TV\qquad W\left( V^{2}TV\right)
W=VTV^{2}.
\end{equation*}
From here we conclude that $VWV$ performs the same transformations, that is
\begin{eqnarray*}
\left( VWV\right) T\left( VWV\right) ^{\ast } &=&T, \\
\left( VWV\right) \left[ VTV^{2}\right] \left( VWV\right) ^{\ast }
&=&V^{2}TV, \\
\left( VWV\right) \left[ V^{2}TV\right] \left( VWV\right) ^{\ast }
&=&VTV^{2}.
\end{eqnarray*}
Indeed,
\begin{eqnarray*}
W\left( VTV^{2}\right) W &=&V^{2}TV\text{ so} \\
V\left[ W\left( VTV^{2}\right) W\right] &=&V\left[ V^{2}TV\right] =TV\text{.}
\end{eqnarray*}
Then we multiply both sides by $V^{2}$ from the right to get
\begin{equation*}
VWVTV^{2}WV^{2}=T \text{.}
\end{equation*}
Since
\begin{equation*}
\left( VWV\right) ^{\ast }=V^{\ast }W^{\ast }V^{\ast }=V^{2}WV^{2}\text{,}
\end{equation*}
we get the first equation. Similarly we get the other two.
Since $\pi $ is irreducible we conclude that there exists a constant $c$
such that
\begin{equation*}
VWV=cW.
\end{equation*}
$V$ has a precise form and when we compute $VWV-cW$ we conclude that this
equation has a nonzero solution iff $c=1$, $c=\lambda $, or $c=\lambda
^{2}$. Moreover, the solutions have the form
\begin{eqnarray*}
W &=&
\begin{bmatrix}
x & 0 & 0 \\
0 & 0 & y \\
0 & z & 0
\end{bmatrix}
\text{ if }c=1 \\
W &=&
\begin{bmatrix}
0 & x & 0 \\
y & 0 & 0 \\
0 & 0 & z
\end{bmatrix}
\text{ if }c=\lambda \\
W &=&
\begin{bmatrix}
0 & 0 & x \\
0 & y & 0 \\
z & 0 & 0
\end{bmatrix}
\text{ if }c=\lambda ^{2}
\end{eqnarray*}
for some $x,y,z\in \mathbb{C}$.
Now we easily check that $T$ does not commute with any of the three $W$'s.
For example
\begin{eqnarray*}
&&
\begin{bmatrix}
x & 0 & 0 \\
0 & 0 & y \\
0 & z & 0
\end{bmatrix}
\begin{bmatrix}
0 & -\frac{4}{5} & -\frac{3}{5} \\
\frac{4}{5} & -\frac{9}{25} & \frac{12}{25} \\
\frac{3}{5} & \frac{12}{25} & -\frac{16}{25}
\end{bmatrix}
-
\begin{bmatrix}
0 & -\frac{4}{5} & -\frac{3}{5} \\
\frac{4}{5} & -\frac{9}{25} & \frac{12}{25} \\
\frac{3}{5} & \frac{12}{25} & -\frac{16}{25}
\end{bmatrix}
\begin{bmatrix}
x & 0 & 0 \\
0 & 0 & y \\
0 & z & 0
\end{bmatrix}
\\
&=&
\begin{bmatrix}
0 & \frac{3}{5}z-\frac{4}{5}x & \frac{4}{5}y-\frac{3}{5}x \\
\frac{3}{5}y-\frac{4}{5}x & \frac{12}{25}y-\frac{12}{25}z & -\frac{7}{25}y
\\
\frac{4}{5}z-\frac{3}{5}x & \frac{7}{25}z & \frac{12}{25}z-\frac{12}{25}y
\end{bmatrix}
\text{.}
\end{eqnarray*}
This of course implies that $x=y=z=0$.
\end{example}
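The two computations in this example — that $T$ is unitary and that $WT=TW$ forces $x=y=z=0$ for the $c=1$ solution — can be checked in exact rational arithmetic. The sketch below is illustrative only (the naming is ours and the sample values of $x,y,z$ are arbitrary):

```python
from fractions import Fraction as F

# Check that T above is a real orthogonal (hence unitary) matrix, and that
# for the c = 1 solution W = [[x,0,0],[0,0,y],[0,z,0]] the commutator
# WT - TW vanishes only when x = y = z = 0.
T = [[F(0), F(-4, 5), F(-3, 5)],
     [F(4, 5), F(-9, 25), F(12, 25)],
     [F(3, 5), F(12, 25), F(-16, 25)]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

Tt = [[T[j][i] for j in range(3)] for i in range(3)]   # adjoint (T is real)
I3 = [[F(int(i == j)) for j in range(3)] for i in range(3)]
assert mul(T, Tt) == I3                                 # T is unitary

def commutator_entries(x, y, z):
    W = [[x, F(0), F(0)], [F(0), F(0), y], [F(0), z, F(0)]]
    WT, TW = mul(W, T), mul(T, W)
    return [[WT[i][j] - TW[i][j] for j in range(3)] for i in range(3)]

# Entry (2,3) of WT - TW is -7y/25, so WT = TW already forces y = 0, etc.
C = commutator_entries(F(1), F(2), F(3))
assert C[1][2] == F(-7, 25) * F(2)
assert commutator_entries(F(0), F(0), F(0)) == [[F(0)] * 3 for _ in range(3)]
```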
\begin{example}
\label{Torus1}This example acts on $A=C\left( \mathbb{T}^{3}\right) $.
Define for $f\in C(\mathbb{T}^{3})$ and $\left( z_{1},z_{2},z_{3}\right) \in
\mathbb{T}^{3}$:
\begin{equation*}
\alpha _{\eta }\left( f\right) \left( z_{1},z_{2},z_{3}\right) =f\left(
z_{2},z_{3},z_{1}\right)
\end{equation*}
and
\begin{equation*}
\alpha _{\tau }\left( f\right) \left( z_{1},z_{2},z_{3}\right) =f\left(
z_{2},z_{1},z_{3}\right)
\end{equation*}
on $C\left( \mathbb{T}^{3}\right) $. We can build a non trivial irreducible
representation $\pi :C\left( \mathbb{T}^{3}\right) \rightarrow
\mathbb{C}
$ such that $\pi $ and $\pi \circ \alpha _{\eta }$ are not unitarily
equivalent and $\pi $ and $\pi \circ \alpha _{\tau }$ are also not unitarily
equivalent. Let $x=\left( x_{1},x_{2},x_{3}\right) \in \mathbb{T}^{3}$ be
such that $x_{1}\neq x_{2},$ $x_{2}\neq x_{3},$ and $x_{3}\neq x_{1}$. Define
$\pi (f)=f(x)$. Then we obtain an irreducible representation of the required
type as the regular representation induced by $\pi $, using Theorem (\ref{RegularIrred}).
\end{example}
\bigskip Now, Theorem (\ref{FiniteGroupConclusion}) allowed for the
irreducible subrepresentations of $\Pi _{|A}$ to have multiplicity greater
than one, for irreducible representations $\Pi $ of $A\rtimes _{\alpha }G$.
This situation is however prohibited when $G$ is finite cyclic by Theorem
(\ref{CyclicConclusion}). We show that finite polycyclic groups such as
$\mathfrak{S}_{3}$ can provide examples where $\Pi _{|A}$ may not be
multiplicity free, thus showing again that Theorem (\ref{FiniteGroupConclusion}) cannot be strengthened to the conclusion of
Theorem (\ref{CyclicConclusion}).
\begin{example}
\label{ExPermutation1}We shall use the notations of Theorem (\ref{Permutation3}). There exists a unital C*-algebra $A$, an action $\alpha $
of $\mathfrak{S}_{3}$ on $A$ and an irreducible representation $\widetilde{\Pi }:A\rtimes _{\alpha }\mathfrak{S}_{3}\rightarrow B\left( \mathcal{H}\oplus \mathcal{H}\right) $ such that for all $x\in A$ we have
\begin{equation}
\widetilde{\Pi }\left( x\right) =
\begin{bmatrix}
\pi \left( x\right) & 0 \\
0 & \pi \left( \alpha _{\tau }(x)\right)
\end{bmatrix}
\label{ExampleMultiplicity2-1}
\end{equation}
for some irreducible representation $\pi :A\rightarrow B\left( \mathcal{H}\right) $ such that $\pi $ and $\pi \circ \alpha _{\tau }$ are equivalent.
Note that $\pi $ is thus minimal for the action of $\alpha _{\eta }$.
Indeed, let us start with any unital C*-algebra $A$ for which there exists
an action $\alpha $ of $\mathfrak{S}_{3}$ and an irreducible representation
$\Pi :A\rtimes _{\alpha }\mathfrak{S}_{3}\rightarrow B\left( \mathcal{H}\right) $ such that $\pi =\Pi _{|A}$ is also irreducible, i.e. $\Pi $ is
minimal. For instance, Example (\ref{Minimal})\ provides such a situation.
Let $V_{\eta }=\Pi \left( U_{\eta }\right) $ and $V_{\tau }=\Pi (U_{\tau })$. Then for all $x\in A$:
\begin{eqnarray*}
V_{\eta }\pi \left( x\right) V_{\eta }^{\ast } &=&\pi \left( \alpha _{\eta
}\left( x\right) \right) \text{,} \\
V_{\tau }\pi \left( x\right) V_{\tau }^{\ast } &=&\pi \left( \alpha _{\tau
}\left( x\right) \right) \text{,} \\
V_{\tau }^{2}=1\text{, }V_{\eta }^{3} &=&1\text{ and }V_{\tau
}V_{\eta }V_{\tau }=V_{\eta }^{2}\text{.}
\end{eqnarray*}
Let $\omega =\exp \left( \frac{2\pi i}{3}\right) $. For $x\in A$ define
$\widetilde{\Pi }\left( x\right) $ by (\ref{ExampleMultiplicity2-1}); let
$W_{\eta }=\widetilde{\Pi }\left( U_{\eta }\right) $ and $W_{\tau }=\widetilde{\Pi }\left( U_{\tau }\right) $ given by
\begin{equation*}
W_{\eta }=
\begin{bmatrix}
\omega V_{\eta } & 0 \\
0 & \omega ^{2}V_{\eta }^{2}
\end{bmatrix}
\text{ \qquad }W_{\tau }=
\begin{bmatrix}
0 & 1 \\
1 & 0
\end{bmatrix}
\text{.}
\end{equation*}
We easily check that:
\begin{equation*}
W_{\eta }\widetilde{\Pi }\left( x\right) W_{\eta }^{\ast }=\widetilde{\Pi }\left( \alpha _{\eta }\left( x\right) \right) \text{, }W_{\tau }\widetilde{\Pi }\left( x\right) W_{\tau }^{\ast }=\widetilde{\Pi }\left( \alpha _{\tau
}\left( x\right) \right) \text{,}
\end{equation*}
and:
\begin{equation*}
\left( W_{\eta }\right) ^{3}=1,\ \left( W_{\tau }\right) ^{2}=1\text{.}
\end{equation*}
Moreover,
\begin{eqnarray*}
W_{\tau }W_{\eta }W_{\tau } &=&
\begin{bmatrix}
0 & 1 \\
1 & 0
\end{bmatrix}
\begin{bmatrix}
\omega V_{\eta } & 0 \\
0 & \omega ^{2}V_{\eta }^{2}
\end{bmatrix}
\begin{bmatrix}
0 & 1 \\
1 & 0
\end{bmatrix}
\\
&=&
\begin{bmatrix}
\omega ^{2}V_{\eta }^{2} & 0 \\
0 & \omega V_{\eta }
\end{bmatrix}
=\left( W_{\eta }\right) ^{2}\text{,}
\end{eqnarray*}
because $\omega ^{4}=\omega $ and $V_{\eta }^{3}=1$.
We need to prove that $\widetilde{\Pi }:A\rtimes _{\alpha }\mathfrak{S}_{3}\rightarrow
B\left( \mathcal{H}\oplus \mathcal{H}\right) $ is irreducible. Let
\begin{equation*}
T=
\begin{bmatrix}
a & b \\
c & d
\end{bmatrix}
\end{equation*}
be in the commutant of $\widetilde{\Pi }\left( A\rtimes _{\alpha
}\mathfrak{S}_{3}\right) $. For every $x\in A$:
\begin{equation*}
T\left[
\begin{array}{cc}
\pi (x) & 0 \\
0 & \pi (\alpha _{\tau }(x))
\end{array}
\right] =\left[
\begin{array}{cc}
\pi (x) & 0 \\
0 & \pi \left( \alpha _{\tau }(x)\right)
\end{array}
\right] T\text{.}
\end{equation*}
Since $\pi $ is an irreducible representation of $A$ and $\pi \circ \alpha
_{\tau }=V_{\tau }\pi V_{\tau }$ by construction, we conclude by Lemma (\ref{Schur})\ that $a$ and $d$ are multiples of the identity, while $b$ and $c$
are multiples of $V_{\tau }$. Since $TW_{\tau }=W_{\tau }T$ we conclude that
$a=d$ and $b=c$. This means that
\begin{equation*}
T-aI=
\begin{bmatrix}
0 & bV_{\tau } \\
bV_{\tau } & 0
\end{bmatrix}
\end{equation*}
is in the commutant of $\widetilde{\Pi }\left( A\rtimes _{\alpha
}\mathfrak{S}_{3}\right) $. However, this element must commute with $W_{\eta }$. This
can only happen if $b=0$. This completes the proof.
\end{example}
\bigskip Thus, using Example (\ref{ExPermutation1}), there exists an
irreducible representation $\widetilde{\Pi }$ of $C^{\ast }(\mathbb{F}_{3})\rtimes _{\alpha }\mathfrak{S}_{3}$ such that $\widetilde{\Pi }_{|C^{\ast }(\mathbb{F}_{3})}$ is the sum of two equivalent irreducible
representations of $C^{\ast }(\mathbb{F}_{3})$, a situation which is
impossible for crossed-products by finite cyclic groups by Theorem (\ref{CyclicConclusion}).
\bigskip In general, repeated applications of Theorem (\ref{CyclicConclusion}) can lead to detailed descriptions of irreducible representations of
crossed-products of unital C*-algebras by finite polycyclic groups, based
upon the same method as we used in Theorem (\ref{Permutation3}). Of course,
in these situations Theorem (\ref{FiniteGroupConclusion})\ provides already
a detailed necessary condition on such representations, and much of the
structure can be read from this result.
\section{Introduction}
Numerous astrophysical observations strongly support the existence in our galaxy of a cold dark matter halo, which may consist of Weakly Interacting Massive Particles~(WIMPs)~\cite{Feng:2010gw,Bertone:2004pz}. The principal search mode of direct WIMP detection is the identification of an $\mathcal{O}$(keV) nuclear recoil produced by WIMP-nucleus elastic scattering. Since the speed of the Earth relative to the dark matter halo varies depending on the Earth's velocity with respect to the Sun, the dark matter detection rate is expected to demonstrate an annual modulation. This modulation is expected to be at a maximum~(minimum) on June~2~(Dec.~2) with an amplitude between a few percent and \unit[20]{\%}, assuming the standard halo model~\cite{Drukier:1986tm,Freese:1987wu,Frandsen:2011gi}. The CoGeNT~\cite{Aalseth:2011wp,Aalseth:2010vx}, DAMA/LIBRA~\cite{Bernabei:2010mq} and CRESST-II~\cite{Angloher:2011uu} collaborations have all reported an excess of events above all known backgrounds. The CoGeNT~\cite{Aalseth:2011wp} and DAMA/LIBRA~\cite{Bernabei:2010mq,Bernabei:2008yh} collaborations have also claimed evidence for annual modulations in their event rates at \unit[2.8]{$\sigma$} and \unit[8.9]{$\sigma$} respectively. Fits to the available data favor a light WIMP with mass \unit[$\sim$10]{GeV/c$^{2}$} and spin-independent cross-section \unit[10$^{-41}$--10$^{-39}$]{cm$^{2}$}~\cite{Hooper:2012ft,Kelso:2011gd,Kopp:2011yr}.\\
The null observations by CDMS-II~\cite{Ahmed:2010hw,Ahmed:2010wy,Ahmed:2012vq}, XENON100~\cite{Angle:2011th,Aprile:2011hi,Aprile:2012hi} and EDELWEISS~\cite{Armengaud:2012kd} exclude much of the allowed WIMP signal regions mentioned above~\cite{HerreroGarcia:2012fu}. The tension between these exclusion limits and the positive observations can be significantly reduced, but not removed, when taking into account experimental~\cite{Collar:2012ed,Collar:2011wq} and astrophysical uncertainties~\cite{Kopp:2011yr,Schwetz:2011xm,Farina:2011pw,Kelso:2011gd,Foot:2010rj,HerreroGarcia:2011aa,Frandsen:2011gi}. This tension has led to suggestions that the CoGeNT and DAMA/LIBRA modulations are due to conventional annual phenomena~\cite{Ralston:2010bd,Blum:2011jf}. The atmospheric muon rate and the radon level in the underground experimental hall modulate annually. Signals that can simulate dark matter interactions may be produced by ($\alpha$,n) reactions from radon decay in the active volume or by nuclear recoils from spallation neutrons originating from atmospheric muon interactions. The CoGeNT collaboration has stated that contamination from these backgrounds is small compared to the observed signal~\cite{Aalseth:2012if,Aalseth:2011wp}. The MINOS experiment monitors both of these quantities in an adjacent experimental hall to that of the CoGeNT experiment in the Soudan Underground Laboratory. In this paper we compare the modulations of the CoGeNT event rate data to that of the atmospheric muon rate and radon level data collected at the same time by the MINOS experiment.\\
The annual modulation of the muon flux deep underground has been observed by many different experiments~\cite{Ambrosio:1997tc,Bouchta:1999,Bellini:2012te,Solvi:2009,Desiati:2011,Adamson:2009zf}. The similarities of the amplitudes and phases of the modulations observed in the LVD muon~\cite{Solvi:2009} and DAMA/LIBRA data sets motivated the hypothesis that modulation in the latter may be muon-induced. It has been suggested that spallation neutrons or long-lived activated isotopes produced by these muons may be responsible for the DAMA/LIBRA modulation~\cite{Blum:2011jf,Ralston:2010bd}. This now seems unlikely as recent detailed comparisons of the DAMA/LIBRA modulation to that of the muon fluxes measured by LVD~\cite{Solvi:2009}, Borexino~\cite{Bellini:2012te} and MACRO~\cite{Ambrosio:1997tc}, all in the Gran Sasso National Laboratory~(LNGS), have shown that the phases of the two modulations differ significantly~\cite{Bernabei:2012wp,FernandezMartinez:2012wd,Chang:2011eb}. This conclusion does not preclude the possibility that the CoGeNT modulation, or a significant fraction thereof, is due to muon related processes.\\
The phase of the modulation of the muon flux can vary substantially depending on geographic location and calendar year since the flux is strongly correlated with the effective atmospheric temperature~\cite{Ambrosio:1997tc,Bouchta:1999,Bellini:2012te,Solvi:2009,Desiati:2011,Adamson:2009zf}. Therefore, to be able to reject with high confidence the muon hypothesis as the source of the CoGeNT modulation, the muon data must be collected concurrently with the CoGeNT data and in close proximity to the CoGeNT detector. The muon data collected by the MINOS experiment fulfill these criteria. Similarly to the DAMA/LIBRA muon studies~\cite{Bernabei:2012wp}, we compare the phase of the observed MINOS muon modulation to that of the CoGeNT data modulation. Comparisons of the CoGeNT data to non-concurrent MINOS muon data~\cite{Adamson:2009zf}, and indirectly to effective temperature variations, have been presented in Ref.~\cite{Chang:2011eb} and indicate that the data sets are not correlated.\\
We note that the 16.6\% amplitude of the CoGeNT event rate modulation~\cite{Aalseth:2011wp} is significantly larger than the $\sim$2\% amplitude of the MINOS muon rate modulation~\cite{Adamson:2009zf}. This difference suggests that the muon temporal variation cannot fully account for the observed CoGeNT modulation. In this paper we examine the relative phases of the two modulations which provides an independent test of the potential correlation between the CoGeNT and MINOS muon data sets.\\
The radon level in the Soudan Underground Laboratory is at a maximum~(minimum) in the summer~(winter) months due to the pressure gradients created by the relative temperature differences between the air in the laboratory and that on the surface~\cite{Goodman:1999}. In the MINOS cavern we have observed that the radon concentration varies by a factor of six over the year, corresponding to a modulation amplitude of $\sim$60\%. A large modulation amplitude could therefore be introduced into the CoGeNT data by even a small amount of contamination from this background.\\
The radon progeny also modulate with a one-year period $T$, but do so with a delayed phase and reduced amplitude. The decays between $^{222}$Rn and $^{210}$Pb occur very quickly~($\sim$minutes) and therefore have negligible impact on either the phase or the amplitude. Since $^{210}$Pb has a half-life of \unit[22]{years}, its decay and the decays of its progeny will not contribute to the modulation.\\
The following Section of the paper discusses the selected experimental data sets. In Section~\ref{sec:TheDifferentTests} we present the best fit modulation parameters determined for each of these data sets. We then describe the measurements of the phase differences between the CoGeNT and MINOS muon and radon data sets obtained from a simultaneous fit of the data to phase-shifted sinusoidal functions, a shape-free $\chi^{2}$ data-to-data comparison and a bin by bin correlation test. Section \ref{sec:Conclusion} summarizes our conclusions.\\
\section{The Selected Data}
\label{sec:TheData}
The CoGeNT dark matter experiment~\cite{Barbeau:2007qi,Aalseth:2012if} and the Far Detector of the MINOS long baseline neutrino experiment~\cite{Michael:2008bc} are located \unit[705]{m} underground in two different caverns of the Soudan Underground Laboratory. The MINOS cavern, which houses the MINOS detector, is \unit[82]{m} long, \unit[15]{m} wide and \unit[13]{m} high and is oriented along the direction of the NuMI neutrino beam~\cite{Crane:1995ky}. The CoGeNT and CDMS-II dark matter experiments are located in the Soudan 2 cavern which is similar in shape to the MINOS cavern but is \unit[70]{m} long and is oriented north-south. The two experimental caverns are connected by an east-west passage on their north side and are served by a common ventilation system which replaces the lab air several times per hour.\\
\subsection{The CoGeNT Data}
\label{sec:CoGeNTData}
CoGeNT is an experiment for direct detection of dark matter which employs a \unit[0.44]{kg} p-type point contact germanium detector~\cite{Aalseth:2008rx,Aalseth:2010vx,Aalseth:2011wp}. The CoGeNT collaboration has published its results using data collected over a period of 458 days between Dec.~4,~2009 and Mar.~6,~2011 with a total of 442 live days~\cite{Aalseth:2011wp}. The data were presented in fifteen 30-day intervals and one 8-day interval, then fit to a modulation hypothesis of the form:
\begin{equation}
R=R_{0}\left(1+A\cdot\textrm{cos}\left [ \frac{2\pi}{T}(t-t_{0}) \right ] \right ),
\label{eq:cosine}
\end{equation}
where $R_{0}$ is the mean rate, $A$ is the modulation amplitude and $T$ is the period. The time $t$ is the number of days since Jan.~1, 2010. The phase $t_{0}$ is the day at which the signal is at a maximum. The published CoGeNT best fit results are given in the last line of Table~\ref{tab:FitResults}. The modulation hypothesis is preferred over the null hypothesis at \unit[2.8]{$\sigma$}. The CoGeNT collaboration has released the background-subtracted data set used in this analysis to the public. The results of our $\chi^{2}$ fit of the CoGeNT data to Eq.~(\ref{eq:cosine}), discussed further in Sec.~\ref{sec:CosineTest}, are in good agreement with the published results~\cite{Aalseth:2011wp}.\\
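For a fixed one-year period, a fit of the form of Eq.~(\ref{eq:cosine}) reduces to linear least squares, since $R_{0}(1+A\cos[\omega(t-t_{0})])=p+q\cos\omega t+r\sin\omega t$ with $A=\sqrt{q^{2}+r^{2}}/p$ and $\omega t_{0}=\arctan(r/q)$. The sketch below illustrates this; the binned rates are synthetic placeholders, not the CoGeNT data, and the Cramer's-rule solver is a simplification.

```python
import math

# Sketch of fitting Eq. (1) with the period fixed to one year. Writing
# R(t) = p + q cos(wt) + r sin(wt) makes the problem linear; the amplitude
# A and phase t0 are then recovered from (p, q, r). Synthetic data only.
def fit_fixed_period(t, R, T=365.25):
    w = 2 * math.pi / T
    n = len(t)
    X = [[1.0, math.cos(w * ti), math.sin(w * ti)] for ti in t]
    # Normal equations A p = b, solved by Cramer's rule (fine for 3x3).
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(3)]
         for i in range(3)]
    b = [sum(X[k][i] * R[k] for k in range(n)) for i in range(3)]
    def det(M):
        return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
                - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
                + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
    D = det(A)
    sol = []
    for i in range(3):
        Ai = [row[:] for row in A]
        for k in range(3):
            Ai[k][i] = b[k]
        sol.append(det(Ai) / D)
    p, q, r = sol
    return p, math.hypot(q, r) / p, (math.atan2(r, q) / w) % T

# Synthetic 30-day bins generated with R0 = 100, A = 0.17, t0 = 110 days.
ts = [15 + 30 * i for i in range(16)]
Rs = [100 * (1 + 0.17 * math.cos(2 * math.pi / 365.25 * (t - 110))) for t in ts]
R0, A, t0 = fit_fixed_period(ts, Rs)   # recovers (100, 0.17, 110)
```

On exact synthetic input the fit returns the generating parameters; on real binned data the same linearization gives the fixed-period rows of Table~\ref{tab:FitResults}-style fits.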
\subsection{The MINOS Data}
The MINOS Far Detector has been collecting atmospheric muon data since August 2003~\cite{Michael:2008bc,Adamson:2007ww}. The experiment also records the radon level in the laboratory air. The MINOS muon and radon data used in this analysis were collected between June~4,~2009 and Sept.~6,~2011. This collection window is 12 months longer than the CoGeNT run period, from Dec.~4,~2009 to March~6,~2011, allowing the data-to-data comparisons described in Sections \ref{sec:ShapeFreeTest} and \ref{sec:CorrelationTest}.\\
The event selection and data quality requirements used in this analysis are identical to those in the previous study of seasonal muon intensity variation at the MINOS Far Detector, with the additional requirement that the reconstructed muon track be downward going. Restricting the data set to contain only days with greater than \unit[10,000]{s} of live time yields a total of 738~good days of atmospheric muon data. These good days include \unit[449]{days} which occurred between Dec.~4,~2009 and March~6,~2011 inclusive.\\
The radon level in the MINOS cavern air, inferred from counting the number of alpha decays, is measured every hour by a Model 1027 Sun Nuclear Corporation radon monitor~\cite{radon-monitor}. A daily measure of the radon level is determined by averaging the 24 measurements taken throughout the day. The standard deviation of these measurements, $\sigma$, is taken to be the error on the daily radon measurement. While larger than the standard error on the mean value, $\sigma/\sqrt{24}$, this choice is more consistent with the published accuracy of the radon monitor~\cite{radon-monitor}. There are 786 good days during which the radon monitor operated continuously throughout the day. These good days include \unit[458]{days} which occurred between Dec.~4,~2009 and March~6,~2011 inclusive. The radon monitor was moved to different locations in the Soudan Underground Laboratory and cross calibrated with other detectors running simultaneously. This demonstrated that the radon level does not vary spatially in the laboratory to within the resolution of the monitor. Thus the radon levels measured in the MINOS cavern can be used to evaluate whether the CoGeNT data are correlated with the radon level in the Soudan cavern.\\
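The daily radon summary described above amounts to taking the mean of the 24 hourly readings and quoting their standard deviation as the error bar. A minimal sketch with illustrative readings (we use the sample standard deviation; the exact normalization is an assumption on our part):

```python
import math

# Daily radon summary: mean of 24 hourly readings, with the day's error bar
# taken as their standard deviation sigma rather than sigma/sqrt(24).
def daily_radon(hourly):                 # 24 hourly readings in pCi/l
    n = len(hourly)
    mean = sum(hourly) / n
    sigma = math.sqrt(sum((x - mean) ** 2 for x in hourly) / (n - 1))
    return mean, sigma

# Illustrative readings: a flat 11 pCi/l level with a small diurnal wiggle.
readings = [11.0 + 0.5 * math.sin(2 * math.pi * h / 24) for h in range(24)]
level, err = daily_radon(readings)
```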
\begin{figure}[thb]
\begin{center}
\includegraphics[width=0.5\textwidth]{Muon-Radon-allCoGeNT_2009-12-04_to_2011-03-06_monthly-noshade-nofit-marker.eps}
\end{center}
\caption{The residuals of the MINOS Far Detector muon rates, radon levels and CoGeNT event rates as a function of time. The MINOS muon and radon data have been scaled by factors of 10 and one-half respectively to fit on the same graph and, for this figure, use the same binning as the CoGeNT results. The vertical dashed lines indicate the start of a new calendar year. The arrow marks the date where a dark matter signal is expected to peak.}
\label{fig:AllData}
\end{figure}
The MINOS muon rate and radon level residuals, and the CoGeNT event rate residuals, are plotted as a function of time in Fig.~\ref{fig:AllData}. The CoGeNT event rate residuals are calculated with respect to a mean rate of \unit[97.7]{events/30~days}. The MINOS muon rate residuals are calculated with respect to a mean rate of \unit[(0.4431 $\pm$ 0.0001)]{Hz}. The MINOS radon level residuals are calculated with respect to a mean level \unit[(11.94 $\pm$ 0.11)]{pCi/l}. All three data sets possess clear modulation signatures. In the following section we quantify any potential correlations between these modulations.\\
\section{Modulation Comparisons}
\label{sec:TheDifferentTests}
If the CoGeNT modulation is caused by either the muon or radon backgrounds then it should modulate with the same shape as those backgrounds. Therefore, if the phase of the CoGeNT modulation is significantly different than that of the MINOS muon or radon data we can infer that they are likely not causally related. \\
The most common approach in the literature to evaluating potential correlations, and discussed here in Sec.~\ref{sec:CosineTest}, is to fit the data to Eq.~(\ref{eq:cosine}) and compare the phases and periods of the best fits. The CoGeNT and DAMA/LIBRA modulations are a good fit to a cosine function. This is the expected signature for an isothermal dark matter halo. The true form of the modulation may be more complex as it is dependent on assumptions made regarding the velocity distribution of the dark matter particles in the halo~\cite{Freese:2012xd,Chang:2011eb}. The muon modulation is not fit well by a cosine function~\cite{FernandezMartinez:2012wd,Chang:2011eb}. The muon and radon modulations are correlated with atmospheric temperatures. Therefore, their modulations are cyclical but not necessarily sinusoidal. Imposing such constraints onto the data may bias the results of the cosine based fit comparison. We address this concern in Sections \ref{sec:ShapeFreeTest} and \ref{sec:CorrelationTest} by performing shape-free data-to-data comparisons that allow us to evaluate the phase differences and potential correlations regardless of the underlying functional forms of the modulations.\\
\begin{table*}[htb]
\begin{tabular}{ccccccc} \hline
Data & $\chi^{2}$/N.d.o.f. & Mean Rate & Amplitude & Period &Phase & Date of \\
& & [$R_{0}$] & [$A$,\%] & [$T$,days] & [$t_{0}$,days]& Maximum \\ \hline
\multicolumn{7}{c}{Best fit modulation parameters assuming a fixed period of \unit[365.25]{days}.}\\\hline
Muon &1909~/~(449-3) & \unit[(0.4428 $\pm$ 0.0001)]{Hz} & 1.25 $\pm$ 0.03 & 365.25 &182.8 $\pm$ 1.7& July 1 \\
Radon &176~/~(458-3) & \unit[(11.9 $\pm$ 0.1)]{pCi/l} & 57.7 $\pm$ 0.9 & 365.25 &215.0 $\pm$ 1.1& Aug. 3 \\
CoGeNT~(Our Fit) &6.6~/~(16-3) & \unit[(97.9 $\pm$ 3.6)]{counts/30~days} & 16.9 $\pm$ 5.4 & 365.25 &108.4 $\pm$ 16.9& Apr. 18\\ \hline
\multicolumn{7}{c}{Best fit modulation parameters without a fixed period assumption.}\\\hline
Muon &1788~/~(449-4) & \unit[(0.4431 $\pm$ 0.0001)]{Hz} & 1.37 $\pm$ 0.04 &317.2 $\pm$ 3.2 &187.3 $\pm$ 1.4& July 6 \\
Radon &176~/~(458-4) & \unit[(12.0 $\pm$ 0.1)]{pCi/l} & 57.7 $\pm$ 0.9 &367.4 $\pm$ 3.5 &215.2 $\pm$ 1.1& Aug. 3 \\
CoGeNT~(Our Fit) &6.4~/~(16-4) & \unit[(97.7 $\pm$ 3.6)]{counts/30~days} & 16.7 $\pm$ 5.4 &348 $\pm$ 42 &113.7 $\pm$ 17.9& Apr. 23 \\ \hline
\multicolumn{7}{c}{Published CoGeNT modulation parameters~\cite{Aalseth:2011wp}.}\\\hline
CoGeNT &7.8~/~(16-4) & N/A & 16.6 $\pm$ 3.8 &347 $\pm$ 29 &$\sim$115 $\pm$ 12 & Apr. 25\\ \hline
\end{tabular}
\caption{The best fit results produced by fitting the MINOS muon rate, radon level and CoGeNT event rate data to Eq.~(\ref{eq:cosine}). The fits reported in the first three rows of the table have been performed with the period fixed to \unit[1]{year}~(\unit[365.25]{days}). The last column gives the dates in 2010 at which the fits to the data are at a maximum. }
\label{tab:FitResults}
\end{table*}
\subsection{Cosine $\chi^{2}$ Test}
\label{sec:CosineTest}
The nominal modulation parameters for the CoGeNT and MINOS muon and radon data sets were determined by performing a $\chi^{2}$ fit test of Eq.~(\ref{eq:cosine}) to the data described in Sec.~\ref{sec:TheData} and shown in Fig.~\ref{fig:AllData}. The results of these fits are given in Table~\ref{tab:FitResults}. The confidence limit contours for the best fit phase and period are shown in Fig.~\ref{fig:PhasePeriodFits}. \\
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{ThreeConfidenceEllipses-floating.eps}
\end{center}
\caption{Confidence limit contours for the period and phase as determined by fitting the CoGeNT event rate and MINOS Far Detector muon rate and radon level data to Eq.~(\ref{eq:cosine}). The best fit values are given in Table \ref{tab:FitResults}. The horizontal and vertical black lines mark the expected period and phase for a dark matter signal. }
\label{fig:PhasePeriodFits}
\end{figure}
Our fit to the CoGeNT data is in good agreement with the published results~\cite{Aalseth:2011wp} and disfavors the null modulation hypothesis at \unit[3.1]{$\sigma$}. The significance with which we exclude the null modulation hypothesis is defined as the square root of the difference between the $\chi^{2}$ value of the best fit point and that of the null modulation hypothesis. This definition is different from that used in the published CoGeNT analysis and gives a slightly stronger exclusion. The small differences between our best fit values to the CoGeNT data and the published CoGeNT best fit values may be explained by the assumption in our fits that the CoGeNT errors are uncorrelated.\\
The two apparent occurrences of sudden stratospheric warming events~\cite{Osprey:2009ni} in early 2010 and early 2011, which temporarily increased the muon rate, drive the large $\chi^{2}$ for the muon fit and cause the best fit period to be significantly smaller than one year. If the complete MINOS muon data set, August 2003 to April 2012, is fit, minimizing the impact of short term fluctuations, a period much closer to one year is obtained, $T$=\unit[(364.5 $\pm$ 0.3)]{days}, and the phase remains unchanged. \\
The best fit phase differences $\delta t_{0}$ between the CoGeNT phase and the MINOS muon and radon phases are determined by minimizing:
\begin{footnotesize}
\begin{eqnarray}
\label{eq:LongFit}
\chi^{2}(\delta t_{0})&=&\sum_{i=1}^{N_{M}} \frac{(R_{ob,M,i}-R_{ex}(R_{0,M},A_{M},t_{0},T))^{2}}{\sigma_{M,i}^{2}}\\
&+&\sum_{i=1}^{N_{C}=16} \frac{(R_{ob,C,i}-R_{ex}(R_{0,C},A_{C},t_{0}+\delta t_{0},T))^{2}}{\sigma_{C,i}^{2}}.\nonumber
\end{eqnarray}
\end{footnotesize}
The first term in Eq.~(\ref{eq:LongFit}) is the $\chi^{2}$ contribution from the MINOS muon rate or radon level data, where $N_{M}$ is the number of live days concurrent with the CoGeNT data collection period. The second term is the contribution from the CoGeNT event rate data. $R_{ob,M,i}$~($R_{ob,C,i}$) is the $i^{th}$ observed MINOS~(CoGeNT) data point. $\sigma_{M,i}$ and $\sigma_{C,i}$ are the uncertainties on the MINOS and CoGeNT data points respectively. $R_{ex}$ is the expected value, as determined by Eq.~(\ref{eq:cosine}), assuming the given modulation parameters, and $\delta t_{0}$ is defined as the phase of the CoGeNT data minus the phase of the MINOS data. The $\chi^{2}$, as a function of this phase difference, is determined by minimizing the $\chi^{2}$ over the MINOS mean value $R_{0,M}$, the amplitude $A_{M}$ and phase $t_{0}$ and the CoGeNT mean value $R_{0,C}$, amplitude $A_{C}$ and, for some fits, a common period $T$.\\
Figure \ref{fig:DeltaPhaseFits} shows the $\Delta\chi^{2}$ curves, as a function of $\delta t_{0}$, for the simultaneous fits of the MINOS and CoGeNT data to Eq.~(\ref{eq:LongFit}) assuming a common period of one year. The best fit phase differences are \unit[(-75 $\pm$ 18)]{days} and \unit[(-110 $\pm$ 18)]{days} for the comparison to the muon and radon data respectively and \unit[(-67 $\pm$ 17)]{days} and \unit[(-112 $\pm$ 18)]{days} respectively when minimizing the $\chi^{2}$ over the period $T$. The statistical significance at which equivalent phases for the MINOS and CoGeNT data can be excluded is given by the square root of the $\Delta\chi^{2}$ difference between the best fit point and the value at $\delta t_{0}=0$. As can be seen from Fig.~\ref{fig:DeltaPhaseFits} the phases of the MINOS muon and radon data are inconsistent with the phase of the CoGeNT data at \unit[3.0]{$\sigma$} and \unit[3.1]{$\sigma$} respectively.\\
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.5\textwidth]{ChiSquaredFitBothLine-limit_fixedT-nolabel.eps}
\end{center}
\caption{The $\Delta\chi^{2}$ distributions comparing the phases of the MINOS muon rate and radon level data to the phase of the CoGeNT event rate data using Eq.~(\ref{eq:LongFit}). The $\Delta\chi^{2}$ curves are calculated with respect to their $\chi^{2}$ minima. The flattening of the $\Delta\chi^{2}$ curves indicate that these exclusions are limited by the confidence with which the CoGeNT data can exclude the null modulation hypothesis. }
\label{fig:DeltaPhaseFits}
\end{figure}
\subsection{Shape-Free $\chi^{2}$ Test}
\label{sec:ShapeFreeTest}
In this section we determine the relative phase $\delta t_{0}$ between the MINOS and CoGeNT data sets, without an {\it a priori} assumption regarding their shape, by calculating the $\chi^{2}$ difference between their respective modulations. The $\chi^{2}$ difference, assuming a common binning, is defined as:
\begin{equation}
\chi^{2}(\delta t_{0})= \sum_{i=1}^{N_{C}=16}\frac{(R_{C,i}-f\cdot R_{M,i}(\delta t_{0}))^{2}}{\sigma^{2}_{C,i}+\sigma^{2}_{M,i}}.
\label{eq:ShortFit}
\end{equation}
$R_{M,i}$~($R_{C,i}$) is the $i^{th}$~ MINOS~(CoGeNT) residual and $\sigma_{M,i}$ and $\sigma_{C,i}$ are the uncertainties on the MINOS and CoGeNT residuals respectively. We marginalize over the difference in amplitudes, for each $\delta t_{0}$, by minimizing the $\chi^{2}$ over a positive definite multiplicative factor $f$. If the data have similar underlying forms, we expect the $\chi^{2}$ to be a minimum when the phase difference between them is zero. The $\chi^{2}$ values, as a function of $\delta t$, are determined by shifting the time-axis of the MINOS data by $\delta t$ days and recalculating Eq.~(\ref{eq:ShortFit}). Figure~\ref{fig:ShortFitPlots} shows the $\Delta\chi^{2}$ curves as a function of the MINOS data offset, which is equivalent to the relative phase $\delta t_{0}$. The curves are not smooth due to statistical fluctuations in the data. By offsetting the MINOS data we vary the number of MINOS live days which overlap the CoGeNT data. To ensure that each subset of MINOS data, for every $\delta t$, contains the same number of live days we substitute the historical daily average of that date for those days which do not pass the live-time selection criteria. The best fit phase differences between the CoGeNT data and the MINOS muon~(radon) data, corresponding to the minimum of the $\Delta\chi^{2}$ curves in Fig.~\ref{fig:ShortFitPlots}, are \unit[$-83^{+25}_{-5}$]{days}~(\unit[$-123^{+18}_{-16}$]{days}). The statistical significance, as defined in Sec.~\ref{sec:CosineTest}, at which equivalent phases for the CoGeNT and MINOS muon~(radon) data are excluded is \unit[2.9]{$\sigma$}~(\unit[3.0]{$\sigma$}). \\
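A hypothetical sketch of the shape-free comparison of Eq.~(\ref{eq:ShortFit}): for each trial shift, the positive scale factor $f$ has a closed-form weighted least-squares solution, so only the shift needs to be scanned. The daily residuals and binning below are illustrative, not the actual data.

```python
import numpy as np

def shape_free_chi2(r_c, r_m, var):
    # Eq. (ShortFit) for one trial shift; the positive scale factor f
    # is minimized analytically (weighted least squares, clipped at 0).
    w = 1.0 / var
    denom = np.sum(w * r_m ** 2)
    f = max(np.sum(w * r_c * r_m) / denom, 0.0) if denom > 0 else 0.0
    return np.sum(w * (r_c - f * r_m) ** 2)

def scan_offsets(r_c, minos_daily, var, n_bins, offsets):
    # Shift the MINOS daily residuals, rebin to the common binning,
    # and record the chi^2 at each trial offset.
    chi2s = []
    for d in offsets:
        shifted = np.roll(minos_daily, d)
        r_m = shifted.reshape(n_bins, -1).mean(axis=1)
        chi2s.append(shape_free_chi2(r_c, r_m, var))
    return np.asarray(chi2s)
```

With residuals built by shifting a common underlying curve, the $\chi^{2}$ minimum falls at the true shift.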
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.5\textwidth]{ChiSquaredFitHypFree-Both.eps}
\end{center}
\caption{The $\Delta\chi^{2}$ distributions comparing the phases of the MINOS muon rate and radon level data to the phase of the CoGeNT event rate data using Eq.~(\ref{eq:ShortFit}). The $\Delta\chi^{2}$ curves are calculated with respect to their $\chi^{2}$ minima. }
\label{fig:ShortFitPlots}
\end{figure}
\subsection{Correlation Test}
\label{sec:CorrelationTest}
Residual muon or radon backgrounds in the CoGeNT data could cause a correlation between the CoGeNT modulation and the MINOS muon and radon modulation measurements. The degree of correlation has been evaluated using Pearson's coefficient of correlation, calculated as:
\begin{small}
\begin{equation}
\rho=\frac{1}{N_{C}-1}\sum_{i=1}^{N_{C}=16}\frac{(R_{ob,M,i}-\overline{R_{ob,M}})(R_{ob,C,i}-\overline{R_{ob,C}})}{\sigma_{M}\sigma_{C}},
\label{eq:correlation}
\end{equation}
\end{small}
where $N_{C}$ is the number of bins, and $\overline{R_{ob,M}}$ and $\overline{R_{ob,C}}$ are the average values of the MINOS and CoGeNT data sets. $\sigma_{M}$ and $\sigma_{C}$ are the standard deviations of the points comprising the MINOS and CoGeNT data sets respectively. The correlation coefficients, and their Fisher transforms~\cite{Fisher:1936et}, are given in Table~\ref{tab:Correlations}. \\
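The correlation coefficient of Eq.~(\ref{eq:correlation}) and its Fisher transform can be sketched as follows (illustrative code, not the analysis itself). Note that the standard large-sample error of the Fisher $z$, $1/\sqrt{N_C-3}\approx 0.277$ for $N_C=16$, is consistent with the $\pm$0.28 quoted in Table~\ref{tab:Correlations}.

```python
import numpy as np

def pearson(x, y):
    # Eq. (correlation), with sample standard deviations (ddof=1).
    n = len(x)
    dx, dy = x - x.mean(), y - y.mean()
    return np.sum(dx * dy) / ((n - 1) * x.std(ddof=1) * y.std(ddof=1))

def fisher_transform(rho, n):
    # Fisher z = arctanh(rho); its large-sample standard error
    # is 1 / sqrt(n - 3).
    return np.arctanh(rho), 1.0 / np.sqrt(n - 3)
```

The Pearson estimator above agrees with NumPy's built-in `corrcoef`, which can serve as a cross-check.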
\begin{table}[htb]
\begin{tabular}{ccc} \hline
Data Set & Correlation & Fisher\\
& Coefficient~($\rho$) & Transform\\ \hline
CoGeNT vs Muon Data & 0.19 & 0.19 $\pm$ 0.28 \\
CoGeNT vs Radon Data & -0.29 & -0.30 $\pm $0.28\\ \hline
\end{tabular}
\caption{The coefficients of correlation, and their Fisher transforms, calculated between the CoGeNT event rate data and the MINOS muon rate and radon level data. Both comparisons are consistent with no correlation at the $\sim$1$\sigma$ level. }
\label{tab:Correlations}
\end{table}
Even if there is no causal relationship between the observed MINOS muon and radon modulations and the CoGeNT modulation, there will be some correlation between these data sets as they all follow an approximate sinusoidal variation. The expected value of the correlation is related to their relative phases. For example, if the phase difference between two periodic data sets is smaller~(larger) than one-quarter of the period, the correlation should be positive~(negative). One can therefore infer from the results in Table~\ref{tab:Correlations} that the effective phase difference between CoGeNT and the MINOS muon data is near to but less than \unit[365.25/4]{days}, while between CoGeNT and the MINOS radon data it is near to but more than \unit[365.25/4]{days}. \\
To verify whether the calculated correlations are consistent with the observed modulation phases we generated a series of pseudo-experiments. Sampling from two cosine curves, with the precision and binning of the CoGeNT and MINOS data sets and amplitudes taken from Table~\ref{tab:FitResults}, we calculated the Fisher transform as a function of the phase difference between the two curves. We find that the observed values of the Fisher transforms in Table~\ref{tab:Correlations}, (0.19 $\pm$ 0.28) and (-0.30 $\pm$ 0.28), correspond to phase differences of \unit[-77$^{+31}_{-47}$]{days} and \unit[-117$^{+53}_{-37}$]{days} respectively. These values are consistent with the phase differences calculated in the preceding sections.\\
\section{Conclusion}
\label{sec:Conclusion}
We have performed a comparison of the modulation phases observed in the CoGeNT and MINOS atmospheric muon and radon data, all collected concurrently between Dec.~4, 2009 and March~6, 2011 in the Soudan Underground Laboratory. We have presented the results of a shape-free data-to-data comparison which indicate that the phases of the CoGeNT data and the atmospheric muon and radon data are different by \unit[$-83^{+25}_{-5}$]{days}~(\unit[2.9]{$\sigma$}) and \unit[$-123^{+18}_{-16}$]{days}~(\unit[3.0]{$\sigma$}) respectively. The calculated correlation coefficients between the CoGeNT and MINOS data sets are statistically consistent with the no-correlation hypothesis. The cosine fit test measures the phase difference between the CoGeNT and MINOS muon data sets to be \unit[(-75 $\pm$ 18)]{days}, inconsistent at \unit[3.0]{$\sigma$}, and between the CoGeNT and MINOS radon data sets to be \unit[(-110 $\pm$ 18)]{days}, inconsistent at \unit[3.1]{$\sigma$}. The similarity between the results of both these tests indicates that no significant bias is introduced when imposing a sinusoidal shape on the data. It is also clear that our exclusions are limited by the degree to which the CoGeNT data exclude the null modulation hypothesis. Based on the studies described above, it appears unlikely that muon or radon related processes contribute significantly to the observed CoGeNT modulation.\\
\section{Acknowledgments}
\label{sec:Acknowledgements}
This work was supported by the US DOE, the UK STFC, the US NSF, the State and University of Minnesota, the University of Athens, Greece and Brazil's FAPESP and CNPq. We are grateful to the Minnesota Department of Natural Resources, the crew of Soudan Underground Laboratory, and the staff of Fermilab for their contributions to this effort. We also thank Juan Collar and the CoGeNT collaboration for sharing their data thus facilitating this analysis.
While deep learning is often said to require large amounts of labeled data, the study of neural networks brought a major revolution in unsupervised and weakly supervised learning. For example, the advent of GANs ~\cite{goodfellow2014generative} led to an unprecedented ability to generate images, powerful unsupervised techniques now exist for mapping samples across multiple domains e.g.,\cite{zhu2017unpaired,choi2018stargan}, and unsupervised image representation methods learn powerful representations without labeled data e.g., \cite{oord2018representation,he2019momentum}.
The accuracy of supervised image localization methods has improved dramatically in the last decade \cite{ren2015faster, liu2016ssd, duan2019centernet}. However, these methods rely on bounding-box supervision, which is not always available. Weakly supervised methods have emerged as an alternative that relies only on image-level labels, i.e., the assignment of each image to one of multiple classes.
Most weakly supervised localization methods rely on the assumption that a trained image classifier $f$ relies on image regions that are within the foreground segment and try to analyze the behavior of the classifier to extract this information \cite{zhou2016learning, qin2019rethinking}. Breaking down this assumption, one can identify three challenges: first, the classifier can rely on context, i.e., on regions outside the object. Second, the classifier may rely on parts of the object and ignore much of it. Third, there is the problem of the explainability of the classifier, i.e., building a causal model that interprets its behavior is an open problem.
\begin{figure}[t]
\begin{center}
\begin{tabular}{@{}c@{~~}c@{~~}c@{~~}c@{~~}c@{}}
\multirow{0}{*}{\raisebox{-.755in}[0pt][0pt]{\includegraphics[scale=2]{teaser/im_3.png}}}&
\includegraphics[width=0.125\linewidth]{teaser/map3_0_0.856684094068809.png}&
\includegraphics[width=0.125\linewidth]{teaser/map3_1_0.8767517004305101.png}&
\includegraphics[width=0.125\linewidth]{teaser/map3_3_0.8794310499883268.png}&
\includegraphics[width=0.125\linewidth]{teaser/map3_4_0.8769929680466846.png}\\
&\includegraphics[width=0.125\linewidth]{teaser/map3_5_0.4949245123136565.png}&
\includegraphics[width=0.125\linewidth]{teaser/map3_6_0.37170113537999677.png}&
\includegraphics[width=0.125\linewidth]{teaser/map3_10_0.22157424066507725.png}&
\includegraphics[width=0.125\linewidth]{teaser/map3_20_0.13637012109666755.png}\\
&\includegraphics[width=0.125\linewidth]{teaser/map3_26_0.15866139089131512.png}&
\includegraphics[width=0.125\linewidth]{teaser/map3_35_0.11541742019900247.png}&
\includegraphics[width=0.125\linewidth]{teaser/map3_40_0.07990983137659693.png}&
\includegraphics[width=0.125\linewidth]{teaser/map3_49_0.08195319777443963.png} \\
\end{tabular}
\end{center}
\caption{Given an input image (left), our method trains a generative network to output weights such that a weighted image would be classified similarly to the input image. As training progresses (small panels), the generated weights tend to become more specific. An automatic stopping criterion is used to return the checkpoint in which the weight map provides good localization.}
\label{fig:teaser}
\end{figure}
For the first problem, one can attempt to rely on explainability methods that differentiate between positive and negative contributions ~\cite{nam2019relative, gur2021visualization}, or assume, as we do, that modern classifiers are less prone to such issues, especially when delineating between similar classes, which tend to appear in similar contexts.
The second challenge, which may lead to the identification of parts instead of the entire object, is a major issue in modern weakly supervised localization methods ~\cite{zhang2018adversarial, zhang2018self, xue2019danet}. In our case, we propose to solve it with an early stopping technique, since the generative method we propose becomes increasingly localized during the training process. Our method, in contrast to many weakly supervised approaches, employs a segmentation-like network $g$ that provides a pixel-wise weight map given the input image $I$. Since it is trained on all images, rather than solving one image at a time, it learns general patterns before it learns to identify specific image locations associated with a given class.
An example is shown in Fig.~\ref{fig:teaser}, demonstrating an image from the CUB-200-2011 bird dataset. The output of $g$ is shown for consecutive epochs. As can be seen, the outline of the bird is first identified. As training progresses, the network learns to output specific parts of the bird, which are most relevant to distinguishing it from other birds. It is also evident that the transition between these two modes can be found by considering the completeness of the obtained shape. Averaged over many images, this provides a robust stopping criterion.
\begin{figure*}[t]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=1\linewidth]{ARCH6.PNG}
\end{tabular}
\end{center}
\caption{The proposed method employs, similarly to other methods, an image-level pre-trained classifier $f$. It generates a per-pixel weight map for the input image $I$, using a learned network $g$. This network is optimized so that the pseudo-probabilities $f$ outputs on $I$ are close (in terms of the cross-entropy loss, CE) to the ones it outputs on the weighted image $I\cdot g(I)$. A regularization loss term, denoted by $R$, encourages $M=g(I)$ to be sparse.}
\label{fig:arch}
\end{figure*}
The third challenge we have pointed to was the challenge of extracting the relevant information from the classifier $f$. While many methods rely on the activations of the network and on its gradients e.g., ~\cite{sundararajan2017axiomatic,smilkov2017smoothgrad,selvaraju2017grad,bach2015pixel}, this is known to be an unreliable process~\cite{asano2019critical}. In fact, one can already point to the collinearity problem in linear regression as presenting ambiguity regarding the causal links between input and prediction~\cite{dormann2013collinearity}.
Our approach, instead of trying to analyze the behavior of the classifier $f$ given $I$ by considering the information flow within $f$, derives a training loss for network $g$ that only considers the output of $f$. Specifically, we require that the pseudo probabilities provided by $f$ given the input image be the same as the vector of pseudo probabilities $f$ outputs on the image $I\cdot g(I)$, which is a pixel-wise weighting of the image by the weight mask produced by $g$.
{In another form of our method, we replace the classifier with a Siamese network. Two types of triplet losses are then used to train the network. The first separates the foreground of the anchor image from its background, while keeping the representation of the entire image close to that of the foreground. The second keeps the latent representations of masked foreground images of the same class close, and distances the latent representations of foreground images from different classes.}
Our experiments, similarly to other recent contributions, focus on datasets for fine-grained classification, in which the problem of weakly supervised localization is more challenging, due to the need for $f$ to attend to well-localized differences between the classes. As our results show, using the label-difference criterion on a pretrained classifier or Siamese network, in order to train a segmentation network $g$, leads to results that are far superior to the state-of-the-art methods.
\section{Related Work}
Supervised deep learning algorithms often require a training set that relies on extensive human annotation. This is especially the case for segmentation \cite{long2015fully, zhao2018icnet} and object detection \cite{ren2015faster, liu2016ssd, zhou2019objects}.
In order to decrease the dependency on human annotation, weakly supervised approaches have been developed \cite{bilen2016weakly, jie2017deep}. In this paper, we focus on image-level annotation in order to predict the localization of the object bounding box.
Fine-grained recognition datasets have become extremely popular as benchmarks for weakly supervised object localization (WSOL). In fine-grained recognition, the annotation is more challenging, since it requires professional knowledge and domain experts, for instance, of Ornithology for the CUB-200-2011 dataset \cite{wah2011caltech}. Since fine-grained recognition methods require specialization in order to differentiate between similar classes, and since this specialization is often accompanied by the extraction of localized features~\cite{lin2015bilinear,yu2018hierarchical,gao2016compact,fu2017look,peng2017object, yang2018learning}, the information extracted from such classifiers is often less accessible when trying to segment the entire object.
Many algorithms were proposed for the task of WSOL. The Class Activation Map (CAM) explainability method \cite{zhou2016learning} and its variants \cite{qin2019rethinking} identify the salient pixels that lead to the classification. A multi-task loss function proposed by \cite{lu2020geometry} takes shape into consideration. Adversarial Complementary Learning (ACoL) \cite{zhang2018adversarial} employs two parallel classifiers where complementary regions are discovered via an adversarial erasing of feature maps. The divergent activation approach (DA) \cite{xue2019danet} aggregates and shares information from different spatial layers of the backbone. A similarity score that contrasts high-activation regions with other regions was proposed by \cite{zhang2020rethinking}.
A common improvement is to add a localization assistance branch to the CNN classifier \cite{zhang2018self, zhang2020inter}. While this creates clear separation between the supervised classification goal and the indirectly supervised localization goal, it only partly solves the challenge of focusing on the most discriminative regions at the expense of detecting the entire object. Other attempts to solve this problem mask random patches in the training set, thus forcing the network to rely less on well-localized cues \cite{singh2017hide, bazzani2016self, yun2019cutmix}. Similarly, an attention-based dropout layer encourages the network to also consider less discriminative parts of the image \cite{choe2019attention}.
Previous WSOL works have been criticized by \cite{choe2020evaluating} for selecting the best checkpoint and the hyperparameters by considering the test data. The same work also offers an evaluation metric for weakly supervised segmentation where, instead of bounding-box annotation, a foreground-background mask is given. Unlike conventional segmentation metrics, it considers multiple thresholds, since tuning the threshold is challenging in the weakly supervised setting. Very recently, \cite{choe2021region} presented a new method for weighting the feature maps. Evaluation is done following the stringent settings presented by \cite{choe2020evaluating}, and weakly-supervised segmentation results are presented on Oxford-flower102 \cite{nilsback2008automated}.
Our work mitigates the challenge imposed by the locality of discriminative features by employing early stopping. Our intuition is that the learned localization network becomes increasingly specific as training progresses. The usage of early stopping before the network is fully trained for a given task is reminiscent of the usage of early stopping in the Deep image prior technique of \cite{ulyanov2018deep}, where it is used to prevent a network that is trained to reconstruct the input image from overfitting on it, thus allowing it to express patterns that match the inductive bias of CNNs.
\begin{figure}[t]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.895\linewidth]{Loss_inner.PNG} \\ (a) \\
\includegraphics[width=0.895\linewidth]{Loss_outer.PNG} \\ (b) \\
\end{tabular}
\end{center}
\caption{Two triplet losses are used for training $g$ in Method II, where $f$ is a pretrained supervised Siamese network. (a) The inner loss brings together the image representation (yellow box) and the masked image representation (green box), while distancing the background representation (red box) from the original image representation. (b) The outer loss is a triplet loss that brings together the foreground representations of same-class images (anchor and positive samples), while distancing those of different classes (anchor and negative samples).
}
\label{fig:triplet}
\end{figure}
\section{Method}
This section introduces the proposed methods for weakly supervised fine-grained object localization. One method is based on employing a classifier $f$ that receives an image and outputs a pseudo probability vector. The other method employs a Siamese network that combines two copies of $f$ with shared weights. The two methods provide similar results, and throughout the paper we present sample outputs for the first method. See the supplementary material for visual results of the second method.
Both methods employ an encoder-decoder network $g$ that maps the input image $I \in R^{3 \times W \times H}$ to a pixel-wise weight mask $M\in R^{W \times H}$. Such networks are typically used in supervised segmentation \cite{long2015fully, ronneberger2015u} and object detection \cite{ren2015faster} tasks, where the ground truth map is given and the network weights are updated accordingly. Under the weakly supervised object localization (WSOL) settings, the ground truth is not given locally (per pixel). Instead, only a global label of the image is given.
\begin{figure*}[t]
\setlength{\tabcolsep}{4.5pt}
\renewcommand{\arraystretch}{2}
\begin{tabular}{ccccccc}
Epoch 1 & Epoch 4 & Epoch 7 & Epoch 10 & Epoch 13 & Epoch 22 & Epoch 31 \\
\vspace{-.4cm}
\includegraphics[width=0.12\linewidth]{early_stop/1/blend_1_0_0.806746945349176.png} &
\includegraphics[width=0.12\linewidth]{early_stop/1/blend_1_3_0.8543411023821713.png} &
\includegraphics[width=0.12\linewidth]{early_stop/1/blend_1_6_0.7698068365589409.png} &
\includegraphics[width=0.12\linewidth]{early_stop/1/blend_1_9_0.7751157732202053.png} &
\includegraphics[width=0.12\linewidth]{early_stop/1/blend_1_12_0.7738731951784706.png} &
\includegraphics[width=0.12\linewidth]{early_stop/1/blend_1_21_0.4021073003115254.png} &
\includegraphics[width=0.12\linewidth]{early_stop/1/blend_1_30_0.3705613377208558.png} \\
(IOU = 0.8067) & (IOU = 0.8543) & (IOU = 0.7698) & (IOU = 0.7751) & (IOU = 0.7738) & (IOU = 0.4021) & (IOU = 0.3705) \\
\vspace{-.4cm}
\includegraphics[width=0.12\linewidth]{early_stop/4/blend_4_0_0.8182634233982214.png} &
\includegraphics[width=0.12\linewidth]{early_stop/4/blend_4_3_0.9517664806160023.png} &
\includegraphics[width=0.12\linewidth]{early_stop/4/blend_4_6_0.9347398095466206.png} &
\includegraphics[width=0.12\linewidth]{early_stop/4/blend_4_9_0.8212831127871816.png} &
\includegraphics[width=0.12\linewidth]{early_stop/4/blend_4_12_0.43756438485798543.png} &
\includegraphics[width=0.12\linewidth]{early_stop/4/blend_4_21_0.43153999377644775.png} &
\includegraphics[width=0.12\linewidth]{early_stop/4/blend_4_30_0.3677100780317842.png} \\
(IOU = 0.8182) & (IOU = 0.9517) & (IOU = 0.9347) & (IOU = 0.8212) & (IOU = 0.4375) & (IOU = 0.4315) & (IOU = 0.3677) \\
\vspace{-.4cm}
\includegraphics[width=0.12\linewidth]{early_stop/8/blend_8_0_0.8884304332094165.png} &
\includegraphics[width=0.12\linewidth]{early_stop/8/blend_8_3_0.7983859724478312.png} &
\includegraphics[width=0.12\linewidth]{early_stop/8/blend_8_6_0.864787177579105.png} &
\includegraphics[width=0.12\linewidth]{early_stop/8/blend_8_9_0.8655949202269382.png} &
\includegraphics[width=0.12\linewidth]{early_stop/8/blend_8_12_0.8927172288602544.png} &
\includegraphics[width=0.12\linewidth]{early_stop/8/blend_8_21_0.583996566363886.png} &
\includegraphics[width=0.12\linewidth]{early_stop/8/blend_8_30_0.5225078837473154.png} \\
(IOU = 0.8884) & (IOU = 0.7983) & (IOU = 0.8647) & (IOU = 0.8655) & (IOU = 0.8927) & (IOU = 0.5839) & (IOU = 0.5225) \\
\end{tabular}
\caption{The weight map generated by network $g$ for various epochs on the CUB dataset. Also shown are the obtained bounding boxes. Our stopping criterion relies on some of the higher weights being left out of the obtained bounding boxes when $g$ starts to over-segment.}
\label{fig:EarlyStop}
\end{figure*}
\paragraph{Method I (classifier-based)}
To train network $g$ under WSOL settings, we offer a novel two-stage algorithm: (1) train a classifier $f: R^{3 \times W \times H} \to R^{d}$, with the global supervision, where $d$ is the number of categories in the dataset, and (2) freeze the classifier weights and employ a classification-invariance loss function for the training of $g$:
\begin{equation}
\label{eq:one}
\min_{ g } D(f(I),f(I \odot g(I))) + \lambda R(g(I)),
\end{equation}
where $D$ measures the discrepancy between the classifier outputs, $R$ is a regularizer that minimizes the size of the highlighted region produced by the weight mask $M=g(I)$, and $\lambda$ is a weight factor. The goal of this loss is to maintain the same classifier representation $f$ with and without masking by $g$, see Fig~\ref{fig:arch}. In our implementation, the cross-entropy loss is used for $D$, and $R$ is the L1 norm. For the sake of simplicity, $\lambda$ is set to one and no tuning of this value was performed.
Without the regularization term $R(g(I))$, the architecture would converge to a naive solution where the output mask is fixed at one. With the regularization term, the network is encouraged to provide more localized solutions, in which the non-relevant attributes are ignored.
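A minimal numerical sketch of the loss in Eq.~(\ref{eq:one}), assuming cross-entropy for $D$ and the L1 norm for $R$ as in our implementation; the logits and mask below are illustrative placeholders, not outputs of actual networks.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def wsol_loss(logits_full, logits_masked, mask, lam=1.0):
    # D(f(I), f(I * g(I))): cross-entropy between the classifier's
    # pseudo-probabilities on the full and the masked image ...
    p = softmax(logits_full)
    q = softmax(logits_masked)
    ce = -np.sum(p * np.log(q + 1e-12))
    # ... plus the sparsity regularizer R = mean |M|, weighted by lambda.
    return ce + lam * np.abs(mask).mean()
```

When the masked image leaves the classifier output unchanged, the loss reduces to the entropy of the prediction plus the sparsity penalty, so a denser mask at equal classifier output always costs more.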
\paragraph{Method II (Siamese network)}
The second method follows the architecture and algorithm of Method I. However, instead of training $f$ as a classifier, it trains $f$ as an embedding network that compares an anchor image $I_a$ to a same-class (``positive'') image $I_p$ and to a negative image $I_n$ from a different class. In this case, $f$ is pretrained using the following triplet loss:
\begin{equation}
\label{eq:triplet}
L = \max \big\{ \|f(I_a) - f(I_p) \| - \|f(I_a) - f(I_n) \| + 1 , 0 \big\}
\end{equation}
Once $f$ is trained, we train network $g$ to minimize two triplet loss terms:
\begin{equation}
\label{eq:triplet1}
L_\text{inner} = \max \big\{ \|f(I_a) - f(J^f_a) \| - \|f(I_a) - f(J^b_a) \| + 1 , 0 \big\} \end{equation}
\begin{equation}
\label{eq:triplet2}
L_\text{outer} = \max \big\{ \|f(J^f_{a}) - f(J^f_{p}) \| - \|f(J^f_{a}) - f(J^f_{n}) \| + 1 , 0 \big\}
\end{equation}
where $J^f_{x} = g(I_x) \odot I_x$ and $J^b_{x} = (1-g(I_x)) \odot I_x$ for $x\in\{a,p,n\}$. The first triplet loss $L_{\text{inner}}$ aims to make the representation of the entire image $f(I_a)$ similar to that of the foreground $f(J^f_a)$ and dissimilar to that of the background part of the image $f(J^b_a)$, see Fig~\ref{fig:triplet}(a). The second triplet loss $L_{\text{outer}}$ is illustrated in Fig~\ref{fig:triplet}(b). Its goal is to encourage the representations of masked (foreground) images of the same class to be similar, while distancing the representations of foreground images from different classes.
As before, a regularization term $\lambda R(M_a)$ is applied to $M_{a}$, to obtain a mask that highlights only a subset of the pixels in the image.
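The two triplet terms of Method II can be sketched numerically as follows; the embedding `f` passed in below (here a simple per-image mean, used only in the test) is a stand-in for the pretrained Siamese network, purely for illustration.

```python
import numpy as np

def triplet(a, p, n, margin=1.0):
    # max(||a - p|| - ||a - n|| + margin, 0): the hinge shared by
    # all three triplet losses in the paper.
    return max(np.linalg.norm(a - p) - np.linalg.norm(a - n) + margin, 0.0)

def method2_losses(f, imgs, masks):
    # imgs/masks: (anchor, positive, negative) triples; mask = g(image).
    (ia, ip, ineg), (ma, mp, mn) = imgs, masks
    fg = lambda img, m: f(img * m)          # foreground embedding J^f
    bg = lambda img, m: f(img * (1 - m))    # background embedding J^b
    # Inner loss: whole-image vs foreground (positive), background (negative).
    inner = triplet(f(ia), fg(ia, ma), bg(ia, ma))
    # Outer loss: anchor foreground vs same-class / other-class foregrounds.
    outer = triplet(fg(ia, ma), fg(ip, mp), fg(ineg, mn))
    return inner, outer
```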
\noindent{\bf Architecture\quad} The encoder of $g$ is a ResNet~\cite{he2016deep}, in which the receptive field is of size $32\times 32$, i.e., the image is downscaled five times. The ResNet contains four residual blocks, with 64, 128, 256, and 512 output channels (when the architecture of ResNet 18 or ResNet 34 is used) and 256, 512, 1024, and 2048 output channels in the case of ResNet 50 and ResNet 101. Pre-trained weights, obtained by training on ImageNet \cite{russakovsky2015imagenet}, are used at initialization.
The decoder of $g$ consists of five upsampling blocks, in order to obtain an output resolution that is identical to the original image size. Each block contains two convolutional layers with a kernel size of 3 and zero padding of one. In addition, we use batch normalization after the last convolution layer, before the activation function. The activation function of the first four layers is a ReLU, while that of the last layer is a sigmoid. Each layer receives a skip connection from the block of the encoder that has the same spatial resolution~\cite{ronneberger2015u}.
Both our methods use the same $f$ backbone in order to obtain the localization map. This backbone is a ResNet 18~\cite{he2016deep} with four blocks and a receptive field of size $32\times 32$. The ResNet output vector's dimension is 512, as obtained by a global average pooling operator on the last feature map. This backbone is initialized with ImageNet pre-trained parameters. In Method I, a single fully connected layer is used in order to obtain a prediction vector for classification. For the Siamese network, the 512-dim representation is projected linearly to a vector of 200 neurons that is used in the two triplet losses (Eq.~\ref{eq:triplet1},\ref{eq:triplet2}).
\noindent{\bf Setting the bounding box\quad} The output of $g$ is a map $M$ with values in the range 0 to 1, obtained by the sigmoid activation function. In order to derive a bounding box from this map, we follow the method of \cite{qin2019rethinking,choe2021region,choe2020evaluating}. First, a threshold $\tau$ is calculated as
\begin{equation}
\tau = \max(M)/10
\end{equation}
Next, we threshold all values lower than $\tau$, setting them to zero, and obtain a map $\hat M$.
We then apply the topological structural analysis method of \cite{suzuki1985topological} to the thresholded map, in order to discover the boundaries of the proposed object in the map. The method produces multiple proposals for the object contours, and we select the contour with the largest area. The bounding box is obtained by considering the bounding box of the selected contour.
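A simplified sketch of the bounding-box extraction: where the paper applies the contour analysis of \cite{suzuki1985topological} (as implemented, e.g., in OpenCV's findContours) and keeps the largest-area contour, this placeholder simply boxes all pixels surviving the threshold.

```python
import numpy as np

def bbox_from_map(M):
    # tau = max(M) / 10, then zero out all values below tau.
    tau = M.max() / 10.0
    M_hat = np.where(M >= tau, M, 0.0)
    # Simplification: box every surviving pixel. The paper instead
    # runs contour analysis and boxes the largest-area contour only.
    ys, xs = np.nonzero(M_hat)
    return (xs.min(), ys.min(), xs.max(), ys.max()), M_hat
```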
\noindent{\bf Early stopping\quad}The discriminative attributes that enable the classifier $f$ to distinguish between labels that belong to visually similar categories often rely on well-localized regions of the image. As a result, the loss that we offer can lead to an over-localization of the object in the image.
In order to overcome this, we rely on an empirical observation: the pixel weighting network $g$ becomes increasingly well-localized as training progresses. In other words, $g$ becomes increasingly specific in the highlighted regions as training progresses. This way, it is able to improve the regularization term without sacrificing the loss that measures the discrepancy of the labels.
This increase in localization is demonstrated in Fig~\ref{fig:EarlyStop}. After the first epoch, the prediction of $g$ provides a coarse mask of the object. After a few more epochs the map is much more refined and provides a good delineation of the relevant object. When training continues, the network tends to ignore non-discriminative pixels and the output becomes too partial to support segmentation of the entire object.
In order to make use of the ability of $g$ to capture the object at intermediate stages of training, while avoiding the pitfalls of using a well-trained $g$, we propose an early stopping procedure. Recall that the bounding box selection algorithm selects a single contour. If the map contains multiple contours, the bounding box would not contain all of them. We rely on this signal and select an epoch in which the bounding box still contains most of the weight of the mask $\hat M$.
Specifically, the epoch in which to stop is selected by considering the average score $S$ among the images in the validation set. For a single sample in the dataset, it is computed as:
\begin{equation}
S = \frac{\sum_{x=x_1}^{x_2} \sum_{y=y_1}^{y_2} \hat M_{xy}}{ \sum_{x=0}^{H} \sum_{y=0}^{W} \hat M_{xy} }\,,
\end{equation}
\noindent where $x_1$, $x_2$, $y_1$, $y_2$ are the bounding box coordinates for this image, and $H$, $W$ are the dimensions of the image. The epoch (checkpoint) selected has the maximal average $S$ score.
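The score $S$ reduces to a ratio of two sums over the thresholded map; a minimal sketch, assuming $\hat M$ is a 2D list and the box coordinates are inclusive (names are illustrative):

```python
# Sketch of the early-stopping score S: the fraction of the mask weight
# that falls inside the predicted bounding box (x1, y1, x2, y2), with
# inclusive coordinates.
def stop_score(hat_m, box):
    x1, y1, x2, y2 = box
    total = sum(sum(row) for row in hat_m)           # denominator: all weight
    inside = sum(hat_m[y][x]                         # numerator: box weight
                 for y in range(y1, y2 + 1)
                 for x in range(x1, x2 + 1))
    return inside / total
```

The checkpoint with the maximal average score over the validation images is the one selected.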
\begin{table}[t]
\centering
\begin{tabular}{lccc}
\toprule
Method & GT-known & Top1 & Top1\\
& loc[\%] & loc[\%] & cls[\%]\\
\midrule
CAM (Zhou, 2016) & 56.00 & 43.67 & 80.65\\
ACoL (Zhang, 2018) & 59.30 & 45.92 & 71.90\\
SPG (Zhang, 2018) & 58.90 & 48.90 & -\\
DANet (Xue, 2019) & 67.00 & 52.52 & 75.40\\
RCAM (Zhang, 2020) & 70.00 & 53.00 & -\\
ADL (Choe, 2019) & 75.40 & 53.04 & 80.34\\
I2C (Zhang, 2020) & 72.60 & 55.99 & 76.70\\
infoCAM+ (Qin, 2019) & 75.89 & 54.35 & 73.97\\
PsyNet (Baek, 2020) & 80.32 & 57.97 & 69.67 \\
RDAP (Choe, 2021) & 82.36 & 65.84 & 75.56\\
ART (Singh, 2020) & 82.65 & 65.22 & 77.51\\
Ours (method I) & 82.85 & 67.00 & 79.56\\
Ours (method II) & {\bf83.03}& {\bf 67.12} & 79.56\\
\bottomrule
\end{tabular}
\caption{Results on the CUB benchmark}
\label{tab:CUB object recognition}
\medskip
\centering
\begin{tabular}{lccc}
\toprule
Method & GT-known & Top1 & Top1\\
& loc[\%] & loc[\%] & cls[\%]\\
\midrule
CAM (Zhou, 2016) & 65.2 & 56.8 & 88.9\\
HaS (Singh, 2017) & 87.4 & 76.6 & 87.6\\
ADL (Choe, 2019) & 82.8 & 73.8 & 88.9\\
RDAP (Choe, 2021) & 92.9 & 84.1 & 89.7\\
Ours (method I) & {\bf 96.1} & {\bf84.9} & 87.9\\
Ours (method II) & 95.1 & 83.7 & 87.9\\
\bottomrule
\end{tabular}
\caption{Results for the Stanford cars benchmark.}
\label{tab:CARS}
\medskip
\centering
\begin{tabular}{@{}l@{~}cc@{}}
\toprule
Method & GT-known-loc[\%] & Top1-loc[\%]\\
\midrule
CAM (Zhou, 2016) & 54.56 & 40.55\\
infoCAM (Qin, 2019) & 57.79 & 43.34\\
infoCAM+ (Qin, 2019) & 57.71 & 43.07\\
Ours (method I) & 60.21 & 43.80\\
Ours (method II) & {\bf60.41} & {\bf44.00}\\
\bottomrule
\end{tabular}
\caption{Results for Tiny-imagenet. In all methods, the classifier is a Resnet50.}
\label{tab:tiny object recognition}
\end{table}
\begin{table}[t]
\begin{minipage}{.2223485\textwidth}%
\begin{tabular}{@{}l@{~~}c@{}}
\toprule
Method & PxAP\\
\midrule
CAM \cite{zhou2016learning} & 62.57 \\
ART \cite{singh2020attributional} & 75.45 \\
Ours (method I) & 76.30 \\
Ours (method II) & {\bf 76.70} \\
\bottomrule
\end{tabular}
\caption{Results for CUB \cite{wah2011caltech} segmentation. The PxAP score aggregates the average precision over multiple thresholds.}
\label{tab:cub_seg}
\end{minipage}\hfill%
\begin{minipage}{.22485\textwidth}%
\begin{tabular}{@{}l@{~~}c@{}}
\toprule
Method & PxAP\\
\midrule
CAM \cite{zhou2016learning} & 69.0 \\
HaS \cite{singh2017hide} & 63.1 \\
ADL\cite{choe2019attention} & 69.8 \\
RDAP \cite{choe2021region} & 71.4 \\
Ours (method I) & {\bf 75.6}\\
Ours (method II) & 75.2\\
\bottomrule
\end{tabular}
\caption{Results for oxford flowers segmentation.}
\label{tab:flowers}
\end{minipage}
\end{table}
\section{Experiments}
We use the exact same benchmark settings as the baseline methods, with the provided train/test split and evaluation protocol. The datasets supply object localization ground-truth annotation only for the validation and test sets. The supervision signal provided during training is the global class-level annotation of the object. No part-level annotation is used, and we do not compare with methods that rely on such annotation.
\noindent{\bf Benchmarks\quad} We evaluate our algorithm on the three most popular publicly available fine-grained datasets: (1) CUB-200-2011 \cite{wah2011caltech} (2) Stanford Cars \cite{krause20133d} (3) Oxford-flowers \cite{nilsback2008automated}. In addition, we test our proposed algorithm on the generic image classification dataset Tiny-ImageNet \cite{le2015tiny}.
CUB-200-2011 \cite{wah2011caltech} contains 200 birds species, with 11,788 images divided into 5994 training images and 5794 test images. The images were taken at different resolutions and aspect ratios where the object is not centralized. The variations in the dataset, such as pose, viewpoint and illumination increase the complexity of the task.
Stanford Cars \cite{krause20133d} has 196 categories of cars, which differ in at least one of three attributes: manufacturer, model, and year. There are 8144 samples in the training set and 8041 samples in the test set.
Oxford-102 \cite{nilsback2008automated} contains 102 categories of flowers, with 1020 training images and 6149 test images. The dataset supplies mask annotations (not just bounding boxes), which enables the utilization of segmentation metrics.
Tiny-ImageNet is a small version of ImageNet, with a reduced number of classes, instances per class, and image resolution ($64\times64$). It consists of 200 classes, with 500 training images and 50 validation images per class. Unlike CUB-200-2011 and other fine-grained classification datasets, Tiny-ImageNet contains various object types. Training classifiers on Tiny-ImageNet is faster than training on ImageNet, due to the smaller number of samples and their reduced dimensions. However, for the same reasons, obtaining high classification accuracy is also challenging.
\begin{figure*}[t]
\setlength{\tabcolsep}{4.5pt}
\renewcommand{\arraystretch}{2}
\begin{tabular}{cccccccc}
\includegraphics[width=0.105\linewidth]{CUB/localization/0_0.1_0.8959264671523877.png} &
\includegraphics[width=0.105\linewidth]{CUB/localization/2_0.1_0.8941326051684734.png} &
\includegraphics[width=0.105\linewidth]{CUB/localization/20_0.1_0.8933113381462445.png} &
\includegraphics[width=0.105\linewidth]{CUB/localization/23_0.1_0.9829766523565433.png} &
\includegraphics[width=0.105\linewidth]{CUB/localization/3_0.1_0.8762654447903818.png} &
\includegraphics[width=0.105\linewidth]{CUB/localization/33_0.1_0.9576165652179168.png} &
\includegraphics[width=0.105\linewidth]{CUB/localization/37_0.1_0.9208287748482922.png} &
\includegraphics[width=0.105\linewidth]{CUB/localization/38_0.1_0.9201422064468663.png} \\
\includegraphics[width=0.105\linewidth]{tiny/im_106_0.1_0.7935442244934218.png} &
\includegraphics[width=0.105\linewidth]{tiny/im_13_0.1_0.8658227848101265.png} &
\includegraphics[width=0.105\linewidth]{tiny/im_20_0.1_0.9681372549019608.png} &
\includegraphics[width=0.105\linewidth]{tiny/im_22_0.1_0.8485519104997409.png} &
\includegraphics[width=0.105\linewidth]{tiny/im_25_0.1_0.9414111194215906.png} &
\includegraphics[width=0.105\linewidth]{tiny/im_32_0.1_0.880773875518809.png} &
\includegraphics[width=0.105\linewidth]{tiny/im_67_0.1_0.6703468430986544.png} &
\includegraphics[width=0.105\linewidth]{tiny/im_80_0.1_0.7696022727272728.png} \\
\includegraphics[width=0.105\linewidth]{CARS/24_0.1_0.9371126373352264.png} &
\includegraphics[width=0.105\linewidth]{CARS/29_0.1_0.8714685514515081.png} &
\includegraphics[width=0.105\linewidth]{CARS/3_0.1_0.9218277489045805.png} &
\includegraphics[width=0.105\linewidth]{CARS/32_0.1_0.83788818773195.png} &
\includegraphics[width=0.105\linewidth]{CARS/36_0.1_0.898989983852904.png} &
\includegraphics[width=0.105\linewidth]{CARS/48_0.1_0.8971367020604962.png} &
\includegraphics[width=0.105\linewidth]{CARS/54_0.1_0.9131027655972265.png} &
\includegraphics[width=0.105\linewidth]{CARS/61_0.1_0.9083101487866391.png} \\
\end{tabular}
\caption{Samples results from CUB, CAR, and Tiny ImageNet. The green bounding boxes represent the ground truth and the red ones represent our predicted bounding box. For brevity, all visual results in this paper are obtained with method I. See supplementary for the (quite similar) results of method II. }
\label{fig:samples results}
\end{figure*}
\begin{figure}[t]
\setlength{\tabcolsep}{4.5pt}
\renewcommand{\arraystretch}{2}
\begin{tabular}{ccc}
\includegraphics[width=0.285\linewidth]{flowers/image195.png} &
\includegraphics[width=0.285\linewidth]{flowers/gt195.png} &
\includegraphics[width=0.285\linewidth]{flowers/out195.png} \\
\includegraphics[width=0.285\linewidth]{flowers/image213.png} &
\includegraphics[width=0.285\linewidth]{flowers/gt213.png} &
\includegraphics[width=0.285\linewidth]{flowers/out213.png} \\
(a) & (b) & (c)\\
\end{tabular}
\caption{Oxford Flowers 102 results where (a) is the input image (b) is the ground-truth (c) output map $M$.}
\label{fig:flowers}
\end{figure}
\begin{figure}[t]
\centering
\begin{tabular}{cccc}
\includegraphics[width=0.21\linewidth]{CUB_SEG/im_4_0.1_0.7016199546485261.png} &
\includegraphics[width=0.21\linewidth]{CUB_SEG/gt4.png} &
\includegraphics[width=0.21\linewidth]{CUB_SEG/mask_HEAT_TEST_1_4.jpg} &
\includegraphics[width=0.21\linewidth]{CUB_SEG/out4.png} \\
\includegraphics[width=0.21\linewidth]{CUB_SEG/im_15_0.1_0.8817170415071955.png} &
\includegraphics[width=0.21\linewidth]{CUB_SEG/gt15.png} &
\includegraphics[width=0.21\linewidth]{CUB_SEG/mask_HEAT_TEST_1_15.jpg} &
\includegraphics[width=0.21\linewidth]{CUB_SEG/out15.png} \\
\end{tabular}
\caption{CUB weakly supervised segmentation, where the first column is the input image, the second column is the ground-truth mask, the third column is the output map of \cite{singh2020attributional}, and the last column is our output map $M$.}
\label{fig:cub_seg}
\end{figure}
On all datasets, except for Oxford-102, we compute two accuracy scores. For both scores, an intersection over union above 0.5 indicates correct localization. In the GT-known-loc score, the bounding box is evaluated even if the classifier was mistaken. In the Top1-loc score, we only consider the result correct if the classifier has also predicted the correct class for the image. While we claim no contribution to the classifier, we present this score as well, since it is prevalent in the literature. Moreover, we note that we outperform other methods on this score, even though the top-1 classification accuracy of our method, reported as Top1-cls, is not as high as that of other methods.
For the CUB and Oxford datasets, which contain foreground-background masks, we also compute a segmentation metric. Specifically, we use the pixel average precision (PxAP) metric proposed by \cite{choe2020evaluating}. It relies on the following definitions:
\begin{equation}
PxPrec(\sigma) = \frac{ \mid \big\{ {M_{ij}^{(n)} \geq \sigma} \big\} \cap \big\{ T_{ij}^{(n)} = 1\big\}\mid }{\mid \big\{ M_{ij}^{(n)} \geq \sigma \big\}\mid }
\end{equation}
\begin{equation}
PxRec(\sigma) = \frac{ \mid \big\{ {M_{ij}^{(n)} \geq \sigma} \big\} \cap \big\{ T_{ij}^{(n)} = 1\big\}\mid }{\mid \big\{ T_{ij}^{(n)} = 1\big\}\mid }
\end{equation}
\noindent where $i,j$ are the indices of the pixels, $M$ is the weighting map, and $T$ is the ground-truth mask.
PxAP is the area under the precision-recall curve
\begin{equation}
\nonumber
PxAP : = \sum_l PxPrec(\sigma_l)(PxRec(\sigma_l) - PxRec(\sigma_{l-1}))
\end{equation}
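A straightforward sketch of the metric for a single image, sweeping a fixed grid of thresholds over flattened score and ground-truth vectors; the official implementation of \cite{choe2020evaluating} differs in details such as threshold selection, so this is an illustrative approximation:

```python
# Sketch of the PxAP metric for one image: scores and gt are flat lists
# of pixels (scores in [0, 1], gt in {0, 1}); thresholds sweep from high
# to low so recall is non-decreasing. Assumes at least one positive pixel.
def px_ap(scores, gt, num_thresholds=100):
    ap, prev_rec = 0.0, 0.0
    n_pos = sum(gt)
    for l in range(num_thresholds, -1, -1):     # high threshold -> low
        sigma = l / num_thresholds
        sel = [s >= sigma for s in scores]      # pixels predicted foreground
        n_sel = sum(sel)
        if n_sel == 0:
            continue
        tp = sum(1 for s, t in zip(sel, gt) if s and t)
        prec, rec = tp / n_sel, tp / n_pos
        ap += prec * (rec - prev_rec)           # rectangle under PR curve
        prev_rec = rec
    return ap
```

A mask that scores all foreground pixels above all background pixels attains the maximal value of 1.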
\noindent{\bf Implementation details\quad} While the method has many hyperparameters, listed below, all of these are fairly standard and no attempt was made to tune them in any way. The test data was not used in any way to determine the hyperparameters or the early stopping epoch.
In our method, the input image is first resized to 256$\times$256, and then a crop is applied to reduce it to size 224$\times$224. During training, cropping is applied at a random image location, while during validation and test, we take the crop from the center of the image.
The classifier $f$ is based on a ResNet18 backbone for the fine-grained datasets, and on ResNet50 for Tiny-ImageNet. The optimizer is SGD with a batch size of 16 and an initial learning rate of 0.001 for 200 epochs; the weight decay is 1e-4. The scheduler decreases the learning rate by a factor of 10 every 80 epochs. For augmentation, the training procedure employs a resize to a fixed size followed by a random crop, as well as a random horizontal flip. During inference, the algorithm employs resize and central crop. The classifier head consists of a single fully-connected layer $\mathbb R^{k} \to \mathbb R^{d}$, where $d$ is the number of classes and $k$ is the latent space size.
Network $g$ for the fine-grained recognition datasets is trained with the SGD optimizer, a batch size of 128, and an initial learning rate of 0.0001 for 100 epochs. The Tiny-ImageNet model is trained with the same SGD optimizer, a batch size of 128, a weight decay of 5e-5, and an initial learning rate of 0.0001, for 1000 epochs.
During the phase of training both $f$ and $g$, random horizontal flip with 0.5 probability is applied as augmentation. Training of all models takes place on a single GeForce RTX 2080Ti Nvidia GPU. This attests to the efficiency of our methods but prevents us from running on much larger generic classification datasets.
\noindent{\bf Results\quad} Tab.~\ref{tab:CUB object recognition} compares our results to the existing methods for CUB-200-2011. The accuracy of method I is 82.85\% for ground-truth known localization and 67\% for top-1 localization accuracy, which outperforms all other methods. Method II is slightly better on this dataset. Note that the top-1 localization accuracy is high even though our classifier is standard and not the leading one (we make no claims regarding the performance of the classifier).
On Stanford Cars \cite{krause20133d}, we obtain the baseline results from \cite{choe2021region}, who ran all methods under a unified protocol. The results presented in Tab.~\ref{tab:CARS} indicate that both our methods outperform all baselines in GT-known localization, by a margin larger than the margins between previous works. There is an advantage to method I. In the Top1-loc score, which also incorporates the classifier accuracy, the lower accuracy of our standard classifier reduces our advantage over the best baseline method (RDAP). Method I is still best; however, method II is outperformed by this method.
Tab.~\ref{tab:tiny object recognition} summarises the results for the Tiny-imagenet dataset. All baseline methods employ the same ResNet50 classifier. Both our methods outperform the baseline by a margin that is larger than the difference between the baseline methods, with method II showing a slight advantage.
The output of our method, in comparison to the ground truth, is presented in Fig~\ref{fig:samples results}. As can be seen, our method can match the ground truth bounding box well and is not overly focused on small discriminative regions.
The results that evaluate the weight map obtained from $g$ as a segmentation map are presented in Tab.~\ref{tab:flowers} and Tab.~\ref{tab:cub_seg} for the Oxford flowers and CUB datasets, respectively. Evidently, the PxAP score for both our methods is considerably higher than for the other methods. On Oxford flowers, the first of our methods shows an advantage, while on CUB method II is preferred over method I.
\begin{figure}
\centering
\begin{tabular}{cccc}
\includegraphics[width=0.17\linewidth]{no_regular/im_75_0.1_0.3554914318840579.png} &
\includegraphics[width=0.17\linewidth]{no_regular/im_85_0.1_0.36899310344827585.png} &
\includegraphics[width=0.17\linewidth]{no_regular/im_92_0.1_0.5615783683160495.png} &
\includegraphics[width=0.17\linewidth]{no_regular/im_97_0.1_0.5975357021842356.png} \\
\includegraphics[width=0.17\linewidth]{no_regular/map75_0.1_0.3554914318840579.png} &
\includegraphics[width=0.17\linewidth]{no_regular/map85_0.1_0.36899310344827585.png} &
\includegraphics[width=0.17\linewidth]{no_regular/map92_0.1_0.5615783683160495.png} &
\includegraphics[width=0.17\linewidth]{no_regular/map97_0.1_0.5975357021842356.png} \\
\includegraphics[width=0.17\linewidth]{no_regular/map75_0.1_0.8326498947220338.png} &
\includegraphics[width=0.17\linewidth]{no_regular/map85_0.1_0.753268092956289.png} &
\includegraphics[width=0.17\linewidth]{no_regular/map92_0.1_0.6960404589156807.png} &
\includegraphics[width=0.17\linewidth]{no_regular/map97_0.1_0.9010079968909285.png} \\
\end{tabular}
\caption{An ablation experiment. Top: input image, middle: the results obtained without the regularization term in Eq.~\ref{eq:one}, bottom: the results of our method.}
\label{fig:no_reg}
\end{figure}
Sample results on the Oxford-flowers102 dataset are presented in Fig.~\ref{fig:flowers}. As can be seen, the output of $g$ matches the ground-truth masks well. Fig.~\ref{fig:cub_seg} presents the output masks of our algorithm and of ART~\cite{singh2020attributional} for samples from the CUB-200-2011 dataset. As can be seen, our output is considerably more uniform than that of ART even though the numerical evaluation indicates only a modest gap in performance for this specific dataset.
As a first ablation experiment, we remove the regularization term in Eq.~\ref{eq:one}. Without this term, a uniform $g$ that outputs 1 everywhere would reach a zero loss. In practice, as can be seen in Fig.~\ref{fig:no_reg}, the network converges to another low-loss solution that covers most of the image.
As for method II, Tab.~\ref{tab:cub_ablation} summarizes the impact of each term in the loss function on the CUB WSOL dataset. Both triplet losses impact performance and are essential in order to obtain optimal results. The importance of the regularization is also evident in this ablation.
\begin{table}
\centering
\begin{tabular}{cccc}
\toprule
$R(g(I))$ & $L_{outer}$ & $L_{inner}$ & GT-known-loc[\%]\\
\midrule
- & $\surd$ & $\surd$ & 62.13 \\
$\surd$ & - & $\surd$ & 81.64 \\
$\surd$ & $\surd$ & - & 79.92 \\
$\surd$ & $\surd$ & $\surd$ & 83.03 \\
\bottomrule
\end{tabular}
\caption{Ablation results for method II on CUB \cite{wah2011caltech} WSOL.}
\label{tab:cub_ablation}
\end{table}
\section{Discussion}
Many of the recent WSOL methods improve the underlying classifier $f$ in order to better match the needs of localization tasks. For example, \cite{singh2017hide} covers the discriminative parts during the training of the classifier in order to have it rely on additional regions. For a similar reason, \cite{zhang2018adversarial} train two classifiers, where one covers the feature maps of the other. \cite{zhang2020rethinking} improves the representation of the classifier by employing quantization. In contrast, our method employs a standard pre-trained classifier and obtains state-of-the-art results out-of-the-box.
Furthermore, we note that $f$ does not need to be a classifier. We plan, as future work, to use self-supervised representations such as SwAV \cite{caron2020unsupervised} as well as text-image matching neural networks such as CLIP~\cite{radford2021learning}.
\section{Conclusions}
We present a new WSOL method that relies on a novel and elegant training loss, producing leading segmentation results and, with minimal post-processing, state-of-the-art localization. Unlike methods that rely on explainability, the classifier $f$ serves as a black box, and only a few assumptions are made regarding its nature.
Over-segmentation is prevented by applying an early stopping criterion that does not require any pixel-level labels, and without modifying the training of $f$ in any way. The method is simple to implement and optimize and our code is attached as supplementary.
\section*{Acknowledgment}
This project has received funding from the European Research Council (ERC) under the European
Union's Horizon 2020 research and innovation programme (grant ERC CoG 725974).
{\small
\bibliographystyle{ieee_fullname}
\bibliographystyle{elsarticle-harv}
\section{Introduction}
In the Brazilian market, on each ex-dividend date, listed equity options have their strike reduced
by the respective dividend value.
Merton has proved in his paper \cite[p. 152]{merton1973rationaloptionpricing}
that options under this adjustment are not dividend protected, therefore it is necessary
to incorporate these discrete dividends to price these options.
To account for discrete dividends, many authors have made great contributions in the realm of
approximations and numerical solutions. In the book \cite{haug2007complete},
Haug compares many approximations and adjustments from other authors and gives an excellent
alternative for the case of multiple dividends, also presented in \cite{haug2003back}.
In an outstanding work, \cite{THAKOOR20181} advocate for the use of Fejér quadrature to
handle the integration step at each dividend payment date, reporting that such an implementation
yields fast and accurate results.
This work presents a new alternative to solve the multiple dividends problem in
the case of listed Brazilian options.
Section \ref{sec:dynamics} lays the theoretical foundation,
which consists of a way to model the impact of dividends on the price of Brazilian
call and put options. Brazilian listed options comprise calls with American or European
exercise and puts with European exercise; this section also proves that both call types can be treated as having European
exercise for pricing purposes, and gives a pricing strategy
based on the Laplace transform and its inverse.
Section \ref{sec:numerical_procedure} delivers the main result of this paper,
formulating a numerical procedure to price listed Brazilian options, explaining how to implement
the fast Laplace transform (FLT) as a function that calls the already known fast Fourier transform,
and driving through the discrete approximation details to find the option's premium and sensitivities
to the underlying and to time. Section \ref{sec:results} shows that the FLT pricer delivers
the option's premium and Greeks with high accuracy and performance when compared with the
benchmark implementation from \cite{THAKOOR20181}.
The idea of building a fast Laplace transform via the fast Fourier transform
presented in section \ref{sec:numerical_procedure} was inspired
by a great online video from \cite{yt_steven_brunton_laplace} where he
explains how to derive the Laplace transform as an extension of the Fourier transform.
More on the subject of using the spectral derivatives to solve the heat equation can be found in
the book \cite{brunton2022data}.
\section{Option dynamics}
\label{sec:dynamics}
The dividend dynamic lowers the stock price by the present value of the dividend at the ex-dividend date.
If the owner sells a stock before the ex-dividend date, the future dividend payment goes to the stock buyer;
on the other hand, if the stock is sold after this date, the dividend goes instead
to the person who owned the stock on the ex-dividend date, even if the payment is set to occur on a later date.
Hence, the new buyer will expect to pay a lower price, because the right to receive that
dividend payment stays with the seller.
For options, this drop in value has the effect of reducing the value of call options and raising the value of put options.
This is the result of a direct impact on the intrinsic value of these options, namely:
\begin{align*}
\text{call} = \max\{S_t - K, 0\},\; \text{and put} = \max\{K - S_t , 0\}.
\end{align*}
To protect option holders from this price change, the Brazilian stock exchange adjusts the option's strike
price by subtracting the dividend value from the strike on the same ex-dividend date, which keeps the intrinsic value unchanged for both
option types. As Merton has shown in his paper \cite[p. 152]{merton1973rationaloptionpricing}, although the
intrinsic value is preserved, the option price still depends on the value and ex-date of the dividend.
This section will cover the theoretical background and main theorems used to model the option's dynamics
and price over time.
\subsection{Theoretical background}
Let $f(t)$ be right continuous with left limits,
with $f(t_-) = \lim_{u \uparrow t}f(u)$
and $\mathcal J f_t = f(t) - f(t_-)$. Note that if $f$ is
continuous in an open interval $(a,b)$, then $\mathcal J f = 0$ there.
With these definitions, the stochastic differential equation (SDE)
for a dividend-paying stock can be expressed as:
\begin{subequations}
\label{eq:s_process_with_dividend}
\begin{align}
dS_t &= S_t r_t \mathrm dt + S_t\sigma_t \mathrm dW_t + \mathcal J S_t, \\
\mathcal J S_t &=
\left\lbrace \begin{array}{rl}
-D_i, & \text{if } t = t_i \text{, and} \\
0, & \text{otherwise.}
\end{array}\right.
\end{align}
\end{subequations}
Where $ 0 < t_1 < \dots < t_n < T$, with $\{t_1, \dots, t_n\}$ being ex-dividend dates
and $\{D_1, \dots, D_n\}$ being the value of each dividend paid, respectively. To simplify
the notation, this work assumes that each payment occurs on its ex-date; this
assumption can be weakened by taking $D_i$ to be the value of all parts of that dividend,
paid on dates $t^\star_l$ in amounts $D^\star_l$, discounted to the ex-date $t_i$:
\begin{align} \label{eq:div_present_value}
D_i = \sum_l D^\star_l \exp\left(-\int_{t_i}^{t^\star_l}r_u\; \mathrm du\right).
\end{align}
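Under a constant rate $r$, the discount factor in equation \eqref{eq:div_present_value} reduces to $e^{-r(t^\star_l - t_i)}$. A small numerical sketch (the function name and the tuple format are illustrative):

```python
# Sketch of the ex-date value D_i of a dividend paid in several parts
# D*_l on later dates t*_l, discounted at a constant rate r.
import math

def dividend_ex_date_value(parts, t_i, r):
    """parts: list of (t_star, amount) pairs with t_star >= t_i."""
    return sum(amount * math.exp(-r * (t_star - t_i))
               for t_star, amount in parts)
```

For example, a single payment of 100 one year after its ex-date, at $r = 10\%$, is worth $100\,e^{-0.1}$ on the ex-date.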
It is also useful to define a money account where the dividend paid at each time $t_i$ is invested.
The value of this account at any time can be expressed as:
\begin{subequations}
\label{eq:cash_past_dividends}
\begin{align}
I_t &= \sum_{i=1}^n D_i \exp \left( {\int_{t_i}^t r_u\; \mathrm du}\right) 1_{[t_i, \infty)}(t), \\
\mathrm dI_t &= r_t I_t \mathrm dt + \mathcal J I_t, \\
\mathcal J I_t &= \left\lbrace \begin{array}{rl}
D_i, & \text{if } t = t_i \text{, and} \\
0, & \text{otherwise.}
\end{array}\right.
\end{align}
\end{subequations}
\begin{thm} \label{thm:black_scholes_pde}
Between ex-dividend dates, $t \in [t_k, t_{k+1})$, a security $V_t = V(S_t, t)$ satisfies the following
partial differential equation (PDE):
\begin{align}
\label{eq:black_scholes_pde}
\frac{\partial V_t}{\partial t}
+ \frac{(\sigma_t S_t)^2}2 \frac{\partial^2 V_t}{\partial S_t^2}
+ S_t r_t \frac{\partial V_t}{\partial S_t} - r_tV_t = 0,
\end{align}
which is the standard PDE from \cite[p. 643, eq. 7]{scholes1973pricing} for a
non-dividend-paying stock.
\end{thm}
The derivatives in equation \eqref{eq:black_scholes_pde} are also referred to as Greeks,
namely $\Delta$, $\Gamma$, and $\Theta$, such that:
\begin{align}
\Delta = \frac{\partial V_t}{\partial S_t}, \;
\Gamma=\frac{\partial^2 V_t}{\partial S_t^2},\; \text{ and }
\Theta = \frac{\partial V_t}{\partial t}.
\end{align}
\begin{coro} \label{coro:option_martingale}
From theorem \ref{thm:black_scholes_pde}, the discounted security price
$X_t = e^{-\int_0^t r_u\;\mathrm du}V_t$ is a martingale.
\end{coro}
\begin{coro} \label{coro:put_price}
From corollary \ref{coro:option_martingale}, the price of a European put option with strike adjustment is:
\begin{align}
P_t = \mathbb E \left[ \left. e^{-\int_t^T r_u\;\mathrm du} \max\left\{K_T - S_T, 0\right\}\right| \mathcal F_t\right].
\end{align}
\end{coro}
\begin{coro}
If $r$ and $\sigma$ are constants, then given some constant $K_T$,
usually the strike price for vanilla options,
the following change of variables:
\begin{subequations}
\label{eq:pde_change_of_variables}
\begin{align}
\tau &= T - t,\;
\tilde S_\tau = S_t, \label{eq:pde_change_tau_S}\\
v(x, \tau ) &= \tilde V(\tilde S_\tau, \tau) = V(S_t, t) \label{eq:pde_change_Vv}\\
x &= \ln \frac {\tilde S_\tau}{K_T} + \left(r - \frac{\sigma^2}{2}\right) \tau, \label{eq:pde_change_Sx}\\
F(x ,\tau) &= v(x, \tau)e^{\int_0^\tau r_u\;\mathrm du}, \label{eq:pde_change_vF}
\end{align}
\end{subequations}
imply the derivatives:
\begin{subequations}
\label{eq:black_heat_equation_greeks}
\begin{align}
\frac{\partial V_t}{\partial S_t} &= \frac 1 S_t \frac{\partial F}{\partial x} e^{-\int_t^T r_u\;\mathrm du},\label{eq:black_heat_equation_greeks:delta}\\
\frac{\partial^2 V_t}{\partial S_t^2} &= \frac{1}{S_t^2} \left( \frac{\partial^2 F}{\partial x^2} - \frac{\partial F}{\partial x}\right)e^{-\int_t^T r_u\;\mathrm du},\label{eq:black_heat_equation_greeks:gamma}\\
\frac{\partial V_t}{\partial t} &= - \left( \frac{\partial F}{\partial\tau} +\left(r - \frac{\sigma^2}{2}\right) \frac{\partial F}{\partial x} - r_t F \right)e^{-\int_t^T r_u\;\mathrm du},
\end{align}
\end{subequations}
which transform equation \eqref{eq:black_scholes_pde} into a standard heat equation:
\begin{align} \label{eq:black_heat_equation}
\frac{\partial F}{\partial \tau} = \frac{\sigma^2} 2 \frac{\partial^2 F}{\partial x^2}.
\end{align}
\end{coro}
Theorem \ref{thm:black_scholes_pde} allows derivatives on dividend-paying stocks to be treated in the same way
as derivatives on a stock without dividends when $t$ is between ex-dividend dates.
The only distinction is at the exact moment $t = t_i$ when a dividend goes ex-date $(t_i^-\rightarrow t_i)$.
Here it is assumed that, although the stock price has jumped, the security price remains the same. To
represent this assumption the price $V_t$ is remapped onto the new stock price so that:
\begin{align} \label{eq:security_dividend_remapping}
V_{i-1}(S_{t_i^-}, t_i^-) = V_i(S_{t_i}, t_i),
\end{align}
where $V_i(S_t, t)$ is the security price for $t \in [t_i, t_{i+1})$.
In particular, for American call options, the strike adjustment on ex-dates works in a similar fashion,
preserving the intrinsic value at jumps. To see that, let $g_i(S_t) = \max(S_t - K_{t_i}, 0)$ be the
intrinsic value of a call option at time $t \in [t_i, t_{i+1})$:
\begin{align}
g_{i-1}(S_{t_i^-}) &= \max(S_{t_i^-} - K_{t_i^-}, 0) \nonumber \\
&= \max(S_{t_i^-} -D_i - (K_{t_i^-} - D_i), 0) \nonumber \\
&= \max(S_{t_i} - K_{t_i} , 0), \nonumber \\
\therefore \, g_{i-1}(S_{t_i^-})&= g_i(S_{t_i}), \label{eq:call_payof_continuous}
\end{align}
with $K_0 = K$ being the initial strike price, and:
\begin{align}
K_{t_i} = K_{t_i^-} - D_i,
\end{align}
being the adjustment made on this strike at the ex-date.
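The strike-adjustment cascade is a running subtraction over ex-dates; a tiny sketch that returns the sequence $K_{t_1}, \dots, K_{t_n}$ (names illustrative):

```python
# Sketch of the exchange's strike adjustment: on each ex-date the strike
# is reduced by that date's dividend, K_{t_i} = K_{t_i^-} - D_i.
def adjusted_strikes(k0, dividends):
    """k0: initial strike K; dividends: D_1..D_n in ex-date order."""
    ks, k = [], k0
    for d in dividends:
        k -= d
        ks.append(k)          # strike in force on [t_i, t_{i+1})
    return ks
```

The final entry of the list is $K_T$, the strike that enters the terminal payoff.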
The fact that the intrinsic value is preserved makes the American call option behave like
a European option, as the intrinsic value is always less than the security price.
Therefore:
\begin{thm} \label{thm:american_is_european}
The early exercise of Brazilian listed call options is never optimal
\footnote{The option owner can always exercise early if desired, but it is never optimal.}
because these options are strike adjusted.
So American calls can be treated as having European exercise for pricing purposes.
\end{thm}
\begin{coro} \label{coro:call_price}
From theorem \ref{thm:american_is_european} and corollary \ref{coro:option_martingale}, the
price of a call option with strike adjustment is:
\begin{align}
C_t = \mathbb E \left[ \left. e^{-\int_t^T r_u \;\mathrm du} \max\left\{S_T - K_T, 0\right\}\right| \mathcal F_t\right].
\end{align}
\end{coro}
\begin{coro}
\label{coro:parity_put_call}
From corollaries \ref{coro:call_price} and \ref{coro:put_price} follows that the put-call parity holds
for dividend adjusted options:
\begin{align}
\label{eq:parity_put_call}
C_t - P_t = \mathbb E \left[ \left. e^{-\int_t^T r_u\;\mathrm du} \left(S_T - K_T\right)\right| \mathcal F_t\right].
\end{align}
\end{coro}
\begin{thm}
\label{thm:forward_price}
If the interest rate $r_t$, dividend dates $t_i$ and values $D_i$, are deterministic,
the forward price is given by:
\begin{align}\label{eq:forward_price}
\mathbb E\left[\left. S_T \right| \mathcal F_t \right] = e^{\int_t^T r_u\;\mathrm du}(S_t + I_t) - I_T.
\end{align}
\end{thm}
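A numerical sketch of equation \eqref{eq:forward_price} with a constant rate $r$, where $I_t$ follows equation \eqref{eq:cash_past_dividends}; a quick sanity check is that with no dividends the formula collapses to $S_t e^{r(T-t)}$ (function names illustrative):

```python
# Sketch of the forward price under deterministic, constant r, with
# dividends given as (t_i, D_i) pairs paid on their ex-dates.
import math

def money_account(t, divs, r):
    """I_t: past dividends reinvested at rate r up to time t."""
    return sum(d * math.exp(r * (t - ti)) for ti, d in divs if ti <= t)

def forward_price(s_t, t, big_t, divs, r):
    """E[S_T | F_t] = e^{r(T-t)} (S_t + I_t) - I_T."""
    i_t = money_account(t, divs, r)
    i_big_t = money_account(big_t, divs, r)
    return math.exp(r * (big_t - t)) * (s_t + i_t) - i_big_t
```

With one dividend $D$ at $t_1 \in (t, T)$ the formula gives $S_t e^{r(T-t)} - D e^{r(T-t_1)}$, the familiar "grow the spot, subtract the grown dividend" rule.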
\subsection{Pricing strategy}
\label{sec:pricing_strategy}
To price European vanilla options
on dividend-paying stocks, equation \eqref{eq:black_heat_equation} is solved recursively in each interval
from $0 < \tau_1 < \dots < \tau_n < T $, where $\tau_i = T - t_{n - i + 1}$
and $\tau_i^- = T - t_{n - i + 1}^-$. To do so,
consider a distinct function $F_i(x, \tau)$ for each interval $(\tau_{i-1}^-, \tau_i]$,
add the further assumption that the dividends $D_i$ are known at time $t=0$, and solve
${n+1}$ PDEs such that:
\begin{align} \label{eq:pde_recursive}
\frac{\partial F_i}{\partial \tau} = \frac{\sigma^2} 2 \frac{\partial^2 F_i}{\partial x^2},
\end{align}
with the following initial conditions:
\begin{align}
F_i(x_{\tau_{i-1}^-}, \tau_{i-1}^-) &= F_{i-1}(x_{\tau_{i-1}}, \tau_{i-1}),
\end{align}
and especially for call options, the initial and boundary conditions are:
\begin{subequations}
\label{eq:initial_and_boundary_call_conditions}
\begin{align}
F_1(x_{0}, 0) &= K_T\max\left\{e^{x_0} - 1, 0\right\} , \label{eq:call_ic}\\
\left. \frac {\partial}{\partial x}F_i(x, \tau) \right|_{x=0} &= F_i(0, \tau) = 0. \label{eq:call_bc}
\end{align}
\end{subequations}
Once the call option price is solved, the put option price can be found using the put-call parity and the forward price from
equations \eqref{eq:parity_put_call} and \eqref{eq:forward_price}.
Now, take the Laplace transform for $x$ on both sides of the equation \eqref{eq:pde_recursive}:
\begin{align}
\mathcal L\left\{ \frac{\partial F_i(x,\tau)}{\partial \tau} \right\} &= \mathcal L\left\{ \frac{\sigma^2} 2 \frac{\partial^2 F_i(x, \tau)}{\partial x^2} \right\} , \nonumber \\
\frac{\partial \bar F_i(s, \tau)}{\partial \tau} &= \frac{\sigma^2} 2 \mathcal L\left\{ \frac{\partial^2 F_i(x, \tau)}{\partial x^2} \right\} \nonumber \\
&= \frac{\sigma^2} 2 \left[
s^2\bar F_i(s, \tau) -
\left(
s F_i(x, \tau) +
\left. \frac {\partial}{\partial x}F_i(x, \tau)
\right)\right|_{x=0^+}
\right], \label{eq:laplace_diff_heat_equation}
\end{align}
applying the boundary conditions \eqref{eq:call_bc}:
\begin{align}
\frac{\partial \bar F_i(s, \tau)}{\partial \tau}
&= \frac{(\sigma s)^2} 2
\bar F_i(s, \tau),
\end{align}
which can be solved for $\tau_i$ starting from $\tau_{i-1}^-$:
\begin{align} \label{eq:pde_s_solution}
\bar F_i(s, \tau_i)
&= e^{\frac{(\sigma s)^2} 2 (\tau_i - \tau_{i-1})}
\bar F_i(s, \tau_{i-1}^-),
\end{align}
invert the Laplace transform to return to the $x$ domain:
\begin{align} \label{eq:pde_x_solution}
F_i(x, \tau_i)
&= \mathcal L^{-1}\left\{ \bar F_i(s, \tau_i) \right\},
\end{align}
and then remap to account for the dividend event:
\begin{align} \label{eq:f_dividend_remapping}
F_{i+1}(x_{\tau_i^-},{\tau_i^-}) = F_{i}(x_{\tau_i}, \tau_i).
\end{align}
Repeat the steps from equation \eqref{eq:pde_s_solution} to \eqref{eq:f_dividend_remapping}
until $F_{n+1}(x_T, T)$ is found. Finally, use equations \eqref{eq:pde_change_tau_S} to \eqref{eq:pde_change_vF}
to retrieve $V_{n+1}(S_0, 0)$ which is the call option price. The Greeks $\Delta$ and $\Gamma$
are found by replacing:
\begin{subequations}
\label{eq:fx_derivatives}
\begin{align}
\frac{\partial F_{n+1}(x_T, T)} {\partial x} &= \mathcal L^{-1}\left\{ s \bar F_{n+1}(s, T) \right\}, \\
\frac{\partial^2 F_{n+1}(x_T, T)} {\partial x^2} &= \mathcal L^{-1}\left\{ s^2 \bar F_{n+1}(s, T) \right\}
\end{align}
\end{subequations}
in equations \eqref{eq:black_heat_equation_greeks:delta} and \eqref{eq:black_heat_equation_greeks:gamma}
respectively, and $\Theta$ can be found by replacing $\Delta$ and $\Gamma$ into the equation
\eqref{eq:black_scholes_pde}.
\section{Numerical procedure}
\label{sec:numerical_procedure}
This section brings the main result of this paper: a pricing procedure that can give the theoretical price for Brazilian listed
equity options in the presence of dividends, keeping the same assumptions from \cite{scholes1973pricing}.
The idea consists of reducing the PDE in equation \eqref{eq:black_scholes_pde} to the heat
equation \eqref{eq:pde_recursive} that can be solved numerically with the fast Laplace transform, taking special care for each
ex-date where we remap the spatial variable as described in equation \eqref{eq:security_dividend_remapping}.
It is customary to reduce equation \eqref{eq:pde_recursive} to an ordinary differential equation by applying the Fourier
transform in the $x$ domain and to solve the heat equation analytically. Unfortunately, this leads to numerical problems
for securities like vanilla calls,
because their price function is not periodic and therefore not well suited to a Fourier transform (FFT).
To solve this problem the fast Laplace transform is used, a way to leverage the fast Fourier transform
implementation to work on problems where the Laplace transform is better suited.
\subsection{Fast Laplace transform}
To understand how the Fourier transform can be used to derive the Laplace transform, imagine a
function $f(x)$ that is not periodic. To use the Fourier transform,
work on a new function $g(x;\lambda)$, such that:
\begin{align} \label{eq:flt_gx}
g(x; \lambda) = e^{-\lambda x} f(x) u(x),
\end{align}
where $u(x)$ is the Heaviside step function, and $\lambda$ is a positive value that makes
$e^{-\lambda x} f(x) \to f(0)$ as $x \to \infty$, so that $g(x; \lambda)$ can be treated as periodic on
the interval of interest. Applying the Fourier transform to both sides of equation \eqref{eq:flt_gx} gives:
\begin{align}
\mathcal F\left\{ g(x) \right\} &= \int_{-\infty}^\infty e^{-i 2\pi \xi x} e^{-\lambda x} f(x) u(x)\;\mathrm dx, \\
\hat g(\xi; \lambda) &= \int_0^\infty e^{-\left(\lambda + i2\pi\xi \right) x} f(x) \;\mathrm dx,
\end{align}
which can be written as the Laplace transform if $s = \lambda + i 2\pi\xi$:
\begin{align}
\bar f(s) &\triangleq \mathcal L \left\{f(x)\right\} \triangleq \int_0^\infty e^{-s x} f(x)\;\mathrm dx,
\end{align}
yielding the following relation between the Laplace and Fourier transform:
\begin{align}\label{eq:laplace_fourier}
\mathcal L \left\{f(x)\right\} = \mathcal F\left\{ e^{-\lambda x} f(x) u(x) \right\}.
\end{align}
If only positive values for $x$ are considered,
the inverse Laplace transform can be found via the inverse Fourier transform as well:
\begin{align}
\mathcal L^{-1} \left\{\bar f(s)\right\} &= f(x)\nonumber \\
&= e^{\lambda x} g(x; \lambda), \text{ from equation \eqref{eq:flt_gx}} \nonumber \\
&= e^{\lambda x} \mathcal F^{-1} \left\{ \hat g(\xi; \lambda) \right\} \nonumber \\
&= e^{\lambda x} \mathcal F^{-1} \left\{ \bar f(\lambda + i2\pi\xi) \right\},
\end{align}
therefore:
\begin{align} \label{eq:laplace_fourier_inv}
f(x) = \mathcal L^{-1} \left\{\bar f(s)\right\} = e^{\lambda x} \mathcal F^{-1} \left\{ \bar f(\lambda + i2\pi\xi) \right\}.
\end{align}
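Relation \eqref{eq:laplace_fourier} can be sanity-checked numerically at a single frequency, taking $f(x)=e^{-x}$, whose Laplace transform is $\bar f(s) = 1/(s+1)$. In the sketch below the values of $\lambda$ and $\xi$ are illustrative, and the half line is truncated at a finite upper limit where the damped integrand is negligible:

```python
import numpy as np

lam, xi = 0.5, 0.3
s = lam + 1j * 2.0 * np.pi * xi

# Truncate the half line at x = 40; the integrand decays like e^{-(1+lam)x}.
x = np.linspace(0.0, 40.0, 400_001)
integrand = np.exp(-s * x) * np.exp(-x)   # e^{-(lambda + i 2 pi xi) x} f(x)
dx = x[1] - x[0]
numeric = np.sum(integrand[1:] + integrand[:-1]) * dx / 2.0  # trapezoid rule
exact = 1.0 / (s + 1.0)
assert abs(numeric - exact) < 1e-6
```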
Next, the relations from equations \eqref{eq:laplace_fourier} and \eqref{eq:laplace_fourier_inv} are adapted to
the fast Fourier transform (FFT) to get the fast Laplace transform (FLT).
The fast Fourier transform algorithm is any algorithm that can compute the discrete Fourier transform and
its inverse in $\mathrm O(n\log n)$ complexity; the best-known example is the algorithm presented by \cite{cooley1965algorithm}.
These FFT implementations are heavily optimized and available in many programming languages and frameworks.
To simplify the notation let $\{a\}_N = \{a_0, \dots, a_{N-1}\}$ be a sequence of numbers, and
let $\alpha$, $\beta$ and $c$ be constants, the multiplication and addition operations on
sequences are point-to-point operations defined as:
\begin{align*}
\alpha\{a\}_N + \beta \{b\}_N &= \{\alpha a_0+\beta b_0,\dots, \alpha a_{N-1}+ \beta b_{N-1}\}, \\
\{a\}_N + c &= \{a_0+ c,\dots, a_{N-1}+ c\}, \\
\{a\}_N \{b\}_N &= \{a_0b_0,\dots,a_{N-1}b_{N-1}\}, \\
\frac{\{a\}_N} {\{b\}_N} &= \left\{\frac {a_0} {b_0},\dots, \frac {a_{N-1}}{b_{N-1}}\right\}.
\end{align*}
Consider a sequence of $N$ samples of $f(x)$,
$\{f\}_N$, sampled on an equally spaced $\{x\}_N$ sequence such that
$\rho = x_{i+1} - x_{i},\; \forall i \in \{0,\dots,N-2\}$.
Choose $\lambda$ large enough and create a new sequence
$\{g\}_N$; the
discrete Laplace transform $\mathcal L_d \{f\}_N = \{\bar f\}_N$ of the input sequence $\{f\}_N$ is then defined through the
discrete Fourier transform $\{\hat g\}_N$
of the input $\{g\}_N$ as:
\begin{subequations}
\label{eq:flt}
\begin{align}
g_n &= e^{-\lambda\frac{ n}{N}} f_n, \label{eq:gf_discrete}\\
\bar f_k &= \hat g_k = \sum_{n=0}^{N-1} e^{-i2\pi k\frac{n}{N}} g_n,
\end{align}
\end{subequations}
and for the inverse discrete Laplace transform $\mathcal L^{-1}_d \{\bar f\}_N = \{f\}_N$ use the
inverse discrete Fourier transform on $\{\bar f\}_N$ obtaining $\{g\}_N$
and invert the equation \eqref{eq:gf_discrete} to get $\{f\}_N$:
\begin{align}
f_n &= e^{\lambda \frac{n}{N}}\frac{1}{N}\sum_{k=0} ^{N-1} e^{i2\pi k\frac{n}{N}} \bar f_k. \label{eq:iflt}
\end{align}
FFT libraries also provide functions to retrieve the discrete Fourier frequencies
$\xi_k$; from these frequencies, retrieve the Laplace frequencies $s_k$ so that:
\begin{subequations}
\begin{align}
\xi_k &= \frac{k}{N}, \\
s_k &= \frac{\lambda}{N} + i2\pi \xi_k. \label{eq:flt_freq}
\end{align}
\end{subequations}
This yields the discrete Laplace transform and its inverse for a fixed $\lambda$:
\begin{subequations}
\begin{align}
\bar f_k &= \sum_{n=0}^{N-1} e^{-s_kn} f_n,\\
f_n &= \frac 1 N\sum_{k=0}^{N-1} e^{s_kn} \bar f_k.
\end{align}
\end{subequations}
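A minimal round-trip sketch of equations \eqref{eq:flt} and \eqref{eq:iflt}, assuming a library FFT such as NumPy's: damping, transforming, inverting, and undamping recovers the input sequence to machine precision (the sample sequence and $\lambda$ are illustrative):

```python
import numpy as np

N, lam = 64, 3.0
n = np.arange(N)
# A call-like payoff as an arbitrary non-periodic sample sequence.
f = np.maximum(np.exp(np.linspace(-1.0, 1.0, N)) - 1.0, 0.0)

g = np.exp(-lam * n / N) * f                      # g_n = e^{-lambda n/N} f_n
f_bar = np.fft.fft(g)                             # discrete Laplace transform
f_rec = np.exp(lam * n / N) * np.fft.ifft(f_bar).real  # inverse, then undamp
assert np.allclose(f, f_rec)
```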
\begin{thm} \label{thm:discrete_flt_derivative}
For the same sampled space as above, the discrete Laplace transform of the derivative of $f(x)$ is:
\begin{align}
\label{eq:discrete_flt_derivative}
\mathcal L_d\{f'(x)\}_N = \frac {\{s\}_N} \rho \mathcal L_d\{f\}_N - f_0.
\end{align}
\end{thm}
\subsection{Pricing call options}
The procedure stated next can be used to price European call options,
whether they are strike adjusted or not. It also can be used to price strike adjusted American calls
due to theorem \ref{thm:american_is_european}.
Special care should be taken for non-strike-adjusted American calls, since in that case one would have to
check for early exercise at each ex-dividend date.
To solve the series of PDEs in equation \eqref{eq:pde_recursive}, set the integration $x$ domain $[-L/2, L/2]$
to cover a significant number of standard deviations $n_\sigma$
and subdivide this range into $N$ equally spaced points \footnote{$N$ is assumed to be even to simplify the discretization.}
which implies a sampling rate $\rho$, thus:
\begin{subequations}
\begin{align}
L &= 2n_\sigma \sigma \sqrt T, \\
\rho &= \frac L N,\\
x_n &= -\frac L 2 + n\rho,\; n \in \{0,\dots, N-1\}.
\end{align}
\end{subequations}
The sequence $\{x\}_N$ will be fixed for all time steps, on the other hand, the correspondent sequence $\{\tilde S_\tau\}$
will change obeying equation \eqref{eq:pde_change_Sx} accordingly:
\begin{align} \label{eq:s_tau_j_from_x}
(\tilde S_\tau)_n = K_T\exp\left(x_n - \left(r - \frac{\sigma^2} 2\right)\tau\right),
\end{align}
with this, use the discretized version of equation \eqref{eq:call_ic} to get the sequence $\{F_1(x, 0)\}_N$ in $\tau = 0$:
\begin{align}
F_1(x_n, 0) = K_T \max \{e^{x_n} -1, 0\}.
\end{align}
From here on apply the fast Laplace transform and its inverse which obey equations \eqref{eq:flt} and \eqref{eq:iflt}
respectively.
This transform needs one more parameter, $\lambda$, which can be found by analyzing the convergence of
$e^{-\lambda x} f(x) \to f(x_0)$ as $x \to \infty$, which gives the following approximation:
\begin{align}
e^{-\lambda \frac{N-1} N} F_1(x_{N-1}, 0) = F_1(x_0, 0)
\implies \lambda =\frac N {N-1} \ln\left(\frac {F_1(x_{N-1}, 0)}{F_1(x_0, 0)}\right).
\end{align}
Next, follow as described in section \ref{sec:pricing_strategy} applying the discretized
versions of equations \eqref{eq:pde_s_solution} to \eqref{eq:f_dividend_remapping}.
\begin{subequations}
\begin{align}
\bar F_i(s_k, \tau_i)
&= \exp\left( \frac{(\sigma s_k)^2} {2\rho^2} (\tau_i - \tau_{i-1}) \right)
\bar F_i(s_k, \tau_{i-1}^-), \label{eq:pde_s_solution_discrete} \\
F_i(\{x\}_N, \tau_i)
&= \mathcal L_d^{-1}\left\{ \bar F_i(\{s\}_N, \tau_i) \right\},\label{eq:pde_x_solution_discrete}
\end{align}
\end{subequations}
and to do the remapping from equation \eqref{eq:f_dividend_remapping}
interpolate the sequence $\{x\}_N$ using the mapping $(x_{\tau^-})_j \to F_i(x_j, \tau_i)$
to find the sequence $F_{i+1}(\{x\}_N,{\tau_i^-})$, where $(x_{\tau^-})_j$
accounts for the dividend change in
$\tilde S_\tau$ from equation \eqref{eq:s_tau_j_from_x}, thus:
\begin{align}
(x_{\tau^-})_j = \ln\left(\frac{(\tilde S_\tau)_j + D_{n+1 -i}}{K_T} \right) + \left(r - \frac{\sigma^2} 2\right)\tau.
\end{align}
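A single propagation step \eqref{eq:pde_s_solution_discrete}--\eqref{eq:pde_x_solution_discrete} can be tested against a case with a closed-form answer: the heat-equation evolution of a Gaussian, whose variance grows by $\sigma^2\Delta\tau$. The sketch below is an illustration under stated assumptions: $s_k$ is built from the library's signed frequencies (NumPy's fftfreq) so that conjugate modes receive the correct multiplier, and $\lambda$, $N$, and the domain size are arbitrary choices:

```python
import numpy as np

N, L, lam = 512, 20.0, 1.0
rho = L / N
n = np.arange(N)
x = -L / 2 + rho * n
sigma, dtau = 1.0, 1.0                       # variance grows by sigma^2 * dtau

f0 = np.exp(-x**2 / 2.0)                     # unit-variance Gaussian at tau = 0
s = lam / N + 1j * 2.0 * np.pi * np.fft.fftfreq(N)

f_bar = np.fft.fft(np.exp(-lam * n / N) * f0)             # discrete Laplace transform
f_bar *= np.exp(sigma**2 * s**2 * dtau / (2.0 * rho**2))  # heat propagator in s
f1 = np.exp(lam * n / N) * np.fft.ifft(f_bar).real        # back to the x domain

var = 1.0 + sigma**2 * dtau                  # exact heat-kernel broadening
exact = np.exp(-x**2 / (2.0 * var)) / np.sqrt(var)
assert np.max(np.abs(f1 - exact)) < 1e-6
```

Each mode $e^{s_k n}$ is propagated exactly by the multiplier, so the only error is the (spectrally small) representation error of the damped Gaussian on the grid.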
Iterate, solving each $F_i(\{x\}_N,\tau)$ until the final mapping $\{x\}_N \to F_{n+1}(\{x\}_N,T)$ is reached,
and also find the first and second derivatives with respect to $x$ via equations \eqref{eq:fx_derivatives}, namely
$\partial_x F_{n+1}(\{x\}_N, T)$ and $\partial_{xx} F_{n+1}(\{x\}_N, T)$.
These sequences can then be interpolated at $x_T$ to find $F_{n+1}(x_T,T)$, $\partial_x F_{n+1}(x_T, T)$, and $\partial_{xx} F_{n+1}(x_T, T)$, so that:
\begin{align}
C_t &= F_{n+1}(x_T,T) \exp(-rT),\\
\Delta_t &= \frac 1 S_t \frac{\partial F_{n+1}(x_T, T)} {\partial x} \exp(-rT),\\
\Gamma_t &= \frac 1 {S_t^2} \left( \frac{\partial^2 F_{n+1}(x_T, T)} {\partial x^2} - \frac{\partial F_{n+1}(x_T, T)} {\partial x}\right)\exp(-rT),\\
\Theta_t &= C_t r - S_t r\Delta_t - \frac {\sigma^2 S_t^2} 2 \Gamma_t,
\end{align}
will give the vanilla call price and Greeks.
\section{Results}
\label{sec:results}
To establish a baseline\footnote{The source code used to produce the following results can be found at \url{https://github.com/maikonaraujo/paper_202209}.},
the first step is to test the model in the condition where there is no dividend
being paid up to the expiry. In this condition, the model should deliver the same price and Greeks as
the plain vanilla call formulas from the Black-Scholes model. Figure \ref{fig:bs_baseline} compares the
premium and Greeks for a range of moneyness using the fast Laplace transform pricer in green against
the Black-Scholes results in blue, showing the error below in red. To stress the model the volatility was set to a very low
level of $\sigma = 1\%$ in a very close to maturity option ($T = 5$ days), and the Laplace transform
was discretized with $N=2^{10}$ equally spaced points. In this section,
all FLT pricers use $n_\sigma = 7.5$.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{figures/pdg_5_1.pdf}
\caption{Premium $C(y)$, and Greeks $\Delta(y)$ and $\Gamma(y)$,
$S_0 = 100$, $r = 6\%$ , $\sigma = 1\%$, and $T = 5$ days.}
\label{fig:bs_baseline}
\end{figure}
For the case where only one dividend $D$ is paid at time $t$ for an option expiring in
$T=1$ year and the market conditions are $S_0=100$, $r=6\%$, $\sigma=30\%$, table \ref{tbl:thakoor_bhuruth}
compares the values of the FLT pricer with the \cite{THAKOOR20181} European pricer (TB)
for dividends paid very close to the reference date ($t=0.0001$), $6$ months ($t=0.5$)
and very close to expiry ($t=0.9999$) for in-the-money, at-the-money, and out-of-the-money strikes
$K=70$, $K=100$, and $K=130$ respectively. The prices from the FLT pricer agree with the ones from TB pricer.
The FLT was discretized with $N=1024$ equally spaced points, and TB uses $N=500$ integration points with $\xi=6.5$
standard deviations. Both prices have the final strike adjusted to account for the dividend payment.
\begin{table}[H]
\centering
\caption{One dividend payment - Comparison with Thakoor, and Bhuruth model.}
\label{tbl:thakoor_bhuruth}
\begin{footnotesize}
\begin{tabular}{@{}ccSSSSSSSSS@{}}
\toprule
& &\multicolumn{3}{c}{$K=70$}&\multicolumn{3}{c}{$K=100$}&\multicolumn{3}{c}{$K=130$}
\\\cmidrule(lr){3-5}\cmidrule(lr){6-8}\cmidrule(lr){9-11}
t &D &FLT &TB &{Diff.} &FLT &TB &{Diff.} &FLT &TB &{Diff.} \\\midrule
0.0001&7 &34.3193 &34.3193 &{2.76e-12} &13.6870&13.6870&{9.63e-13} &4.1862&4.1862&{3.19e-13} \\
0.5000&7 &34.6419 &34.6419 &{3.55e-14} &14.2172&14.2172&{1.42e-13}&4.5808&4.5808&{3.11e-14} \\
0.9999&7 &34.9844 &34.9844 &{2.55e-08} &14.7170&14.7170&{2.39e-08}&4.9195&4.9195&{2.32e-08} \\\midrule
0.0001&20&33.1952 &33.1952 &{4.76e-13} &11.7740&11.7740&{3.45e-13}&2.9221&2.9221&{3.46e-14} \\
0.5000&20&34.0504 &34.0504 &{3.55e-14} &13.3326&13.3326&{1.49e-13}&4.0013&4.0013&{2.13e-14} \\
0.9999&20&34.9842 &34.9842 &{2.55e-08} &14.7168&14.7168&{2.44e-08}&4.9194&4.9194&{2.32e-08} \\\midrule
0.0001&50&31.1664 &31.1664 &{2.38e-13} &7.3596 &7.3596 &{1.10e-13}&0.7210&0.7210&{2.44e-15} \\
0.5000&50&32.9077 &32.9077 &{1.84e-06} &11.5707&11.5707&{5.51e-14}&2.9205&2.9205&{4.13e-14} \\
0.9999&50&34.9840 &34.9840 &{2.55e-08} &14.7165&14.7165&{2.44e-08}&4.9192&4.9192&{2.32e-08} \\\bottomrule
\end{tabular}
\end{footnotesize}
\end{table}
Starting from the same market conditions and strikes used in table \ref{tbl:thakoor_bhuruth},
table \ref{tbl:option_greeks} adds two cases with multiple dividends and compares the Greeks
from the FLT pricer, namely $\Delta$, and $\Gamma$, against Greeks from a numeric bump in the stock price of $h=0.01$, so that:
\begin{align}
\Delta_n &= \frac{\mathrm{FLT}(S + h) - \mathrm{FLT}(S - h)}{2h}, \\
\Gamma_n &= \frac{\Delta(S + h) - \Delta(S - h)}{2h}.
\end{align}
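The bump scheme above can be illustrated on the closed-form Black-Scholes call without dividends, where $\Delta$ and $\Gamma$ are known analytically. This is a stand-alone sketch using the closed form as a stand-in pricer, not the FLT pricer; the market parameters are illustrative:

```python
import math

# Black-Scholes call (no dividends) as a stand-in pricer for the bump test.
def bs_call(S, K=100.0, r=0.06, sigma=0.30, T=1.0):
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return S * Phi(d1) - K * math.exp(-r * T) * Phi(d2)

S, h = 100.0, 0.01

def delta_bump(S0):
    # Central-difference delta, as in the Delta_n definition above.
    return (bs_call(S0 + h) - bs_call(S0 - h)) / (2 * h)

delta_n = delta_bump(S)
gamma_n = (delta_bump(S + h) - delta_bump(S - h)) / (2 * h)  # nested bump

# Closed-form Greeks for comparison (at-the-money, T = 1).
d1 = (math.log(S / 100.0) + 0.06 + 0.5 * 0.30**2) / 0.30
delta_a = 0.5 * (1.0 + math.erf(d1 / math.sqrt(2.0)))
gamma_a = math.exp(-0.5 * d1**2) / (math.sqrt(2.0 * math.pi) * S * 0.30)

assert abs(delta_n - delta_a) < 1e-6
assert abs(gamma_n - gamma_a) < 1e-4
```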
For these simulations, the FLT uses $N=100$ equally spaced points.
It can be seen that the analytical Greeks agree with the ones from the numerical central-difference bumps.
\begin{table}[H]
\centering
\caption{Greeks - Analytical and numerical bumps.}
\label{tbl:option_greeks}
\begin{footnotesize}
\begin{tabular}{@{}cSSSSSSSSSS@{}}
\toprule
&\multicolumn{5}{c}{$(t_i,D_i) = \{(0.2, 4), (0.4,5) , (0.6, 6), (0.8, 3) \}$ }&\multicolumn{5}{c}{$(t_i,D_i) = \{(0.2, 9), (0.6,9) \}$ }
\\ \cmidrule(lr){2-6}\cmidrule(lr){7-11}
K &FLT &{$10^2\Delta$} &{$10^2\Delta_{n}$}&{$10^4\Gamma$}&{$10^4\Gamma_n$}& FLT &{$10^2\Delta$}&{$10^2\Delta_{n}$}&{$10^4\Gamma$}&{$10^4\Gamma_n$} \\ \midrule
70 & 34.1131 & 95.41 & 95.41 & 35.99 & 35.99 & 33.9703 & 95.69 & 95.69 & 34.83 & 34.83 \\
100 & 13.4083 & 63.41 & 63.41 & 137.87 & 137.87 & 13.1728 & 63.41 & 63.41 & 140.34 & 140.34 \\
130 & 4.0395 & 27.40 & 27.40 & 120.74 & 120.74 & 3.8780 & 26.91 & 26.91 & 121.80 & 121.80 \\
\bottomrule
\end{tabular}
\end{footnotesize}
\end{table}
Concerning performance, when compared to the TB pricer, the FLT pricer presents better results because it reaches the same level of accuracy faster \footnote{Both models were implemented in python, a highly optimized C/C++ implementation could modify this result.}
and has the advantage of also delivering the Greeks $\Delta$, $\Gamma$, and $\Theta$ together with the option's premium.
Figure \ref{fig:time_accuray} shows some simulations for time versus accuracy.
The line (a) shows the four dividend case $(t_i,D_i) = \{(0.2, 4), (0.4,5) , (0.6, 6), (0.8, 3) \}$ and the line (b) shows the two dividend case
$(t_i,D_i) = \{(0.2, 9), (0.6,9) \}$. The columns show strikes $70, 100,$ and $130$. To estimate a true value both prices were simulated with extreme discretizations
finding prices $\hat p_a$ and $\hat p_b$, which agree with each other with an error of order $10^{-11}$ for case (a) and of order $10^{-14}$ for case (b).
The accuracy is measured by:
\begin{align}
\text{Accuracy}_i &= - \log\left(|p_i - \hat p|\right),
\end{align}
for each simulation. The time taken is measured by running 10 batches of 25 evaluations
and selecting the best average time; therefore, if batch $j$ took time $b_j$, $j \in \{1,\dots,10\}$, then $t_i$ is:
\begin{align}
t_i = \frac{\min\{b_j\}}{25}.
\end{align}
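The timing rule above can be sketched as follows; the function being timed is just a placeholder for a pricer call:

```python
import time

# Best average time per evaluation, t_i = min{b_j}/25, and the marker-size
# ratio s_i = min{b_j}/max{b_j} used below.
def time_best_average(fn, batches=10, evals=25):
    batch_times = []
    for _ in range(batches):
        start = time.perf_counter()
        for _ in range(evals):
            fn()
        batch_times.append(time.perf_counter() - start)
    return min(batch_times) / evals, min(batch_times) / max(batch_times)

t_i, s_i = time_best_average(lambda: sum(x * x for x in range(1000)))
assert t_i > 0.0 and 0.0 < s_i <= 1.0
```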
To give a sense of variance of the time measured, the size $s_i$ of each marker in figure \ref{fig:time_accuray} is proportional to:
\begin{align}
s_i = \frac{\min\{b_j\}}{\max\{b_j\}},
\end{align}
thus simulations with greater variance have smaller markers. Everything was run on a PC with 16 GB of RAM and an Intel i7 processor at 2.60 GHz with 6 cores;
the L1, L2, and L3 cache sizes are 384 KB, 1.5 MB, and 12 MB respectively.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{figures/perf_comp_win.pdf}
\caption{Time and Accuracy comparison.
(a) $(t_i,D_i) = \{(0.2, 4), (0.4,5) , (0.6, 6), (0.8, 3) \}$.
(b) $(t_i,D_i) = \{(0.2, 9), (0.6,9) \}$.}
\label{fig:time_accuray}
\end{figure}
\section{Conclusion}
This work has presented a numerical procedure able to price and hedge options with
multiple discrete dividends using the fast Laplace transform. The FLT is a powerful tool for solving
differential equations and delivers excellent performance and stability when applied to options pricing
with constant volatility. Brazilian listed equity options represent a significant market and are
a perfect fit for this approach, but other markets can also benefit from this methodology.
\section{Proofs of theorems and corollaries}
\begin{pf}[Theorem \ref{thm:black_scholes_pde}]
Let $\pi_t$ be a self-financing portfolio of a long position in a security $V_t$, a short position
on the stock price $S_t$, and a cash account to invest the received dividends up to time $t$: $I_t$.
The value of this portfolio at any time is given by:
\begin{align}
\pi_t = V_t - \Delta_t (S_t + I_t).
\end{align}
The derivative of this portfolio is given by:
\begin{align}
\mathrm d\pi_t &= \frac{\partial V_t}{\partial t} \mathrm dt + \frac{\partial V_t}{\partial S_t}\mathrm dS_t + \frac{\mathrm d\left\langle S_t,S_t\right\rangle }{2} \frac{\partial^2 V_t}{\partial S_t^2}
+ \mathcal J V_t
- \Delta_t (\mathrm dS_t +\mathrm dI_t).
\end{align}
Assuming that the security value is continuous for all $t\in [0,T]$, we have $\mathcal J V_t = V_t(S_t) - V_{t^-}(S_{t^-}) = 0, \,\forall t \in [0, T]$,
additionally with equations \eqref{eq:s_process_with_dividend} and \eqref{eq:cash_past_dividends} follows that:
\begin{align}
\mathrm d\pi_t &= \frac{\partial V_t}{\partial t}\mathrm dt
+ \frac{\partial V_t}{\partial S_t} (S_t r_t\mathrm dt + S_t\sigma_t\mathrm dW_t)
+ \frac{(\sigma_t S_t)^2 }{2} \frac{\partial^2 V_t}{\partial S_t^2} \mathrm dt
- \Delta_t \left((S_t+I_t) r_t\mathrm dt + S_t \sigma_t\mathrm dW_t \right),
\end{align}
choosing $\Delta_t = \frac{\partial V_t}{\partial S_t}$ we hedge the risk $dW_t$ away, turning $\pi_t$ into a risk free
portfolio hence $d\pi_t = \pi_t r_t dt$. Moreover:
\begin{align}
\pi_t r_t \mathrm dt &= \frac{\partial V_t}{\partial t}\mathrm dt
+ \frac{(\sigma_t S_t)^2}2 \frac{\partial^2 V_t}{\partial S_t^2} \mathrm dt
- \Delta_t I_t r_t\mathrm dt , \\
(V_t -\Delta_t(S_t + I_t)) r_t &= \frac{\partial V_t}{\partial t}
+ \frac{(\sigma_t S_t)^2}2 \frac{\partial^2 V_t}{\partial S_t^2}
- \Delta_t I_t r_t , \\
0 &= \frac{\partial V_t}{\partial t}
+ \frac{(\sigma_t S_t)^2}2 \frac{\partial^2 V_t}{\partial S_t^2}
+ S_t r_t \frac{\partial V_t}{\partial S_t} - r_tV_t.
\end{align}
\qed
\end{pf}
\begin{pf}[Corollary \ref{coro:option_martingale}]
Taking the derivative of $X_t = e^{-\int_0^t r_u\;\mathrm du}V_t$ we get:
\begin{align}
\mathrm dX_t & = -r_t e^{-\int_0^t r_u\;\mathrm du}V_t \mathrm dt + e^{-\int_0^t r_u\;\mathrm du}\mathrm dV_t \nonumber \\
&= e^{-\int_0^t r_u\;\mathrm du} \left( -rV_t \mathrm dt
+ \frac{\partial V_t}{ \partial t} \mathrm dt
+ \frac{1}{2}(\sigma_tS_t)^2\frac{\partial^2 V_t}{\partial S_t^2}\mathrm dt
+ \frac{\partial V_t}{\partial S_t}(S_tr_t \mathrm dt + S_t\sigma_t \mathrm dW_t)
\right),
\end{align}
from theorem \ref{thm:black_scholes_pde} we know that the terms multiplying $dt$ add to zero, thus:
\begin{align}
\mathrm dX_t & = e^{-\int_0^t r_u\; \mathrm du}\frac{\partial V_t}{\partial S_t}S_t\sigma_t \mathrm dW_t \nonumber \\
& = S_t\sigma_t\frac{\partial X_t}{\partial S_t}\mathrm dW_t,
\end{align}
proving that $X_t$ is a martingale. \qed
\end{pf}
\begin{pf}[Theorem \ref{thm:american_is_european}]
At any time $t$ the option holder can decide between exercising the option, hence receiving its intrinsic
value $g_i(S_t)$, or keeping the option. Therefore, the value of the American option at any
time $t$ can be described as:
\begin{align}
C_{t} = \max \left\{ \mathbb E\left[ \left. e^{-\int_t^u r_v\;\mathrm dv} C_{u>t} \right| \mathcal F_t\right] , g_i(S_t)\right\} ,
\end{align}
where $t < u \le T$. To prove that it is never optimal to exercise, it suffices to prove that for each $t \in [t_i, t_{i+1})$:
\begin{align}
\mathbb E\left[ \left. e^{-\int_t^{t_{i+1}} r_v\; \mathrm dv} C_{t_{i+1}} \right| \mathcal F_t\right] \ge g_i(S_t).
\end{align}
Let's prove it by backward induction. Starting with the last period where $t \in [t_n, T]$, we have:
\begin{align}
\mathbb E\left[ \left. e^{-\int_t^T r_v\;\mathrm dv} g_n(S_T) \right| \mathcal F_t\right]
&\ge \mathbb E\left[ \left. g_n\left(e^{-\int_t^T r_v\;\mathrm dv} S_T\right) \right| \mathcal F_t\right] \text{ (lemma \ref{lemma:call_payof_convex}),}\nonumber \\
&\ge g_n\left(\mathbb E\left[ \left. e^{-\int_t^T r_v\;\mathrm dv} S_T \right| \mathcal F_t\right]\right) \text{ (Jensen's inequality),}\nonumber \\
&= g_n\left(S_t\right) \text{ (martingale property).} \label{eq:last_dividend_call_intrinsic}
\end{align}
Now, let's take an instantaneous step back in time to the exact moment when the last dividend goes ex-date, then:
\begin{align}
C_{t_n^-} &= \max \left\{ C_{t_n} , g_{n-1}(S_{t_n^-}) \right\} \nonumber \\
&= \max \left\{ C_{t_n} , g_n(S_{t_n}) \right\} \text{ (equation \eqref{eq:call_payof_continuous})}, \nonumber \\
\therefore \, C_{t_n^-} &= C_{t_n} \text{ (by equation \eqref{eq:last_dividend_call_intrinsic})}.
\end{align}
To complete the inductive argument, consider a time $t \in [t_k, t_{k+1})$, our inductive hypothesis is:
\begin{align}
C_{t_{k+1}^-} &\ge g_k(S_{t_{k+1}^-}),
\end{align}
from here we can proceed in the same manner as we did deriving equation \eqref{eq:last_dividend_call_intrinsic}:
\begin{align}
\mathbb E\left[\left. e^{-\int_t^{t_{k+1}} r_u\;\mathrm du} C_{t_{k+1}^-} \right| \mathcal F_t\right] &\ge \mathbb E\left[ \left.e^{-\int_t^{t_{k+1}} r_u\;\mathrm du} g_k(S_{t_{k+1}^-}) \right| \mathcal F_t\right] \nonumber \\
&\ge \mathbb E\left[ \left. g_k\left(e^{-\int_t^{t_{k+1}} r_u \;\mathrm du}S_{t_{k+1}^-}\right) \right| \mathcal F_t\right] \nonumber \\
&\ge g_k \left(\mathbb E\left[ \left. e^{-\int_t^{t_{k+1}} r_u\;\mathrm du}S_{t_{k+1}^-} \right| \mathcal F_t\right] \right)\nonumber \\
&= g_k \left( S_t \right),
\end{align}
which proves the induction. \qed
\begin{lemma} \label{lemma:call_payof_convex}
For any $g(x) = \max \left\{ x - K, 0\right\}$, $x,K > 0$ we have:
\begin{align}
g(a x) \le ag(x),
\end{align}
when $a \in [0, 1]$. The proof is a direct application of Jensen's inequality to the convex function
$g(x)$.
\end{lemma}
\end{pf}
\begin{pf}[Theorem \ref{thm:forward_price}]
First, note that the process $X_t = e^{-\int_0^{t} r_u\;\mathrm du}(S_t + I_t)$ is a martingale:
\begin{align}
\mathrm dX_t &= -r_t X_t \mathrm dt + e^{-\int_0^{t} r_u\; \mathrm du}(\mathrm dS_t + \mathrm dI_t) \nonumber \\
&= -r_t X_t\mathrm dt + e^{-\int_0^{t} r_u\;\mathrm du}(r_tS_t \mathrm dt + \sigma_t S_t\mathrm dW_t + \mathcal J S_t + r_tI_t \mathrm dt +\mathcal J I_t), \nonumber \\
&= -r_t X_t \mathrm dt + e^{-\int_0^{t} r_u\;\mathrm du}(r_tS_t \mathrm dt + \sigma_t S_t\mathrm dW_t + r_tI_t\mathrm dt) \text{, because }(\mathcal J S_t = - \mathcal J I_t),\nonumber \\
&= -r_t X_t\mathrm dt + r_tX_t \mathrm dt + e^{-\int_0^{t} r_u\;\mathrm du}\sigma_t S_t\mathrm dW_t , \nonumber \\
&= e^{-\int_0^{t} r_u\;\mathrm du}\sigma_t S_t\mathrm dW_t.
\end{align}
Therefore:
\begin{align}
\mathbb E\left[\left. e^{-\int_0^T r_u\;\mathrm du}(S_T + I_T) \right| \mathcal F_t \right] = e^{-\int_0^t r_u\;\mathrm du}(S_t + I_t) ,\nonumber \\
\mathbb E\left[\left. S_T \right| \mathcal F_t \right] = e^{\int_t^T r_u\;\mathrm du}(S_t + I_t) - I_T.
\end{align}
\qed
\end{pf}
\begin{pf}[Theorem \ref{thm:discrete_flt_derivative}]
Letting $x(n)= \rho n + x_0$ and taking the continuous derivative of $f(x)u(x-x_0)$ with respect to $x$:
\begin{align}
[f(x)u(x - x_0)]' &= f'(x)u(x-x_0) + f(x)\delta(x-x_0) \implies \\
\frac {\mathrm d [f(n)u(n)]}{\mathrm d n} \frac {\mathrm d n}{\mathrm d x} &= \frac {\mathrm df(x)} {\mathrm dx} u(n) + f(n) \mathbf 1_{n=0},
\end{align}
then applying the discrete Laplace transform to both sides:
\begin{align}\label{eq:proof_lft_deriv_eq1}
\sum_{n=0}^{N-1} e^{-s_kn} \frac {\mathrm d [f(n)u(n)]}{\mathrm d n} \frac {\mathrm d n}{\mathrm d x}
&=\sum_{n=0}^{N-1} e^{-s_kn} \frac {\mathrm df(x)} {\mathrm dx} u(n) + \sum_{n=0}^{N-1} e^{-s_kn} f(n) \mathbf 1_{n=0}, \nonumber \\
&=\sum_{n=0}^{N-1} e^{-s_kn} \frac {\mathrm df(x)} {\mathrm dx} + f_0.
\end{align}
The left-hand side of equation \eqref{eq:proof_lft_deriv_eq1} can be written with the inverse Laplace transform:
\begin{align}
\sum_{n=0}^{N-1} e^{-s_kn} \frac {\mathrm d [f(n)u(n)]}{\mathrm d n} \frac {\mathrm d n}{\mathrm d x}
&= \sum_{n=0}^{N-1} e^{-s_kn} \frac {\mathrm d\left( \frac 1 N \sum_{l=0}^{N-1} e^{s_l n} \bar f_l \right)} {\mathrm dn}\frac {1}{\rho}
= \sum_{n=0}^{N-1} e^{-s_kn} \left( \frac 1 N \sum_{l=0}^{N-1} e^{s_l n} s_l\bar f_l \right)\frac {1}{\rho} \nonumber \\
&= \frac 1 {N\rho} \sum_{n=0}^{N-1}\sum_{l=0}^{N-1} e^{(s_l-s_k) n} s_l\bar f_l
= \frac 1 {N\rho} \sum_{l=0}^{N-1} s_l\bar f_l\sum_{n=0}^{N-1} e^{\frac{i2\pi(l-k) n}{N}} \nonumber \\
&= \frac 1 {N\rho} \sum_{l=0}^{N-1} s_l\bar f_l N\mathbf 1_{l=k}
= \frac {s_k} {\rho} \bar f_k.
\end{align}
Taking sequences from $k \in \{0, \dots, N-1\}$ and rewriting equation \eqref{eq:proof_lft_deriv_eq1} in terms of the
discrete Laplace transform finishes the proof:
\begin{align}
\frac {\{s\}_N} {\rho} \mathcal L_d\{f\}_N &= \mathcal L_d\{f'(x)\}_N + f_0, \nonumber\\
\therefore \mathcal L_d\{f'(x)\}_N &=\frac {\{s\}_N} {\rho} \mathcal L_d\{f\}_N - f_0.
\end{align}
\qed
\end{pf}
\section{\label{sec:level1}Introduction}
One manifestation of the Dirac physics in graphene is a quantum Hall effect (QHE) \cite{Novoselov2005,Zhang2005} with an energy spectrum quantized in Landau levels (LLs) at energies $E_n=\pm v_{\mathrm{F}}\sqrt{2\hbar n e B}$, with a $4eB/h$ degeneracy (valley and spin) \cite{Goerbig2011} and a sequence of Hall resistance plateaus at $R_{\mathrm{H}}=\pm R_{\mathrm{K}}/[4(n+1/2)]$, where $n\geqslant 0$ and $R_{\mathrm{K}}\equiv h/e^2$. The QHE at LLs filling factor $\nu=\pm2$ ($\nu=n_\mathrm{s}h/eB$, where $n_\mathrm{s}$ is the carrier density) is very robust and can even survive at room temperature \cite{Novoselov2007}. This comes from an energy spacing $\Delta E(B)\approx 35\sqrt{B[\mathrm{T}]}~\mathrm{meV}$ between the first two degenerate LLs, which is larger than in GaAs ($\approx1.7B[\mathrm{T}]~\mathrm{meV}$), for accessible magnetic fields. This opens the door for a $10^{-9}$-accurate quantum resistance standard in graphene, surpassing the usual GaAs-based one by operating at lower magnetic fields ($B\leq$ 4 T), higher temperature ($T\geq$ 4 K), and higher measurement current ($I\geq100~\mu$A) \cite{Poirier2010}. From previous investigations of the QHE in graphene \cite{Giesberg2009,Tzalenchuk2010,Guignard2012,Wosczczyna2012}, it was concluded that achieving this goal requires at least the production of a large area graphene monolayer ($\sim 10~000~\mathrm{\mu m^{2}}$) of high carrier mobility $\mu > 10~000~\mathrm{cm^{2}V^{-1}s^{-1}}$ (assuming $\mu B\gg1$ stays a relevant quantization criterion \cite{Schopfer2012}) and homogeneous low carrier density ($n_{\mathrm{s}}<2\times 10^{11}\mathrm{cm^{-2}}$). However, the question arises whether some defects, specific to each source of graphene, can jeopardize the quantization accuracy.
It was thereby shown, using exfoliated graphene, that the presence of high density of charged impurities in the substrate on which graphene lies can limit the robustness of the Hall resistance quantization by a reduction of the breakdown current of the QHE \cite{Guignard2012}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=8.5cm]{Fig1.eps}
\caption{(a) Longitudinal conductance and carrier mobility vs. $V_g$ and (b) $R_\mathrm{H}$ and $R_\mathrm{xx}$ vs. $V_g$ for sample S1. Insert in (a): Hall bar optical image. The length scale (red segment) between voltage terminals is $200~\mathrm{\mu m}$ and equal to the Hall bar width.}\label{fig1}
\end{center}
\end{figure}
Although the quantization of $R_{\mathrm{H}}$ was measured with an uncertainty of $9\times 10^{-11}$ in a large $35\times 160~\mu \mathrm{m}^2$ sample made of graphene grown by sublimation of silicon from silicon carbide, at 14 T and $0.3~\textrm{K}$ \cite{Janssen2011}, it was recently demonstrated, both experimentally\cite{Chua2014} and theoretically\cite{Lofwander2013}, that bilayer stripes forming along the silicon-carbide edge steps during the growth and crossing the Hall bar, can short-circuit the edge states and strongly alter the Hall quantization.
Growth based on chemical vapor deposition (CVD) appears to be a promising route to produce large-area graphene with high mobility \cite{Petrone2012,Cummings2014}. The QHE is now commonly observed in such graphene. However, in a $\mathrm{7\times7~mm^2}$ sample, $R_{\mathrm{H}}$ at $\nu =2$ was found to deviate from $R_\mathrm{K}/2$ by more than $10^{-2}$, while the longitudinal resistance per square reached $R_\mathrm{xx} =200~\Omega$ \cite{Shen2011}, which is the mark of a high dissipation, still unexplained. In comparison, a GaAs-based quantum resistance standard satisfies $R_\mathrm{xx} < 100~\mathrm{\mu \Omega}$. This highlights the need for exploration of the precise electronic transport mechanisms at work in CVD graphene.
In this paper, we investigate the QHE in large Hall bars made of polycrystalline CVD graphene. We observe a strong dissipation characterized by an unexpected power law dependence of the conductance with \emph{T}, \emph{B}, and \emph{I}, which reveals an unconventional carrier backscattering mechanism. Structural characterizations bring out line defects crossing the devices, such as grain boundaries (GBs) or wrinkles naturally existing in polycrystalline CVD graphene. While some works exist at $B=0$ T \cite{Tsen2012, Tuan2013, Yazyev2010, Zhu2012, Pereira2010}, the impact on transport of these line defects has hardly been investigated, to our knowledge, in the QHE regime \cite{Jauregui2011, Ni2012, Calado2014}. With the support of numerical simulations we highlight their paramount role in limiting the Hall quantization.
\section{\label{sec:level1}Sample fabrication}
Large scale graphene films were grown on Cu foils by a standard CVD method. In this process, gaseous methane [2 sccm (sccm denotes standard cubic centimeter per minute at STP)] and hydrogen (70 sccm) precursors were introduced into a quartz tube reactor heated at 1000 $^{\circ}$C for 40 min under a total pressure of 1 mbar. After cooling, graphene was transferred onto a Si wafer with a 285 nm thick SiO$_2$ layer, by etching the underneath Cu using a 0.1 g/ml $\mathrm{(NH_4)_2S_2O_8}$ solution \cite{Han2014}. The Hall bar samples studied in the paper were fabricated by optical lithography, oxygen plasma etching and contacted with Ti/Au (5 nm/60 nm) electrodes. Both samples (S1 and S2) were grown and transferred in the same process. Sample S1 was measured as fabricated while sample S2 was annealed at $110\, ^{\circ}$C in a H$_2$/Ar atmosphere during 10 hours. Hall bar dimensions are $ 200 \times 400~\mathrm{\mu m^{2}}$ (inset of Fig. \ref{fig1}(a)). The main magneto-transport results concern sample S1; results from sample S2 are used to illustrate reproducibility and sample independence. Hence, unless otherwise specified, results and discussions concern sample S1.
\section{\label{sec:level1}Results and discussion}
\subsection{\label{sec:level2} Conductance laws}
Figure \ref{fig1}(a) shows the zero-magnetic-field conductance $G_{xx}=1/R_{xx}$, deduced from the resistance per square, as a function of the gate voltage $V_g$ at $0.3~\mathrm{K}$. The charge neutrality point (CNP) is positioned at $V_g=3.5~\mathrm{V}$, which indicates a residual hole density of $\mathrm{\sim2.6\times 10^{11}cm^{-2}}$, assuming a $\mathrm{SiO_2}$/Si back-gate efficiency of $7\times10^{10}~\mathrm{cm^{-2}/V}$. At high carrier density ($\sim1\times 10^{12}\mathrm{cm^{-2}}$), the hole (electron) mobility is $\sim 3100~\mathrm{{cm}^{2}V^{-1}s^{-1}}$ ($\sim 2300~\mathrm{{cm}^{2}V^{-1}s^{-1}}$). The electron phase coherence length $L_\mathrm{\phi}$, the inter-valley scattering length $L_\mathrm{iv}$, and the intra-valley scattering length are $\mathrm{0.9~\mu m}$, $\mathrm{0.3~\mu m}$ and $\mathrm{0.1~\mu m}$, respectively, as deduced from the measurement (see Appendix A) of the weak localization correction to the conductance at $0.3~\mathrm{K}$ \cite{McCannWL2006}. The lower value of $L_\mathrm{iv}$ compared to $L_\mathrm{\phi}$ indicates the presence of a significant concentration of short-range scatterers.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=8.5cm]{Fig2.eps}
\caption{(a) $G_\mathrm{xx}$ and (b) $G_\mathrm{xy}$ vs. $\nu$ for \emph{T} between 0.3 K and 40 K at 19 T, obtained in sample S1. The temperature color code applies to both panels. Arrows in (a) indicate the values of $\nu$ at which measurements in Fig. 3(a) are performed.}\label{fig2}
\end{center}
\end{figure}
The Hall resistance, $R_{\mathrm{H}}$, measured at 0.3 K and 19 T, is reported as a function of $V_g$ in Fig. \ref{fig1}(b). It features well-developed $R_{\mathrm{H}}$ plateaus at values $h/\nu e^2$ for $\nu=\pm2,\pm6$, which coincide with the minima of the longitudinal resistance per square $R_\mathrm{xx}$. Close to the CNP, additional high resistance peaks with $R_\mathrm{H},R_\mathrm{xx}\gg h/e^2$ are observed, corresponding to plateaus with transverse conductance $G_\mathrm{xy}= R_{\mathrm{H}} / (R_{\mathrm{H}}^{2} + R_{\mathrm{xx}}^{2})$ around $0$ and $e^2/h$ in Fig. \ref{fig2}(b). These plateaus are accompanied by minima of the longitudinal conductance per square $G_\mathrm{xx}=R_{\mathrm{xx}} / (R_{\mathrm{H}}^{2} + R_{\mathrm{xx}}^{2})$ also located around $\nu=0$ and $\nu=1$, respectively, Fig. \ref{fig2}(a). Such conductance plateaus can be explained by the degeneracy lifting of the $n=0$ LL \cite{Goerbig2011,Kharitonov2012}, which is usually observed in graphene with much higher carrier mobility. We therefore do not exclude the possibility that the carrier mobility inside a monocrystalline grain would be higher than the moderate value calculated from the mean conductance $G_\mathrm{xx}$ averaged over several grains. More extensive analysis of these additional plateaus is beyond the scope of this article.
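The conductance components quoted here follow from inverting the measured $2\times2$ resistance tensor. A minimal helper illustrating the conversion (an illustrative sketch, not the authors' analysis code):

```python
def conductances(R_xx, R_H):
    """Tensor inversion: G_xx = R_xx/(R_H^2 + R_xx^2), G_xy = R_H/(R_H^2 + R_xx^2)."""
    d = R_H ** 2 + R_xx ** 2
    return R_xx / d, R_H / d
```

On an ideal $\nu=-2$ plateau ($R_{\mathrm{xx}}=0$, $R_{\mathrm{H}}=R_{\mathrm{K}}/2\approx 12906.4~\Omega$) this returns $G_{\mathrm{xx}}=0$ and $G_{\mathrm{xy}}=2e^2/h$; any finite $R_{\mathrm{xx}}$ pushes $G_{\mathrm{xy}}$ below the quantized value.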
Although nice plateaus are observed, it turns out that $R_{\mathrm{H}}$ is not well quantized, even on the $\nu=-2$ plateau, deviating from $R_\mathrm{K}/2$ by more than $10^{-2}$ in relative value at a current of $1~\mu$A, while $R_{\mathrm{xx}}$, which reflects the dissipation arising from backscattering between counter-propagating quantum Hall edge states, is
higher than $150~\Omega$. This is unexpected since the quantization of $R_{\mathrm{H}}$ has been measured with uncertainties several orders of magnitude lower in exfoliated samples smaller than ours and with similar carrier mobilities \cite{Giesberg2009,Guignard2012,Wosczczyna2012}. This shows that the transport properties in the QHE regime are very sensitive to the defect type and that the mobility at $B=0$ T does not constitute a sufficient criterion of quantization.
\begin{figure}[h]
\begin{center}
\includegraphics[width=8.5cm]{Fig3.eps}
\caption{(a) $G_\mathrm{xx}$ vs. $T$ in log-log scale at 19 T for S1. Inset: $G_\mathrm{xx}$ in log scale vs. 1/\emph{T} for $\nu=-1.7$ at 19 and 10 T and at $\nu=-2.3$ for comparison. (b) $G_\mathrm{xx}$ vs. $T$ in log-log scale for S2. Inset: $G_\mathrm{xx}$ vs. $\nu$ at 0.3 K, arrows indicate the values of $\nu$ at which measurements are performed.}\label{fig3}
\end{center}
\end{figure}
To identify the mechanism responsible for this loss of quantization, we analysed $G_\mathrm{xx}$, known as the quantization parameter \cite{Jeckelmann2001},
over a large range of $\nu$ values, at several temperatures between 0.3 K and 40 K (see Fig. \ref{fig2}(a)), and at magnetic fields between 5 T and 19 T. Measurements of $R_{\mathrm{H}}$
and $R_{\mathrm{xx}}$ were carried out using a low-frequency AC measurement current of 1 nA, which ensures the absence of current effects, see Fig. \ref{fig4}(b). Except for $\nu=-1.7$, where $G_\mathrm{xx}$ reaches its minimum, and at \emph{B}=19 T, it appears for both types of carriers (electrons and holes) that neither $G_\mathrm{xx}(T)$ nor $G_\mathrm{xx}(B)$ (Figs. \ref{fig3}(a) and \ref{fig4}(a), respectively) has an exponential behavior, which would be expected for a dissipation mechanism based on thermal activation to a higher-energy LL or variable range hopping (VRH) through localized states in the bulk. This greatly differs from what has been observed in both exfoliated \cite{Giesbergact2007,Giesbergvrh2009,Bennaceur2012} and epitaxial graphene \cite{Tzalenchuk2011}. Rather, whatever the quantum Hall state, at $\nu=\pm2$ or $\pm6$, $G_\mathrm{xx}$ follows a power law dependence as a function of
temperature ($G_\mathrm{xx}\propto T^{\alpha}$) and magnetic induction ($G_\mathrm{xx}\propto B^{-\beta}$) with $\alpha\in [0.3,1.1]$ (at 19 T) and $\beta \in [2.1, 3.4]$ (at 0.3 K). The temperature dependence becomes smoother with $\nu$ moving away from the conductance minimum. For $G_\mathrm{xx}(T)$, we can also define two temperature regimes characterized by larger $\alpha$
at lower temperature and a smooth crossover. In a given temperature regime and magnetic field, $\alpha$ slightly varies with $\nu$, away from the LL centers. The same temperature behavior
of $G_\mathrm{xx}$, with similar $\alpha$ values, was observed in sample S2, Fig. \ref{fig3}(b). In S1, the dependence of $G_\mathrm{xx}$ on $T$ ($B$) becomes smoother with decreasing $B$ (increasing $T$)(Fig. \ref{fig3}(a) and \ref{fig4}(a)), characterized by decreasing values of $\alpha$ ($\beta$). Such behaviors are consistent with a reducing inter-LL energy gap. Interestingly,
the $G_\mathrm{xx}$ power law temperature dependence, observed for $\nu$ corresponding to $G_\mathrm{xx}$ minima, is similar to that observed at $G_\mathrm{xx}$ maxima, where charge transport is known to occur through extended LL states (as shown for $\nu=-4$ in Fig. \ref{fig3}(a)). This suggests the scenario that the strong backscattering observed near $\nu=\pm2$ and $\pm6$ is caused by extended
or poorly localized states existing at energies between LLs.
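The exponents $\alpha$ and $\beta$ above are log--log slopes of the measured conductance. A minimal least-squares sketch of the extraction (ours, not the authors' fitting code):

```python
import math

def power_law_exponent(x, y):
    """Least-squares slope of log(y) vs. log(x), i.e. alpha in y ~ x**alpha."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    sxx = sum((v - mx) ** 2 for v in lx)
    sxy = sum((u - mx) * (v - my) for u, v in zip(lx, ly))
    return sxy / sxx
```

Applied to $G_\mathrm{xx}(T)$ data over a single temperature regime this returns $\alpha$; applied to $G_\mathrm{xx}(B)$ at fixed $T$ it returns $-\beta$.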
\begin{figure}[t]
\begin{center}
\includegraphics[width=8cm]{Fig4.eps}
\caption{(a) $G_\mathrm{xx}$ vs. $B$ in log-log scale at 0.3 K for different filling factors for S1. (b) $G_\mathrm{xx}$ vs. $I$ and $G^T_\mathrm{xx}$ vs. $I^{*}$ in log-log scale for the two samples with $I^{*}[\mathrm{A}]=0.87\times10^{-6}~T[\mathrm{K}]^{1.74}$ for S1 and $I^{*}[\mathrm{A}]=0.6\times10^{-6}~T[\mathrm{K}]^{2.1}$ for S2.}\label{fig4}
\end{center}
\end{figure}
At $\nu=-1.7$, a fit of $G_\mathrm{xx}(T)$ with an Arrhenius law $\propto \exp[-(T_{\mathrm{act}}/T)]$ results in an activation temperature of 2.4 K $\ll\Delta E(B=19~\mathrm{T})/k_\mathrm{B}\sim 1834~\mathrm{K}$ (inset of Fig. 3(a)), suggesting mobility edge energies unexpectedly far from the LL centers and confirming the fragility of the $R_{\mathrm{H}}$ quantization. A fit with a VRH theory including a soft Coulomb gap \cite{Shklovskii1984}, $G_{\rm xx}\propto(1/T)\exp(-(T_0/T)^{1/2})$, is also possible and leads to $T_0=27~\mathrm{K}$ and a high value for the localization length $\xi=Ce^2/(4\pi\epsilon_0\epsilon_r k_\mathrm{B}T_0)$ (with $C\sim6.2$ \cite{Furlan1998}), equal to $\sim 1~\mathrm{\mu m}\gg l_B(19~\mathrm{T})\sim 6~\mathrm{nm}$ \cite{CommentXsi,Bennaceur2012}, which is the mark of poorly localized states in the bulk that can even have a metallic behavior since $\xi\geq L_\mathrm{\phi}$. Decreasing the magnetic field from 19 T to 10 T, while $\nu$ is fixed at -1.7, results in a transition to a power law temperature dependence [Fig. 3(a)(inset)]. This can be explained once again by the delocalization of states between LLs because of a further increasing $\xi$, and a decreasing inter-LL energy gap.
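The two length scales invoked in this comparison follow directly from the quoted numbers. In the sketch below the effective dielectric constant $\epsilon_r\approx3.9$ (the SiO$_2$ value) is our assumption; the text above only quotes the resulting $\xi\sim1~\mathrm{\mu m}$:

```python
import math

# Standard physical constants (SI).
HBAR = 1.054571817e-34    # J s
E_CH = 1.602176634e-19    # C
K_B = 1.380649e-23        # J/K
EPS0 = 8.8541878128e-12   # F/m

def magnetic_length_nm(B):
    """l_B = sqrt(hbar / (e B)), in nm."""
    return math.sqrt(HBAR / (E_CH * B)) * 1e9

def vrh_localization_length_um(T0, C=6.2, eps_r=3.9):
    """xi = C e^2 / (4 pi eps0 eps_r k_B T0), in micrometres (eps_r is our assumption)."""
    return C * E_CH ** 2 / (4 * math.pi * EPS0 * eps_r * K_B * T0) * 1e6
```

With $T_0=27$ K this gives $\xi\approx1~\mathrm{\mu m}$, to be compared with $l_B(19~\mathrm{T})\approx5.9$ nm.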
The analysis of the dependence of $G_\mathrm{xx}$ on the current is also instructive. Near $\nu=-2$, a significant increase of $G_\mathrm{xx}$ starting from currents as low as 100 nA indicates a breakdown current density of the QHE lower than $5\times10^{-3}$ A/m, which is unexpectedly small compared to values measured in epitaxial graphene (up to 43 A/m at 23 T) \cite{Alexander-Webber2013} or in exfoliated graphene (0.5 A/m at 18 T) \cite{Wosczczyna2012}. This also suggests the existence of extended states accessible at low electric field. Moreover, Fig. 4(b) shows that a similar current-temperature conversion relationship, $I^{*}\propto T^p$ with $p\sim 2$, exists for both samples S1 and S2. This allows for a good superposition of $G_\mathrm{xx}(I)$ and $G_\mathrm{xx}^T(I^{*})$,
where $G_\mathrm{xx}(T)=G_\mathrm{xx}^T(I^{*})$, on a common current scale at sufficiently high $I$ such that $G_\mathrm{xx}$ is not limited by $T$. A relationship $I\propto T$ is expected in the QHE regime from the VRH mechanism \cite{Furlan1998}, as it has been observed in exfoliated graphene \cite{Bennaceur2012}. On the other hand, $I\propto T^2$ was observed in graphene in the metallic regime, at low magnetic field \cite{Baker2012} or in the regime of Shubnikov--de Haas oscillations \cite{Baker2013}, and explained by the coupling of carriers to acoustic phonons. The predicted relationship between the current and the temperature is given by $I=\sqrt{\sqrt{n_s}A\gamma/R_{\rm xx}(B=0)}T^{2}$ where $n_s$ is the carrier density, $A$ is the sample area and $\gamma=5.36\times 10^{-26}\mathrm{WK^{-4}m}$ is a constant \cite{Kubakaddi2009,Baker2012}. Considering $R_{\rm xx}(B=0)=1.8~\mathrm{k\Omega}$ at $n_s\sim 1\times10^{12}\mathrm{cm^{-2}}$ (hole density corresponding to $\nu=-2$ at \emph{B}=19 T), one calculates $I[\mathrm{A}]\sim1.09\times10^{-6}~T[\mathrm{K}]^{2}$, which is in good agreement with our experimental determination $I^{*}[\mathrm{A}]=0.87\times10^{-6}~T[\mathrm{K}]^{1.74}$ for sample S1 and $I^{*}[\mathrm{A}]=0.6\times10^{-6}~T[\mathrm{K}]^{2.1}$ for sample S2 (see Fig. \ref{fig4}(b)). This suggests that we can ascribe our observation of $I\propto T^2$ to the manifestation of a metallic regime, which involves extended or poorly localized states, in a weakened QHE regime.
\begin{figure}[h]
\begin{center}
\includegraphics[width=8.5cm]{Fig5.eps}
\caption{(a) Optical and (b) atomic force microscopies. (c) Raman D peak map (scale bar is $1.5~\mathrm{\mu m}$). Figures (a)-(c) concern the same area of sample S2. (d) Representation of the network of line defects corresponding to short-circuit paths between the sample edges. (e) Raman signal on (A) and away (B) from a wrinkle. Inset: zoom in the D peak zone of the Raman spectra.}\label{fig5}
\end{center}
\end{figure}
\subsection{\label{sec:level2} Structural characterizations}
To better understand our results, complementary structural analyses were performed combining different techniques (Fig. 5). Optical and atomic force microscopy reveal the existence of multilayer patches and a high density and variety of wrinkles. Multilayer patches are known to form locally during CVD growth \cite{Han2014}. Assuming they are located at the center of the grains, from their spacing we can deduce typical monocrystalline grain sizes ranging from $\mathrm{1~\mu m}$ to $\mathrm{10~\mu m}$ (GBs were not directly observable with the techniques used).
Given the small size of the patches (Fig. 5(a)) compared to the width of the Hall bars and the ability of carriers to skirt local defects in the QHE regime \cite{Yoshioka1998}, these patches are not expected to cause the observed strong backscattering. In the same way, only large bilayer stripes crossing the Hall bar channel are expected to significantly alter the perfect quantization \cite{Chua2014,Lofwander2013}. Raman spectroscopy in most of the optically clean areas indicates high quality graphene, since no D-peak is observable (Fig. 5(c)) \cite{Ferrari2007}. On the other hand, a D-peak, which confirms the existence of sharp defects (as already revealed by the weak localization transport experiments), is measured at locations on most wrinkles. Such a Raman D-peak is the signature of underlying defects such as vacancies or GBs \cite{QingkaiYu2011, Duong2012}. In our samples, wrinkles and GBs are likely to form a continuous network connecting Hall bar edges. Carriers moving from source to drain then cannot avoid crossing some line defects (Fig. 5(d)), which is
expected to impact charge transport.
\begin{figure}[h]
\begin{center}
\includegraphics[width=8.5cm]{Fig6.eps}
\caption{(a) Two-terminal magnetoconductance of a pristine aGR, and of an aGR with an 8-5 line defect crossing the sample (represented in (b)), including a random disorder potential of $W=0.4$ eV (blue line) and $W=2$ eV (red line). (b) Representation of the 8-5 line defect crossing the aGR. (c),(d) Spatial distribution of the electrons injected from the source contact (on the right) at 200 meV, for (c) $W=0.4$ eV and (d) $W=2$ eV.}\label{fig6}
\end{center}
\end{figure}
\subsection{\label{sec:level2} Numerical simulations}
To more closely study this impact on the QHE, we performed numerical calculations of the two-terminal conductance of a 200 nm wide armchair graphene ribbon (aGR) crossed by a line of pentagons and octagons \cite{Bahamon2011,Song2012} by using the Green's function approach within the tight-binding framework \cite{Cresti2006}. To simulate a more realistic line defect, a random (Anderson \cite{Anderson1958}) potential with a uniform distribution in the range [-\emph{W}/2,+\emph{W}/2], where \emph{W} is the disorder strength, was introduced on the line defect sites (Fig. 6(b)) to mimic a generic short-range disorder, such as that generated by adatoms or vacancies.
In the QHE regime, the calculations reported were performed at \emph{B}=80 T so that $l_B\sim 3~\mathrm{nm}$ is significantly smaller than the ribbon width (in a ratio similar to that of the experimental $l_B$ to the smallest grain size) and larger than the interatomic distance. For a 100-nm-wide ribbon and \emph{B}=40 T, qualitatively very similar results (not shown) were obtained. For weak disorder ($W=0.4~\mathrm{eV}$), the calculated conductance almost systematically deviates from the value expected for pristine graphene by up to one spin-degenerate conduction channel [Fig. 6(a)], a deviation significantly larger than what is experimentally observed. The deviation is higher for electrons than for holes; the asymmetry results from the sublattice symmetry breaking caused by the line defect.
As demonstrated in Fig. 6(c), the deviation of the conductance from the case of pristine graphene is caused by a circulating current along the line defect. An analysis of the energy spectrum shows that counter-propagating states on either side of the line defect can hybridize and form non-chiral quasi-1D extended states \cite{Cummings} able to carry current, which crosslink the opposite sample edges. Acting as a direct short-circuit, such states are responsible for a strong carrier backscattering. Remarkably, higher Anderson disorder reinforces wave-function localization along the line defect and reduces the circulation of current (Fig. 6(d)), which finally improves the Hall conductance quantization. It is also found that, due to the disorder, the deviation of the Hall conductance from pristine quantization reduces with increasing magnetic field and sample width (i.e. the length of the line defect network), both of which enhance the localization. See Appendix B for additional details. Thus, a moderate alteration of the Hall conductance quantization comparable to what is experimentally observed can be reproduced.
Moreover, even though the simulations were run at 0 K, the existence of extended or poorly localized states along the line defect suggests smooth temperature behavior. Localization by strong disorder along the line defect also leads to the possible observation of VRH or thermal activation behavior, characteristic of an Anderson insulator. This is in good agreement with our experimental observations, since, following the proposed scenario, $G_\mathrm{xx}$ measured at $\nu$ values corresponding to minima should be dominated by the conductance along the line defects, which is much higher than the bulk conductance inside the grains.
Finally, calculations performed for scrolled graphene \cite{Cresti2012} indicate that wrinkles are also expected to alter the Hall conductance quantization in a similar fashion. Recent experimental results also suggest such an impact \cite{Calado2014}.
\section{\label{sec:level1}Conclusion}
To conclude, in polycrystalline CVD graphene characterized by a high density of line defects such as GBs and wrinkles, we highlight an unusual highly dissipative electronic transport in the QHE regime, which reveals the existence of poorly localized states between LLs and manifests itself as a deviation of $R_{\mathrm{H}}$ from the pristine quantization. Numerical simulations confirm that such states can exist along a line defect crossing a Hall bar and yielding strong backscattering between edge states. The impact of these line defects turns out to be similar to that of crossing bilayer stripes in graphene grown by sublimation of silicon from silicon carbide \cite{Chua2014}. Further theoretical work, possibly considering Coulomb interactions and Luttinger physics \cite{Fisher1997}, is required to explain the observed temperature, magnetic field and current dependence of $G_\mathrm{xx}$. Our work also motivates the investigation of the QHE in CVD graphene monocrystals, whose achievable size is continuously increasing \cite{Zhou2013}, not only to discern the respective roles of GBs and wrinkles but also to progress towards an operational graphene-based quantum resistance standard. More generally, QHE turns out to be an extremely efficient tool to reveal line defects in 2D materials whose precise
characterization is crucial in view of future applications.
\begin{acknowledgments}
We wish to acknowledge D. Leprat and L. Serkovic for technical support, D. C. Glattli, J.-N. Fuchs, M. O. Goerbig, S. Florens and Th. Champel for fruitful discussions. This research has received funding from the Agence nationale de la Recherche (ANR), Metrograph project (Grant No. ANR-2011-NANO-004). It has been performed within the EMRP (European Metrology Research Program), project SIB51, Graphohm. The EMRP is jointly funded by the EMRP participating countries within EURAMET (European association of national metrology institutes) and the European Union.
\end{acknowledgments}
\section{Introduction} \label{intro}
Let $\mathbb{F}_q$ denote the finite field of order $q$, a power of the prime $p$. The proliferation of primitive elements of $\mathbb{F}_q$ gives rise to many interesting properties. For example, it was proved in \cite{COT} that for any non-zero $\alpha, \beta, \epsilon \in \mathbb{F}_q$ the equation $
\epsilon = a \alpha + b \beta$
is soluble in primitive elements $a, b$ provided that $q>61$. Since $a$ is primitive if and only if $a^{-1}$, its multiplicative inverse in $\mathbb{F}_q$, is primitive, one may look for linear relations amongst primitive elements and their inverses and, as in the above example, seek a lower bound on $q$ beyond which such relations hold --- this is the purpose of the current paper.
Given
arbitrary non-zero elements $u, v \in\mathbb{F}_q$, call a pair ($a,b$) of primitive elements of $\mathbb{F}_q$ \emph{$(u,v)$-primitive}
if additionally the elements $ua+vb$ and $va^{-1}+ub^{-1}$ are each primitive. The task is to
find an asymptotic expression for $N=N(q,u,v)$, defined as the number of $(u,v)$-primitive pairs $(a, b)$ in $\mathbb{F}_q$.
In the situation in which $\mathbb{F}_q$ is a prime field, i.e., $q=p$, this problem was introduced by Li and Han \cite{LiHa}. In that context, $a, b$ are considered
to be integers in $I_{p}= \{1,2, \ldots, p-1\}$ with inverses $a^{-1}, b^{-1} \in I_p$. Similarly, $u, v$ can be taken to be in $I_p$.
To state the result of \cite{LiHa} we introduce some notation. For a positive integer $m$ let $\omega(m)$ be the number of distinct prime divisors of $m$ and
$W(m)=2^{\omega(m)}$ be the number of square-free divisors of $m$.
Further, define $\theta(m)$ as $\phi(m)/m$, where $\phi$ is Euler's function, and
$\tau(m)=\prod_{l|m}\left(1-\frac{1}{l-1}+\frac{1}{(l-1)^2}\right)$, where the product is taken over all $\omega(m)$ distinct prime divisors $l$ of $m$.
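These arithmetic functions are elementary to compute; the following sketch (ours, not from the paper) implements them by trial division:

```python
from math import prod

def prime_divisors(m):
    """Distinct prime divisors of m, by trial division (adequate for moderate m)."""
    ps, d = [], 2
    while d * d <= m:
        if m % d == 0:
            ps.append(d)
            while m % d == 0:
                m //= d
        d += 1
    if m > 1:
        ps.append(m)
    return ps

def omega(m): return len(prime_divisors(m))                       # distinct prime divisors
def W(m): return 2 ** omega(m)                                    # square-free divisors
def theta(m): return prod(1 - 1 / l for l in prime_divisors(m))   # = phi(m)/m
def tau(m): return prod(1 - 1 / (l - 1) + 1 / (l - 1) ** 2 for l in prime_divisors(m))
```

For example, $m=12$ gives $\omega=2$, $W=4$, $\theta=1/3$ and $\tau=3/4$.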
\begin{thm} [Li--Han]
\label{sheep}
Let $p$ be an odd prime and $n$ any integer in $I_p$. Set $\theta =\theta(p-1)$, $\tau= \tau(p-1)$ and $W=W(p-1)$. Then
\begin{equation}
\label{goat}
\left|N(p,1,n)-\theta^3\tau\cdot (p-1)^2\right| \leq 5 \theta^4 W^4 p^{3/2}.
\end{equation}
\end{thm}
Li and Han gave the following as corollaries to Theorem \ref{sheep}.
\begin{cor}[Li--Han]\label{lamb}
Every sufficiently large $p$ has primitive roots $a$ and $b$ such that both $a+b$ and $a^{-1}+b^{-1} $ are also primitive. Also,
every sufficiently large $p$ has primitive roots $a$ and $b$ such that both $a-b$ and $b^{-1}-a^{-1} $ are also primitive.
\end{cor}
We establish an improved estimate for $N(q, u, v)$ in the case of a general finite field.
\begin{thm}\label{bull} Let $q>2$ be a prime power. Set $\theta= \theta(q-1), \tau=\tau(q-1), W=W(q-1)$.
Then, for arbitrary non-zero $u, v \in \mathbb{F}_q$,
\begin{equation}\label{horse}
\left| N(q,u,v) - \theta^3\tau\cdot(q-1)\ q\right| \leq \theta^4 W^3\cdot(q-1)\sqrt{q}.
\end{equation}
\end{thm}
\smallskip
The principal improvement in Theorem \ref{bull} over Theorem \ref{sheep} is the reduction from $W^4$ to $W^3$ in the error term.
Its effect can be described as follows. Let $\mathcal{S}$ be the set of prime powers $q$ such that, for any pair
of non-zero elements ($u,v$) in $\mathbb{F}_q$, there exists a $(u,v)$-primitive pair in $\mathbb{F}_q$. Explicit calculations using
(\ref{goat}) guarantee that all $q$ exceeding $5.7\times10^{364}$ (or with $\omega(q-1)>150$) are in $\mathcal{S}$.
On the other hand, using (\ref{horse}), we conclude that all prime powers $q$ exceeding $1.7 \times 10^{84}$ (or with $\omega(q-1) >46$)
are in $\mathcal{S}$.
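These thresholds come from comparing the main and error terms in (\ref{horse}): the lower bound implied by Theorem \ref{bull} is positive precisely when $\tau(q-1)\,q>\theta(q-1)\,W(q-1)^{3}\sqrt{q}$, equivalently $\sqrt{q}>\theta W^{3}/\tau$, so only the prime divisors of $q-1$ and the size of $q$ matter. A sketch of this check (ours; the sample values of $q$ below are purely illustrative):

```python
import math

def bull_bound_positive(q, primes):
    """True iff the main term exceeds the error term in the estimate for N(q,u,v),
    i.e. tau(q-1)*q > theta(q-1)*W(q-1)^3*sqrt(q); `primes` lists the distinct
    prime divisors of q-1."""
    theta = math.prod(1 - 1 / l for l in primes)
    tau = math.prod(1 - 1 / (l - 1) + 1 / (l - 1) ** 2 for l in primes)
    W_cubed = (2 ** len(primes)) ** 3
    return tau * q > theta * W_cubed * math.sqrt(q)
```

For instance, the criterion fails for $q=7$ (where $q-1=6$ has prime divisors $2$ and $3$) but holds for any sufficiently large $q$ with the same radical of $q-1$.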
For existence questions
it is clear that the interest in Theorems \ref{sheep}
and \ref{bull} lies in their lower bounds. Hence we shall describe a method that, while not delivering an asymptotic estimate,
establishes a lower bound for $N(q,u,v)$. This yields a non-trivial lower bound applicable to a wider range of prime powers $q$.
Let $\mathop{\mathrm{Rad}}(m)$ be the {\em radical} of $m$, i.e., the
product of the distinct primes dividing a positive integer $m$, and let $\mathop{\mathrm{Rad}}(q-1)$
be expressed as $kp_1\cdots p_s$
for some divisor $k$ and distinct primes $p_1, \ldots,p_s$.
Define $\delta_4=1-4\sum_{i=1}^s\frac{1}{p_i}$.
\begin{thm}\label{platinum}
Suppose $\delta_4 >0$. Set $\theta=\theta(k), \tau=\tau(k), W=W(k)$. Then
$$ N(q,u,v) \geq \delta_4 \theta^3\cdot(q-1)\{\tau\ q -\theta W^3\sqrt{q}\}.$$
\end{thm}
A consequence of Theorem \ref{platinum} is that all prime powers $q$
exceeding $6.9\times10^{10}$
are in $\mathcal{S}$.
A stronger conclusion, however, can be drawn by introducing a subset $\mathcal{T} \subseteq \mathcal{S}$.
Define a single primitive element $a$ to be \emph{$(u,v)$-primitive} if, additionally, $ua+va^{-1}$ is primitive, and define $\mathcal{T}$ as the set of prime powers $q$
such that, for any pair of non-zero elements ($u,v$) in $\mathbb{F}_q$, there exists a $(u,v)$-primitive element in $\mathbb{F}_q$ (in the above sense).
Easily, if $a$ is a $(u,v)$-primitive element, then $(a,a^{-1})$ is a $(u,v)$-primitive pair so that, indeed, $\mathcal{T} \subseteq \mathcal{S}$.
For even $q$ the existence of $(1,1)$-primitive elements was the simpler topic\footnote{The harder problem treated in \cite{WaCaFe} and \cite{Co14} concerned the existence of a $(1,1)$-primitive element in an extension
field $\mathbb{F}_{q^n}$ which is also normal over the base field $\mathbb{F}_q$.} considered by Wang, Cao and Feng in \cite{WaCaFe}. Their investigations were completed by Cohen \cite{Co14} --- see also the reference to \cite{Co87} at the end of Section \ref{existence}.
Results on the existence of $(1,1)$-primitive elements in
$\mathbb{F}_q$ have recently been given by Liao, Li and Pu \cite{LiLiPu}. In this paper
we use a sieving method and some computation to establish the
following theorem.
\begin{thm}\label{ox} Define
\begin{equation}\label{fridge}
\begin{split}
\mathcal{E_T}&=\{2,3,4,5,7,9,11,13,19,25,29,31,37,41,43,49,61,81,97,121,169\},\\
\mathcal{E_S} &= \{2,3, 4,5,7, 13\}.
\end{split}
\end{equation}
Then $\mathcal{E_T}$ is the set of prime powers \emph{not} in $\mathcal{T}$ and $\mathcal{E_S}$ is the set of prime powers \emph{not} in $\mathcal{S}$.
\end{thm}
This generalises and resolves completely the problem posed by Li and Han in \cite{LiHa}. From Theorem \ref{ox} we can easily deduce the following, which resolves completely the `sufficiently large' condition of Corollary \ref{lamb}.
\begin{cor} \label{calf} Let $q$ be a prime power.
\begin{enumerate} [label=\emph{(\roman*)}]
\item Suppose $q \not \in \{2,3,4,5,7,9,13,25,121\}$. Then there is a primitive element $a$ in $\mathbb{F}_q$ such that $a+ a^{-1}$ is primitive.
\item Suppose $q \not \in \{2,3,4,5,9,13,25,61,121\}$. Then there is a primitive element $a$ in $\mathbb{F}_q$ such that $a-a^{-1}$ is primitive.
\item Suppose $q \not \in \{2, 3, 4, 5, 7, 13\}$. Then there are primitive elements $a$ and $b$ in $\mathbb{F}_q$ such that both $a+b$ and $a^{-1}+b^{-1} $ are also primitive.
\item Suppose $q \not \in \{2, 3, 4, 5, 13\}$. Then there are primitive elements $a$ and $b$ in $\mathbb{F}_q$ such that both $a-b$ and $b^{-1}-a^{-1} $ are also primitive.
\end{enumerate}
\end{cor}
The outline of this paper is as follows. In \S \ref{prelim} we introduce some notation that, in \S \ref{buffet}, allows us to prove Theorem \ref{bull}. In \S \ref{pairs} we introduce a sieve and prove Theorem \ref{platinum}. In \S \ref{skew} we introduce some asymmetry and prove Theorem \ref{emerald}, which is sometimes stronger in practice than Theorem \ref{platinum}. In \S \ref{earl} we prove Theorems \ref{lioncub} and \ref{tiger}, which are criteria for membership of $\mathcal{T}$. Finally, in \S \ref{existence} and \S\ref{comp_res} we present theoretical and computational results that prove Theorem~\ref{ox}.
\section{Preliminaries}\label{prelim}
To set Theorem \ref{platinum} and subsequent results in context, we introduce an extension of the concept of a primitive element in $\mathbb{F}_q$.
Let $e$ be a divisor of $q-1$. Then a non-zero element
$a \in \mathbb{F}_q$ is defined to be {\em $e$-free} if $a=b^d$, where $b \in \mathbb{F}_q$ and $d|e$, implies $d=1$. This property depends only on $\mathop{\mathrm{Rad}}(e)$.
In particular, $a$ is primitive if and only if it is $(q-1)$-free.
Given $e|q-1$, the characteristic function $\lambda_e$ for the subset of $e$-free elements of $\mathbb{F}_q^*$ is expressed in terms of the multiplicative
characters of $\mathbb{F}_q$
and is given by
$$ \lambda_e(a) =\theta(e) \sum_{d|e} \frac{\mu(d)}{\phi(d)}\sum_{\chi \in \Gamma_{d}} \chi(a). $$
Here $\Gamma_{d}$ denotes the set of $\phi(d)$ multiplicative characters of order $d$.
Consistent with dependence only on $\mathop{\mathrm{Rad}}(e)$ is the fact that the only non-zero contributions to
$\lambda_e(a)$ can arise from \emph{square-free} values of $d$: we can assume throughout that every value of $d$ considered is square-free.
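For a prime field the character-sum formula for $\lambda_e$ can be checked numerically against the definition of $e$-free elements. In the sketch below the choices $q=13$ and generator $g=2$ are ours; the characters are realised as $\chi_j(g^k)=e^{2\pi i jk/(q-1)}$, so that $\chi_j$ has order $(q-1)/\gcd(j,q-1)$:

```python
import cmath, math
from math import gcd

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def mu(n):
    """Moebius function, by trial division."""
    if n == 1:
        return 1
    res, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            res = -res
        d += 1
    if n > 1:
        res = -res
    return res

def lam(a, e, p, g):
    """theta(e) * sum_{d|e} mu(d)/phi(d) * sum_{chi of order d} chi(a), in F_p."""
    n = p - 1
    k, x = 0, 1                        # discrete log: a = g^k (brute force)
    while x != a % p:
        x, k = x * g % p, k + 1
    s = sum(mu(d) / phi(d) *
            sum(cmath.exp(2j * math.pi * j * k / n)
                for j in range(n) if n // gcd(j, n) == d)
            for d in divisors(e) if mu(d) != 0)   # only square-free d contribute
    return (phi(e) / e * s).real
```

With $e=q-1=12$ the formula returns $1$ exactly on the four primitive roots of $\mathbb{F}_{13}$ and $0$ elsewhere, as the characteristic function should.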
Finally, we generalise the definition of $\delta_4$ used in Theorem \ref{platinum}.
\begin{equation} \label{delta}
\delta _j = \delta_j(p_1,\ldots,p_s)=1-j\sum_{i=1}^s\frac{1}{p_i}.
\end{equation}
Specifically, in the sequel, we shall employ $\delta_4, \delta_3$ and $\delta_2$.
\section{Asymptotic estimate for $(u,v)$-primitive pairs}\label{buffet}
In this section we prove Theorem \ref{bull}. Assume that $q$ and non-zero elements $u,v$ of
$\mathbb{F}_q$ are given. Writing $\lambda=\lambda_{q-1}$, we have from \S \ref{prelim} that
$$N:=N(q,u,v)= \sum_{a,b \neq 0} \lambda(a)\lambda(b)\lambda(ua+vb)\lambda(va^{-1}+ub^{-1}). $$
Hence
\begin{equation} \label{pig}
N= \theta^4 \sum_{\substack{d_j|q-1,\\j=1,\ldots, 4}} \frac{\mu(d_1)\mu(d_2)\mu(d_3)\mu(d_4)}{\phi(d_1)\phi(d_2)\phi(d_3)\phi(d_4)}\sum_{\substack{\chi_j \in \Gamma_{d_j}\\j=1,\ldots,4}}S,
\end{equation}
where
\begin{eqnarray}\label{piglet}
S&=& \sum_{a\neq 0}\sum_{b\neq 0} \chi_1(a)\chi_2(b) \chi_3(ua+bv)\chi_4(va^{-1}+ub^{-1}) \nonumber\\
&=& \sum_{a\neq 0}\sum_{b\neq 0} \chi_1(ab) \chi_2(b)\chi_3(uab+vb)\chi_4(v a^{-1}b^{-1}+ub^{-1}) \nonumber\\
&=& \sum_{a \neq 0}\sum_{b\neq 0}\chi_1\chi_2\chi_3\chi_4^{-1}(b) \chi_{1}\chi_{4}^{-1}(a) \chi_{3}\chi_{4}(ua+v).
\end{eqnarray}
If $\chi_1\chi_2\chi_3\chi_4^{-1}\neq \chi_0$, the principal character, then the sum over $b$ in (\ref{piglet}) is zero, whence $S=0$.
So in what follows assume $\chi_1\chi_2\chi_3\chi^{-1}_4= \chi_0$.
If $\chi_1\chi_4^{-1}=\chi_3\chi_4=\chi_0$, then $\chi_1=\chi_2=\chi_4=\chi_3^{-1}$ and $S=(q-1)(q-2)$.
Hence $d_1=d_2=d_3=d_4$ and, as in \cite[p.\ 7]{LiHa}, the contribution of all such terms in (\ref{pig}) to $N$ is
\begin{equation}\label{piggy1}
\theta^4 \cdot(q-1)(q-2) \sum_{d|q-1}\frac{\mu^4(d)}{\phi^3(d)} =\theta^3 \tau\cdot(q-1)(q-2).
\end{equation}
If $\chi_1\chi_4^{-1}=\chi_0$ (so that $\chi_2\chi_3=\chi_0$) but $\chi_3 \chi_4 \neq \chi_0$, then
$$S= (q-1) \sum_{a \neq 0}\chi_3\chi_4(ua+v) =-\chi_3\chi_4(v)(q-1),$$ so that $|S|=q-1$. Similarly,
$|S|=q-1$ when $\chi_3\chi_4=\chi_0$ but $\chi_1\chi_4^{-1}\neq \chi_0$.
Finally, if $\eta_1=\chi_1\chi_4^{-1}\neq \chi_0$ and $\eta_2 = \chi_3 \chi_4 \neq \chi_0$,
then
\[S=(q-1)\sum_{a\neq 0}\eta_1(a) \eta_2(ua+v)= \eta_1\eta_2(v)\eta_1(-1/u)(q-1)J(\eta_1, \eta_2), \]
where $J$ denotes the Jacobi sum, so that $|S|= (q-1)\sqrt{q}$.
We now obtain a bound for $|M|$, where $M$ is the sum of terms in (\ref{pig}) corresponding to characters of square-free orders with $\chi_4=\chi_1\chi_2\chi_3$
excluding those with $\chi_1=\chi_2 =\chi_4=\chi_3^{-1}$ (which were accounted for in (\ref{piggy1})). Thus, we sum over all characters $\chi_1, \chi_2, \chi_3$
and allow $\chi_4$ to be defined by $\chi_4=\chi_1\chi_2\chi_3$, in which case $d_4$ is the degree of the resulting character $\chi_4$. In general,
$d_4$ is not determined by $d_1,d_2,d_3$ so we simply use the bound $\phi(d_4) \geq 1$.
For simplicity, we
use the bound $|S| \leq (q-1)= (q-1)\sqrt{q}- (q-1)( \sqrt{q}-1)$ whenever $\chi_1=\chi_4$ (so $\chi_2\chi_3=\chi_0$) and include terms with $\chi_1=\chi_2 =\chi_4=\chi_3^{-1}$,
but the bound $|S| \leq (q-1) \sqrt{q}$, otherwise. Thus
\begin{equation}\label{piggy2}
|M| \leq \theta^4 (W^3 \cdot(q-1)\sqrt{q} -|L|),
\end{equation}
where $L$ accounts for the discrepancy in terms with $\chi_1=\chi_4$ so that
$$|L|=\sum_{d_1, d_2|(q-1)}\frac{|\mu(d_1)|\mu^2(d_2)}{\phi(d_1)\phi^2(d_2)}\sum_{\substack { \chi_1\in \Gamma_{d_1}\\ \chi_2 \in \Gamma_{d_2}}}(q-1) (\sqrt{q}-1).$$
Hence
\begin{equation}\label{piggy3}
|L| = (q-1)(\sqrt{q}-1)W\sum_{d|q-1}\frac{\mu(d)^{2}}{\phi(d)}=\frac{W(q-1)(\sqrt{q}-1)}{\theta}.
\end{equation}
Combining (\ref{piggy1}), (\ref{piggy2}), (\ref{piggy3}) we deduce that
\begin{equation}\label{hog}
|N -\theta^3\tau \cdot (q-1)(q-2)| \leq \theta^3\cdot(q-1)\{\theta W^3 \sqrt{q}-W(\sqrt{q}-1)\}.
\end{equation}
The implicit upper bound in (\ref{horse}) is immediate from (\ref{hog}). The lower bound follows
from (\ref{hog}) since $2\tau < W\cdot(\sqrt{q}-1)$.
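As a concrete sanity check of (\ref{hog}) (an illustration of ours, separate from the verification runs reported later), one can count $N(q,u,v)$ by brute force in a small field and compare it with the main term. Here $q=11$, $u=v=1$, and $\tau(q-1)=\theta(q-1)\sum_{d|q-1}\mu^2(d)/\phi^3(d)$ is computed directly from the square-free divisors $1,2,5,10$ of $10$.

```python
from math import prod, sqrt

q, u, v = 11, 1, 1
ps = [2, 5]                         # prime divisors of q - 1 = 10
# a is primitive iff a^((q-1)/p) != 1 for every prime p | q - 1
prim = {a for a in range(1, q)
        if all(pow(a, (q - 1) // p, q) != 1 for p in ps)}
inv = lambda a: pow(a, q - 2, q)    # inverse in the prime field F_q

N = sum(1 for a in prim for b in prim
        if (u * a + v * b) % q in prim
        and (v * inv(a) + u * inv(b)) % q in prim)

theta = prod(1 - 1 / p for p in ps)             # theta(q-1) = 2/5
W = 2 ** len(ps)                                # W(q-1) = 4
# tau(q-1): phi(1) = phi(2) = 1, phi(5) = phi(10) = 4
tau = theta * (1 + 1 + 1 / 64 + 1 / 64)
main = theta ** 3 * tau * (q - 1) * (q - 2)
err = theta ** 3 * (q - 1) * (theta * W ** 3 * sqrt(q) - W * (sqrt(q) - 1))
assert abs(N - main) <= err
```

For this field the brute-force count is $N=2$ (the pairs $(a,b)=(2,6)$ and $(6,2)$), well within the stated error bound.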
\begin{cor}\label{cow}
The prime power $q$ is in $\mathcal{S}$ whenever $q >W^6(q-1)$.
\end{cor}
\begin{proof}By looking at the factors from each prime $l|q-1$ we see that $\tau(q-1)>\theta(q-1)$.
\end{proof}
\section{Sieving for $(u,v)$-primitive pairs}\label{pairs}
We now introduce the sieving machinery and prove Theorem \ref{platinum}, which is an improvement on Theorem \ref{bull}.
As in \S \ref{intro}, write $\mathop{\mathrm{Rad}}(q-1)=kp_1\cdots p_s$, where $p_1, \ldots, p_s$ are the
\emph{sieving primes}. For divisors $e_1,\ldots,e_4$ of $q-1$, denote by
$N(e_1,e_2,e_3,e_4)$ the number of non-zero pairs $a,b \in \mathbb{F}_q$ for which, respectively,
$a, b,ua+v b,va^{-1}+ub^{-1}$ are $e_1,e_2,e_3,e_4$-free. When $e_1=e_2=e_3=e_4=e$ abbreviate $N(e_1,e_2,e_3,e_4)$ to $N_e$.
In particular, $N=N_{q-1}$.
\begin{lem}\label{hen}
We have
$$ N_{q-1} \geq \sum_{i=1}^s\{N(p_ik,k,k,k)+N(k,p_ik,k,k) +N(k,k,p_ik,k) +N(k,k,k,p_ik)\} -(4s-1)N_k.$$
Hence, with $\delta_4$ defined by $(\ref{delta})$,
\begin{equation} \label{goose}
\begin{split}
N_{q-1} \geq &\sum_{i=1}^s\{[N(p_ik,k,k,k)-\theta(p_i)N_k]+[N(k,p_ik,k,k)-\theta(p_i)N_k]\\ &+[N(k,k,p_ik,k)-\theta(p_i)N_k]
+[N(k,k,k,p_ik)-\theta(p_i)N_k]\}
+\delta_4 N_k.
\end{split}
\end{equation}
\end{lem}
As with previous applications of the sieving method, we need an estimate for the various differences appearing in (\ref{goose}). Somewhat surprisingly, in this instance,
they vanish.
\begin{lem}\label{gold}
For $i=1, \ldots,s$,
\begin{equation*}
N(p_ik,k,k,k)-\theta(p_i)N_k=0.
\end{equation*}
Similarly, the other differences in $ (\ref{goose})$ vanish.
\end{lem}
\begin{proof}
As in (\ref{pig})
\begin{equation} \label{dog}
N(p_ik,k,k,k)-\theta(p_i)N_k=\theta(p_i)\theta^4(k) \sum_{\substack{d_j|k,\\j=1,\ldots, 4}} \frac{\mu(p_id_1)\mu(d_2)\mu(d_3)\mu(d_4)}{\phi(p_id_1)\phi(d_2)\phi(d_3)\phi(d_4)}
\sum_{ \substack{\chi_1\in \Gamma_{p_id_1}\\\chi_j\in \Gamma_{d_j}\\ j=2,\ldots,4}}S,
\end{equation}
where $S$ is given by (\ref{piglet}). Now, in every character sum $S$ appearing in (\ref{dog}), $\chi_1\chi_2\chi_3\chi_4^{-1}$ has degree divisible
by $p_i$ (since this is so for $\chi_1$, but not any of $\chi_2,\chi_3, \chi_4$), whence $S=0$.
\end{proof}
Since, by Lemmas \ref{hen} and \ref{gold}, we have $N(q,u,v)=N_{q-1}\geq \delta_4N_k$, the argument of Theorem \ref{bull} (based on (\ref{piggy2}), (\ref{piggy3}) and
(\ref{hog}) but with $k$ instead of $q-1$) yields Theorem \ref{platinum}.
As a consequence of Theorem \ref{platinum} we deduce an extension of Corollary \ref{cow}.
\begin{cor}\label{diamond}
Suppose $\delta_4 >0$.
Then the prime power $q$ is in $\mathcal{S}$ whenever $q >W^6(k)$.
\end{cor}
\begin{proof} As for Corollary \ref{cow}.
\end{proof}
Of course, the assumption $\delta_4>0$ is critical for the deduction of Corollary \ref{diamond}. Once this holds, unusually (because of Lemma \ref{gold}),
the criterion does not depend on $\delta_4$.
\section{An asymmetric sieve}\label{skew}
We now obtain a result, Theorem \ref{emerald}, that is sometimes, though not always, stronger than Theorem \ref{platinum}. We do this by introducing asymmetry into the sieve of \S \ref{pairs}.
\begin{lem}\label{crab} With notation as in \S $\ref{pairs}$ set $\theta=\theta(k), \tau=\tau(k), W=W(k)$. Further, write $\theta_{q-1}$ for $\theta(q-1)$
and $N_{k,q-1}$ for $N(k,k,k,q-1)$.
Then
$$ N_{k,q-1}\geq \theta^2\theta_{q-1} \cdot(q-1)\{\tau\ q -\theta W^3\sqrt{q}\}.$$
\end{lem}
\begin{proof} As at (\ref{pig})
\begin{equation}
\label{swine}
N_{k,q-1}=\theta^3\theta_{q-1} \sum_{\substack{d_1,d_2,d_3|k,\\d_4|q-1}} \frac{\mu(d_1)\mu(d_2)\mu(d_3)\mu(d_4)}{\phi(d_1)\phi(d_2)\phi(d_3)\phi(d_4)}
\sum_{ \substack{\chi_j\in \Gamma_{d_j}\\ j=1,\ldots,4}}S,
\end{equation}
where $S$ is given by (\ref{piglet}) and so is zero unless $\chi_4=\chi_1\chi_2\chi_3$. Now, if $d_4 \nmid k$ then the degree of $\chi_4$
is not a divisor of $k$ and hence $\chi_4 \neq \chi_1\chi_2\chi_3$, whence $S=0$. It follows that, in (\ref{swine}), we can restrict $d_4$ to divisors of
$k$. The lemma then follows as in the proof of Theorem \ref{platinum}.
\end{proof}
We may take $k= \mathop{\mathrm{Rad}}(q-1)$ in Lemma \ref{crab} to obtain another proof of the lower bound of Theorem \ref{bull}.
The (obvious) asymmetric version of Lemma \ref{hen} features $\delta_3$ in place of $\delta_4$.
\begin{lem} We have
$$ N_{q-1} \geq \sum_{i=1}^s\{N(p_ik,k,k,q-1)+N(k,p_ik,k,q-1) +N(k,k,p_ik,q-1) \} -(3s-1)N_{k,q-1}.$$
Hence, with $\delta_3$ defined by $(\ref{delta})$
\begin{equation} \label{turkey}
\begin{split}
N_{q-1} \geq &\sum_{i=1}^s\{[N(p_ik,k,k,q-1)-\theta(p_i)N_{k,q-1}]+[N(k,p_ik,k,q-1)-\theta(p_i)N_{k,q-1}]\\&+[N(k,k,p_ik,q-1)-\theta(p_i)N_{k,q-1}]\}
+\delta_3 N_{k,q-1}.
\end{split}
\end{equation}
\end{lem}
The various differences in (\ref{turkey}) do not vanish (cf. Lemma \ref{gold}) but can be usefully bounded.
\begin{lem} \label{brass}
For $i=1, \ldots,s$,
$$|N(p_ik,k,k,q-1)-\theta(p_i)N_{k,q-1}| \leq \frac{1}{p_i}\theta^3\theta_{q-1}W^3 \cdot(q-1)\sqrt{q},$$
where $\theta=\theta(k), W=W(k)$.
A similar bound applies to the other differences in $(\ref{turkey})$.
\end{lem}
\begin{proof} In the expansion of $\Delta=N(p_ik,k,k,q-1)-\theta(p_i)N_{k,q-1}$ into character sums $S$ analogous to (\ref{dog}) or (\ref{swine}), the degree of $\chi_1$
must be $p_id_1$, where $d_1|k$. But, since $S$ vanishes unless $\chi_4=\chi_1\chi_2\chi_3$, we need only include terms in which the degree of $\chi_4$ similarly is
$p_id_4$, where $d_4|k$. Hence
\begin{equation} \label{pup}
\Delta=\theta(p_i)\theta^3(k)\theta_{q-1} \sum_{\substack{d_j|k,\\j=1,\ldots, 3}} \frac{\mu(p_id_1)\mu(d_2)\mu(d_3)\mu(p_id_4)}{\phi(p_id_1)\phi(d_2)\phi(d_3)\phi(p_id_4)}
\sum_{ \substack{\ \chi_1\in \Gamma_{p_id_1}\\\chi_2\in \Gamma_{d_2}, \chi_3\in \Gamma_{d_3}\\ \chi_4=\chi_1\chi_2\chi_3}}S,
\end{equation}
where $S$ is given by (\ref{piglet}) and the degree of $\chi_4$ is written as $p_id_4$ with $d_4|k$. Since $|S| \leq (q-1)\sqrt{q}$ for each occurrence in (\ref{pup}), and
$\phi(p_id_4) \geq \phi(p_i)$, it follows that
$$|\Delta| \leq \theta^3(k)\theta_{q-1} \frac{\theta(p_i)}{{p_i-1}}W^3\cdot(q-1) \sqrt{q}$$
and the result follows because $\theta(p_{i})/(p_i-1) =1/p_i$.
\end{proof}
\begin{thm}\label{emerald}
Suppose $\delta_3 >0$. Set $\theta=\theta(k), \tau=\tau(k), W=W(k)$. Then
\begin{equation*}
N(q,u,v) \geq \theta^2\theta_{q-1}\cdot(q-1)\{\delta_3 \tau\ q -\theta W^3\sqrt{q}\}.
\end{equation*}
\end{thm}
\begin{proof} Apply the bounds of Lemmas \ref{crab} and \ref{brass} to (\ref{turkey}). We obtain
$$ N_{q-1} \geq \delta_3\theta^2\theta_{q-1} \cdot(q-1)\{\tau\ q -\theta W^3\sqrt{q}\}
-\left\{\sum_{i=1}^s\frac{3}{p_i}\right\}\theta^3\theta_{q-1}W^3 \cdot(q-1)\sqrt{q}.$$
The result follows since $\sum_{i=1}^s (3/p_i)=1- \delta_3$.
\end{proof}
Generally, Theorem \ref{emerald} gives a better bound than Theorem \ref{platinum} because it allows us to choose more sieving primes, i.e., a larger value of $s$.
\section{$(u,v)$-primitive elements}\label{earl}
For given prime power $q$ and non-zero elements $(u, v)$ in $\mathbb{F}_q$ define $M=M(q,u,v)$ as the number of primitive elements
in $\mathbb{F}_q$ such that $ua+va^{-1}$ is also primitive. More generally, for divisors $e_1, e_2$ of $q-1$, define $M_{e_1,e_2}$ to be the number
of (non-zero) elements $a \in \mathbb{F}_q$ such that $a$ is $e_1$-free and $ua+va^{-1}$ is $e_2$-free and abbreviate $M_{e,e}$ to $M_e$.
Then
\begin{equation} \label{cat}
M_e= \theta^2\sum_{d_1|e}\sum_{d_2|e}\frac{\mu(d_1)\mu(d_2)}{\phi(d_1)\phi(d_2)}\sum_{ \deg \chi_1=d_1}\sum_{\deg \chi_2=d_2}T,
\end{equation}
where $\theta= \theta(e)$ and
$$T= \sum_{a \neq 0} \chi_1(a)\chi_2\left(\frac{ua^2+v}{a}\right)=\sum_{a \neq 0} \chi_1\chi_2^{-1}(a)\chi_2(ua^2+v).$$
If $\chi_1=\chi_2=\chi_0$, then $T=q-1-\varepsilon$, where $\varepsilon$ is the number of zeros of $ua^2+v$ in $\mathbb{F}_q$. (Here, $\varepsilon$
is 0 or 2 if $q$ is odd and 1 if $q$ is even.)
If $\chi_2=\chi_0$ but $\chi_1 \neq \chi_0$, then $|T|\leq \varepsilon$.
If $\chi_1=\chi_2 \neq \chi_0$, then $|T| \leq \sqrt{q}$.
If $\chi_1 \neq \chi_2$ and $\chi_2 \neq \chi_0$, then $|T| \leq 2\sqrt{q}$.
Hence, from (\ref{cat}),
\begin{equation} \label{kitty}
|M_e-\theta^2\cdot(q-1-\varepsilon)|\leq \theta^2\{2\sqrt{q}A-\sqrt{q}B-(2\sqrt{q}-\varepsilon)C+\sqrt{q}-\varepsilon\}.
\end{equation}
In (\ref{kitty}),
$$A=\sum_{d_1|e}\sum_{d_2|e}\frac{|\mu(d_1)\mu(d_2)|}{\phi(d_1)\phi(d_2)}\sum_{ \deg \chi_1=d_1}\sum_{\deg \chi_2=d_2}1=W^2,$$
where $W=W(e)$. Further,
$$B=\sum_{d|e}\frac{\mu^2(d)}{\phi^2(d)}\sum_{ \deg \chi=d}1= \sum_{d|e}\frac{\mu^2(d)}{\phi(d)}=1/\theta. $$
Finally,
$$C=\sum_{d|e}\frac{\mu^2(d)}{\phi(d)}\sum_{ \deg \chi=d}1=W.$$
It follows that
\begin{equation}\label{lion}
|M_e-\theta^2\cdot(q-1-\varepsilon)|\leq \theta^2\left\{2\sqrt{q}\left[W^2-W-\frac{1}{2}\left(\frac{1}{\theta}-1\right)\right]+\varepsilon(W-1)\right\}.
\end{equation}
We may take $e=q-1$ in (\ref{lion}) to deduce an asymptotic expression for $M(q,u,v)$ (see also \cite[Thm.\ 1.3]{LiLiPu}), thereby proving the following theorem.
\begin{thm} \label{lioncub}
Let $q$ be a prime power and set $\theta=\theta(q-1)$ and $W=W(q-1)$. Then
$$\left|M(q,u,v)-\theta^2\cdot(q-1-\varepsilon)\right|\leq \theta^2\left\{2\sqrt{q}\left[W^2-W-\frac{1}{2}\left(\frac{1}{\theta}-1\right)\right]+\varepsilon(W-1)\right\}.$$
\end{thm}
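Again as an illustration of ours (not part of the reported computations), the bound of Theorem \ref{lioncub} can be checked by brute force in $\mathbb{F}_{13}$ with $u=v=1$. The count $M$ turns out to be $0$ here, consistent with $13$ appearing among the computational exceptions found later.

```python
from math import prod, sqrt

q, u, v = 13, 1, 1
ps = [2, 3]                         # prime divisors of q - 1 = 12
prim = {a for a in range(1, q)
        if all(pow(a, (q - 1) // p, q) != 1 for p in ps)}
inv = lambda a: pow(a, q - 2, q)

# M(q,u,v): primitive a with u*a + v*a^{-1} also primitive
M = sum(1 for a in prim if (u * a + v * inv(a)) % q in prim)

# epsilon: number of zeros of u*a^2 + v in F_q (here a^2 = -1 has 2 roots)
eps = sum(1 for a in range(q) if (u * a * a + v) % q == 0)
theta = prod(1 - 1 / p for p in ps)             # theta(q-1) = 1/3
W = 2 ** len(ps)                                # W(q-1) = 4
main = theta ** 2 * (q - 1 - eps)
err = theta ** 2 * (2 * sqrt(q) * (W * W - W - (1 / theta - 1) / 2)
                    + eps * (W - 1))
assert abs(M - main) <= err
```

Indeed the four primitive roots $2,6,7,11$ of $\mathbb{F}_{13}$ map under $a\mapsto a+a^{-1}$ onto $\{4,9\}$, neither of which is primitive, so $M(13,1,1)=0$.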
We now introduce the sieve (with the usual notation). First, here is the analogue of Lemma \ref{hen}.
\begin{lem}\label{cock} Define $\delta_2$ by $(\ref{delta})$. Then
$$ M\geq \sum_{i=1}^s\{[M_{p_ik,k}-\theta(p_i)M_{k,k}]+[M_{k,p_ik}-\theta(p_i)M_{k,k}]\} +\delta_2 M_{k,k}.$$
\end{lem}
\begin{lem} \label{puma} Let $\theta=\theta(k), W=W(k)$. Then
\begin{enumerate} [label=\emph{(\roman*)}]
\item $\displaystyle{|M_{p_ik,k}-\theta(p_i)M_{k,k}| \leq \theta^2\left(1-\frac{1}{p_i}\right)\cdot\{2\sqrt{q}(W^2-W)+\varepsilon W\}}.$
\item $\displaystyle{ |M_{k,p_ik}-\theta(p_i)M_{k,k}| \leq 2\left(1-\frac{1}{p_i}\right)\theta^2W^2\cdot \sqrt{q}} .$
\end{enumerate}
\end{lem}
\begin{proof}
$$|M_{p_ik,k}-\theta(p_i)M_{k,k}| \leq \theta(p_i)\theta^2\sum_{d_1|k}\sum_{d_2|k}\frac{|\mu(p_id_1)\mu(d_2)|}{\phi(p_i)\phi(d_1)\phi(d_2)}\sum_{ \deg \chi_1=p_id_1}\sum_{\deg \chi_2=d_2}|T|,$$
and (i) follows from the bounds on $|T|$, since here $\chi_1$ is never principal and $\chi_1=\chi_2$ is impossible (the degree of $\chi_1$ is divisible by $p_i$, whereas that of $\chi_2$ divides $k$).
Similarly, (ii) holds, since the only character sums involved have $\chi_1 \neq \chi_2$ and $\chi_2 \neq \chi_0$.
\end{proof}
\begin{thm} \label{tiger} Assume $q>3$ is a prime power.
With $\theta=\theta(k), W=W(k)$ (as in Lemma $\ref{puma}$), suppose $\delta_2 >0$.
Then
\begin{equation}\label{leopard}
M > \theta^2\sqrt{q} \left\{\sqrt{q} -2\left(\frac{2s-1}{\delta_2}+2\right)\left[W^2-\frac{W}{2}\left(1-\frac{1}{\sqrt{q}}\right)\right]\right\}.
\end{equation}
Hence, if
\begin{equation} \label{cheetah}
\sqrt{q} >2\left(\frac{2s-1}{\delta_2}+2\right)\left[W^2-\frac{W}{2}\left(1-\frac{1}{\sqrt{q}}\right)\right],
\end{equation}
then $q \in \mathcal{T}$.
\end{thm}
\begin{proof}
Apply Lemma \ref{puma} to the bound of Lemma \ref{cock}. Observe that
$$\sum_{i=1}^s \left(1- \frac{1}{p_i}\right)=\frac{1}{2}(2s-1+\delta_2).$$
Hence
$$M \geq \delta_2\theta^2\left\{U-2 \sqrt{q}\left(\frac{2s-1}{\delta_2}+1\right)\left[W^2-\frac{W}{2}\left(1-\frac{\varepsilon}{2\sqrt{q}}\right)\right]\right\},$$
where
$$U=(q-1-\varepsilon)-\left\{2\sqrt{q}\left[W^2-W-\frac{1}{2}\left(\frac{1}{\theta}-1\right)\right]+\varepsilon(W-1)\right\}. $$
The inequality (\ref{leopard}) follows, since $W\sqrt{q}> \varepsilon W+1 \ (\varepsilon \leq 2)$, certainly for $q>3$.
The criterion (\ref{cheetah}) is then immediate.
\end{proof}
\section{Existence proofs}\label{existence}
In this section we begin the proof of Theorem \ref{ox} by demonstrating, using the theorems we have established, that all but finitely many $q$ are members of $\mathcal{T}$ and $\mathcal{S}$. Specifically, we prove that all but at most 3031 prime powers $q$ are in $\mathcal{T}$ and that all but at most 532 values of $q$ are in $\mathcal{S}$. Moreover, the possible exceptions could be listed explicitly (although we do not do so).
First, we can show that $q \in \mathcal{T}$
by establishing that the (stronger) sufficient inequality $q>4W^4(q-1)$ (derived from Theorem \ref{lioncub}) holds whenever $\omega(q-1) \geq 17$, and the inequality
\[q> 4\left(\frac{{2s-1}}{\delta_2}+2\right)^2W^4(k) \]
(derived from Theorem \ref{tiger}) holds with $s=5$ whenever $9\leq \omega(q-1)\leq 16$. We therefore only have to consider those $q$ with $\omega(q-1) \leq 8$.
Now, for each value of $1\leq \omega(q-1) \leq 8$ we find a value of $s\in[1, \omega(q-1) -1]$ for which the right-hand side of (\ref{cheetah}) is minimised; this yields a bound $q_{\textrm{max}}$ such that (\ref{cheetah}) holds whenever $q \geq q_{\textrm{max}}$. Also $q \geq p_{1} p_{2} \cdots p_{\omega(q-1)}+1 =: q_{\textrm{min}}$, where $p_1, p_2, \ldots$ here denote the first $\omega(q-1)$ primes. We therefore need only check $q\in [q_{\textrm{min}}, q_{\textrm{max}})$.
For example, when $\omega(q-1) = 8$ we choose $s=5$, whence $\delta_{2} \geq 1 - 2(1/7 + 1/11 + 1/13 + 1/17 + 1/19)> 0.1557$ and so $q_{\textrm{max}} < 5.15\cdot 10^{7}$. We also have that $q_{\textrm{min}} = 9,699,691$. We enumerate all prime powers in $[q_{\textrm{min}}, q_{\textrm{max}})$ and select those with $\omega(q-1) = 8$. There are 49 such values, the largest of which is $q= 51,269,791$. For each of these 49 values we now compute the exact value of $\delta_{2}$ for each $s$. For example, for $s=5$ and $q= 51,269,791$ we have $\delta_{2} > 0.387$ --- a considerable improvement. We now look to see whether (\ref{leopard}) holds for these values of $q$. We find that (\ref{leopard}) is true for all but 9 values, the largest of which is $31,651,621$.
We continue in this way, the only deviation from the above example being that for $\omega(q-1) = 1$ we use Theorem \ref{lioncub}. Our results are summarised in Table~\ref{mountain}, which lists, for each value of $\omega(q-1)$, the number of $q$ for which Theorem \ref{tiger} \emph{fails} to show that $q \in \mathcal{T}$. Table \ref{mountain} also gives the least and greatest prime and prime power in each category.
\begin{table}[ht]
\centering
\caption{\it{Numbers of primes and prime powers $q$ not shown to be in $\mathcal{T}$.}}
\label{mountain}
\medskip
\begin{tabular}{c c r r |c r r}
\hline
$\omega(q-1)$& primes & least & greatest & prime powers & least & greatest \\
\hline
8 & 9 & 13123111 & 31651621 & 0 & - & -\\
7 & 171 & 870871 & 10840831 & 2 & 2042041& 7447441\\
6 & 698& 43891 & 2972971& 11& 175561& 1692601 \\
5 & 951& 2311 & 813121 & 18& 17161 & 776161 \\
4 & 813& 211 & 102061 & 30 & 841& 63001 \\
3 & 257& 31 & 9721 & 16& 343 & 2401 \\
2& 40& 7 & 769 & 9& 16 & 289\\
1& 3& 3 & 17 & 3 & 4 & 9\\
\hline
\hline
\end{tabular}
\end{table}
In total, in Table~\ref{mountain}, there are 2942 prime values of $q$ which may not be in $\mathcal{T}$. The prime $2$ is excluded (but clearly $2 \not \in \mathcal{T}$).
The total of 89 (non-prime) prime powers comprise 69 prime squares and 20 higher powers. Unsurprisingly, the latter are powers of small primes as follows: $2^3,2^4,2^5,2^6,$ $2^8,2^{10},$ $2^{12}$, $3^3,3^4,$ $3^5,3^6,5^3,5^4,5^6,7^3,$ $7^4, 11^3, 11^4, 13^3, 13^4, 31^3$.
Excluding $q=2$, the above leaves a total of 3031 possible prime powers $q$ as candidates for non-membership. Let $\mathcal{C_T}$ be the set of these 3031 candidates.
We reduce this number substantially in Theorem \ref{final_T}, and, in \S\ref{Hillary} prove Theorem \ref{ox}.
We can pass our list $\mathcal{C_T}$ of 3031 possible exceptions through Theorems \ref{bull}, \ref{platinum} and \ref{emerald}. Note that these test for membership of $\mathcal{S}$. We find that there are only 532 possible prime powers not in $\mathcal{S}$.
At this point it is pertinent to add some remarks on the parity of $q$. When $q$ is even, a $(1,1)$-primitive element is the same as a $(1,-1)$-primitive element. It was proved in \cite{Co14}
that all fields $\mathbb{F}_{2^n}, n\geq 3$, contain a $(1,1)$-primitive element. This proof was theoretical, except for the
values $n=6$ and $12$ when an explicit $(1,1)$-primitive element was given. In fact, in the preparation
of \cite{Co14}, the first author overlooked previous work of his \cite{Co87} in which a theoretical proof was given
(even in these two difficult cases).
An explanation for the oversight is
that \cite{Co87} was framed in terms of the notions of Garbe \cite{Ga} relating to the order and level of an irreducible polynomial,
rather than an element of the field.
\begin{comment}
Specifically, when formulated in terms of field elements, the {\em level} of a non-zero $a \in \mathbb{F}_q$
is defined as the order of the element $b=a+a^{-1}$. In the situation when $q=q_0^2$ is a square then it is possible that $a \in \mathbb{F}_q \setminus \mathbb{F}_{q_0}$
whereas $b \in \mathbb{F}_{q_0}$, in which case the order of $a$ divides $q_0+1$ while its level divides $q_0-1$. It was proved in \cite{Co87}, Theorem 4.1, that, if
$q$ is an even square, then there is an element $a \in \mathbb{F}_q$ with order $q_0+1$ and level $q_0-1$. Alternatively (whether or not
$q$ is a square), $b$ does not belong to a proper subfield of $\mathbb{F}_q$, in which case the question
is whether there exists an element $a$ whose order and level
are both $q-1$, i.e., whether $a$ is a $(1,1)$-primitive element.
\end{comment}
In fact, for $q$ even, existence was established in \cite{Co87}, Theorem 5.2.
In any event, we can assume from now on that $q$ is odd.
\begin{comment}
If $\mathbb{F}_q$ does not contain a (1,1)-primitive element or a (1,-1)-primitive element, then $q$ must be
one of the (3031) prime powers referred to in Table \ref{mountain}. For each such prime value $q=p$, starting with the least primitive root $a_0 \ \mathrm{mod} p$,
a quick MAPLE program produced a primitive root $a=a_0^j$ (with $(j,p-1)=1$) for which $a+1/a$ is also a primitive root, except for the prime values listed in Theorem \ref{calf},
(i), (ii). For prime powers $q$ we used a similar, slightly more elaborate process. It turns out there are no $(1,\pm 1)$-elements in $\mathbb{F}_q$
when $q=9, 25, 121$. We do not give further details beyond illustrating the results when $q=p^r, r \geq 3$ (a ``higher" prime power),
Thus, Table \ref{joy} displays the primitive minimal polynomial $f_a(x)$ of degree $r$ over $z\mathbb{F}_p$ satisfied
by $a \in \mathbb{F}_q$ such that the minimal polynomial $f_b(x)$ of $b=a+a^{-1}$ is also primitive of degree $r$. Similarly, Table \ref{joy2} displays minimal polynomials of $a$
and $c=a-a^{-1}$.
\begin{table}
\centering
\caption { \it{Minimal polynomials of $(1,1)$-elements, $q=p^r, r\geq 3$}}
\label{joy}
\medskip
\begin{tabular}{cc c}
\hline
$q$ & $f_a(x)$ & $f_b(x), b=a+a^{-1}$ \\
\hline
$3^3$& $x^3-x^2+x+1$ & $x^3-x+1$\\
$3^4$& $x^4+x-1$ &$x^4-x^3-x^2+x-1$ \\
$3^5$ & $x^5-x+1$ &$x^5-x^4+x^3+x^2+x+1$ \\
$3^6$ & $x^6-x^5-x^4-x^3+x^2-1$ & $x^6-x^5+x^4+x^2+x-1$\\
$5^3$ &$ x^3+x^2+x-2$ & $x^3-2x^2+2x-2$ \\
$5^4$ & $x^4-2x^3-2x^2+2$ & $x^4-2x^3-2x^2+2x+2$ \\
$5^6$& $x^6+x^5+2x^4-2x^3-x-2$ &$ x^6-x^5+2x^3-x^2-2x+2$ \\
$7^3$& $x^3+3x+2$ & $x^3-2x^2-3$ \\
$7^4$ & $x^4+3x^3+3x^2+2x+3$& $x^4-x^3+2x^2-3x+3$ \\
$11^3$& $x^3+x+4$ & $x^3+3x^2-2x+4$ \\
$11^4$& $x^4-3x^3+x^2-4$ & $x^4-3x^3+5x^2+5x+2$ \\
$13^3$ &$ x^3+x+6$ &$x^3-2x^2-2x+6$ \\
$13^4$&$x^4+5x^3-x^2+4x-2$ &$ x^4+3x^3+5x^2+10x+6$ \\
$31^3$& $x^3+x+14$ & $x^3-11x^2-2x+14$ \\
\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{\it{Minimal polynomials of $(1,-1)$-elements, $q=p^r, r\geq 3$.}}
\label{joy2}
\medskip
\begin{tabular}{c c c}
\hline
$q$ & $f_a(x)$ & $f_c(x), c=a-a^{-1}$ \\
\hline
$3^3$& $x^3-x+1$ & $x^3+x^2-x+1$\\
$3^4$& $x^4+x^3-x^2-x-1$ & $x^4-x-1$ \\
$3^5$ & $x^5-x+1$ & $x^5+x^4-x^3+x^2+x+1$ \\
$3^6$ & $x^6+x^4-x^3+x^2+x-1$ & $x^6+x^5+x^3+x^2+x-1$\\
$5^3$ & $x^3+x^2-x-2$ & $x^3-2x^2+x+2$\\
$5^4$ & $x^4+x^2+2x-2$ & $x^4+x^3+2x^2+x+2$\\
$5^6$& $x^6-2x^5+x^4-x^3-x-2$ & $x^6-2x^4-2x^3-2x^2-2$ \\
$7^3$& $x^3-2x^2+3x-3$&$x^3-x^2-3 $\\
$7^4$ & $x^4+3x^3+x^2+3$ & $x^4+3x^3+3x^2+2x+3$\\
$11^3$& $x^3+x+4$ & $x^3-3x^2+4x+3$\\
$11^4$& $x^4+2x^3+4x^2+3x-3$ & $x^4+3x^3+5x^2+3x-4$ \\
$13^3$ &$x^3-4x^2-x+2$ & $x^3+3x^2-2x+2$ \\
$13^4$&$x^4+x^3-2x^2-x+6$ & $x^4-x^3+4x^2+6x+2$ \\
$31^3$& $x^3+5x^2-4x-12$ & $x^3+15x^2+15x+7$ \\
\hline
\end{tabular}
\end{table}
\end{comment}
\section{Computational results}
\label{comp_res}
In this section we give algorithms and timings for our computations verifying that many
$q$ are in
$\mathcal{T}$.
Let $w=u^{-1}v$ and $r=a+wa^{-1}$. Then
\[
ua+va^{-1} = u(a+wa^{-1}) = ur,
\]
and if $u$ and $v$ are non-zero elements of~$\F_q$ then $w$ will also be a non-zero element of~$\F_q$. Thus, to verify
if $q\in\mathcal{T}$ it is sufficient to verify that for all non-zero elements $u$ and $w$ of~$\F_q$,
there exists a primitive element $a$ of~$\F_q$ such that $ur$ is also a primitive element of~$\F_q$. The
transformation of the original problem into one with a multiplicative structure allows
discrete logarithms to be used. As primitive elements are easy to characterise using discrete
logarithms, this will give rise to important computational savings.
Let $\gamma$ be a primitive element of~$\F_q$ and let $\log v$ denote the base $\gamma$ discrete logarithm of the
non-zero element $v$ of $\F_q$. Let $p_1,p_2,\ldots,p_{\omega(q-1)}$ be the distinct prime divisors of~$q-1$, and let $R$ be
their product (the radical of $q-1$). If $u$ and $r$ are both non-zero then $ur$ is a primitive element of~$\F_q$
if and only if
\[
\gcd\bigl( \log u + \log r , q-1 \bigr) = 1,
\]
i.e., if and only if
\[
\log u \not\equiv -\log r \bmod p_i,\qquad i=1,\ldots,\omega(q-1).
\]
For a given~$w$, it follows that each primitive element $a$ for which $r$ is non-zero takes care of
$\prod_{i=1}^{\omega(q-1)} (p_i-1)$ residue classes of $\log u\bmod R$.
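The discrete-log criterion is easily checked on a toy field. The following sketch (our own illustration; $q=31$ with primitive root $3$ is chosen purely for the example) builds a discrete-log table and confirms that the gcd condition, and equivalently the per-prime residue conditions, exactly characterise primitivity of a product.

```python
from math import gcd

q, gamma = 31, 3                     # 3 is a primitive root mod 31
# table of discrete logs: log[gamma^j mod q] = j
log = {pow(gamma, j, q): j for j in range(q - 1)}

def is_primitive(x):
    return gcd(log[x % q], q - 1) == 1

# criterion: u*r is primitive iff gcd(log u + log r, q - 1) = 1, i.e. iff
# log u avoids -log r modulo every prime divisor p_i of q - 1 (here 2, 3, 5)
for u_ in range(1, q):
    for r_ in range(1, q):
        direct = is_primitive(u_ * r_ % q)
        via_gcd = gcd(log[u_] + log[r_], q - 1) == 1
        via_residues = all((log[u_] + log[r_]) % p != 0 for p in (2, 3, 5))
        assert direct == via_gcd == via_residues
```

Since $R=\mathop{\mathrm{Rad}}(q-1)=30=q-1$ here, the residue classes mod $R$ covered by one value of $r$ are exactly those counted by $\prod_i(p_i-1)$.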
The first result of this setup is Algorithm~\ref{algo_Te}.
\begin{algorithm}[h]
\DontPrintSemicolon
\AlgoDontDisplayBlockMarkers
\SetAlgoNoEnd
\SetAlgoNoLine
\SetKwProg{Proc}{Procedure}{}{}%
\SetKwFunction{checkbetagamma}{check\_beta\_inv\_gamma}%
\SetKwFunction{checkq}{check\_q}%
\SetKwFunction{rad}{rad}%
\SetKw{KwNext}{next}%
\SetKw{KwTo}{in}%
\Proc{\checkq{$q$}}{
\textit{Construct $\F_q$ and primitive element $\gamma$}\;
$R \leftarrow$ \rad($q - 1$)\;
\For{$0 \leq j < q-1$}{
$v \leftarrow \gamma^j$\;
\For{$0 \leq k < R$}{
$u \leftarrow \gamma^k$\;
\For{$l$ in stored\_logs}{
\If{GCD(k+l, R) = 1}{
\KwNext $k$\;
}
}
\For{$1 \leq m < q-1$}{
\If{GCD(m, R) = 1}{
$a \leftarrow \gamma^m$\;
$l \leftarrow \log_{\gamma}(a + v a^{-1})$\;
Store $l$ in stored\_logs\;
\If{GCD(k+l, R) = 1}{
\KwNext $k$\;
}
}
}
\If{$m = q-1$}{
FAIL\;
}
}
}
}
\caption{Check whether $q \in \mathcal{T}$\label{algo_Te}}
\end{algorithm}
\begin{comment}
It is more efficient to check $u(a + a^{-1} v)$ for primitivity as we can
use the $\log$ of the second factor and that of $u$ in the $\gcd$ and avoid
extra arithmetic and primitivity checks.
\end{comment}
To maximise efficiency in Algorithm~\ref{algo_Te} we store the $\log$s we
have computed as well as the elements which have already been determined to
be primitive so we can first check through our list of stored primitive
elements $a$ and only generate more primitive elements as needed.
We can write $\mathcal{E_T}$ (defined in (\ref{fridge}))
as
$$\{2, 3, 4, 5, 9\} \cup \{ 7, 11, 13, 19, 25, 29, 37, 41, 49, 81, 97\} \cup
\{31, 43, 61, 121, 169\}$$ according to $\omega(q-1)$.
It has been checked by running Algorithm~\ref{algo_Te}
using Magma~\cite{magma223} that
$\mathcal{E_T} \cap \mathcal{T} = \emptyset$ and
that $q \in \mathcal{T}$ for all $q \in \mathcal{C_T} \setminus \mathcal{E_T}$ with $\omega(q-1) < 7$.
In Table~\ref{timings_T} we provide total timings for these checks for all
$q \in \mathcal{C_T}$, grouped by $\omega(q-1)$, on a
2.6GHz Intel\textsuperscript{\tiny\textregistered}{} Xeon\textsuperscript{\tiny\textregistered}{} E5-2670 or similar machine.
Note that checking that the 140 values of $q$ with $\omega(q-1)=6$ among the 532 values of $q$ not known to be in $\mathcal{S}$ are in $\mathcal{T} \subset \mathcal{S}$ took only 178 days,
and that each such $q$ could be checked in less than 3.4 days.
\begin{table}[h]
\begin{centering}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$\omega(q-1)$ & 1 & 2 & 3 & 4 & 5 & 6\\% & 7 & 8 \\
Number of $q$ checked & 6 & 49 & 273 & 843 & 969 & 709 \\
Time & 1.87s & 2.7s & 591s & 1.9 days & 98.9 days & 20.003 years \\
\hline
\end{tabular}\caption{Total timings for checking whether $q \in \mathcal{T}$ for $q \in \mathcal{C_T}$.}\label{timings_T}
\end{centering}
\end{table}
We found it efficient to store primitive elements $a$ and check
$u (a + v a^{-1})$ for primitivity with the stored $a$ first before
generating more primitive elements. Note that we do not need to check
all $(u, v)$ pairs since $u a + v a^{-1} = v a^{-1} + u (a^{-1})^{-1}$
so if there is a $(u, v)$-primitive element there is also a $(v, u)$-primitive
element. However, this improvement is made redundant by factoring out $u$
and iterating through only $R$ many.
We noticed that if $a$ is a $(u, v)$-primitive element then it is also a
$(u', v')$-primitive element when $(v'-v) = a^2(u - u')$, since then $u'a + v'a^{-1} = ua + va^{-1}$.
Unfortunately, these observations did not improve the
efficiency of our algorithms.
\begin{algorithm}[h]
\DontPrintSemicolon
\AlgoDontDisplayBlockMarkers
\SetAlgoNoEnd
\SetAlgoNoLine
\SetKwProg{Proc}{Procedure}{}{}%
\SetKwFunction{checkbetagamma}{check\_beta\_inv\_gamma}%
\SetKwFunction{checkq}{check\_q}%
\SetKwFunction{rad}{rad}%
\SetKw{KwNext}{next}%
\SetKw{KwTo}{in}%
\Proc{\checkq{$q$}}{
\textit{Construct $\F_q$ and primitive element $\gamma$}\;
\For{$0 \leq k < q-1$}{
$u \leftarrow \gamma^k$\;
\For{$0 \leq l < q-1$}{
$v \leftarrow \gamma^l$\;
\For{$0 \leq m < q-1$}{
\If{GCD(m, q-1) = 1}{
$a \leftarrow \gamma^m$\;
\For{$0 \leq n < q-1$}{
\If{GCD(n, q-1) = 1}{
$b \leftarrow \gamma^n$\;
\If{$u a + v b$ and $v a^{-1} + u b^{-1}$ are primitive}{
\KwNext $l$\;
}
}
}
}
}
\If{$m = q-1$}{
FAIL\;
}
}
}
}
\caption{Check whether $q \in \mathcal{S}$\label{algo_S}}
\end{algorithm}
We also checked using Algorithm~\ref{algo_S}
whether $q \in \mathcal{S}$ for $q \in \mathcal{E_T}$ and found
that only $2, 3, 4, 5, 7, 13 \notin \mathcal{S}$.
The computations for these
checks took about 1 second using Magma on a
3.4GHz Intel\textsuperscript{\tiny\textregistered}{} Core\textsuperscript{\tiny\texttrademark}{} i7-3770 or similar machine.
Finally, we deduce Corollary~\ref{calf} by checking which $(u, v)$ caused failures in
Algorithms~\ref{algo_Te} and~\ref{algo_S}.
\begin{comment}
DO WE REALLY WANT TO DO THIS STILL? WE HAVE PROVED SO MUCH MORE?
Finally, we complete the proof of Corollary~\ref{calf} (iii), (iv)
by taking
$$\mathcal{E_S} \cap \{2, 3, 4, 5, 7, 9, 13, 25, 121\} \quad \textrm{and} \quad \mathcal{E_S} \cap \{2, 3, 4, 5, 9, 13, 25, 61, 121\} $$
resulting in $\mathcal{E_S}$ and $\mathcal{E_S}\setminus\{7\}$ respectively.
We check which $(u, v)$ caused failures in Algorithm~\ref{algo_S}
for $q$ in these sets and find that all $q\in\mathcal{E_S}$ failed on $(1, 1)$
and all $q \in \mathcal{E_S}$ failed on $(1, -1)$ except $q = 7$.
This proves (iii) and (iv).
\end{comment}
We summarise our results from this section in the following theorem.
\begin{thm}
\label{final_T}
All
prime powers $q$ with $\omega(q-1) < 7$ and $q\notin \mathcal{E_T} $ are in $\mathcal{T}$
and so also in $\mathcal{S}$. There are at most $182$ values of $q\not\in \mathcal{E_T}$ not in $\mathcal{T}$, the largest of which is $31,651,621$.
\end{thm}
\section{Improved algorithm to check whether $q\in\mathcal{T}$}
\label{Hillary}
We now introduce a new algorithm to handle the 182 possible exceptions identified in Theorem~\ref{final_T}.
Let $L=\{\,0,1,\ldots,R-1\,\}$ be a complete set of residues modulo~$R$. For $i=1,\ldots,\omega(q-1)$, let
$L'_i=\{\,0,1,\ldots,p_i-1\,\}$ be a complete set of residues modulo~$p_i$, and let
\[
L_{r,i} = \{\;l \,:\;l\in L \,\wedge\, l\not\equiv -\log r \bmod p_i\;\}, \quad L'_{r,i} = \{\;l \,:\;l\in L'_i \,\wedge\, l\not\equiv -\log r \bmod p_i\;\}.
\]
Finally, let
\[
L_r = \bigcap_{i=1}^{\omega(q-1)} L_{r,i}.
\]
By construction, the condition $\gcd\bigl( \log u + \log r , q-1 \bigr) = 1$ is equivalent to the condition
$(\log u \bmod R) \in L_r$. Furthermore, using the Chinese remainder theorem, the number $|L_r|$ of elements
of the set $L_r$ is given by
\begin{equation}
|L_r| = \prod_{i=1}^{\omega(q-1)} |L'_{r,i}|.
\label{e:setSize}
\end{equation}
The improved strategy used to check if $q\in\mathcal{T}$ is as follows. For each non-zero value of $w$, use distinct
primitive elements $a_1$, $a_2$, $\ldots$, to construct the corresponding sets $L_{r_1}$, $L_{r_2}$, $\ldots$,
stopping when either the list of primitive elements is exhausted, in which case $q\not\in\mathcal{T}$, or when
the union of these sets is $L$, in which case all non-zero values of $u$ have been covered and so the next non-zero
$w$ needs to be tried. When the $w$ values have been exhausted we conclude that $q\in\mathcal{T}$.
In an actual computer program, sets are usually implemented as arrays of bits\footnote{Each bit of the array
indicates if the corresponding element belongs or does not belong to the set.}, with unions and intersections being bitwise
logical \underline{or} or logical \underline{and} operations. In the present case the $L_r$ sets have $R$~bits, so
using an array of bits to represent them adds a factor of $R$ to the execution time of the program. It turns out that
using the inclusion-exclusion principle~\cite[Chapter~XVI]{HW78} to count the number of elements of a union of sets
gives rise to a considerably faster program. For example
\[
|L_{r_1} \cup L_{r_2}|=|L_{r_1}|+|L_{r_2}|-|L_{r_1} \cap L_{r_2}|,
\]
so $|L_{r_1} \cup L_{r_2}|$ can be computed by evaluating $|L_{r_1}|$ and $|L_{r_2}|$ using~(\ref{e:setSize}), and
by evaluating the remaining term using
\[
|L_{r_1} \cap L_{r_2}| = \prod_{i=1}^{\omega(q-1)} |L'_{r_1,i} \cap L'_{r_2,i}|.
\]
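The bit-array arithmetic can be sketched in Python (the actual programs are in C; the primes and discrete logarithms below are illustrative only). Each $L'_{r,i}$ is a $p_i$-bit mask, intersections are bitwise ands, and set sizes are products of population counts; the union size is then checked against a brute-force enumeration:

```python
from math import prod

def prime_mask(p, log_r):
    """Bit array for L'_{r,i}: all residues 0..p-1 except -log r (mod p)."""
    return ((1 << p) - 1) & ~(1 << ((-log_r) % p))

def popcount(m):
    return bin(m).count("1")          # population count of a bit array

primes = [2, 3, 5]                    # illustrative only
R = prod(primes)
log_r1, log_r2 = 7, 4                 # stand-ins for log r_1 and log r_2

m1 = [prime_mask(p, log_r1) for p in primes]
m2 = [prime_mask(p, log_r2) for p in primes]

s1 = prod(popcount(m) for m in m1)                    # |L_{r1}|
s2 = prod(popcount(m) for m in m2)                    # |L_{r2}|
s12 = prod(popcount(a & b) for a, b in zip(m1, m2))   # intersection size
union = s1 + s2 - s12                                 # inclusion-exclusion

# Brute-force check over the complete residue set L:
in_set = lambda l, lr: all(l % p != (-lr) % p for p in primes)
brute = sum(1 for l in range(R) if in_set(l, log_r1) or in_set(l, log_r2))
assert union == brute
```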
All three computations can be done using only the $L'_{r,i}$ sets. As these sets are quite small --- for those $q$, summarised in Table~\ref{mountain}, with $q\geq 371,281$ that were not tested by the
methods of \S\ref{comp_res}, the largest $p_i$ is only $89$ --- they should be implemented as bit arrays. As
these bit arrays can be stored in two 64-bit computer words, counting the number of elements of each one of them
can be done efficiently using the population count instruction available on modern Intel/AMD 64-bit processors.
In general, to count the number of elements in the union of the $n$ sets $L_{r_k}$, $k=1,\ldots,n$, by applying
the inclusion-exclusion principle recursively, it is necessary to compute $2^n$~terms. Fortunately, in the present
case most of these terms turn out to be zero, because for a small $p_i$ the intersection of several $L'_{r_k,i}$
sets has a good chance to be the empty set. Nonetheless, to avoid an uncontrolled explosion of the number of
terms as more values of $r$ are considered, the following strategy was used to
accept/reject values of~$r$:
\begin{itemize}
\item the first $10$ non-zero values of $r$ are always accepted;
\item the remaining non-zero values of $r$ are accepted only if they lead to a $3/4$ reduction in the number of residue classes that are still not covered.
\end{itemize}
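This rule can be sketched as a greedy covering loop (a schematic in Python; the parameters \texttt{nc} and \texttt{f} correspond to the two bullet points above, and the candidate sets are hypothetical):

```python
def try_cover(universe, candidate_sets, nc=10, f=0.75):
    """Accept the first nc candidate sets unconditionally; afterwards accept
    a set only if it shrinks the uncovered residue classes to at most a
    fraction f of their current number."""
    uncovered = set(universe)
    for c, s in enumerate(candidate_sets, start=1):
        remaining = uncovered - s
        if c <= nc or len(remaining) <= f * len(uncovered):
            uncovered = remaining
            if not uncovered:
                return True           # all residue classes covered
    return False                      # candidates exhausted

# Tiny illustration: four singletons cover {0,1,2,3}; two of them do not.
assert try_cover(range(4), [{0}, {1}, {2}, {3}]) is True
assert try_cover(range(4), [{0}, {1}]) is False
```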
This fast but aggressive strategy failed in a very small percentage of cases (less than $0.003$\% for
$q=31,651,621$). When it failed, the same procedure was tried again with the factor $3/4$ replaced by $4/5$. As this never failed for our list of values of~$q$, even more relaxed parameters (more initial
values of $r$ always accepted, larger factors) were not needed.
Denote by $B'_{r,i}$ the bit array of $p_i$ bits corresponding to the set $L'_{r,i}$. In a computer program the
set $L_{r_k}$ can be efficiently represented by the tuple $B'_r=(1,B'_{r,1},\ldots,B'_{r,\omega(q-1)})$, where the
initial $1$ represents the inclusion-exclusion generation number. Intersections of sets can be represented in the
same way, with the generation number reflecting the number of intersections performed. (To apply
inclusion-exclusion it is only necessary to keep track of the parity of the number of intersections.) As
mentioned before, intersecting two sets amounts to performing bitwise \underline{and} operations of the corresponding
$B'_{\cdot,i}$ bit arrays, which, given the small size of these arrays, can be done very quickly on contemporary
processors. In Algorithm~\ref{super_algo_T} the variable $B$ is a list of tuples that represent the non-empty sets
used in the inclusion-exclusion formula. The number of residue classes not yet covered by the union of the
$L_r$ sets, denoted by $|B|$, is given by
\[
|B|=R+\sum_{B' \in B} (-1)^{\mathrm{generation}(B')}\,|B'|.
\]
These considerations give rise to Algorithm~\ref{super_algo_T} (in step 2, the list of primitive elements can be
constructed so that $a_{\phi(q-1)+1-k}$ is the inverse of $a_k$; that simplifies the computation of the values of
$r$.)
\begin{algorithm}[h]
\DontPrintSemicolon
\AlgoDontDisplayBlockMarkers
\SetAlgoNoEnd
\SetAlgoNoLine
\SetKwProg{Proc}{Procedure}{}{}%
\SetKwFunction{checkq}{check\_q}%
\SetKwFunction{checkw}{check\_w}%
\Proc{\checkq{$q$}}{
\textit{Construct $\F_q$ and list $a_1,\ldots,a_{\phi(q-1)}$ of the primitive elements of $\F_q$}\;
\For{each non-zero element $w$ of $\F_q$}{
\If{\checkw{$w,10,\frac34$} returns 0}{
\If{\checkw{$w,10,\frac45$} returns 0}{
\If{\checkw{$w,12,\frac56$} returns 0}{
\If{\checkw{$w,\phi(q-1),1$} returns 0}{
FAIL\;
}
}
}
}
}
}
\SetKwProg{Func}{Function}{}{}%
\SetKwFunction{checkw}{check\_w}%
\Func{\checkw{$w,nc,f$}}{
$c \leftarrow 0$, $B \leftarrow \emptyset$\;
\For{$1 \leq k \leq \phi(q-1)$}{
$r \leftarrow a_k+wa^{-1}_k$\;
\If{$r \not= 0$}{
$c \leftarrow c+1$\;
\textit{Compute $B'_r$}\;
\textit{Intersect $B'_r$ with all sets stored in $B$ and store the non-empty ones in $X$}\;
\textit{Append $B$ and $B'_r$ to $X$}\;
\If{$c\leq nc$ or $|X|\leq f|B|$}
{
$B \leftarrow X$\;
\If{$|B|=0$}{
\textit{return $1$}\;
}
}
}
}
\textit{return $0$}\;
}
\caption{Check whether $q\in \mathcal{T}$}\label{super_algo_T}
\end{algorithm}
Algorithm~\ref{super_algo_T} gave rise to three optimised computer programs, written in the C programming language.
One that dealt with $q$ prime and $\max_i p_i < 64$, another that dealt with $q$ prime and $64 < \max_i p_i < 128$,
and a third one that dealt with $q$ a prime square and $\max_i p_i < 64$. The largest case, $q=p=31,651,621$, was
confirmed to belong to $\mathcal{T}$ in one week on a single core of a $4.4$GHz i7-4790K Intel processor. All
exceptional values of $q$ not dealt with by the methods of \S\ref{comp_res} were confirmed to belong to
$\mathcal{T}$ in about four weeks of computer time (one week of real time, given that the i7-4790K processor has
four cores). These computations were double-checked on a separate machine.
Figure~\ref{execution_times} presents data for all cases that were tested by the first program. It
suggests that the execution time of the program is approximately proportional to $q$ and depends in a non-linear
way on $\omega(q-1)$. The same phenomenon occurs for the other two programs.
\begin{figure}[hpb]
\centering
\includegraphics[width=\textwidth]{times.pdf}
\caption{Execution times (in seconds) divided by $q$ versus $q$; the bottom data points correspond to values of
$q$ for which $\omega(q-1)=6$, those in the middle correspond to $\omega(q-1)=7$ and those on top to
$\omega(q-1)=8$.}%
\label{execution_times}%
\end{figure}
\section*{Acknowledgements}
The third author acknowledges the Sydney Informatics Hub and the University of Sydney's high
performance computing cluster Artemis for providing the high performance computing
resources that contributed to the results in \S\ref{comp_res}.
\section{Introduction}\label{sec:intro}
\input{sec_introduction}
\section{Background}\label{sec:primilinaries}
\input{sec_preliminaries}
\section{Overview}\label{sec:overview}
\input{sec_overview}
\section{Sketch Generation}
\label{sec:sketch}
\input{sec_sketch}
\section{Implementation and Evaluation}\label{sec:evaluation}
\input{sec_implementation}
\section{Related Work}
\input{sec_related.tex}
\section{Conclusions and Future Work}
\input{sec_conclusion.tex}
\section*{Acknowledgments}
We thank the anonymous CCS reviewers for their insightful feedback. This work was supported by NSF Awards CNS-1702760, CNS-1931686 and a gift from Facebook.
\bibliographystyle{ACM-Reference-Format}
\section{Full Case Studies}
In this section we list the examples we studied in this paper. For each mechanism we show the original mechanism (the user's input), and the transformed mechanism for the synthesis loop.
\input{figures/noisymax.tex}
\input{figures/numsvt.tex}
\input{figures/gapsvt.tex}
\input{figures/svt_inverse.tex}
\clearpage
\input{figures/partialsum.tex}
\section{Pseudo-code for \code{GenerateTemplate}}
Here for completeness, we include the pseudo-code of the helper function \code{GenerateTemplate} proposed by~\cite{checkdp}. Note that \code{Depends} is a variable dependence checking oracle which returns \true\ if the expression $e$ depends on the variable $\eta$. This oracle can be implemented as standard program dependency analysis~\cite{aho1986compilers,ferrante1987} or information flow analysis~\cite{Bergeretti:1985:IDA:2363.2366}.
\begin{algorithm}[ht]
\setstretch{0.9}
\SetKwProg{Fn}{function}{\string:}{}
\SetKwFunction{Depends}{Depends}
\SetKw{Break}{break}
\SetKwFunction{GenerateTemplate}{GenerateTemplate}
\SetKwInOut{Input}{input}
\DontPrintSemicolon
\Input{ $\env_s$: typing environment at sampling command \\
$A$: set of the generated assertions in the program}
\Fn{\GenerateTemplate{$\env_s$, $A$}}{
$\exprset \gets \emptyset$, $\varset \gets \emptyset$\;
\ForEach{$\assert{e} \in A$}{
\If{$\Depends(e, \eta)$}{\label{line:template_depends_eta}
\If{$\assert{e}$ is generated by \ruleref{T-If}}{
$e' \gets $ the branch condition of \code{\textbf{if}}\;
$\exprset \gets \exprset \cup \{e'\}$\;
}
\ForEach{$v \in Vars\cup \{e_1[e_2] | e_1[e_2] \in e\}$ }{
\If{$\Gamma_s\not\proves v:\basety_0 \land \Depends(e, v)$}{
$\varset \gets \varset \cup \{v\}$ \;
}
}
}
}
\ForEach{$e \in \exprset\cup \varset$ }{
remove $e$ from \exprset and \varset if not in scope\;
}
\Return \exprset,\varset;
}
\caption{Template generation for $\eta := \lapm{r}$}
\end{algorithm}
\input{figures/smartsum.tex}
\section{Complete Transformation Rules}
In this section we list the transformation rules in Figure~\ref{fig:trans_rules} for completeness. Note that most rules are identical to the ones in CheckDP~\cite{checkdp}, with the differences highlighted in gray.
\input{figures/complete_rules.tex}
\subsection{Case Studies}
To illustrate the expressiveness of \tool and its capability of synthesizing privacy mechanisms of different characteristics, we used a standard benchmark as seen in prior works~\cite{shadowdp, checkdp,Ding2018CCS,Bichsel2018CCS,Aws:synthesis}, including SVT under different conditions, other variants of SVT such as NumSVT and GapSVT, the Report Noisy Max mechanism~\cite{diffpbook}, Partial Sum and Smart Sum~\cite{chan10continual}. The pseudo-code and transformed program of each case study can be found in the appendix.
\subsection{Challenges}
\label{sec:challenges}
The goal of this paper is to \emph{automatically synthesize} a differentially private program (e.g., function $\code{SVT}$) from a base program that is not necessarily differentially private (e.g., function $\code{SVTBase}$). Like other program synthesis techniques~\cite{CEGIS,gulwani2011}, the synthesized program must implement similar functionality to the original program / specification. Since a privacy mechanism injects noise to offer privacy, this can be more precisely stated as: any output of the original program is still possible for the synthesized program.
What distinguishes \tool from other program synthesizers is its capability of synthesizing a \emph{private} and \emph{useful} counterpart of the original program:
\begin{itemize}
\item Privacy: the synthesized program needs to inject sufficient noise in the right places to satisfy pure differential privacy, as formally defined in Definition~\ref{def:diffpriv}.
\item Utility: the synthesized program needs to carefully calibrate the injected noise to make the randomized outputs useful (i.e., to make the outputs ``close'' to the ones from the original program). This involves choosing the correct noise scales (including using no noise wherever it is safe to do so).
\end{itemize}
Next, we highlight the main challenges in both aspects.
\paragraph{Privacy} Developing differentially private mechanisms is a nontrivial task: injecting sufficient amount of noise in the right places and then proving correctness is notoriously tricky. For instance, Lyu et al.~\cite{ninghuisparse} catalog several incorrect variants of SVT, where each variant slightly modifies the functionality and/or injected noise of function \code{SVT} in Figure~\ref{fig:svt} (for now, safely ignore the annotations in the function signature). While the changes are minimal, the incorrect variants fail to meet their claimed differential privacy guarantees. For example, one variant tweaks the mechanism to output the noisy query answer when it is above the threshold. That is, it changes
Line 7 of \code{SVT}
by replacing $\true$ with $q[i]+\eta_2$. %
As a result, it fails to satisfy $\epsilon$-differential privacy for any value of $\epsilon$~\cite{ninghuisparse}.
\paragraph{Utility}
What makes synthesizing differentially private mechanisms even more challenging is that we also need to add as little noise as possible while maintaining the desired privacy levels (otherwise the noisy outputs may not be useful). For example, in the simplest case, if we increase the scale of noise injected at Lines 2 and 5 in SVT (Figure~\ref{fig:svt}), the mechanism is still $\epsilon$-differentially private. However, the extra randomness reduces the accuracy of SVT.
\input{figures/svt_alt}
Furthermore, utility is also affected by \emph{where} the noise is added. %
For example, an alternative way of making function $\code{SVTBase}$ $\priv$-private is shown in Figure~\ref{fig:svtalt}. Compared with SVT, SVT-ALT does not add any noise to the threshold $T$; instead, it injects Laplace noise $\lapm{(size/\priv)}$ (rather than $\lapm{(3N/\priv)}$) to each query answer. This provides the same privacy guarantees (SVT and SVT-ALT both satisfy $\epsilon$-differential privacy for the same value of $\epsilon$). However, since $N$ is typically much smaller than $\texttt{size}$ (the total number of queries), SVT-ALT injects significantly more noise into its computation.
Handling these kinds of decisions during the synthesis process is a highly non-trivial task and requires deep understanding of the privacy cost introduced by each sampling instruction. For example,
SVT and its correct variants~\cite{diffpbook,ninghuisparse,ashwinsparse,freegap} have the interesting property that outputting $\false$ \emph{does not} incur any privacy cost (i.e., the costs\footnote{The privacy cost of the threshold is $\epsilon/3$ and each of the $N$ $\true$ outputs incurs a privacy cost of $2\epsilon/(3N)$.} are only incurred for making the threshold noisy and for outputting $\true$). On the other hand SVT-ALT is too naive and incurs a privacy cost of $\epsilon/\texttt{size}$ for each iteration of the while loop (for a total cost of $\epsilon$). %
\begin{figure*}[t]
\includegraphics[width=.8\textwidth]{figures/overview.pdf}
\vspace{-8pt}
\caption{Overview of \tool.}\label{fig:overview}
\end{figure*}
Finally, in many mechanisms (including SVT) and its variants, one needs to decide how to divide up a total privacy budget $\priv$ among different parts of the mechanism (i.e., what should the privacy cost of each part of the mechanism be). In the case of SVT, a synthesizer would decide how much of the budget should be consumed by adding noise to threshold $T$ and how much should be consumed by the while loop. This is equivalent to deciding how much noise should be used for the threshold and how much should be used for the noisy query answers. In Figure~\ref{fig:svt}, the noise scale for the threshold is $\sigma_1=3/\epsilon$ while the noise scale for each query answer is $\sigma_2=3N/\epsilon$. However, any choice of $\sigma_1,\sigma_2$ that satisfies $1/\sigma_1 + 2N /\sigma_2=\epsilon$ will result in $\epsilon$-differential privacy \cite{ninghuisparse}.
As shown by Lyu et al.~\cite{ninghuisparse}, an approximately optimal ratio of $\sigma_1:\sigma_2$ is $1:(2N)^{2/3}$. %
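The budget-splitting constraint can be illustrated with a small helper (a Python sketch, not part of \tool): any choice of the ratio $\sigma_2/\sigma_1$ yields scales satisfying $1/\sigma_1 + 2N/\sigma_2 = \epsilon$, and the ratio $N$ recovers the $3/\epsilon$ and $3N/\epsilon$ scales of the SVT example.

```python
def svt_scales(eps, N, ratio=None):
    """Return (sigma1, sigma2) satisfying 1/sigma1 + 2N/sigma2 = eps.

    ratio = sigma2/sigma1; the default is the approximately optimal
    1 : (2N)**(2/3) split.  (Schematic helper for illustration only.)"""
    if ratio is None:
        ratio = (2 * N) ** (2 / 3)
    sigma1 = (1 + 2 * N / ratio) / eps   # solve the budget constraint
    return sigma1, ratio * sigma1

# ratio = N reproduces sigma1 = 3/eps and sigma2 = 3N/eps:
s1, s2 = svt_scales(1.0, 10, ratio=10)
assert abs(s1 - 3.0) < 1e-9 and abs(s2 - 30.0) < 1e-9

# The privacy constraint also holds for the default (optimal) ratio:
s1, s2 = svt_scales(0.5, 25)
assert abs(1 / s1 + 2 * 25 / s2 - 0.5) < 1e-9
```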
\subsection{Approach Overview}
To synthesize a privacy mechanism, \tool adds a proper amount of noise to the original program. This naturally involves two tasks: (1) finding program locations to add random noise to, and (2) finding the amount (scale) of each noise.
Accordingly, \tool synthesizes a privacy mechanism as
shown in Figure~\ref{fig:overview}.
\paragraph{Phase 1: Sketch Generation (Section~\ref{sec:sketch})}
In Phase 1, \tool generates a \emph{sketch mechanism} with candidate locations for noise. The sketch mechanism might contain more locations for noise than needed, as the unnecessary ones will eventually be optimized away in Phase 2. Moreover, each noise location $\eta_i$ is paired with a scale template $\scale_i$ which consists of a set of unknown scale holes $\scalehole$ to be synthesized in Phase 2. We use $M'(inp,\scalehole)$ to denote such a sketch mechanism with unknown scale holes.
\paragraph{Phase 2: Synthesis Loop (Section~\ref{sec:synthesis})}
Due to the tension between privacy and utility, mechanism synthesis cannot proceed without privacy in mind. Hence, \tool next generates a transformed relational program with both scale templates $\scale$ containing holes $\scalehole$, and proof templates (in the form of alignments) $\alignment$ containing holes $\hole$ to be synthesized.
Next, \tool employs a customized CEGIS loop that iteratively refines a candidate mechanism (i.e., an instantiation of $\hole$ and $\scalehole$) by generating more and more counterexamples (i.e., inputs that violate privacy constraints). %
The CEGIS loop consists of two components. The counterexample generation component starts with a null mechanism (with $\hole = \vec{0}$ and $\scalehole = \vec{1}$) and first searches for a counterexample (i.e., inputs) that \emph{maximizes} the total number of privacy violations. The reason behind the optimization goal is the following: CEGIS benefits greatly from a good set of counterexamples; intuitively, a counterexample that violates the maximum number of privacy constraints serves as a better guide than others.
With a set of counterexamples, the mechanism generation component searches for a mechanism (i.e., an instantiation of the mechanism template) that \emph{maximizes} utility while still being private. More specifically, utility is assessed in terms of both privacy and accuracy:
\begin{itemize}
\item \textbf{Privacy.} A mechanism must be private for all previously seen counterexamples. Hence, any mechanism that is deemed as non-private on counterexamples has a negative utility score.
\item \textbf{Accuracy.} \tool is parameterized by either a default utility function (sum of variances), or a user-provided one. The utility function is used as the quality metric of each private candidate.
\end{itemize}
Once \tool finds a mechanism where no counterexamples can be found, the CEGIS loop terminates and \tool sends the mechanism to a verifier (we use CPAChecker~\cite{beyer2011cpachecker}). Note that although we did not encounter any incorrect synthesized mechanism in our experiments, verification is needed in general as an optimizer might miss a solution when one exists.
\subsection{Differential Privacy}
In this paper,
we focus on \emph{pure} differential privacy~\cite{dwork06Calibrating}.
Intuitively, a data analysis $A$ satisfies differential privacy if and only if for any dataset $D$, adding, removing, or changing a record in $D$ has little impact on the analysis result. Therefore, a differentially private analysis reveals little about any data record being analyzed. Each analysis is built out of atomic components called differentially private mechanisms (privacy mechanisms for short). These components themselves satisfy differential privacy.\footnote{In general, the privacy parameter of the analysis is upper bounded by the sum of the individual privacy parameters of the mechanisms \cite{pinq}.}
More formally, we say that two datasets $D, D^\prime\in \algoinput$ are \emph{adjacent}, written $D\sim D'$, when they only differ on one record.
To offer privacy, a differentially private mechanism (or analysis), say $\mechanism: \algoinput \rightarrow \algooutput$, injects carefully calibrated random noise during its computation. We call the execution of $\mechanism$ on $D$, written $\mechanism(D)$, the \emph{original execution} and its execution on (neighboring) dataset $D^\prime$, written $\mechanism(D^\prime)$, the \emph{related execution}. Intuitively, $\mechanism$ (or $A$) is $\epsilon$-differentially private for some constant $\priv$ if for any possible output $o\in \algooutput$, the ratio between the probabilities of producing $o$ on $D$ and $D'$ is bounded by $e^\priv$:
\begin{definition}[Pure Differential Privacy \cite{Dwork06diffpriv}]
\label{def:diffpriv}
Let $\epsilon \geq 0$. A probabilistic computation $\mechanism: \algoinput \rightarrow \algooutput$ is $\priv$-differentially private if %
$\forall D\sim D^\prime$ (where $D,D^\prime \in \algoinput$) and $\forall o\in \algooutput$, we have $$\prob[M (D) = o]\leq e^\priv \prob[M (D') = o]$$ %
\end{definition}
A differentially private analysis $A$ interacts with a dataset through one or multiple privacy mechanisms that take a list of queries and their exact answers as input, and produce a differentially private (noisy) aggregation of them. An important factor to determine the amount of noise needed for privacy is the \emph{sensitivity} of queries, which intuitively quantifies the maximum difference of the query results on adjacent databases. We use a vector $(q_1,q_2,\ldots)$ to denote the exact query answers from running a sequence of queries on a dataset and say that each query answer $q_i$ has a \emph{sensitivity} of $\Delta_i$ if its corresponding query has a \emph{global sensitivity} of $\Delta_i$:
\begin{definition}[Global Sensitivity \cite{diffpbook}]
\label{def:sensitivity}
The global sensitivity $\Delta_f$ of a query $f$ is $\sup_{D\sim D'} \abs{f(D)-f(D')}$.
\end{definition}
Similar to dataset adjacency, we say two vectors of query answers are \emph{adjacent}, written $(q_1,q_2,\ldots)\sim (q_1',q_2',\ldots)$, when $\forall i.~|q_i-q_i'|\leq \Delta_i$. Moreover, a privacy mechanism $\mechanism$ satisfies $\epsilon$-differential privacy if for all pairs of adjacent query answers $(q_1,q_2,\ldots)\sim (q_1',q_2',\ldots)$ and all outputs $o\in \algooutput$, we have $\prob[\mechanism(q_1,q_2,\ldots,\text{params}) = o] \leq e^\priv \prob[\mechanism(q_1',q_2',\ldots,\text{params}) = o]$, where params represent data-independent parameters (e.g., the value of $\priv$) to $\mechanism$.
As the goal of this paper is to synthesize privacy mechanisms, we assume that the sensitivities of inputs are either manually specified or computed by sensitivity analysis tools (e.g.,~\cite{Fuzz,DFuzz}).
One popular privacy mechanism is the Laplace Mechanism~\cite{dwork06Calibrating}, which adds Laplace noise to query answers.
\begin{theorem}[Laplace Mechanism \cite{dwork06Calibrating}]
Let $\lapm(n)$ be a sample from the Laplace distribution with mean 0 and scale $n$. The Laplace Mechanism takes as input a query answer $q$ with sensitivity $\Delta_q$, and a privacy parameter $\epsilon$. It outputs $q + \lapm{(\Delta_{q}/\epsilon)}$ and it satisfies $\epsilon$-differential privacy.
\end{theorem}
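A minimal sketch of the Laplace Mechanism, using inverse-CDF sampling (illustrative Python, not the implementation used by \tool):

```python
import math
import random

def laplace_noise(scale, rng=random):
    """Draw one sample from Lap(0, scale) via the inverse-CDF transform."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def laplace_mechanism(q, sensitivity, eps, rng=random):
    """Release query answer q with eps-differential privacy."""
    return q + laplace_noise(sensitivity / eps, rng)

# Empirically, the noisy answers are centred on the true answer:
rng = random.Random(0)
samples = [laplace_mechanism(10.0, 1.0, 1.0, rng) for _ in range(100000)]
assert abs(sum(samples) / len(samples) - 10.0) < 0.1
```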
In this paper, we will use Laplace noise to also synthesize more sophisticated privacy mechanisms.
\subsection{Randomness Alignment}
\label{sec:alignment}
To synthesize a privacy mechanism, we need to reason about its correctness (i.e., it must satisfy pure differential privacy with a given privacy parameter $\epsilon$). To mechanize the correctness reasoning, we adopt the \emph{Randomness Alignment} technique, a simple yet powerful proof technique that enables various verification tools and counterexample detectors~\cite{lightdp,shadowdp,checkdp}.
Consider a privacy mechanism $\mechanism$ and an arbitrary pair of adjacent vectors of query answers $(q_1,q_2,\ldots)\sim (q_1',q_2',\ldots)$. A randomness alignment is a function $\phi: \mathbb{R}^{\infty}\rightarrow \mathbb{R}^{\infty}$ that maps random samples used by an execution of $\mechanism$ on $(q_1,q_2,\ldots)$ to random samples used by the adjacent execution of $\mechanism$ on $(q_1',q_2',\ldots)$ such that both executions produce the same output.
For example, consider the mechanism $\mechanism(x)=x+\lapm(\priv)$ that adds Laplace noise to a query answer $x$ of sensitivity 1. Then, given any pair of adjacent query answers $q\sim q'$, the function $\phi(r)=r+q-q'$ is an alignment. The reason is that for any possible Laplace random sample $\eta$ generated by $M(q)$, we have $q+\eta=q'+(\eta+q-q')=q'+\phi(\eta)$ (i.e., $M(q')$ produces the same result when its Laplace sample is $\phi(\eta)$).
To finish the privacy proof, we note that for Laplace distribution $\lapm(\priv)$, the ratio of the probabilities of sampling $\eta$ and $\phi(\eta)$ is bounded by $e^{\priv \max_{r\in \mathbb{R}} |\phi(r)-r|}=e^{\priv \max_{r\in \mathbb{R}} |q-q'|}\leq e^\priv$. Hence, the \emph{privacy cost}, the natural log of this ratio, is bounded by $\epsilon$.
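This alignment argument can be checked mechanically (a toy Python sketch with arbitrary values):

```python
import math

def M(x, eta):
    """M(x) = x + Lap(1/eps); eta is the concrete sample drawn."""
    return x + eta

def phi(eta, q, q_prime):
    """Randomness alignment for M: shift the sample by q - q'."""
    return eta + q - q_prime

q, q_prime = 5.0, 5.7        # adjacent query answers, |q - q'| <= 1
for eta in (-1.3, 0.0, 0.3, 2.8):
    # The aligned execution produces the identical output ...
    assert math.isclose(M(q, eta), M(q_prime, phi(eta, q, q_prime)))
    # ... at a shift |phi(eta) - eta| = |q - q'| <= 1, so the privacy
    # cost with noise scale 1/eps is at most eps.
    assert abs(phi(eta, q, q_prime) - eta) <= 1.0
```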
In general, it is useful to treat the privacy cost as a function of the alignment needed for each sampling instruction. For each sampling instruction $\eta=\lapm(r)$, we define the \emph{distance} of $\eta$, written as \distance{$\eta$}, as $\phi(\eta)-\eta$\footnote{Here we abuse notation slightly by applying $\phi$ point-wise, letting $\phi(\eta)$ be the random sample $M$ should use in place of $\eta$ in the adjacent execution.}. Then, the privacy cost of aligning the sample $\eta$ is bounded by $\frac{|\distance{$\eta$}|}{r}$. To find the overall privacy cost (i.e., the $\epsilon$ in pure differential privacy), we sum the privacy costs of all samples generated during program execution, by the Composition Theorem of pure differential privacy~\cite{diffpbook}. We note that since we can align each sample individually, randomness alignment is also applicable to sophisticated mechanisms where the composition theorem falls short~\cite{lightdp,shadowdp}. This is key to the automated synthesis of the variety of mechanisms studied in this paper.
\subsection{Particle Swarm Optimization (PSO)}
Prior tools using the Randomness Alignment technique (e.g.,~\cite{lightdp, shadowdp, checkdp}) focus on \emph{privacy} only; they model privacy proof as a constraint-solving problem which is solved by an external SMT solver.
However, %
synthesizing DP mechanism is better described as a \emph{constrained optimization} problem: maximizing \emph{utility} among various candidates that have the same overall differential privacy parameter $\epsilon$.
In this paper, we use Particle Swarm Optimization (PSO)~\cite{pso} to help with the synthesis. PSO is a meta-heuristic optimization algorithm inspired by swarm behaviors such as bird flocking in nature. It deploys a large population of candidate solutions (``particles'') in the search space and the particles move around iteratively to find the best location. In each iteration, each particle updates its position and velocity according to a mathematical formula combining its own local best position, the swarm's best position and its previous velocity. By adopting this strategy, the entire swarm is guided towards the best solutions. PSO makes no assumption about the problem being optimized and is suitable for very large search spaces. This is well suited for our complex, non-differentiable optimization problem, which makes gradient-based optimization methods inapplicable. Specifically for the synthesis task, each candidate mechanism in the search space corresponds to a particle in PSO, and the instantiation of the sketch mechanism serves as its position. In each iteration, every candidate explores the search space by changing itself slightly according to the current global best candidate (with the best utility), its own local best in history and the amount of change from previous iterations. The global best solution is returned after a number of iterations.
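A minimal one-dimensional PSO can be sketched as follows (illustrative Python with standard inertia and acceleration coefficients, not \tool's actual optimizer):

```python
import random

def pso(f, lo, hi, n_particles=30, iters=200, seed=0):
    """Minimal 1-D particle swarm optimizer (standard update rule):
    velocity mixes inertia, a pull towards the particle's own best
    position, and a pull towards the swarm's best position."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                    # per-particle best positions
    gbest = min(xs, key=f)           # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (0.7 * vs[i]
                     + 1.5 * r1 * (pbest[i] - xs[i])
                     + 1.5 * r2 * (gbest - xs[i]))
            xs[i] += vs[i]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
    return gbest

# A toy objective: the swarm converges to the minimum at x = 3.
best = pso(lambda x: (x - 3.0) ** 2, -10.0, 10.0)
assert abs(best - 3.0) < 0.1
```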
\subsection{Sparse Vector Technique (SVT)}\label{sec:svt}
\input{figures/svt}
In this paper, we use Sparse Vector Technique (SVT)~\cite{diffpbook} and its variants as running examples. Given a sequence of queries, SVT tries to find the first $N$ queries whose query answers are likely\footnote{The uncertainty is introduced by privacy requirements.} to be above a publicly known threshold $T$. When privacy is not a concern, the pseudo code of SVT's basic functionality is shown in Figure~\ref{fig:svt} (we call it \code{SVTBase}). For now, we can safely ignore the function signature. \code{SVTBase} checks each exact query answer: it outputs $\true$ (resp. $\false$) if the query answer is above (resp. below) the threshold until $N$ $\true$ outputs are produced.
To enforce differential privacy, SVT adds \emph{carefully calibrated} independent Laplace noise both to the threshold ($T$) and to each query answer ($q[i]$). The pseudo code is shown in Figure~\ref{fig:svt} (we call it \code{SVT}), where the changes are highlighted. The sampling instruction $\lapm(r)$ draws one sample from the Laplace distribution with mean 0 and scale factor of $r\in \mathbb{R}$. For each query, the mechanism outputs $\true$ if the \emph{noisy} query answer is above the \emph{noisy} threshold; otherwise it outputs $\false$. It is well-known that SVT satisfies $\epsilon$-differential privacy~\cite{diffpbook}.
\subsection{Syntax of Source and Target Program}
\input{figures/syntax.tex}
\paragraph{Source Language}
The syntax of \tool source code is listed in Figure~\ref{fig:syntax}. The source language models an expressive imperative language with the following standard features:
\begin{itemize}
\item Values of real numbers, Booleans and operations on them;
\item Ternary expressions $\ternary{\bexpr}{\nexpr_1}{\nexpr_2}$, which returns $\nexpr_1$ (resp. $\nexpr_2$) when $\bexpr$ evaluates
to $\true$ (resp. $\false$);
\item List of values as well as append (::) and projection ([]) operations on lists. Note that all lists are initialized to be empty.
\item No-op commands ($\skipcmd$), assignments, sequential commands ($c_1;c_2$), return commands, if branches and while loops.
\end{itemize}
One novel feature of the source language is a while-private loop written as $\whilepriv{e}{c}$; it requests the synthesizer to synthesize an \emph{adaptive} privacy mechanism (e.g., Adaptive Sparse Vector with Gap~\cite{freegap}) that runs $\whilecmd{e}{c}$ until the privacy budget is exhausted. This powerful feature allows the synthesized privacy mechanism to adaptively control the number of outputs based on the remaining privacy budget, in order to increase the number of queries that it can process. We show how to synthesize the Adaptive Sparse Vector with Gap mechanism in Section~\ref{sec:utility_by_example}.
Finally, the source language requires a few user-provided privacy specifications that the synthesizer should obey, including private inputs and their sensitivity\footnote{Determining the sensitivity of queries is crucial to produce an appropriate noise scale. Here, we assume that this information is provided by the user, as the sensitivities of simple queries, such as sum, mean and median, are fairly easy to compute as demonstrated in \cite{diffpbook}. For more complex queries, users can either derive manually or use sensitivity analysis tools (e.g., \cite{rappor,ashwin08:map}) to calculate sensitivity.}, the desired privacy bound (i.e., $\epsilon$ in $\epsilon$-differential privacy), as well as assumptions on the query answers. While we do not formalize the syntax of such specification, we use $\synth{\code{type}}$ to denote
private input of some $\code{type}$, $\bound{\priv}$ to denote the privacy budget, and specify sensitivity on private inputs ($\distance{$x$}$ represents the sensitivity of $x$) and other assumptions on inputs as program precondition. For example, the source program $\code{SVTBase}$ in Figure~\ref{fig:svt} specifies that query answers $q$ are the only private inputs and their sensitivity is 1. Moreover, the mechanism assumes that $N$ is much smaller than $size$, and the goal is to synthesize an $\epsilon$-differentially private mechanism.
\paragraph{Target Language}
The goal of \tool is to synthesize a randomized mechanism that both preserves the source program's semantics and offers $\epsilon$-differential privacy (where $\epsilon$ is annotated in the source program). Hence, the target language (shown in Figure~\ref{fig:syntax}) is similar to the source language, with a few important changes:
\begin{itemize}
\item The target language is probabilistic: it extends the (deterministic) source language with random variables $\eta$ and sampling commands, written as $\eta:=\lapm(\nexpr)$.
\item The target language excludes the (non-executable) while-private loops; such loops in the source code are replaced by fully synthesized standard loops that terminate the loop whenever the privacy budget is exhausted.
\end{itemize}
Consider Figure~\ref{fig:svt}. Function $\code{SVT}$ is the target program synthesized from the source program $\code{SVTBase}$. Note that the two are very similar, but function $\code{SVT}$ properly injects noise at various locations to satisfy $\epsilon$-differential privacy.
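For intuition, the shape of such a target program can be sketched in ordinary Python. This is the textbook Sparse Vector mechanism for sensitivity-1 queries, not \tool's exact output; the variable names are ours, and Laplace noise is sampled as the difference of two exponential draws.

```python
# Textbook Sparse Vector Technique (illustrative sketch, not DPGen's output).
# Laplace(b) noise is sampled as the difference of two Exp(mean b) draws.
import random

def svt(q, T, N, eps, seed=0):
    rng = random.Random(seed)
    lap = lambda b: rng.expovariate(1 / b) - rng.expovariate(1 / b)
    out, answered = [], 0
    T_noisy = T + lap(2 / eps)          # threshold noise, added once and reused
    for qi in q:
        if qi + lap(4 * N / eps) >= T_noisy:   # fresh noise per query
            out.append(True)
            answered += 1
            if answered >= N:           # budget only covers N "above" answers
                break
        else:
            out.append(False)
    return out

print(svt([1, -5, 3, -2, 4], T=0, N=2, eps=1.0))
```

The key structural point, mirrored in the synthesized $\code{SVT}$, is that the noisy threshold is computed once and reused, while each query receives fresh noise.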
\subsection{Adding Noise Locations to Source Code}
The first step of \tool is to find a \emph{set of program locations} in the source program where extra noise is needed. In this step, the primary concern is \emph{privacy}; in other words, the lack of randomness in the source program violates differential privacy. Hence, we use static program analysis to (1) identify \emph{where} privacy is violated in the source code, (2) infer a set of variables that might require randomness, and (3) instrument the source code to inject noise to the identified variables.
\subsubsection{Identify Violations of Differential Privacy}
Recall that \tool is built on the Randomness Alignment technique (Section~\ref{sec:alignment}) to reason about privacy. Hence, instead of analyzing properties on distributions directly, as stated in Definition~\ref{def:diffpriv}, we over-approximate ``Violations of Differential Privacy'' as ``Violations of Alignment Requirements''. Recall that randomness alignment requires that when running on a pair of adjacent private inputs, a program will produce identical outputs. Since the source code has no randomness, this requirement can be formalized as the standard non-interference property~\cite{noninterference}. Hence, we use a static taint analysis (e.g.,~\cite{sm-jsac,vsi96,Hunt:flowsensitive}) to identify violations in the source code:
\begin{itemize}
\item Initially, only the private inputs are tainted.
\item The analysis tracks all \emph{explicit flows} in the program.
\item The analysis does not track, but reports all \emph{implicit flows}, where a tainted value is used in a branch condition.
\item The analysis reports all outputs with a tainted value.
\end{itemize}
For example, since query answers $q$ are the only tainted inputs in \code{SVTBase} (Figure~\ref{fig:svt}), the taint analysis finds one violation of privacy at Line 3, where the branch condition uses a tainted value $q[i]$. Since the taint analysis is standard, we omit the details here.
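A toy version of such an analysis can be sketched as follows. This is purely illustrative (it is not \tool's implementation); statements are encoded as tuples in an encoding of our own, with assignments tracking explicit flows and branches and outputs reported when tainted.

```python
# Minimal taint-analysis sketch (illustrative only, not DPGen's implementation).
# Statements: ("assign", lhs, rhs_vars), ("branch", cond_vars), ("output", vars).

def taint_analyze(program, private_inputs):
    """Track explicit flows; report implicit flows and tainted outputs."""
    tainted = set(private_inputs)   # initially only private inputs are tainted
    violations = []
    for i, stmt in enumerate(program):
        if stmt[0] == "assign":
            _, lhs, rhs_vars = stmt
            # explicit flow: lhs becomes tainted iff some rhs variable is
            if any(v in tainted for v in rhs_vars):
                tainted.add(lhs)
            else:
                tainted.discard(lhs)
        elif stmt[0] == "branch":
            _, cond_vars = stmt
            # implicit flow: a tainted value used in a branch condition
            if any(v in tainted for v in cond_vars):
                violations.append(("implicit-flow", i))
        elif stmt[0] == "output":
            _, out_vars = stmt
            if any(v in tainted for v in out_vars):
                violations.append(("tainted-output", i))
    return violations

# SVTBase-like skeleton: q is private, T is public; the branch on q is flagged.
prog = [("assign", "T", ["T_public"]),
        ("branch", ["q", "T"]),
        ("output", ["out"])]
print(taint_analyze(prog, {"q"}))  # [("implicit-flow", 1)]
```

On the skeleton above, the only reported violation is the branch that compares the tainted query against the public threshold, matching the single violation found in \code{SVTBase}.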
\subsubsection{Identify Offending Variables}
The static taint analysis returns a set of offending assignments $x:=e$ and offending branches $\ifcmd{e}{c_1}{c_2}$, where $e$ is tainted. We use $\mathbb{E}$ to represent the set of expressions that are either on the RHS of offending assignments or in the condition of offending branches. Next, we need to infer a set of variables that, when randomized, will allow a randomness alignment to exist on the randomized code. We call such a set of variables \emph{offending variables}.
Consider the offending branch in our running example:
\begin{lstlisting}[numbers=none, frame=none, escapechar=@]
@\ifcmd{q[i] $\geq$ T}{...}{...}@
\end{lstlisting}
where $q[i]$ is tainted while $T$ is not. To make the branch outcome identical on two adjacent inputs $q[i]\sim q'[i]$, we can inject noise into $q[i]$, into $T$, or into both. While all options allow the offending branch to be aligned, the difference shows up when we analyze their corresponding utility. For example, adding noise to $T$ is crucial to make SVT useful; intuitively, it allows the noisy $T$ to be reused across different loop iterations, which results in a less noisy program. We defer the discussion on utility to Section~\ref{sec:util}.
Based on the insight above, we define all variables used in any $e\in \mathbb{E}$ as offending variables. Note that by definition, the set of tainted variables is always a subset of offending variables.
\subsubsection{Instrument Source Code with Extra Noise}
\input{figures/svt_sketch.tex}
Finally, \tool injects noise with \emph{unknown scales} (to be synthesized in later stages) to the source code. In particular, it injects Laplacian noise both at the definition of an offending variable, as well as right before its corresponding uses in an offending command. While adding noise to both locations might seem unnecessary at this point, \tool eventually uses a utility optimizer (Section~\ref{sec:util}) to remove unnecessary noise in the code sketch.
Moreover, as the scale of each Laplacian noise is unknown at this point, we replace them with \emph{scale templates} as follows:
\[
(\scalehole_0 + \sum_{v_i \in \varset} \scalehole_i \times v_i) / \epsilon \text{ with fresh } \scalehole_i
\]
where $\varset$ contains all \emph{non-private} function parameters (as making scale private could violate privacy directly by revealing distribution statistics).
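Template construction can be sketched as follows; this is a hypothetical helper of ours, with hole and variable names as placeholders, producing one fresh hole $\scalehole_i$ per non-private parameter plus a constant hole.

```python
# Sketch of scale-template construction: (lam_0 + sum_i lam_i * v_i) / eps,
# with a fresh hole per non-private parameter.  Names are placeholders.
import itertools

_counter = itertools.count()

def scale_template(public_params):
    """Return a symbolic template string and its fresh holes lam_i."""
    holes = ["lam%d" % next(_counter) for _ in range(len(public_params) + 1)]
    terms = [holes[0]] + ["%s*%s" % (h, v)
                          for h, v in zip(holes[1:], public_params)]
    return "(" + " + ".join(terms) + ")/eps", holes

tmpl, holes = scale_template(["N", "size", "T"])
print(tmpl)   # (lam0 + lam1*N + lam2*size + lam3*T)/eps
```

Each sampling site in the sketch receives its own template, so the later optimization stage can assign every Laplace noise an independent scale.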
Returning to our running example of SVT, the code sketch with extra noise is shown in Figure~\ref{fig:svt_sketch}, where all changes are highlighted. Notably, the sketched function explicitly adds scale parameters $\lambda$ (we use $\lambda_i$ instead of $\lambda[i]$ for better readability) as extra inputs to be optimized later. No noise is injected at Line 2 for $q[i]$, essentially an iterator of $q$, as it is not in scope at that point.
Hereafter, we use $M(inp)$ and $M'(inp, \lambda)$ to represent the original program with inputs $inp$ and mechanism sketch with scale parameters $\lambda$ respectively.
\section{Synthesis and Optimization}
\label{sec:synthesis}
In Phase 2, \tool completes program synthesis with two sub-goals:
\begin{itemize}
\item It synthesizes and optimizes the randomness alignment of each sampling instruction; a sampling instruction with alignment $0$ implies that the instruction can be removed without violating differential privacy.
\item It synthesizes and optimizes the scales $\lambda$ in the sketch code from Phase 1 to offer good utility.
\end{itemize}
The main challenge is that instead of synthesizing \emph{some} privacy proof (as done in prior work with proof synthesis~\cite{checkdp,Aws:synthesis}) or \emph{optimizing} scales with given randomness locations (as done in~\cite{learning}), our goal is to synthesize and optimize both the proof (with the fewest randomness locations) and the scales.
We first introduce the optimization problem without any while-private loop in the source code and assume a default utility function that minimizes the sum of variances. Then, we propose a synthesis loop that optimizes alignments and scales simultaneously. Finally, we generalize the approach to optimize sketch code with while-private loops and customized utility functions.
\subsection{Mechanism Synthesis Problem}
\label{sec:modeling}
\input{sec_modeling}
\subsection{Mechanism Optimization Problem}\label{sec:generation}
Recall that the goal of \tool is to generate an \emph{accurate} and \emph{private} mechanism. That is, for a search space of alignment holes $\Theta$ and scale holes $\Lambda$, the constrained optimization problem is defined as follows:
\begin{equation}
\begin{aligned}
\max_{(\hole,\scalehole) \in \Theta \times \Lambda} & \quad \code{Utility}(M', \hole, \scalehole) \\
\textrm{s.t.} & \quad \forall inp.~\text{all assertions in } M'' \text{ pass}
\end{aligned}
\nonumber
\end{equation}
\begin{figure}
\includegraphics[width=\linewidth]{figures/loop.pdf}
\caption{Overview of the search loop.}\label{fig:loop}
\end{figure}
To find alignment holes ($\hole$) and scale holes ($\scalehole$) according to the optimization problem above, \tool uses a customized Counterexample-Guided Inductive Synthesis (CEGIS)~\cite{CEGIS} loop, as illustrated in Figure~\ref{fig:loop}. Each synthesis iteration contains two steps:
\begin{itemize}
\item With a candidate mechanism (initialized with the null mechanism $\hole_0 = \vec{0}, \scalehole_0 = \vec{1}$), the ``counterexample generation'' component tries to find inputs $inp$ that ``break'' the privacy requirements (i.e., assertion violations in $M''$).
\item With a set of counterexamples seen so far, the ``mechanism generation'' component synthesizes a privacy mechanism by optimizing the utility objective function (we use PSO as a black-box optimization technique in this paper) while satisfying all previously-generated counterexamples.
\end{itemize}
The CEGIS loop terminates when no counterexamples can be generated; then, the final privacy mechanism is returned.
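The control flow of this loop can be sketched as follows. The two components are stand-ins for the PSO-based optimizers, and the toy instantiation (integer "mechanisms", counterexamples that rule out smaller candidates) is ours.

```python
# Skeleton of a CEGIS loop (simplified sketch of Figure "loop").
# `find_counterexample` and `optimize_mechanism` stand in for the optimizer.

def cegis(find_counterexample, optimize_mechanism, initial):
    candidate = initial          # e.g. the null mechanism (theta = 0, lam = 1)
    counterexamples = []
    while True:
        cex = find_counterexample(candidate)
        if cex is None:          # no assertion can be violated: done
            return candidate
        counterexamples.append(cex)
        # pick the best-utility candidate consistent with all counterexamples
        candidate = optimize_mechanism(counterexamples)

# Toy instantiation: "mechanisms" are integers; counterexample c rules out
# all candidates below c; the "optimizer" returns the smallest valid one.
def find_cex(cand):
    return cand + 1 if cand < 3 else None

def optimize(cexs):
    return max(cexs)

print(cegis(find_cex, optimize, 0))  # 3
```

The loop terminates exactly when the counterexample generator gives up, matching the termination condition described above.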
Compared with the ``bi-directional'' search loop of CheckDP~\cite{checkdp}, which improves both the privacy proof and the counterexamples simultaneously, the CEGIS loop in Figure~\ref{fig:loop} is more standard: \tool has no need to improve counterexamples, so a ``bi-directional'' loop is unnecessary.
\paragraph{Discussion on Soundness}
Note that since most optimizers (including PSO~\cite{pso} that \tool uses) are unsound (i.e., they might miss a solution when one exists), the synthesized privacy program might be (in rare cases) non-private. To ensure soundness, the synthesized mechanism can be further verified by sound tools like CheckDP~\cite{checkdp}. If verification fails, the counterexamples generated from CheckDP can be passed back to the CEGIS loop to continue the search.
In practice, we did not experience any such unsound cases by running separate verification passes in CheckDP; we leave the integration of \tool and CheckDP as future work.
\subsubsection{Counterexample Generation}
Given a candidate mechanism instantiated with some $\hole,\scalehole$, as well as a transformed mechanism with explicit alignments $M''(inp,\distance{$inp$},sample,\hole,\scalehole)$, a counterexample $C$ is defined as a solution of the following formula:
\[
\exists inp,\distance{$inp$},sample.~\text{ some assertions in } M''(inp,\distance{$inp$},sample,\hole,\scalehole) \text{ fail}.
\]
We note that this naive definition treats all counterexamples equally: two distinct counterexamples that violate 1 and 100 assertions respectively are both acceptable. To quantify and optimize the quality of counterexamples (for better performance), we slightly modify the mechanism $M''$ to return the total number of assertion violations and use an optimizer to find a counterexample according to the following metric:
\[
\max_{inp,\distance{$inp$},sample} M''(inp, \distance{$inp$}, sample, \hole, \scalehole)
\]
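As a sketch, with $M''$ modeled as a black-box function that returns the violation count, plain random search (standing in for the black-box optimizer) suffices to illustrate the metric; the encoding below is ours.

```python
# Sketch: search for inputs maximizing the number of violated assertions.
# M'' is modeled as a counting function; random search replaces the optimizer.
import random

def find_counterexample(violation_count, sample_input, trials=2000, seed=0):
    rng = random.Random(seed)
    best_inp, best = None, 0
    for _ in range(trials):
        inp = sample_input(rng)
        score = violation_count(inp)
        if score > best:
            best_inp, best = inp, score
    return best_inp, best   # best_inp is None if nothing can be violated

# Toy M'': under the null mechanism, every above-threshold query (q[i] >= 0
# with T fixed at 0) violates one alignment assertion.
count = lambda q: sum(1 for qi in q if qi >= 0)
sampler = lambda rng: [rng.uniform(-1, 1) for _ in range(5)]
inp, n = find_counterexample(count, sampler)
print(n)
```

The search naturally gravitates towards inputs with as many above-threshold queries as possible, exactly the kind of counterexample described next for the running example.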
Consider the transformed program of our running example in Figure~\ref{fig:svt_trans} with a null mechanism ($\hole=\vec{0}, \scalehole=\vec{1}$) for bootstrapping the process. The optimizer tries to find a counterexample that fails as many assertions as possible. Since no alignments are set to offset $\distance{q}[i]$ (the differences introduced by the query variable $q[i]$) in the assertions, a counterexample is found by making all queries fall into the $\true$ branch (i.e., query answers $q[i]$ are all above the threshold $T$). Suppose that later an improved alignment is fed in, one that properly aligns the branch by $-\distance{q}[i]$ and thus makes the false branch also incur a privacy loss. A counterexample will then be generated with query answers below the threshold, making the privacy cost exceed the total privacy budget (the last assertion in the code).
\subsubsection{Mechanism Generation}\label{sec:util}
In general, mechanism generation runs on both the transformed program $M''$ and the sketch mechanism $M'$ as follows:
\begin{itemize}
\item For any candidate solution (of $\hole,\scalehole$) that fails to satisfy any privacy constraint in $M''$ given any previously-generated counterexample, we assign a negative utility score to the solution.
\item Otherwise, we use the utility function $\code{Utility}(M', \hole,\scalehole)$ as its utility score.
\end{itemize}
Based on the utility scores defined above, \tool uses an optimizer to find a privacy mechanism that optimizes the utility function while remaining differentially private.
Returning to our running example, the first few discovered counterexamples likely include ones that take different branches so as to cover all code paths. They serve as good guides that lead the optimizer towards a more general solution, one that aligns the true and false branches differently using a conditional alignment of the form $\ternary{\Omega}{\bullet}{\bullet}$; other solutions receive a negative utility score since they violate privacy.
Among the solutions that do satisfy all privacy constraints, the mechanism generation component ranks them based on their utility scores. Here, a solution that assigns a large noise scale (e.g., $size/\epsilon$) to the queries, although private, will have a smaller utility score than one that assigns $3N/\epsilon$ (since $N < \code{size} / 5$ by the precondition). Moreover, a solution that assigns three random variables (two for the threshold, and one for the queries) will be less favorable due to its larger sum of variances. This shows the power of our utility metric in selecting good candidate solutions.
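The ranking just described can be made concrete with the default utility, the negated sum of noise variances, using $\mathrm{Var}[\mathrm{Lap}(\lambda)] = 2\lambda^2$; the candidate scales below are the ones discussed above.

```python
# Default utility sketch: negated sum of variances of the injected Laplace
# noise, Var[Lap(lam)] = 2 * lam**2.  Fewer / smaller scales rank higher.

def default_utility(scales):
    return -sum(2.0 * lam ** 2 for lam in scales)

eps, N, size = 1.0, 10, 100
good  = default_utility([3 * N / eps, 3 * N / eps])   # two vars at 3N/eps
bad   = default_utility([size / eps, size / eps])     # two vars at size/eps
worse = default_utility([3 * N / eps] * 3)            # three noise variables
print(good > bad and good > worse)  # True
```

With $N < size/5$, the $3N/\epsilon$ candidate dominates both the $size/\epsilon$ candidate and the three-variable candidate, reproducing the ranking stated in the text.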
\subsection{Handling While-Private Loop and User-Provided Utility Function}
\label{sec:utility_by_example}
\input{figures/adaptive.tex}
Next, we explore the full-fledged version of \tool, with the advanced features of while-private loops and user-provided utility functions. We use a recently proposed variant of SVT that we call \code{AdaptiveSVT} (i.e., Adaptive Sparse Vector with Gap in~\cite{freegap}) as an example; its pseudo-code without noise is shown in Figure~\ref{fig:adaptive_base}. Compared with SVT, there are three major changes:
\begin{itemize}
\item The mechanism uses a while-private loop (Line~\ref{line:adaptivesvt:while}) to request the synthesizer to adaptively answer as many queries as possible (the input $N$ specifies the \emph{minimum} number of above-threshold queries that the algorithm should output).
\item The mechanism partitions query answers into three ranges: $(-\infty,T)$, $[T,T+\sigma)$ and $[T+\sigma,+\infty)$ and requests \tool to automatically allocate the total privacy budget among queries in each range.
\item When $q[i]\geq T$, the mechanism releases the gap between $q[i]$ and $T$, instead of a constant.
\end{itemize}
Overall, the mechanism improves over SVT since it can use less privacy budget (i.e., add more noise) for queries that are much larger than the threshold $T$ (i.e., in range $[T+\sigma,+\infty)$), in order to increase the number of queries it can process. Moreover, it has been shown that the gap information can be released for free~\cite{freegap}.
From a program synthesis perspective, this mechanism poses two challenges for \tool: (1) synthesizing executable code for the while-private loop, and (2) adopting a user-specified utility function.
\input{figures/adaptive_synthesized}
\paragraph{Synthesizing while-private Loop}
\begin{figure*}
\small
\raggedright
\setstretch{0.85}
\begin{mathpar}
\inferrule*[right=(T-While-Priv)]
{\cdots
}
{\proves \flowrule{\env}{\whilepriv{e}{c}}{\cdots; (\whilecmd{(e\land \vpriv{\priv}\leq \epsilon-\bigcirc)}{(\vpriv{t}=\vpriv{\priv};\cdots;\assert{\vpriv{\priv}-\vpriv{t}\leq \bigcirc}))}}{\env \join \env_f}}
\vspace{-2ex}
\end{mathpar}
\caption{The transformation rule of while-private loop; the parts identical to a standard while-loop are omitted for readability. The complete rule is available in the Appendix.}
\label{fig:trans_while_priv}
\end{figure*}
Recall that in the transformed program $M''$, there is a distinguished variable $\vpriv{\priv}$ that tracks the consumed privacy cost at each program point. The transformation of the while-private loop (Figure~\ref{fig:trans_while_priv}) uses $\vpriv{\priv}$ to ensure that the loop terminates if $\vpriv{\priv}$ might exceed $\priv$ after one more iteration: it inserts an unknown bound on the privacy cost of running one iteration ($\bigcirc$) and ensures that the actual cost of \emph{each iteration} never exceeds the bound, via the assertion inserted at the end. We note that while-private ($\code{while-priv}$) is a new feature of \tool; it enables \tool to automatically infer and even optimize the loop termination conditions that were previously annotated manually in CheckDP~\cite{checkdp}.
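Operationally, the transformed loop behaves like the following sketch (ours), with the per-iteration bound $\bigcirc$ made an explicit parameter and the cost-tracking variable $\vpriv{\priv}$ modeled directly.

```python
# Sketch of the transformed while-priv loop: iterate while the tracked cost
# v_eps could not exceed eps after one more iteration bounded by `per_iter`.

def while_priv(step_cost, eps, per_iter):
    v_eps, iters = 0.0, 0
    while v_eps + per_iter <= eps:      # guard: v_eps <= eps - per_iter
        before = v_eps
        v_eps += step_cost(iters)       # loop body consumes some budget
        # assertion from the transformation: each iteration's cost <= bound
        assert v_eps - before <= per_iter
        iters += 1
    return iters

# Each iteration costs 0.3; with eps = 1 and bound 0.3, three iterations fit.
print(while_priv(lambda i: 0.3, eps=1.0, per_iter=0.3))  # 3
```

The guard and the trailing assertion correspond directly to the two inserted pieces in rule (T-While-Priv): the strengthened loop condition and the per-iteration cost check.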
\paragraph*{Discussion on the Soundness of \code{while-priv}}
Although \code{while-priv} is a new feature of \tool, we note that this feature is transformed to a normal \code{while} loop by the transformation rule in Figure~\ref{fig:trans_while_priv}. By construction, the unknown bound on the privacy cost of each loop iteration ($\bigcirc$) is sound. Moreover, as a synthesized mechanism only contains normal \code{while} loops, a synthesized mechanism can further be verified by tools like CheckDP.
\paragraph{User-Specified Utility Function}
Consider the default utility function that minimizes the sum of variances of all random variables (Equation~\ref{eq:default}). A solution that outputs no queries at all always beats other solutions since it injects no noise ($\code{Utility} = \infty$). However, such a solution fails the requirement of outputting at least $N$ queries in total, where $N$ is a parameter of the mechanism.
Therefore, a more informative utility function is required for Adaptive SVT.
Recall that the family of SVTs is designed to report whether a query answer is above a certain threshold or not. Hence, a natural utility measurement is the number of true positives and false positives among the above-threshold queries. Moreover, the design of Adaptive SVT assumes that many queries are well above the threshold; this allows the mechanism to add relatively large noise to these outliers without impacting the number of false positives.
Finally, by definition, the synthesized privacy mechanism should output at least $N$ queries in total.
Hence, we use a sample input $inp_{ex}$ where many queries are well above the threshold and create a modified sketch mechanism $M'_\hole$ that removes from $M'$ every $\eta_i$ whose alignment is 0 and returns the number of true positives ($\#tp$) and false positives ($\#fp$). The user-specified utility function is then defined as follows:
\begin{equation}
\label{eq:custom_util}
\code{Utility}(M', \hole,\scalehole) = (\#tp - \#fp) - p\times \max(N - (\#tp + \#fp), 0)
\end{equation}
where $p$ is the penalty for producing fewer than $N$ outputs, which we set to $1$ to guide the search towards solutions that answer at least $N$ above-threshold queries.
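Equation~\ref{eq:custom_util} translates directly into code; the sketch below is ours, with $\#tp$ and $\#fp$ passed in as already-computed counts.

```python
# Direct reading of the custom utility: reward true positives, penalize false
# positives, and penalize answering fewer than N queries with weight p.

def adaptive_svt_utility(tp, fp, N, p=1.0):
    return (tp - fp) - p * max(N - (tp + fp), 0)

# Answering nothing now scores -N, so it no longer beats useful mechanisms:
print(adaptive_svt_utility(0, 0, N=5))   # -5
print(adaptive_svt_utility(6, 1, N=5))   # 5
```

In particular, the degenerate "output nothing" solution that maximized the default utility is now strictly dominated by any solution answering at least $N$ queries with more true than false positives.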
\paragraph*{Choice of Utility Functions}
The quality of the synthesized mechanism depends on the quality of the utility function, as the latter defines ``utility'' in the search. In general, a proper utility function for a privacy mechanism may be both data- and application-specific, such as the one we derived for Adaptive SVT above. Nevertheless, for a variety of mechanisms, as showcased in our evaluation, the default utility function (i.e., the sum of variances of all random variables) already allows \tool to synthesize high-quality privacy mechanisms.
\section{Introduction}
\label{sec:intro}
\setcounter{equation}{0}
Grazing-sliding bifurcations occur for piecewise-smooth systems of ODEs
that are discontinuous on manifolds where they are nonsmooth, termed discontinuity surfaces.
At places on discontinuity surfaces where the vector field is directed towards the surface from both sides,
orbits evolve on the discontinuity surface --- this is known as sliding motion \cite{DiBu08,Fi88}.
A periodic orbit of a piecewise-smooth system undergoes a bifurcation
when it grazes a discontinuity surface as parameters are varied.
If, at the point of grazing, the part of the vector field that does not govern the periodic orbit
is directed towards the discontinuity surface,
then this is a grazing-sliding bifurcation, see Fig.~\ref{fig:schemGrazSlid}.
Other bifurcations of this nature are detailed in \cite{CoDi12,JeHo11,KoDi06}.
\begin{figure}[t!]
\begin{center}
\setlength{\unitlength}{1cm}
\begin{picture}(17,4.2)
\put(0,0){\includegraphics[height=4.2cm]{schemGrazSlid_a}}
\put(5.7,0){\includegraphics[height=4.2cm]{schemGrazSlid_b}}
\put(11.4,0){\includegraphics[height=4.2cm]{schemGrazSlid_c}}
\put(1.9,4.1){\small $\gamma < \gamma_{\rm graz}$}
\put(7.6,4.1){\small $\gamma = \gamma_{\rm graz}$}
\put(13.3,4.1){\small $\gamma > \gamma_{\rm graz}$}
\end{picture}
\caption{
Three phase portraits illustrating a grazing-sliding bifurcation
occurring at $\gamma = \gamma_{\rm graz}$, where $\gamma$ is a system parameter.
To the right of a discontinuity surface,
shown with a vertical line, the vector field is directed towards the discontinuity surface, as indicated.
For $\gamma > \gamma_{\rm graz}$ the orbit shown has a segment of sliding motion.
\label{fig:schemGrazSlid}
}
\end{center}
\end{figure}
Grazing-sliding bifurcations arise naturally in mechanical systems with stick-slip friction.
In this context the bifurcation occurs most simply when regular oscillations not involving sticking
transition to irregular oscillations involving recurring phases of sticking (these correspond to segments of sliding motion),
see for instance \cite{CaGi06,DiKo03,GuHo10,KoPi08,LuGe06} and references within \cite{MaLa12}.
Grazing-sliding bifurcations have been identified in predator-prey models
that are piecewise-smooth due to the assumption that predators
are only harvested when they are in sufficiently high numbers \cite{DeGr03,KuRi03}
and in a two-stage population model \cite{TaLi12}.
The dynamics associated with grazing-sliding bifurcations can be simple or extremely complicated.
An asymptotically stable periodic orbit can simply accumulate a segment of sliding motion.
Alternatively it may bifurcate into an asymptotically stable periodic orbit involving several loops
near the original periodic orbit, some of which involve segments of sliding motion \cite{SzOs09}.
The periodic orbit may bifurcate into a chaotic attractor \cite{Ko05}.
Interestingly, there is no restriction on the dimension of this attractor \cite{Gl15b,GlJe15}.
In \cite{GlKo12} it was shown that at a grazing-sliding bifurcation an asymptotically stable periodic orbit
can bifurcate into multiple attractors.
More recently in \cite{GlKo16} the same authors introduced an abstract ODE system
for which key calculations could be achieved explicitly and provided
examples for which an asymptotically stable periodic orbit bifurcates into (i) two asymptotically stable periodic orbits,
and (ii) an asymptotically stable periodic orbit and a chaotic attractor.
The purpose of this paper is to show that infinitely many attractors can be created in grazing-sliding bifurcations.
This is achieved by working with a return map that captures the local dynamics.
The return map is piecewise-smooth because
return trajectories that involve a segment of sliding motion produce a different functional form in the map
than return trajectories that do not.
As was first shown in \cite{DiKo02}, the return map is continuous and piecewise-differentiable.
To leading order, the map can be written as
\begin{equation}
f(\bx) = \begin{cases}
A_L \bx + b \mu \;, & e_1^{\sf T} \bx \le 0 \;, \\
A_R \bx + b \mu \;, & e_1^{\sf T} \bx \ge 0 \;,
\end{cases}
\label{eq:f}
\end{equation}
where $\bx \in \mathbb{R}^N$ is the state variable and
$\mu \in \mathbb{R}$ is a parameter.
The $N \times N$ matrices $A_L$ and $A_R$ differ only in their first columns (by continuity) and $b \in \mathbb{R}^N$.
Here, and throughout the paper, $e_j$ denotes the $j^{\rm th}$ standard basis vector of $\mathbb{R}^N$
for $j = 1,\ldots,N$.
The surface $e_1^{\sf T} \bx = 0$, call it $\Sigma$, is the switching manifold of \eqref{eq:f}.
Let us suppose that the right component of \eqref{eq:f} (the part with $e_1^{\sf T} \bx \ge 0$)
corresponds to return trajectories that involve a segment of sliding motion.
Since sliding motion occurs on a codimension-one surface
(namely the discontinuity surface associated with the grazing-sliding bifurcation),
the range of the right component of \eqref{eq:f} must have dimension less than $N$.
That is, $\det(A_R) = 0$.
The periodic orbit associated with the grazing-sliding bifurcation
corresponds to a fixed point of \eqref{eq:f}.
The grazing-sliding bifurcation occurs for $\mu = 0$ when this fixed point collides with $\Sigma$ at $\bx = \b0$.
In the context of \eqref{eq:f}, this is known as a border-collision bifurcation \cite{Si16}.
A mechanism for the creation of infinitely many attractors in border-collision bifurcations
was introduced for two-dimensional maps in \cite{Si14},
and generalised to maps of any number of dimensions in \cite{SiTu17}.
Here it is shown that this mechanism can occur for grazing-sliding bifurcations.
Although the required codimension is relatively high (the bifurcation is codimension-four instead of codimension-one),
about a point in parameter space at which this phenomenon occurs, for any $K \ge 1$
there exists an open set within which the system has at least $K$ attractors.
The remainder of this paper is organised as follows.
In \S\ref{sec:SiTu17} we introduce symbolic notation to characterise periodic solutions of \eqref{eq:f}.
We then state Theorem \ref{th:SiTu17}, due to \cite{SiTu17},
that lists conditions sufficient for \eqref{eq:f} to have infinitely many periodic solutions of a simple type.
It seems that the conditions of Theorem \ref{th:SiTu17} can only be satisfied for \eqref{eq:f} with $\det(A_R) = 0$
if \eqref{eq:f} is at least three-dimensional.
For this reason we focus on \eqref{eq:f} in three dimensions.
In \S\ref{sec:derivingExamples} we describe a practical approach for
determining the parameters of the three-dimensional border-collision normal form
for which the conditions of Theorem \ref{th:SiTu17} may be satisfied.
We then use this approach to construct a two-parameter family of maps satisfying the conditions of Theorem \ref{th:SiTu17}
and numerically obtain two additional examples.
In the spirit of \cite{GlKo16},
we introduce an abstract ODE system that exhibits grazing-sliding bifurcations in \S\ref{sec:odeExample}.
This system is sufficiently simple that
the parameters of the corresponding border-collision normal form
can be written explicitly in terms of the parameters of the ODE system.
Moreover, the system is designed so that
the inverse problem of determining the parameters of the ODE system
that give desired parameters in the normal form can be solved analytically.
This is explained in \S\ref{sec:parameters}
and enables us to generate grazing-sliding bifurcations at which
infinitely many asymptotically stable periodic orbits are generated.
The identification of this phenomenon in mathematical models of real world systems is left for future work.
In \S\ref{sec:bifDiag} we describe the bifurcation diagram for a representative example.
Concluding comments are presented in \S\ref{sec:conc}.
\section{Sufficient conditions for infinitely many attractors}
\label{sec:SiTu17}
\setcounter{equation}{0}
We begin by explaining how periodic solutions of \eqref{eq:f}
can be represented symbolically, as in \cite{Si16}.
Let $\cX \in \{ L,R \}^n$ be a word of length $n$ involving the symbols $L$ and $R$.
We index the elements of such a word from $0$ to $n-1$ and write $\cX = \cX_0 \cdots \cX_{n-1}$.
Given $\cX \in \{ L,R \}^n$ and $\cY \in \{ L,R \}^p$,
the concatenation of $\cX$ and $\cY$ is
\begin{equation}
\cX \cY = \cX_0 \cdots \cX_{n-1} \cY_0 \cdots \cY_{p-1} \;,
\nonumber
\end{equation}
which is a word of length $n+p$.
We write $\cX^k$, where $k \ge 0$ is an integer,
to denote the concatenation of $\cX$ with itself $k$ times.
For any $i = 0,\ldots,n-1$,
we write $\cX^{\overline{i}}$ to denote the word of length $n$
that equals $\cX$ in all elements except $\cX_i$
(e.g.~if $\cX = RLR$, then $\cX^{\overline{2}} = RLL$).
Let
\begin{align*}
f_L(\bx) &= A_L \bx + b \mu \;, \\
f_R(\bx) &= A_R \bx + b \mu \;,
\end{align*}
denote the two components of $f$, \eqref{eq:f}.
For any $\cX \in \{ L,R \}^n$, let
\begin{equation}
f_\cX = f_{\cX_{n-1}} \circ \cdots \circ f_{\cX_0} \;,
\label{eq:fX}
\end{equation}
denote the composition of $f_L$ and $f_R$ in the order specified by the elements of $\cX$.
The function $f_\cX$ is affine and given by
\begin{equation}
f_\cX(\bx) = M_\cX \bx + P_\cX b \mu \;,
\label{eq:fX2}
\end{equation}
where
\begin{align}
M_\cX &= A_{\cX_{n-1}} \cdots A_{\cX_0} \;, \label{eq:MX} \\
P_\cX &= I + \sum_{i=1}^{n-1} A_{\cX_{n-1}} \cdots A_{\cX_i} \;. \label{eq:PX}
\end{align}
An $n$-tuple $\left( \bx^\cX_0,\ldots,\bx^\cX_{n-1} \right)$ for which
\begin{equation}
f_{\cX_i} \left( \bx^\cX_i \right) = \bx^\cX_{(i+1) \,{\rm mod}\, n} \;, \qquad
{\rm for~all~} i = 0,\ldots,n-1 \;,
\label{eq:Xcycle}
\end{equation}
is called an $\cX$-cycle.
The $\cX$-cycle is a periodic orbit of $f$ and said to {\em admissible}
if each $\bx^\cX_i$ lies on the ``correct'' side of the switching manifold $\Sigma$, or on $\Sigma$.
To be more precise, for any $\bx \notin \Sigma$ let
\begin{equation}
s(\bx) = \begin{cases}
L \;, & e_1^{\sf T} \bx < 0 \;, \\
R \;, & e_1^{\sf T} \bx > 0 \;.
\end{cases}
\label{eq:s}
\end{equation}
Then the $\cX$-cycle is admissible
if $s \left( \bx^\cX_i \right) = \cX_i$ for all $i$ for which $\bx^\cX_i \notin \Sigma$.
Since $\bx^\cX_0$ is a fixed point of \eqref{eq:fX2},
if no points of an admissible $\cX$-cycle lie on $\Sigma$
then the $\cX$-cycle is asymptotically stable if and only if all eigenvalues of $M_\cX$ have modulus less than $1$.
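These formulas are straightforward to evaluate numerically. The following sketch (with an arbitrary two-dimensional example of our own choosing) builds $M_\cX$ and $P_\cX$ from \eqref{eq:MX}--\eqref{eq:PX}, solves for the fixed point of $f_\cX$, and tests stability via the eigenvalues of $M_\cX$.

```python
# Sketch: evaluate M_X and P_X for a word X, solve for the X-cycle's base
# point, and test stability.  The 2-D matrices below are an arbitrary example.
import numpy as np

def word_maps(word, A_L, A_R):
    """M_X = A_{X_{n-1}} ... A_{X_0};  P_X = I + sum_i A_{X_{n-1}} ... A_{X_i}."""
    A = {"L": A_L, "R": A_R}
    n = A_L.shape[0]
    M, P = np.eye(n), np.zeros((n, n))
    for sym in word:
        P = A[sym] @ P + np.eye(n)   # accumulates the partial products plus I
        M = A[sym] @ M
    return M, P

def x_cycle_point(word, A_L, A_R, b, mu):
    """Fixed point of f_X(x) = M_X x + P_X b mu, with a stability flag."""
    M, P = word_maps(word, A_L, A_R)
    x0 = np.linalg.solve(np.eye(M.shape[0]) - M, P @ (b * mu))
    stable = bool(np.all(np.abs(np.linalg.eigvals(M)) < 1))
    return x0, stable

A_L = np.array([[0.5, 1.0], [-0.25, 0.0]])
A_R = np.array([[-0.5, 1.0], [0.25, 0.0]])
b = np.array([1.0, 0.0])
x0, stable = x_cycle_point("LR", A_L, A_R, b, mu=1.0)
print(stable)  # True: all eigenvalues of M_X lie inside the unit circle
```

For the word $LR$ this gives $M_{LR} = A_R A_L$ and $P_{LR} = I + A_R$, and the returned point $\bx^\cX_0$ satisfies $f_\cX(\bx^\cX_0) = \bx^\cX_0$.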
Given two words $\cX$ and $\cY$, the following result, taken from \cite{SiTu17},
provides sufficient conditions for $f$
to have infinitely many admissible, asymptotically stable $\cX^k \cY$-cycles.
\begin{theorem}
Let $\cX \in \{ L,R \}^n$ and $\cY \in \{ L,R \}^p$
be such that $\cX \cY = \left( \cY \cX \right)^{\overline{0} \overline{\alpha}}$
for some $\alpha \in \{ 1,\ldots,n+p-1 \}$.
\begin{enumerate}
\item
Suppose $M_\cX$ has multiplicity-one eigenvalues $\lambda_1 > 1$ and $\lambda_2 = \frac{1}{\lambda_1}$
and all other eigenvalues of $M_\cX$ have modulus less than $\lambda_2$.
\item
For $j = 1,2$, let $\omega_j^{\sf T}$ and $\zeta_j$ be left and right eigenvectors of $M_\cX$
corresponding to $\lambda_j$ and satisfying $\omega_j^{\sf T} \zeta_j = 1$
(which can always be achieved).
Suppose $e_1^{\sf T} \zeta_1 \ne 0$ and $\lambda_2 < \det(C) < 1$ where
\begin{equation}
C = \begin{bmatrix} \omega_1^{\sf T} \\ \omega_2^{\sf T} \end{bmatrix} M_\cY
\begin{bmatrix} \zeta_1 & \zeta_2 \end{bmatrix},
\label{eq:C}
\end{equation}
is a $2 \times 2$ matrix.
\item
Suppose that the $\cX$-cycle (which must be unique) is an admissible
periodic solution of $f$ with no points on $\Sigma$.
\item
Let $\cS = \cX^\infty \cY \cX^\infty$ be a bi-infinite symbol sequence, with $\cS_0$ corresponding to $\cY_0$.
Suppose there exists an orbit $\{ \by_i \}$ of $f$ that is homoclinic to the $\cX$-cycle and
\begin{enumerate}
\item
$s(\by_i) = \cS_i$ for all $i \in \mathbb{Z}$ for which $\by_i \notin \Sigma$;
\item
$\by_0 = \bx^\cX_0 - \frac{e_1^{\sf T} \bx^\cX_0}{e_1^{\sf T} \zeta_1} \,\zeta_1 \in \Sigma$;
\item
$\by_\alpha \in \Sigma$;
\item
there does not exist $i \ge 0$ for which $\by_i \in \Sigma$ and $\by_{i+n} \in \Sigma$.
\end{enumerate}
\end{enumerate}
Then there exists $k_{\rm min} \ge 0$ such that $f$ has an
admissible, asymptotically stable $\cX^k \cY$-cycle with no points on $\Sigma$
for all $k \ge k_{\rm min}$.
\label{th:SiTu17}
\end{theorem}
\section{The three-dimensional border-collision normal form}
\label{sec:derivingExamples}
\setcounter{equation}{0}
Given a three-dimensional map $f$ of the form \eqref{eq:f}, let
\begin{equation}
\cO_L = \begin{bmatrix}
e_1^{\sf T} A_L^2 \\
e_1^{\sf T} A_L \\
e_1^{\sf T}
\end{bmatrix},
\label{eq:OL}
\end{equation}
and let $\varrho^{\sf T} = e_1^{\sf T} {\rm adj}(I - A_L)$,
where ${\rm adj}(\cdot)$ denotes the {\em adjugate} of a matrix.
If $\det(\cO_L) \ne 0$ (this is the ``observability condition'')
then $f$ can be transformed such that $A_L$, $A_R$, and $b$ have the form
\begin{equation}
\begin{split}
A_L &= \begin{bmatrix} \tau_L & 1 & 0 \\ -\sigma_L & 0 & 1 \\ \delta_L & 0 & 0 \end{bmatrix}, \\
A_R &= \begin{bmatrix} \tau_R & 1 & 0 \\ -\sigma_R & 0 & 1 \\ \delta_R & 0 & 0 \end{bmatrix}, \\
b &= e_1 \;.
\end{split}
\label{eq:bcNormalForm}
\end{equation}
If also $\varrho^{\sf T} b \ne 0$ (a non-degeneracy condition for the vector $b$ in the original map)
then $f$ is conjugate to its transformed version for $\mu \ne 0$ \cite{Si16}.
With \eqref{eq:bcNormalForm} the map \eqref{eq:f}
is known as the three-dimensional border-collision normal form.
The parameters $\tau_{L,R}$, $\sigma_{L,R}$, and $\delta_{L,R}$ are conveniently the
trace, second trace, and determinant of $A_{L,R}$, see Appendix \ref{app:secondTrace}.
In this section we work with the three-dimensional border-collision normal form.
We fix $\delta_R = 0$, so that $\det(A_R) = 0$,
and search for values of $\tau_L, \tau_R, \sigma_L, \sigma_R, \delta_L \in \mathbb{R}$
that satisfy the conditions of Theorem \ref{th:SiTu17} for $\mu = 1$ and some words $\cX$ and $\cY$.
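The role of the coefficients in \eqref{eq:bcNormalForm} can be confirmed symbolically: the characteristic polynomial of each matrix is $\lambda^3 - \tau \lambda^2 + \sigma \lambda - \delta$, so $\tau$, $\sigma$, $\delta$ are the trace, second trace, and determinant. A brief check (our sketch, not part of the paper's toolchain):

```python
# Sketch (ours): verify that for the normal-form matrix the characteristic
# polynomial is  lam^3 - tau*lam^2 + sigma*lam - delta,  so tau, sigma, delta
# are indeed the trace, second trace, and determinant.
import sympy as sp

tau, sigma, delta, lam = sp.symbols('tau sigma delta lam')
A = sp.Matrix([[tau, 1, 0],
               [-sigma, 0, 1],
               [delta, 0, 0]])
p = A.charpoly(lam).as_expr()
assert sp.expand(p - (lam**3 - tau*lam**2 + sigma*lam - delta)) == 0
print("characteristic polynomial:", sp.expand(p))
```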
\subsection{Determining parameter values that give infinitely many attractors}
\label{sub:generalProcedure}
The phenomenon described by Theorem \ref{th:SiTu17} is codimension-three
because $\lambda_2 = \frac{1}{\lambda_1}$,
$\by_\alpha \in \Sigma$,
and the requirement that $\by_0$ belongs to the stable manifold of the $\cX$-cycle,
are independent codimension-one conditions.
It is not particularly helpful to {\em directly} use these conditions
to generate restrictions on the values of $\tau_L$, $\tau_R$, $\sigma_L$, $\sigma_R$, and $\delta_L$,
because, for instance, the eigenvalues $\lambda_1$ and $\lambda_2$
are given by the roots of a quadratic equation (assuming $\delta_R = 0$)
and the resulting square-roots create expressions that seem to be too complicated to deal with.
Instead we derive three alternate conditions
that lead to polynomial restrictions on the parameter values.
This was done for the two-dimensional border-collision normal form in \cite{Si14}.
Here we merely state these conditions; their derivation is given in Appendix \ref{app:construction}.
They are not intended to provide additional insight into the phenomenon described by Theorem \ref{th:SiTu17},
only to be used as a tool for finding suitable parameter values.
Indeed their solutions may not satisfy all conditions of Theorem \ref{th:SiTu17}.
Let the words $\cX$ and $\cY$ be given, where $\cX \cY = \left( \cY \cX \right)^{\overline{0} \overline{\alpha}}$
for some $\alpha \in \{ 1,\ldots,n+p-1 \}$.
Here we assume $\cX$ and $\cY$ both end with the symbol $R$,
as this provides useful simplification but is not too restrictive.
We can first use the conditions of Theorem \ref{th:SiTu17} to calculate the point $\by_0$.
We have $\by_0 \in \Sigma$ (by definition), thus the first component of $\by_0$ is zero.
We have $\by_0 = f_R(\by_{-1})$ (because $\cX$ ends in $R$),
thus the third component of $\by_0$ is zero (because $A_R$ is given by \eqref{eq:bcNormalForm} with $\delta_R = 0$).
Finally, the second component of $\by_0$ can be determined by the condition $\by_\alpha = f^\alpha(\by_0) \in \Sigma$.
Specifically, from \eqref{eq:fX2} we obtain
\begin{equation}
e_2^{\sf T} \by_0 = \frac{-e_1^{\sf T} P_{\tilde{\cX}} b \mu}{e_1^{\sf T} M_{\tilde{\cX}} e_2} \;,
\label{eq:y02}
\end{equation}
where $\tilde{\cX}$ denotes the first $\alpha$ elements of $\cX \cY$.
Once $\by_0$ is calculated, let
\begin{align}
\psi_1 &= P_\cX e_1 \mu - (I - M_\cX) \by_0 \;, \label{eq:psi1} \\
\psi_2 &= M_\cY \psi_1 \;, \label{eq:psi2}
\end{align}
and
\begin{align}
\xi_1 &= M_\cX \psi_1 e_1^{\sf T} \psi_1 - \psi_1 e_1^{\sf T} M_\cX \psi_1 \;, \label{eq:xi1} \\
\xi_2 &= M_\cX \psi_2 e_1^{\sf T} M_\cX \psi_1 - \psi_2 e_1^{\sf T} \psi_1 \;. \label{eq:xi2}
\end{align}
Then our three alternate conditions are
\begin{align}
\sigma_\cX &= 1 \;, \label{eq:construct1} \\
e_2^{\sf T} \xi_1 &= 0 \;, \label{eq:construct2} \\
e_1^{\sf T} \xi_2 &= 0 \;, \label{eq:construct3}
\end{align}
where $\sigma_\cX$ denotes the second trace of $M_\cX$.
Instances of the denominator of \eqref{eq:y02} that arise in
\eqref{eq:construct1}--\eqref{eq:construct3} can be factored out
leaving equations that are polynomial in $\tau_L$, $\tau_R$, $\sigma_L$, $\sigma_R$, and $\delta_L$.
In summary, in order to find values of the parameters in \eqref{eq:bcNormalForm}
for which $f$ has infinitely many admissible, asymptotically stable $\cX^k \cY$-cycles,
we solve \eqref{eq:construct1}--\eqref{eq:construct3} (derived in Appendix \ref{app:construction})
and check that all conditions of Theorem \ref{th:SiTu17} are satisfied.
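For computations with words it is convenient to accumulate $M_\cW$ and $P_\cW$ by folding the affine pieces together. The helper below is our sketch (names and conventions are ours); it assumes the standard convention that applying $f_{\cW_0}, f_{\cW_1}, \ldots$ in order composes to $\bx \mapsto M_\cW \bx + P_\cW b \mu$:

```python
# Assumed convention (ours): applying f_{W[0]}, then f_{W[1]}, ..., composes to
#   x  ->  M_W x + P_W b mu,
# with M_W = A_{W[-1]} ... A_{W[0]} and P_W accumulating the affine parts.
import numpy as np

def word_matrices(word, A_L, A_R):
    """Return (M_W, P_W) for a word over {'L','R'}, applied left to right."""
    n = A_L.shape[0]
    M, P = np.eye(n), np.zeros((n, n))
    for s in word:
        A = A_L if s == 'L' else A_R
        M = A @ M               # prepend the next linear part
        P = A @ P + np.eye(n)   # one more application of the affine part
    return M, P

# quick sanity check: for a word of length 2, M = A1 A0 and P = A1 + I
A0 = np.array([[2.0, 0.0], [0.0, 3.0]])
A1 = np.array([[0.0, 1.0], [1.0, 0.0]])
M, P = word_matrices('LR', A0, A1)   # here 'L' -> A0 and 'R' -> A1
assert np.allclose(M, A1 @ A0) and np.allclose(P, A1 + np.eye(2))
```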
\subsection{Calculations with $\cX = RLR$ and $\cY = LR$}
\label{sub:derivationMainExample}
Here we consider
\begin{equation}
\cX = RLR \;, \qquad \cY = LR \;,
\label{eq:XYF}
\end{equation}
for which $\cX \cY = \left( \cY \cX \right)^{\overline{0} \overline{\alpha}}$ for $\alpha = 1$.
With $\mu = 1$, for this value of $\alpha$ we have $\by_0 = [0,-1,0]^{\sf T}$.
With $\delta_R = 0$, the second trace of $M_\cX = A_R A_L A_R$
is $\sigma_{\cX} = (\sigma_L \sigma_R - \delta_L \tau_R) \sigma_R$.
Thus \eqref{eq:construct1} gives
\begin{equation}
\delta_L = \frac{\sigma_L \sigma_R^2 - 1}{\tau_R \sigma_R} \;.
\label{eq:dLex1}
\end{equation}
Using a symbolic toolbox we found that, for this example,
$e_2^{\sf T} \xi_1$ is an affine function of $\tau_L$,
so \eqref{eq:construct2} can be rearranged to produce
\begin{align}
\tau_L &=
\frac{1}{\sigma_R^2 - \sigma_R \tau_R^2 - \sigma_R - \tau_R^3 - \tau_R^2 - \tau_R}
\Big( \delta_L \sigma_R - \sigma_L - \delta_L - 2 \delta_L \tau_R
+ 2 \sigma_L \sigma_R - \sigma_L \tau_R - \sigma_R \tau_R \nonumber \\
&\quad- 2 \delta_L \tau_R^2 - \delta_L \tau_R^3 - \sigma_L \sigma_R^2 - \sigma_L \tau_R^2 - \sigma_R \tau_R^2 - \sigma_R^2 \tau_R - \sigma_R^2 + \sigma_L \sigma_R \tau_R^2 + \delta_L \sigma_R \tau_R + \sigma_L \sigma_R \tau_R \Big) \;.
\label{eq:tLex1}
\end{align}
The quantity $e_1^{\sf T} \xi_2$ contains too many terms to be given here,
but upon substituting \eqref{eq:dLex1} and \eqref{eq:tLex1} it simplifies to
\begin{equation}
e_1^{\sf T} \xi_2 =
\frac{\tau_R (\tau_R + 1)^2 (1 - \sigma_R)
(\sigma_R^2 + \sigma_R \tau_R - \sigma_R + \tau_R^2 + \tau_R + 1)
(\sigma_R - \tau_R^2 - \tau_R - 1)^4 (\tau_R + \sigma_R + 1)}
{\sigma_R^2 (\sigma_R^2 - \sigma_R \tau_R^2 - \sigma_R - \tau_R^3 - \tau_R^2 - \tau_R)^3} \;.
\label{eq:xi2ex1}
\end{equation}
In view of \eqref{eq:construct3}, we require one factor in the numerator of \eqref{eq:xi2ex1} to be zero.
By considering each factor in turn we find that
all conditions of Theorem \ref{th:SiTu17} can only be satisfied if the last factor is zero, that is
\begin{equation}
\tau_R = -(\sigma_R + 1) \;.
\label{eq:tRex1}
\end{equation}
Then by substituting \eqref{eq:tRex1} into \eqref{eq:dLex1} we obtain
\begin{equation}
\delta_L = \frac{1 - \sigma_L \sigma_R^2}{\sigma_R (\sigma_R + 1)} \;.
\label{eq:dLex1b}
\end{equation}
Lastly by substituting \eqref{eq:tRex1} and \eqref{eq:dLex1b} into \eqref{eq:tLex1} we obtain
\begin{equation}
\tau_L = \frac{1}{\sigma_R^2+1} - \frac{\sigma_L+\sigma_R}{\sigma_R+1} \;.
\label{eq:tLex1b}
\end{equation}
\subsection{A two-parameter family}
\label{sub:verificationMainExample}
To satisfy the conditions of Theorem \ref{th:SiTu17} with $\cX = RLR$ and $\cY = LR$,
equations \eqref{eq:tRex1}--\eqref{eq:tLex1b} must hold.
Here we show that the conditions of Theorem \ref{th:SiTu17} are indeed satisfied
if the values of the two undetermined parameters, $\sigma_L$ and $\sigma_R$,
belong to the domain
\begin{equation}
\cD = \left\{ (\sigma_L,\sigma_R) ~\middle|~
\sigma_L > \frac{\sigma_R - 1}{\sigma_R \left( \sigma_R^2 + 1 \right)} ,\,
\sigma_R > 1 \right\},
\label{eq:D}
\end{equation}
shown in Fig.~\ref{fig:domainExF}.
\begin{figure}[b!]
\begin{center}
\setlength{\unitlength}{1cm}
\begin{picture}(8,6)
\put(0,0){\includegraphics[height=6cm]{domainExF}}
\put(4.1,0){\small $\sigma_L$}
\put(0,3.6){\small $\sigma_R$}
\end{picture}
\caption{
The domain $\cD$ \eqref{eq:D}.
The dashed curve, $\sigma_L = \frac{1}{\sigma_R^2}$, is where $\delta_L = 0$.
The black dot is the point $(\sigma_L,\sigma_R) = (0.2,1.75)$
used in Figs.~\ref{fig:qqExdRzero6} and \ref{fig:bifDiagGrazSliding}.
\label{fig:domainExF}
}
\end{center}
\end{figure}
\begin{theorem}
Choose any $(\sigma_L,\sigma_R) \in \cD$,
let $\tau_R$, $\delta_L$, and $\tau_L$ be given by \eqref{eq:tRex1}--\eqref{eq:tLex1b},
and let $\delta_R = 0$ and $\mu = 1$.
Then there exists $k_{\rm min} \in \mathbb{Z}$ such that
for $\cX = RLR$ and $\cY = LR$ the map \eqref{eq:f} with \eqref{eq:bcNormalForm}
has an admissible, asymptotically stable $\cX^k \cY$-cycle
with no points on $\Sigma$ for all $k \ge k_{\rm min}$.
\label{th:twoParamEx}
\end{theorem}
Theorem \ref{th:twoParamEx} is proved in Appendix \ref{app:proof}
by simply showing that all conditions of Theorem \ref{th:SiTu17} are satisfied.
Theorem \ref{th:twoParamEx} can also be proved by calculating the $\cX^k \cY$-cycles directly, as in \cite{Si14}.
The latter approach requires lengthy calculations, and so is not included here,
but reveals that we can take $k_{\rm min} = 1$ for all $(\sigma_L,\sigma_R) \in \cD$.
\begin{figure}[b!]
\begin{center}
\setlength{\unitlength}{1cm}
\begin{picture}(16,8)
\put(0,0){\includegraphics[height=8cm]{qqExdRzero6}}
\put(9.45,0){\small $e_1^{\sf T} \bx$}
\put(0,5.1){\small $e_2^{\sf T} \bx$}
\put(15.48,5.1){\footnotesize $\bx^{\cX}_0$}
\put(1.06,1.3){\footnotesize $\bx^{\cX}_1$}
\put(11.66,7.6){\footnotesize $\bx^{\cX}_2$}
\put(13.46,4.53){\scriptsize $\by_{\hspace{.8mm}3}$}
\put(13.64,4.508){\tiny -}
\put(5.4,3.53){\scriptsize $\by_{\hspace{.8mm}2}$}
\put(5.58,3.508){\tiny -}
\put(12.62,7.16){\scriptsize $\by_{\hspace{.8mm}1}$}
\put(12.80,7.138){\tiny -}
\put(11.29,3.88){\scriptsize $\by_0$}
\put(10.8,6.3){\scriptsize $\by_1$}
\put(14.42,6.3){\scriptsize $\by_2$}
\put(5.68,2.22){\scriptsize $\by_3$}
\put(11.29,1.7){\small $\Sigma$}
\end{picture}
\caption{
A phase portrait of \eqref{eq:f} with \eqref{eq:bcNormalForm}, \eqref{eq:exF}, and $\mu = 1$.
Here $\cX = RLR$ and $\cY = LR$.
Asymptotically stable $\cX^k \cY$-cycles [saddle-type $\cX^k \cY^{\overline{0}}$-cycles]
are shown with coloured circles [triangles] for $k = 1,\ldots,8$.
The saddle-type $\cX$-cycle is shown with unshaded triangles.
\label{fig:qqExdRzero6}
}
\end{center}
\end{figure}
As a specific example, consider the values $(\sigma_L,\sigma_R) = (0.2,1.75)$.
From \eqref{eq:tRex1}--\eqref{eq:tLex1b}, altogether we have
\begin{equation}
\begin{aligned}
\tau_L &= -\frac{331}{715} \;, &
\tau_R &= -\frac{11}{4} \;, \\
\sigma_L &= \frac{1}{5} \;, &
\sigma_R &= \frac{7}{4} \;, \\
\delta_L &= \frac{31}{385} \;, &
\delta_R &= 0 \;.
\end{aligned}
\label{eq:exF}
\end{equation}
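As a cross-check on \eqref{eq:exF}, the residuals of \eqref{eq:construct1}--\eqref{eq:construct3} can be evaluated numerically. The sketch below is ours; it assumes the convention that iterating $\bx \mapsto A_s \bx + b \mu$ through a word $\cW$ gives $M_\cW \bx + P_\cW b \mu$:

```python
import numpy as np

# parameter values (eq:exF), with mu = 1
tauL, tauR = -331/715, -11/4
sigL, sigR = 1/5, 7/4
delL, delR = 31/385, 0.0
mu = 1.0

def companion(tau, sig, dlt):
    return np.array([[tau, 1.0, 0.0], [-sig, 0.0, 1.0], [dlt, 0.0, 0.0]])

A = {'L': companion(tauL, sigL, delL), 'R': companion(tauR, sigR, delR)}

def word_matrices(word):   # assumed convention (ours): x -> M_W x + P_W b mu
    M, P = np.eye(3), np.zeros((3, 3))
    for s in word:
        M, P = A[s] @ M, A[s] @ P + np.eye(3)
    return M, P

X, Y, alpha = 'RLR', 'LR', 1
e1, e2 = np.eye(3)[0], np.eye(3)[1]
b = e1

M_X, P_X = word_matrices(X)
M_Y, _ = word_matrices(Y)
M_t, P_t = word_matrices((X + Y)[:alpha])            # X-tilde

y0 = np.array([0.0, -(P_t @ b)[0] * mu / (M_t @ e2)[0], 0.0])   # eq:y02
assert np.isclose(y0[1], -1.0)                       # as stated in the text

psi1 = P_X @ e1 * mu - (np.eye(3) - M_X) @ y0        # eq:psi1
psi2 = M_Y @ psi1                                    # eq:psi2
xi1 = M_X @ psi1 * (e1 @ psi1) - psi1 * (e1 @ M_X @ psi1)   # eq:xi1
xi2 = M_X @ psi2 * (e1 @ M_X @ psi1) - psi2 * (e1 @ psi1)   # eq:xi2

sigma_X = 0.5 * (np.trace(M_X)**2 - np.trace(M_X @ M_X))    # second trace
assert np.isclose(sigma_X, 1.0)                      # eq:construct1
assert abs(xi1[1]) < 1e-9                            # eq:construct2
assert abs(xi2[0]) < 1e-9                            # eq:construct3
```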
Fig.~\ref{fig:qqExdRzero6} shows a phase portrait using these values.
This figure shows the $\cX^k \cY$-cycles for $k = 1,\ldots,8$ (with circles).
For these values of $k$, saddle-type $\cX^k \cY^{\overline{0}}$-cycles also exist (shown with triangles).
It seems typical for the stable manifolds of these saddle solutions
to form the boundaries of the basins of attraction of the $\cX^k \cY$-cycles, see \cite{Si14}.
To show the $\cX^k \cY$ and $\cX^k \cY^{\overline{0}}$-cycles clearly,
in Fig.~\ref{fig:qqExdRzero6} for each $k$ the points of these periodic solutions are connected by line segments.
The $\cX$-cycle (with points $\bx^\cX_0$, $\bx^\cX_1$, and $\bx^\cX_2$)
has a one-dimensional unstable manifold and a two-dimensional stable manifold.
As shown in \cite{SiTu17}, the branch of the unstable manifold of the $\cX$-cycle that contains the homoclinic orbit $\{ \by_i \}$
is a subset of the stable manifold of the $\cX$-cycle.
This branch is indicated with solid black lines in Fig.~\ref{fig:qqExdRzero6}.
There also exists an asymptotically stable $\cX^{\overline{2}}=RLL$-cycle,
but this is not visible in Fig.~\ref{fig:qqExdRzero6} as it lies outside the region of phase space shown.
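The cycles in Fig.~\ref{fig:qqExdRzero6} can also be reproduced directly: an $\cS$-cycle is the fixed point of the affine composition along the word $\cS$, and admissibility requires each point to lie on the side of $\Sigma$ matching its symbol. A numerical sketch (ours; it assumes $f_L$ applies where $e_1^{\sf T} \bx < 0$ and $f_R$ where $e_1^{\sf T} \bx > 0$):

```python
import numpy as np

tauL, tauR = -331/715, -11/4
sigL, sigR = 1/5, 7/4
delL, delR = 31/385, 0.0
mu = 1.0

def companion(tau, sig, dlt):
    return np.array([[tau, 1.0, 0.0], [-sig, 0.0, 1.0], [dlt, 0.0, 0.0]])

A = {'L': companion(tauL, sigL, delL), 'R': companion(tauR, sigR, delR)}
b = np.array([1.0, 0.0, 0.0])

def cycle(word):
    """Fixed point of the composition along `word`, plus all cycle points."""
    M, P = np.eye(3), np.zeros((3, 3))
    for s in word:
        M, P = A[s] @ M, A[s] @ P + np.eye(3)
    x0 = np.linalg.solve(np.eye(3) - M, P @ b * mu)
    pts = [x0]
    for s in word[:-1]:
        pts.append(A[s] @ pts[-1] + b * mu)
    stable = max(abs(np.linalg.eigvals(M))) < 1
    admissible = all(p[0] != 0 and (p[0] < 0) == (s == 'L')
                     for p, s in zip(pts, word))
    return pts, stable, admissible

for k in range(1, 5):
    word = 'RLR' * k + 'LR'            # X^k Y with X = RLR, Y = LR
    pts, stable, admissible = cycle(word)
    assert stable and admissible
    print(f'k = {k}: admissible, asymptotically stable {len(word)}-cycle')
```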
\subsection{Additional examples}
\label{sub:additionalExamples}
\begin{figure}[b!]
\begin{center}
\setlength{\unitlength}{1cm}
\begin{picture}(16,8)
\put(0,0){\includegraphics[height=8cm]{qqExdRzero20}}
\put(9.45,0){\small $e_1^{\sf T} \bx$}
\put(0,5.1){\small $e_2^{\sf T} \bx$}
\put(15.36,1.26){\footnotesize $\bx^{\cX}_0$}
\put(5.53,1.03){\footnotesize $\bx^{\cX}_1$}
\put(1.33,5.95){\footnotesize $\bx^{\cX}_2$}
\put(5.2,7.65){\footnotesize $\bx^{\cX}_3$}
\put(14.64,5.62){\footnotesize $\bx^{\cX}_4$}
\put(11.59,1.09){\scriptsize $\by_0$}
\put(7.1,3.06){\scriptsize $\by_1$}
\put(7.16,5.14){\scriptsize $\by_2$}
\put(10.99,5.31){\scriptsize $\by_3$}
\put(15.15,3.11){\scriptsize $\by_4$}
\put(11.59,7.5){\small $\Sigma$}
\end{picture}
\caption{
A phase portrait of \eqref{eq:f} with \eqref{eq:bcNormalForm}, \eqref{eq:ex20}, and $\mu = 1$
using the same conventions as Fig.~\ref{fig:qqExdRzero6}.
The $\cX^k \cY$-cycles and $\cX^k \cY^{\overline{0}}$-cycles
are shown for $k = 1,\ldots,8$, where $\cX$ and $\cY$ are given by \eqref{eq:XYex20}.
\label{fig:qqExdRzero20}
}
\end{center}
\end{figure}
Here we provide two numerical examples
using combinations of words $\cX$ and $\cY$ that have not been treated
in previous studies of this phenomenon.
With
\begin{equation}
\cX = RLLLR \;, \qquad
\cY = LLLR \;,
\label{eq:XYex20}
\end{equation}
for which $\alpha = 3$, we fixed the values of $\sigma_L$ and $\sigma_R$
and solved \eqref{eq:construct1}--\eqref{eq:construct3} numerically to obtain
\begin{equation}
\begin{aligned}
\tau_L &= 1.1634777991 \;, &
\tau_R &= -0.6037872000 \;, \\
\sigma_L &= 0.95 \;, &
\sigma_R &= 1.15 \;, \\
\delta_L &= 0.0608806824 \;, &
\delta_R &= 0 \;,
\end{aligned}
\label{eq:ex20}
\end{equation}
accurate to ten decimal places.
Fig.~\ref{fig:qqExdRzero20} shows a phase portrait using these values.
Here admissible, asymptotically stable $\cX^k \cY$-cycles exist for at least $k = 1,\ldots,8$.
We expect that with the exact solution to \eqref{eq:construct1}--\eqref{eq:construct3},
all conditions of Theorem \ref{th:SiTu17} are satisfied
and thus infinitely many admissible, asymptotically stable $\cX^k \cY$-cycles exist.
The values of $\sigma_L$ and $\sigma_R$ in \eqref{eq:ex20} were obtained via numerical exploration.
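The reported solution \eqref{eq:ex20} can be checked by evaluating the residuals of \eqref{eq:construct1}--\eqref{eq:construct3}. The sketch below is ours; it again assumes the convention that iterating $\bx \mapsto A_s \bx + b \mu$ through a word $\cW$ gives $M_\cW \bx + P_\cW b \mu$, and the residuals should vanish to roughly the accuracy of the reported digits:

```python
import numpy as np

# reported solution (eq:ex20), with mu = 1
tauL, tauR = 1.1634777991, -0.6037872000
sigL, sigR = 0.95, 1.15
delL, delR = 0.0608806824, 0.0
mu = 1.0

def companion(tau, sig, dlt):
    return np.array([[tau, 1.0, 0.0], [-sig, 0.0, 1.0], [dlt, 0.0, 0.0]])

A = {'L': companion(tauL, sigL, delL), 'R': companion(tauR, sigR, delR)}

def word_matrices(word):
    M, P = np.eye(3), np.zeros((3, 3))
    for s in word:
        M, P = A[s] @ M, A[s] @ P + np.eye(3)
    return M, P

X, Y, alpha = 'RLLLR', 'LLLR', 3
e1, e2 = np.eye(3)[0], np.eye(3)[1]

M_X, P_X = word_matrices(X)
M_Y, _ = word_matrices(Y)
M_t, P_t = word_matrices((X + Y)[:alpha])          # X-tilde

y0 = np.array([0.0, -(P_t @ e1)[0] * mu / (M_t @ e2)[0], 0.0])   # eq:y02
psi1 = P_X @ e1 * mu - (np.eye(3) - M_X) @ y0
psi2 = M_Y @ psi1
xi1 = M_X @ psi1 * (e1 @ psi1) - psi1 * (e1 @ M_X @ psi1)
xi2 = M_X @ psi2 * (e1 @ M_X @ psi1) - psi2 * (e1 @ psi1)

sigma_X = 0.5 * (np.trace(M_X)**2 - np.trace(M_X @ M_X))
print(sigma_X - 1.0, xi1[1], xi2[0])   # all three residuals should be tiny
```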
\begin{figure}[t!]
\begin{center}
\setlength{\unitlength}{1cm}
\begin{picture}(16,8)
\put(0,0){\includegraphics[height=8cm]{qqExdRzero22}}
\put(9.45,0){\small $e_1^{\sf T} \bx$}
\put(0,5.1){\small $e_2^{\sf T} \bx$}
\put(13.42,4.67){\footnotesize $\bx^{\cX}_0$}
\put(5.04,4.32){\footnotesize $\bx^{\cX}_1$}
\put(15.54,6.55){\footnotesize $\bx^{\cX}_2$}
\put(1.24,1.35){\footnotesize $\bx^{\cX}_3$}
\put(14.35,7.65){\footnotesize $\bx^{\cX}_4$}
\put(6,1.4){\footnotesize $\bx^{\cX}_5$}
\put(11.44,7.11){\footnotesize $\bx^{\cX}_6$}
\put(10.95,3.58){\scriptsize $\by_0$}
\put(10.51,6.32){\scriptsize $\by_1$}
\put(10.95,1.7){\small $\Sigma$}
\end{picture}
\caption{
A phase portrait of \eqref{eq:f} with \eqref{eq:bcNormalForm}, \eqref{eq:ex22}, and $\mu = 1$
using the same conventions as Fig.~\ref{fig:qqExdRzero6}.
The $\cX^k \cY$-cycles and $\cX^k \cY^{\overline{0}}$-cycles
are shown for $k = 0,\ldots,7$, where $\cX$ and $\cY$ are given by \eqref{eq:XYex22}.
\label{fig:qqExdRzero22}
}
\end{center}
\end{figure}
With
\begin{equation}
\cX = RLRLRLR \;, \qquad
\cY = LR \;,
\label{eq:XYex22}
\end{equation}
for which $\alpha = 1$,
we solved \eqref{eq:construct1}--\eqref{eq:construct3} numerically to obtain
\begin{equation}
\begin{aligned}
\tau_L &= -0.7831707737 \;, &
\tau_R &= -2.8347004550 \;, \\
\sigma_L &= 0.2 \;, &
\sigma_R &= 1.2 \;, \\
\delta_L &= 0.2473051527 \;, &
\delta_R &= 0 \;,
\end{aligned}
\label{eq:ex22}
\end{equation}
accurate to ten decimal places.
Admissible, asymptotically stable $\cX^k \cY$-cycles exist for at least $k = 0,\ldots,7$,
as shown in Fig.~\ref{fig:qqExdRzero22}.
Note that with $k = 0$, we have $\cX^k \cY^{\overline{0}} = RR$.
The $RR$-cycle consists only of the fixed point of $f_R$.
\section{An abstract ODE system}
\label{sec:odeExample}
\setcounter{equation}{0}
Here we study the three-dimensional non-autonomous system
\begin{equation}
\begin{bmatrix} \dot{X} \\ \dot{Y} \\ \dot{Z} \end{bmatrix} =
\begin{cases}
\begin{bmatrix} Y \\ Z \\ -\alpha_1 (X+1) - \alpha_2 Y - \alpha_3 Z + \gamma \cos(t) \end{bmatrix}, & X < 0 \;, \\
\begin{bmatrix} -1 \\ \beta_1 \\ \beta_2 \end{bmatrix}, & X > 0 \;,
\end{cases}
\label{eq:odeEx}
\end{equation}
where $\alpha_1, \alpha_2, \alpha_3, \beta_1, \beta_2, \gamma \in \mathbb{R}$ are constants.
The system \eqref{eq:odeEx} is piecewise-smooth with the discontinuity surface $X = 0$
and we write $\bX = (X,Y,Z)$.
We treat $\gamma$ as the primary bifurcation parameter.
This parameter can be thought of as a forcing amplitude;
indeed \eqref{eq:odeEx} is motivated by a harmonically forced linear oscillator.
We have included five additional parameters
so that we can fit these to five given non-zero eigenvalues of $A_L$ and $A_R$, as achieved in \S\ref{sec:parameters}.
While $X < 0$, the explicit solution to \eqref{eq:odeEx} is available.
This facilitates accurate numerical simulations, presented in \S\ref{sec:bifDiag}.
In this section we use the explicit solution to identify a grazing-sliding bifurcation
and calculate the return map to leading order.
With $\gamma = 0$, the point $\bX = (-1,0,0)$ is an equilibrium of \eqref{eq:odeEx}.
Assuming $(\alpha_1 - \alpha_3)^2 + (\alpha_2 - 1)^2 \ne 0$,
for sufficiently small $\gamma > 0$ the system \eqref{eq:odeEx} has an oscillatory solution
in the left half-space ($X < 0$) centred at $\bX = (-1,0,0)$.
This solution is given by
\begin{equation}
\bX_p(t) = \frac{\gamma}{(\alpha_1 - \alpha_3)^2 + (\alpha_2 - 1)^2}
\left( \begin{bmatrix} \alpha_1 - \alpha_3 \\ \alpha_2 - 1 \\ -(\alpha_1 - \alpha_3) \end{bmatrix} \cos(t) +
\begin{bmatrix} \alpha_2 - 1 \\ -(\alpha_1 - \alpha_3) \\ -(\alpha_2 - 1) \end{bmatrix} \sin(t) \right) - e_1 \;,
\label{eq:Xp}
\end{equation}
and grazes $X = 0$ at
\begin{equation}
\gamma_{\rm graz} = \sqrt{(\alpha_1 - \alpha_3)^2 + (\alpha_2 - 1)^2} \;.
\label{eq:gammagraz}
\end{equation}
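That \eqref{eq:Xp} solves the left half-system, and hence that the oscillation in $X$ has amplitude $\frac{\gamma}{\gamma_{\rm graz}}$ about $X = -1$, can be confirmed symbolically. A brief check (ours):

```python
# Sketch (ours): verify that X_p(t) of eq:Xp satisfies the left half-system
#   X' = Y,  Y' = Z,  Z' = -alpha1 (X+1) - alpha2 Y - alpha3 Z + gamma cos t.
import sympy as sp

t, a1, a2, a3, g = sp.symbols('t alpha1 alpha2 alpha3 gamma')
u, v = a1 - a3, a2 - 1
D = u**2 + v**2                      # = gamma_graz**2 when gamma = gamma_graz

Xp = g/D * (u*sp.cos(t) + v*sp.sin(t)) - 1
Yp = g/D * (v*sp.cos(t) - u*sp.sin(t))
Zp = g/D * (-u*sp.cos(t) - v*sp.sin(t))

assert sp.simplify(sp.diff(Xp, t) - Yp) == 0
assert sp.simplify(sp.diff(Yp, t) - Zp) == 0
rhs = -a1*(Xp + 1) - a2*Yp - a3*Zp + g*sp.cos(t)
assert sp.simplify(sp.diff(Zp, t) - rhs) == 0
# X_p + 1 has amplitude g/sqrt(D), so X_p first touches X = 0 at g = sqrt(D)
```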
In order to employ standard techniques regarding grazing events of piecewise-smooth systems,
we reinterpret \eqref{eq:odeEx} as a four-dimensional autonomous system by treating $t$ as a variable
(i.e.~with $\dot{t} = 1$).
We also take $t$ modulo $2 \pi$,
so that in the cylindrical phase space $\mathbb{R}^3 \times \mathbb{S}$
the oscillatory solution $\bX_p(t)$ is a periodic orbit.
Grazing occurs at the point
\begin{align}
\bX_{\rm graz} &= (0,0,-1), \label{eq:Xgraz} \\
t_{\rm graz} &= \tan^{-1} \left( \frac{\alpha_2 - 1}{\alpha_1 - \alpha_3} \right), \label{eq:tgraz}
\end{align}
where $t_{\rm graz} \in (0,\pi)$ if $\alpha_2 - 1 > 0$ and
$t_{\rm graz} \in (\pi,2 \pi)$ if $\alpha_2 - 1 < 0$.
Note that $\gamma = \gamma_{\rm graz}$ is a grazing-sliding bifurcation
because at the point of grazing the right half-system is directed towards the discontinuity surface
(specifically $\dot{X} = -1$).
For $Y > 0$, orbits slide on the discontinuity surface $X = 0$
because both components of \eqref{eq:odeEx} are directed towards $X = 0$.
As detailed in \cite{DiBu08,Fi88},
this sliding motion is governed by the convex combination of the components of \eqref{eq:odeEx}
that is tangent to $X = 0$:
\begin{equation}
\begin{bmatrix} \dot{Y} \\ \dot{Z} \end{bmatrix} =
\frac{1}{Y + 1}
\begin{bmatrix}
\beta_1 Y + Z \\
-\alpha_1 + (\beta_2 - \alpha_2) Y - \alpha_3 Z + \gamma \cos(t)
\end{bmatrix},
\label{eq:slidingODE}
\end{equation}
and $\dot{t} = 1$.
For $Y < 0$, orbits cross $X = 0$ and enter the left half-space.
\begin{figure}[t!]
\begin{center}
\setlength{\unitlength}{1cm}
\begin{picture}(8,6)
\put(0,0){\includegraphics[height=6cm]{schemGrazSlidReturnMap}}
\put(0,4.46){\scriptsize $X$}
\put(.92,2.81){\scriptsize $Y$}
\put(1.91,5.75){\scriptsize $Z$}
\put(2.31,3.1){\scriptsize $\Gamma$}
\put(4.11,1.07){\scriptsize $\Pi$}
\put(3.15,5.47){\scriptsize $X \hspace{-.8mm}=\hspace{-.3mm} 0$}
\put(1.04,1.93){\scriptsize $\bX_1$}
\put(1.07,.5){\scriptsize $\bX_2$}
\put(2.33,1.81){\scriptsize $\bX_3$}
\put(.73,4){\scriptsize $\bX_4$}
\put(1.58,2.55){\scriptsize $\bX_5$}
\put(2.35,3.9){\scriptsize $\bX_6$}
\end{picture}
\caption{
Part of a typical orbit of \eqref{eq:odeEx} near the grazing-sliding bifurcation $\gamma = \gamma_{\rm graz}$.
Take care to note the reverse orientation of the $X$-axis.
\label{fig:schemGrazSlidReturnMap}
}
\end{center}
\end{figure}
Let $\Pi$ denote the Poincar\'{e} section $Y = 0$.
Orbits cease sliding and enter $X < 0$ at the intersection of $\Pi$ with $X = 0$, call it $\Gamma$.
Here we use $\Pi$ to build a return map valid near the grazing-sliding bifurcation.
To do this we use the standard approach of combining a global map with a discontinuity map, see \cite{DiBu08,DiKo02,GlKo12}.
Fig.~\ref{fig:schemGrazSlidReturnMap} shows part of a typical orbit near the grazing-sliding bifurcation.
The orbit intersects $X = 0$ at $\bX_2$,
then slides along to $\bX_3 \in \Gamma$.
It then sojourns away from $X = 0$ (following close to the path of $\bX_p(t)$ at $\gamma = \gamma_{\rm graz}$),
intersects $X = 0$ at $\bX_5$, then slides along to $\bX_6 \in \Gamma$.
As shown in Fig.~\ref{fig:schemGrazSlidReturnMap},
we extend the orbit beyond $\bX_2$ and $\bX_5$ to
the virtual points $\bX_1$ and $\bX_4$ where the orbit would intersect $\Pi$
if it were governed by the left half-system of \eqref{eq:odeEx} in $X > 0$.
The {\em global map}, $\cP_g : \Pi \to \Pi$,
is defined as the next intersection of the orbit with $\Pi$ obtained by just using the left half-system.
That is, $\cP_g(\bX_3) = \bX_4$.
The {\em discontinuity map} $\cP_d : \Pi \to \Pi$
is defined as the necessary correction to generate the true point of intersection with $\Pi$.
That is, $\cP_d(\bX_4) = \bX_6$.
For points $\bX \in \Pi$ with $X < 0$, we take $\cP_d$ to be the identity map.
The composition $\cP_d \circ \cP_g$ provides the true return map on $\Pi$.
This form is convenient because $\cP_d$ is a local map and can be computed via asymptotic expansions,
while $\cP_g$ involves transversal intersections with a single Poincar\'{e} section and only one functional form of \eqref{eq:odeEx}.
Below we work with the alternate return map
\begin{equation}
\cP = \cP_g \circ \cP_d \;,
\label{eq:P}
\end{equation}
as this ordering allows for a simpler description of the switching manifold.
The map $\cP$ captures the dynamics local to the grazing-sliding bifurcation
despite the fact that iterates of $\cP$ with $X > 0$ are virtual.
Next we compute $\cP$ to leading order.
The calculations are relatively routine and so details are omitted for brevity.
Let
\begin{equation}
A = \begin{bmatrix}
0 & 1 & 0 \\
0 & 0 & 1 \\
-\alpha_1 & -\alpha_2 & -\alpha_3
\end{bmatrix},
\label{eq:A}
\end{equation}
denote the Jacobian of the left half-system of \eqref{eq:odeEx}.
Let $\varphi_t(\bX_0,t_0)$ denote the solution to the left half-system
with the arbitrary initial condition $\bX = \bX_0$ at $t = t_0$.
We have
\begin{equation}
\varphi_t(\bX_0,t_0) = \bX_p(t) + \bX_h(t;\bX_0,t_0),
\label{eq:generalSoln}
\end{equation}
where $\bX_p(t)$ is the particular solution \eqref{eq:Xp} and
\begin{equation}
\bX_h(t;\bX_0,t_0) = \re^{(t-t_0) A} \left( \bX_0 - \bX_p(t_0) \right),
\label{eq:Xh}
\end{equation}
is the homogeneous solution.
Via straightforward but lengthy calculations using \eqref{eq:generalSoln}, we obtain
\begin{align}
\cP_g(X,t,Z) &=
\begin{bmatrix} 0 \\ t_{\rm graz} + 2 \pi \\ Z_{\rm graz} \end{bmatrix} +
\re^{2 \pi A} \begin{bmatrix} X \\ t - t_{\rm graz} \\ Z - Z_{\rm graz} \end{bmatrix} +
\frac{1}{\gamma_{\rm graz}} \left( I - \re^{2 \pi A} \right)
\begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix}
\left( \gamma - \gamma_{\rm graz} \right) \nonumber \\
&\quad+
\cO \left( \left( X, t-t_{\rm graz}, Z-Z_{\rm graz}, \gamma-\gamma_{\rm graz} \right)^2 \right),
\label{eq:Pg}
\end{align}
where $Z_{\rm graz} = -1$, see \eqref{eq:Xgraz}.
The matrix part of $\cP_g$ has the particularly simple form $\re^{2 \pi A}$
due in part to our choice of the ordering $(X,t,Z)$.
By using \eqref{eq:generalSoln} and the equations governing sliding motion \eqref{eq:slidingODE}, we also obtain
\begin{align}
\cP_d(X,t,Z) &=
\begin{bmatrix} 0 \\ t_{\rm graz} \\ Z_{\rm graz} \end{bmatrix} +
\begin{bmatrix}
0 & 0 & 0 \\
\beta_1 + 1 & 1 & 0 \\
\beta_2 & 0 & 1
\end{bmatrix}
\begin{bmatrix} X \\ t - t_{\rm graz} \\ Z - Z_{\rm graz} \end{bmatrix} \nonumber \\
&\quad+
X \cO \left( \sqrt{X}, t-t_{\rm graz}, Z-Z_{\rm graz}, \gamma-\gamma_{\rm graz} \right),
\label{eq:Pd}
\end{align}
for $X > 0$.
Refer to \cite{DiBu08,DiKo02,GlKo12} for detailed calculations of such a discontinuity map.
By then writing $\bx = (X,(t-t_{\rm graz}) {\rm ~mod~} 2 \pi,Z-Z_{\rm graz})$ and $\mu = \gamma-\gamma_{\rm graz}$,
to leading order $\cP$ is given by \eqref{eq:f} with
\begin{align}
A_L &= \re^{2 \pi A} \;, \label{eq:AL} \\
A_R &= \re^{2 \pi A}
\begin{bmatrix}
0 & 0 & 0 \\
\beta_1 + 1 & 1 & 0 \\
\beta_2 & 0 & 1
\end{bmatrix}, \label{eq:AR} \\
b &= \frac{1}{\gamma_{\rm graz}} \left( I - \re^{2 \pi A} \right)
\begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix}. \label{eq:b}
\end{align}
\section{Fitting the parameters of the ODE system}
\label{sec:parameters}
\setcounter{equation}{0}
Here we determine values of
$\alpha_1$, $\alpha_2$, $\alpha_3$, $\beta_1$, and $\beta_2$
for which $A_L$ and $A_R$, as given by \eqref{eq:AL} and \eqref{eq:AR},
have desired sets of eigenvalues.
Let $\lambda^J_j$ denote the eigenvalues of $A_J$, for $j = 1,2,3$ and $J = L,R$,
with $\lambda^R_3 = 0$.
Since $A$, given by \eqref{eq:A}, is a real-valued matrix,
the eigenvalues of $A_L = \re^{2 \pi A}$ are either real and positive or appear in complex conjugate pairs.
Here we suppose that $\lambda^L_1 > 0$ and
$\lambda^L_{2,3} = p \pm \ri q$, for some $p \in \mathbb{R}$ and $q > 0$.
Then the eigenvalues of $A$ are
\begin{equation}
\begin{split}
\nu_1 &= \frac{1}{2 \pi} \ln \left( \lambda^L_1 \right), \\
\nu_{2,3} &= \frac{1}{4 \pi} \ln \left( p^2 + q^2 \right) \pm
\frac{\ri}{2 \pi} \tan^{-1} \left( \frac{q}{p} \right).
\end{split}
\label{eq:nu123}
\end{equation}
For $p < 0$, the branch of $\tan^{-1}$ in \eqref{eq:nu123} must be chosen so that
the argument of $\nu_{2,3}$ matches that of $\lambda^L_{2,3}$ (as with \eqref{eq:tgraz}).
The trace, second trace, and determinant of $A$
are given by $-\alpha_3$, $\alpha_2$, and $-\alpha_1$ respectively, thus
the required values of $\alpha_1$, $\alpha_2$, and $\alpha_3$ are given by
\begin{equation}
\begin{split}
\alpha_1 &= -\nu_1 \nu_2 \nu_3 \;, \\
\alpha_2 &= \nu_1 \nu_2 + \nu_1 \nu_3 + \nu_2 \nu_3 \;, \\
\alpha_3 &= -(\nu_1 + \nu_2 + \nu_3),
\end{split}
\label{eq:gammai}
\end{equation}
see Appendix \ref{app:secondTrace}.
It remains for us to determine $\beta_1$ and $\beta_2$ in terms of $\lambda^R_1$ and $\lambda^R_2$.
Let $a_{ij}$ denote the $(i,j)$-element of $\re^{2 \pi A}$, for $i,j = 1,2,3$.
By using \eqref{eq:AR} to evaluate $\det \left( \lambda I - A_R \right)$, we obtain
\begin{align*}
0 &= \left( \lambda^R_j \right)^2 - \left( a_{12} (\beta_1 + 1) + a_{13} \beta_2 + a_{22} + a_{33} \right) \lambda^R_j +
\left( a_{12} a_{33} - a_{13} a_{32} \right) (\beta_1 + 1) \nonumber \\
&\quad+
\left( a_{13} a_{22} - a_{12} a_{23} \right) \beta_2 +
a_{22} a_{33} - a_{23} a_{32} \;,
\end{align*}
for $j = 1,2$.
This provides two linear equations for $\beta_1$ and $\beta_2$, the solution to which is
\begin{equation}
\begin{split}
\beta_1 &= -1 + \frac{a_{12} a_{23} \left( \lambda^R_1 + \lambda^R_2 - a_{22} - a_{33} \right) +
a_{13} \left( \lambda^R_1 \lambda^R_2 - a_{22} \left( \lambda^R_1 + \lambda^R_2 \right) +
a_{23} a_{32} + a_{22}^2 \right)}
{a_{12}^2 a_{23} - a_{13}^2 a_{32} +
a_{12} a_{13} \left( a_{33} - a_{22} \right)} \;, \\
\beta_2 &= -\frac{a_{12} \left( \lambda^R_1 \lambda^R_2 - a_{33} \left( \lambda^R_1 + \lambda^R_2 \right) +
a_{23} a_{32} + a_{33}^2 \right) +
a_{13} a_{32} \left(\lambda^R_1 + \lambda^R_2 - a_{22} - a_{33} \right)}
{a_{12}^2 a_{23} - a_{13}^2 a_{32} +
a_{12} a_{13} \left( a_{33} - a_{22} \right)} \;,
\end{split}
\label{eq:betai}
\end{equation}
assuming $a_{12}^2 a_{23} - a_{13}^2 a_{32} + a_{12} a_{13} \left( a_{33} - a_{22} \right) \ne 0$,
as is generically the case.
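The fitting procedure amounts to a few lines of code. The sketch below (ours) uses the complex logarithm, which automatically selects the branch of $\tan^{-1}$ in \eqref{eq:nu123} consistent with the argument of $\lambda^L_{2,3}$, and checks the result by rebuilding $A_L = \re^{2 \pi A}$ and $A_R$:

```python
import numpy as np
from scipy.linalg import expm

# target eigenvalues (those used in Section 6)
lamL1 = 0.2262333771
p, q = -0.3445852200, 0.4870055259      # lamL_{2,3} = p +/- i q
lamR1, lamR2 = -1.0, -1.75

# eq:nu123 via the complex log (handles the branch of arctan automatically)
nu1 = np.log(lamL1) / (2*np.pi)
nu2 = np.log(complex(p, q)) / (2*np.pi)
nu3 = np.conj(nu2)

# eq:gammai: alpha's from elementary symmetric functions of the nu's
alpha1 = float(np.real(-nu1*nu2*nu3))
alpha2 = float(np.real(nu1*nu2 + nu1*nu3 + nu2*nu3))
alpha3 = float(np.real(-(nu1 + nu2 + nu3)))

A = np.array([[0.0, 1, 0], [0, 0, 1], [-alpha1, -alpha2, -alpha3]])
a = expm(2*np.pi*A)                      # A_L; a[i,j] is element (i+1,j+1)

# eq:betai
den = a[0,1]**2*a[1,2] - a[0,2]**2*a[2,1] + a[0,1]*a[0,2]*(a[2,2] - a[1,1])
s, pr = lamR1 + lamR2, lamR1*lamR2
beta1 = -1 + (a[0,1]*a[1,2]*(s - a[1,1] - a[2,2])
              + a[0,2]*(pr - a[1,1]*s + a[1,2]*a[2,1] + a[1,1]**2)) / den
beta2 = -(a[0,1]*(pr - a[2,2]*s + a[1,2]*a[2,1] + a[2,2]**2)
          + a[0,2]*a[2,1]*(s - a[1,1] - a[2,2])) / den

# consistency checks: A_L and A_R have the requested spectra
A_R = a @ np.array([[0.0, 0, 0], [beta1 + 1, 1, 0], [beta2, 0, 1]])
eigsL = np.sort_complex(np.linalg.eigvals(a))
target = np.sort_complex(np.array([lamL1, complex(p, q), complex(p, -q)]))
assert np.allclose(eigsL, target, atol=1e-7)
eigsR = np.linalg.eigvals(A_R)
assert np.allclose(np.sort(eigsR.real), np.sort([0.0, lamR1, lamR2]), atol=1e-7)
print(alpha1, alpha2, alpha3, beta1, beta2)
```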
\section{A bifurcation diagram}
\label{sec:bifDiag}
\setcounter{equation}{0}
Here we apply the formulas of \S\ref{sec:parameters} to the example of \S\ref{sub:verificationMainExample}.
This example is for the border-collision normal form with $\mu = 1$.
The eigenvalues of $A_L$, given by \eqref{eq:bcNormalForm},
are of the form $\lambda^L_1 > 0$ and $\lambda^L_{2,3} = p \pm \ri q$
for all points $(\sigma_L,\sigma_R) \in \cD$ that lie to the left of the dashed curve shown in Fig.~\ref{fig:domainExF}
and with $\sigma_R < 2.97$ approximately ($\sigma_R \approx 2.9656$ on the dashed curve).
With the specific values \eqref{eq:exF}, corresponding to the black dot in Fig.~\ref{fig:domainExF},
the eigenvalues of $A_L$ and $A_R$ are
\begin{align*}
\lambda^L_1 &\approx 0.2262333771 \;, \\
\lambda^L_{2,3} &\approx -0.3445852200 \pm 0.4870055259\ri \;, \\
\lambda^R_1 &= -1 \;, \\
\lambda^R_2 &= -1.75 \;, \\
\lambda^R_3 &= 0 \;,
\end{align*}
where each $\lambda^L_j$ is given to ten decimal places.
By substituting these values into \eqref{eq:nu123}--\eqref{eq:betai}, we obtain
\begin{equation}
\begin{split}
\alpha_1 &\approx 0.0302445699 \;, \\
\alpha_2 &\approx 0.1667559781 \;, \\
\alpha_3 &\approx 0.4009520660 \;, \\
\beta_1 &\approx -0.3783802961 \;, \\
\beta_2 &\approx -0.5981255840 \;,
\end{split}
\label{eq:param2b}
\end{equation}
to ten decimal places.
Before we discuss the dynamics of the ODE system \eqref{eq:odeEx} with the values \eqref{eq:param2b},
we first note that for the grazing-sliding bifurcation at $\gamma = \gamma_{\rm graz} \approx 0.9120$,
the return map $\cP$ is given by \eqref{eq:f} with \eqref{eq:AL}--\eqref{eq:b}, to leading order.
For this map we have $\det(\cO_L) \approx -5.4366$ and $\varrho^{\sf T} b \approx 1.7351$.
As discussed at the beginning of \S\ref{sec:derivingExamples},
since these quantities are nonzero the return map is conjugate to the
border-collision normal form for $\mu \ne 0$.
With the given parameter values,
the border-collision normal form has infinitely many admissible, asymptotically stable $\cX^k \cY$-cycles
for $\mu > 0$ (Theorem \ref{th:twoParamEx}).
By conjugacy, the leading order approximation to $\cP$
has infinitely many admissible, asymptotically stable $\cX^k \cY$-cycles for $\gamma > \gamma_{\rm graz}$.
Furthermore, for each $k \ge k_{\rm min}$, the $\cX^k \cY$-cycle is a structurally stable invariant set.
Hence there exists $\gamma_k > \gamma_{\rm graz}$ such that $\cP$
has an admissible, asymptotically stable $\cX^k \cY$-cycle for all $\gamma \in (\gamma_{\rm graz},\gamma_k)$.
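The quantities quoted above can be reproduced from \eqref{eq:gammagraz} and \eqref{eq:AL}--\eqref{eq:b}. A numerical sketch (ours):

```python
import numpy as np
from scipy.linalg import expm

# fitted parameter values (eq:param2b)
alpha1, alpha2, alpha3 = 0.0302445699, 0.1667559781, 0.4009520660

gamma_graz = np.sqrt((alpha1 - alpha3)**2 + (alpha2 - 1)**2)   # eq:gammagraz

A = np.array([[0.0, 1, 0], [0, 0, 1], [-alpha1, -alpha2, -alpha3]])
A_L = expm(2*np.pi*A)                                          # eq:AL
b = (np.eye(3) - A_L) @ np.array([1.0, 0.0, -1.0]) / gamma_graz  # eq:b

e1 = np.eye(3)[0]
O_L = np.vstack([e1 @ A_L @ A_L, e1 @ A_L, e1])                # eq:OL
IA = np.eye(3) - A_L
rho = np.linalg.det(IA) * (e1 @ np.linalg.inv(IA))   # e1^T adj(I - A_L)

# expect roughly 0.9120, -5.4366, 1.7351 as quoted in the text
print(round(gamma_graz, 4), round(np.linalg.det(O_L), 4), round(rho @ b, 4))
```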
\begin{figure}[b!]
\begin{center}
\setlength{\unitlength}{1cm}
\begin{picture}(16.25,6.5)
\put(0,0){\includegraphics[height=6.5cm]{bifDiagGrazSliding}}
\put(8,0){\small $\gamma - \gamma_{\rm graz}$}
\put(-.3,3.48){\small $|\cZ| \hspace{-.2mm}+\hspace{-.2mm} r X^\cZ_i$}
\put(2.3,1.3){\scriptsize $L$}
\put(14.5,2.04){\scriptsize $R$}
\put(4.7,1.7){\scriptsize $\cX^{\overline{2}}$}
\put(3.7,2.16){\scriptsize $\cX$}
\put(8.7,2.16){\scriptsize $\cX \cY$}
\put(9,2.77){\scriptsize $\cX \cY^{\overline{0}}$}
\put(5.7,3.31){\scriptsize $\cX^2 \cY$}
\put(6,4.14){\scriptsize $\cX^2 \cY^{\overline{0}}$}
\put(3.16,4.81){\scriptsize $\cX^3 \cY$}
\put(3.16,5.26){\scriptsize $\cX^3 \cY^{\overline{0}}$}
\put(7.8,5.17){\scriptsize $\cX^4 \cY$}
\put(7.1,5.7){\scriptsize $\cX^4 \cY^{\overline{0}}$}
\end{picture}
\caption{
A bifurcation diagram of the ODE system \eqref{eq:odeEx} with \eqref{eq:param2b}.
Infinitely many asymptotically stable periodic orbits are created at $\gamma = \gamma_{\rm graz}$.
These correspond to the $\cX^k \cY$-cycles of Fig.~\ref{fig:qqExdRzero6}
and are shown here for $k = 1,\ldots,4$.
Curves corresponding to stable [unstable] periodic orbits are coloured blue [red].
\label{fig:bifDiagGrazSliding}
}
\end{center}
\end{figure}
We conclude that for the ODE system \eqref{eq:odeEx} with \eqref{eq:param2b},
infinitely many asymptotically stable periodic orbits
are created in the grazing-sliding bifurcation at $\gamma_{\rm graz}$.
For this example, $\cX$ and $\cY$ are words of length three and two, respectively.
Thus each $\cX^k \cY$-cycle of $\cP$ corresponds to a periodic orbit of \eqref{eq:odeEx}
that consists of $3 k + 2$ loops near the base periodic orbit $\bX_p(t)$, \eqref{eq:Xp}.
Since $\cX$ has two $R$'s and $\cY$ has one $R$,
exactly $2 k + 1$ of these loops involve a segment of sliding motion.
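The loop and sliding-segment counts above follow directly from the symbolic words. A minimal sketch, assuming the words are encoded as strings over $\{L,R\}$ (our own illustrative encoding; the choice $\cX = RLR$ is consistent with $|\cX| = 3$, two $R$'s, and the later mention of $\cX^{\overline{2}} = RLL$):

```python
# Count loops and sliding segments of an X^k Y periodic orbit from its word.
# One loop per symbol; each R-loop involves a segment of sliding motion.

def loop_counts(X, Y, k):
    word = X * k + Y                # symbol sequence of the X^k Y-cycle
    loops = len(word)               # number of loops near the base orbit
    sliding = word.count("R")       # loops containing a sliding segment
    return loops, sliding

# With |X| = 3 (two R's) and |Y| = 2 (one R), expect 3k + 2 loops
# and 2k + 1 sliding segments.
for k in range(1, 5):
    loops, sliding = loop_counts("RLR", "RL", k)
    assert loops == 3 * k + 2
    assert sliding == 2 * k + 1
```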
Fig.~\ref{fig:bifDiagGrazSliding} is a numerically computed bifurcation diagram of \eqref{eq:odeEx} with \eqref{eq:param2b}
illustrating several stable (blue) and unstable (red) periodic orbits.
Let us first explain the quantity $|\cZ| + r X^\cZ_i$ plotted on the vertical axis.
Each periodic orbit corresponds to a $\cZ$-cycle, for some $\cZ$, in the return map $\cP$ (e.g.~$\cZ = \cX^k \cY$).
We let $|\cZ|$ denote the length of $\cZ$
(this is also the number of loops that the periodic orbit has near the base periodic orbit).
For each $\cZ$ we choose a convenient index $i$ and let $X^\cZ_i = e_1^{\sf T} \bx^\cZ_i$ denote the first component of
the $i^{\rm th}$ point of the $\cZ$-cycle.
Finally, $r = 500$ is a scaling factor that enables the various periodic orbits to be distinguished clearly.
Note that each curve in Fig.~\ref{fig:bifDiagGrazSliding} has the integer value $|\cZ|$ at $\gamma = \gamma_{\rm graz}$
because as $\gamma \to \gamma_{\rm graz}^+$ the $\cZ$-cycle collapses to the origin and so here $X^\cZ_i = 0$.
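The plotted quantity is straightforward to evaluate from a numerically continued cycle. A minimal sketch (the point data here is invented purely for illustration):

```python
# Vertical-axis value |Z| + r * X^Z_i of the bifurcation diagram, where |Z|
# is the word length and X^Z_i is the first component of the i-th point of
# the Z-cycle.

R_SCALE = 500.0  # scaling factor r used to separate the curves visually

def diagram_value(word, points, i):
    return len(word) + R_SCALE * points[i][0]

# At gamma = gamma_graz a cycle collapses to the origin, so its curve meets
# the vertical axis at the integer |Z|; e.g. for Z = X Y with |Z| = 5:
pts = [(0.0, 0.0, 0.0)] * 5   # hypothetical collapsed cycle data
assert diagram_value("RLRRL", pts, 0) == 5.0
```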
Fig.~\ref{fig:bifDiagGrazSliding} shows periodic orbits corresponding
to $\cX^k \cY$-cycles and $\cX^k \cY^{\overline{0}}$-cycles for $k = 1,\ldots,4$.
For each $k$, these exist for all $\gamma \in (\gamma_{\rm graz},\gamma_k)$.
At $\gamma = \gamma_k$, the two periodic orbits collide and annihilate in a secondary
grazing-sliding bifurcation that mimics a saddle-node bifurcation.
In Fig.~\ref{fig:bifDiagGrazSliding} the two corresponding bifurcation curves intersect at $\gamma = \gamma_k$
because for each periodic orbit the index $i$ used for the vertical axis was chosen so that $X^\cZ_i = 0$ at $\gamma = \gamma_k$.
The values of $\gamma_k$ decrease as $k$ increases.
A determination of the asymptotic rate at which $\gamma_k \to \gamma_{\rm graz}$ as $k \to \infty$,
akin to that achieved in \cite{Si14b}, is beyond the scope of this paper.
Fig.~\ref{fig:bifDiagGrazSliding} shows the periodic orbit $\bX_p(t)$ (labelled $L$)
which exists for $\gamma < \gamma_{\rm graz}$.
For $\gamma > \gamma_{\rm graz}$ there exists one periodic orbit with one loop.
This periodic orbit is unstable and involves a sliding segment (so is labelled $R$).
Also the periodic orbit corresponding to the $\cX$-cycle
exists for $\gamma \in (\gamma_{\rm graz},\gamma_{\rm graz} + 0.0026)$, approximately.
At the right end-point of this interval this periodic orbit
collides and annihilates with a periodic orbit corresponding to an $\cX^{\overline{2}} = RLL$-cycle.
The numerical continuation used to compute Fig.~\ref{fig:bifDiagGrazSliding}
was achieved by evaluating $\cP_g$ and $\cP_d$ numerically.
Computation of $\cP_g$ did not require a numerical ODE solver
because the exact solution is given by \eqref{eq:generalSoln}.
A numerical ODE solver was used to simulate sliding motion
governed by the nonlinear system \eqref{eq:slidingODE}.
Newton's method was used to locate fixed points of $\cP$.
This was achieved using the return map on $\Gamma$, call it $\tilde{\cP}$.
As described in \cite{GlJe15}, the map $\tilde{\cP}$
has the numerical advantage of being of one less dimension than that of $\cP$.
We did not discuss $\tilde{\cP}$ in \S\ref{sec:odeExample} because it has the analytical disadvantage
that each iterate of $\tilde{\cP}$ corresponds to several loops near $\bX_p(t)$
(specifically $\tilde{\cP} = \cP_d \circ \cP_g^\ell$,
where $\ell \ge 1$ is the number of loops required to reintersect $X = 0$).
Numerically continuing periodic orbits corresponding to $\cX^k \cY^{\overline{0}}$-cycles required particularly high precision,
not because they are unstable, but because they involve one point relatively close to the switching manifold.
On the switching manifold $\tilde{\cP}$ is non-differentiable,
so an extremely small discretisation was required to accurately estimate derivatives of $\tilde{\cP}$
for Newton's method.
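The fixed-point computation described above can be sketched as Newton's method with a finite-difference Jacobian. The map used below is a contracting affine stand-in for $\tilde{\cP}$, not the actual return map (which must be evaluated via the flow computations described earlier); the step size $h$ plays the role of the discretisation, which must be small enough that all stencil points lie on the same smooth piece of the map:

```python
import numpy as np

def newton_fixed_point(P, x0, h=1e-7, tol=1e-12, max_iter=50):
    """Find x with P(x) = x via Newton's method on F(x) = P(x) - x.

    The Jacobian of P is estimated by central finite differences with
    step h; near a switching manifold, where P is non-differentiable,
    h must be taken very small for the estimate to be accurate.
    """
    x = np.asarray(x0, dtype=float)
    n = x.size
    for _ in range(max_iter):
        F = P(x) - x
        if np.linalg.norm(F) < tol:
            return x
        J = np.empty((n, n))
        for j in range(n):
            e = np.zeros(n)
            e[j] = h
            J[:, j] = (P(x + e) - P(x - e)) / (2 * h)   # d P / d x_j
        x = x - np.linalg.solve(J - np.eye(n), F)        # Newton step on F
    return x

# Stand-in contracting affine map with fixed point (2, -1).
A = np.array([[0.5, 0.1], [0.0, 0.3]])
c = np.array([2.0, -1.0]) - A @ np.array([2.0, -1.0])
x_star = newton_fixed_point(lambda x: A @ x + c, np.zeros(2))
assert np.allclose(x_star, [2.0, -1.0])
```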
\section{Discussion}
\label{sec:conc}
\setcounter{equation}{0}
The existence of multiple attractors in a dynamical system is an important source of complexity.
Here the long-term dynamics depends on the initial conditions
and in the presence of noise solutions may flip-flop between neighbourhoods of attractors.
A wide range of systems have been found to have large numbers of attractors
with varying physical consequences \cite{Fe08}.
For instance, a neuron that can exhibit a wide variety of stable bursting and beating solutions
appears to have the potential for sophisticated information processing \cite{CaBa93}.
At a grazing-sliding bifurcation,
an asymptotically stable periodic orbit can split into multiple attractors.
Since the attractors coincide at the bifurcation,
we cannot expect to know which attractor a particular orbit will converge to
if the parameter governing the bifurcation is varied dynamically \cite{DuNu99}.
This paper reveals that there is no limit to the number of attractors that can be created in a grazing-sliding bifurcation.
Infinitely many attractors are created for the example shown in Fig.~\ref{fig:bifDiagGrazSliding}.
These are destroyed in subsequent bifurcations shortly thereafter,
but there is no reason to expect that in other instances several attractors
cannot coexist over a relatively large region of parameter space.
The results have been demonstrated for an abstract ODE system
but are anticipated to occur in diverse physical systems due to the generality of the phenomenon.